
Cnet: "the first Larrabee products will be too slow"

August 12, 2008 7:15:28 PM

Some Information on Larrabee...

Full Article here:
http://news.cnet.com/8301-13512_3-10006184-23.html

Quote:
The paper is a pretty thorough summary of Intel's motives for developing Larrabee and the major features of the new architecture. Basically, Larrabee is about using many simple x86 cores--more than you'd see in the central processor (CPU) of the system--to implement a graphics processor (GPU). This concept has received a lot of attention since Intel first started talking about it last year


Quote:
Intel describes the Larrabee cores as "derived from the Pentium processor," but I think perhaps this is an oversimplification. The design shown in the paper is only vaguely Pentium-like, with one execution unit for scalar (single-operation) instructions and one primarily for vector (multiple-operation) instructions.


Quote:
The bottom line
So...what's Larrabee good for, and why did Intel bother with it?

I think maybe this was a science project that got out of hand. It came along just as AMD was buying ATI and so positioning itself as a leader in CPU-GPU integration. Intel had (and still has) no competitive GPU technology, but perhaps it saw Larrabee as a way to blur the line distinguishing CPUs from GPUs, allowing Intel to leverage its expertise in CPU design into the GPU space as well


Quote:
the first Larrabee products will be too slow, too expensive, and too hot to be commercially competitive


Not looking good...

Amd4Life!!
August 12, 2008 7:33:49 PM

Too bad everyone else is enjoying the 4.0 club while you're stuck at 2.0, Thunderman.
August 12, 2008 7:36:38 PM

In other words, it won't be much different from all of Intel's previous stabs at GPUs. Oh well, no surprises here.
August 12, 2008 7:45:35 PM

Wow. I love how they are judging something that no one has even gotten their hands on.

It's too slow: Um, basing it on what you THINK does not work, buddy.

Too expensive: Predicting the prices on something 1+ years away.... yet he's not rich from the stock market.

Too hot: Let's see.... CPUs run on average 30C idle and 50-55C load (a quad @ 3GHz). GPUs nowadays run at about 50C idle and 70C+ load....... And GPUs use much more power than CPUs.

Obviously Peter Glaskowsky is a technology genius. I mean, that's why he works for IBM or Intel/AMD instead of CNET (which is a website).......

Sorry, but this guy is an idiot. He is judging something without even being able to get his hands on it. He acts like he knows what he is saying, but if that were true he would be working for them, helping them make something.

Here is another way to look at it: Larrabee may not look good on paper to you, Mr. Peter Glaskowsky, but it could be good in practice. Just look at AMD's Phenom. Looks great on paper, but in practice it's not as amazing as the paper said.
August 12, 2008 8:18:43 PM

I think this thread is fine. I think the article's author is full of BS, since everything he's putting out here is opinionated and based off of paper, and we all know you cannot trust paper.
August 12, 2008 8:45:24 PM

Hey, don't forget who the poster is: old hit-and-run Thunderdud himself. Too bad he didn't grab the comment from the same article he posted from, about the author being totally wrong in what he was saying about Larrabee and about how easy it will be to work with such a processor.

"by rauxbaught August 5, 2008 6:09 PM PDT As a long-time professional graphics programmer (who doesn't work for Intel), I can assure you that you are completely missing the point.

But before I get into that, let me correct a few complete fallacies in your article:
- There is no "reuse of information" across frames in video games. They spaced out their samples because neighboring frames tend to be very similar. They wanted varying data points to determine the effect of varying loads on the various subsystems, and to measure the scaling of the system as they add extra cores -- not to do sustained throughput measurements.
- Running multiple threads on a single core in in-order processors is generally done to cover memory latency. (This is different from out-of-order processors, which can have many more sequential instructions in flight at once, and use complicated logic to keep as many units as possible busy.) Hyperthreading in this case therefore increases the practical throughput of the system, since it's designed for parallelism instead of single-stream throughput.
- The 1GHz core figure was used to keep the math simple. (As was mentioned in the paper.) As far as we know, they could be using cores that run 2-3 times that frequency. (Or even half, for that matter...) Considering we don't know the frequency or the core count of their final hardware, comparing it to 2-year-old hardware from nVidia is far from meaningful. (Thus making your claim that they won't be competitive very premature.)
- Binning (similar to previous techniques such as tiling) reduces the memory bandwidth to the framebuffer, not the polygons. If you'd read the paper, you'd see that their analysis indicates a significant memory bandwidth advantage over forward rendering, and memory bandwidth often predicts performance in graphics. As a side note, their algorithm is actually slightly different from traditional tiling techniques, which is presumably why they used different terminology.

Now, as for completely missing the point: They implemented everything in software. The very fact that they could contrast an immediate mode renderer against "binning" is a testament to how important Larrabee is as a paradigm shift. The performance balance of graphics development is currently determined by the hardware manufacturers. This has side effects like flat-shaded polygons being completely bound by numbers of rasterization and blending units. This means it's quite easy to put together a workload where more than half of the GPU is completely idle. Making the entire pipeline completely software-driven puts control of these decisions in the hands of developers.

Performance aside, having a software rasterization pipeline means the flexibility to set up whatever is desired. I've wanted fully programmable blending for 5 years. Now I can have it! Sending data back and forth across the bus taking too long? Process it all on the GPU! All of these things may be possible on current hardware using GPGPU programming techniques, but this is the first hardware that's designed for generality FIRST. That makes it a pretty big deal on its own, even if it's not the fastest chip on the block. "
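
To make the "binning" point concrete, here is a rough toy sketch of the idea in plain C++ (my own illustration, not code from the Intel paper; the resolution, tile size, and names are all made up):

#include <vector>
#include <algorithm>

// A triangle with just screen-space positions; a real renderer would also
// carry colors, depth, texture coordinates, etc.
struct Tri { float x[3], y[3]; };

const int W = 1024, H = 768, TILE = 64;   // made-up framebuffer and tile size
const int TX = W / TILE, TY = H / TILE;   // tiles across / down

// An immediate-mode renderer rasterizes each triangle against the whole
// framebuffer as it arrives. A binning renderer does two passes instead:
void binAndRasterize(const std::vector<Tri>& tris)
{
    // Pass 1: bin each triangle into every tile its bounding box touches.
    std::vector<std::vector<int> > bins(TX * TY);
    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri& t = tris[i];
        float minx = std::min(std::min(t.x[0], t.x[1]), t.x[2]);
        float maxx = std::max(std::max(t.x[0], t.x[1]), t.x[2]);
        float miny = std::min(std::min(t.y[0], t.y[1]), t.y[2]);
        float maxy = std::max(std::max(t.y[0], t.y[1]), t.y[2]);
        int tx0 = std::max(0, (int)(minx / TILE)), tx1 = std::min(TX - 1, (int)(maxx / TILE));
        int ty0 = std::max(0, (int)(miny / TILE)), ty1 = std::min(TY - 1, (int)(maxy / TILE));
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * TX + tx].push_back(i);
    }

    // Pass 2: shade one tile at a time. Each core/thread can take a tile, and
    // the tile's color/depth buffer stays in local cache, so the main external
    // memory traffic is the final write-out -- the framebuffer bandwidth saving
    // the paper talks about.
    for (int tile = 0; tile < TX * TY; ++tile) {
        // clear a TILE x TILE on-chip color/depth buffer here
        for (size_t k = 0; k < bins[tile].size(); ++k) {
            // rasterize tris[bins[tile][k]] against this tile only
        }
        // write the finished tile out to the framebuffer once
    }
}

And because the whole pipeline is software, a developer could replace pass 2 with whatever blending or shading scheme they want, which is the "programmable blending" point.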


August 13, 2008 7:36:09 AM

thunderman said:
Some Information on Larrabee...

Full Article here:
http://news.cnet.com/8301-13512_3-10006184-23.html

Quote:
The paper is a pretty thorough summary of Intel's motives for developing Larrabee and the major features of the new architecture. Basically, Larrabee is about using many simple x86 cores--more than you'd see in the central processor (CPU) of the system--to implement a graphics processor (GPU). This concept has received a lot of attention since Intel first started talking about it last year


Quote:
Intel describes the Larrabee cores as "derived from the Pentium processor," but I think perhaps this is an oversimplification. The design shown in the paper is only vaguely Pentium-like, with one execution unit for scalar (single-operation) instructions and one primarily for vector (multiple-operation) instructions.


Quote:
The bottom line
So...what's Larrabee good for, and why did Intel bother with it?

I think maybe this was a science project that got out of hand. It came along just as AMD was buying ATI and so positioning itself as a leader in CPU-GPU integration. Intel had (and still has) no competitive GPU technology, but perhaps it saw Larrabee as a way to blur the line distinguishing CPUs from GPUs, allowing Intel to leverage its expertise in CPU design into the GPU space as well


Quote:
the first Larrabee products will be too slow, too expensive, and too hot to be commercially competitive


Not looking good...

Amd4Life!!


Here we go again...


How can you shout about this when the Phenom is the most disastrous release of a processor and looks like it is bringing AMD to its knees... No other processor has brought a company to its knees like this one..


You and your motley crew shout about the Intel duffers, but what about your famous Another Major Duffer of Duffers.


I'm sorry Thunderman, but your name alone winds people up because of the way you present your posts..


Just post the news, don't put up "AMD4Life" (at this rate they won't last 6 months, but that's by the by), and on top of that don't keep putting "not looking good" on anything Intel is doing or pioneering (yes, Larrabee is pioneering, as no one has done what they are doing; even Intel has proven not to get it right the first time, Pentium 60 & 66 anyone?) when AMD are so far from looking good that "looking good" doesn't appear in their research portfolio...

I am also amazed that Hector Ruiz stood down only to be put somewhere else in the company with just as much power as his previous post.. He should be totally removed from the board and given a 9150 for his troubles...

I know that most IC designers have their work scattered around the house as ornaments in plastic resin, but I bet even AMD engineers don't have Phenom silicon in their collection.


Then you and your bedroom buddies will hand out minus scores, simply because you can, out of bitterness toward anyone who points out what you and your single-track-minded clan are.
August 13, 2008 11:44:16 AM

Alright, Larrabee will be slow, hot and expensive.

Let's recapitulate so we don't get lost in translation here. What do we really know? Nothing.
It could be a PR trick. Or it will revolutionize the industry. We just can't be sure.

I am a DAMMIT fan. But I don't see the purpose of this thread.
August 13, 2008 12:11:20 PM

I know Intel gets everything right at the first attempt and everything... but does anyone think that there is a veeeeeery little chance that Intel's GPU might be a flop...

plz don't gang up on me now!!!!!
August 13, 2008 12:28:56 PM

nothing of interest here ... move along now ... keep to the left please.
August 13, 2008 12:39:27 PM


WTF?? More of this stupid FUD and Rating War??


Lock this garbage, please.
August 13, 2008 1:02:44 PM

Love the ostrich mentality of many in here.


Problem: Some (more) evidence presented that Larrabee might be bad.

Action: Ignore evidence and attack the poster that presented evidence.




It has been under discussion for some time in the GPU forum just how good Larrabee will (or will not) be.



Fudzilla reported ages ago Intel had a 300W power envelope for Larrabee Mk.1.

http://www.fudzilla.com/index.php?option=com_content&ta...


We know the x86 will not be as tailored to massive parallel vector computations as something designed specifically for the job (otherwise GPUs would already be using an approach much closer to CPU design).



Fudzilla also reported on the PCB required for Larrabee, and the number of layers it needs - 12 (Larrabee) versus 8 or less (ATI/Nvidia).

http://www.fudzilla.com/index.php?option=com_content&ta...




All the indicators are pointing in the one direction. Ignore it if you wish.
August 13, 2008 1:23:09 PM

sarwar_r87 said:
I know Intel gets everything right at the first attempt and everything... but does anyone think that there is a veeeeeery little chance that Intel's GPU might be a flop...

plz don't gang up on me now!!!!!


No, Intel does not get everything right on the first try. If they did, they would have had an IMC since the 486.

Amiga500 said:
Love the ostrich mentality of many in here.


Problem: Some (more) evidence presented that Larrabee might be bad.

Action: Ignore evidence and attack the poster that presented evidence.




It has been under discussion for some time in the GPU forum just how good Larrabee will (or will not) be.



Fudzilla reported ages ago Intel had a 300W power envelope for Larrabee Mk.1.

http://www.fudzilla.com/index.php?option=com_content&ta...


We know the x86 will not be as tailored to massive parallel vector computations as something designed specifically for the job (otherwise GPUs would already be using an approach much closer to CPU design).



Fudzilla also reported on the PCB required for Larrabee, and the number of layers it needs - 12 (Larrabee) versus 8 or less (ATI/Nvidia).

http://www.fudzilla.com/index.php?option=com_content&ta...




All the indicators are pointing in the one direction. Ignore it if you wish.


I never attacked thunderman. I attacked the author of the article because he is making assumptions that you cannot make based on paper. People made assumptions based on Phenom's paper specs, but it turned out not to be anything close to what the paper said it would be. In the server market, yes, Barcy performs well, but on the desktop it does not perform like the paper said it would compared to C2Q.

Fudzilla is not always 100% reliable. If I remember correctly, they also started the rumor that Intel's low-end Nehalems will not OC, and that got way out of control. It's hard to trust a site that's known for rumors.

My take on Larrabee is that we need to wait and see it as a physical card that's ready to be tested. Making assumptions the way he is doing is just insane. Personally, from what I have seen, what Intel wants to do with Larrabee will actually work.

I for one wouldn't mind seeing Larrabee actually shake the market up and cause a nice price drop on high-end hardware for the end user.

As for thunderman, it's his normal "Intel is evil" BS and "AMD is my master" crap that just gets annoying.
August 13, 2008 1:29:05 PM

Thunderpants=Something he read, and it came out the other end.
August 13, 2008 1:30:14 PM

jimmysmitty said:
Personally, from what I have seen, what Intel wants to do with Larrabee will actually work.



The only thing in Larrabee that has me excited is the definite possibility of porting programs over to it with relative simplicity.


The ability to use system RAM efficiently (to increase available memory size) would also be something I (and a lot of others) would be VERY interested in.




As for the graphics abilities - they don't get me too excited. But, then again, the ability to solve medical and engineering problems quicker is of far more real-world importance than another 4 fps in Crysis, IMO.
August 13, 2008 1:36:37 PM

^I agree. The ability to code for Larrabee via x86 will easily beat nVidia, since CUDA is a new language. Intel has learned that going with what works and is widely used will be much easier than going with something new.

I would guess they learned that with IA-64 and Itanium.

As for graphics performance, I can't say what it will be able to do yet. I know Intel's main focus is Folding@home and the like, but it could do well in graphics.

I hope so, because Intel putting out a card that makes nVidia and ATI sweat will help bring GPU prices down.

Or it will just be good for 3D movie making and that's it.
August 13, 2008 1:41:04 PM

jimmysmitty said:
^I agree. The ability to code for Larrabee via x86 will easily beat nVidia, since CUDA is a new language. Intel has learned that going with what works and is widely used will be much easier than going with something new.



CUDA is based on C, as is OpenCL.


Personally, my worry is porting specific operators across. If some aren't available, then workarounds have to be devised - which are a little less efficient.


CUDA and OpenCL aren't "new" languages, they're just incomplete old ones.
August 13, 2008 2:10:31 PM

Amiga500 said:

CUDA and OpenCL aren't "new" languages, they're just incomplete old ones.


You can call them libraries. C/C++ is probably older than you. OpenCL and CUDA can be used from C++. And if you look at every platform, every language, every stinking piece of code out there, it will probably have one part (or all of it) compiled and written in C or C++ variants. C was "invented" or "defined" in 1972, C++ in 1979.

Now, don't call a train a boat, because it is a train. At least you could have looked at Wikipedia before you posted.

i++
August 13, 2008 2:59:30 PM

radnor said:
You can call them libraries.


Yeah, you could do.


CUDA/OpenCL are incomplete C libraries rather than new languages (which Intel has tried to imply).
August 13, 2008 3:29:03 PM

Amiga500 said:
Yeah, you could do.


CUDA/OpenCL are incomplete C libraries rather than new languages (which Intel has tried to imply).


OpenCL is always incomplete. A library that is always being developed is always incomplete.
CUDA is incomplete. Of course it is: it's only available for a few models and still being tuned.

If Intel said that, something is very wrong in their camp.
August 13, 2008 6:09:12 PM

^He hates Intel for making PCs useless for gaming??? Yet we have the best games and best graphics.... weird.
August 13, 2008 6:50:55 PM

jimmysmitty said:
^He hates Intel for making PCs useless for gaming??? Yet we have the best games and best graphics.... weird.


He was probably talking about Intel IGPs. Anyway, that's not entirely Intel's fault. Well, I would go for a 790GX over a G45 any day, but even that is still a joke for any serious gamer. If anyone expects that to play most games in a reasonable fashion, then I can't help but feel sorry. Of course your mileage may vary, though. Still a "sweet joke" for me.
August 13, 2008 7:04:47 PM

jimmysmitty said:
^He hates Intel for making PCs useless for gaming??? Yet we have the best games and best graphics.... weird.



Did you read the article? Google Tim Sweeney Intel. I didn't say it. I have no opinion. I'm playing devil's advocate. I got the story and had to post it.
August 13, 2008 7:32:04 PM

He was referring to Intel's IGPs, not their processors. Frankly, anyone who buys a gaming machine knows enough to stay away from IGPs in the first place. Just because they don't run the latest Unreal engine at 60+ FPS doesn't mean they're useless.
August 13, 2008 8:21:58 PM

radnor said:
OpenCL is always incomplete. A library that is always being developed is always incomplete.
CUDA is incomplete. Of course it is: it's only available for a few models and still being tuned.

If Intel said that, something is very wrong in their camp.


Here:

http://www.dailytech.com/Intel+Sheds+Light+on+Larrabee+...

and the response

http://www.dailytech.com/NVIDIA+Clears+Water+Muddied+by...





Anyway, regarding complete/incomplete libraries: I always appreciate new functions being added to any language.

I was more getting at missing basic functionality that would otherwise be available if you were using an x86 CPU instead of a GPU.
August 14, 2008 4:14:00 AM

Hey, I didn't pin it on you, BM. I was just saying the reason for the decline in PC gaming is not Intel's fault; it's more that game consoles are cheap. And why they are cheap is usually because they use older hardware that doesn't cost as much.

In my eyes, though, PC gaming is the best.

BTW, G-Phoria just gave Halo 3 GotY, and MGS4 somehow beat Crysis for Best Graphics. Just goes to show ya that consoles get the better treatment even when PCs are better.
August 14, 2008 10:10:02 AM

Amiga500 said:
Here:

Anyway, regarding complete/incomplete libraries: I always appreciate new functions being added to any language.

I was more getting at missing basic functionality that would otherwise be available if you were using an x86 CPU instead of a GPU.


I read them a few days ago.

Quoting Daily Tech, the first link (Intel Sheds Light on "Larrabee", Dismisses NVIDIA CUDA):

Quote:
According to Gelsinger, programmers simply don't have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, "The problem that we've seen over and over and over again in the computing industry is that there's a cool new idea, and it promises a 10x or 20x performance improvements, but you've just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future."


CUDA isn't a new standard, it is a library. It doesn't impose a new programming model, it just adds possibilities. I guess something is really bad in the Intel camp. Anybody with a little background in C can do it. Just #include and use. The learning curve is small, like with any other library.
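
To show the kind of thing I mean, here is a toy vector add (my own made-up example, not from any Intel or NVIDIA material). The kernel and its launch syntax are the only unusual parts; the rest reads like ordinary C:

#include <cuda_runtime.h>   // the #include in question
#include <stdio.h>

// The kernel: one extra keyword (__global__) plus built-in thread indices.
__global__ void addKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device allocations and copies: plain C-style calls from the runtime library.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void**)&d_a, bytes);
    cudaMalloc((void**)&d_b, bytes);
    cudaMalloc((void**)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // The <<<blocks, threads>>> launch syntax is the only genuinely new notation.
    addKernel<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);  // copy-back also synchronizes
    printf("c[10] = %f\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}

Compile it with nvcc and it runs like any other C program.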

Pat Gelsinger is quite a respectable fellow (no irony or sarcasm here), but he already stated that AMD64 would never be a standard, and if I recall correctly he announced 10GHz CPUs back in the day. I hope history repeats itself once more in this case.

Another silly remark by Mr Gelsinger:

Quote:
The Sony Cell architecture illustrates the point according to Gelsinger. The Cell architecture promised huge performance gains compared to normal architectures, but the architecture still isn't supported widely by developers.


The Sony Cell architecture thrives on uncompressed, un-cached, massive raw data. While it might be great for F@H and even for gaming, it is completely different from x86. You can't expect programmers to be glad to return to a RISC-style architecture. There is a reason why DirectX and Visual Studio are so successful: they are fairly easy to learn and use, and they let you produce decent software in a small time frame.

Mr Gelsinger has my respect for being a visionary and for being around for so long. But like every other professional, he makes mistakes, sometimes silly ones. Dunno why. I guess we all do.
August 14, 2008 1:06:35 PM

^I have to agree with him on Cell, though. It is not as easy to write for, and I have a good example of this.

Valve's Gabe Newell stated he did not like the PS3 because of how complicated it was to write for Cell. Because of that, Valve licensed The Orange Box to EA, and EA did a piss-poor job of porting it and hasn't supported it with any updates thus far.

Now in my opinion, if one of the biggest PC game companies does not like it and is not willing to put in the manpower to write for it, that has to mean it's too complicated a platform to pick up.
August 14, 2008 1:41:23 PM

Quote:
another thread that should die a sudden death.


Amen.
August 14, 2008 2:31:20 PM

gee ... did you need some air you two ??
August 14, 2008 3:46:18 PM

I've always preferred PC gaming to console gaming, but I can understand why they're popular with many.

In a value sense the PS3 is not only a games console, but also an entertainment system with a Blu-ray player. I've only played a handful of titles and, for the price of the system, the graphics look pretty good.

The flaw with console-based gaming is that I find many of the games can be pretty brain-dead. I would rather pay the expensive PC hardware prices and have a range of games that I prefer to play. I don't wish to flame, but PC gaming is more intelligent, in my opinion.
August 14, 2008 5:47:38 PM

According to http://www.overclockers.com/index.php?option=com_content&view=article&id=4186:intels-industrial-ideology&catid=53:editorials:

"Larrabee no more needs to beat the highest-end nVidia or AMD GPUs in the hottest games when it comes out than the x3100 needs to beat nV/AMD notebook offerings today. What it needs to do is be (profitably) a good deal cheaper for X level of performance than the nV/AMD offerings.

To put it another way, if Intel can provide the same graphics power with Larrabees at half the chip cost of even lower-end nVidia/AMD GPUs, they may not get great reviews, but they'll have a real winner on their hands. Even if the first generation or two or three of these Larrabees aren't so good, so long as they're decent enough to be competitive in the low- to medium end, they can gut nVidia/AMD's lower-end product lines by making them unprofitable. Believe it or not, nV/AMD would go quickly broke if all they could sell was $500 video cards. The graphics industry works on economy of scale, too (though not as much as the CPU industry). High-priced items get the attention and make the profits, but the far larger lower-end sales pay the company's bills. If all nV/AMD could sell were high-end cards, that $500 video card would become a $1,500 card pretty fast, and the companies would price themselves out of existence. "

So apparently Ed thinks Laughabee will be cheaper than either of AMD or NVidia's low- to mid-tier offerings.
August 14, 2008 6:22:28 PM

fazers_on_stun said:
So apparently Ed thinks Laughabee will be cheaper than either of AMD or NVidia's low- to mid-tier offerings.


No, he didn't say that.


He said that is all Intel needs it to be for it to be a justifiable success.
August 14, 2008 6:52:24 PM

I can see Int-hell Larrabee being a low-end component... reading the article and judging by the tech specs.
Intel may have to enter the budget segment.... Problem is, Larrabee could potentially be worse than ATI's and Nvidia's low-end GFX cards. It could be that Intel has only one place where Larrabee will go.......... the trash can.

Perhaps the Author is right when he says

Quote:
science project that got out of hand


AMD4Life!!
August 14, 2008 7:02:38 PM

^Ladies and gentlemen, his contribution to the discussion: nothing but anti-Intel this and that.

BTW, only BM calls Intel "Inhell"... your Int-hell remark makes it seem like you are BM.

Either way, he is always a contributing member of the THG society, no?