CNET: "the first Larrabee products will be too slow"

thunderman

Some Information on Larrabee...

Full Article here:
http://news.cnet.com/8301-13512_3-10006184-23.html

The paper is a pretty thorough summary of Intel's motives for developing Larrabee and the major features of the new architecture. Basically, Larrabee is about using many simple x86 cores--more than you'd see in the central processor (CPU) of the system--to implement a graphics processor (GPU). This concept has received a lot of attention since Intel first started talking about it last year.

Intel describes the Larrabee cores as "derived from the Pentium processor," but I think perhaps this is an oversimplification. The design shown in the paper is only vaguely Pentium-like, with one execution unit for scalar (single-operation) instructions and one primarily for vector (multiple-operation) instructions.

The bottom line
So...what's Larrabee good for, and why did Intel bother with it?

I think maybe this was a science project that got out of hand. It came along just as AMD was buying ATI and so positioning itself as a leader in CPU-GPU integration. Intel had (and still has) no competitive GPU technology, but perhaps it saw Larrabee as a way to blur the line distinguishing CPUs from GPUs, allowing Intel to leverage its expertise in CPU design into the GPU space as well.

the first Larrabee products will be too slow, too expensive, and too hot to be commercially competitive

Not looking good...

Amd4Life!!
 
Wow. I love how they are judging something that no one has even gotten their hands on.

It's too slow: Um, basing it on what you THINK does not work, buddy.

Too expensive: Predicting the price of something 1+ years away... yet he's not rich from the stock market.

Too hot: Let's see... CPUs run on average 30C idle and 50-55C load (a quad @ 3GHz). GPUs nowadays run at about 50C idle and 70C+ load... and GPUs use much more power than CPUs.

Obviously Peter Glaskowsky is a technology genius. I mean, that's why he works for IBM or Intel/AMD instead of CNET (which is a website)...

Sorry, but this guy is an idiot. He is judging something without even being able to get his hands on it. He acts like he knows what he is talking about, but if that were true he would be working for them, helping them make something.

Here is another way to look at it: Larrabee may not look good on paper to you, Mr. Peter Glaskowsky, but it could be good in practice. Just look at AMD's Phenom: it looks great on paper, but in practice it's not as amazing as the paper said.
 

kg4icg

Hey, don't forget who the poster is: old hit-and-run Thunderdud himself. Too bad he didn't include the comment posted under that same article about the author being totally wrong in what he was saying about Larrabee, and about how easy it will be to work with such a processor.

"by rauxbaught August 5, 2008 6:09 PM PDT As a long-time professional graphics programmer (who doesn't work for Intel), I can assure you that you are completely missing the point.

But before I get into that, let me correct a few complete fallacies in your article:
- There is no "reuse of information" across frames in video games. They spaced out their samples because neighboring frames tend to be very similar. They wanted varying data points to determine the effect of varying loads on the various subsystems and to measure the scaling of the system as they add extra cores -- not to do sustained throughput measurements.
- Running multiple threads on a single core in in-order processors is generally done to cover memory latency. (This is different from out-of-order processors, which can have many more sequential instructions in flight at once, and use complicated logic to keep as many units as possible busy.) Hyperthreading in this case therefore increases the practical throughput of the system, since it's designed for parallelism instead of single-stream throughput.
- The 1GHz core speed was used to keep the math simple. (As was mentioned in the paper.) As far as we know, they could be using cores that run 2-3 times that frequency. (Or even half, for that matter...) Considering we don't know the frequency or the core count of their final hardware, comparing it to 2 year old hardware from nVidia is far from meaningful. (Thus making your claim that they won't be competitive very premature.)
- Binning (similar to previous techniques such as tiling) reduces the memory bandwidth to the framebuffer, not the polygons. If you'd read the paper, you'd see that their analysis indicates a significant memory bandwidth advantage over forward rendering, and memory bandwidth often predicts performance in graphics. As a side note, their algorithm is actually slightly different from traditional tiling techniques, which is presumably why they used different terminology.

Now, as for completely missing the point: They implemented everything in software. The very fact that they could contrast an immediate-mode renderer against "binning" is a testament to how important Larrabee is as a paradigm shift. The performance balance of graphics development is currently determined by the hardware manufacturers. This has side effects like flat-shaded polygons being completely bound by the number of rasterization and blending units. This means it's quite easy to put together a workload where more than half of the GPU is completely idle. Making the entire pipeline completely software-driven puts control of these decisions in the hands of developers.

Performance aside, having a software rasterization pipeline means the flexibility to set up whatever is desired. I've wanted fully programmable blending for 5 years. Now I can have it! Sending data back and forth across the bus taking too long? Process it all on the GPU! All of these things may be possible on current hardware using GPGPU programming techniques, but this is the first hardware that's designed for generality FIRST. That makes it a pretty big deal on its own, even if it's not the fastest chip on the block. "
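
For anyone wondering what that "binning" business actually looks like, here is a rough sketch of the idea in plain C++ (my own toy example with made-up names and tile sizes, not code from the Larrabee paper): triangles are first sorted into screen tiles, then each tile is shaded in a small buffer that fits in cache, so the big framebuffer in memory only gets written once per tile.

#include <algorithm>
#include <cstdint>
#include <vector>

// A 2D triangle with a flat color; just enough structure for the sketch.
struct Triangle { float x[3], y[3]; uint32_t color; };

const int SCREEN_W = 1024, SCREEN_H = 768;
const int TILE = 64;                                 // 64x64-pixel bins
const int TILES_X = SCREEN_W / TILE, TILES_Y = SCREEN_H / TILE;

int main() {
    std::vector<Triangle> scene;                     // filled by the application
    std::vector<std::vector<const Triangle*> > bins(TILES_X * TILES_Y);

    // Pass 1: bin each triangle into every tile its bounding box touches.
    for (size_t i = 0; i < scene.size(); ++i) {
        const Triangle& t = scene[i];
        float minx = std::min(std::min(t.x[0], t.x[1]), t.x[2]);
        float maxx = std::max(std::max(t.x[0], t.x[1]), t.x[2]);
        float miny = std::min(std::min(t.y[0], t.y[1]), t.y[2]);
        float maxy = std::max(std::max(t.y[0], t.y[1]), t.y[2]);
        int tx0 = std::max(0, (int)minx / TILE), tx1 = std::min(TILES_X - 1, (int)maxx / TILE);
        int ty0 = std::max(0, (int)miny / TILE), ty1 = std::min(TILES_Y - 1, (int)maxy / TILE);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * TILES_X + tx].push_back(&t);
    }

    // Pass 2: shade one tile at a time into a small local buffer that stays
    // in cache, then write it out to the framebuffer in one burst.
    std::vector<uint32_t> framebuffer(SCREEN_W * SCREEN_H, 0);
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx) {
            uint32_t tile_buf[TILE * TILE] = { 0 };  // on-chip working set
            const std::vector<const Triangle*>& bin = bins[ty * TILES_X + tx];
            for (size_t i = 0; i < bin.size(); ++i)
                // Placeholder "shading": real code would do per-pixel edge tests
                // and blending. The point is that all of these read-modify-writes
                // hit tile_buf, not external memory.
                std::fill(tile_buf, tile_buf + TILE * TILE, bin[i]->color);
            for (int y = 0; y < TILE; ++y)
                std::copy(tile_buf + y * TILE, tile_buf + (y + 1) * TILE,
                          framebuffer.begin() + (ty * TILE + y) * SCREEN_W + tx * TILE);
        }
    return 0;
}

That working-set trick is why the paper's analysis shows the binned renderer needing less memory bandwidth than forward rendering, and since each tile is independent, tiles can be farmed out to different cores.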


 

Hellboy



Here we go again...


How can you shout about this when the Phenom is the most disastrous release of a processor and looks like it's bringing AMD to its knees... No other processor has brought a company to its knees like this one.

You and your motley crew shout about the Intel duffers, but what about your famous Another Major Duffer of Duffers?

I'm sorry, Thunderman, but your name alone winds people up because of the way you present your posts.

Just post the news. Don't put up "AMD4Life" (at this rate they won't last 6 months, but that's by the by), and on top of that, don't keep putting "not looking good" on anything that Intel is doing or pioneering (yes, Larrabee is pioneering, as no one has done what they are doing, and even Intel has been shown not to get it right the first time: Pentium 60 & 66, anyone?) when AMD are so far from looking good that "looking good" doesn't appear in their research portfolio...

I am also amazed that Hector Ruiz got to stand down and be put somewhere else in the company with just as much power as he had in his previous mess-up. He should be totally removed from the board and given a 9150 for his troubles...

I know that most IC designers have their work scattered around the house as ornaments in plastic resin, but I bet even AMD engineers don't have a piece of Phenom silicon in their collection.

Then you and your bedroom buddies will give out minus scores, simply because you can, out of bitterness toward anyone who points out what you and your single-track-minded clan are.
 

radnor

Alright, Larrabee will be slow, hot and expensive.

Let's recapitulate so we don't get lost in translation here. What do we really know? Nothing.
It could be a PR trick, or it could revolutionize the industry. We just can't be sure.

I am a DAMMIT fan, but I don't see the purpose of this thread.
 

sarwar_r87

I know Intel gets everything right at the first attempt and everything... but does anyone think there is a veeeeerrrryyyy small chance that Intel's GPU might be a flop?

Please don't gang up on me now!!!
 

Amiga500

Love the ostrich mentality of many in here.


Problem: Some (more) evidence presented that Larrabee might be bad.

Action: Ignore evidence and attack the poster that presented evidence.




It has been under discussion for some time in the GPU forum just how good Larrabee will (or will not) be.



Fudzilla reported ages ago that Intel had a 300W power envelope for Larrabee Mk.1.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8413&Itemid=34


We know x86 will not be as well tailored to massively parallel vector computation as something designed specifically for the job (otherwise GPUs would already be using an approach much closer to CPU design).



Fudzilla also reported on the PCB required for Larrabee and the number of layers it needs: 12 (Larrabee) versus 8 or fewer (ATI/Nvidia).

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8435&Itemid=34




All the indicators are pointing in one direction. Ignore them if you wish.
 


No, Intel does not get everything right on the first try. If they did, they would have had an IMC from the 486 onwards.



I never attacked thunderman. I attacked the author of the article, because he is making assumptions that you cannot make based on a paper. People made assumptions based on Phenom's paper specs, but it turned out to be nothing close to what the paper said it would be. In the server market, yes, Barcy performs well, but on the desktop it does not perform the way the paper said it would compared to C2Q.

Fudzilla is not always 100% reliable. If I remember correctly, they also started the rumor that Intel's low-end Nehalems would not OC, and that got way out of control. It's hard to trust a site that is known for rumors.

My take on Larrabee is that we need to wait until we can see it as a physical card that's ready to be tested. Making the kind of assumptions he is making is just insane. Personally, from what I have seen of what Intel wants to do with Larrabee, it will actually work.

I for one wouldn't mind seeing Larrabee actually shake the market up and cause a nice price drop on high end hardware for the end user.

As for thunderman, it's his usual "Intel is evil" BS and "AMD is my master" crap that just gets annoying.
 

Amiga500




The only thing in Larrabee that has me excited is the definite possibility of porting programs over to it with relative simplicity.


The ability to use system RAM efficiently (to increase available memory size) would also be something I (and a lot of others) would be VERY interested in.




As for the graphics abilities, they don't get me too excited. But then again, the ability to solve medical and engineering problems more quickly is of far more real-world importance than another 4 fps in Crysis, IMO.
 
^I agree. The ability to code for Larrabee via x86 will easily beat nVidia, since CUDA is a new language. Intel has learned that going with what works and is widely used will be much easier than going with something new.

I would guess they learned that with IA-64 and Itanium.

As for graphics performance, I can't say what it will be able to do yet. I know Intel's main focus is Folding@home and the like, but it could do well in graphics.

I hope so, because Intel putting out a card that makes nVidia and ATI sweat will help push GPU prices down.

Or it will just be good for 3D movie making, and that's it.
 

Amiga500




CUDA is based on C, as is OpenCL.


Personally, my worry is porting specific operators across. If some aren't available, then workarounds have to be devised, which are a little less efficient.


CUDA and OpenCL aren't "new" languages; they're just incomplete old ones.
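
To show what I mean about it being based on C, here is roughly what a trivial CUDA kernel and launch look like (a generic textbook-style example, nothing to do with Larrabee or the articles above). Apart from the __global__ qualifier, the built-in thread indices and the <<< >>> launch syntax, it is ordinary C:

#include <cuda_runtime.h>

// y = a*x + y across n elements; the kernel body is plain C.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // which element this thread handles
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = 0, *y = 0;
    cudaMalloc((void**)&x, n * sizeof(float));       // device allocations
    cudaMalloc((void**)&y, n * sizeof(float));
    // ...in a real program, copy data into x and y with cudaMemcpy here...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
    cudaThreadSynchronize();                         // wait for the kernel to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Whether you call that a language or a library, the core of it is C.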
 

radnor



You can call them libraries. C/C++ is probably older than you. OpenCL and CUDA can be used from C++. And if you look at every language and every stinking piece of code out there, it will probably have some part (or all of it) compiled and written in C or C++ variants. C was "invented" or "defined" in 1972, C++ in 1979.

Now, don't call a train a boat, because it is a train. At least you could have looked at Wikipedia before you posted.

i++
 

Amiga500



Yeah, you could do.


CUDA/OpenCL are incomplete C libraries rather than new languages (which Intel has tried to imply).
 

radnor



OpenCL is always incomplete; a library that is always being developed is always incomplete.
CUDA is incomplete too. Of course it is: it's limited to a few GPU models and still being tuned.

If Intel said that, something is very wrong in their camp.
 

dattimr



He was probably talking about Intel IGPs. Anyway, that's not entirely Intel's fault. Well, I would go for a 790GX over a G45 any day, but even that is still a joke for any serious gamer. If anyone expects that to play most games in a reasonable fashion, then I can't help but feel sorry. Of course, your mileage may vary. Still a "sweet joke" to me.
 

BaronMatrix




Did you read the article? Google Tim Sweeney Intel. I didn't say it. I have no opinion. I'm playing devil's advocate. I got the story and had to post it.
 
He was referring to Intel's IGPs, not their processors. Frankly, anyone who buys a gaming machine knows enough to stay away from IGPs in the first place. Just because they don't run the latest Unreal engine at 60+ FPS doesn't mean they're useless.