
Are CPUs losing their clout?

November 6, 2009 8:37:53 AM

In this day and age of "good enough" for the average Joe, with Fermi, LRB, and multi-core cheapness, are CPUs losing their importance in the average Joe's eyes?
"Leslie Sobon, AMD vice president of product marketing, said most of their market research indicated that consumers found it very challenging to find what they needed in retail settings and many were looking for systems that were geared toward entertainment and gaming. Their research also suggested that these features were more important to consumers over higher-end processors, memory, and even storage space. "
http://www.pcper.com/article.php?aid=811
I'm thinking, oh help me, Jensen may have been right...
Thoughts?


November 6, 2009 9:49:24 AM

I think it is simply that more and more consumers are understanding the basics of a computer these days.

A decade ago the average consumer heard "Pentium" and "MHz" and that's all they looked at. Now they consider many more components in their purchase.

Have they lost their clout? In a way. I prefer to think of it as the CPU no longer being overemphasized.
November 6, 2009 9:53:42 AM

I'm thinking this is about having a more "balanced" system. With the advent of YouTube and HD content, the once-capable processor is gasping for air. Before, we could get away with playing DivX content solely on the processor, but today you'd probably need something better than an i7 just to play HD movies.
November 6, 2009 11:56:40 AM

There are still machines out there advertised for gaming because they have a couple of 9500GTs in SLI; what the consumer gets is all the possible noise, heat, and driver issues of multiple GPUs with none of the expected performance, regardless of the CPU it has. That's why consumers are confused.
Overall though, I think this marketing strategy makes some sense. I really don't care if AMD is trying to unload old tech, as long as it does what they say it can do; i.e. it will be fit for the purpose for which it was bought.
Now, will the battery last through a four-hour flight...
November 6, 2009 12:17:27 PM

In my personal experience, a better GPU will do more for gaming than the processor. There are few games that will tax a quad-core processor at 100%, but the GPU is getting hit heavily.
November 6, 2009 12:21:58 PM

It does seem people want an everything option, with gfx as the key part. Since most CPUs cover their responsibilities, it comes down to power usage; it's like perf no longer matters, but gfx, music, etc. do, plus connectivity.
November 6, 2009 12:22:30 PM

I think we have finally reached a point where an inexpensive mid-range, multi-core CPU is simply "fast enough" for 95% of the users out there. The flexibility of the system and what you can plug into it mean a lot more to the average person.
I agree with TheViper that a decade ago people did look only at MHz, and yeah, it HAD to be Pentium. But a decade ago, processors were slow, and more MHz made a huge, huge difference with everything you did.
November 6, 2009 12:27:15 PM

In case some folks aren't getting my post, I'm not trying to come from an enthusiast approach here, but to get us to think more like average Joe and see his needs, desires, etc.
As enthusiasts, of course we'd want more power, faster, lighter, more power-efficient, etc.
November 6, 2009 7:53:54 PM

amnotanoobie said:
I'm thinking this is about having a more "balanced" system. With the advent of YouTube and HD content, the once-capable processor is gasping for air. Before, we could get away with playing DivX content solely on the processor, but today you'd probably need something better than an i7 just to play HD movies.

Not only can an i7 handle HD alone, it can do so without much effort. It's true that HD is a much bigger strain than video content used to be, but even so, it is easily handled by any modern CPU (the Atom doesn't count here).
November 6, 2009 11:53:34 PM

And any decent IGP handles that, and P-in-P as well, so you really don't need a lot.
November 7, 2009 1:14:03 AM

jitpublisher said:
I think we have finally reached a point where an inexpensive mid-range, multi-core CPU is simply "fast enough" for 95% of the users out there. The flexibility of the system and what you can plug into it mean a lot more to the average person.
I agree with TheViper that a decade ago people did look only at MHz, and yeah, it HAD to be Pentium. But a decade ago, processors were slow, and more MHz made a huge, huge difference with everything you did.




I agree with this guy pretty much to a T. I would add that it wasn't even a decade ago that this happened, though. I'd say it happened on a smaller scale with the Athlon, and then in a huge way with the Core 2s. What I mean is, you used to be able to bog down a computer, "have too many windows open," not be able to minimize a game without it locking up your whole system for 3 minutes. Well, at this point we are past that, and have been for two or three years. I challenge you to slow down any quad core with a pile of random programs. It's damned near impossible.

And, to the comment of the OP a few posts back: if the original thread was in regard to "gaming enthusiasts"... when was the last time the CPU was the deciding factor? As long as I can remember, the GPU has ALWAYS been the bottleneck. I'd say GPUs have stagnated way, way more than the processors. The 8800 GTX I used was on top of the market for a year. How does that happen? Also, a few years down the road, this thing is still holding its own. You can't really say that about a P4.
November 7, 2009 1:32:36 AM

I think overall computer development has slowed down because software developers are just going after money rather than high-tech stuff. People just want to stay with the biggest market (older technology) to make money, and since all this stuff we're talking about has become more affordable and mainstream recently, I think we'll be stuck around here for a while.
November 7, 2009 1:49:40 AM

So, you're saying an old C2Q is 3 times slower than an i7 currently? Much like your old GTX is compared to a 5870 or a 295?
I think not.
This isn't about gaming; that's just one usage average Joe has. It's about the overall usages, which have been handled quite well by CPUs for a while now, and stagnation has settled in, unless you can show me a CPU today that's 3 times faster than a C2Q from around the same release date as your GTX.
November 7, 2009 2:22:29 AM

Or, easier and better put: isn't the 6600 considered by many people to be the best CPU ever made, and to this day considered completely capable, with users still waiting for something good enough to come along?
How's Crysis, or all video playback, working on your GTX?
November 7, 2009 3:08:47 AM

hypocrisyforever said:
I agree with this guy pretty much to a T. I would add that it wasn't even a decade ago that this happened, though. I'd say it happened on a smaller scale with the Athlon, and then in a huge way with the Core 2s. What I mean is, you used to be able to bog down a computer, "have too many windows open," not be able to minimize a game without it locking up your whole system for 3 minutes. Well, at this point we are past that, and have been for two or three years. I challenge you to slow down any quad core with a pile of random programs. It's damned near impossible.


You just wait; years ago, the X2 and FX were unstoppable. Look at where they are now.


hypocrisyforever said:
And, to the comment of the OP a few posts back: if the original thread was in regard to "gaming enthusiasts"... when was the last time the CPU was the deciding factor? As long as I can remember, the GPU has ALWAYS been the bottleneck. I'd say GPUs have stagnated way, way more than the processors. The 8800 GTX I used was on top of the market for a year. How does that happen? Also, a few years down the road, this thing is still holding its own. You can't really say that about a P4.



Huh? What about the 7800GT, GF Ti 4200, Radeon 9600, etc.

Though back then, developers were pushing the hardware limits year after year. Now advancement in visual quality has stagnated quite a bit; if the rate of development were the same as before, we would have more titles with the same quality as Crysis by now.
November 7, 2009 3:24:34 AM

I think if we strictly stick to HW, that's what I see.
The GTX also pulls more power than today's cards, runs hotter, etc.
You can put a similar GPU in a mobile today that'd run circles around a GTX in a mobile, for power, time, and perf; not so with CPUs.

PS: Add in things like HKMG and the fact that GPUs are finally catching CPUs in process sizes, and there's an even larger jump to come, since CPUs already employ this today.
November 7, 2009 3:42:30 AM

If you are just talking about the semi-mythical average user, he (or she) can go down to the closest Big Box store and drop $600 and get a nice system - with software - ready to go out of the box.

The average user just wants an appliance good enough to do whatever he wants to do. He doesn't care about the advantages of a particular micro-architecture, anti-aliasing, or NCQ. Good enough is good enough.

I recently built a computer for my sister-in-law - GA-G41-ES2L, E5200 (!!!), 4 GB RAM, 320 GB HD. Primary use is internet and office apps. She's very happy with it. It is waay faster than her old Dell P4 Celeron.

Good enough really is good enough.

Me, I think that's great. It helps drive down the costs for the rest of us.
November 7, 2009 3:47:57 AM

Yep, it also points to trends like LRB/Fermi, where we should see a much more versatile chip doing many things CPUs used to do, but using less power, plus gfx.
As GPGPU picks up, a GPGPU with the same perf as a CPU can use much less power, and that bodes well for the future as we head more and more into mobiles.
November 7, 2009 3:59:24 AM

I know exactly what you're saying and I agree. The processors of today are fast enough for the average Joe. The average user would be perfectly happy with the lowest-end Intel Core-based dual core. On the AMD side, the average person will be more than happy with any X2 or later CPU.

Put an i7 system and a low-level Core 2-based system side by side. The average consumer wouldn't be able to tell the difference and will be happy with the speed of both.

Put a Phenom II X4 965 and a new Athlon or an old K8 X2 side by side. Same thing.



November 7, 2009 4:04:52 AM

It all comes down to SW.
Today, we're being told that CPUs will have many cores and they'll help average Joe.
That's just not the case.
Using many cores is the only way MT will work correctly, and today's CPUs just don't have enough cores for those threads, so until that happens, GPGPU will be the bridge.
Eventually, when we actually see CPUs and SW doing all this, I think they'll be taking somewhat of a back seat in driving future progress.
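
A minimal Amdahl's-law sketch of that SW-side limit (my illustrative framing and made-up parallel fractions, not the poster's own numbers): unless the software itself is heavily threaded, piling on cores buys average Joe very little.

    def amdahl_speedup(parallel_fraction, cores):
        """Ideal speedup when only `parallel_fraction` of the work scales across cores."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Hypothetical parallel fractions for typical desktop software (illustrative only)
    for p in (0.3, 0.6, 0.9):
        line = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in (2, 4, 16))
        print(f"{int(p * 100)}% parallel: {line}")

At 30% parallel code, 16 cores only get you about 1.4x, which is the point being made: more cores don't help until the SW catches up.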
November 7, 2009 4:19:55 AM

JAYDEEJOHN said:
In this day and age of "good enough" for the average Joe, with Fermi, LRB, and multi-core cheapness, are CPUs losing their importance in the average Joe's eyes?
"Leslie Sobon, AMD vice president of product marketing, said most of their market research indicated that consumers found it very challenging to find what they needed in retail settings and many were looking for systems that were geared toward entertainment and gaming. Their research also suggested that these features were more important to consumers over higher-end processors, memory, and even storage space. "
http://www.pcper.com/article.php?aid=811
I'm thinking, oh help me, Jensen may have been right...
Thoughts?

That is so going to come back and haunt you. :lol: 
November 7, 2009 5:31:19 AM

LOL, yea I know LOL
November 7, 2009 5:32:57 AM

I don't know if any of you will agree with me, but years ago it was necessary for power users and gamers to get high-end CPUs. I remember a Tom's article a while back, when the Athlon XPs were king, that recommended buying the second-best processor in the series (at the time, the 3000+) for gaming.

The performance curves sure seem to have changed. Overclocks in those days didn't bring too much more benefit, but the advent of the Core series changed all that (and perhaps the original Athlon X2/64 line). Almost overnight the cheapest and second-cheapest chips became the most popular... they were easy to overclock the snot out of for enthusiasts, and they were cheap, cool, and quiet for the average Joe.

Now the CPU choice gaps have closed; instead of offering 6 or 7 chips with a more gradual price gradient, there are 3 or 4 in a series (with a pricing range that's much more exponential). The demand for high-end chips is really decreasing a lot; only those with cash to burn spend on such things, while everyone from enthusiasts to average consumers gets the cheaper stuff.
November 7, 2009 5:59:32 AM

Good point, as it shows the leveling off in other ways as well
November 7, 2009 12:48:17 PM

someguy7 said:
I know exactly what you're saying and I agree. The processors of today are fast enough for the average Joe. The average user would be perfectly happy with the lowest-end Intel Core-based dual core. On the AMD side, the average person will be more than happy with any X2 or later CPU.

Put an i7 system and a low-level Core 2-based system side by side. The average consumer wouldn't be able to tell the difference and will be happy with the speed of both.

Put a Phenom II X4 965 and a new Athlon or an old K8 X2 side by side. Same thing.


I mostly agree with this, to a point, and actually looked into it somewhat. My "Green Gamer" project PC was a test of this, "how low can you go." For Guild Wars, a 4850e was very playable, but was an obvious bottleneck. The 720BE that replaced it was much better, using the same HD4670 GPU. Games like Crysis (esp. at high settings) are outliers, and not at all typical of what the average user runs.

I suspect that a stock 720BE with an SSD will feel way faster than an overclocked quad with a rotating HDD. Hopefully a good Black Friday deal on an 80GB-128GB SSD will let me test this.

November 7, 2009 12:53:59 PM

...but, to more directly address the OP, I suspect that rendering video and using Photoshop will be more common tasks for the average user than playing Crysis. This means that a faster CPU will remain an important consideration even for the average PC.
November 7, 2009 12:56:50 PM

GPGPU, just around the corner.
I'm talking trends: as they develop, so does the SW.
Chicken and egg.
November 7, 2009 1:04:31 PM

This topic has been discussed since the advent of computers. If Moore's law were to stop suddenly, we would all soon start to feel it.
November 7, 2009 1:43:32 PM

Moore's law won't stop, but exceeding it as we have in the past, as far as CPUs go, may have.
We still get simple transistor-density and core doubling, but speed and IPC gains have slowed to a crawl, and those were always part of the solution to Moore's law.
Now, CPUs have to rely upon process shrinks mainly, if not entirely.
November 7, 2009 1:53:22 PM

Say, going from a C2D at 2.9 GHz to today's 3.2 GHz, with maybe 15% IPC on top; that's three years. Take any three-year span prior to that, and IPC and clock-speed gains were always greater.
Now, it comes down to SW and core usage.
Currently, as we seek lower-power solutions, many-cored CPUs aren't going to happen unless a small many-cored LRB variant appears, or a GPU-oriented type as well, thus leaving the CPU in the back seat; thus my post, along with average Joe's uses and current vs. future SW, where again it'll mainly benefit GPGPU scenarios more than CPUs.
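
A quick back-of-the-envelope on those figures (the 2.9 GHz to 3.2 GHz and ~15% IPC numbers are the poster's rough estimates, used here purely for illustration):

    clock_old, clock_new = 2.9, 3.2    # GHz, the rough C2D-era vs. current figures above
    ipc_gain = 1.15                    # ~15% IPC improvement (the poster's estimate)

    speedup = (clock_new / clock_old) * ipc_gain
    print(f"~{(speedup - 1) * 100:.0f}% single-thread gain over ~3 years")
    # roughly 27%, far short of the clock-plus-IPC jumps earlier three-year spans delivered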
November 7, 2009 3:13:51 PM

That seems to be over-intellectualising things somewhat. History tells us that what consumers demand from technology will increase on a consistent basis. At the heart of that technology is processor technology. So no, CPUs are not losing their importance and never will.
November 7, 2009 3:22:24 PM

It's about as hard as figuring out that we will always somehow need a motor/engine for a car.
But gas mileage may require changes also.
Having a lot of horsepower is already enough.
I guess you're not getting what I'm saying, and it is even simpler than you're making it.
Average Joe is looking elsewhere for performance: not how fast it is, but what it can do, and that's leaning away from CPU usage toward other areas as those desires become more prominent.
Nothing hard to understand here.
The "heart" of average Joe's desires isn't what it used to be, and those usages don't align with the CPU like they used to.
Oh, by the way, we still need electricity too.
November 7, 2009 3:26:02 PM

To be honest, I don't really think "average Joe" cares too much about what drives the technology, just as long as said technology does what it says on the box.
November 7, 2009 3:37:48 PM

So, you don't get it.
My point is, average Joe finds things fast enough now; the CPU works, yippee. Now he wants a few games, some sound, video playback, and for it to be small and to last.
Those things/wants aren't CPU-leaning, as there are better options out there now, and that's the point.
November 7, 2009 3:45:05 PM

Average Joe, with an IQ of 100, probably doesn't even know what a CPU is.
November 7, 2009 3:50:20 PM

Or what it even means.
Agreed, but it's all part of a disconnect from what he wants.
Just like people may not know what engine is in their car, but they want their stereo and air conditioning to work.
November 7, 2009 4:23:22 PM

By answering this I would only be speculating, but I would say no... they are not losing their clout.

In my opinion, based on what we're seeing from Intel, AMD, and nVIDIA, CPUs are merging with GPUs (AMD calls the idea Fusion while Intel calls it Larrabee).

x86 will likely never die. Intel made the mistake of attempting to kill off x86 and switch over to IA64, only to be met with x64 from AMD. I think that what we may see in the future are x86-compatible processors from the likes of nVIDIA (GPU+CPU Fusion-type processors) rather than GPUs taking over everything.
November 7, 2009 4:40:55 PM

I think what jdj is trying to say is that CPUs are losing their "luster" on the box of a computer. A few years ago, a basic user looked for a fast CPU because "that's what he/she heard made this good", and all they do with that machine is go online and check e-mail.

The newer generations of basic users do more intensive tasks - they play WoW and watch movies, they download and stream in HD and high-quality. While I won't go so far as to say they're educated, they know that their computers are more than their processor alone.

Whether or not the usefulness of CPUs has decreased, I have no idea, but I'm pretty certain the popularity of expensive ones has.

As for Fusion and CPU+GPU technologies, I really don't see that becoming the entire market. There are tons of people who will want to change GPUs independently of CPUs, and won't do so with such expensive, massive chips. Sure, the technology will probably become the norm in the entry-level market, but I can't see it penetrating midrange and certainly not high-end machines.
November 7, 2009 4:49:04 PM

I can agree with a lot of what you're saying relating to CPUs (as they currently are) possibly losing their importance (with some users, particularly gamers, also factoring in the GPU performance) but what I see coming is a merger of the two and not just at the low end.

Intel is investing heavily in a company named LucidLogix: http://www.legitreviews.com/article/1093/1/
This would allow Intel to sell nothing but Larrabee-based units, with users also able to pair up other Larrabee or different-brand graphics chips (add-in boards) to boost performance further.

Intel is also hard at work on an x86-based GPGPU/CPU (MIMD-architecture-based) known as Larrabee. Larrabee won't be a low-end product. The performance one can get out of Larrabee (even currently) is staggering. For it to be able to render, in real time, a lifelike ray-tracing scene at over 21 FPS (as seen here: http://www.youtube.com/watch?v=mtHDSG2wNho & http://www.youtube.com/watch?v=G-FKBMct21g) is nothing short of awe-inspiring.

That's more power than any RV870 (Radeon HD 5870) or Fermi (GF100) can muster (at least on paper). It is impressive to say the least.
November 7, 2009 4:54:03 PM

If the fusion solution is slottable (is that a word?), then that takes away that problem, though the interconnects still become a problem, but basically yeah, FL, you got it.
I know when I say things sometimes it's out of the box in ways that threaten some people, while others just can't get their heads around it, much like average Joe.
It is heading that way, and to continue and better explain: the GPGPU solution, to me, will only be temporary, as eventually processes will catch up and allow CPUs to have that many cores and be designed in a fusion element, and, who knows, possibly a better interconnect to a discrete gfx card as well, working with all functions of the new cGPU.
November 7, 2009 4:59:59 PM

I'm thinking Lucid will turn into a dual player, avoiding licenses, and allow for better/faster interconnects. Maybe not with all gfx players, but certainly for LRB.
As for LRB, talking to certain people, they aren't as impressed with the LRB showing, speaking directly to that RT setup.
The RT setup itself is poor, and not what devs want, as it was the devs critiquing the showing.
The shortcuts taken were massive, and REAL RT would have creamed LRB.
November 7, 2009 5:08:40 PM

JAYDEEJOHN said:
I'm thinking Lucid will turn into a dual player, avoiding licenses, and allow for better/faster interconnects. Maybe not with all gfx players, but certainly for LRB.
As for LRB, talking to certain people, they aren't as impressed with the LRB showing, speaking directly to that RT setup.
The RT setup itself is poor, and not what devs want, as it was the devs critiquing the showing.
The shortcuts taken were massive, and REAL RT would have creamed LRB.

Well, I haven't heard that critique placed against Larrabee (but rather against nVIDIA's ray-tracing efforts), but I could be wrong. The reason would be quite simple: Larrabee is an x86-architecture-based processor. Larrabee can do anything a CPU can do, therefore writing for it in C, or any other programming language, is as simple as writing for any x86-based processor. CUDA, however, can only execute a fraction of the abilities of an x86-based processor, and because of this nVIDIA has had to take several shortcuts to get ray tracing to work on their GPUs.
November 7, 2009 5:18:14 PM

Yeah, C++ etc. Fermi will surprise, but neither will do RT.
It's just too demanding.
If I recall, the water was still, no ripple (can't remember for sure), but it was poorly done, and LRB still didn't master it, even as bad as it was.
People weren't impressed.
Given time, Fermi will show very well, IF nVidia gets that time for its libraries.
November 7, 2009 5:23:59 PM

The theory is, language advantages will outshine inferior HW design.
No x86 functionality on LRB, no fixed-function HW for gfx.
For crunching, I think it could be a monster, but again, isn't that what CPUs do? It'll do well in HPC, but so will Fermi; just the languages need work, and nVidia will be there for that.
The more Fermi/CUDA gets used, the narrower the advantage for LRB is, and of course Fermi will have fixed-function HW for gfx.
November 7, 2009 5:25:07 PM

Not even a one-way street, but it'll be interesting for sure.
PS: As long as everyone plays fair, heheh, and that's saying a lot for those two.
November 7, 2009 5:39:36 PM

JAYDEEJOHN said:
The theory is, language advantages will outshine inferior HW design.
No x86 functionality on LRB, no fixed-function HW for gfx.
For crunching, I think it could be a monster, but again, isn't that what CPUs do? It'll do well in HPC, but so will Fermi; just the languages need work, and nVidia will be there for that.
The more Fermi/CUDA gets used, the narrower the advantage for LRB is, and of course Fermi will have fixed-function HW for gfx.

Larrabee has full x86 functions (Fermi, RV870, etc. are the ones that don't). Larrabee will likely do well from mid-range to near high-end, performance-wise. It should be quite good for HPC as well.

For Fermi, you can't just work around it in the language, because the hardware doesn't support the functions. There are many functions within C, C++, etc. that the nVIDIA/ATi hardware does not support.

As for fixed-function hardware for gfx, no one knows those details about Larrabee, so we would be speculating (the same goes for Fermi, as the only details released for Fermi are the computational ones, in which it loses to RV870 in single precision by nearly a two-fold amount but bests RV870 in double-precision tasks).
November 7, 2009 5:54:09 PM

Yeah, nVidia completely upped its DP, by huge margins, 8x from last gen. That's what's needed for most of this, and where the attention should lie.
LRB is not totally x86 compatible; if I find the link, I'll post it here.
Also, error correction is being dealt with; the real questions lie with whether they loop, and whether they have more working warps at once.
LRB will have some bad times with latencies, and DP and full function were its main promotion over a more limited GPU solution, but it appears nVidia is addressing each one down the line, and, straight up, side by side, a GPU is much, much faster, IF Fermi gets its error correction done right and doesn't have to depend on latency-hiding techniques.
There's really no getting around LRB's latencies, so it's all up to nVidia/Fermi.
November 7, 2009 5:57:07 PM

I'd add that the cache is playing a huge part in all this, thus my warps comment, and how things will be broken up if needed.
It appears Fermi's cache has increased largely, along with what I'd mentioned previously.
Cache too will play a part, as it adds its own latencies.
If they've eliminated enough, Fermi may well be a monster and leave LRB with nowhere to go, but no one knows these things in particular yet.
November 7, 2009 6:02:26 PM

JAYDEEJOHN said:
Not even a one-way street, but it'll be interesting for sure.
PS: As long as everyone plays fair, heheh, and that's saying a lot for those two.

I think what might win the day for Fermi will be the 384-bit GDDR5 memory bus (a win over RV870 in gaming, that is). When it comes to folding under the GPU3 client (set to be released in Q1 2010), I think MANY people will be surprised by the RV870 (which I think will end up 2x faster than Fermi, seeing as GPU3 will rely on single-precision loads and the current issues with GPU2 not properly supporting RV770 and higher will be a thing of the past).

1- RV870 adds the ability to write and run threads in protected system memory (something only nVIDIA supported previously). This gives you a 40% folding advantage.
2- GPU3 will add support for scattering and thread synchronization for ATi RV770 and higher hardware. This will effectively halve the workload currently needed by ATi hardware to complete a single WU (both of these features have been supported in nVIDIA G80 and higher hardware since the introduction of GPU2, and they explain why nVIDIA is currently better at folding).

Add those two factors to the fact that RV770 and higher are two times more powerful in single-precision workloads than their direct competitors (RV770 vs. GT200 and RV870 vs. GF100/Fermi) and you've got an enormous boost in performance going from the GPU2 to the GPU3 client.
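
A rough sketch of how those claimed factors would compound (the 40%, halved-workload, and 2x single-precision figures are this post's own claims, taken at face value purely for illustration):

    protected_memory_gain = 1.40   # claim 1: ~40% from running threads in protected system memory
    workload_halved       = 2.0    # claim 2: scattering + thread sync halves the work per WU

    gpu2_to_gpu3 = protected_memory_gain * workload_halved
    print(f"Claimed GPU2 -> GPU3 boost on ATi hardware: ~{gpu2_to_gpu3:.1f}x")   # ~2.8x

    # The post then layers a ~2x single-precision throughput edge (RV870 vs. GF100)
    # on top of that, which is where its "2x faster than Fermi under GPU3" guess comes from.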

Of course don't take my word for it :) 

http://www3.interscience.wiley.com/journal/121677402/ab...
http://folding.typepad.com/news/2009/09/update-on-new-f...

And recently:
http://folding.typepad.com/news/2009/10/updates-on-new-...

Things are looking up for AMD (and rather bleak for nVIDIA right now).

And then there is Larrabee coming into the fray. Boy oh boy this will get fun :) 

Larrabee is more efficient at GPU Rasterization: http://www.bit-tech.net/news/hardware/2009/03/30/larrab...

Larrabee cores are entirely x86 compatible according to these links: http://www.xbitlabs.com/news/video/display/200904131433...
http://www.xbitlabs.com/news/video/display/200904152340...

Fermi will add support for more Fortran and C++ code, but still not the entire library. This gives Larrabee another arena where it can thrive: calculation acceleration.
November 7, 2009 6:29:53 PM

Results Summary: Our parallel implementation of ray-casting delivers close to 5.8x performance improvement on quad-core Nehalem over an optimized scalar baseline version running on a single core Harpertown. This enables us to render a large 750x750x1000 dataset in 2.5 seconds. In comparison, our optimized Nvidia GTX280 implementation achieves from 5x to 8x speed-up over the scalar baseline. In addition, we show, via detailed performance simulation, that a 16-core Intel Larrabee [26] delivers around 10x speed-up over single core Harpertown, which is on average 1.5x higher performance than a GTX280 at half the flops. At higher core count, performance is dominated by the overhead of data transfer, so we developed a lossless SIMD-friendly compression algorithm that allows 32-core Intel Larrabee to achieve a 24x speed-up over the scalar baseline.
http://techresearch.intel.com/UserFiles/en-us/File/tera...
Now, taking all the improvements from the 280 to Fermi (8x DP, 2.5? times the shaders, slightly higher clocks, an arch geared from start to finish for this work), it's easy to see LRB has some work to do.
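
A crude reading of those quoted speed-ups, plus the rough "2.5x the shaders" guess above (all figures illustrative; real scaling is nowhere near this linear):

    gtx280_speedup = (5.0, 8.0)   # GTX280 vs. scalar Harpertown baseline, from the quoted paper
    lrb32_speedup  = 24.0         # 32-core Larrabee (simulated, with compression), same baseline

    fermi_shader_scale = 2.5      # the rough "2.5x the shaders" assumption for Fermi
    fermi_guess = tuple(s * fermi_shader_scale for s in gtx280_speedup)

    print(f"GTX280: {gtx280_speedup[0]:.0f}x-{gtx280_speedup[1]:.0f}x, 32-core LRB: {lrb32_speedup:.0f}x")
    print(f"Naive Fermi guess (shader count only): {fermi_guess[0]:.0f}x-{fermi_guess[1]:.0f}x")
    # 12x-20x before clocks, cache, and architectural changes are even counted,
    # which is the point: on raw scaling alone Fermi lands in LRB's neighborhood.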
Also, older, but interesting on the gfx side:
Larrabee is in rough shape right now. The chip is buggy, the first time we met it it wasn't healthy enough to even run a 3D game. Intel has 6 - 9 months to get it ready for launch. By then, the Radeon HD 5870 will be priced between $299 - $349, and Larrabee will most likely slot in $100 - $150 cheaper. Fermi is going to be aiming for the top of the price brackets.
http://www.anandtech.com/video/showdoc.aspx?i=3651&p=7
It's just too early to really tell!