
CPU Performance Boosted 20% When CPU, GPU Collaborate

February 10, 2012 12:46:52 PM

Will this mean anything for the current generation of hardware?
Score
16
February 10, 2012 12:55:10 PM

pg3141 said:
Will this mean anything for the current generation of hardware?

Nothing really, and don't game developers already know this? They've been doing it for some time.
Score
-9
February 10, 2012 12:57:47 PM

pg3141 said:
Will this mean anything for the current generation of hardware?

It could, if programmers get behind it.
Score
8
February 10, 2012 1:05:00 PM

Doesn't the Xbox 360 have the GPU and CPU on the same die?

Score
-15
February 10, 2012 1:22:25 PM

I am sure the more experienced engineers at Intel and Nvidia are already past what the college paper-writing, academia-oriented folks are doing.
Score
4
February 10, 2012 1:24:17 PM

It is probably already done, just to be released at a later date determined by their marketing release schedule.
Score
2
February 10, 2012 1:26:38 PM

in other news SSDs make your system faster
Score
22
February 10, 2012 1:38:27 PM

Not really news. An AMD chip most likely performs better coupled with an AMD graphics card. It would only make sense on their part.
Score
-12
February 10, 2012 1:40:12 PM

Could this mean no more dedicated video cards in the future?
I hope not.
Score
-7
February 10, 2012 1:41:37 PM

Well, transistors are reaching a physical limit, so we need any kind of optimization we can get. Let's hope this technology gets implemented soon...
Score
6
February 10, 2012 1:57:08 PM

Often when gaming, my GPUs and VRAM are at or near 100% while my system CPU and memory are nowhere near maxed out. Seems to me the system memory and CPU could be given some of the workload.

I know they are talking about the CPU and GPU being on the same chip in this article, but it seems to me better utilization across the board could be taking place.
Score
5
February 10, 2012 2:05:04 PM

Maybe lower budget gaming PCs will be able to get away with integrated video solutions in the future.
Score
8
February 10, 2012 2:07:10 PM

Aren't Intel and AMD already doing this?
Score
-4
February 10, 2012 2:16:18 PM

digitalzom-b said:
Maybe lower budget gaming PCs will be able to get away with integrated video solutions in the future.


You are completely missing it. This research proves separate GPUs are STUPID. I've been saying for at least a year now that the integrated GPU is the future. Little by little, the separate GPU is going to disappear until only the most extreme GPUs remain as separate units. Eventually SoC will be the norm. RAM and BIOS will be integrated on a single die someday also.
Score
-8
February 10, 2012 2:41:47 PM

drwho1 said:
Could this mean no more dedicated video cards in the future? I hope not.

When they get to the point where top video solutions are on the chip with the CPU, we'll all need one mother of a cooling solution.
Score
5
February 10, 2012 2:42:21 PM

Now this is an old idea, but it is becoming possible now....
First we had the CPU, and graphical calculations were made by it.
Then they made the GPU, and graphical calculations were shifted onto it.
Then they made the Sandy Bridge & the Llano, which calculated the graphics with the help of a secondary chip on board.
Now they're getting the CPU and the GPU onto one single die....
Lots of work... done... all for the same result....
Score
6
February 10, 2012 2:44:13 PM

Proper integration of command structure is not only a hardware issue, but also a software one. For example, 3D rendering programs, which one would think would push much of the rendering work to the GPU, instead pull all their power from the CPU (note I'm talking specifically about rendering). I would imagine there should be some means of rewriting the code of a program like Maya or Modo so that the GPU helps out. It is a software instruction, after all, that allows multiple cores or even multiple CPUs to work together.

I would think going the software route would not meet the efficiency angle these researchers seem to be looking at, but it might have better payoffs in terms of performance. Part of why CPUs with embedded graphics controllers aren't more powerful than they are is that the heat generated is too much, something not really a problem if the GPU and CPU are on different dies as in a gaming rig. Again, APUs are more about efficiency for low-power systems, not really performance. So the first uses of the ideas in this article will likely be in smartphones, netbooks/ultrabooks, tablets, or gaming consoles: any device that goes the SoC route. In short, it means much more for the mobile industry than it does for enthusiasts and gamers.
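To make the "it has to be written into the software" point concrete, here is a minimal sketch (not taken from Maya, Modo, or any real renderer; shade() and all names are invented) of how a program has to slice work across CPU cores explicitly. Handing the same loop to a GPU would likewise require explicit OpenCL or CUDA dispatch code.

```cpp
#include <cmath>
#include <thread>
#include <vector>

// Hypothetical per-pixel shading function, a stand-in for real render work.
static float shade(int pixel) {
    return std::sqrt(static_cast<float>(pixel)) * 0.5f;
}

int main() {
    const int kPixels = 1 << 20;
    std::vector<float> frame(kPixels);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;                         // fall back if unknown

    // The program slices the frame across cores itself; nothing in the
    // hardware does this automatically.
    std::vector<std::thread> workers;
    const int chunk = kPixels / static_cast<int>(cores);
    for (unsigned c = 0; c < cores; ++c) {
        const int begin = static_cast<int>(c) * chunk;
        const int end   = (c + 1 == cores) ? kPixels : begin + chunk;
        workers.emplace_back([&frame, begin, end] {
            for (int p = begin; p < end; ++p) frame[p] = shade(p);
        });
    }
    for (std::thread& w : workers) w.join();
    return 0;
}
```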
Score
4
February 10, 2012 2:44:26 PM

But will it play Crysis?
Score
-8
February 10, 2012 2:52:10 PM

loomis86 said:
You are completely missing it. This research proves separate GPUs are STUPID. I've been saying for at least a year now that the integrated GPU is the future. Little by little, the separate GPU is going to disappear until only the most extreme GPUs remain as separate units. Eventually SoC will be the norm. RAM and BIOS will be integrated on a single die someday also.


http://www.guru3d.com/article/amd-a8-3850-apu-review/11

As long as you don't mind not being able to upgrade your graphics without upgrading your CPU, and can put up with lowered fps and overall system performance... then yes, APUs are "the future". Oh wait, were you speaking of the idea some people have been throwing around that tablet gaming is the future? Yeah, no.
Score
2
February 10, 2012 3:08:19 PM

This is why AMD developed the APU.
Score
3
February 10, 2012 3:14:25 PM

warezme said:
Nothing really, and don't game developers already know this? They've been doing it for some time.


This is actually not true. Just FYI, credentials-wise, I am a software engineer who doesn't work in gaming but plays plenty of games. I have used OpenGL / OpenCL / etc.

PC game developers now have a technology that allows them to compute almost all game logic GPU side - OpenCL / CUDA - where before that had to be done CPU side. It is why a game like World of Addictioncraft used a lot of CPU resources when it came out: it did collision detection CPU side because the game was written against an OpenGL standard that didn't support general computation outside of vector processing on GPUs.

Today, with OpenCL (there's no point writing a game that uses CUDA on an Nvidia chip and something else on AMD when you can just write OpenCL and be cross-GPU), you can do a lot of parallelizable things GPU side that were previously outside the vectorization paradigm OpenGL confines GPU processing to.

And the general pipeline of a game engine, at its basic roots, is: process input (user, network, in-engine message passing) -> update state (each agent reacts on a tick to game world events) -> collision detection (to prevent overlapping models) -> GPU rendering of the game world. Today, everything but processing input can be offloaded to the GPU and done massively parallel through OpenCL / OpenGL.

The next generation of games "should", if properly implemented, use so few processor resources beyond file and texture streaming, key events, and network packets that you might see 10% of one CPU utilized in an extremely high fidelity game that pushes the GPU to the limit but barely touches the CPU.

It also makes no sense to do any of those parallel tasks CPU side either - GPUs are orders of magnitude faster at that stuff. It is why an i5-2500K for $225 will last you a decade, but you can spend $1500 on three 7970s in triple CrossFire and have them be outdated by 2015. Games are moving into a completely GPU-driven architecture for everything, and it is a good thing. It hugely increases the performance you can get from a game.
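As a rough sketch of that pipeline (purely illustrative; the dispatch_* functions are stand-ins for real OpenCL kernel launches and GL draw calls, and none of the names come from an actual engine), the loop looks something like this, with only input gathering left on the CPU:

```cpp
#include <vector>

struct InputEvents { bool quit; };                   // user, network, in-engine messages
struct WorldState  { std::vector<float> agents; };

static InputEvents gather_input() { return InputEvents{true}; }   // CPU: I/O bound

// In a real engine these three would enqueue OpenCL kernels / GL draw calls;
// here they are trivial CPU stubs so the sketch compiles and runs.
static void dispatch_update_kernel(WorldState& w, const InputEvents&) {
    for (float& a : w.agents) a += 0.016f;           // per-agent tick (data-parallel)
}
static void dispatch_collision_kernel(WorldState&) {}   // broad/narrow phase (data-parallel)
static void dispatch_render(const WorldState&) {}       // rasterize the updated world

int main() {
    WorldState world{std::vector<float>(1024, 0.0f)};
    bool running = true;
    while (running) {
        InputEvents in = gather_input();     // the only step left on the CPU
        dispatch_update_kernel(world, in);   // GPU candidate
        dispatch_collision_kernel(world);    // GPU candidate
        dispatch_render(world);              // GPU
        running = !in.quit;                  // stub quits after one frame
    }
    return 0;
}
```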
Score
12
February 10, 2012 3:36:13 PM

greghome said:
Is this anything significant? Sounds like another Captain Obvious statement.

While the statement is kind of obvious, the work they have done isn't. If this can be included in commercial products, AMD's Fusion will improve further in later hardware iterations.
Score
1
February 10, 2012 4:09:00 PM

rohitbaran said:
While the statement is kind of obvious, the work they have done isn't. If this can be included in commercial products, AMD's Fusion will improve further in later hardware iterations.

Yes, basically because they put some intelligence behind the decision of what goes to the GPU. There are some computational problems that make no sense to send to a GPU because they are serial in nature and would take just as long or longer to run on a GPU than on a CPU. Seti@Home had work units like this, and many contributors who run GPUs complained because it is an inefficient use of a GPU's power. Fortunately, the people who run Seti@Home listened and no longer send highly serial WUs to GPUs. It makes sense, to me at least, to intelligently decide which tasks are best suited for GPUs.
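A toy sketch of that "decide intelligently" idea (the thresholds and the Task fields are invented for illustration, not from the paper or from Seti@Home): estimate how parallel a work unit is and how much data it touches before paying the cost of a GPU launch.

```cpp
#include <cstdio>

struct Task {
    long   elements;          // independent items the task touches
    double serial_fraction;   // 0.0 = fully parallel, 1.0 = fully serial
};

enum class Device { CPU, GPU };

// Rough Amdahl-style check: a GPU only helps if the work is mostly parallel
// and big enough to hide launch/transfer overhead. Thresholds are made up.
static Device choose_device(const Task& t) {
    const long   kMinElementsForGpu = 100000;
    const double kMaxSerialFraction = 0.10;
    if (t.elements >= kMinElementsForGpu && t.serial_fraction <= kMaxSerialFraction)
        return Device::GPU;
    return Device::CPU;
}

int main() {
    Task fft_batch   {1 << 20, 0.02};   // large and highly parallel -> GPU
    Task tree_search {2000,    0.80};   // small and mostly serial   -> CPU
    std::printf("fft_batch   -> %s\n", choose_device(fft_batch)   == Device::GPU ? "GPU" : "CPU");
    std::printf("tree_search -> %s\n", choose_device(tree_search) == Device::GPU ? "GPU" : "CPU");
    return 0;
}
```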
Score
1
February 10, 2012 4:23:11 PM

vittau said:
Well, transistors are reaching a physical limit, so we need any kind of optimization we can get. Let's hope this technology gets implemented soon...


They are close, but we are probably a good 10 or so years off from hitting the real limit, at least size-wise.
If we ever figure out 3D stacking, it could make a CPU well over twice as fast and fit in a smaller footprint.

drwho1 said:
Could this mean no more dedicated video cards in the future? I hope not.


In the far future, yeah, graphics cards are going to go the same way as sound cards. They will be fast enough to do damn near everything for everyone, and the only people who want more will need a specialty item (probably won't be overly price inflated, due to the size of the chips at the time).

Zanny said:
This is actually not true. [...] Games are moving into a completely GPU-driven architecture for everything, and it is a good thing. It hugely increases the performance you can get from a game.


I don't like the idea of a game running solely on the GPU. Look at PhysX on a lower end card: you have to scale it back to the point it may as well not be there to get the game running at higher framerates.

Score
-2
February 10, 2012 4:24:37 PM

The way this is written, it makes it sound like a CPU is just a memory controller and branch prediction unit. That's way off.

Ideally, you'd have the CPU schedule instructions in such a way that they would seamlessly go to the GPU pipelines, much like you'd send a simple integer instruction down the correct pipeline. But you'd still use the CPU's computational units for the things it does better, and there are a lot of those. Ever wonder why we don't see GPU designs as CPUs running operating systems? They are good at what they do, but very poor at the things they don't do. Using the CPU as a memory fetcher and branch predictor is absurd.
Score
1
February 10, 2012 4:46:55 PM

AMD claims to have plans for things like this with their APUs in a few years, I think it was 2014. They want to have the graphics and CPU cores totally integrated, and the CPU will not need software to tell it what work can be done faster on the GPU because it will be able to figure that out on its own.

Let's see how far this goes.
Score
0
February 10, 2012 6:55:42 PM

Lol, I read this years ago when AMD and Nvidia were proposing "hur, CUDA" and "hur, OpenCL".

Years later, I still can't find a damn useful thing to do with them.
Score
-3
February 10, 2012 7:02:39 PM

blazorthon said:
AMD claims to have plans for things like this with their APUs in a few years, I think it was 2014. They want to have the graphics and CPU cores totally integrated, and the CPU will not need software to tell it what work can be done faster on the GPU because it will be able to figure that out on its own.

Let's see how far this goes.


If the system will automatically determine what code to run on the CPU and what to run on the GPU, will I still need to program in OpenCL or not?
Score
0
February 10, 2012 8:21:31 PM

warezme said:
Nothing really, and don't game developers already know this? They've been doing it for some time.

greghome said:
Is this anything significant? Sounds like another Captain Obvious statement.


Today's games are pretty much a one-way trip, CPU to GPU. What they are talking about is "collaboration" between CPU and GPU - true heterogeneous computing, similar to what AMD is aiming at with their HSA. The disadvantage of having the GPU as an expansion card is the physical distance to the CPU. The latency is high if data has to go from the CPU to the GPU and back. If the discrete GPU is only doing the rendering, its one-way nature isn't much of a problem.

alidan said:
I don't like the idea of a game running solely on the GPU. Look at PhysX on a lower end card: you have to scale it back to the point it may as well not be there to get the game running at higher framerates.


Today's GPUs are still too slow at context switching. The overhead and resource usage are still high when doing rendering and physics on the same card. While using the CPU works, it's still not the best fit for the highly parallel nature of physics. Using a dedicated GPU is better, but a GPU carries transistors that are not needed for physics. That's why I think the original Ageia PPU was actually a good idea: it was a chip made solely for physics.
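A back-of-the-envelope way to see the latency point (all the millisecond figures below are invented for illustration, not measurements): offloading only wins when the compute speedup outweighs the round trip to the card, which is exactly the cost an on-die GPU with shared memory shrinks.

```cpp
#include <cstdio>

// Returns true if running the step on the GPU is estimated to be faster
// once the data movement cost is included.
static bool offload_wins(double cpu_ms, double gpu_ms, double transfer_ms) {
    return gpu_ms + transfer_ms < cpu_ms;
}

int main() {
    const double cpu_ms    = 1.0;    // small per-frame physics step done on the CPU
    const double gpu_ms    = 0.2;    // the same step on GPU shader cores
    const double pcie_ms   = 1.5;    // copy to a discrete card + copy results back
    const double shared_ms = 0.05;   // on-die shared memory, as in an APU/HSA design

    std::printf("discrete GPU worth it: %s\n", offload_wins(cpu_ms, gpu_ms, pcie_ms)   ? "yes" : "no");
    std::printf("on-die GPU worth it:   %s\n", offload_wins(cpu_ms, gpu_ms, shared_ms) ? "yes" : "no");
    return 0;
}
```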
Score
2
February 10, 2012 8:59:50 PM

That means the AMD Fusion project is the future of computing. AMD brings it out, and Intel follows it again.
Score
-1
February 10, 2012 9:40:47 PM

"Bob, should we invest a few dozen million more into adding multicore and GPU support before releasing the new software?"

"No."
Score
1
February 10, 2012 11:46:26 PM

This paper is actually co-authored by Mike Mantor, a Senior Fellow at AMD leading the Compute Domain Architecture initiatives to drive hardware and software improvements into a new class of APU processors that employ high performance x86 cores and GPU parallel processor cores with a shared memory subsystem. Mike has been a leader in the development of AMD/ATI high performance, low power GPUs for the past 12 years. He has been a key innovator of the AMD Radeon GPU shader core system, including enabling more efficient general purpose compute, and has been heavily involved in the development of the DirectCompute and OpenCL APIs.
Score
1
February 11, 2012 12:56:20 AM

I wonder if the majority of the motherboard will eventually become obsolete. All the chips on the board are made of silicon, right? Including the north/south bridge, audio amp chips, etc.
SSDs are made of silicon too, right? Flash memory.
Couldn't we produce a processor with the RAM, flash memory, CPU, GPU, audio, and everything else built in? We would eliminate the bottlenecks of the different interfaces, save a lot of electricity, and effectively make the computer much smaller. Could you imagine a motherboard the size of an iPod and just as thick, with a small area for the power supply to connect to, a few USB ports, and an HDMI port?
The downside is it wouldn't be upgradeable... but logically it seems like it will be the final form of the computer.
Score
0
February 11, 2012 1:42:00 AM

How is this new...news? Am I missing something?
Score
-3
February 11, 2012 2:26:16 AM

aidynphoenix said:
Couldn't we produce a processor with the RAM, flash memory, CPU, GPU, audio, and everything else built in? [...] The downside is it wouldn't be upgradeable... but logically it seems like it will be the final form of the computer.


It's not that simple. The single chip you want would need to be very large to have everything on it, even with modern technology. Also consider that the RAM alone would be a huge part of it; normal machines nowadays have 4GB+ RAM and normal gaming machines tend to have 6 or 8GB of RAM. At the densest, each chip of each module is usually 256MiB. That would mean you need to have (with 4GB) the equivalent of 16 RAM chips, a CPU, a GPU, a northbridge, a southbridge, and any other integrated hardware all on one chip. Since the RAM would no longer be on separate modules that allow increased surface area for the chips, this solution could be as large as or even larger than current motherboards.

It would undoubtedly be more energy efficient, but having all of these heat producing components so close together would still generate a lot of heat in a small area. You would also have some memory chips pretty far from the processors to fit this all together, and that's not a good thing for performance. For this to work we would need to use very expensive 512MiB or 1GiB RAM chips instead of the standard 256MiB chips. That would decrease the amount of surface area needed for this motherboard and reduce the maximum distance between the memory chips/dies and the processor dies to an acceptable level.

The heat problem wouldn't be too bad, no worse than what we have for current high end video cards, but it's still considerable. All in all, this idea is theoretically possible, but it could be more trouble than it's worth. However, it can be done and is done with low end systems that don't need as much hardware or very powerful hardware. We have things like the Raspberry Pi and its competitor that I can't remember, but they aren't single chip systems. It's definitely possible, but a complete SoC (system on chip) wouldn't be too easy to build and have it perform well for general use.

This will probably be done some time in the future. Computers seem to get smaller and smaller as time goes by.
Score
1
February 11, 2012 4:00:34 AM

Done right, overclocking is safe and efficient!
Score
-4
February 11, 2012 4:05:29 AM

Keep playing with it until it's stable.
Score
-3
February 11, 2012 4:45:37 AM

Something the inactive IGPs in our i5-2500Ks and i7-2600Ks could be doing :p There might be like a 5-10% performance boost because of the HD 3000.
Score
3
Anonymous
February 11, 2012 8:00:38 AM

I think AMD has this in mind and is working towards binding them even further. If they can tie them together, use the GPU for FPU ops (which it is great at), and leave just basic integer work to the CPU cores, the speed of their current line of CPUs would go up quite a lot. Unlike Intel, AMD has a better team in ATI to help build a first rate CPU/GPU hybrid chip. They would still need a separate GPU card to tie in faster graphics memory, or motherboards would have to be upgraded to accept it because current onboard memory is too slow, but that is not a roadblock. In fact, if they adapted the faster GDDR memory for both CPU and GPU in their current or future builds, they would have a very fast product once they tied them together, and running with Windows 8 they would definitely be back in the top end performance segment. I have a feeling they are really working towards this end. One thing AMD is not afraid to do is try new ideas. We owe them a lot for where we are now because they innovate and try new ideas. Left to Intel we would still be using 32-bit chips. They were really bashing AMD about 64-bit when AMD did it, calling it not needed, and stayed on the huge-pipeline MHz race until AMD built a better design. It's only because Intel had more resources and money that they were able to basically take AMD's lead and perfect it.
Kind of like what Japan does with cars and just about everything else: no ground-breaking new ideas, just taking current ones and making them better. It's one reason we still need AMD even more than we need Intel.
We need innovators to continue to push new ideas, because this is the only way we progress to better ways of doing things in the long run.
One thing Intel isn't is a long-view company. Taking the long view is hard and risky, as we see with the first BD build from AMD. It's a radical design that will need more work, but it is perfect for a CPU/GPU merge.
Score
0
February 11, 2012 4:55:02 PM

I'm not 100% sure what they're saying they did here. Are they suggesting that the CPU acted as a memory manager for the GPU? I'd be surprised if performance improved through that; if it did, that's a sign of where GPU engineers need to focus development: a 20% boost on the same SPs and cache just by fixing issues with memory access is a huge thing.

What sounds more likely is that they're speaking of integrating the GPU into the CPU's pipeline, at least virtually. This is something that makes sense: it's originally something I was hoping would be seen with Llano. Anything that could be an "embarrassingly parallel" problem would be best offloaded to the GPU. While the SSE/AVX unit on current CPUs may be fine for some math, performance would indeed be better if the CPU could simply hand it off to another unit with vastly more power.

To put it into perspective, a single modern x86 CPU core (including any of Zambezi's EIGHT cores) using an SSE instruction can execute a single 4-wide 32-bit floating-point math instruction per cycle; the most common one used is "multiply-add," so that counts as a grand total of 8 FP operations per core, per cycle (4 multiplies, followed by 4 adds). This makes a 4-core Sandy Bridge top out at 32 ops/cycle, or a theoretical maximum of 108.8 gigaFLOPS for a 3.4 GHz Core i7-2600K. This is comparably VERY small once you put it side-by-side with a GPU, where each SP of an Nvidia GPU, or each cluster of 4 SPs on an AMD GPU, can accomplish the same math throughput per cycle as a whole core on an x86 CPU.
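Spelled out as a quick arithmetic check (just the numbers quoted above, nothing more):

```cpp
#include <cstdio>

int main() {
    const double cores        = 4;     // Core i7-2600K
    const double simd_width   = 4;     // four 32-bit floats per SSE register
    const double ops_per_lane = 2;     // multiply-add counted as 2 FLOPs
    const double clock_ghz    = 3.4;

    const double ops_per_cycle = cores * simd_width * ops_per_lane;   // 32
    std::printf("ops/cycle: %.0f, peak: %.1f GFLOPS\n",
                ops_per_cycle, ops_per_cycle * clock_ghz);            // 108.8
    return 0;
}
```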

Now, in all honesty I actually DON'T believe that having the GPU be separate on a discrete expansion card prevents this from being done; it merely introduces a lot of latency. While this might make some use less ideal, it's still quite possible for CPU tasks to be offloaded to the GPU, if latency isn't a critical requirement. It's quite possible that future architectures will provide us with a vastly lower-latency, more-direct interface between the CPU and the GPU. After all, current integration has put the main memory controller on the CPU die, and all but eliminated the Northbridge chipset.

The more telling thing here, though, is the fact that the original x87 line, before the 80486, was actually implemented as a separate chip on the motherboard, with its own socket, and it managed to work fine there. Granted, it DID simply sit on the same FSB as the CPU, but the physical distance proved to not be an issue. (similarly, cache used to be implemented on separate chips on the motherboard, which worked as well, albeit with higher latency)

loomis86 said:
You are completely missing it. This research proves separate GPUs are STUPID. [...] RAM and BIOS will be integrated on a single die someday also.

That would be even stupider. Need to replace the BIOS? GL, there goes the CPU as well! There's a reason that the BIOS has been separate from the dawn of the CPU. (the Intel 4001 served this purpose for Intel's 4004)

Ditto for RAM; the stuff needs to be quite variable. That, and by now the amount of silicon needed for a proper supply is huge: implementing all the components you speak of on a single die would require a massive die that would be MORE expensive than the current arrangement. This is because cost climbs sharply as your die surface area goes up: not only are you getting fewer chips per wafer (due to higher surface area per chip), but the failure rate ALSO goes up. The number of defects per wafer tends to be constant, but 8 defects on a 100-chip wafer is a mere 8% failure rate, while 8 on a 25-chip wafer is a whopping 32%. (This is a lesson Nvidia has learned the hard way again and again.)
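The failure-rate figures above, spelled out in a quick sketch (worst case of one dead chip per defect, defects per wafer held constant):

```cpp
#include <cstdio>

// Worst case: each defect kills one chip; defects per wafer stay constant.
static double failure_rate_percent(int defects_per_wafer, int chips_per_wafer) {
    return 100.0 * defects_per_wafer / chips_per_wafer;
}

int main() {
    std::printf("small dies, 100 per wafer: %.0f%% lost\n", failure_rate_percent(8, 100));  // 8%
    std::printf("large dies, 25 per wafer:  %.0f%% lost\n", failure_rate_percent(8, 25));   // 32%
    return 0;
}
```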

The same thing applies to discrete GPUs: putting the GPU on the same die as the CPU is stupid as a blanket approach. While for a tablet or phone it may make perfect sense, if you need high power you simply can't fit enough transistors on a single die. And no, you can't just wait for the next die shrink, because that will provide more space your competitor is going to use to make a more complex and powerful GPU.

alyoshka said:
Then they made the Sandy Bridge & the Llano, which calculated the graphics with the help of a secondary chip on board.

Actually, the GPU portions of Sandy Bridge and Llano ARE on the same die. They are not integrated onto the motherboard, or even on a separate die in the CPU's package.
Score
0
February 11, 2012 7:47:53 PM

Does AMD Dual Graphics mode share memory between the two GPUs? I know there is no memory buffer between the CPU and the integrated GPU.
Score
0
February 11, 2012 8:19:16 PM

billcat said:
I think AMD has this in mind and is working towards binding them even further. [...] Unlike Intel, AMD has a better team in ATI to help build a first rate CPU/GPU hybrid chip. [...] Left to Intel we would still be using 32-bit chips.


If we didn't have AMD, why would we still have 32-bit chips? Intel had 64-bit chips before AMD did: they're called Itanium. Itanium is a poor performer, but it still exists and predates AMD's first 64-bit consumer processors, the Athlon 64/FX families. AMD was the first to use 64-bit extensions of the x86 architecture, but not the first to have 64-bit chips intended for mass adoption. For the CPU/GPU hybrid to be first rate, does it need a first rate CPU and a first rate GPU, or is it okay to have first rate GPU functionality and a third rate CPU for the hybrid to be called a first rate CPU/GPU hybrid?

AMD does not have first rate CPUs anymore, so it's hard to say who could do this better. AMD makes good graphics, but their CPUs leave much to be desired right now. There's just no way to say that AMD's best consumer CPUs, the Phenom IIs, are first rate CPUs ever since Sandy Bridge came out. With Ivy Bridge around the corner and AMD still not having a decent successor to Phenom II (sorry, but Bulldozer is slower than Phenom II in most consumer applications), I'm not sure if AMD can pull out a win any time soon, if ever.

AMD even came out and admitted that they simply can't compete with Intel in performance anymore. Even if software could use 8 cores effectively, AMD would still lose to the i7s by a pretty wide margin, especially the Ivy Bridge i7s. They lose by a huge margin to the six core SB-E too, but that's not a fair comparison. As a gamer, there is no denying that as of right now the SB i5s are the best option and the IB i5s will be even better.

AMD's graphics are still going strong, but Nvidia will take back the lead with Kepler. Nvidia made the GTX 480 about as fast as the previous generation's dual GPU card, the GTX 295. It stands to reason that Nvidia will do something similar, if not going well past the GTX 590, with their next single GPU card, presumably the GTX 680. Once this happens, AMD will need to drop their prices and try to compete more on value than raw performance. However, I think that despite this AMD will have a much better position in the graphics market than in the CPU market. Here AMD doesn't need to compete on raw performance; they just need to compete on performance per watt and per amount of currency (US dollars for me).

If I had to buy graphics cards right now, I think I'd go for a 6870. Nvidia has shown that they either refuse to or are unable to compete with AMD outside of the high end market. The slowest Nvidia card I consider worth buying from the current generation is the 560 Ti because the 560 uses about the same amount of power for considerably less performance, making the Radeon 6870 a much more attractive option and I don't think I need to explain why not to buy a GTX 550/550 Ti. AMD does well in graphics, I'll call their graphics first rate. However, AMD's CPUs are not first rate anymore and no amount of sugar coating can change this fact. Perhaps AMD will fix the problems with Bulldozer, maybe they will abandon it like Intel abandoned Netburst.
Score
0
February 25, 2012 9:01:47 PM

With Llano now capable of gaming BF3 in ultra, how long before HSA allows ULV parts to do the same, especially when 21% is what current designs can milk from HSA?
Score
-1
February 28, 2012 4:11:36 AM

triny said:
With Llano now capable of gaming BF3 in ultra, how long before HSA allows ULV parts to do the same, especially when 21% is what current designs can milk from HSA?


Good luck gaming with a sub-par CPU that has graphics roughly equal to a Radeon 5550, somewhat slower than even the 6570. Another 20% or so wouldn't even catch up with the 6670, let alone any mid-range graphics cards. A Sandy Bridge Celeron or Pentium paired with a Radeon 6570/6670/6750 will offer far better performance for the same amount (or even less) of money as any Llano processor.

Face it, AMD failed almost universally on the CPU side unless you count mobile CPUs and places where highly multi-threaded work is done without the need for something like an i7 or Xeon. For desktops, AMD only wins in low and mid end highly threaded work. For laptops, AMD only wins in low end and middle end systems. For net-books, well AMD pretty much wins all around here, but net-books aren't a great market anymore and really are losing interest.
Score
-1
February 29, 2012 12:28:41 PM

I forgot to mention this in my earlier post, but the A8 graphics of the top Llano APUs is equal to a Radeon 5550 (it is a modified Radeon 5550 or so anyway), not enough for gaming on ultra on anything even remotely graphics heavy. It is significantly slower than even the 6570 and that is not a card good enough for common gaming.

A8s have the best IGP, but it is not enough for serious gaming and can struggle with even minimum settings and resolutions in some of the modern games.
Score
0
February 29, 2012 12:50:58 PM

I forgot this again, but there is no way that an A8 can do ultra in BF3 even at minimum resolutions with playable frame rates. Not gonna happen. Not even remotely playable frame rates. Remember, it is a Radeon 5550, not a 5750 or something like that, not even close to a 6670. The 5550/6550D is probably around half of a 6670 and that is the entry level graphics card.
Score
0
March 2, 2012 1:47:49 AM

tvtbtdra said:
They will be fast enough to do damn near everything for everyone, and the only people who want more will need a specialty item (probably won't be overly price inflated, due to the size of the chips at the time).


An A8 can't even go beyond 1024x768 in Metro 2033 and BF3, not much higher in the other modern games except for Star Craft 2 which is a very light game compared to the others. Definitely not enough for even most gamers, especially most gamers on this website. Most of us use something faster than the Radeon 5550 which is about identical to the A8's 6550D.

If a Radeon 6670 or better is a "specialty item" to you then you are, at best, a casual gamer, and the opinion expressed in your comment has no weight for the vast majority of us. I am a casual gamer at best, but I am very knowledgeable about serious gaming and I can tell you that most video cards are not specialty items.

These A8s are fast enough for most non-gaming workloads or light gaming workloads like most people do, but so are the HD 2000 and HD 3000 IGPs from Intel's CPUs. Even the integrated crap on AMD's AM2+/3/3+ motherboards is good enough for regular work and it's even weaker than Intel's HD 2000.

Even the absolute garbage GMA 950 from my old Intel Pentium Dual-Core (a cut down first generation Core 2 Duo) is good enough for regular work and watching movies, although it might have problems with 1080p and probably can't do 3D 1080p. However, the other graphics I listed most certainly can do 1080p, and Intel's HD IGPs can do 3D 1080p.

Don't think for a second that any of this stuff can do 1080p in gaming, because there isn't a chance of it, and 3D 1080p is twice as intensive as regular 1080p so not even an A8 can do it for Star Craft 2 at even 15FPS, let alone a playable frame rate. I've heard that an A8 can get about 28FPS in 1080p with lowered settings in Star Craft 2, but don't think that it can do it at decent settings nor even that good in any other recent game.
Score
0
March 5, 2012 1:27:37 AM

We will have to wait a long time to get this.
Score
-2