
What is the benefit of integrated graphics (a la Fusion)

Tags:
  • CPUs
  • GPUs
  • Graphics
April 4, 2007 2:02:55 AM

This might be a noob question, but I just gotta ask.

What is the benefit of integrating the GPU onto the same die as the CPU? I know that pulling the memory controller onto the CPU die yielded big gains in memory access latency, which translated to gobs of performance for the Athlon64 / Opteron. But that's because memory access is two-way communication. For graphics, isn't most of the traffic one-way? The processor sends polygons in the form of vertices to the GPU, and the GPU does the texturing, rendering, shading, lighting, and whatever else to make it look real, then throws it at your eyeballs. Extra distance (latency) between the CPU and the GPU doesn't really matter; can you even tell that your frame hits the screen 1 ms late? 1 ms is an eternity in the context of bus speeds, but too short an interval for human reflexes to detect.

I don't see the benefit, from a graphics point of view, of bringing the GPU closer to the CPU, unless the PCIe bus is currently saturated by graphics traffic, which I kinda doubt. From that standpoint, I think it makes sense to have a separate graphics processor sitting on your PCIe bus. Judging by how big and power hungry today's cards are, there must be a lot of work being done by them... work that is done just as well off-die from the CPU as it could be done on-die. I'd much rather see the transistor budget on the CPU go toward more cache and processing cores.
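To put some very rough numbers on that, here's a quick back-of-the-envelope sketch; the PCIe figure (PCIe 1.x, 250 MB/s per lane) and the 60 fps frame budget are just my own assumed ballpark values, not anything official:

```python
# Back-of-the-envelope numbers for the "one-way traffic" argument.
# Assumptions: a PCIe 1.x x16 link (250 MB/s per lane) and a 60 fps frame budget.

pcie_x16_bw = 16 * 250e6     # ~4 GB/s each direction
frame_budget = 1.0 / 60      # ~16.7 ms per frame at 60 fps
extra_latency = 1e-3         # the hypothetical 1 ms of extra CPU-to-GPU latency

print(f"PCIe x16 one-way bandwidth: {pcie_x16_bw / 1e9:.1f} GB/s")
print(f"Frame budget at 60 fps:     {frame_budget * 1e3:.1f} ms")
print(f"1 ms of extra latency is ~{extra_latency / frame_budget:.0%} of a frame")
```

Point being, as far as I can tell the link is nowhere near being the bottleneck for the geometry a game actually pushes per frame, and 1 ms is a small slice of the frame budget anyway.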


April 4, 2007 2:56:28 AM

Hey, at least there would only be one heat source in your case! Much easier to deal with. Think of those with water cooling: one less block in the chain.
April 4, 2007 3:23:59 AM

IMO a dedicated video card will always be better than one integrated with the CPU, for one simple reason: heat.
I doubt overclocking options will be very good in an integrated solution.
At the budget end, though, the integrated CPU/GPU will become king.
I don't think an 8x AGP bus is even saturated by graphics traffic. :? (Someone will no doubt correct me if I'm wrong. 8O )
April 4, 2007 3:54:18 AM

It's not about filling lanes... it's about eliminating them! That lets the GPU and CPU start working together much better. CPUs were not built with graphics requirements in mind, and so have taken a different approach to completing computations... now the two must unite!!
April 4, 2007 4:05:36 AM

Cost! <--- that's it!

A separate GPU will always outperform, since it can take the same stuff from a combined unit and increase its output.
April 4, 2007 4:45:49 AM

We are talking PCs, not game consoles. Integrated graphics/CPU chips will not be made with the gamer in mind; PCs are required to do more than game. I can see "Fusion" and others like it being used in small mobile media, home theater, and low-power workstations; not in the gaming PC.
April 4, 2007 5:25:56 AM

Don't forget the nuclear-powered PSU to run it. :lol: 
April 4, 2007 5:35:52 AM

What, like 80 33MHz cores on one die? jk :lol: 
Add 80 GPU cores to that and you would have one hell of a F@H beast :!:
April 4, 2007 6:00:53 AM

They had a lot of heat issues though, didn't they? At 5.7GHz. Or am I thinking of a different article?
April 4, 2007 6:01:40 AM

Quote:
For the record, on-die GPUs are planned for mobile platforms, a la Fusion.


Correct. I haven't heard of anyone talking about a performance Fusion. By melding the CPU and GPU onto one chip, they are talking about heat and power savings. Imagine ultra-thin notebooks that have only the one chip in them. Less metal needed to cool them, and one controller dynamically adjusting the speed of both the CPU and GPU.
April 4, 2007 6:33:54 AM

Fusion is a pretty good concept for notebooks and office PCs. Lower cost, less heat, saving space, you name it. Another aspect could be interesting too: the GPU and CPU working together on some tasks (stream computing).
April 4, 2007 7:07:31 AM

Must be. I'm sure I read somewhere that someone got a proc to 5.8GHz or something, but it was a little too toasty. Oh well, you know what my memory's like... (well, actually you probably don't, but it's bad)
April 4, 2007 7:57:59 AM

Ok, I've been reading up on this topic for quite a while, so here are the pros and cons.

Pros:

1) GPUs gain early access to advanced manufacturing technologies. However, this is only true if AMD is going to fab the Fusions in-house, and for the foreseeable future it doesn't look like they are.

2) Even a low-end GPU, when integrated into the CPU, would significantly improve the GFLOPS count. If you are a gamer, you can use the GPU as a GPU until you get an AIB one, then use it as a 'PPU'; if you are a supercomputer engineer, you can use these processors to improve your performance. Also add the possibility of playing a game while running F@H in the background.

3) Lower power consumption. There is no need for information to travel over external buses, which would significantly reduce power consumption.

4) Cheaper cost of manufacture. Although the die will be larger, it is still cheaper than two separate dies.

5) Smaller computers.

6) Ray tracing will also become possible. Imagine a 'graphics-centric' Fusion where you can dedicate more die area/transistors to the GPU and have a small CPU next to that GPU.

7) Better and cheaper gaming laptops, but only if AMD gives us the ability to buy these 'graphics-centric' Fusions. I do agree that the gamer market is a very niche one, but keep in mind that some supercomputer engineers would want more FP performance with every processor they buy, so AMD would have more than one market in which to sell these Fusion processors.

8) More CPU-to-GPU bandwidth.

- The on-die GPU could run at the same clock speed as the CPU; even a low-end GPU clocked 4X higher than any other would have a big ability to compete. However, this isn't 100% certain (AMD has only said it might happen), which is why I don't really want to include it as a full pro.

Cons:

1) Power consumption and heat given off by the GPU are a BIG headache when trying to make such a processor.

2) Memory bandwidth. Most of us know that GDDR memory of all types is much faster than normal DDR memory, so the GPU would suffer memory bottlenecks that might decrease its performance. However, this could be mitigated by using eDRAM. (Rough numbers below.)
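To give a feel for the gap I mean in con 2, here's a rough sketch; the parts are just assumed examples (dual-channel DDR2-800 on the CPU side, a 256-bit GDDR3 card at 1600 MT/s on the graphics side), not any specific product's spec:

```python
# Rough peak-bandwidth comparison: system memory vs. graphics card memory.
# Assumed example parts: dual-channel DDR2-800 vs. a 256-bit GDDR3 card at 1600 MT/s.

def peak_bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    """Peak bandwidth = transfer rate x bus width in bytes."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

system_ram   = peak_bandwidth_gb_s(800e6, 128)    # 2 x 64-bit channels -> ~12.8 GB/s
graphics_ram = peak_bandwidth_gb_s(1600e6, 256)   # 256-bit GDDR3       -> ~51.2 GB/s

print(f"Dual-channel DDR2-800 : {system_ram:.1f} GB/s")
print(f"256-bit GDDR3 card    : {graphics_ram:.1f} GB/s")
```

So an on-die GPU sharing system RAM would be working with roughly a quarter of the bandwidth a decent card gets, which is why eDRAM (or something like it) comes up.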

Here you go; in case I missed something, or said something wrong, please tell me.
April 4, 2007 8:08:44 AM

It is an everything platform, actually: mobile first, with desktops (soon) to follow.
April 4, 2007 8:14:37 AM

Quote:
Ok, I've been reading up on this topic for quite a while, so here are the pros and cons.

Pros:

1) GPUs gain early access to advanced manufacturing technologies. However, this is only true if AMD is going to fab the Fusions in-house, and for the foreseeable future it doesn't look like they are.



I am pretty sure AMD mentioned they would be using SOI for Fusion. Considering that ATI used TSMC on bulk silicon, the use of SOI definitely means AMD fabbing internally or outsourcing to its partner Chartered.
April 4, 2007 9:21:28 AM

Quote:
I'll check it, but I was looking for a white paper on the silicon-on-insulator process you mentioned versus the process you said they had prior to being bought by AMD.


I guess I came across a presentation explaining the two... let me check

http://www.google.com/url?sa=t&ct=res&cd=1&url=http%3A%...

Here you go... but frankly part of the presentation is way too technical 8O


I don't normally look at bulk vs. SOI, but I got interested when I heard that Chartered had SOI and TSMC didn't, and that at smaller geometries SOI makes more sense as it allows for deep channel etching...

Anyway, tons of companies still use bulk... the biggest example being Intel :) 
April 4, 2007 2:07:57 PM

What I'm reading here is the following:

A "Fusion" approach gives great benefits in cost, power consumption, and floating point performance.

But back to my original post: given that graphics traffic is inherently one-way, the best system for the high-end gamer, even after Fusion becomes available, is still an off-die GPU sitting on the PCIe bus coupled with the best multicore general-purpose CPU. I say this because today's games (afaik) are more dependent on integer performance than FP performance. In short, it takes a certain number of transistors to perform a certain amount of work, and if you can increase your total transistor budget by moving some of it off-die without hurting performance, then that's a plus.

I speak only from a high-end gaming perspective. Maybe you could do some out-of-this-world fantastic things with the massive array of FP processors that moonlights as a graphics core, but building a better gaming machine than the guy who used a top-of-the-line multicore general-purpose CPU with a top-of-the-line graphics card is not one of them. My conclusion is negated, of course, if graphics traffic is not inherently one-way, or if games really are floating-point limited. Any thoughts on this?
April 4, 2007 3:14:28 PM

Quote:
If the graphics engine has access to 80 local CPU cores, I would imagine the graphics will be capable of ray tracing.


Quote:
Based on these numbers and on the relative performance differences between the P4 and the latest processors from Intel and AMD, I think it's reasonable to guesstimate that a four-core Conroe or Athlon 64 system would get you to the point where you could do real-time ray tracing at 450M raysegs/s or higher. This would put software-based real-time ray tracing well in reach of a God Box in late 2007, and a Performance Gaming Box not too long thereafter. These numbers also make me wonder if you couldn't do real-time ray tracing on a PS3, or even on an Xbox 360 (less concurrency than the PS3, but higher per-thread performance).


Quote:
You may not need to spring for a four-core system to do real-time ray tracing, however, because dedicated hardware could do the trick. Researchers at Saarland University have used an FPGA to build a dedicated ray tracing processor that can't quite yet do real-time ray tracing, though it does show promise. The FPGA, which implements what the researchers are calling a ray tracing processing unit (RPU), runs at only 66MHz, but it can render some reasonably complex scenes at 512x384 and from 1.2 to 7.5 FPS. An RPU implemented on a CMOS process with a more respectable clock rate should be able to do real-time ray tracing at higher resolutions. (The RPU's designers don't use the same raysegs/s metric that the Intel group came up with, so direct comparisons between the software techniques described above and the RPU are difficult.)


Both quotes are from http://arstechnica.com/news.ars/post/20060805-7430.html

which talks about this article:

http://www.sciam.com/article.cfm?chanID=sa006&colID=1&a...

subscription required. sorry.

I had the issue at home and read it when it came out. Very interesting stuff. I don't think I still have it and I don't have access to the online portion.

Looks like real-time could be in reach with a 4-core or 8-core setup.
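Just to put that 450M raysegs/s guesstimate into perspective, here's a quick sketch; the resolution and frame rate are my own assumptions:

```python
# What ~450M ray segments per second buys you, roughly.
# Assumptions: 1024x768 output at 30 fps; 450M raysegs/s is the figure
# guesstimated in the Ars Technica piece quoted above.

raysegs_per_sec = 450e6
pixels = 1024 * 768
fps = 30

segs_per_pixel = raysegs_per_sec / (pixels * fps)
print(f"~{segs_per_pixel:.0f} ray segments per pixel per frame")   # ~19
```

A primary ray plus a handful of shadow or reflection segments per pixel fits in that budget, so "in reach" sounds about right, at least for fairly simple scenes.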
April 4, 2007 3:59:13 PM

Quote:
But back to my original post, based on the fact that graphics traffic is inherently one-way, the best system for the high-end gamer, even after the availability of Fusion, is still an off-die GPU sitting on the PCIe bus coupled with the best multicore general-purpose CPU.


Indeed. The only real performance benefit of having the GPU on the CPU core is that you get faster communication between the two... and since games generally aren't limited by how fast you can get data to the GPU, it's unlikely to be faster than two chips of the same size running on separate boards.

However, for building a cheap and highly integrated system which doesn't need top performance, it's a good plan. Whether it's something that AMD should be spending time on when they're already lagging behind Intel is another question, of course.

As for using the on-chip GPU for fast floating point... uh, why not just put a fast floating-point unit on the CPU instead?
April 4, 2007 6:22:46 PM

Quote:

Indeed. The only real performance benefit of having the GPU on the CPU core is that you get faster communication between the two... and since games generally aren't limited by how fast you can get data to the GPU, it's unlikely to be faster than two chips of the same size running on separate boards.

However, for building a cheap and highly integrated system which doesn't need top performance, it's a good plan. Whether it's something that AMD should be spending time on when they're already lagging behind Intel is another question, of course.

As for using the on-chip GPU for fast floating point... uh, why not just put a fast floating-point unit on the CPU instead?


AMD chose the Fusion path because they knew that some day they would lag behind Intel; making Fusion processors means that AMD would have much better, cheaper, faster solutions with lower power consumption. There is a lot of money to be made out of IGPs, and Fusion is what AMD needs.

As for your remark about the GPU as an FPU: GPUs would be much cheaper, so a CPU with a GPU on die would have a much better GFLOPS/$ ratio. However, for use in HPC, I think AMD will do what they have done for the stream processor: remove the TMUs, ROPs, etc. and keep only the shaders. So an HPC Fusion is actually CPU core(s) + shader cores, and you can consider this 'shader-only' GPU an FPU.
April 4, 2007 11:52:57 PM

Quote:

AMD chose the Fusion path because they knew that some day they would lag behind Intel; making Fusion processors means that AMD would have much better, cheaper, faster solutions with lower power consumption.


So they'll have bigger chips with more transistors... which will be better, cheaper and faster with lower power consumption than dedicated CPUs with no GPU.

Don't you see the slight inconsistency here?

Quote:
There is a lot of money to be made out of IGPs, and Fusion is what AMD needs.


AMD needs competitive chips. While they're spending time making highly integrated chips for the bottom end of the market, they're losing the mid-range and top end to Intel. Maybe that will turn out to be a good strategy, but I don't see it myself.

Quote:
As for your remark about the GPU as a FPU, GPUs would be much cheaper


Why do you think that putting a graphics chip into a CPU will be cheaper than a floating point unit? One requires a whole host of hardware which is irrelevant to floating point operations, one doesn't... but you think the more complicated chip will be cheaper and simpler?
April 5, 2007 10:43:22 AM

Quote:

So they'll have bigger chips with more transistors... which will be better, cheaper and faster with lower power consumption than dedicated CPUs with no GPU.

Don't you see the slight inconsistency here?


Bigger chips with more transistors, agreed. But still much cheaper than two separate dies, and lower power consumption than a separate CPU and GPU connected by buses.

Quote:
AMD needs competitive chips. While they're spending time making highly integrated chips for the bottom end of the market they're losing the mid-range and top end to Intel. Maybe that will turn out to be a good strategy, but I don't see it myself.


No they aren't. AMD said that the number of engineers working on separate CPUs and separate GPUs hasn't decreased, so their products should be as competitive as they always were. Until and unless we hear about people from specific teams being moved to others, AMD's CPUs will hopefully remain competitive.

Quote:
Why do you think that putting a graphics chip into a CPU will be cheaper than a floating point unit? One requires a whole host of hardware which is irrelevant to floating point operations, one doesn't... but you think the more complicated chip will be cheaper and simpler?


I am going to try to make myself clear. GPGPU is gaining fame and acceptance day by day, and soon we will see more programs taking advantage of GPGPU. So let's say a company wants to do financial analysis, and they develop a GPGPU application for it. This app can run on the on-die GPU of the Fusion CPU, or on separate GPU chips. Now, if the Fusion CPU only has a normal FPU, this GPGPU application will only run on the separate GPU chips, and to take advantage of the FPU the program would have to be rewritten. So an on-die GPU next to the CPU is more worthwhile, since it can take advantage of applications written for separate GPUs. (A rough illustration of the idea is sketched below.)
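Purely to illustrate that last point, here's a made-up sketch; the Device class and the kernel name are hypothetical, not any real AMD or GPGPU API:

```python
# Hypothetical illustration of the "write once for GPUs" argument.
# Nothing here is a real API; the Device class and kernel name are invented
# to show why an on-die GPU could reuse code written for discrete GPUs,
# while a plain FPU could not.

class Device:
    def __init__(self, kind, name):
        self.kind = kind    # "gpu" covers both on-die (Fusion) and discrete boards
        self.name = name

    def run_kernel(self, kernel_name, data):
        # Stand-in for dispatching a GPGPU kernel to this device.
        return f"{kernel_name} ran on {self.name} over {len(data)} items"

def run_financial_analysis(data, device):
    if device.kind == "gpu":
        # The same GPGPU kernel runs whether the GPU sits on the CPU die
        # or on a separate board: both look like a GPU to the application.
        return device.run_kernel("monte_carlo_pricing", data)
    # A conventional FPU is just part of the CPU, so using it means
    # writing (or rewriting) a separate CPU code path.
    return f"plain CPU/FPU fallback over {len(data)} items"

print(run_financial_analysis(range(1000), Device("gpu", "on-die Fusion GPU")))
print(run_financial_analysis(range(1000), Device("gpu", "discrete GPU board")))
print(run_financial_analysis(range(1000), Device("cpu", "CPU with plain FPU")))
```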