AMD's "Fusion" processor to merge CPU and GPU

October 25, 2006 4:46:45 PM

AMD today announced that it has completed the acquisition of graphics chip developer ATI. The company wastes no time putting the acquired expertise to use: in 2007, AMD will upgrade its mobile platform and offer a Centrino-like platform as well as integrated solutions for commercial and media systems. And there will be a processor with built-in graphics.

Integrated CPU/GPU chip, that sounds really amazing!

But what will the future hold for Nvidia and other graphics processor manufacturers? What would happen to Intel on the gaming turf?
October 25, 2006 4:58:20 PM

Well, good for AMD, but IIRC way back in 2000 Intel had a project on this very thing: CPU+GPU on a single die. But they scrapped the project; I don't remember correctly whether it was a money matter or whatever. Maybe someone can provide the link (CNet news?). So as far as Intel is concerned, I think they know the ABC of the technology; they just need to get to XYZ.
October 25, 2006 5:06:04 PM

Some historical lessons may be gleaned from the Cyrix MediaGX, which was a system-on-a-chip for entry-level PCs. But I'm sure AMD has considered that and understands what makes its situation different from Cyrix's.

On the gaming turf, I don't expect anyone to abandon mainstream discrete graphics cards (only the lower-end cards). Thus I think the Fusion chip would contain some silicon real estate that gamers and enthusiasts don't need, though that's not a detriment to performance. For the rest of the systems, it could save some money over an integrated video or low-end discrete solution.
October 25, 2006 5:14:49 PM

What would be good is a series of processors with integrated graphics plus add-in cards. If they worked it like Xfire (yes, with a completely different technical setup, I know it's not that easy), you could get your processor with a standard graphics core built in, then have the option to get a beefed-up card to work alongside it. I wonder if anything like that is feasible, or am I dense? :)
October 25, 2006 5:32:34 PM

In my opinion, this might be a good step up for those who now use the "integrated graphics" seen on cheap computers, mainly the Intel ones from what I've seen. It would offload the main CPU so that it could run at full power while the graphics part did its thing.

At the same time, I don't see it working with the enthusiast crowd. Can you imagine wanting to upgrade your video card and having to buy a whole new CPU/graphics chip? Or upgrading your CPU and having to do the same? Not me, for one. I tend to get a good CPU and keep it for a long time, upgrading the video card as I go along.
October 25, 2006 5:35:22 PM

The name was Timna

Quote:
Mooly Eden: It was a huge risk for the IDC. Banias market came just after Timna had been canceled [Timna was the codename for an integrated processor designed for the entry-level market and originally scheduled for the second half of 2000 - ed]. We had worked on Timna for two years and needed to make sure that we didn't get another project canceled. In such a case, the company may lose confidence in the development center. And worse than this, the people may lose confidence in themselves. But the biggest risk in this industry is not to take risks, because then you are doomed. If you want to play it safe, you are out of the game.


Quote:
The Timna microprocessor family was announced by Intel in 1999. Timna was planned as a low-cost microprocessor with an integrated graphics unit and a memory controller designed to work with Rambus memory. The company anticipated that by the time the processor was released to market, that is in the second half of 2000, the price of Rambus memory would fall to the level where it could be used in value computer systems. As the price of Rambus memory failed to drop, Intel decided to use a bridge chip (Memory Translation Hub, or MTH), which was already used with the Intel 820 chipset, to link the Rambus memory controller with less expensive SDRAM memory. When a serious bug was discovered in the MTH design in the first half of 2000, Intel recalled the MTH and delayed the Timna release until the first quarter of 2001. After that, the company started a redesign of the MTH component from scratch, but due to continuing problems with the newly redesigned MTH part, as well as a lack of interest from many vendors, the Timna family was finally cancelled on September 29, 2000.


I think it can be a good option. Just look at all the integration going on on motherboards; you need less and less expansion. I can see the trend moving onto the CPU if the lithography allows it!
October 25, 2006 5:42:21 PM

That sounds possible, but I think they would probably start with the entry-level systems. The on-die graphics might be so slow that the overhead of doing the 'crossfire' would outweigh the benefits. Or it would be such a negligible increase in performance that they wouldn't bother selling it.
October 25, 2006 5:47:31 PM

Quote:
In my opinion, this might be a good step up for those who now use the "integrated graphics" seen on cheap computers, mainly the Intel ones from what I've seen. It would offload the main CPU so that it could run at full power while the graphics part did its thing.

At the same time, I don't see it working with the enthusiast crowd. Can you imagine wanting to upgrade your video card and having to buy a whole new CPU/graphics chip? Or upgrading your CPU and having to do the same? Not me, for one. I tend to get a good CPU and keep it for a long time, upgrading the video card as I go along.

Totally agree.
I can see its potential in many office systems, etc. that don't need good graphics. However, I wonder if the GPU part will have separate RAM or what; it's not like they can stick it on the CPU+GPU chip (not much real estate).

The problem is that if you want to upgrade, you're kind of stuck. Either you're stuck with the CPU+GPU chip, or you add an expansion card and you're stuck with half a chip that doesn't do anything. And if it's more expensive than just a CPU by itself, it seems like a waste of money to get the CPU+GPU. A saving grace there might be strong floating point performance from tapping the GPU.

Another thing I'd throw out is that I would think this kind of integrated chip would be hard to yield in the fabs. But I'm not a yield expert, so hopefully someone else can comment on it.
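For what it's worth, the yield worry can be made concrete with the standard first-order defect model (a rough sketch only; the die areas and defect density below are illustrative assumptions, not figures for any real chip):

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.5  # assumed defect density, defects per cm^2
cpu_only = poisson_yield(1.0, D0)      # a ~1.0 cm^2 CPU die
cpu_plus_gpu = poisson_yield(1.5, D0)  # same CPU with ~0.5 cm^2 of GPU logic bolted on

print(f"CPU-only yield: {cpu_only:.1%}")
print(f"CPU+GPU yield:  {cpu_plus_gpu:.1%}")
```

Under those assumptions the bigger die drops from roughly 61% to 47% good dies, which is the sense in which a combined chip is "harder to yield".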
October 25, 2006 5:53:36 PM

Quote:
Well, good for AMD, but IIRC way back in 2000 Intel had a project on this very thing: CPU+GPU on a single die. But they scrapped the project; I don't remember correctly whether it was a money matter or whatever. Maybe someone can provide the link (CNet news?). So as far as Intel is concerned, I think they know the ABC of the technology; they just need to get to XYZ.


I think I remember something like that. It was around the same time the P3 was still using the SEC (single edge cartridge) type CPUs. Intel also said around this time that these types of CPUs might someday contain all the system components of a home computer.
October 25, 2006 5:57:05 PM

Quote:
AMD today announced that it has completed the acquisition of graphics chip developer ATI. The company wastes no time putting the acquired expertise to use: in 2007, AMD will upgrade its mobile platform and offer a Centrino-like platform as well as integrated solutions for commercial and media systems. And there will be a processor with built-in graphics.

Integrated CPU/GPU chip, that sounds really amazing!

But what will the future hold for Nvidia and other graphics processor manufacturers? What would happen to Intel on the gaming turf?


A CPU/GPU combo will NOT happen before 2008. I would think that AMD will first get a GPU into a Torrenza socket as an accelerator and then move to on-die at 45nm.

We may see HTX (slot) first, but by Q3'07 there will be a Torrenza chip from ATI. Imagine putting an X1950 next to Barcelona for FP or media streaming. Or how about two Barcelonas and two X1950s in a quad-socket box.
8O

I think that nVidia will be fine, as they can still sell to AMD and Intel. They will probably present the Havok physics engine for PCIe and HTX. The difference in slot shouldn't affect their price structure, while HTX will give LOTS more bandwidth for interconnects than even PCIe 2.0.

I also wouldn't be surprised if they use Torrenza and the Open Socket to create their CPU/GPU. Licensing Intel's bus may not provide the horsepower, and creating their own FSB (with enough BW to compete with HT2) would take too long.
October 25, 2006 6:10:31 PM

I can't imagine putting an X1950XT inside a processor die....


Can we say heat dispersion? With so little surface area, you're looking to shed quite a lot of heat very quickly. This would make water cooling mandatory... or possibly shuttle tiles??
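The cooling concern is really about power density, not total watts; a back-of-envelope sketch (all the wattages and die areas here are made-up illustrative numbers, not specs for any announced part):

```python
def power_density_w_cm2(watts: float, area_mm2: float) -> float:
    """Heat flux in W/cm^2 for a given load spread over a given silicon area."""
    return watts / (area_mm2 / 100.0)  # 100 mm^2 = 1 cm^2

discrete = power_density_w_cm2(100.0, 350.0)  # ~100 W over a large discrete GPU die
on_die   = power_density_w_cm2(100.0, 120.0)  # the same 100 W on a smaller on-die block

print(f"Discrete GPU: {discrete:.0f} W/cm^2")
print(f"On-die GPU:   {on_die:.0f} W/cm^2")
```

Same heat, smaller area, nearly triple the flux; that is why a full X1950-class core on the CPU die looks so unpleasant to cool.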
October 25, 2006 6:17:43 PM

Baron, off topic, but congratulations on the AMD/ATI marriage. I expect some really exciting product offerings from AMD and not just great CPUs anymore. I'm already drooling about the possibilities. :D  Good luck, AMD!!
October 25, 2006 6:23:21 PM

Quote:
The name was Timna

Mooly Eden: It was a huge risk for the IDC. Banias market came just after Timna had been canceled [Timna was the codename for an integrated processor designed for the entry-level market and originally scheduled for the second half of 2000 - ed]. We had worked on Timna for two years and needed to make sure that we didn't get another project canceled. In such a case, the company may lose confidence in the development center. And worse than this, the people may lose confidence in themselves. But the biggest risk in this industry is not to take risks, because then you are doomed. If you want to play it safe, you are out of the game.


Quote:
The Timna microprocessor family was announced by Intel in 1999. Timna was planned as a low-cost microprocessor with an integrated graphics unit and a memory controller designed to work with Rambus memory. The company anticipated that by the time the processor was released to market, that is in the second half of 2000, the price of Rambus memory would fall to the level where it could be used in value computer systems. As the price of Rambus memory failed to drop, Intel decided to use a bridge chip (Memory Translation Hub, or MTH), which was already used with the Intel 820 chipset, to link the Rambus memory controller with less expensive SDRAM memory. When a serious bug was discovered in the MTH design in the first half of 2000, Intel recalled the MTH and delayed the Timna release until the first quarter of 2001. After that, the company started a redesign of the MTH component from scratch, but due to continuing problems with the newly redesigned MTH part, as well as a lack of interest from many vendors, the Timna family was finally cancelled on September 29, 2000.


I think it can be a good option. Just look at all the integration going on on motherboards; you need less and less expansion. I can see the trend moving onto the CPU if the lithography allows it!

Yes, exactly: Timna! Thanks a lot! If Torrenza can happen, then surely a Timna with a new name and enhanced technology, on the latest manufacturing process with the best architecture (Gesher, possibly), can happen too. Time will tell! Once again, thanks for the dig! I remembered because it was already discussed in the "Torrenza" thread, so most of this technology has been discussed already; nothing new.
October 25, 2006 6:23:34 PM

Integrated CPU/GPU... it had to be tried again, didn't it?

I can't see it being all that amazing, despite what people say. 'Oh goody,' I hear, 'PROPER integrated graphics, about time!'

Eh, no. Not really. It's not going to set the world alight, or even create that much smoke. It's an interesting concept, but do we REALLY need it? After all, current integrated systems work perfectly fine for the users who require one. This is my take on the whole shebang...

(Disclaimer: These points are my own, and aren't based on anything other than my own thoughts and theories, which may be proven wrong at a later stage. If you feel the need to post derisory crap telling me I'm wrong, that's fine. Just remember, you're not actually doing anything constructive when you do so.)

1) Connectivity.

Pin density on CPUs is already getting rather crowded, and likewise on the GPU side of things. Now, while pin counts won't exactly double, HOW are you going to provide enough connection points without making the package size much bigger? From 30 seconds of thought, it'll be a similar package to what current GPUs have. Which means these units will be soldered to the motherboard. Not good news for upgrades.

2) Die Size.

OK, not as much of an issue as a lot of people might think. I guess the popular first image is of a CPU with a GPU on the same silicon. Yes, in a way it is. But it'll be tightly integrated with the CPU, in a way that will possibly see the two devices sharing circuitry. This sharing model will reduce overall heat production and power consumption. The downside will be a larger die needed to hold the extra circuitry and, accordingly, fewer dies per wafer. This could potentially increase costs. Still, I have to concede that it would probably still be cheaper than a CPU plus a supporting external GPU/chipset.
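The fewer-dies-per-wafer point falls out of the usual gross-die approximation; a sketch with illustrative die sizes (not real Fusion figures):

```python
import math

def gross_dies_per_wafer(wafer_diam_mm: float, die_area_mm2: float) -> int:
    """Common estimate: pi*r^2/A minus an edge-loss term of pi*d/sqrt(2A)."""
    d = wafer_diam_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

cpu_only = gross_dies_per_wafer(300, 150)      # assumed CPU-only die, 150 mm^2
cpu_plus_gpu = gross_dies_per_wafer(300, 220)  # same CPU with GPU circuitry added

print(cpu_only, cpu_plus_gpu)
```

On a 300 mm wafer the larger die gives roughly a third fewer candidates per wafer, before any yield losses are even counted.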

3) Memory Control

Again, not so much of an issue; more a concern about control of the memory available to both parts. Will the IMC be responsible for allocating memory for both the CPU and the GPU? Can it cope? Certainly it will need a revision to allow for the fact that it now has roughly twice the workload to handle, but this could very well be a good thing. After all, if the IMC becomes more efficient in its operation, it will benefit both devices. The downside is we're still using system RAM. But again, I guess for the people who will be using this setup, GFX performance is hardly a deciding factor.
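To give the shared-RAM concern a scale, here is a quick bandwidth budget (the memory configuration is an assumed typical 2006 desktop, nothing more):

```python
def dram_peak_gb_s(mt_per_s: float, bus_bits: int, channels: int) -> float:
    """Peak DRAM bandwidth: transfers/s * bytes per transfer * channel count."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

total = dram_peak_gb_s(800, 64, 2)    # dual-channel DDR2-800
# Display scan-out alone for a 1280x1024 desktop at 60 Hz, 32-bit colour:
scanout = 1280 * 1024 * 4 * 60 / 1e9

print(f"Peak system bandwidth: {total:.1f} GB/s")
print(f"Scan-out share:        {scanout / total:.1%}")
```

Plain scan-out is only a few percent of peak; the real contention comes from 3D texture and framebuffer traffic, which is why dedicated cards keep their own faster RAM.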


4) Heat

I mentioned this back in #1: heat could be an issue for these combined units. Probably not as hot as a Prescott (or, say, the centre of the sun, or a blast furnace), but it's not going to run cool either. I say this as, ultimately, I can't see this CPU/GPU approach being used for the higher-end units, as that would be somewhat pointless. I'll wait and see what info we get regarding this before I decide any further.

I'm going to leave it there for a while, as I can't think of anything more right now. I'm interested to see what will come from this venture, as AMD is certainly in no position to release a product that cannot perform well and gain significant market share. If anyone has any valid and useful insights or information regarding this, please feel free to post.

If you intend to flame, I'll set ActionMan and StrageStranger on you. And if you continue, I might have to send BM, 9NM and Sharikou round to pay you a visit... you have been warned :D 
October 25, 2006 6:28:24 PM

Quote:

A saving grace there might be strong floating point performance by tapping the GPU.


They did say it's not just for GFX, but also to
Quote:

...leverage the floating point capabilities of graphics engines.

Seems like they intend to be able to use it for more than just GFX. So for a gaming rig you would probably get a dedicated GFX card (so that it could be very close to large RAM caches on a dedicated bus), and the "GPU/CPU" would then be used as a "GPGPU/CPU" to bump up the floating point power of the CPU, doing physics or something while the discrete GFX card did the imaging. So it's not just a replacement for "integrated GFX"; it also basically brings back the math coprocessor we did away with so very long ago, when CPUs were getting faster at such a rate that it was more cost-effective to get a new CPU than to add a math coprocessor if you needed more computational power.

The CPU companies are desperately trying to give us a reason to want to upgrade every year again, since they can't figure out how to make a CPU much faster without creating small furnaces. Unfortunately, without the code to run it, the extra floating point power of the extra logic in a GPU/CPU will go to waste, just like the extra cores in the dual-core processors they are trying to convince us are 2x as fast all the time. Fortunately, the extra "GPU" logic won't make it just 2x as fast; it'll be more like 5x (when and if you actually use it, of course, but that's a much stronger incentive to make the extra coding effort), and they can use the same logic and silicon for low-end desktops and high-end servers, which will streamline manufacturing. How they are going to convince the high-end server market to pay 10x as much for the exact same logic being put into budget laptops is my question.
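The "without the code it goes to waste" point is Amdahl's law in disguise; a quick sketch (the 5x unit speedup is the guess above, not a benchmark):

```python
def amdahl_speedup(offload_fraction: float, unit_speedup: float) -> float:
    """Overall speedup when only a fraction of the work can use the faster unit."""
    serial = 1.0 - offload_fraction
    return 1.0 / (serial + offload_fraction / unit_speedup)

# Even with GPU logic that runs its share 5x faster, the software has to be
# rewritten so most of the work can actually be routed to it:
for frac in (0.0, 0.5, 0.9):
    print(f"{frac:.0%} offloadable -> {amdahl_speedup(frac, 5.0):.2f}x overall")
```

With none of the code adapted, the extra silicon buys exactly nothing, which is the dual-core complaint restated.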

A lot of things are up in the air about this. And with AMD borrowing 2.5 billion USD to invest in a market with razor-thin margins that is rapidly dropping in retail value, I'm worried that this might be a shake-up that is bad for consumers. Intel, nVidia and AMD/ATI are all treading on thin ice (it's a concept that has been repeatedly scrapped in the past) and racing each other to provide a solution to a problem that may not even exist. Intel may be the only one with enough financial clout to ride out the storm. Why are three completely different companies all working on the exact same "new" (very old) concept at the same time and trying to get consumers excited about it ~2 years before it even comes out? This makes no sense to me.
October 25, 2006 6:54:18 PM

Quote:
I would think that AMD will first get a GPU in a Torrenza socket as an accelerator and then move to on die at 45nm.


I've seen this speculation mentioned elsewhere, I think there was even a quote from AMD suggesting this was their plan.

Quote:

We may see HTX(slot) first.

What is this "HTX slot" you speak of? Hyper Transport Xisasupercoollettertouseinanacronym Slot or something like that? I missed this one. Could you link some articles about it?

Quote:

I also wouldn't be surprised if they use Torrenza ond the Open Socket to create their CPU/GPU. Licensing Intel's bus may not provide the horsepower and creating their own FSB (with enough BW to compete with HT2) would take too long.


Yay! Open socket! This deserves way more attention than it is getting. I posted a thread about it not too long ago and it got zero replies :( Open socket is way cooler, with far more potential to give consumers good products, than any of the other crap Intel, AMD/ATI, or nVidia have been talking about. Pimp the open socket!
October 25, 2006 7:50:03 PM

There is a huge market for the CPU/GPU thing in $100/$200-type PCs for emerging markets... two thirds of the world's population is classed as emerging markets; that's huge potential there... And who knows, these markets will probably not stay emerging forever.
October 25, 2006 9:37:10 PM

Quote:
I can't imagine putting an X1950XT inside a processor die....


Can we say heat dispersion? With so little surface area, you're looking to shed quite a lot of heat very quickly. This would make water cooling mandatory... or possibly shuttle tiles??


Contextually it should be clear that I meant in a socket, not on the die. The on-die version will more than likely be just the pipelines, with the CPU controlling them.

If you look at the size of an IGP, you will see that at 65nm those would be 50 mm² or so.
October 25, 2006 10:37:38 PM

Wiki is all-knowing

Quote:
HTX and Co-processor interconnect
The issue of bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. After years without an officially recognized one, a connector designed for such expansion using a HyperTransport interface was recently introduced and is known as HyperTransport eXpansion (HTX). Using the same mechanical connector as a 16-lane PCI-Express slot, HTX allows plug-in cards to be developed which support direct access to a CPU and DMA access to the system RAM. Recently, co-processors such as FPGAs have appeared which can access the HyperTransport bus and become first-class citizens on the motherboard. Current generation FPGAs from both of the main manufacturers (Altera and Xilinx) can directly support the HyperTransport interface and have IP Cores available.

However, the existing HTX specification allows Hypertransport devices attached through HTX connectors to communicate at only a quarter of Hypertransport's full throughput, as it uses PCI-E's 16-bit connector and is downclocked to a mere 1.4GHz in spite of an earlier Samtec connector [2] supporting 32-bit, 2.8GHz operation.
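The "quarter of full throughput" in that last paragraph is simple link arithmetic; a sketch (HyperTransport is double data rate, so each clock moves two transfers per pin):

```python
def ht_link_gb_s(width_bits: int, clock_ghz: float) -> float:
    """Per-direction HyperTransport bandwidth: width * clock * 2 (DDR) / 8 bits/byte."""
    return width_bits * clock_ghz * 2 / 8

htx_slot = ht_link_gb_s(16, 1.4)  # HTX connector as described in the quote
full_ht  = ht_link_gb_s(32, 2.8)  # the 32-bit, 2.8 GHz Samtec connector mentioned

print(f"HTX slot: {htx_slot:.1f} GB/s per direction")
print(f"Full HT:  {full_ht:.1f} GB/s per direction")
print(f"Ratio:    1/{full_ht / htx_slot:.0f}")
```

Half the width at half the clock gives exactly a quarter of the throughput, matching the quoted limitation.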

November 3, 2006 5:11:03 PM

I don't expect heat would be that big a deal; the die would be larger, and perhaps the entire package would be a little larger.

This would remove the northbridge strain and shorten the video bus down to nothing. It could potentially create a significantly faster integrated video solution while decreasing overall system heat and shrinking the motherboard significantly.

I'm a little worried about what AMD will do now that they are combined with ATI, however, depending on whether AMD decides to start isolating itself.

AMD could ensure that its chipsets only support its processors and its video cards, or provide decent updates and optimizations for its own parts only.

It could be very good, but it could be very bad for anyone using Intel processors, or Nvidia video...
November 3, 2006 5:26:40 PM

We all need to remember that even AMD/ATI admits that the graphics/CPU chip would be a low-end setup. It would by no means be for us people here at the THG Forumz who talk about overclocking everything. For example, it would NOT be:

FX-62 + Radeon 1950


It would more likely be:

Low end graphics + Sempron
November 3, 2006 5:42:18 PM

I have only read the first post, so apologies if I'm going over old ground.

I can't really see that it's much to get excited about. My guess is it will be aimed at budget systems with low graphics performance requirements, or at the ultra high end for the few who like to burn money. Imagine trying to keep up with Nvidia. A new high-cost CPU every 3 months or so, anybody?
November 3, 2006 5:43:48 PM

Eventually they are talking about doing it for everything, though, just as they have combined certain pieces of hardware in the past.
November 3, 2006 5:45:51 PM

Quote:
We all need to remember that even AMD/ATI admits that the graphics/CPU chip would be a low-end setup. It would by no means be for us people here at the THG Forumz who talk about overclocking everything. For example, it would NOT be:

FX-62 + Radeon 1950


It would more likely be:

Low end graphics + Sempron


I would have to agree with you. IMO it will do little to measure up to another manufacturer's "Fusion" product, made by Diamond way back when. Am I the only one who remembers the Diamond Fusion card (the first of the 2D/3D cards)?
November 3, 2006 7:08:11 PM

If you are referring to the Voodoo2 cards, then no, this is nothing like that.
This is taking the video processor, which is normally integrated into the motherboard at the northbridge, and moving all of its processing into the same package as the main processor.

No, it will not likely have any high performance, but it has the potential to be significantly better than current onboard graphics while taking up less space on the mobo and decreasing power.
November 3, 2006 7:13:13 PM

One day we'll just have a PC in one huge solid block of silicon.
November 4, 2006 5:22:56 AM

Quote:
If you are referring to the Voodoo2 cards, then no, this is nothing like that.
This is taking the video processor, which is normally integrated into the motherboard at the northbridge, and moving all of its processing into the same package as the main processor.

Yes, I know that. I was not referring to the physical integration; I was referring to the "Fusion" branding. I was also not referring to the Voodoo2 cards, but to their predecessor, branded as "Fusion".


Quote:

No, it will not likely have any high performance, but it has the potential to be significantly better than current onboard graphics while taking up less space on the mobo and decreasing power.


One would certainly hope so.
November 4, 2006 8:12:14 AM

Now here's the way for AMD to recover lost ground:

True Quad, 3.6GHz, DX10 on one chip.

My Visa card number is just itchin' to be typed into newegg for that one!!!

:D 
November 4, 2006 10:33:19 PM

Quote:
One day we'll just have a PC in one huge solid block of silicon.


Like this one?



Crystal Skulls

Supposedly, they are thought to be ancient computers. :?
November 4, 2006 11:23:06 PM

Nice. Well, I can't wait to see PCs going for extreme nanotechnology. It seems gaming PCs are getting bigger and bigger in size and power consumption.
November 6, 2006 1:05:19 PM

Well, I don't mind unplugging my oven's 30-amp power connector just to plug in my computer and play BattleField 4. Go technology.
November 8, 2006 10:19:16 AM

CPU and GPU Merge – Biggest Microprocessor Evolution Since x86-64, Says AMD.

Phil Hester, chief technology officer at Advanced Micro Devices, the world's second-largest maker of central processing units, said at a conference that the integration of graphics processing units (GPUs) into central processing units will eventually allow personal computers to achieve supercomputer performance.

“Get ready for round two of the ‘attack of the killer micros’. By combining graphics processing unit (GPU) and CPU functions in heterogeneous cores, microprocessors will bring supercomputer performance to the desktop,” said Phil Hester in a keynote speech at the International Conference on Computer-Aided Design (ICCAD) in San Jose, California, reports the EETimes web-site.

The chief technologist at AMD believes that in order to achieve tremendous computing power on the desktop, central processing units (CPUs) should start utilizing a heterogeneous multi-core design, where each of the cores will be able to perform certain types of tasks very rapidly. Given that the theoretical peak power of modern GPUs is much higher than that of CPUs, it is natural to build GPUs into CPUs to increase performance.

“A step increase in microprocessor performance per watt per dollar is needed. But simply adding more homogeneous CPU cores to a baseline architecture is not good enough. The solution is to adopt a heterogeneous architecture with GPU/CPU silicon-level integration,” Mr. Hester is reported to have said.

Mr. Hester also called the integration of graphics processing engines into AMD's chips the "biggest microprocessor evolution" since the introduction of the x86-64 concept back in 1999. Advanced Micro Devices proposed 64-bit extensions to the x86 architecture seven years ago and has since managed to turn the x86-64 technology into an industry standard, which underscores how significant the idea of combining CPU and GPU is.

However, according to Mr. Hester, there are two significant design challenges in developing heterogeneous architectures (that combine CPU and GPU) – power management and memory hierarchy.
November 8, 2006 12:45:36 PM

THIEF!
The above statement is (c) Copyright 2006 by JanTech Inc. Reproduction is strictly prohibited without express written consent from the owner.
!