My thoughts on why AMD is so quiet.

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
First off, I've been gone a bit, so I apologize if this has already been discussed.

I've been thinking a lot about AMD's acquisition of ATI, and at first, for a long time, I thought it was the worst move they could have made. It seemed pure idiocy to make such a huge acquisition at a critical time, when they should have been pouring money into R&D for a new chip to beat Intel.

I see now that I was wrong.

Instead of reacting to Intel's new assault, AMD has been acting. Acting and thinking in new ways, arguably even a paradigm shift.

If AMD can take ATI's GPU technology and optimize it well enough to be used as a massively parallel execution engine, AMD has a chance to be THE supplier of major computational horsepower.

For example, if you could have a general-purpose execution engine performing at 300 Gflops, and you could connect four of them using HTT 3.0 in a 1U format, that would turn some heads, would it not? Maybe use a 65nm process to keep the heat and power requirements down.
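
To put rough numbers on it: four 300-Gflop engines in one 1U box would be 1.2 Tflops peak. Here is a toy sketch of the kind of embarrassingly parallel work I mean, written in plain C with host threads standing in for the four hypothetical engines (the engine count and the 300-Gflop figure are just my assumptions from above, not anything AMD has announced):

#include <pthread.h>
#include <stdio.h>

#define ENGINES 4            /* four hypothetical 300-Gflop engines in the 1U box */
#define N (1 << 20)          /* one million elements */

static float x[N], y[N];
static const float a = 2.5f;

struct slice { size_t begin, end; };

/* Each "engine" runs y = a*x + y over its own slice; the slices share
   nothing, so the work scales with the number of engines until memory
   bandwidth gets in the way. */
static void *saxpy_slice(void *arg)
{
    struct slice *s = arg;
    for (size_t i = s->begin; i < s->end; i++)
        y[i] = a * x[i] + y[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[ENGINES];
    struct slice part[ENGINES];

    for (size_t i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    for (int e = 0; e < ENGINES; e++) {
        part[e].begin = (size_t)e * N / ENGINES;
        part[e].end   = (size_t)(e + 1) * N / ENGINES;
        pthread_create(&tid[e], NULL, saxpy_slice, &part[e]);
    }
    for (int e = 0; e < ENGINES; e++)
        pthread_join(tid[e], NULL);

    /* Peak for the imagined box: 4 engines x 300 Gflops = 1.2 Tflops. */
    printf("y[0] = %.1f (expect 4.5), peak = %d x 300 = %d Gflops\n",
           y[0], ENGINES, ENGINES * 300);
    return 0;
}

The point is that the slices are completely independent, which is exactly the kind of workload GPU-style hardware eats for breakfast.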

I suspect that AMD might be trying to use 4x4 as a smokescreen to conceal their REAL next move with ATI's technology. I believe AMD means to move most of their resources out of the consumer/enthusiast space and into this new area of computing. They have a REAL chance of completely dominating, and dominating in ways that Intel may or may not fully realize yet. In fact AMD might have a shot at crushing Sun and IBM and Cray in terms of raw deployable computing horsepower.

I do wonder though if AMD has tipped their hand a little too early. I don't suppose it matters too much one way or the other. It will however be interesting to me to see if this is conjecture on my part or if it is what's really going on behind AMD's closed doors.

If I had a chance to take over the computing world in this fashion, I know I would be all over it too.
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
Very true, but what I'm thinking about will take more time.

It will take a while for AMD to figure out how to adapt GPU tech for efficient parallel computing and then shrink it to 65nm.
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
Thank you for your reply JumpingJack.

You touched on a great deal of the rest of the thoughts I was having, especially about the Cell processor.

I didn't know that Intel was heading in the same direction either.

Heh, sorry, I do get excited from time to time about what I see.

Do you think mixed-type chips will really be big? What I was writing about was pure massively parallel execution in large clusters or workstations. General-purpose stuff. Do you see mixed-type chips (encoding, audio, and video circuits on one die or motherboard) filtering down to consumers?

Definitely a lot for me to think about.
 

darkstar782

Distinguished
Dec 24, 2005
1,375
0
19,280

Ohhhh, yeah --- heterogeneous multicore processors are where the industry is headed. IBM/Sony have pretty much demonstrated (and are selling) a prototype (Cell). Both Rattner (Justin Rattner of Intel) and Fred Weber (formerly of AMD) have touted the Cell as a design ahead of its time. They are right; this is the direction the microprocessor is going.

I am fuzzy, though, on how product differentiation will work... the CPU found its utility in being multi-purpose. But if one divides the CPU into parallel modules, then the strength of the chip derives from the weighted mix of the processing cores you throw in... for example, say AMD makes a CPU/GPU/multimedia video-crunching chip with 16 cores.

One unit might look like:
6 CPU, 6 GPU, 4 Multimedia (balanced)

Another might look like:
3 CPU, 10 GPU, 3 Multimedia (3D/Graphics heavy)

Etc. etc.

The CPU becomes 'non' multipurpose, and this restricts my choices as a consumer/user/enthusiast to pick and select the best of the best. If it is all wrapped up in one package, how can I selectively add 'GPU strength'?

I think this is where Torrenza technology will play a huge huge role.

Jack

Much as I agree with you, Jack, the enthusiasts are going to have hours of fun deciding whether 3 CPUs and 11 GPUs is better than 4 CPUs and 10 GPUs, or 5 CPUs and 9 GPUs, etc....


Heh.... I foresee an era where there is no "best" CPU; they all have advantages in different tasks. Hell, they all have advantages in different games, some of which are more CPU-intensive and some of which are more graphics-intensive.
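
Heh, and you could even put a toy model behind those arguments. A quick back-of-the-envelope in C (the core counts are the ones above; the per-game CPU/graphics splits and the simple bottleneck scoring are entirely made up for illustration):

#include <stdio.h>

/* A hypothetical part: some cores are CPU-type, some GPU-type. */
struct mix { const char *name; int cpu_cores; int gpu_cores; };

/* Toy bottleneck model: a game needs a certain share of CPU work and a
   certain share of graphics work; whichever pool is scarcer relative to
   its share limits the overall score.  Units are arbitrary. */
static double score(struct mix m, double cpu_share, double gfx_share)
{
    double cpu_limit = m.cpu_cores / cpu_share;
    double gfx_limit = m.gpu_cores / gfx_share;
    return cpu_limit < gfx_limit ? cpu_limit : gfx_limit;
}

int main(void)
{
    struct mix parts[] = {
        { "3 CPU + 11 GPU", 3, 11 },
        { "4 CPU + 10 GPU", 4, 10 },
        { "5 CPU + 9 GPU",  5,  9 },
    };
    /* Two imaginary games: one CPU-heavy, one graphics-heavy. */
    double games[][2] = { { 0.60, 0.40 }, { 0.25, 0.75 } };

    for (int g = 0; g < 2; g++) {
        printf("game %d (%.0f%% CPU / %.0f%% graphics):\n",
               g + 1, games[g][0] * 100, games[g][1] * 100);
        for (int p = 0; p < 3; p++)
            printf("  %-16s score %.1f\n", parts[p].name,
                   score(parts[p], games[g][0], games[g][1]));
    }
    return 0;
}

Run it and the CPU-heavy game prefers the 5+9 part while the graphics-heavy one prefers 4+10, which is pretty much the "no single best CPU" situation you describe.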
 

BGP_Spook

Distinguished
Mar 20, 2006
150
0
18,680
I don't think we will see parallel processing spread quite as widely as you appear to be expecting.

I think future CPUs will function in a more hierarchical parallel way than a purely parallel one.
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
I see what you're saying, Jack, and I agree that the technology is heading towards unification. Heh, I guess it's kind of like the recent unification of shaders and whatnot on graphics cards.

GPUs on the CPU die, though. If a GPU on the CPU die (or even in a CPU-type socket) is what you're talking about, there must be some kind of new high-speed memory solution. Any GPU would be somewhat crippled by the limits of a motherboard bus, would it not? And would the costs of such a high-speed, bandwidth-rich system be feasible?
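
To put a rough figure on the gap I mean, compare what a GPU sharing the system's dual-channel DDR2 would get against the dedicated memory on a current high-end card. The clock and bus-width numbers below are ballpark 2006 figures from memory, so treat them as illustrative only:

#include <stdio.h>

/* Peak bandwidth = effective transfer rate (MT/s) * bus width (bits) / 8,
   reported in GB/s. */
static double gbps(double mtps, int bus_bits)
{
    return mtps * 1e6 * bus_bits / 8.0 / 1e9;
}

int main(void)
{
    /* Dual-channel DDR2-800: 800 MT/s on an effective 128-bit path. */
    double system_mem = gbps(800, 128);

    /* High-end 2006 graphics card: ~2000 MT/s GDDR on a 256-bit bus. */
    double card_mem = gbps(2000, 256);

    printf("shared system memory : %5.1f GB/s\n", system_mem);   /* ~12.8 */
    printf("dedicated card memory: %5.1f GB/s\n", card_mem);     /* ~64.0 */
    printf("ratio                : %4.1fx\n", card_mem / system_mem);
    return 0;
}

Roughly 12.8 GB/s against roughly 64 GB/s, call it a 5x gap, which is why I think a shared motherboard bus would starve any serious GPU.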

Unless there's something I don't know about (most likely), the fastest application of AMD's acquisition of ATI is to produce a multi-gigaflop-capable processor as a drop-in solution for something like Torrenza. That is what I got so excited over earlier.

If AMD can get that done, maybe they can squeeze some more performance towards the consumer without raising prices too high.

Thanks for all the posts, very interesting reading for me.
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
Thanks Jack. :)

Another thing that I was looking at is that if it doesn't take too much optimization to turn a GPU into a drop-in parallel execution unit, then AMD could offer the enterprise market a powerful computing solution for a fraction of the cost of a large cluster.

From recent articles, ATI and nVidia GPUs seem like a nearly ready-made powerhouse.
 
Now if only they could reduce the heat output from the GPUs. But as has probably been said before, that might just happen with the eventual progression to smaller processes.
Excellent thread by the way. I just wonder what kind of avatar would match such an interesting name. I too hope to see you stick around.
 

quantumsheep

Distinguished
Dec 10, 2005
2,341
0
19,790
You started an interesting topic... with your post count, you are relatively new... please hang around and chat for a while, build that count; it is good to have you in the Forum.

Jack

Agreed, it's nice to see someone who can post something of intellectual value (unlike myself :D). It's nice to read, and it gives me a good insight into what is actually happening within the CPU industry.
 

sailer

Splendid

I've read a bit about the idea of the multi-core CPU/GPU/multimedia thing, and it sets me to wondering what will happen to the enthusiast market.

From what I've read, Vista will be locked down, with only one hardware change allowed before either a new copy of Vista has to be bought or a special allowance is made by Microsoft to reactivate it. If that's really the case, system modifications will drop to almost zero. A person would pretty much be stuck with running the computer as it was first bought or built. This multi-purpose CPU/GPU thing would fit right in with that, since once the choice was made, it would be locked in until a whole new multi-processor was bought. One Vista license, one multi-CPU, no real changes allowed.

The enthusiast would be limited to making his or her selection at the time of the build, which encourages buying the best and most expensive parts available, because the option of buying an average part and upgrading later would be lost. Yeah, good for the companies selling hardware, but bad for the enthusiast.

Just some further thoughts. Don't know if it will happen or not, or is close in some way but not in another. All we can do is just wait and see what happens.
 

No_Frills

Distinguished
Nov 19, 2006
2
0
18,510
I read in Maximum PC that integrating a CPU and GPU on a single die would take enough time that both the CPU and the GPU could be partially obsolete by release, so "Fusion"-like products would be relegated to the mid-range market. That, and the convenience of replacing a discrete CPU or GPU independently of the other would be lost.
I wonder if we'll eventually get MP motherboards with two Socket 1207 sockets, so you could drop in an AMD CPU and an ATI GPU and connect them with an HT bus; that way they could still be offered as discrete products.
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
Having one socket for the CPU and one for the GPU is something I've been mulling over too. But even with HT 3.0, I believe the memory-bandwidth-hungry GPU would quickly overwhelm the bus.

Maybe if they put in a dedicated GPU memory bus? I don't know. I'm waiting (impatiently) to see what the hardware manufacturers come up with.

The Fusion products would probably serve a great many low to mid range consumer systems quite well though.
 

aggiebroz

Distinguished
Mar 26, 2006
22
0
18,510
This is a great thread, actually worth reading. Thanks, everybody, for the great information, thoughts, and ideas on where computers are heading.

Now here is my two cents' worth on where I think they might eventually go with the new chips. I think they will get to the point where there is one die made up of hundreds of multipurpose calculation units. Then, depending on the tasks being done, the system will dynamically allocate some units to doing what our CPUs currently do, some to what our GPUs currently do, and some to whatever else the system needs to process, kind of like how the new GPUs have unified shader units. And to help with power consumption and heat, they can shut down units that are not needed under light load.

I think this would be better than having to choose a chip with x CPU cores and y GPU cores, as somebody said earlier, because your system would use the processing units optimally for the tasks at hand. Of course, such a chip would need some kind of extremely high-bandwidth bus to the rest of the system. I also think there should be one large pool of extremely fast memory that gets dynamically shared between system and graphics purposes.
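
Here is roughly what I picture, sketched in C. The unit count, the roles, and the demand numbers are all invented; it is only meant to show the dynamic-allocation and power-gating idea:

#include <stdio.h>

#define UNITS 16

enum role { POWERED_DOWN, CPU_WORK, GPU_WORK, MEDIA_WORK };

/* Assign identical units to whatever the current load asks for and
   gate off the rest.  The demand values are in units of work per tick. */
static void allocate(enum role unit[UNITS],
                     int cpu_demand, int gpu_demand, int media_demand)
{
    int u = 0;
    while (u < UNITS && cpu_demand-- > 0)   unit[u++] = CPU_WORK;
    while (u < UNITS && gpu_demand-- > 0)   unit[u++] = GPU_WORK;
    while (u < UNITS && media_demand-- > 0) unit[u++] = MEDIA_WORK;
    while (u < UNITS)                       unit[u++] = POWERED_DOWN;
}

static void show(const char *when, const enum role unit[UNITS])
{
    const char *label[] = { "off", "cpu", "gpu", "media" };
    printf("%-8s: ", when);
    for (int u = 0; u < UNITS; u++)
        printf("%s ", label[unit[u]]);
    printf("\n");
}

int main(void)
{
    enum role unit[UNITS];

    /* Light desktop load: most units are powered down to save heat. */
    allocate(unit, 2, 1, 0);
    show("idle-ish", unit);

    /* Gaming load: the same silicon is re-assigned, nothing sits idle. */
    allocate(unit, 4, 10, 2);
    show("gaming", unit);
    return 0;
}

The appeal is that the "mix" stops being a purchase-time decision and becomes a per-millisecond scheduling decision.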
 

kitchenshark

Distinguished
Dec 30, 2005
377
0
18,780
And to help with power consumption and heat, they can shut down units that are not needed under light load.

Now that definitely makes sense. Don't GPUs and CPUs nowadays do this already by themselves? Or do you need drivers and such, like Cool 'n' Quiet?
 

joset

Distinguished
Dec 18, 2005
890
0
18,980

This is where I believe the trend is heading, though not until a few "Vistas" away... :wink:


Cheers!
 

sailer

Splendid
They said the same thing about Windows XP with upgrading components.

I can't remember quite the same concerns being voiced about XP. The big thing I remember is the complaint about having to register it with Microsoft for it to work, and that people who weren't hooked up to the internet (and sometimes the internet wasn't even available to them if they wanted it) couldn't do so. They therefore had to keep buying Win 98 or buy Win 2000.

The only time I experienced trouble upgrading under XP was when I replaced the CPU, GPU, and motherboard at the same time. I called Microsoft and they gave me a key to reactivate it without asking any questions. That's far different from the Vista policy that allows one upgrade and only one upgrade.
 

stephen1960

Distinguished
Oct 20, 2006
40
0
18,530

Microsoft has relaxed the two-computer restriction in the Vista license, and a good thing they did too; how would we have worked around that? I have been buying retail versions of their operating systems so that I wouldn't have to worry about how many hardware changes I went through.

Regarding future processor/GPU architectural hypotheses, there is, I think, a limit to how much one chip should hold, although that limit will keep increasing. I am interested in the communications among ICs on the motherboard, and that is why AMD is getting me excited. I don't know what Intel is up to, as I haven't been checking on their plans. I would like to see things like proximity communication between ICs, stacking them on top of each other, and really fast communication between isolated ICs. Along with this comes the division of work.

New concepts must eventually be implemented. Consider that all the major components on our motherboards are silicon chips. What is the next step? Surely there is something better. This kind of stuff keeps me up at night thinking.

I need to do more research in this area before I can offer useful dialogue, but you get the idea. How much longer can they continue to shrink their processes? Already they have begun to spread out; that's why we are seeing multicore CPUs.
 

Chil

Distinguished
Feb 20, 2006
192
0
18,680
With all this talk of upwards of dozens of unified cores on a single die, I know someone must have thought about the logistics of such a CPU/GPU hybrid. How much further can Moore's Law go before (I think someone posted the specifics in another topic) every circuit has to be literally a few atoms in size?

Eventually, if computing power is going to keep increasing at its current rate, we'll have to find some revolutionary new way of making the devices themselves. Again, someone else in a different topic mused about light and lasers and such, but I'm no physicist, so I can't go into it any further. Happy second-guessing the CPU industry; I'll be here playing my games and not worrying about what I'll be buying 10 years from now.
 

casewhite

Distinguished
Apr 11, 2006
106
0
18,680
If what I heard and saw at SC'06 in Tampa this week is any indication, we are on the cusp of a major shift in the computing structure across the board. http://sc06.supercomputing.org/ The two hottest topics not on the agenda were the IBM Roadrunner design for Los Alamos Labs (http://www.supercomputingonline.com/article.php?sid=11894
http://www.hpcwire.com/hpc/893353.html) and GPU acceleration by ATI and nVidia (http://www.hpcwire.com/hpc/1092927.html). ATI had hardware on site; nVidia had a press release for their design, called CUDA.


The most interesting thing about the IBM Cell design is that it uses only single-core Opterons, cutting down on the power and heat load. Los Alamos has been constrained because the electrical infrastructure of PSNM in the Santa Fe area isn't strong enough to handle a new major supercomputer, so they have been dependent on an intranet link to Sandia in Albuquerque. Roadrunner looks to have only 40% of the power requirement of the Cray XT4s going in at Oak Ridge and Lawrence Berkeley. All will be 1-petaflop-plus installations. What this means is that with a single-core CPU it should be possible to build a 500-gigaflop desktop in the next twelve months that uses less power than the Intel Core 2, which gets about 42 gigaflops peak. The concept of the Cell+ is the other joker in the IBM deck.

"On average, Cell is eight times faster and at least eight times more power-efficient than current Opteron and Itanium processors, despite the fact that Cell's peak double-precision performance is fourteen times slower than its peak single-precision performance. If Cell were to include at least one fully usable pipelined double-precision floating-point unit, as proposed in the Cell+ implementation, these performance advantages would easily double"
The current design called QS-20 by IBM has been operational for about six months now at the University of Manchester in England.

On the GPU acceleration front, ATI had previously announced their design for physics acceleration about a year ago. http://ati.amd.com/technology/crossfire/physics/index.html The Folding at Home project at Stanford is giving real life to this as well as being the test bed. http://folding.stanford.edu/FAQ-ATI.html As Jack mentioned earlier, AMD has excelled at co-opting or acquiring the engineering and physics talent they need to develop a design. Stanford will essentially deliver the working drivers for Stream to ATI/AMD. My guess, based on Friday's announcement, is that TACC in Austin will be the major test bed for this type of design. http://www.tacc.utexas.edu/research/users/features/track2.php This should also be a very power-efficient design. TSUBAME in Japan is essentially the prototype for this, and the results there with acceleration have been very promising. http://techreport.com/onearticle.x/10993 The big advance at TACC, as I understand it, is the use of HTT 3.0, which would allow direct access to system memory by the stream processor. That would eliminate the need for on-board memory without a significant latency penalty, and with considerable power savings. Congratulations, Jack, you will have a spiffy new toy to play with.

As to many cores on one chip, manufacturing costs will probably kill that. Each additional core you add to the die doubles the reject rate, so it is much more efficient to use a single core and multiple sockets. That is what Cray has done with the XT3 and the SeaStar chip design. It is much easier and cheaper to design single-core chips and multi-socket motherboards; the XT3 is the proof of that. Jack has on occasion in the past raised the issue of wafer yields, and a single-core design minimizes that risk (there is a toy yield calculation after the links below that shows the direction of the effect). The other advantage is that it simplifies inventory and maintenance, thereby cutting operating costs. For the customer, the advantage is that they can design the machine to be optimal for their use, Lego-style. The accelerators raise the computing power so much that more than two accelerators will give you 1 teraflop of computing power, which is a hundred times the power of most desktop chips today. DARPA has figured out that once you get beyond that point you are in HPC country, and other issues that are not shown in HPL (Linpack) begin to make more difference than pure speed. http://www.taborcommunications.com/hpcwire/hpcwireWWW/04/0625/107896.html
http://www.hpcwire.com/hpc/724626.html http://www.gcn.com/print/25_5/40021-1.html
http://www.hpcchallenge.org/
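
On the yield point, here is the toy calculation I mentioned. It uses the standard independent-defect (Poisson) picture with made-up numbers, so it shows the direction of the effect rather than any real fab's reject rate:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double defects_per_cm2 = 0.5;   /* made-up defect density */
    double core_area_cm2   = 1.0;   /* made-up area of one core's worth of die */

    /* Poisson yield model: yield = exp(-defect_density * die_area), so
       doubling the die area squares the good-die fraction. */
    for (int cores = 1; cores <= 4; cores *= 2) {
        double yield = exp(-defects_per_cm2 * core_area_cm2 * cores);
        printf("%d-core die: yield %2.0f%%, rejects %2.0f%%\n",
               cores, 100.0 * yield, 100.0 * (1.0 - yield));
    }
    return 0;
}

With these numbers a 1-core die yields about 61%, a 2-core die about 37%, and a 4-core die about 14%, which is the intuition behind preferring single-core chips and more sockets, whatever the exact factor turns out to be.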

AMD doesn't need to be beating the press drums. They have all the business they can handle and then some.
 

stephen1960

Distinguished
Oct 20, 2006
40
0
18,530
This is intuitively the direction I think we need to be going, so this news gets me excited.

Now then, I wonder if motherboard architecture will progress in a way that helps this evolution along, you know, a better way to connect everything together. I think this is the key to our next ultimate architecture.

Bah, I keep looking for news on this but haven't found any. Other than your post, of course.
 

intelamduser

Distinguished
Feb 19, 2004
183
0
18,680
I just had to respond when I saw the heading. The AMD nutcases on this board will have a tantrum at this.

If AMD's new CPUs were any better or faster than their existing CPUs, and they were going to be released in the next two weeks, all of the testing websites would already have been given samples and test data would be flowing.

When Intel stated a few months ago that they had a new chip coming, almost all of the technical websites received parts to review the new design, and information was released. When that information showed the new Intel design to be much faster as well as more economical, the worms crawled out of the woodwork to denounce the tests and data.

Now those same worms are claiming that AMD is so smart that they are just holding back, and that in a couple of weeks they will release products to consumers that outperform everything available today, even though no test site has received a sample.

For those who believe this I have a lot of property for sale on the moon, send me a check for $50,000 an acre for all you want and I will send you some pictures and a deed.
 
