
My thoughts on why AMD is so quiet.

November 19, 2006 3:38:11 PM

First off, I've been gone a while, so I apologize if this has already been discussed.

I've been thinking a lot about AMD's acquisition of ATI, and at first, for a long time, I thought it was the worst move they could have made. It seemed pure idiocy to make such a huge acquisition at a critical time when they should have been pouring money into R&D for a new chip to beat Intel.

I see now that I was wrong.

Instead of reacting to Intel's new assault, AMD has been acting. Acting and thinking in new ways. Even so far as to amount to a paradigm shift.

If AMD can take ATI's GPU technology and optimize it well enough to be used as a massively parallel execution engine, AMD has a chance to be THE supplier of major computational horsepower.

For example, if you could have a general-purpose execution engine performing at 300 Gflops, and could connect four of them using HTT 3.0 in a 1U format, that would turn some heads, would it not? Maybe use a 65nm process to keep the heat and power requirements down.
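(Just to put rough numbers on the idea, here's a back-of-the-envelope sketch. The 300 Gflops per engine is the hypothetical figure above; the fully populated 42U rack is my own assumption for illustration, not anything AMD has announced.)

# Back-of-the-envelope math for the hypothetical 1U box above.
# All figures are assumptions for illustration, not AMD specs.
gflops_per_engine = 300        # hypothetical per-engine throughput
engines_per_1u = 4             # linked over HTT 3.0, as suggested above
units_per_rack = 42            # a typical 42U rack, fully populated

per_box = gflops_per_engine * engines_per_1u       # 1,200 Gflops per 1U
per_rack_tflops = per_box * units_per_rack / 1000  # ~50 Tflops per rack

print(f"{per_box} Gflops per 1U box, ~{per_rack_tflops:.1f} Tflops per rack")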

I suspect that AMD might be trying to use 4x4 as a smokescreen to conceal their REAL next move with ATI's technology. I believe AMD means to move most of their resources out of the consumer/enthusiast space and into this new area of computing. They have a REAL chance of completely dominating, and dominating in ways that Intel may or may not fully realize yet. In fact AMD might have a shot at crushing Sun and IBM and Cray in terms of raw deployable computing horsepower.

I do wonder, though, if AMD has tipped their hand a little too early. I don't suppose it matters too much one way or the other. It will, however, be interesting to see whether this is just conjecture on my part or what's really going on behind AMD's closed doors.

If I had a chance to take over the computing world in this fashion, I know I would be all over it too.


November 19, 2006 3:43:15 PM

In December AMD is supposed to release its 65nm stuff.
November 19, 2006 3:48:27 PM

Very true, but what I'm thinking about will take more time.

It will take a while for AMD to figure out how to adapt GPU tech for efficient parallel computing and then shrink it to 65nm.
November 19, 2006 4:22:22 PM

Thank you for your reply JumpingJack.

You touched a great deal on the rest of my thoughts that I was having, especially about the Cell processor.

I didn't know that Intel was heading in the same direction either.

Heh, sorry, I do get excited from time to time about what I see.

Do you think mixed-type chips will really be big? What I was writing about was pure execution in massive parallel in large clusters or workstations. General-purpose stuff. Do you see mixed-type chips (encoding, audio, video circuits on one die or MB) filtering down to consumers?

Definitely a lot for me to think about.
November 19, 2006 5:17:42 PM

Quote:
Thank you for your reply JumpingJack.

You touched a great deal on the rest of my thoughts that I was having, especially about the Cell processor.

I didn't know that Intel was heading in the same direction either.

Heh, sorry, I do get excited from time to time about what i see.

Do you think mixed-type chips will really be big? What I was writing about was pure execution in massive parallel in large clusters or workstations. General purpose stuff. Do you see mixed type chips (encoding, audio, video circuits on one die or MB) filtering down to the consumers?

Definitely a lot for me to think about.


Ohhhh, yeah --- heterogeneous multicore processors are where the industry is headed. IBM/Sony pretty much demonstrated (and are selling) a prototype (Cell). Both Rattner (Justin Rattner of Intel) and Fred Weber (formerly of AMD) touted the Cell as a design ahead of its time. They are right; this is the direction the microprocessor is going.

I am fuzzy, though, on how product differentiation will work... the CPU found its utility in being multi-purpose. But if one divides the CPU into parallel modules, then the strength of the chip derives from the weighted mix of the processing cores you throw in... for example, say AMD makes a CPU/GPU/multimedia video-crunching CPU with 16 cores.

One unit might look like:
6 CPU, 6 GPU, 4 Multimedia (balanced)

Another might look like:
3 CPU, 10 GPU, 3 Multimedia (3D/Graphics heavy)

Etc. etc.

The CPU becomes 'non' multipurpose, and this restricts my choices as a consumer/user/enthusiast to pick and select the best of the best. If it is all wrapped up in one package, how can I selectively add 'GPU strength'?

I think this is where Torrenza technology will play a huge huge role.

Jack
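(A toy way to see why the mix matters, purely as a sketch. The workload splits and the "worst-served work type limits the job" rule below are invented for illustration; a real chip would not behave this simply.)

# Toy model of the point above: a chip's usefulness depends on how its
# fixed core mix lines up with the workload. All numbers are invented.
def relative_throughput(mix, workload):
    # Work of one type only runs on cores of that type; the worst-served
    # work type limits the whole job (a deliberately crude model).
    return min(mix[t] / workload[t] for t in workload if workload[t] > 0)

balanced = {"cpu": 6, "gpu": 6, "media": 4}
graphics_heavy = {"cpu": 3, "gpu": 10, "media": 3}

office = {"cpu": 0.8, "gpu": 0.1, "media": 0.1}
gaming = {"cpu": 0.2, "gpu": 0.7, "media": 0.1}

for name, mix in [("balanced", balanced), ("graphics-heavy", graphics_heavy)]:
    print(name,
          "office:", round(relative_throughput(mix, office), 1),
          "gaming:", round(relative_throughput(mix, gaming), 1))

The balanced mix wins the office workload and the graphics-heavy mix wins the gaming one, which is exactly why "which chip is best?" stops having a single answer.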

Much as I agree with you Jack, the enthusiasts are going to have hours of fun deciding whether 3 CPUs and 11 GPUs are better than 4 CPUs and 10 GPUs, or 5 CPUs and 9 GPUs, etc....


Heh.... I foresee an era where there is no "best" CPU; they all have advantages in different tasks. Hell, they all have advantages in different games, some of which are more CPU intensive and some of which are more graphics intensive.
November 19, 2006 5:50:17 PM

I don't think we will see parallel processing spread quite the way you appear to be thinking.

I think future CPUs will function in a more hierarchical parallel way than a purely parallel one.
November 19, 2006 6:45:16 PM

I see what you're saying Jack. And I agree that the technology is heading towards unification. Heh, guess kinda like the recent unification of shaders and whatnot on graphics cards.

GPUs on the CPU die, though. If GPU on CPU die is what you're talking about (or even in a CPU-type socket), there must be some kind of new high-speed memory solution. Any GPU would be somewhat crippled by the limits of a motherboard bus, would it not? And would the cost of such a high-speed, bandwidth-rich system be feasible?

Unless there's something I don't know about (most likely), the fastest application of AMD's acquisition of ATI is to produce a multi-Gflop-capable processor as a drop-in solution for something like Torrenza. This is what I got so excited over earlier.

If AMD can get that done, maybe they can squeeze some more performance towards the consumer without raising prices too high.

Thanks for all the posts, very interesting reading for me.
November 19, 2006 7:19:46 PM

Thanks Jack. :) 

Another thing that I was looking at is that if it doesn't take too much optimization to turn a GPU into a drop-in parallel execution unit, then AMD could offer the enterprise market a powerful computing solution for a fraction of the cost of a large cluster.

From recent articles, ATI and nVidia GPUs seem a nearly ready-made powerhouse.
November 19, 2006 7:32:50 PM

Now if only they could reduce the heat output from the GPUs. But as has probably been said before, that might just happen with the eventual progression to smaller processes.
Excellent thread by the way. I just wonder what kind of avatar would match such an interesting name. I too hope to see you stick around.
November 19, 2006 7:35:06 PM

Quote:
You started an interesting topic... with your post count, you are relatively new... please hang around and chat for a while, build that count, it is good to have you in the Forum.

Jack


Agreed, nice to see someone who can post something of intellectual value (unlike myself :D ). It makes for good reading, and gives me some insight into what is actually happening within the CPU industry.
November 19, 2006 7:43:40 PM

Quote:

I am fuzzy though on how product differentiation will work... the CPU found it's utility in being multi-purpose. But if one divides the CPU into parallel modules, then the strength of the chip derives from the weighted mix of the processing cores you throw in... for example, say AMD makes a CPU/GPU/Multimedia Video crunching CPU with 16 cores.

One unit might look like:
6 CPU, 6 GPU, 4 Multimedia (balanced)

Another might look like:
3 CPU, 10 GPU, 3 Multimedia (3D/Graphics heavy)

Etc. etc.

The CPU becomes 'non' multipurpose and this restricts my choices as a consumer/user/enthusiast to pick and select the best of the best. If it all is wrapped up in one package, how can I selectively add 'GPU strength'.

I think this is where Torrenza technology will play a huge huge role.

Jack


I've read a bit about the idea of the multi-core CPU/GPU/multimedia thing, and it sets me to wondering what will happen to the enthusiast market.

From what I've read, Vista will be locked down with only one change allowed before either a new copy of Vista has to be bought or a special allowance is made by Microsoft to reactivate it. If that's really the case, system modifications will drop to almost zero. A person would pretty much be stuck with running the computer as it was first bought or built. This multi-purpose CPU/GPU thing would fit right in with that, since once the choice was made, it would be locked in until a whole new multi-processor was bought. One Vista license, one multi-CPU, no real changes allowed.

The enthusiast would be limited to making his/her selection at the time of build, thus encouraging him to buy the best, most expensive part available, because the idea of buying a part that was only average and then upgrading later would be lost. Yeah, good for the companies selling hardware, but bad for the enthusiast.

Just some further thoughts. Don't know if it will happen or not, or is close in some way but not in another. All we can do is just wait and see what happens.
November 19, 2006 8:47:57 PM

They said the same thing about Windows XP with upgrading components.
November 19, 2006 8:51:18 PM

I read in Maximum PC that integrating a CPU and GPU on a single die would take enough time that it could leave both the CPU and the GPU partially obsolete by launch, so that "Fusion"-like products would be relegated to the mid-range market. That, and the convenience of replacing a discrete CPU or GPU independent of the other would be lost.
I wonder if we'll eventually get MP motherboards with two Socket 1207 sockets, so you could drop in an AMD CPU and an ATI GPU and then connect them with an HT bus; that way they could still be offered as discrete products.
November 19, 2006 9:00:13 PM

Having one socket for the CPU and one for the GPU is something I've been mulling over too. But even with HT 3.0, I believe the memory-bandwidth-hungry GPU would quickly overwhelm the bus.
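(Rough numbers behind that worry, as a sketch. The bandwidth figures below are approximate and from memory, so treat them as order-of-magnitude only.)

# Rough comparison of a socketed GPU's HT link bandwidth vs. a graphics
# card's local memory bandwidth. Figures are approximate, from memory.
ht3_link_bytes = 4          # a 32-bit HT 3.0 link, each direction
ht3_gtps = 5.2              # ~2.6 GHz, double data rate
ht3_gb_per_s = ht3_link_bytes * ht3_gtps        # ~20.8 GB/s per direction

gddr_bus_bytes = 32         # 256-bit memory bus on a high-end card
gddr_eff_ghz = 2.0          # ~2 GHz effective GDDR4
gddr_gb_per_s = gddr_bus_bytes * gddr_eff_ghz   # ~64 GB/s

print(f"HT 3.0 link: ~{ht3_gb_per_s:.1f} GB/s per direction")
print(f"High-end card local memory: ~{gddr_gb_per_s:.0f} GB/s")

Even being generous with the HT figures, the gap is a few times over, which is why a dedicated memory path for the GPU keeps coming up.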

Maybe if they put in a dedicated GPU memory bus? I don't know. I'm waiting (impatiently) to see what the hardware manufacturers come up with.

The Fusion products would probably serve a great many low to mid range consumer systems quite well though.
November 19, 2006 9:05:18 PM

This is a great thread, actually worth reading. Thanks, everybody, for the great information, thoughts, and ideas on where computers are heading.

Now here is my two cents' worth on where I think they might go with the new chips eventually. I think they will get around to one die/core made up of hundreds of multipurpose calculation units. Then, depending on the tasks being done, the system will dynamically allocate some units to doing what our CPUs currently do, some to what our GPUs currently do, and some to whatever else the system needs to process, kind of like how the new GPUs have unified shader units. And to help with power consumption and heat, they can shut down units that are not needed under light load.

I think this would be better than having to choose a chip with x CPU cores and y GPU cores, as somebody said earlier, because your system will use the processing units optimally for the tasks at hand. Of course, such a chip would need some kind of extremely high-bandwidth bus to the rest of the system. I also think there should be one large pool of extremely fast memory that gets dynamically used for both system and graphics purposes.
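(A crude sketch of that dynamic-allocation idea in software terms. The unit counts and demand numbers below are invented, and real hardware would do this in the scheduler or firmware, not like this.)

# Crude sketch of a pool of identical units assigned to whatever kind of
# work is pending, with idle units powered down. Purely illustrative.
def allocate(total_units, demand):
    # demand: desired units per task type; scale down if oversubscribed.
    asked = sum(demand.values())
    scale = min(1.0, total_units / asked) if asked else 0.0
    plan = {task: int(want * scale) for task, want in demand.items()}
    plan["powered_down"] = total_units - sum(plan.values())
    return plan

# Light desktop load: most units sleep.
print(allocate(128, {"cpu_style": 8, "gpu_style": 16}))
# Heavy game load: everything lights up, shared proportionally.
print(allocate(128, {"cpu_style": 40, "gpu_style": 160}))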
November 19, 2006 9:10:30 PM

Quote:
And to help with power consumption and heat, they can shut down units that are not needed under light load.


Now that definitely makes sense. Don't GPUs and CPUs nowadays do this already by themselves? Or do you need drivers and such, like Cool 'n' Quiet?
November 19, 2006 9:41:05 PM

I just want one to play with....
November 19, 2006 10:30:26 PM

Quote:
this is a great thread actually worth reading, thanks everybody for great information, thoughts, and ideas on where computers are heading.

Now here is my two cents worth of where i think they might go with the new chips eventually. I think they will get around to where there is one die/core made up of 100s of multipurpose calculation units. then depending on the tasks being done, the system will dynamically allocate some units to doing what our cpu's currently do and some to what our gpu's currently do and some to whatever else the system needs to process. kinda like how the new gpu's have unified shader units. And to help with power consumption and heat they cane shut down units that are not needed under light load.

I think this would be better than having to choose a chip with x cpu cores and y gpu cores as somebody said earlier, because your system will use the processing units optimally for the tasks at hand. Of course such a chip would need some kind of extremely high bandwidth bus to the rest of the system. also i think there should be one large group of extremely fast memory that will get dynamically used for both system and graphics purposes.


This is where I believe the trend leads, though not until a few "Vistas" from now... :wink:


Cheers!
November 19, 2006 11:38:41 PM

Quote:
They said the same thing about Windows XP with upgrading components.


I can't remember quite the same concerns being voiced about XP. The big thing I remember is the complaint about having to register it with Microsoft for it to work, and that people who weren't hooked up to the internet (sometimes the internet wasn't even available to them) couldn't. Therefore they had to keep buying Win 98 or Win 2000.

The only time I experienced trouble upgrading under XP was when I replaced the CPU, GPU, and motherboard at the same time. I called Microsoft and they gave me a key to reactivate it without asking any questions. That's far different from the Vista policy that allows one upgrade and only one upgrade.
November 20, 2006 12:07:06 AM

Quote:

I am fuzzy though on how product differentiation will work... the CPU found it's utility in being multi-purpose. But if one divides the CPU into parallel modules, then the strength of the chip derives from the weighted mix of the processing cores you throw in... for example, say AMD makes a CPU/GPU/Multimedia Video crunching CPU with 16 cores.

One unit might look like:
6 CPU, 6 GPU, 4 Multimedia (balanced)

Another might look like:
3 CPU, 10 GPU, 3 Multimedia (3D/Graphics heavy)

Etc. etc.

The CPU becomes 'non' multipurpose and this restricts my choices as a consumer/user/enthusiast to pick and select the best of the best. If it all is wrapped up in one package, how can I selectively add 'GPU strength'.

I think this is where Torrenza technology will play a huge huge role.

Jack


I've read a bit about the idea of the multi core cpu/gpu/multimedia thing and it sets me to wondering what will happen to the enthusiast market.

From what I've read, Vista will be locked down with only one change allowed before either a new copy of Vista has to be bought or a special allowance is made by Microsoft to reactivate it. If that's really the case, system modifications will drop to almost zero. A person would pretty much be stuck with running the computer as it was first bought or built. This multi purpose cpu/gpu thing would fit right in with that, since once the choice was made, it would be locked in until a whole new multi processor was bought. One Vista license, one multi cpu, not real changes allowed.

The enthusiast would be limited to making his/her selection at the time of build, thus encouraging him to buy the best, most expensive available, because the idea of buying a part that was only average and then upgrading later would be lost. Yeah, good for the companies sellign hardware, but bad for the enthusiast.

Just some further thoughts. Don't know if it will happen or not, or is close in some way but not in another. All we can do is just wait and see what happens.

Microsoft has relaxed the two-computer restriction in the Vista license, and a good thing they did, too; how would we have worked that? I have been buying retail versions of their operating systems so that I wouldn't have to worry about how many hardware changes I went through.

Regarding future processor/GPU architectural hypotheses, there is, I think, a limit to how much one chip should hold, although that limit will keep increasing. I am interested in the communication among ICs on the motherboard, and that is why AMD is getting me excited. I don't know what Intel is up to, as I haven't been checking on their plans. I would like to see things like proximity communication between ICs, stacking them on top of each other, and really fast communication between isolated ICs. Along with this comes the division of work.

New concepts must eventually be implemented. Consider that all the major components on our motherboards are silicon chips. What is the next step? Surely there is something better. This kind of stuff keeps me up at night thinking.

I need to do more research in this area to offer useful dialogue at this point, but you get the idea. How much longer can they continue to shrink their processes? Already they have begun to spread out; that's why we are seeing multicore CPUs.
November 20, 2006 2:27:59 AM

With all this talk of upwards of dozens of unified cores on a single die, I know someone must have thought about the logistics of such a CPU/GPU hybrid. How much further can Moore's Law go before (I think someone posted the specifics in another topic) every circuit has to be literally a few atoms in size?

Eventually, if computing power is going to increase at the rate it's going now, we'll have to find some revolutionary new way of making the devices themselves. Again, someone else in a different topic mused about light and lasers and such, but I'm no physicist so I can't go into it any further. Happy second-guessing the CPU industry, because I'll be here playing my games and not being worried about what I'll be buying 10 years from now.
November 20, 2006 2:36:49 AM

If what I heard and saw at SC'06 in Tampa this week is any indication, we are on the cusp of a major shift in the computing structure across the board. http://sc06.supercomputing.org/ The two hottest topics not on the agenda were the IBM Roadrunner design for Los Alamos Labs ( http://www.supercomputingonline.com/article.php?sid=118...
http://www.hpcwire.com/hpc/893353.html ) and GPU acceleration by ATI and nVidia ( http://www.hpcwire.com/hpc/1092927.html ). ATI had hardware on site; nVidia had a press release for their design, called CUDA.


The most interesting thing about the IBM Cell design is that it only uses single-core Opterons, cutting down on the power and heat load. Los Alamos has been constrained because the electrical infrastructure of PSNM in the Santa Fe area isn't strong enough to handle a new major supercomputer, so they have been dependent on an intranet link to Sandia in Albuquerque. Roadrunner looks to have only 40% of the power requirement of the Cray XT4s going in at Oak Ridge and Lawrence Berkeley. All will be 1 petaflop+ installations. What this means is that with a single-core CPU it will be possible to build a 500 gigaflop desktop in the next twelve months that uses less power than the Intel Core 2, which gets about 42 gigaflops peak. The concept of the Cell+ is the other joker in the IBM deck.

"On average, Cell is eight times faster and at least eight times more power-efficient than current Opteron and Itanium processors, despite the fact that Cell's peak double-precision performance is fourteen times slower than its peak single-precision performance. If Cell were to include at least one fully usable pipelined double-precision floating-point unit, as proposed in the Cell+ implementation, these performance advantages would easily double"
The current design, called QS-20 by IBM, has been operational for about six months now at the University of Manchester in England.

On the GPU acceleration front, ATI had previously announced their design for physics acceleration about a year ago. http://ati.amd.com/technology/crossfire/physics/index.h... The Folding at Home project at Stanford is giving real life to this as well as being the test bed. http://folding.stanford.edu/FAQ-ATI.html As Jack has mentioned earlier, AMD has excelled at co-opting or acquiring the engineering and physics talent that they need to develop a design. Stanford will essentially deliver the working drivers for Stream to ATI/AMD. My guess, based on Friday's announcement, is that TACC in Austin will be the major test bed for this type of design. http://www.tacc.utexas.edu/research/users/features/trac... This should also be a very power-efficient design. TSUBAME in Japan is essentially the prototype for this, and the results there with acceleration have been very promising. http://techreport.com/onearticle.x/10993 The big advance at TACC, as I understand it, is the use of HTT 3.0, which would allow direct access to system memory by the Stream Processor. That would eliminate the need for on-board memory without a significant latency penalty, and with a considerable power savings. Congratulations Jack, you will have a spiffy new toy to play with.

As to many cores on one chip, manufacturing costs will probably kill that. Each additional core you add to the die doubles the reject rate, so it is much more efficient to use a single core and multiple sockets. That is what Cray has done with the XT3 and the SeaStar chip design. It is much easier and cheaper to design single-core chips and multi-socket motherboards; the XT3 is the proof of that. Jack has on occasion in the past raised the issue of wafer yields. Single-core design minimizes that risk. The other advantage is that it simplifies inventory and maintenance, thereby cutting the costs of operation. For the customer, the advantage is that they can design the machine to be optimal for their use, Lego style. The accelerators raise the computing power so much that more than 2 accelerators will give you 1 teraflop of computing power, which is a hundred times the power of most desktop chips today. DARPA has figured out that once you get beyond that point you are in HPC country, and other issues that are not shown in HPL (Linpack) begin to make more difference than pure speed. http://www.taborcommunications.com/hpcwire/hpcwireWWW/0...
http://www.hpcwire.com/hpc/724626.html http://www.gcn.com/print/25_5/40021-1.html
http://www.hpcchallenge.org/
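(Back to the yield point above: the usual back-of-the-envelope way to reason about it is a defect-density model. The defect density and die area below are invented for illustration; this is not a claim about any particular fab, just the shape of the argument.)

# Classic Poisson yield estimate: yield ~ exp(-defect_density * die_area).
# Doubling the die (say, twice the cores) squares the good-die fraction.
# The numbers below are invented for illustration only.
from math import exp

defects_per_cm2 = 0.5
single_core_area_cm2 = 1.0

for cores in (1, 2, 4):
    area = single_core_area_cm2 * cores
    est_yield = exp(-defects_per_cm2 * area)
    print(f"{cores} core(s): die area {area:.0f} cm^2, est. yield {est_yield:.0%}")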

AMD doesn't need to be beating the press drums. They have all the business they can handle and then some.
November 20, 2006 3:01:34 AM

This is intuitively the direction I think we need to be going, so this news gets me excited.

Now then, I wonder if motherboard architecture will progress in such a way as to help this evolution along; you know, a better way to connect everything together. I think this is the key to our next ultimate architecture.

Bah, I keep looking for news on this but haven't found any. Other than your post, of course.
November 20, 2006 3:10:11 AM

I just had to respond when I saw the heading. The AMD nutcases on this board will have a tantrum at this.

If AMD's new cpu's were any better/faster than their existing cpu's and they are going to be released in the next two weeks, all of the testing websites would already have been given samples and test data would be flowing.

When Intel stated a few months ago that they had a new chip and it would be released, almost all of the web technical sites received parts to review the new design, and information was released. When the information showed that the new Intel design was much faster as well as more economical, the worms crawled out of the woodwork to denounce the tests and data.

Now those same worms are stating that AMD is so smart that they are just holding back, and that in a couple of weeks they will release products to the consumer which will outperform what is available today, even though no test site has received a sample.

For those who believe this I have a lot of property for sale on the moon, send me a check for $50,000 an acre for all you want and I will send you some pictures and a deed.
November 20, 2006 3:31:57 AM

Quote:
I just had to respond when I saw the heading. The AMD nutcases on this board will have a tantrum at this.

If AMD's new cpu's were any better/faster than their existing cpu's and they are going to be released in the next two weeks, all of the testing websites would already have been given samples and test data would be flowing.

When Intel stated a few months ago they had a new chip and it would be released, almost all of the web technical sites recieved parts to review the new design, information was released. When the information was shown that the new Intel design was much faster as well as more economical the worms crawled out of the woodwork to denounce the tests and data.

Now those same worms are stating that AMD is so smart that they are just holding back and they have products which will be released to the consumer which will outperform what is available today in a couple of weeks even though no test site has recieved a sample.

For those who believe this I have a lot of property for sale on the moon, send me a check for $50,000 an acre for all you want and I will send you some pictures and a deed.


This would explain the rumors I have heard of the base on the backside of the moon.

Thanks for the heads-up though, we will keep an eye open for these worms. Should they show up, perhaps we should send you an email? That way you could swiftly deal with them with your sharp wit; surely they will be dispensed with easily.
November 20, 2006 11:00:18 AM

Quote:
I just had to respond when I saw the heading. The AMD nutcases on this board will have a tantrum at this.

If AMD's new cpu's were any better/faster than their existing cpu's and they are going to be released in the next two weeks, all of the testing websites would already have been given samples and test data would be flowing.

When Intel stated a few months ago they had a new chip and it would be released, almost all of the web technical sites recieved parts to review the new design, information was released. When the information was shown that the new Intel design was much faster as well as more economical the worms crawled out of the woodwork to denounce the tests and data.

Now those same worms are stating that AMD is so smart that they are just holding back and they have products which will be released to the consumer which will outperform what is available today in a couple of weeks even though no test site has recieved a sample.

For those who believe this I have a lot of property for sale on the moon, send me a check for $50,000 an acre for all you want and I will send you some pictures and a deed.

This man speaketh the truth. Maybe just not the part about the moon base. :?
November 20, 2006 11:48:54 AM

Doesn't someone claim to actually own property on the moon?
November 20, 2006 12:22:29 PM

Ahhhh... thread hijacking at its finest, and here I thought this thread was still on course...

Anyways, back to the topic at hand: I still only see Fusion and integrated CPU/GPUs for the low-to-mid-range and mobile computing markets. Here is some more info on AMD's Fusion:

http://www.hardocp.com/news.html?news=MjI0OTUsLCxobmV3c...

Key emphasis on this slide:

http://www.hardocp.com/image.html?image=MTE2MzY5MTQ0MUJ...

So most people would benefit from this tech, but we will still be able to get what works best for ourselves. How it will be in 5 years, I don't know, but for a while to come I still see computer components most likely remaining discrete.

On the higher-end server market for parallel computing... eh, I don't want to speculate on that too much. Casewhite has a point about getting more rejects the more cores you put on, but I'm fairly certain Intel and AMD can refine their manufacturing processes enough that there will be very few outright failures; at worst most cores just run at slower speeds (just like they do now). So I don't really see that as a problem. The Roadrunner architecture looks really nice, and I'm looking forward to seeing Cell perform in a task it was designed for.
November 20, 2006 12:24:56 PM

Quote:
Firts off, I've been gone a bit so I apologize if this has already been discussed.

I've been thinking a lot about AMD's aquisition of ATI and at first, for a long time, I thought it was the worst move they could have made. It seemed pure idiocy to make such a huge acquisition at a critical time where they should have been pouring money int R&D for a new chip to beat Intel.

I see now that I was wrong.

Instead of reacting to Intel's new assault, AMD has been acting. Acting and thinking in new ways. Even so far to be considered a paradigm.

If AMD can take ATI's GPU technology and optomize it well enough to be used as a massively parallel execution engine AMD has a chance to be THE supplier of major computational horsepower.

For example, if you could have a general purpose execution engine performing at 300Gflops, and you can connect four of them using HTT 3.0 in a 1U format, that would turn some heads, would it not? Maybe use 65nm uarch to keep the heat and power requirements down.

I suspect that AMD might be trying to use 4x4 as a smokescreen to conceal their REAL next move with ATI's technology. I believe AMD means to move most of their resources out of the consumer/enthusiast space and into this new area of computing. They have a REAL chance of completely dominating, and dominating in ways that Intel may or may not fully realize yet. In fact AMD might have a shot at crushing Sun and IBM and Cray in terms of raw deployable computing horsepower.

I do wonder though if AMD has tipped their hand a little too early. I don't suppose it matters too much one way or the other. It will however be interesting to me to see if this is conjecture on my part or if it is what's really going on behind AMD's closed doors.

I know if I had a chance to take over the computing world in this fashion, I know I would be all over it too.



I think you're absolutely right. AMD's strategy since Opteron came out has been to break into the "big iron" space. They have owned the DB transaction world for two years and are now #2 on the supercomputer list.

I also thought acquiring ATi was a mistake but only because of nVidia's role in building AMD on the desktop.

I knew that they would use ATi for their chipsets and parallel processing on the server side. I have no doubt that the new "Stream Processor" will be the first Torrena part.

I expect that AMD 8000/8131 will be upgraded with ATi tech and provide a 16 socket chipset for Barcelona.

They know that they can't beat Intel's marketing machine or sheer pocket size, but that they can compete on platform and CPU tech. Barcelona is due to sample next month and I wouldn't be surprised if it went out with an "ATi" chipset.

They let nVidia have the desktop and enthusiast markets and ATi brands will start out in Bulldozer and Barcelona - though there is a 4x4 chipset planned for Jan.

They may also have delayed 4x4 to wait for the X2 Brisbane, which is due Dec 5. 65nm will do wonders for AMD, as 65nm chips on 300mm wafers will yield at least 3-4x what they get from Fab 30 with 90nm chips on 200mm wafers.

I would bet that the first million are already sold. Maybe the whole supply until Jan/Feb.
November 20, 2006 12:32:51 PM

I'll bet you the first million are not sold, and I'll raise that bet that the first 500,000 will be sitting in AMD's warehouses because C2D is still better price-wise and performance-wise...

And uh... you mean Torrenza right?

http://en.wikipedia.org/wiki/Torrenza

Torrena is a 336.5m telecommunications tower...

http://en.wikipedia.org/wiki/Torrena

AMD acquiring ATI is probably not a mistake, and either a move they had to make or a smart move; either way, they can't go wrong with it. Only the timing might have been bad, but that's been talked about for a while...

4x4... we'll just wait and see...
November 20, 2006 1:44:29 PM

Quote:
I'll bet you the first million are not sold, and I'll raise that bet that the first 500,000 will be sitting in AMD's warehouses because C2D is still better price-wise and performance-wise...

And uh... you mean Torrenza right?

http://en.wikipedia.org/wiki/Torrenza

Torrena is a 336.5m tower of telecommunications...

http://en.wikipedia.org/wiki/Torrena

AMD acquiring ATI is probably not a mistake, and a either a move they had to do, or a smart move, either way, they can't go wrong with it, only the timing might have been bad, but thats been talked about for a while...

4x4... we'll just wait and see...


Don't be an a-hole. Of course I meant Torrenza. It was a typo. Analysts are reporting that AMD Live is outselling Viiv by like a few percent. AMD Sempron still owns the retail space. They still showed growth in Q3.
At this point, when 65nm ramps to half capacity, AMD will be able to undercut Intel. With the same bills (excluding ATi, as they provide revenue) they will save nearly 75% on a wafer. If Chartered ramps 65nm quickly (they have already been qualified), Intel will be hard pressed to keep the price war going.

Even when they EOL NetBurst there will still be a lot left to compete with Core2.

4x4 will be an upgrade to FX without quad core. Period! It's due within a week, so yes, we will see. It won't make each core faster, but it WILL remove most/all background code from the 3rd and 4th CPU.
November 20, 2006 2:06:58 PM

Quote:
doesn't someone claim to actually own property on the moon?


Yes, a man went and claimed it as his own, since no one else had done so before. He now sells parts of it for a few dollars.
November 20, 2006 2:07:24 PM

Quote:

I am fuzzy though on how product differentiation will work... the CPU found it's utility in being multi-purpose. But if one divides the CPU into parallel modules, then the strength of the chip derives from the weighted mix of the processing cores you throw in... for example, say AMD makes a CPU/GPU/Multimedia Video crunching CPU with 16 cores.

One unit might look like:
6 CPU, 6 GPU, 4 Multimedia (balanced)

Another might look like:
3 CPU, 10 GPU, 3 Multimedia (3D/Graphics heavy)

Etc. etc.

The CPU becomes 'non' multipurpose and this restricts my choices as a consumer/user/enthusiast to pick and select the best of the best. If it all is wrapped up in one package, how can I selectively add 'GPU strength'.

I think this is where Torrenza technology will play a huge huge role.

Jack


I've read a bit about the idea of the multi core cpu/gpu/multimedia thing and it sets me to wondering what will happen to the enthusiast market.

From what I've read, Vista will be locked down with only one change allowed before either a new copy of Vista has to be bought or a special allowance is made by Microsoft to reactivate it. If that's really the case, system modifications will drop to almost zero. A person would pretty much be stuck with running the computer as it was first bought or built. This multi purpose cpu/gpu thing would fit right in with that, since once the choice was made, it would be locked in until a whole new multi processor was bought. One Vista license, one multi cpu, not real changes allowed.

The enthusiast would be limited to making his/her selection at the time of build, thus encouraging him to buy the best, most expensive available, because the idea of buying a part that was only average and then upgrading later would be lost. Yeah, good for the companies sellign hardware, but bad for the enthusiast.

Just some further thoughts. Don't know if it will happen or not, or is close in some way but not in another. All we can do is just wait and see what happens.

That's what was going to happen, but Microsoft looked at the outcry and changed it: instead of being able to upgrade only once before getting locked out, you can upgrade your PC as many times as you want, and if you're building an entirely new system you can move Vista from your old computer to your new one; you just can't keep it on the old one.
November 20, 2006 2:19:35 PM

The consumer market is where the money is made; AMD doesn't have the resources to leave that market.

And IBM is a company even Intel has its difficulties competing with; AMD wouldn't stand a chance with their Opterons or ATI's Radeons. Even if you CrossFired two X1950s you'd still be far away from a POWER5.
November 20, 2006 3:07:38 PM

Quote:
Consumer market is where the money is made, AMD doesnt have the resources to leave that market.

And IBM is a company even Intel has its difficulties competing with, AMD wouldnt stand a chance with their Opterons or ATis Radeons. If you would XFire 2 X1950s youd still be far away from a POWER5.


Actually, the money is made in the server/business arena; the premiums are much higher. A person on average will not spend over 2 grand on a system, while companies do so very regularly. And that's just the desktops.

As far as a unified system goes, I was hoping we might see a "glue together" solution. As in, manufacture each core separately, then add them together however you want. It would at least keep yields up. I don't think we're heading in that particular direction though.

I also think that buffered DDR3 DIMMs could probably adequately supply an on-chip GPU. If you've tried to OC a modern graphics card's memory, it doesn't make much difference. The limit is in the core clock.

We've come to the point where process shrinks have come far enough to make putting these things together on a chip actually reasonable, along with adequate memory bandwidth to supply them both. If you think about AMD's architecture, they're set up for independent core memory access with separate HT links. It is only reasonable to replace a core with a GPU and have its own memory already there and upgradable!

Plus, you won't have to spend $400 on a new CPU and $500-600 on a new graphics card if it is all on the CPU die. It doesn't necessarily make the dies more expensive, so it will be cheaper to upgrade and we can do it more often. Upgrade the CPU/GPU at the same time for $500-600, then maybe drop in some more RAM later if you want (for either the CPU or the GPU, or both).
November 20, 2006 3:09:35 PM

My thoughts are that AMD is branching out way too much, way too fast. They should have a stable base of income while they make inroads into other places. I would say their base used to be the desktop, but with Opteron they're making huge inroads into the server market. Desktop-wise, I don't know if they're going to abandon it to Intel (probably not, but I'm not sure how competitive they will be there), and 4x4 is only for enthusiasts and the workstation level at best.

If AMD starts attacking IBM's place instead of Intel... I actually don't know how that will turn out...
November 20, 2006 3:18:48 PM

Been googling for Power5 flop ratings, but haven't been able to find any. :/  I would love a link if you have one.

What I'm looking forward to is the possibility of what AMD might have planned. If AMD can get 300 Gflops into a general-purpose chip, I'm sure it will outperform a great deal of current hardware. AMD has that kind of potential with a single chip. And I do realize it's only potential at the moment.

True, AMD's base is in the consumer market; they of course can't pull out. True, Intel and IBM have a great deal of share in different markets. If AMD can transition GPUs into parallel processing, though, they could have an easier time of penetrating deeper into IBM, Cray, Sun, and Intel territory. Especially if they can make the transition quickly.

And to reply to intelamduser. I agree, the processors must be nowhere near ready because as pointed out, no one has any to test.
November 20, 2006 3:30:50 PM

Quote:
Consumer market is where the money is made, AMD doesnt have the resources to leave that market.

And IBM is a company even Intel has its difficulties competing with, AMD wouldnt stand a chance with their Opterons or ATis Radeons. If you would XFire 2 X1950s youd still be far away from a POWER5.

AMD will definitely stick to what they do best. IBM has always been really good at using old tech and making it relevant again. First they made a killing at making incompatible systems communicate for large and medium corporations, and now they are using big iron to virtually offset corporate data loads during peak times. AMD and IBM will never directly compete in the future again, if historical trends continue.
November 20, 2006 3:59:38 PM

Quote:
I'll bet you the first million are not sold, and I'll raise that bet that the first 500,000 will be sitting in AMD's warehouses because C2D is still better price-wise and performance-wise...

And uh... you mean Torrenza right?

http://en.wikipedia.org/wiki/Torrenza

Torrena is a 336.5m tower of telecommunications...

http://en.wikipedia.org/wiki/Torrena

AMD acquiring ATI is probably not a mistake, and a either a move they had to do, or a smart move, either way, they can't go wrong with it, only the timing might have been bad, but thats been talked about for a while...

4x4... we'll just wait and see...


Don't be an a-hole. Of course I meant Torrenza. It was a typo. Analysts are reporting that AMD Live is outselling Viiv by like a few percent. AMD Sempron still owns the retail space. They still showed growth in Q3.
At this point, when 65nm ramps to half capacity, AMD will be able to undercut Intel. With the same bills -excluding ATi as they provide revenue - they will save nearly 75% on a wafer. If Chartered ramps 65nm quickly ( they have already been qualified) Intel will be hard pressed to keep the price war going.

Even when they EOL NetBurst there will still be a lot left to compete with Core2.

4x4 will be an upgrade to FX without quad core. Period! It's due within a week, so yes we will see. it won't make each core faster but it WILL remove most/all background code from the 3rd and 4th CPU.

Hey, wake up and smell the coffee. Intel did the "Leap ahead" slogan and AMD thought it was funny to put out "leap beyond". That never happened for AMD. Sorry to say, AMD has made the giant mad. Consumers are going to benefit greatly from great new products from Intel. It will be a while before AMD catches up.
November 20, 2006 4:13:47 PM

Quote:
I'll bet you the first million are not sold, and I'll raise that bet that the first 500,000 will be sitting in AMD's warehouses because C2D is still better price-wise and performance-wise...

And uh... you mean Torrenza right?

http://en.wikipedia.org/wiki/Torrenza

Torrena is a 336.5m tower of telecommunications...

http://en.wikipedia.org/wiki/Torrena

AMD acquiring ATI is probably not a mistake, and a either a move they had to do, or a smart move, either way, they can't go wrong with it, only the timing might have been bad, but thats been talked about for a while...

4x4... we'll just wait and see...


Don't be an a-hole. Of course I meant Torrenza. It was a typo. Analysts are reporting that AMD Live is outselling Viiv by like a few percent. AMD Sempron still owns the retail space. They still showed growth in Q3.
At this point, when 65nm ramps to half capacity, AMD will be able to undercut Intel. With the same bills -excluding ATi as they provide revenue - they will save nearly 75% on a wafer. If Chartered ramps 65nm quickly ( they have already been qualified) Intel will be hard pressed to keep the price war going.

Even when they EOL NetBurst there will still be a lot left to compete with Core2.

4x4 will be an upgrade to FX without quad core. Period! It's due within a week, so yes we will see. it won't make each core faster but it WILL remove most/all background code from the 3rd and 4th CPU.

Hey wake and smell the coffee. Intel did the leap ahead slogan and AMD thought it was funny to put out the leap beyond. That never happened for AMD. Sorry to say, AMD has made the giant mad. The consumer are going to benefit greatly in great new product from Intel. It will be awhile before AMD will catch up.


Was there a point to said tirade?
November 20, 2006 4:28:44 PM

Quote:
As far as a unified system goes, I was hoping we might see a "glue together" solution. As in, manufacture each core seperately, then add them together however you want. It would at least keep yields up. I don't think we're heading that particular direction though.


You see, placing a GPU on a chip with a CPU will displace the additional CPU cores that could otherwise be there. Is this advantageous? For low-power computing, sure, I guess, but it is not the answer, I think, to real advances in computing. I believe motherboard integration is where the advance is. More cores per die and a move towards desktop supercomputing is where I see the advance going. Think HyperTransport and derivatives.

With however many cores per die the state of the industry can produce to its economic advantage, if a company can use that technology and combine 2, 4, 8, 16 of such packages on a motherboard inexpensively, that company will have the most powerful product. The problem has been that affording a single-socket motherboard is all most consumers have been able to do. There has not been enough effort in building motherboards that can use a large number of CPUs. Of course, most of us can't afford more than one CPU anyhow, so I am hoping that will change also, though I don't know how right now.
November 20, 2006 4:33:50 PM

Quote:
Hey wake and smell the coffee. Intel did the leap ahead slogan and AMD thought it was funny to put out the leap beyond. That never happened for AMD. Sorry to say, AMD has made the giant mad. The consumer are going to benefit greatly in great new product from Intel. It will be awhile before AMD will catch up.


I don't believe Intel is angry with AMD, not at all. Intel is much too mature for that and understands that AMD is a good thing for its consumers and for itself as a company. Of course, Intel wants to be in the lead, and that's OK; they deserve to be.

Now that Intel is up there kickin', I just worry about AMD. If I were running Intel, I would send some engineers over to AMD to check up on them. AMD going out of business would be bad for Intel.
November 20, 2006 5:07:40 PM

Quote:
I'll bet you the first million are not sold, and I'll raise that bet that the first 500,000 will be sitting in AMD's warehouses because C2D is still better price-wise and performance-wise...

And uh... you mean Torrenza right?

http://en.wikipedia.org/wiki/Torrenza

Torrena is a 336.5m tower of telecommunications...

http://en.wikipedia.org/wiki/Torrena

AMD acquiring ATI is probably not a mistake, and a either a move they had to do, or a smart move, either way, they can't go wrong with it, only the timing might have been bad, but thats been talked about for a while...

4x4... we'll just wait and see...


Don't be an a-hole. Of course I meant Torrenza. It was a typo. Analysts are reporting that AMD Live is outselling Viiv by like a few percent. AMD Sempron still owns the retail space. They still showed growth in Q3.
At this point, when 65nm ramps to half capacity, AMD will be able to undercut Intel. With the same bills -excluding ATi as they provide revenue - they will save nearly 75% on a wafer. If Chartered ramps 65nm quickly ( they have already been qualified) Intel will be hard pressed to keep the price war going.

Even when they EOL NetBurst there will still be a lot left to compete with Core2.

4x4 will be an upgrade to FX without quad core. Period! It's due within a week, so yes we will see. it won't make each core faster but it WILL remove most/all background code from the 3rd and 4th CPU.

Hey wake and smell the coffee. Intel did the leap ahead slogan and AMD thought it was funny to put out the leap beyond. That never happened for AMD. Sorry to say, AMD has made the giant mad. The consumer are going to benefit greatly in great new product from Intel. It will be awhile before AMD will catch up.


Was there a point to said tirade?

Yup! I just think you need to cool it and stop trying to know it all with your stupid posting.
November 20, 2006 5:26:32 PM

I have not read everything in this thread but I plan to. With that being said...

I suspect that the operating system will soon be the bottleneck to specialized core architectures. I also suspect that there is a grand new opportunity for the operating system vendors out there trying to get a leg up on Microsoft.

Some questions come to my mind as a programmer. How do you write programs for a multi-core processor when all cores are not identical? If they were all identical then your task is easy and the operating system does the bulk of the dirty work for you, so long as you code your threads of execution correctly.

Could a programmer write code that targets a specific core? Does the programmer target a specialized instruction set that then targets the core(s) that implement that instruction set? Does the operating system try to make the programmer's job easier by choosing the core based on the code being executed (a more dynamic approach to the previous question)? Basically, it all comes down to how the operating system reveals the capabilities of these systems to the programmer, if at all. None of these are easy questions, and none of them fit well into the standard Windows way of thinking about programming. If the application writer cannot choose how to best utilize the cores for their application, then how will this be any different from using a "standard" multi-core today with Windows (or Linux, or whatever)?

Just some thoughts. Now back to reading the thread...
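(For what it's worth, a sketch of the "target a specific core" option using facilities that exist today. The CORE_MAP of logical CPU ids to core types is entirely hypothetical; the affinity call itself is a real, Linux-only API.)

# Sketch: pin a worker to the cores best suited to its task, assuming the
# OS exposed which logical CPUs are which kind of unit. CORE_MAP is made up.
import os
from concurrent.futures import ProcessPoolExecutor

CORE_MAP = {                 # hypothetical mapping of core types to CPU ids
    "general": {0, 1, 2, 3},
    "simd":    {4, 5, 6, 7},
}

def run_on(core_type, fn, *args):
    # Restrict this worker process to one core type, then run the work.
    os.sched_setaffinity(0, CORE_MAP[core_type])   # Linux-only
    return fn(*args)

def crunch(n):               # stand-in for a numeric kernel
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(pool.submit(run_on, "simd", crunch, 1_000_000).result())

That only covers placement, of course; it says nothing about cores with different instruction sets, which is the harder half of the question.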
November 20, 2006 5:33:51 PM

That's one of the reasons why the PS3 is a biatch to program for, and why development costs are so high. Programmers are going to have plenty of trouble programming for 2 cores, much less many... but hopefully once it hits the mainstream, there will be better kits and development environments that can help them take over...

http://www.anandtech.com/tradeshows/showdoc.aspx?i=2868...

That's a pretty interesting read on how they plan to multi-thread their apps. Talks about some of the difficulties, benefits, etc...

Quote:
As far as console hardware goes, the engine is already running on Xbox 360, with support for six simultaneous threads. The PC platform and Xbox 360 are apparently close enough that getting the software to run on both of them does not require a lot of extra effort. PS3 on the other hand.... The potential to support PS3 is definitely there, but it doesn't sound like Valve has devoted any serious effort into this platform as of yet. Given that the hardware isn't available for consumer purchase yet, that might make sense. The PS3 Cell processor does add some additional problems in terms of multithreading support. First, unlike Xbox 360 and PC processors, the processor cores available in Cell are not all equivalent. That means they will have to spend additional effort making sure that the software is aware of what cores can do what sort of tasks best (or at all as the case may be). Another problem that Cell creates is that there's not a coherent view of memory. Each core has its own dedicated high-speed local memory, so all of that has to be managed along with worrying about threading and execution capabilities. Basically, PS3/Cell takes the problems inherent with multithreading and adds a few more, so getting optimal utilization of the Cell processor is going to be even more difficult.


So yeah... read and enjoy.
November 20, 2006 5:56:45 PM

Quote:
I have not read everything in this thread but I plan to. With that being said...

I suspect that the operating system will soon be the bottleneck to specialized core architectures. I also suspect that there is a grand new opportunity for the operating system vendors out there trying to get a leg up on Microsoft.

Some questions come to my mind as a programmer. How do you write programs for a multi-core processor when all cores are not identical? If they were all identical then your task is easy and the operating system does the bulk of the dirty work for you, so long as you code your threads of execution correctly.

Could a programmer write code that targets a specific core? Does the programmer target a specialized instruction set that then targets the core(s) that implement that instruction set? Does the operating system try to make the programmer's job easier by choosing the core based on the code being executed (a more dynamic approach to the previous question)? Basically, it is all a question of how does the operating system reveal the capabilities of these systems to the programmer, if at all. None of these are easy questions and none of them fit well in the standard Windows way of thinking about programming. If the application writer cannot choose how to best utilize the cores for their application, then how will this be any different than using a "standard" multi-core today with Windows (or Linux, or whatever)?

Just some thoughts. Now back to reading the thread...


Extensions, man, lol. You know the deal.

#if defined(CORE_XXXX)
#  define USE_BLAH_BLAH   // You know you love it.
#elif defined(CORE_YYYY)
   /* ... */
#endif

Bah, I don't remember the syntax.

Let me see, how to work with 64 CPUs. I suppose we must involve the programmer, but there are things that can be done without the programmer, yes? Programming languages must deal with it, I think, allowing the programmer to easily use many threads. I did this using Unix extensions, once upon a time. Lol, you have to know what "reentrancy" means. Nevertheless, some operations must be a sequence of events. How do you get around that? I don't know. Do it a different way, I guess.

Perhaps a software or firmware layer should be placed above the CPUs that functions as a virtual processor which divides and distributes the work. Sounds very difficult, and it would involve executing code ahead of the time it might actually be needed; otherwise there would not be enough work to go around. But there you go, an idea to put 64 processors to work without involving the programmer.
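(A very rough sketch of the easy half of that layer: fanning independent work out to however many processors exist. The inherently sequential operations worried about above are exactly what this does not solve.)

# Minimal "divide and distribute" sketch: spread independent work items
# across all available processors. Purely illustrative.
import multiprocessing as mp

def work_item(x):
    return x * x              # stand-in for one independent chunk of work

if __name__ == "__main__":
    with mp.Pool() as pool:   # one worker per CPU by default
        results = pool.map(work_item, range(64))
    print(sum(results))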
November 20, 2006 6:06:35 PM

Jumpingjack :) 
Your reasoned analysis "hits the nail squarely on the head"! :lol: