
AMD Says That CPU Core Race Can't Last Forever

Source: Tom's Hardware US | 96 comments

128 cores? Sadly, no.

It wasn't too many CPU generations ago that the main focus of performance was clock speed. The public perception was that the more megahertz (or gigahertz), the better.

Now the competitive metric appears to be the number of cores inside a processor. Donald Newell, AMD's chief technology officer for servers, believes that this race to an ever greater number of cores cannot continue.

Interestingly enough, Newell knows all about the numbers game as he previously spent 16 years at Intel, during which time there was the very clock-happy Pentium 4 generation.

"We thought we were going to build a 10GHz chip. It was only when we discovered that they would get so hot it would melt through the Earth, that we decided not to do that," Newell said, jokingly, in an interview with IDG.

Now it's about who has more cores, but Newell doesn't see that continuing indefinitely.

"There will come an end to the core-count wars. I won't put an exact date on it, but I don't myself expect to see 128 cores on a full-sized server die by the end of this decade," said Newell. "It is not unrealistic from a technology road map, but from a deployment road map, the power constraints that people expect [servers] to live in."

While we haven't seen the end of core count growth, the next big competitive ground could be integrating specialized functions into the processor.

"There is nothing to prevent us from putting specific features on die that enable more efficient processing," Newell said. "So you should expect to see heterogeneous architectures emerge where we identify functions that are broadly useful but don't necessarily map into an instruction that you'd want to add directly into the x86 architecture."

Both AMD and Intel are integrating graphics components into their processors, but AMD's Fusion solution promises to be the more capable offering with greater power available for GPGPU functions.

Source: IDG.

Comments
  • 26
    sinsear , October 18, 2010 7:43 AM
    I dunno, but I always thought that this was kinda obvious....
    As they say, "All good things must come to an end".
  • 3
    N.Broekhuijsen , October 18, 2010 7:46 AM
    Aww... and I was looking forward to the years when the general population no longer knew the term GHz, only cores.

    "My PC has 512 Core Clusters!" :p 

    Not surprising in the end though

  • -3
    Anonymous , October 18, 2010 7:50 AM
    He may be right. Quantum processors are the future. Let's see who can make one for the public, AMD or Intel. You never know; by the time you look back, probably everything you have now will be junk. Always happy to see new technology for a better tomorrow.
  • 13
    Gin Fushicho , October 18, 2010 7:53 AM
    So they're finally going to eliminate old instructions that aren't used? I'd love to see them get more efficient. It seems programmers don't program for multiple cores, or at least very few do.
  • 15
    Stardude82 , October 18, 2010 8:15 AM
    Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development. A lot of technologies just didn't work out, like supersonic flight, and right now huge efforts have to be made for incremental improvements in efficiency.
  • 12
    peterkidd , October 18, 2010 8:19 AM
    So what is next? As Donald states, architecture, but is that all? The end of current computer tech has been predicted over and over. Every decade the tech community forecasts the fall of Moore's law, and yet it continues to flourish. A new technology will take the old's place, but only when current technology is sufficiently exhausted. Think vacuum tubes --> transistors --> microcontrollers. The evolution of technology will never be stagnant. It hasn't been in the past, so why would it be now?
  • 27
    micr0be , October 18, 2010 8:33 AM
    The day we reach a brick wall in technology, robots will be there to take it down.
  • 9
    dragoon190 , October 18, 2010 8:47 AM
    Quote:
    Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development.

    /Off topic

    The reason passenger jets don't travel any faster now is that they would go (locally) supersonic if they flew any faster, and that would cause all sorts of noise and regulation problems (think Concorde and how it was only allowed to fly supersonic over the ocean).

    /endOffTopic

    Anyway, I don't really see CPUs going to 128 cores when the majority of programs nowadays barely utilize more than 2 cores.
  • 7
    sudeshc , October 18, 2010 8:59 AM
    It's time for the software industry to mature and make changes to fully utilize the multi-core environment; only then can we imagine increasing the core count, otherwise there is not much use for that big a number of cores.
    And from this point, it seems like it will take decades for that to happen.
  • 10
    Horhe , October 18, 2010 9:32 AM
    I prefer more speed rather than more cores. Every application benefits from increased speed, but very few applications benefit from many cores. Unfortunately, not all games can be made to benefit from an increased number of cores (turn-based strategies like Total War and Heroes of Might and Magic are an example). IMO there should be some segmentation: gaming CPUs, which focus on speed and have a maximum of 8-12 cores, and workstation CPUs that focus on many cores.
  • 3
    lukeeu , October 18, 2010 10:09 AM
    Quote:
    He may be right. Quantum processors are the future. Let's see who can make one for the public, AMD or Intel.

    Sorry, but quantum computing can't run anything like the x86 instruction set :( It can only solve some problems that can be formulated as operations on Hermitian matrices, so even most of the computer science PhDs I know won't touch it.
    Also there is a problem with the way people are designing quantum computers:
    1) Cool it down to 0 K.
    2) Let's try to make it work some of the time.
  • -2
    randomizer , October 18, 2010 10:10 AM
    Quote:
    peterkidd: A new technology will take the old's place, but only when current technology is sufficiently exhausted. Think vacuum tubes --> transistors --> microcontrollers. The evolution of technology will never be stagnant.

    Because in the past you had a few hundred computers to replace. Now you'll need the radical new technology to somehow be backwards compatible with billions of x86 machines, because no business is going to change over its entire IT framework in a day.

    At some point it will happen, though. The MOSFET is far too power-hungry to be viable in the long-term future. Power consumption is what caused every other major technological transition of the most basic component in a computer.
  • 2
    cronik93 , October 18, 2010 10:38 AM
    So should we develop completely new CPU technology?
  • 0
    alyoshka , October 18, 2010 10:44 AM
    Of course there is a limit; after all, how many cores can you fit onto a square inch of silicon? It has all the physical limitations one could possibly think of...
  • 15
    molo9000 , October 18, 2010 10:50 AM
    How about replacing x86 instead of adding more and more stuff to it?
    I'm not very familiar with x86, but is a 32-year-old instruction set still useful?
    The number of transistors in a processor has grown from 29 thousand to well over a billion in those 32 years.
  • -5
    back_by_demand , October 18, 2010 11:02 AM
    Quote:
    dragoon190: The reason passenger jets don't travel any faster now is that they would go (locally) supersonic if they flew any faster, and that would cause all sorts of noise and regulation problems.

    The reason is not regulation or noise; the Concorde fleet ran flawlessly for nearly 30 years in the lucrative trans-Atlantic market.

    The reason no one travels supersonic anymore is that the Concorde fleet was retired due to the French not keeping their runways clean, and the fact that the majority of Concorde's regular passengers were killed on 9/11.

    Virgin Atlantic offered to buy the Concorde fleet and bring it up to 21st-century specs, but the UK Government refused to issue a license. On top of that, no one has the money to design and build a new fleet of supersonic airliners, so the focus has now become increasing passenger comfort rather than reducing flight time.
  • -3
    Anonymous , October 18, 2010 11:19 AM
    I want maaany cores so every program/service can run on its own core. Program for one core and let the OS decide which core to use/is available.
  • 3
    Scott2010au , October 18, 2010 11:40 AM
    They've already effectively eliminated the old instructions: the microcode is internally RISC-like and externally CISC for almost all, maybe even all, x86 and x64 CPU architectures.
  • 6
    back_by_demand , October 18, 2010 11:44 AM
    The whole point of multicore was not to run a separate program on each core (although that works quite nicely when your AV kicks in), because who has more than 6 serious CPU-intensive programs running at the same time?

    The idea, and rightly so, is to split a single program up between several cores to make it run faster. Great idea in principle, but where is the slew of multicore programs? ..... Silence.

    Developers, please get off your collective fat asses and write the next generation of programs that can actually utilize all this expensive hardware I already own.
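    The "split a single program up between several cores" idea discussed above can be sketched in a few lines. As a rough illustration only (the work function, chunking scheme, and names here are arbitrary stand-ins, not anything from the article), Python's standard multiprocessing module can farm chunks of one computation out to a pool of worker processes, one per core:

    ```python
    # Minimal sketch: split one computation across all available cores.
    # The work function (sum of squares over a range) is an arbitrary stand-in.
    from multiprocessing import Pool, cpu_count

    def sum_of_squares(bounds):
        """Worker: compute the partial sum for one chunk [lo, hi)."""
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def parallel_sum_of_squares(n, workers=None):
        """Split the range [0, n) into chunks and sum them in parallel."""
        workers = workers or cpu_count()
        step = n // workers + 1
        chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
        with Pool(workers) as pool:
            # Each chunk runs in its own process, i.e. on its own core.
            return sum(pool.map(sum_of_squares, chunks))

    if __name__ == "__main__":
        print(parallel_sum_of_squares(1_000_000))
    ```

    Whether this helps depends on the workload: the speedup only appears when the per-chunk work outweighs the cost of starting processes and shipping results back, which is one reason so few everyday programs bothered.
    
    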