
The Multi-Core Trend

Talking Heads: Motherboard Manager Edition, Q4'10, Part 1
By Chris Angelini

Question: There is an increasing move away from emphasizing raw clock rates in favor of parallelized multi-core designs. Do you think this will change as CPU/GPU hybrids take advantage of GPGPU programming, such as DirectCompute, CUDA, and Stream, easing the CPU's role in tasks that once relied on threaded processors for their performance?

  • As more and more taxing applications arrive, multi-core designs become increasingly necessary for computing.
  • GPGPU solutions are not truly general purpose; they're really optimized for doing a few applications extremely well. Besides, CPU makers still want to add more horsepower to their CPUs, and the best way to do that currently is by adding more cores.
  • I foresee growth in both multi-core design and GPGPU programming.


At the moment, it seems like GPGPU is an eventuality, but it is by no means a short-term certainty. We've been hearing about CUDA for what, almost five years now? And while the technology has done amazing things in the scientific, financial, and medical fields, its utility on the desktop is still far less pervasive--to the point where we certainly wouldn't recommend one card over another specifically for its CUDA support.
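
The contrast is easy to see in code. Below is a minimal, hypothetical sketch of the data-parallel style CUDA (and DirectCompute and Stream) is built around: a trivial vector addition in which each of roughly a million GPU threads handles exactly one element. The array size and launch parameters are illustrative only, not drawn from any real application.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes exactly one output element. This is the
// data-parallel model GPGPU APIs expose: thousands of lightweight
// threads running the same small function over different data.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;              // one million elements (illustrative)
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float* hA = new float[n];
    float* hB = new float[n];
    float* hC = new float[n];
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device-side buffers, plus explicit copies across the expansion bus --
    // exactly the transfer overhead that on-die integration promises to shrink.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(dA, dB, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);       // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] hA; delete[] hB; delete[] hC;
    return 0;
}

Workloads that decompose this cleanly are exactly where GPGPU shines; code full of branches and serial dependencies does not map onto it, which is why the "not truly general purpose" response above rings true.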

During that same time frame, CPU architectures transitioned from cramming as much work as possible through a single core to parallelizing across multiple cores, slowing clocks down to maintain manageable power envelopes given existing manufacturing technologies. And even then, we're still fighting an uphill battle to get developers on board with threaded code. It's happening, though, and at least one of our respondents picked up on that fact.
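
For context, "threaded code" means restructuring work along the lines of the sketch below, written here in standard C++ with a workload and fallback thread count chosen purely for illustration: a large summation is split across however many hardware threads the CPU reports, and each worker owns a disjoint slice of the data.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    // Workload size and values are illustrative only.
    const std::size_t n = 1 << 24;
    std::vector<float> data(n, 1.0f);

    // Ask the runtime how many hardware threads the CPU offers.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0)
        cores = 4;                      // fall back if it can't say

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    // Give each worker a disjoint slice of the array, so the threads
    // never contend and no locking is needed.
    const std::size_t chunk = n / cores;
    for (unsigned t = 0; t < cores; ++t) {
        const std::size_t first = t * chunk;
        const std::size_t last = (t == cores - 1) ? n : first + chunk;
        workers.emplace_back([&partial, &data, t, first, last] {
            partial[t] = std::accumulate(data.begin() + first,
                                         data.begin() + last, 0.0);
        });
    }
    for (auto& w : workers)
        w.join();                       // wait for every core to finish

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " across " << cores << " threads\n";
    return 0;
}

Even in a case this simple, the developer has to think about how to partition the data and when to synchronize; that mental overhead, multiplied across real applications, is the uphill battle we're describing.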

Is it probable that we'll see a return to super-high clocks and minimal parallelism? Not likely. Will shrinking process technology enable more complex multi-core CPUs? We're almost sure of it. Will the next revolutionary move in this space involve integration along the lines of what AMD and Intel are planning? It seems increasingly probable. Sure, integration is a cost-saving strategy, but it also has the potential to enhance performance as latencies are slashed and high-speed pathways between bandwidth-hungry components are better utilized.
