Nvidia: Moore's Law is Dead, Multi-core Not Future

Bill Dally, the chief scientist and senior vice president of research at Nvidia, wrote an opinion piece for Forbes arguing that Moore's Law, the observation that transistor counts, and with them performance, roughly double every 18 months to two years, is dead.

The problem, according to Dally's piece in Forbes, is that today's CPU architectures are still serial processors, while he believes the future lies in parallel processing. He offers the analogy of reading an essay: a single reader can take in only one word at a time, but assigning each paragraph to its own reader would greatly accelerate the process.
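
Dally's reading analogy maps closely onto how data-parallel code is usually structured: split the input, work on the pieces concurrently, then combine the results. As a rough illustration only (the Forbes piece contains no code, and the paragraphs and four-way split here are invented), a C++ sketch of per-paragraph word counting could look like this:

```cpp
// A minimal sketch of the "one reader per paragraph" idea:
// count words in several paragraphs concurrently with std::async.
#include <future>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// One "reader": count whitespace-separated words in a single paragraph.
static size_t count_words(const std::string& paragraph) {
    std::istringstream in(paragraph);
    std::string word;
    size_t n = 0;
    while (in >> word) ++n;
    return n;
}

int main() {
    std::vector<std::string> paragraphs = {
        "the quick brown fox", "jumps over", "the lazy", "dog"};

    // Hand each paragraph to its own asynchronous task ("reader").
    std::vector<std::future<size_t>> readers;
    for (const auto& p : paragraphs)
        readers.push_back(std::async(std::launch::async, count_words, p));

    // Combine the per-paragraph counts; this final step is still serial.
    size_t total = 0;
    for (auto& r : readers) total += r.get();
    std::cout << "total words: " << total << '\n';
}
```

How much of a program can be split up this way, rather than funneled through the serial combine step at the end, determines how much extra cores actually help.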

"To continue scaling computer performance, it is essential that we build parallel machines using cores optimized for energy efficiency, not serial performance. Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work," he wrote. "This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance."

"Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance," Dally added.

Dally also posited that focusing on parallel computing architectures will help resurrect Moore's Law: "Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance--at a tremendous expense in energy."
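
The "twice as fast" figure applies to the portion of a program that can actually run in parallel. A common way to bound the overall gain, though Dally does not cite it in the piece, is Amdahl's law, where f is the fraction of the work that can be parallelized and N is the number of processors:

```latex
S(N) = \frac{1}{(1 - f) + \frac{f}{N}}
```

With f = 0.95, for example, going from one processor to two gives roughly a 1.9x speedup, while even an unlimited number of processors tops out at 20x.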

One big driver of current processor design is the software written to run on today's chips. Dally said that 40 years of entrenched serial programming practices will be hard to change, and that programmers trained in parallel programming are scarce.
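
To make the retraining point concrete (this example is ours, not Dally's), the kind of loop a serial programmer writes by habit often carries a dependency from one iteration to the next, so it cannot simply be handed to more cores; it has to be rethought, for instance as a parallel prefix sum:

```cpp
// A running total written in the familiar serial idiom: each iteration
// depends on the result of the previous one, so the iterations must run
// in order unless the algorithm itself is restructured.
#include <cstddef>
#include <iostream>
#include <vector>

std::vector<double> running_total(const std::vector<double>& values) {
    std::vector<double> out(values.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < values.size(); ++i) {
        sum += values[i];        // depends on every earlier iteration
        out[i] = sum;
    }
    return out;
}

int main() {
    for (double v : running_total({1.0, 2.0, 3.0, 4.0}))
        std::cout << v << ' ';   // prints: 1 3 6 10
    std::cout << '\n';
}
```

Recognizing and restructuring dependencies like this is exactly the skill the article says is in short supply.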

"The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs," Dally concluded. "Let's enable the future of computing to fly--not rumble along on trains with wings."

Marcus Yam
Marcus Yam served as Tom's Hardware News Director during 2008-2014. He entered tech media in the late 90s and fondly remembers the days when an overclocked Celeron 300A and Voodoo2 SLI comprised a gaming rig with the ultimate street cred.
  • Parsian
    somebody has to tell the dude, DUH!!!
  • figgus
    Translation:

    "Our tech is the future, everyone else has no idea what they are doing. Please buy our GPGPU crap, even though it is inferior to what our competitors are making right now for everyday use."
  • 2zao
    !!
    Maybe this will open some eyes... but I doubt many for now...

    too many are in a stupor doing things the way that is the norm and easiest for them instead of how they should be... how long will it take for people to wake up to the direction the change needs to go?
  • yay
    That's all well and good, until you need to do one thing BEFORE another, like when rendering a scene. Or maybe he forgot that.
  • eyemaster
    What are you waiting for then, Bill Dally? Go ahead and create that chip... Ha, that's what I thought, even you can't do it.
  • mindless728
    except without serial optimizations, general apps (not compute apps) will suffer, since serial optimizations allow for fast comparisons, whereas the compute cores on the GPU are very inefficient at this. Yes, it will help computing, but general apps will suffer.

    And really, some programs (algorithms) can never be turned into a parallel app
  • matt_b
    and that programmers trained in parallel programming are scarce.
    I totally agree with this statement here. However, if this were to change and more were trained in how to properly program for parallel computing, then the same could be said about the need to train more in how to properly program for serial computing - which is where processor design currently sits. I think it's fairer to say the insufficiency lies on both sides.

    On another note, am I the only one finding it amusing that the chief scientist of R&D at Nvidia is stating the CPU consumes too much energy??? Did he forget about the monster they just released, or does he still consider it to be within acceptable power requirements or efficient enough?
  • ravewulf
    Dally said that 40 years of entrenched serial programming practices will be hard to change, and that programmers trained in parallel programming are scarce.

    This. Extremely few programs today even properly use the limited number of cores we have now. Look at all the programs that are still single-threaded but could easily benefit from parallelism (QuickTime and iTunes, for one). There are also algorithms that simply CAN'T be made parallel (some parts of video encoding depend on previous results for the next task).
  • triculious
    while I agree that there are instances where parallel processing works way better than serialized, you can't altogether switch from one to the other
    then there's parallelized code, which is hell for programmers

    and then there's what we could call "Dally's law": your graphics card must be twice as hot every 12 months
  • rhino13
    Wait, so then you'd have a bunch of people who only understood one paragraph and nothing else? It's all gotta go back to serial at some point! This is a bad example.

    But I do agree with what he's saying. We need to put more effort into parallel speed than serial.