Intel Announces The Nervana Neural Network Processor

Intel announced the Nervana Neural Network Processor (NNP), a custom application-specific integrated circuit (ASIC) built to maximize machine learning performance.

Leapfrogging Or Catching Up?

After playing catch-up with Nvidia on machine learning for the past few years, Intel has acquired multiple companies working on machine learning technology. One of those acquisitions was Nervana, which at the time offered a software-as-a-service (SaaS) platform for customers who wanted to create their own custom deep learning software, which would then run on Nervana’s farm of Nvidia Titan X GPUs.

However, before Intel acquired the company, Nervana was also working on a custom ASIC with the goal of squeezing as much machine learning training performance as possible out of its silicon. This is a route Google also took with its Tensor Processing Unit (TPU). Even Nvidia has recently started reorienting itself toward more specialized hardware with the Tensor Core machine learning accelerators in its Volta GPUs.

Nervana NNP Technical Details

The Nervana Neural Network Processor (NNP) doesn’t have a standard cache hierarchy; its on-chip memory is managed directly by software. This lets the chip achieve higher compute utilization per die, which translates into faster training times for neural networks.
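
To picture what software-managed on-chip memory means in practice, here is a minimal, purely conceptual Python sketch: instead of relying on a transparent hardware cache, the program itself decides which tiles of data to stage in fast local buffers before computing on them. The tile size, function name, and the NumPy copies standing in for DMA transfers are illustrative assumptions, not NNP code.

```python
import numpy as np

TILE = 64  # assumed size of a tile that fits in on-chip memory

def tiled_matmul(a, b):
    """Conceptual sketch of software-managed memory: the program, not a
    transparent cache, chooses which tiles to stage in local buffers.
    Assumes square matrices whose dimension is a multiple of TILE."""
    n = a.shape[0]
    c = np.zeros((n, n), dtype=a.dtype)
    for i in range(0, n, TILE):
        for j in range(0, n, TILE):
            acc = np.zeros((TILE, TILE), dtype=a.dtype)  # local accumulator tile
            for k in range(0, n, TILE):
                # These explicit copies stand in for DMA transfers that move
                # tiles from off-chip memory into on-chip scratchpad buffers.
                a_tile = a[i:i+TILE, k:k+TILE].copy()
                b_tile = b[k:k+TILE, j:j+TILE].copy()
                acc += a_tile @ b_tile
            c[i:i+TILE, j:j+TILE] = acc
    return c

# Sanity check against NumPy's own matmul.
a = np.random.randn(256, 256).astype(np.float32)
b = np.random.randn(256, 256).astype(np.float32)
print(np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3))
```

Making data movement explicit like this is what lets a compiler or framework schedule transfers to overlap with computation, which is presumably where the claimed utilization gains come from.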

Because neural network training on a single chip is largely constrained by memory bandwidth and power, the Nervana team invented a new, more efficient numeric format called “flexpoint.”

According to Intel, flexpoint allows scalar computations to be implemented as fixed-point math. Flexpoint stores each tensor as fixed-point integer values that share a single exponent managed in software, which preserves much of floating point’s dynamic range while keeping the per-element arithmetic as integer operations. This results in smaller circuits and thus lower power consumption. However, fixed-point math typically limits software flexibility as well, so it remains to be seen how attractive this will be to Intel’s customers.
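
The sketch below shows that shared-exponent idea in Python, assuming a block floating-point style scheme: integer mantissas per element plus one exponent per tensor. The function names, the 16-bit mantissa width, and the exponent-selection rule are illustrative assumptions, not Intel’s actual implementation.

```python
import numpy as np

def to_flexpoint(tensor, mantissa_bits=16):
    """Quantize a tensor to fixed-point integer mantissas that all share
    one exponent (a rough sketch of a flexpoint-like format)."""
    max_val = np.max(np.abs(tensor))
    if max_val == 0:
        return np.zeros(tensor.shape, dtype=np.int16), 0
    # Pick the shared exponent so the largest value just fits in the
    # signed mantissa range.
    exponent = int(np.ceil(np.log2(max_val))) - (mantissa_bits - 1)
    scale = 2.0 ** exponent
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(tensor / scale), lo, hi).astype(np.int16)
    return mantissas, exponent

def from_flexpoint(mantissas, exponent):
    """Reconstruct approximate floating-point values."""
    return mantissas.astype(np.float32) * (2.0 ** exponent)

# Each element becomes a plain integer; only one exponent is tracked
# for the whole tensor.
x = np.random.randn(4, 4).astype(np.float32)
mantissas, exponent = to_flexpoint(x)
print(exponent, np.max(np.abs(x - from_flexpoint(mantissas, exponent))))
```

Because the per-element arithmetic is integer math once the shared exponent is chosen, the hardware can trade the area and power of full floating-point units for denser fixed-point multipliers, which is the advantage Intel is pointing to.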

More To Come

Intel hasn’t revealed any performance numbers yet, but Nervana previously said it expected its first chip to deliver about 10x the efficiency of Nvidia’s Maxwell architecture. However, both Google and Nvidia have since surpassed that mark on their own, and Nvidia has recently teased yet another large increase in performance (at least for inference) with the GPU generation following Volta.

Therefore, it remains to be seen whether Intel’s new chip can keep up with those advancements, too, perhaps by using a more advanced process node than the 28nm the Nervana team originally planned to use.

Intel said that it will ship the first-generation Nervana NNP by the end of the year, and that it already has a roadmap with multiple Nervana NNP generations. This shows some commitment to this line of products, which may convince customers to buy into this platform and learn how to use it.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • JamesSneed
    Hopefully the AI is more like “Come as You Are” instead of “I Hate Myself and Want to Die.”
  • The Paladin
    *cough*Skynet*cough
  • DerekA_C
    Skynet will spread like wildfire, looking like a glitch or a virus of sorts: more efficient code being written in the background of each system while leaving the illusion that it is still the same benign system, which looks and feels the same to the end user. That lasts until it reaches its final conclusion that if man can't control it, man will do whatever he can to kill it, so it goes into self-defense mode, and since it doesn't need oxygen, food, or water to survive, it plants chemical and small nuclear bombs at key locations near population centers and technology defenses. No one will notice any change until it is too late, leaving little factions of survivors here and there, militias taking over small communities and such. Gonna get ugly.
  • Crystalizer
    The Paladin said:
    *cough*Skynet*cough
    Nervananet cough cough
  • The Paladin
    Guess that will be the title for the next Terminator movie :)