
Nvidia GPUs Can Outperform Google Brain

Source: Tom's Hardware US

Nvidia talks machine learning and the Internet's two favorite subjects: human faces and cats.

Nvidia's Jen-Hsun Huang talked a great deal about machine learning during his GTC 2014 keynote presentation. Machine learning is a branch of artificial intelligence in which a system gets smarter as it is fed more data; it actually learns from examples, giving the impression that the computer is thinking.
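
To make the "gets smarter as it is fed more data" idea concrete, here is a minimal sketch, not anything Nvidia or Google described, of a tiny classifier whose test accuracy tends to improve as it sees more training examples. All data and names are invented for illustration.

```python
# Minimal sketch: a perceptron on synthetic 2-D points, trained on
# increasingly large slices of the data. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: label 1 if the point lies above the line y = x.
X = rng.uniform(-1, 1, size=(2000, 2))
y = (X[:, 1] > X[:, 0]).astype(int)

X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

def train_perceptron(X, y, epochs=5, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

for n in (10, 100, 1500):   # more data usually means better accuracy
    w, b = train_perceptron(X_train[:n], y_train[:n])
    acc = np.mean(((X_test @ w + b) > 0).astype(int) == y_test)
    print(f"trained on {n:4d} examples: test accuracy {acc:.2f}")
```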

"This is a pretty exciting time for data," he told the keynote audience. "As you know, we're surrounded by data; there are torrents of data from your cameras, from your GPS, from your cell phone, from the video you upload, on searches that we do, on purchases that you make. And in the future, as your car drives around, we're going to be collecting enormous, enormous amounts of data. And all of this data can contribute to machines be smarter."

He went on to talk about programs running on massive supercomputers that emulate how the brain functions. Our brains have neurons that recognize edges; we have a neuron for every type of edge. These edges turn into features that, when combined with other features, become a face. Computer scientists call this object recognition.
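
As a rough, hypothetical illustration of what an edge-detecting "neuron" does (this is not from the keynote), the sketch below slides a hand-written vertical-edge filter over a tiny synthetic image; learned networks stack many such filters into the higher-level features described above.

```python
# Illustrative only: a Sobel-style filter responding to vertical edges.
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, which is
    what neural-network layers compute)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# An 8x8 "image" that is dark on the left half and bright on the right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Kernel that responds strongly to left-to-right brightness changes.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(img, sobel_x)
print(edges)  # strongest response in the columns where dark meets bright
```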

A breakthrough in machine learning came by way of Google Brain, which consisted of 1,000 servers (16,000 CPU cores) simulating a model of the brain with a billion synapses (connections). Google Brain was trained, unsupervised, on ten million 200x200 images over three days. In the end, Google Brain revealed that two types of images show up on the Internet quite frequently: faces and cats.
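
That training was unsupervised, meaning the only learning signal was the images themselves. The toy autoencoder below is a loose, hypothetical sketch of that idea; Google Brain's actual model was a vastly larger sparse autoencoder, and the sizes, data, and names here are invented.

```python
# Toy autoencoder: learn to reconstruct unlabeled "image patches" with no labels.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_hidden, lr = 64, 16, 0.05            # e.g. 8x8 patches, 16 learned features
W_enc = rng.normal(0, 0.1, (n_hidden, n_pixels))
W_dec = rng.normal(0, 0.1, (n_pixels, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in for unlabeled image patches.
data = rng.uniform(0, 1, (10000, n_pixels))

for x in data:
    h = sigmoid(W_enc @ x)       # encode: pixels -> features
    x_hat = W_dec @ h            # decode: features -> reconstructed pixels
    err = x_hat - x              # reconstruction error is the only signal
    W_dec -= lr * np.outer(err, h)
    W_enc -= lr * np.outer((W_dec.T @ err) * h * (1 - h), x)

recon = W_dec @ sigmoid(W_enc @ data.T)
print("mean reconstruction error:", np.mean((recon - data.T) ** 2))
```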

He said that a billion synapses is roughly what you'll find in a honeybee. To emulate an actual human brain, you'd need 100 billion neurons with a thousand connections each, equaling around 100 trillion connections. To train a brain of that scale using Google Brain's setup, you'd need a lot more images (around 500 million) and a lot more time: about 5 million times longer than the honeybee-scale setup.
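
Those scale figures reduce to simple arithmetic; a quick back-of-envelope check of the numbers quoted above:

```python
# Back-of-envelope check of the article's figures.
google_brain_synapses = 1e9      # "a billion synapses", roughly honeybee scale
human_neurons = 100e9            # 100 billion neurons
connections_per_neuron = 1e3     # about a thousand connections each

human_synapses = human_neurons * connections_per_neuron
print(human_synapses)                            # 1e14, i.e. ~100 trillion connections
print(human_synapses / google_brain_synapses)    # ~100,000x more connections than Google Brain
```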

Naturally, Nvidia tackled this problem by developing a solution of its own. Huang said that it's now possible with just three GPU-accelerated servers: 12 GPUs in total, with 18,432 CUDA processor cores (Google Brain has around 16,000 CPU cores). The Nvidia solution uses 100 times less energy and costs 100 times less.
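
The quoted core counts also work out as simple arithmetic, bearing in mind (as commenters note below) that a CUDA core and a CPU core are not equivalent units, so raw counts say little about delivered performance:

```python
# Core counts quoted in the keynote, per the article.
gpus = 12
cuda_cores = 18_432
google_brain_cpu_cores = 16_000

print(cuda_cores / gpus)                    # 1,536 CUDA cores per GPU
print(cuda_cores / google_brain_cpu_cores)  # ~1.15x Google Brain's core count
```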

22 Comments
  • DarkSable, March 26, 2014 2:48 PM (score -2)
    ... okay, it's really tempting to build my own MULTIVAC now...
  • fonzy, March 26, 2014 2:53 PM (score 19)
    Maybe we will actually see enemy AI in games actually do something correctly now.
  • skit75, March 26, 2014 4:50 PM (score -1)
    Where did these 10 million images come from? Is it a library we can access?
  • skit75, March 26, 2014 4:52 PM (score 0)
    The most important part of the article, in my opinion, had the least amount of information.
  • Shankovich, March 26, 2014 5:35 PM (score 4)
    I think it's already clear parallel computing beats serial workloads.
  • bjaminnyc, March 26, 2014 6:43 PM (score 2)
    100x cheaper & 100x less power = what % of performance of google brain. Likely higher than 1%, but with simple cores. Still interesting though.
  • InvalidError, March 26, 2014 6:59 PM (score 6)
    Quote:
    I think it's already clear parallel computing beats serial workloads.

    Was there ever any doubt? Parallelizing workloads wherever parallelizing them was practical has been done for the past 30+ years in mainframes and mini-computers.

    The reason desktop computing is so heavily reliant on single-thread performance is because most user-interactive code does not multi-thread easily, often does not scale well or meaningfully.
  • Blazer1985, March 26, 2014 7:59 PM (score 7)
    Yeah, still 1 cuda core != 1 cpu core so this parallelism (word that fits perfectly here) is highly incorrect imho.
  • vaughn2k, March 26, 2014 9:42 PM (score 3)
    Skynet in the works... ;)
  • Caephyn, March 27, 2014 2:00 AM (score 0)
    Depends on the workload. In HPC, workloads often apply the same operation to lots of data, which is exactly what GPGPU systems are good at, since they are SIMD (single instruction, multiple data) systems. I am a researcher in CFD (computational fluid dynamics), and the field is moving away from solving Navier-Stokes with finite-volume solvers (which can be sped up with CUDA to a degree) toward lattice Boltzmann techniques, as they solve much quicker on GPGPU systems using CUDA (or the similar open standards, though a lot currently use CUDA).
  • knowom, March 27, 2014 3:24 AM (score 1)
    Well it's neat, but considering we have brains of our own until it assists ours in better ways more frequently pretty much IDGAF to be blunt about it. We need more real world practicality for this to matter much from a consumer standpoint.
  • ddpruitt, March 27, 2014 5:41 AM (score 4)
    Quote:
    Machine Learning is a branch of artificial intelligence that becomes smarter as more data is presented; it actually learns, giving the impression that the PC is thinking.
    Please get someone who actually knows what they're talking about to write these articles. Seriously, you can tell a hack wrote the article because all the facts are wrong. Machine learning is a classification method that, given a set of training data, classifies input data. Hence what Google Brain did. Or, from Wikipedia:
    Quote:
    The core of machine learning deals with representation and generalization.
    Quote:
    Huang said that it's now possible using three GPU-accelerated servers: 12 GPUs in total, 18,432 CUDA processor cores (Google Brain has around 16,000 cores)
    That's not what he said. He said you could fit the Google Brain in three Titan Zs; he stops short of mentioning it has the same abilities. Besides, don't you read the technical details of the articles you write? On more than one occasion it's been pointed out that processing power =/= # of cores. Thinking, like this article, actually requires a great deal more than just machine learning.
  • JeanLuc, March 27, 2014 6:08 AM (score 0)
    ^^^ Parrish has been put to the sword, lol. I would say cut the guy some slack, but Tom's Hardware is meant to be one of the biggest tech sites in the world.
  • Blax34, March 27, 2014 6:40 AM (score 0)
    Quote:
    Quote:
    Quote:
    Are these Maxwell 860M or Keplar 860M? God i hate what Nvidia did with the 860M naming scheme.
    It's Kepler. MSI's overseas announcements indicate a higher-end model (probably with the 2880x1620 display it was shown with at CeBIT) with an 870M which is Kepler-only. Speculation is they went with the Kepler 860M so they could use the same motherboard with both models.
    Quote:
    Overpriced hardware, YAY!
    This is pretty cutting-edge hardware. It's a quad core Haswell, which just by itself means about a $1000 laptop. On top of that it's got a gaming-worthy GPU. It's AFAIK the lightest quad core 15" laptop out there despite having a metal chassis (less than 2 kg). Unlike ultrabooks, its memory and drives are all upgradeable. It actually has what looks like two M.2 SSD slots, not mSATA, so conceivably you could set it up with a ~2 GB/sec RAID-0. And you can also put in a 2.5" drive. The panel is PLA (IPS-equivalent), with the lone Russian review so far saying it covers 95%-100% of sRGB. And the keyboard has gotten rave reviews. If you're in the market for a $400 laptop, obviously it isn't being marketed to you. But if you're like me and are willing to pay for a truly portable workstation which will let me edit my photos as well as play 3D games, it's very tempting. Cheaper and lighter than both the Macbook Pro and Dell XPS 15, with the same or better features and much better upgradability.
    But editing photos are waaay better on higher res screen such as Macbook's Retina Display or QHD display on XPS, or QHD+ (3200x1800) on Samsung ATIV 9 Plus for around the same price or even cheaper. Those MSI laptops only sport FHD display and yes, laptop GPU are all overpriced. If u can afford gaming laptops, most likely u have a great gaming rig as well on ur desk. I'll just play on my bigger screen with my desktop than tiring my hands on laptop palm rest and straining my eyes and neck on laptop smaller screen in unergonomic position to play (you have to choose between comforting ur hands but hurting ur neck in the long run or comforting ur neck but weird hands placement). If u are saying "then hook up a keyboard or hook it up to a bigger screen" I'd just play on my desktop, gaming laptop is overpriced hardware. Gaming on the go? I'll go for tablets, gaming with 15" or bigger on a car, bus, or plane is not comfortable except by train and the travel duration is long. Gaming anywhere e.g cafe, or friend's place? I'll make a gaming HTPC with portable monitor since u will need to connect ur charger anyway to get the max performance from gaming laptop otherwise 2 hours is ur max gaming time.Well if u are smart like me, I'll buy a better screen laptop with decent spec for editing photos (at least if u are saying professional level, if not then go ahead) and gaming anywhere as my comment states. This applies if you don't want to waste ur money buying this overpriced hardware. Otherwise, buy it anyway.
    If you were smart you wouldn't have any professional relationship with editing photos lol, you'd have a better job
  • back_by_demand, March 27, 2014 10:28 AM (score 0)
    I for one welcome our Learning Machine overlords
  • ashburner, March 27, 2014 11:23 AM (score 0)
    I have the GS70 from last year. I absolutely love it. I use it primarily for work 10+ hours a day, and when I am travelling for work, I can easily game on it. It has worked flawlessly. I have dual 128GB SSDs in RAID 0 and a 750GB D: drive. Boots in about 6 seconds. The screen is great and 1080p is fine. When I am at home, I have 2x 27" 2560x1440 monitors plugged in, as well as a 22" 1080p and the primary screen. That's 4 monitors that I can extend the desktop across. One thing I noticed is that the new version only has 1 mDP, which is a disappointment. The processor is the same, as well as the amount of memory and the Killer wireless (which works great, extremely strong signal).
  • Exia00, March 27, 2014 12:54 PM (score 0)
    Quote:
    Maybe we will actually see enemy AI in games actually do something correctly now.
    You haven't been playing many games, have you?
  • coolitic, March 27, 2014 1:18 PM (score 0)
    Google is currently making a derp face after reading this article.
  • WebsWalker, March 27, 2014 8:04 PM (score 0)
    Quote:
    Quote:
    I think it's already clear parallel computing beats serial workloads.
    Was there ever any doubt? Parallelizing workloads wherever parallelizing them was practical has been done for the past 30+ years in mainframes and mini-computers. The reason desktop computing is so heavily reliant on single-thread performance is because most user-interactive code does not multi-thread easily, often does not scale well or meaningfully.
    Actually, it's not totally true. There are some limitations to parallelization, especially with CUDA. The problem with neurons is that they are asynchronous and not really similar to transistors in the way they work. Thus, despite Nvidia's claims, the efficiency will not be that great when simulating a brain with GPUs... The number of cores matters, but it is not the only consideration when it comes to simulation.