How Google’s AlphaGo AGI Could Soon Beat The World’s Best Go Player

Last month, Google announced that its AlphaGo program, powered by the DeepMind AI technology that it acquired in 2014, managed to beat the European Go champion, Fan Hui, in five out of five games. The company announced that next month, it will pit AlphaGo against Lee Se-dol, who has been the highest-ranked Go player in the world for the past decade.

Se-dol’s Match

Lee Se-dol has a 71.8 percent career winning rate and is considered a much stronger player than Fan Hui. However, it’s hard to tell whether he has a good chance of beating Google’s AlphaGo AI or whether it will prove impossible for him to beat it, too. AlphaGo beat Fan Hui 5-0, but a clean sweep doesn’t reveal the margin: AlphaGo could have been only slightly better than Hui, or vastly better, in which case Lee Se-dol wouldn’t stand a chance either.

For now, Se-dol seems confident that he can beat Google’s AI, but he acknowledges it’s only a matter of time until the AI becomes more advanced or until Google runs it on more powerful hardware.

“I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,” said Se-dol.

The match between AlphaGo and Lee Se-dol will consist of five games, each played on a different day, presumably because humans need rest after such intense matches, which can affect their performance in subsequent games. The games will be streamed live on YouTube on March 9, 10, 12, 13 and 15. If Se-dol wins, he will receive a $1 million prize.

Deep Blue vs DeepMind

The previous announcement from Google that its AI had beaten a professional Go champion is highly reminiscent of when IBM’s Deep Blue beat Garry Kasparov, the world’s best chess player at the time (though only after Kasparov beat it first, a year earlier).

However, this time things are different for two reasons. The first is that chess is orders of magnitude easier for an AI to master. There are on average about 35 possible moves from any chess position, while a Go position offers roughly 250 possible moves, each of which leads to roughly 250 more, and so on. The result is that there are more possible board configurations in Go than atoms in the universe, according to Demis Hassabis, DeepMind’s co-founder.
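Hassabis’ comparison can be checked with quick back-of-the-envelope arithmetic. The sketch below uses commonly cited averages — about 35 moves over roughly 80 turns for chess, about 250 moves over roughly 150 turns for Go — and the standard ~10^80 order-of-magnitude estimate for atoms in the observable universe. These figures are approximations, not values from the article.

```python
import math

# Rough game-tree size estimate: (average branching factor) ^ (average game length).
# All constants here are commonly cited approximations, not exact values.
CHESS_BRANCHING, CHESS_MOVES = 35, 80
GO_BRANCHING, GO_MOVES = 250, 150
ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

chess_tree = CHESS_BRANCHING ** CHESS_MOVES
go_tree = GO_BRANCHING ** GO_MOVES

print(f"chess game tree: ~10^{int(math.log10(chess_tree))}")   # ~10^123
print(f"go game tree:    ~10^{int(math.log10(go_tree))}")      # ~10^359
# Even dividing by the number of atoms in the universe barely dents it:
print(f"go tree / atoms: ~10^{int(math.log10(go_tree // ATOMS_IN_UNIVERSE))}")
```

Even with these rough numbers, the Go tree dwarfs the chess tree by more than 200 orders of magnitude, which is why exhaustive look-ahead is hopeless.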

In other words, at least until we develop large quantum computers, such computations are simply not possible in a reasonable amount of time. This is why beating top human Go players has long been regarded as something of a holy grail in AI research: a turning point after which we could build AI that actually starts to resemble the human mind, rather than just a bundle of pre-programmed instructions for doing one very specific thing (like winning at chess).

The second reason this time is different from the Deep Blue era is that, unlike Deep Blue, which beat Kasparov largely through brute force (searching ahead through enormous numbers of possible move sequences and choosing the best one), Google’s DeepMind-powered AlphaGo agent doesn’t rely on brute-force calculation alone.

AlphaGo’s General Intelligence

The real breakthrough was that AlphaGo learned how to play Go by “watching” 30 million moves played by human experts. AlphaGo itself is a single application of DeepMind’s technology, which Google trained specifically on Go, but DeepMind’s broader stated goal is an “AGI,” or artificial general intelligence.

Google has talked before about training its agents to learn arcade games from the ’70s and ’80s, as well as the ’90s game Doom. The DeepMind agent is not pre-programmed to play any of those games, including Go.

It learns everything from scratch, including the game’s rules and how to use “button actions” in a video game. At first it fails constantly, until it eventually figures out which moves “reward” it by letting it progress through the game.
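The trial-and-error process described above is the core of reinforcement learning. The sketch below is not DeepMind’s deep-network system — just a minimal tabular Q-learning illustration on a made-up toy “game”: a five-cell corridor where moving right eventually reaches a reward and moving left does not. All names and parameters here are invented for the example.

```python
import random

# Minimal tabular Q-learning sketch: the agent knows nothing at the start,
# fails repeatedly, and gradually learns which actions lead to reward.
random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)          # toy corridor; 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

# Q maps (state, action) to a learned estimate of future reward.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state < N_STATES - 1:
        # Explore randomly sometimes; otherwise exploit what was learned so far.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # reward only at the end
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update: nudge the estimate toward reward + discounted future.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The greedy action in each cell after training; it learns to always go right.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

DeepMind’s Atari-playing agents combine this same reward-driven update with a deep neural network instead of a lookup table, which is what lets them scale to raw screen pixels.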

In AlphaGo’s case, it learned to predict expert human moves 57 percent of the time, which meant it could compete with human experts on average but could still lose some of those games. To make it an even better player, Google pitted AlphaGo against itself so it could continuously improve its play. This made AlphaGo so good that even without any tree search (brute-force look-ahead) it could beat the best existing Go programs, which rely on enormous search trees.
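Self-play can be illustrated in miniature. The sketch below is emphatically not AlphaGo’s method — it is a simple Monte Carlo value-learning loop for the small game of Nim (take 1–3 stones from a pile; whoever takes the last stone wins), where a single value table plays both sides and improves by playing against itself. All parameters are invented for the example.

```python
import random

# Toy self-play: one Q-table plays BOTH sides of Nim and learns from outcomes.
random.seed(1)
PILE, TAKES = 10, (1, 2, 3)   # pile size and legal moves (illustrative values)
ALPHA, EPSILON = 0.5, 0.3     # learning rate and exploration probability
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in TAKES if a <= s}

def choose(state, eps):
    """Pick a move: explore randomly with probability eps, else play greedily."""
    moves = [a for a in TAKES if a <= state]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(state, a)])

for episode in range(5000):
    state, history = PILE, []
    while state > 0:
        action = choose(state, EPSILON)
        history.append((state, action))
        state -= action
    # The player who took the last stone wins (+1). Walking the game backward,
    # the sign alternates because consecutive moves belong to opposing players.
    outcome = 1.0
    for (s, a) in reversed(history):
        Q[(s, a)] += ALPHA * (outcome - Q[(s, a)])
        outcome = -outcome

# Moves that grab the last stones always win, so their values approach +1.
print(round(Q[(1, 1)], 2), round(Q[(2, 2)], 2), round(Q[(3, 3)], 2))
```

The point of the toy: neither side is ever given a strategy, yet both improve together because every game one side wins teaches the other side what to avoid — the same feedback loop, at vastly larger scale, that sharpened AlphaGo beyond its human training data.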

Google believes that one day it should be possible to use this Artificial General Intelligence to address real-world problems, from climate modelling to complex disease analysis.

Google DeepMind: Ground-breaking AlphaGo masters the game of Go

Lucian Armasu is a Contributing Writer for Tom's Hardware. You can follow him at @lucian_armasu. 
