After winning 60 online matches against top Go players earlier this year under the name “Master,” and after twice defeating Ke Jie, the world’s top-ranked (human) Go player, the AlphaGo AI has become the ultimate Go master. This latest development suggests that artificial intelligence may soon begin to surpass humans at other highly complex tasks.
The “God Of Go”
On Saturday, Ke Jie played his final match against AlphaGo, taking white, whose komi compensation points are generally thought to give the second player a slight edge. His strategy this time was to make the game as complex as possible, hoping to force AlphaGo into a mistake by poorly connecting its own stones.
In theory, this was a sound strategy on Ke Jie’s part: he was betting that AlphaGo was still just a machine that couldn’t “connect the dots” by reading the bigger picture on the board as well as a human can. However, AlphaGo played a strong game both locally and globally, denying Ke Jie any opportunity to lure it into a mistake.
AlphaGo may have “known” what Ke Jie was attempting and factored that into its moves, or it may simply have played as close to perfectly as possible, leaving Ke Jie no opening on the board.
Ke Jie said in a post-game interview that AlphaGo sees the whole universe of Go (“weiqi” in Chinese), while he could see only a small area around himself. It was, he said, like playing Go in his backyard while AlphaGo explores the universe.
Before the matches, Ke Jie presumed that AlphaGo, and AI in general, would fare better in games with smooth transitions than in big clashes with its opponent. In his second match especially, however, he noticed that AlphaGo not only handles big clashes easily now, but also finds far better ways out of them than humans do. In the same interview, he likened AlphaGo to a “God of Go.”
AlphaGo Gives Hope For The Future
After the game, Ke Jie said that the ease with which AlphaGo now beats humans gives him hope for the future, because that skill may be harnessed for other tasks. He mainly hopes AlphaGo’s technology can be applied to medical science, where it might help cure diseases or at least find better treatments for them.
DeepMind, the U.K. company behind AlphaGo, which Google acquired in 2014 for more than $500 million, has already started collaborating with the U.K.’s National Health Service (NHS) to improve access to quality healthcare and speed up care. DeepMind’s technology may also enable new diagnostic methods that could catch health issues earlier.
Medical science is not the only area where DeepMind’s technology has been put to work. It is already paying off for Google, cutting the company’s data center cooling bill by up to 40%.
DeepMind’s technology has also been used to generate synthetic speech with near-human voice quality. However, the company is still working to reduce the computing resources the task requires, so it has not yet been deployed at scale in Google’s products.
From Go To 'StarCraft 2'
DeepMind CEO Demis Hassabis said in a recent post that the final match between Ke Jie and AlphaGo was the last such event the company will organize, and that it is stepping back from competitive play. That has probably disappointed many players who wanted a shot at beating the “God of Go” in a tournament, however fruitless the effort might have been.
However, Hassabis did say that DeepMind would release 50 “special games” in which AlphaGo plays against itself. The company will also release a Go teaching tool that will be developed in collaboration with Ke Jie.
Even though DeepMind is turning toward more “serious” problems for its artificial intelligence to tackle, Go is not the last game its AI will try to master: StarCraft 2 appears to be next in line.
In Go, the AI had “perfect information”: it could “see” every move its opponent made. In StarCraft 2, players operate with far more limited information. Because of the fog of war, they can’t see where their opponents are or what they’re building and training, so the DeepMind AI will have to make the best of that uncertainty.
Unlike a typical game AI, which simply reads the information it needs from the game’s code, the DeepMind AI will have to learn StarCraft the way human players do: by watching others play and through trial and error. It will also be limited in how many actions it can perform per minute, just as humans are limited by how fast their hands can move across the keyboard.
DeepMind and Blizzard plan to open up the project this year so other researchers can help improve the AI to the point where it can beat a top human player. DeepMind says the “messiness” of the StarCraft 2 environment closely resembles the real world, so mastering it should help its AI better understand how our world works. That could give the DeepMind AI new superhuman problem-solving capabilities, especially in robotics, but potentially in other areas, too.