Update, 3/15/16, 8:35am PT: AlphaGo won the fifth and final match against Lee Sedol, the 18-time world champion at Go. Yesterday, AlphaGo also received an honorary 9-dan rank, which is reserved for "divine players" such as Sedol.
The fifth and final match was more intense than the other four, as AlphaGo made one major mistake during the game. However, it recovered and managed to catch up to, and then beat, Sedol.
Although Sedol lost to an AI, the five matches between the two helped bring global attention to the game of Go. In that sense, it's a win for the Go community, as more people become interested in learning and playing Go. Ultimately, it's a win for humanity that we've created artificial intelligence that can learn as humans do and then beat them at Go, one of the hardest games for AI. This artificial intelligence can now be put to better use in healthcare or in other important areas, where it could solve major problems that humans can't or have been too slow to solve.
Over the weekend, Lee Sedol played two more matches against Google's AlphaGo AI, losing the first but winning the second, a surprising and much-needed boost for Sedol's and other Go players' morale.
AlphaGo Wins Best-Of-Five Series
On Saturday (South Korean time), AlphaGo scored its third successive win against Lee Sedol, meaning the AI had already clinched a majority of the five planned matches. Sedol thereby also lost the $1 million prize.
For the third game, Sedol looked for weaknesses in AlphaGo’s play and tried to open in a more aggressive way to unbalance the AI. The game became more exciting, but eventually Sedol ran out of time way before AlphaGo did, and he had to resign.
Fourth Match: Sedol Wins First Game
Sedol managed to win the fourth match, which was no doubt a relief to the many professional Go players watching the game and proved that AlphaGo did have weaknesses. Some attributed the win to a "genius move" on Sedol's part (move 78). Others attributed it to a play error on AlphaGo's part during the subsequent set of moves, which professional Go commentators deemed surprisingly weak.
After almost four hours had passed, AlphaGo was also running out of time, and it had already made too many mistakes after Sedol's move 78. Its probability of winning had dropped considerably, and Google's AlphaGo engineers had programmed it to resign if its chances of winning fell below 20 percent. This is more in line with how human Go players resign once they feel their chance of success is too low, rather than dragging the game out. AlphaGo then resigned, giving Sedol his sole win in the series.
Sedol said that this one win was very important to him, especially after losing three times consecutively to AlphaGo:
"If today I was, let's say, winning three consecutive games and if I had lost one single game it would have really hurt tremendously. But because I lost three matches and I was able to get one single win, I think this one win is so valuable I would not exchange this with anything in the world," Sedol said in the post-match conference.
Humans Can Still Win At Go (For Now)
The founder of DeepMind, the company that created AlphaGo, said in the post-match conference that AlphaGo's defeat is valuable because the team will now get to see where the AI still has weaknesses and improve it further.
In a bigger sense, this victory by Sedol could also mean that no matter how “smart” general artificial intelligence becomes, there may always be some weaknesses that can be exploited by humans, even if they would be increasingly harder to detect.
The final match between Lee Sedol and AlphaGo will be streamed live on YouTube tonight at 12am ET (1pm KST, March 15).
Lucian Armasu is a Contributing Writer for Tom's Hardware. You can follow him at @lucian_armasu.
This seems like nothing more than wishful thinking -- "no matter how smart they'll become, we humans will always have the potential to outsmart them! We simply must have!" If the AIs become so smart that the weaknesses become not just "increasingly harder to detect" but "humanly impossible to detect", it's an entirely moot point. It's also a little silly because, of course, every *human* intelligence has weaknesses that can be exploited, and often quite pitifully so.
No, my friends, we are facing the very real possibility that we *will* be able, one day, to build machines that are better than we are in every area we'd like to think of. Rather than hide from this terrifying prospect, we'd better consider our options carefully, and in particular, we need to be very careful about how we go about building such machines and how to make sure they don't accidentally or intentionally do terrible things that we're not clever enough to foresee. Killbots à la Skynet are just one (clichéd) extreme, and not necessarily a realistic one, but AIs don't even have to have malicious intent to really ruin our world. Just look at what we do to ourselves, for starters...