Why should you care about Google's AI winning a board game?

This is actually a pretty big deal, so let us explain why

Water cooler

El Reg, what's all this about a Google AI playing a board game against a human?

For the last week or so, Google-owned DeepMind's AlphaGo machine learning project has been locked in a competition with Lee Sedol, one of the world's top-ranked Go players, to test AlphaGo's ability to solve the sort of complex problems that the human mind is capable of handling.

How is it doing that? By playing Go, a Chinese board game invented thousands of years ago. Players take turns placing stones on a 19 by 19 grid, aiming to surround territory and capture opposing stones by encircling them, which removes those pieces from the board. Lee Sedol is putting DeepMind's AlphaGo software to the test with a series of five games.

Didn't we already do this, like, a bunch of times? If you're referring to Garry Kasparov's chess matches against Deep Blue and the IBM Watson appearance against Ken Jennings on Jeopardy!, then yes. These sorts of publicity stunt competitions are a popular way to show the public how AI platforms have advanced.

So, why is this one any different? Picking a chess move or querying a massive database of trivia is something that we can accomplish with brute-force computing power. Go, on the other hand, presents a far more complex challenge for a computer. The number of possible Go games dwarfs that of chess: in a 400-move match, there are up to 361^400 possible ways a game could play out. Selecting the right move within a reasonable amount of time using conventional search-tree algorithms would be pretty much impossible for even our most powerful supercomputers.
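If you want a feel for the scale, here's a quick back-of-the-envelope calculation in Python. The chess figures (roughly 35 legal moves per position over an 80-ply game) are the usual rough estimates, not exact counts:

```python
import math

# Upper bound on distinct 400-move Go sequences: at most 361 choices per
# move (ignoring occupied points, captures and the ko rule).
go_digits = 400 * math.log10(361)       # roughly 1023 digits

# Chess, for comparison: roughly 35 legal moves per position over an
# ~80-ply game -- the usual back-of-the-envelope figures.
chess_digits = 80 * math.log10(35)      # roughly 124 digits

print(f"361^400 is a number with roughly {go_digits:.0f} digits")
print(f"35^80 is a number with roughly {chess_digits:.0f} digits")
```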

If Go is so hard, how did Google solve this problem? DeepMind's AlphaGo software doesn't rely on search-tree algorithms alone. Rather, it uses its machine-learning chops, drawing on an archive of games played by human opponents and on simulated games against itself, to analyze the board and whittle the list of possible moves down to a more manageable number.

Basically, faced with an extremely high-level challenge, it had to develop its own intuition – and one strong enough to flummox a human world champ.
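To make the idea concrete, here's a deliberately toy sketch in Python. This is not DeepMind's code: the random placeholders stand in for the trained policy and value networks, and the point is simply that a learned "policy" prunes the ~361 candidate points down to a handful before any serious search effort is spent:

```python
import random

def policy_priors(position, legal_moves):
    """Stand-in for a trained policy network: a probability per candidate move.
    In AlphaGo this comes from a deep net trained on human games and self-play."""
    scores = [random.random() for _ in legal_moves]   # placeholder scores
    total = sum(scores)
    return {m: s / total for m, s in zip(legal_moves, scores)}

def evaluate(position, move):
    """Stand-in for a value network / rollout: an estimated win probability."""
    return random.random()                            # placeholder estimate

def choose_move(position, legal_moves, top_k=8):
    """Prune with the policy, then spend evaluation effort only on survivors."""
    priors = policy_priors(position, legal_moves)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    # Combine prior and evaluation, loosely in the spirit of how tree search
    # mixes learned priors with search results.
    return max(candidates, key=lambda m: priors[m] * evaluate(position, m))

# Toy usage: the points of a 19x19 board as (row, col) pairs.
board_points = [(r, c) for r in range(19) for c in range(19)]
print(choose_move(position=None, legal_moves=board_points))
```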

How has DeepMind fared, then? Very well, indeed. AlphaGo won the first three games against Sedol in the best-of-five series, prompting the human champion to remark that the AI was making moves "that could not have been possible for a human being to choose." You can step through all the games, with commentary, here.

What does that mean for artificial intelligence? This is a huge victory for DeepMind and, overall, a major milestone for AI. Prior to the rise of AlphaGo, most people in the supercomputing and AI fields figured we were at least a good ten years from being able to assemble a system capable of playing Go on par with a top professional human player.

This also helps to validate DeepMind's machine learning techniques and the neural network architecture behind them. Having proven their mettle in Go, the DeepMind team could now have the confidence (and funding) to tackle more complex AI challenges.

Oh no, is there any hope for man? Well, the series is not yet done and dusted. Though AlphaGo has already clinched the overall victory, Lee Sedol won the latest game, and should he emerge victorious tomorrow, meat-based beings will at least have turned in a respectable 3-2 defeat.

Otherwise, this is indeed just one more field in which we have managed to create a machine that has become our better.

Well, it could be worse. At least Google isn't working on putting it into super-strong military killbots or anything. Yeah, about that... ®
