Humans strike back at Go-playing AI systems
Amateur fleshbag defeats synthetic in 14 of 15 games
Think that puny humans don't stand a chance when playing strategy games against an AI? You may have to think again. One person in the US beat an AI at the ancient game of Go by simply distracting it from the attack he was making, a tactic that would be unlikely to work on another meatbag.
The player, Kellin Pelrine, is apparently not quite at the top of the amateur rankings for Go, but managed to best the AI in 14 out of 15 games, according to the Financial Times. Pelrine used tactics that involved distracting the algorithm with moves in other corners of the board while he worked to surround groups of his opponent's stones.
Go is a board game in which two players alternately place black and white stones on the intersections of a 19 x 19 grid, the object being to surround more territory than your opponent. A stone, or a connected group of stones, is removed from the board when it is completely surrounded by opposing stones.
It seems that the Go-playing AI did not notice the predicament it was in, even when the encirclement was nearly complete – a situation that would have been apparent at a glance to another human player.
This is all a far cry from 2016, when Google-owned DeepMind's AlphaGo managed to beat Lee Sedol, one of the highest-ranked Go players in the world. Many people regarded that development as game over for human players, who would in future have rings run around them by ever more sophisticated machine-learning models.
Pelrine was not playing against AlphaGo, but instead against several other Go-playing AI systems, including KataGo, which is based on techniques used by DeepMind in the creation of AlphaGo Zero.
Ironically, it appears that the approach for defeating these AI systems was discovered by a computer program, which was specifically created by a team of researchers (including Pelrine) to probe for weaknesses in AI strategy that a human player could take advantage of. The program played more than a million games against KataGo in order to analyze its behavior, we're told.
The strategy found by the software is not completely trivial but neither is it difficult for human players to get to grips with, Pelrine told the FT, and it can be put to use by an intermediate player to successfully beat the Go-playing AI models.
The latest move highlights that while AI systems may appear expert at the tasks their models have been trained to perform, there can still be surprising holes in their capabilities.
"I think the 'surprising failure mode' is the real story here," software engineer and professional chess and mindsports player Alain Dekker told The Register. "Think of a Tesla car driving into the side of a van because it has mistaken its light color for the skyline."
Dekker said any highly trained AI is likely to have these blind spots, and that piling on more and more complexity to cover them is partly why such systems are so hard to get working well – and why it might take longer than anticipated to get driverless cars on our roads.
The research team told the FT that the exact cause of the blind spot in the AI Go players' strategies is a matter of conjecture, but it is likely the approach used by Pelrine is so rare that the algorithm does not recognize it. If so, it seems likely that updated models trained to recognize this strategy may in future not be so easily fooled.
A paper detailing the adversarial tactics used to discover the winning strategy can be found here [PDF]. ®