AI gets carded, China and US agree on robot wars, Amazon claims Rekognition is just fine
And more from the world of machine learning
Roundup Here's a quick summary of recent AI news to kickstart your week beyond what we've already reported.
DeepMind sets its eyes on a new game: With Go, chess, old Atari games, and StarCraft under its belt, DeepMind is taking on a new challenge: the card game Hanabi.
Unlike previous games, Hanabi is a cooperative game and players have to work together to succeed. The goal is to collect multiple different series of cards in sequence to set off a make-believe firework show (Hanabi means fireworks in Japanese).
Each player is dealt five cards, but they cannot look at them. Their teammates, however, can reveal information about those hidden cards by spending hint tokens. The supply of tokens is limited, so everyone has to be thrifty with their clues.
Alternatively, players can discard a card from their hand, or play one by setting it down to extend the chain of cards that needs to be collected. Each colour's chain has to start with its number one, then the number two, and so on until all the number fives are played to complete the sequences.
There are a number of ways to win and lose the game, and every round can be scored differently depending on the game (here are the rules in more detail).
Anyway, DeepMind has teamed up with Google Brain to create a virtual environment that will help developers train Hanabi bots. Hanabi is challenging for two reasons: it requires cooperation and communication, and it is an imperfect-information game, unlike Go or chess, where the complete state of the game can be observed by studying the board.
“We argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground,” the researchers wrote in a paper.
"We believe developing novel techniques capable of imbuing artificial agents with such theory of mind will not only be crucial for their success in Hanabi, but also in broader collaborative efforts, and especially those with human partners."
It isn’t the first time researchers have tackled Hanabi with AI, as one researcher pointed out on Twitter.
I completely forgot that @chrisamaphone and @Yawgmoth46 wrote a paper on Hanabi-playing as an AI challenge in 2017 https://t.co/9NgyzRLtft and @togelius ran a Hanabi competition in 2018. @DeepMindAI’s paper cites both of these but also claims the problem is novel ¯\_(ツ)_/¯— Mark O. Riedl (@mark_riedl) February 5, 2019
Amazon insists police aren’t misusing Rekognition: Amazon has borne the brunt of the whole hoo-ha surrounding the harmful impacts of facial recognition, thanks to its Rekognition tool.
Unhappy with all the negative attention, Michael Punke, VP of Global Public Policy at Amazon Web Services, published a statement rebutting recent criticism that Rekognition was inaccurate, biased, and being misused.
“In recent months, concerns have been raised about how facial recognition could be used to discriminate and violate civil rights. You may have read about some of the tests of Amazon Rekognition by outside groups attempting to show how the service could be used to discriminate,” he said.
“In each case, we’ve demonstrated that the service was not used properly; and when we’ve re-created their tests using the service correctly, we’ve shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes.”
Punke insisted that the e-commerce giant had “not received a single report of misuse by law enforcement.” He also said police should review any decisions made by the software, and encouraged forces to be transparent about how it’s being used and about any potential mishaps. Amazon may issue these guidelines, but its customers can use the service however they see fit.
“We support the calls for an appropriate national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology,” he concluded.
AI and the military in China: Military use of AI is inevitable, Chinese officials said, according to a report published by the Center for a New American Security, a Washington DC-based security think tank.
Gregory Allen, a senior fellow at CNAS, wrote about some of the opinions expressed by Chinese government officials. Zeng Yi, a senior executive at China’s third-largest defence company, is quoted as saying: “In future battlegrounds, there will be no people fighting.” Zeng said using AI in combat was “inevitable”: “We are sure about the direction and that this is the future.”
The same sentiments have been echoed in the US. Alexander Kott, chief of the Network Science Division of the US Army Research Laboratory, has previously said that humans will play a much smaller part in future battlefields.
Zeng predicted all this would happen by 2025, while Kott believed it’d arrive in 20 years’ time. Great, so both AI superpowers believe machines will be fighting in future wars, and both are actively pursuing the idea. So, definitely no AI arms race then, eh?
You can read the full report here.
AI and art: AI continues to creep into the art world: Sotheby’s, one of the largest auction houses in the world, is selling a portrait made by a neural network.
The auction on 6 March will feature portraits painted with computer code by Mario Klingemann, currently an artist in residence at Google. His eerie work was made by generative adversarial networks trained on faces in paintings from the 17th to 19th centuries.
The result is pretty disturbing: a mishmash of two ghostly faces with three eyes but one body. Check them out here.
It’s not the first time AI art has been auctioned off. The first one sold for a hefty $432,500 back in October. ®