Sydney, Australia, today hosted the first day of a competition pitting humans against artificial intelligence, with the two competing at the wildly popular game Angry Birds.
The challenge was hosted by the Australian National University's Artificial Intelligence Group, whose Associate Professor Jochen Renz realised earlier this year that Angry Birds poses several of the problems researchers in the field are tackling.
“Angry Birds looks like an easy problem, but is actually very hard,” Renz told The Register. “You need to solve computer vision, learning and diagnosis problems to play the game. Those are some of the different sub-groups of artificial intelligence, so we [from the group] were all able to work together on the problem.”
Renz also felt a competition would advance research on the project, an idea he conceived in June but was only able to get under way five weeks ago due to unexpected complexities in preparing for the event. Sixteen teams entered, but half dropped out, stumped by the challenge.
Renz and his colleagues have created an Angry Birds-playing app, dubbed NAÏVE, that offers a model of the game and can play it passably well by simulating the mouse drags and clicks that launch birds. The app is a Chrome extension, and the competition takes place within the Chrome Web Store version of the game. Competitors can build on NAÏVE or create their own code.
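NAÏVE's source had not been released at the time of writing, so purely as an illustration, here is a hypothetical sketch of the core geometric step such an agent needs: translating a chosen launch angle and power into the screen coordinate where a simulated drag should release the slingshot (the slingshot's pixel position is assumed to come from the vision component; all names and the `max_stretch` parameter are invented for this example).

```python
import math

def drag_release_point(slingshot_x, slingshot_y, angle_deg, power, max_stretch=80):
    """Hypothetical helper: convert a desired launch angle (degrees) and
    power (0.0-1.0) into the pixel where a simulated mouse drag should end.

    Dragging away from the slingshot, opposite the direction of flight,
    stretches the band; releasing the button there fires the bird.
    Screen y grows downward, so pulling 'down' means adding to y.
    """
    stretch = max_stretch * power
    theta = math.radians(angle_deg)
    release_x = slingshot_x - stretch * math.cos(theta)  # pull back
    release_y = slingshot_y + stretch * math.sin(theta)  # pull down
    return round(release_x), round(release_y)

# For a slingshot at (200, 300), a full-power 45-degree shot releases
# down and to the left of the sling:
print(drag_release_point(200, 300, 45, 1.0, max_stretch=100))  # → (129, 371)
```

An agent would then hand this coordinate to whatever input-injection mechanism it uses (in NAÏVE's case, synthetic mouse events inside Chrome).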
Renz said the team approached Rovio for help to conduct the project, received no response, and has therefore reverse engineered some aspects of the game to make the extension work.
The aim of the challenge is not to compute the best possible trajectory, fire off a bird in the knowledge it will do a lot of damage, then repeat the process in the hope that the birds on offer will do the job. Instead, Renz hopes participants will find the best way to solve the problem presented by each level, even if that means some birds don't achieve high scores but do useful damage that advantages subsequent missiles.
At the time of writing, the first round of contestants were close to completing their allotted hour of pig-pounding action. Several had successfully completed the ten custom levels created for the event.
Tomorrow, those scores will be put to the test against human combatants, to see whether the best-performing AIs can match it with people.
Upon completion of the human vs. AI bout, Renz will release NAÏVE’s source code. We’ll bring you that link, and the results, as soon as they come to hand. Or should that be trotter? ®