Montezuma's Revenge still too tough for AI, new Google Brain office, and other bits and bytes
A wonderful week in machine learning
Roundup Hello, here are some quick AI announcements from this week. A researcher reminds us to be wary of the hype around Montezuma's Revenge, there are new framework updates from Google and Microsoft, and Google Brain has opened a new office in Amsterdam.
Montezuma’s Revenge isn’t solved yet: OpenAI and DeepMind researchers have boasted about achieving record scores on the old Atari game using AI agents, but the challenge isn’t over.
In a blog post, Arthur Juliani, a reinforcement learning researcher at Unity, explains why Montezuma’s Revenge is such a difficult game to teach machines. The rewards are sparse, and although OpenAI and DeepMind have reached decent scores, that’s because both methods lean on human imitation rather than learning from scratch.
Both DeepMind and OpenAI trained their bots by having them copy human gameplay from videos. Learning through imitation lets an agent memorize good sequences of moves; it would not achieve the same high scores if it had to learn on its own.
“Rather than developing a general-purpose solution to game playing (as the two DeepMind papers titles suggest), what has really been developed is an intelligent method for exploiting a key weakness in Montezuma’s Revenge as an experimental platform: its determinism,” he explained.
“Every time a human or agent plays Montezuma’s Revenge, they are presented with the exact same set of rooms, each containing the exact same set of obstacles and puzzles. As such, the simple memorization of the movements through each room is enough to lead to a high-score, and the ability to complete the level.”
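Juliani’s determinism point can be illustrated with a toy example (entirely hypothetical, not the setup from either paper): in a deterministic environment, replaying one memorized action sequence scores perfectly every time, but the same memorized "policy" collapses to chance level the moment the environment is randomized.

```python
import random

# Toy environment: the agent must pick the correct door out of four.
# Deterministic mode mimics Montezuma's Revenge, where the rooms and
# puzzles are identical on every playthrough, so a sequence copied
# from a human demonstration always works.

def play(memorized_action, correct_door):
    """Return 1 if the memorized action opens the correct door, else 0."""
    return 1 if memorized_action == correct_door else 0

def evaluate(episodes, deterministic, rng):
    demo_action = 2  # action "memorized" from a human demonstration
    score = 0
    for _ in range(episodes):
        correct = 2 if deterministic else rng.randrange(4)
        score += play(demo_action, correct)
    return score / episodes

rng = random.Random(0)
print(evaluate(1000, deterministic=True, rng=rng))   # 1.0 -- memorization suffices
print(evaluate(1000, deterministic=False, rng=rng))  # roughly 0.25 -- chance level
```

The gap between the two numbers is the crux of the critique: a perfect score on a fixed layout demonstrates memorization, not general game-playing ability.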
So, don’t believe all the hype out there.
New TensorFlow goodies: TensorFlow 1.9.0 is out and includes new features to work more seamlessly with the Keras API.
It also sports an improved Python interface, fixes a number of bugs, and adds some new Estimator models. Download it here.
Google Brain Amsterdam: Google Brain has launched a new office in Amsterdam.
It has started looking for research scientists to work in natural language processing, search, hardware, mobile compilers, you name it.
“Much of our work is best understood as part of the 'deep learning' subfield of machine learning, but we are interested in any methods — such as evolutionary computing, novelty search or reinforcement learning — that advance the capabilities of machine intelligence. We have resources and access to projects impossible to find elsewhere,” it boasted in a job advert.
Some researchers from OpenAI and DeepMind, including Tim Salimans, Durk Kingma, Nal Kalchbrenner, and Lasse Espeholt, have joined so far.
Reversible generative models: We’re obsessed with using machine learning to augment faces with filters. OpenAI has released Glow, a model that can easily blend two faces together or alter certain attributes and then reverse all the changes.
You can manipulate Glow to make a person smile more or less, look older or younger, give them blonder hair, narrower or wider eyes, and even whack on a beard. The more amusing option creates a series of images that increasingly maps one person’s face onto another’s. All these changes can be reverted to retrieve the original input.
It’s all down to clever encoding. The inputs are encoded into latent vectors, and an average latent vector is computed between them, whether the inputs are two people’s faces or images with and without an attribute such as blond hair.
The vector direction between the two can be manipulated to add as little or as much of a certain feature as you like. The idea has been around for a while, and researchers at OpenAI have made their model more efficient by simplifying the architecture.
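The latent arithmetic can be sketched in a few lines of plain Python (a deliberately minimal stand-in, not OpenAI's code: the "encoder" here is just an element-wise affine map, where Glow uses a deep invertible flow). The key property is the same: encoding is exactly invertible, so edits made in latent space can always be undone to recover the original input.

```python
# Hypothetical invertible "encoder": z = (x - SHIFT) / SCALE, an
# element-wise affine map. Because every step is invertible, decoding
# the unmodified latent reproduces the input, mirroring Glow's
# reversibility.

SCALE, SHIFT = 2.0, 0.5

def encode(x):
    return [(v - SHIFT) / SCALE for v in x]

def decode(z):
    return [v * SCALE + SHIFT for v in z]

def interpolate(z_a, z_b, alpha):
    """Move a fraction alpha of the way from z_a towards z_b in latent space."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

face_a = [0.1, 0.4, 0.9]  # stand-ins for two encoded face images
face_b = [0.8, 0.2, 0.3]

z_a, z_b = encode(face_a), encode(face_b)
blended = decode(interpolate(z_a, z_b, 0.5))  # halfway between the two "faces"
restored = decode(z_a)                        # round trip recovers face_a
```

Sweeping `alpha` from 0 to 1 is what produces the series of images morphing one face into another; moving along an attribute direction (such as "smiling minus not smiling") instead of between two faces gives the smile and hair edits described above.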
Glow was trained on 30,000 images of faces using five machines with eight GPUs each. The inference stage takes about 130 milliseconds to generate a 256x256 image using an Nvidia 1080 Ti card.
Updated Microsoft framework: Microsoft has launched ML.NET 0.3; the framework now allows users to export models to run on Windows 10 devices.
It supports ONNX (Open Neural Network Exchange), an open-source collaboration between software and hardware companies including Facebook, Amazon, Microsoft, ARM, and Huawei. The goal is to make it easier for developers to move models between different frameworks like Caffe2, PyTorch, or Microsoft’s Cognitive Toolkit.
“With this release of ML.NET v0.3 you can export certain ML.NET models to the ONNX-ML format. ONNX models can be used to infuse machine learning capabilities in platforms like Windows ML which evaluates ONNX models natively on Windows 10 devices taking advantage of hardware acceleration,” it said.
The third release of ML.NET also makes it easier to use binary and multiclass classification and regression algorithms. The first version of the framework was introduced at Microsoft's Build conference this year.
You can read more about the updates and see code samples here. ®