Analysis Google is thundering on with its mission to "democratize AI" with its Cloud AutoML platform – even though it doesn’t quite live up to the hype.
It was first introduced earlier this year, and was marketed as a tool for businesses that want to use machine learning but have little to no expertise with the technology. If you squint, it looks like a drop-down menu of AI software you can select to deploy – the menu offering image recognition, language parsing, and translation.
These three services are pretty straightforward machine-learning technologies: plenty of cloud platforms and software libraries provide them. However, Cloud AutoML sounds exciting because, according to Google, it uses “neural architecture search” and “transfer learning” to automatically build custom neural networks for you.
So, if you want a thing that, say, tells vultures from seagulls, select the image recognition option, give it a big bunch of bird pics, and it will create a special model to tell them apart just for you.
Let's take a closer look at that underlying tech, though.
Neural architecture search has been described as AI designing AI. It is machine-learning software that attempts to automatically build new neural networks to your requirements.
Transfer learning is the ability to train a model for one task, and then use it to solve a closely related problem, such as teaching code to master one video game and then making it play another title.
It all sounds impressive, yet in reality, these techniques can’t easily both be used on the same problem, as Rachel Thomas, cofounder of fast.ai and an assistant professor teaching data science at the University of San Francisco in the US, explained this week.
Transfer learning relies on building general neural networks that can apply their knowledge to a range of tasks. Meanwhile, neural architecture search involves developing a unique architecture specifically for a particular dataset or problem. They don't quite fit together.
Here's how Thomas put it:
When neural architecture search discovers a new architecture, you must learn weights for that architecture from scratch, while with transfer learning, you begin with existing weights from a pre-trained model.
In this sense, you can’t use neural architecture search and transfer learning on the same problem: if you’re learning a new architecture, you would need to train new weights for it; whereas if you are using transfer learning on a pretrained model, you can’t make substantial changes to the architecture.
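The tension Thomas describes can be sketched in a few lines of Python. This is a toy illustration, not Google's actual system: the "pretrained model" is just a list of NumPy weight matrices with made-up shapes, and the "architecture search" simply emits a hypothetical new layer structure. The point it demonstrates is the one above – transfer learning reuses existing weights and only retrains a new head, while a freshly discovered architecture has different layer shapes, so the old weights can't simply be carried over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network: two weight matrices
# (hypothetical shapes; a real model would come from a model zoo).
pretrained = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]

def transfer(weights, num_new_classes):
    """Transfer learning: keep the body's weights as-is,
    re-initialise only a new task-specific head."""
    body = [w.copy() for w in weights[:-1]]
    head_inputs = weights[-1].shape[0]
    head = rng.normal(size=(head_inputs, num_new_classes))  # trained from scratch
    return body + [head]

def searched_architecture():
    """Architecture search: emits a *new* layer structure. Its shapes
    need not match the pretrained model's, so the pretrained weights
    cannot be reused – they must be learned from scratch."""
    return [(8, 32), (32, 32), (32, 4)]  # hypothetical discovered shapes

new_model = transfer(pretrained, num_new_classes=2)
body_reused = np.array_equal(new_model[0], pretrained[0])   # True

old_shapes = [w.shape for w in pretrained]
shapes_match = searched_architecture() == old_shapes        # False
```

In other words, the two techniques pull in opposite directions: one banks on the old weights still being valid, the other throws the old shapes away.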
It’s unclear exactly how Google employs both these methods for its Cloud AutoML image recognition offering, aka its vision API – and the web goliath refused to talk about it earlier this year. If you put aside Google's insistence that Cloud AutoML automatically crafts unique models for you, then you're just left with bog-standard AI tools.
In other words, without the AI-designing-AI hype, it's another online image recognition service.
“I’ve used AutoML, the cloud vision API, just to explore what it does, although I don’t know of anyone that has used it in production," Thomas told The Register earlier today.
"It does a variety of well-studied computer vision topics, such as identifying the objects in a photo and where they are located. This could be a useful service, but these techniques are widely implemented and not as unique as Google implies. Also, consumers can choose from a variety of such services, or coders can build such a service with open source tools."
“AutoML Natural Language helps you automatically predict custom text categories specific to domains our customers desire,” Google AI's chief scientist Fei-Fei Li explained. "And with AutoML Translation you can upload translated language pairs to train your own custom translation model."
It’s unclear if these two new services also use transfer learning and neural architecture search. We asked Google for more details, although we are not holding our breath. In the meantime, Thomas argued the web giant should at least be more transparent about how well the tech performs.
“I just want to know how well it works, and how it compares to other options," she told us.
"For instance, I would love to see Google share performance comparisons of their service on specific problems. It’s bad for everyone when AI companies overhype their work and make misleading promises. As consumers end up disappointed, many people may conclude that the whole field of AI is a fraud.”
The whole industry is hellbent on "democratizing AI" – the idea that making neural networks more accessible and easier to use is a good thing. Sometimes making code and APIs public is genuinely helpful, and sometimes it’s just a sneaky sales tactic to lure in and lock down developers.
“I do think it is good and useful to create tools that are easier to use and increase access," Thomas told El Reg. "However, I think companies need to be clear about what these tools can and can not do.
"The tech industry has a history of putting an idealistic spin on their sales, for example, they’re connecting the world, when really, they’re selling access to your data to advertisers.
"I think it’s particularly pronounced in AI for a few reasons. There is often extreme arrogance. AI experts are constantly being told how brilliant and rare they are. Consumers are particularly vulnerable, as most feel intimidated by AI, and aren’t able to critically inspect what they’re being sold.” ®