AI pioneer suggests trickle-down approach to machine learning
ML master pitches trimmed-down deep learning for the masses
In 2015, modern AI trailblazer Andrew Ng's recipe for success was to go big on neural networks, data, and monolithic systems. Now that recipe has created a problem: the technology is dominated by only a few rich companies with the money and headcount to build such immense systems.
But AI doesn't need to remain the preserve of those few rich companies, according to Ng, the Baidu and Google Brain alum (and current CEO of software maker Landing.AI). During a session at Nvidia's GPU Technology Conference last week, he instead pitched an approach aimed at making machine learning more inclusive and open.
Ng suggested building better analytical AI tools and applying domain knowledge, with the goal, essentially, of doing more with less. The key to AI accessibility, he argued, is being able to extract patterns and trends from much smaller datasets.
"We know that in consumer internet companies you may have a billion users and a giant dataset. But when you go to other industries, the sizes are often much smaller," said Ng.
Ng pointed to building AI systems at sites such as hospitals, schools, and factories, which lack the resources and giant datasets needed to develop and train AI models.
"AI is supposed to change all industries. We're not yet seeing this happen at the pace we would like, and we need data-centric AI tools and principles to make AI useful for everyone... not just to large consumer internet companies," Ng said.
For instance, he cited the thousands of $1m-to-$5m projects in places like hospitals, which typically operate under budget crunches and could move to smaller, custom AI systems to improve their analytics.
Ng said he had seen manufacturing environments with just 50 images available on which to build a computer-vision-based inspection system to root out defective parts.
"The only way for the AI community to get these mainly very large numbers of systems built is you can start to build vertical platforms that aggregate all of these use cases. That makes it possible to enable the end customer to build the custom AI system," Ng said.
One such step is better "data preparation", as opposed to data cleaning, to improve the machine-learning system iteratively. The idea isn't to improve all the data in a large dataset, but to apply an error-analysis technique that identifies a subset, or slice, of the data, which can then be improved.
"Rather than trying to improve all the data, which is just too much, maybe you know you want to improve this part of the data, but let's leave the others. You can be much more targeted," Ng said.
For example, if a system struggles to flag a particular kind of defective part in an image, error analysis can zoom in on that weakness and steer the acquisition of more targeted, specific data, which is a better way to train systems. That small-data approach is more efficient than broadly acquiring more data of every kind, which can be expensive and resource-intensive.
"This allows you to go out to a much more targeted data acquisition process where you go and say, 'Hey, let's go to the manufacturing plant and get a lot more pictures,'" Ng said, adding that consistent and efficient labeling is a big part of the process.
Ng gave a specific example of error analysis in speech recognition systems that must pick out human speech over car noise in a soundbite. Engineers, he said, were tempted to build a separate system to detect the car noise and then filter it out.
A more efficient approach is to gather more data covering human speech with background car noise, use error analysis to slice out the car-noise examples the model gets wrong, and then apply targeted data augmentation and generation to improve performance on that problematic slice.
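As an illustration of that targeted augmentation step, here is a minimal sketch that mixes recorded car noise into clean speech clips at a few signal-to-noise ratios, so only the weak slice is expanded. The NumPy-array waveform format, the SNR values, and the function names are assumptions for the example, not Ng's actual pipeline.

```python
# Sketch: grow the problematic "speech over car noise" slice by overlaying
# car-noise recordings onto clean speech at controlled SNRs.
# Waveforms are assumed to be float NumPy arrays of equal length.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Overlay noise onto speech at the requested signal-to-noise ratio (dB)."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def augment_slice(speech_clips, noise_clips, snrs=(0, 5, 10)):
    """Generate new training examples only for the weak slice."""
    rng = np.random.default_rng(0)
    augmented = []
    for clip in speech_clips:
        noise = noise_clips[rng.integers(len(noise_clips))]
        augmented.append(mix_at_snr(clip, noise, snr_db=rng.choice(snrs)))
    return augmented
```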
Conventional big-data approaches to AI still have their place, but the error-analysis approach is better suited to limited datasets. "You decide what you want to improve and associate the cost of more data acquisition – [whether] it is reasonable relative to the potential benefit," Ng said.
It may take a decade and thousands of research papers to flesh out a consistent data-centric model for deep learning, he said.
"We're still in the early phases of figuring out the principles as well as the tools for entering the data systematically," Ng said, adding: "I'm looking forward to seeing lots of people do that and celebrate their work as well." ®