Former Intel AI boss Naveen Rao is now counting the cost of machine learning, literally

MosaicML: delving into the details

A former head of artificial intelligence products at Intel has started a company to help businesses cut the overhead costs of their AI systems.

Naveen Rao, CEO and co-founder of MosaicML, previously led Nervana Systems, which was acquired by Intel for $350m. But like many Intel acquisitions, the marriage didn't pan out, and Intel killed the Nervana AI chip last year, after which Rao left the company.

MosaicML's open source tools help implement AI systems around a chosen priority: cost, training time, or speed-to-results. They analyze an AI problem against the neural network's settings and the underlying hardware, then chart an efficient path to near-optimal settings while cutting electricity costs.

One component is Composer, which provides the building blocks on which AI applications can be trained efficiently. MosaicML developed these methods after months of research into common settings for computer vision models such as ResNets and natural language processing models such as Transformers and GPT.
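For a flavor of how such building blocks fit together, here is a minimal sketch in the style of Composer's published quickstart examples. The names used (Trainer, mnist_model, ChannelsLast, LabelSmoothing) follow MosaicML's own examples, but treat the exact signatures as assumptions rather than a definitive API reference:

```python
# Hypothetical sketch, modeled on Composer's quickstart: speed-up methods
# are handed to the trainer, which modifies the model and training loop
# to apply them.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from composer import Trainer
from composer.algorithms import ChannelsLast, LabelSmoothing
from composer.models import mnist_model  # assumed helper model

train_dataloader = DataLoader(
    datasets.MNIST("data", download=True, train=True,
                   transform=transforms.ToTensor()),
    batch_size=128,
)

# Each algorithm is one efficiency method; a combination of them is what
# MosaicML calls a training "recipe".
trainer = Trainer(
    model=mnist_model(num_classes=10),
    train_dataloader=train_dataloader,
    max_duration="2ep",  # duration expressed in epochs
    algorithms=[ChannelsLast(), LabelSmoothing(smoothing=0.1)],
)
trainer.fit()
```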

But developers ultimately need to choose the best approach. That is where the second component, Explorer, steps in. The tool has a visual interface that provides fine-grained detail on trade-offs such as quality, training time, and cost, and users can filter results by hardware type, cloud, and technique.

"We change the learning algorithms themselves to make them use less compute to arrive at the result," Rao told The Register.

AI systems can be inefficient and costly, and more thought needs to be put into economizing machine learning, Rao said. "We find Nvidia GPUs give us the fastest and easiest way to get going. We plan on adding support for other chips in the future," he explained.

The library works within PyTorch right now, and support for TensorFlow will be added later, Rao said.
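As an illustration of what working within PyTorch can look like, Composer also exposes a functional interface that patches an ordinary torch.nn.Module in place. The snippet below follows the pattern in MosaicML's README, but the module path and function name should be treated as assumptions:

```python
# Hypothetical sketch: applying one efficiency method to a plain PyTorch
# model via Composer's functional-style API.
import torchvision.models as models
import composer.functional as cf  # assumed module path

model = models.resnet18()

# Replace strided convolutions with anti-aliased "blur" pooling, one of
# the training efficiency methods covered by MosaicML's research.
cf.apply_blurpool(model)

# The result is still a regular torch.nn.Module, so it drops into any
# existing PyTorch training loop unchanged.
```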

AI isn't a one-size-fits-all approach, and inefficiencies in both software and hardware are considerations when accounting for total cost of ownership, said Dan Hutcheson, analyst at VLSI Research.

"The amount of computation required to train the largest models is estimated to be growing >5x every year, yet hardware performance per dollar is growing at only a fraction of that rate," MosaicML said in a blog, citing a 2018 study by OpenAI.

OpenAI, in a separate study last year, said algorithmic progress has delivered greater AI speedups than gains in hardware efficiency.
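To make the gap concrete, here is a back-of-the-envelope sketch. The >5x demand growth comes from the MosaicML quote above; the hardware improvement rate is a hypothetical figure chosen purely for illustration:

```python
# Illustrative arithmetic only: how training costs compound when compute
# demand outgrows hardware performance-per-dollar.
demand_growth = 5.0    # compute for the largest models: >5x per year (quoted above)
perf_per_dollar = 1.5  # assumed yearly gain in hardware perf/$ (hypothetical)

years = 3
cost_multiplier = (demand_growth / perf_per_dollar) ** years
print(f"~{cost_multiplier:.0f}x the training bill after {years} years")  # ~37x
```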

Many systems use racks of power-hungry Nvidia GPUs for machine learning. Rao is a proponent of that kind of distributed approach, with AI processing split across a network of cheaper chips and commodity components, including low-cost DDR memory and PCI-Express interconnects.

In a tweetstorm last week, he took a jab at monolithic AI chips, such as the WSE-2 produced by Cerebras Systems, arguing they are inefficient in terms of performance-per-dollar.

The distributed approach reflects a fundamental misunderstanding of the cost of doing AI at the chip level and of how performance scales, Cerebras CEO Andrew Feldman told The Register.

"The real waste - it's got nothing to do with the individual chip level. To get 10x the performance you're going to spend 100 or 1,000 times the power," Feldman said.

Feldman invoked Moore's Law, saying "What Intel showed over decades is that you can build a great business if you can keep your prices flat and doubling performance every three to four years."

One angel investor in MosaicML, Steve Jurvetson of Future Ventures, floated in a tweet the idea of a "Mosaic's Law" corollary for measuring algorithmic advances per dollar spent.

Venture capitalists have poured $37m into MosaicML, with investors including Lux Capital, DCVC, Playground Global, AME, Correlation and E14. ®
