Intel is doing so well at AI acceleration, it's dropped $2bn on another neural-net chip upstart (third time's a charm)

Habana joins the ranks of runaway successes Movidius and Nervana

Intel has snapped up AI accelerator chip designer Habana Labs for a hefty $2bn to bolster its efforts in bringing Chipzilla-flavored machine-learning tech to cloud platforms and big biz.

The microprocessor giant is hell-bent on getting internet giants and enterprises to use its neural-network math coprocessors, rather than GPUs and specialist custom parts from Nvidia and others, and it's hoping this Habana upstart will help that dream come true.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” Navin Shenoy, executive veep and general manager of Intel’s Data Platforms Group, said on Monday.

“More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

Founded in 2016 and based in Israel, Habana has two main products: Gaudi and Goya. Gaudi is a processor that handles machine-learning training workloads: it has 32GB of memory built in, a memory bandwidth of 1TB per second, and consumes up to 200W. Habana claims it delivers 3.8X the throughput of Nvidia’s V100 chips, and can crunch through 1,650 images per second when training ResNet-50, a popular convolutional neural network architecture often used for benchmark tests.
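For a sense of how that sort of images-per-second figure is arrived at, here's a minimal sketch of a ResNet-50 training throughput measurement using PyTorch and synthetic data. It's illustrative only: the device, batch size, and step count are our own placeholders, not Habana's benchmark setup.

```python
import time
import torch
import torchvision

# Illustrative only: time ResNet-50 training steps on synthetic data and
# report images/sec. Device and batch size are placeholders, not Habana's
# published benchmark configuration.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torchvision.models.resnet50().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

batch_size, steps = 64, 50
images = torch.randn(batch_size, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (batch_size,), device=device)

model.train()
# Warm-up step so one-off setup costs don't skew the timing.
optimizer.zero_grad()
criterion(model(images), labels).backward()
optimizer.step()
if device.type == "cuda":
    torch.cuda.synchronize()

start = time.time()
for _ in range(steps):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
if device.type == "cuda":
    torch.cuda.synchronize()
elapsed = time.time() - start

print(f"~{steps * batch_size / elapsed:.0f} images/sec")
```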

Goya, on the other hand, is an AI accelerator for inference. The chip contains eight tensor processor cores and supports mixed precision from FP32 down to UINT8. Again, Habana crows that its latency figures beat those of Nvidia’s T4 inference GPU.
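To give a rough idea of what that FP32-to-8-bit path looks like in practice, here's a hedged sketch using PyTorch's dynamic quantization as a stand-in; Goya's own UINT8 toolchain is proprietary, and the qint8 type and layer choice below are our assumptions, not Habana's.

```python
import torch
import torchvision

# Hedged stand-in: shrink a trained FP32 ResNet-50's linear-layer weights to
# 8-bit integers via PyTorch dynamic quantization. Goya's UINT8 pipeline and
# tooling are its own; this only illustrates the general FP32 -> 8-bit idea.
fp32_model = torchvision.models.resnet50().eval()

int8_model = torch.quantization.quantize_dynamic(
    fp32_model,          # model to quantize
    {torch.nn.Linear},   # layer types converted to 8-bit weights
    dtype=torch.qint8,
)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(int8_model(x).shape)  # torch.Size([1, 1000])
```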

Goya can apparently process more than 15,000 images per second on ResNet-50 with one millisecond of latency at a batch size of ten. That’s roughly three times the throughput of Nvidia’s T4, which takes 26 milliseconds per batch at a batch size of 128. The T4 can get down to a millisecond of latency, too, but only at a batch size of one, at which point it manages about 1,000 images per second.
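Those numbers hinge on the usual latency-versus-throughput trade-off: bigger batches keep the silicon busier, but each batch takes longer to come back. The sketch below shows how that trade-off is typically measured, again using PyTorch on whatever hardware is to hand as a stand-in; the batch sizes mirror the ones quoted above, but the resulting figures will not match Goya's or the T4's.

```python
import time
import torch
import torchvision

# Illustrative benchmark harness: measure per-batch latency and images/sec
# for ResNet-50 inference at several batch sizes. Results depend entirely on
# the hardware it runs on; this only mirrors the measurement methodology.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.resnet50().to(device).eval()

for batch_size in (1, 10, 128):
    images = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        model(images)  # warm-up pass
        if device.type == "cuda":
            torch.cuda.synchronize()
        runs = 20
        start = time.time()
        for _ in range(runs):
            model(images)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.time() - start
    latency_ms = elapsed / runs * 1000
    throughput = runs * batch_size / elapsed
    print(f"batch {batch_size:>3}: {latency_ms:7.1f} ms/batch, {throughput:8.0f} images/sec")
```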

Habana's chips run its custom Synapse API software, which supports models written in various AI frameworks, including TensorFlow, MXNet, and PyTorch. The API sits on top of another of its own platforms, Synapse AI, which integrates Habana's kernel library and its graph compiler.
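Habana hasn't published its stack in a form we can reproduce here, but the broad pattern such toolchains follow is familiar: capture the framework-level model as a graph, then hand that graph to the vendor's compiler and runtime. The snippet below gestures at that pattern using PyTorch's own tracer as a stand-in; nothing in it is Synapse's actual API.

```python
import torch
import torchvision

# Hedged illustration: capture a framework model as a static compute graph,
# the kind of artifact a vendor graph compiler would consume downstream.
# PyTorch's tracer is used as a stand-in; this is not Habana's Synapse API.
model = torchvision.models.resnet50().eval()
example_input = torch.randn(1, 3, 224, 224)

# Trace the model with an example input to record its operations as a graph.
traced = torch.jit.trace(model, example_input)

print(traced.graph)                 # inspect the captured compute graph
traced.save("resnet50_traced.pt")   # serialized form a backend could ingest
```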

Intel has gobbled up other neural-net chip startups besides Habana in its quest for dominance. Its current AI hardware team is led by Nervana Systems co-founder and former CEO Naveen Rao. Nervana was acquired by Intel in 2016 for $350m.

Despite having been within the beast's belly for more than three years, Nervana has hit a series of delays, and has yet to bring its NNP-T1000 chip for training, and its NNP-I1000 chip for inference, to the cloud for general customers. Instead, the tech is only available to Facebook and Baidu, which have both struck partnerships to test Intel’s silicon. Nervana said the NNP-T1000 and the NNP-I1000 should be available to all next year.

Intel also bought Movidius, a designer of machine-learning chips for edge devices, in 2016; self-driving car sensor slinger Mobileye in 2017; FPGA giant Altera for $16.7bn in 2015; eASIC in 2018; and Omnitek this year.

Not all of the above are strictly artificial-intelligence related, though they are all part of Intel's drive to get Chipzilla-owned acceleration and customization into applications, especially after the Xeon Phi family, which Intel hoped would take on Nvidia's GPUs in data centers and supercomputers, was scrapped. We note the x86 goliath has also added AI-boosting instructions and features to its recent microprocessors.

Habana will remain in Israel and report to Intel’s Data Platforms Group. ®
