Intel, Accenture put bow on open-source AI kits to push adoption

Think ML is hard to implement? That excuse is getting harder to make

Intel and IT consultancy Accenture are hoping to help businesses adopt AI applications more quickly with bundles of open-source software designed to boost performance and lower development costs.

The open-source AI reference kits, announced Tuesday, come with the software components necessary — including pre-trained deep learning models and various code optimizations — for businesses to implement AI applications in the cloud, at the edge, and on-prem.

Starting out, the kits developed by Intel and Accenture target four application areas: customer chatbots, visual quality monitoring in factories, document indexing, and utility asset health monitoring. They are available to download for free on GitHub.

"These reference kits, built with components of Intel's end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions," said Wei Li, vice president and general manager of AI and analytics at Intel.

The reference kits are part of Intel's larger efforts to whittle away at Nvidia's dominance in the AI computing space, and they rely on what is essentially Intel's answer to the GPU giant's CUDA parallel programming platform that has been pervasive among developers.

Intel's answer to CUDA is oneAPI, which the chipmaker calls an "open, standards-based, unified" programming model that's supposed to make it easier for developers to target and optimize applications to run on a variety of silicon, including, in some cases, processors from Intel's rivals.

OneAPI has many facets, including a variety of libraries, toolkits, and optimizations that Intel and Accenture are using as the foundation for these AI reference kits.

For instance, the quality control kit for factories makes use of a computer vision model that was trained with the Intel AI Analytics Toolkit, which includes Intel distributions of Python and Modin libraries, plus optimizations for the TensorFlow and PyTorch frameworks, among other things.
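To give a flavor of what those optimizations look like in practice, here is a minimal sketch, not lifted from the kit itself, of applying Intel Extension for PyTorch to an off-the-shelf vision model. The ResNet-18 model and the input shape are our own illustrative assumptions:

    # Minimal sketch: applying Intel Extension for PyTorch (ipex) to a vision model.
    # The model choice and input shape are illustrative assumptions, not the kit's.
    import torch
    import torchvision
    import intel_extension_for_pytorch as ipex

    model = torchvision.models.resnet18(weights=None)
    model.eval()

    # ipex.optimize applies CPU-friendly operator fusion and memory-layout tweaks
    model = ipex.optimize(model)

    with torch.no_grad():
        batch = torch.randn(1, 3, 224, 224)  # placeholder image batch
        output = model(batch)
    print(output.shape)

The pitch is that the kits bundle this kind of plumbing, along with the trained models, so developers don't have to wire it up themselves.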

Intel said its optimizations, which also make use of the company's inference-focused OpenVINO toolkit, were shown to deliver 20 percent faster training and 55 percent faster inference for the quality control model compared to Accenture's stock implementation.

For the customer chatbot kit, Intel said it achieved 45 percent faster inference performance for natural language processing models compared to Accenture's stock implementation, thanks to its use of OpenVINO and PyTorch optimizations.
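On the inference side, the OpenVINO runtime follows a read-compile-infer pattern. The sketch below is our own, with a hypothetical IR model file and input shape purely for illustration:

    # Minimal sketch: running inference with the OpenVINO runtime on CPU.
    # The model filename and input shape are hypothetical, for illustration only.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("intent_classifier.xml")    # hypothetical IR file
    compiled = core.compile_model(model, device_name="CPU")

    dummy_input = np.random.rand(1, 128).astype(np.float32)  # placeholder input tensor
    result = compiled([dummy_input])[compiled.output(0)]
    print(result.shape)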

While Intel's main hook is improved training and inference performance for deep learning models, the company promised its optimizations will also help businesses save money. Unfortunately, Intel didn't provide any examples of this, but given the emphasis on speeding up development, we're guessing the purported savings would mostly come from that.

If the idea sounds interesting but the four kits announced today aren't applicable to your business, Intel said more open-source AI software bundles are set to come out over the next 12 months. ®
