Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs

Chainer makes TensorFlow look like treacle, but until this week it didn't speak Xeon

Ever heard of “Chainer”, the open-source framework for creating neural networks?

I hadn't either, until yesterday, when Intel decided to give it a big hug, taking Chainer from being big in Japan, where its parent company Preferred Networks works with the likes of Toyota on secret projects, to rather greater prominence.

Chainer can use the help: the tool launched in 2015 and was open-sourced last year, and its GitHub repo is busy but hardly the most lively place on the internet.

That's probably about to change, because Intel has decided Chainer is a fine way to develop AI workloads that create demand for its silicon. Doubly so if it can be taught to speak fluent Xeon, instead of only chatting to NVIDIA GPUs as was previously the case. The deal between Intel and Preferred Networks means Chainer will from now on be developed for Intel architectures, with changes shared on Intel's GitHub repo for the project.

Why should we care that Intel's decided to give Chainer a leg-up?

On the purely technical side of things, it looks like good gear. Preferred Networks CEO Toru Nishikawa yesterday showed Intel's AI Day in Tokyo the slide below, on which he claims Chainer makes Google's TensorFlow look like it is working in treacle when measured on training time for ImageNet classification. Nishikawa-san also said Chainer had tied in a recent Amazon.com contest to train robots to pick stock.

Chainer vs. comparable AI tools

So you could do worse than have a look if you are thinking about neural networks.
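If you do take that look, the programming model is plain Python. Chainer's signature trick is "define-by-run": the computation graph is recorded as the forward pass executes, so ordinary Python control flow just works, rather than a static graph being compiled up front. Below is a minimal sketch of a single training step; the layer sizes and the random batch are illustrative stand-ins, not anything from Preferred Networks.

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers

# A tiny two-layer network. Chainer records the graph as the forward
# pass runs ("define-by-run"), so the model is just ordinary Python.
class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__(
            l1=L.Linear(784, 100),
            l2=L.Linear(100, 10),
        )

    def __call__(self, x):
        h = F.relu(self.l1(x))
        return self.l2(h)

model = L.Classifier(MLP())    # wraps the net with a softmax cross-entropy loss
optimizer = optimizers.Adam()
optimizer.setup(model)

# Dummy batch standing in for real training data.
x = np.random.rand(32, 784).astype(np.float32)
t = np.random.randint(0, 10, size=32).astype(np.int32)

loss = model(x, t)     # forward pass builds the graph
model.cleargrads()
loss.backward()        # backprop through the recorded graph
optimizer.update()     # apply the gradients
```

Until now the fast path for that code was an Nvidia GPU, via CuPy arrays and model.to_gpu(); the point of the Intel deal is to make Xeons an equally painless target.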

But the Chainer tie-up is also worth considering because it shows how Intel builds markets and will try to make itself the dominant player in Artificial Intelligence, a market widely assumed to be on the cusp of a boom.

It's also a market that is currently keen on GPUs. So Intel wants to build a portfolio of products to make Xeons the heart of AI, not GPUs.

Intel's not, however, using SciFi definitions of AI. Amir Khosrowshahi, former CTO of Nervana and now holder of the same position in Intel's new AI Group, prefers to describe AI as involving deep statistical analysis of very closely-observed events so that we can infer likely outcomes with satisfying precision.
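Strip away the hardware and that description is just statistics at scale. As a toy illustration, entirely hypothetical and nothing to do with Intel's actual workloads, here is "inferring likely outcomes from closely-observed events" in a few lines of Python: fit a logistic model to observed events by gradient descent, then predict the probability of the outcome for a new event.

```python
import numpy as np

rng = np.random.RandomState(0)

# 1,000 hypothetical "closely observed events", each described by three
# measurements, plus whether the outcome of interest occurred (0 or 1).
X = rng.randn(1000, 3)
true_w = np.array([1.5, -2.0, 0.5])
y = (1.0 / (1.0 + np.exp(-X @ true_w)) > rng.rand(1000)).astype(np.float64)

# Fit a logistic model by plain gradient descent on the log-loss:
# "deep statistical analysis" in miniature, brute-forced on a CPU.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted outcome probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step

# Infer the likely outcome of a new, unseen event.
new_event = np.array([0.2, -1.0, 0.3])
print("probability of outcome:", 1.0 / (1.0 + np.exp(-new_event @ w)))
```

Scale the same arithmetic up to millions of parameters and billions of observations and you have the workloads Intel wants its silicon underneath.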

Modern hardware can do that analysis, and can wrangle the mountains of data needed to make the analysis useful, but it mostly brute-forces the job. Dedicated hardware will speed things up, and that's where Intel is going: building and/or buying that hardware, and building the software ecosystem to match.

You may have seen this movie before, when virtualization was obviously the next big thing and Intel added extensions to its silicon so it would be especially good at hosting multiple VMs. Chipzilla's also done things like bridge the Lustre and HDFS file systems so that HPC clusters running Lustre could run Hadoop, which relies on HDFS. Intel wins either way: it's invested in Hadoop provider Cloudera and has lots of HPC customers who didn't scream when Chipzilla made their rigs more useful. Intel's also optimised its consumer CPUs for video transcoding, because editing HD home movies without having to stay up all night is one of the few compelling reasons to buy a new PC.

Intel's now using the same playbook for AI. Buying field-programmable gate array (FPGA) vendor Altera gave Intel the tech to build hybrid Xeons that offer integrated programmability so you can get silicon speed for exotic analyses that would make a vanilla Xeon weep. Altera is now working to make sure that developing code for FPGAs, once the province of embedded systems engineers, is not a stretch for the average Java developer.

Bernhard Friebe, Intel's director of planning and marketing for FPGA Design Software and Intellectual Property, said Intel is developing libraries for common AI tasks, giving them away, and has built tools that mean developers need to write just one line of code to target FPGAs.

Nervana gave Intel silicon tailored to AI, plus much of the software developers will need in order to use it.

The two acquisitions also give Intel the technology it will one day bake into Xeons so they become better at the kind of data-crunching AI needs.

We'll see those products emerge later in 2017, when “Lake Crest” debuts as a discrete accelerator for AI workloads. A Xeon with a joined-at-the-hip Nervana accelerator, code-named Knights Crest, will debut later. Intel's being shy about exact specs, but both use proprietary inter-chip links and a new numerical format called “Flexpoint” to improve parallelism. They're early products, though: both promise 10x parallelism, while by 2020 Intel pledges to reduce the time needed to train an AI model by a factor of 100.
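Details on Flexpoint are thin, but it is broadly understood to be a shared-exponent scheme: every value in a tensor carries a compact integer mantissa while the whole tensor shares a single exponent, so the expensive inner loops become integer multiply-accumulates. The sketch below shows the general idea of such a format (essentially block floating point); the 16-bit mantissa width is an assumption for illustration, not a figure from Intel.

```python
import numpy as np

def to_shared_exponent(tensor, mantissa_bits=16):
    # Pick one exponent for the whole tensor so its largest value still
    # fits in a signed integer mantissa of the given width.
    max_mag = max(float(np.max(np.abs(tensor))), np.finfo(np.float32).tiny)
    exponent = int(np.ceil(np.log2(max_mag))) - (mantissa_bits - 1)
    mantissas = np.round(tensor / 2.0 ** exponent).astype(np.int32)
    return mantissas, exponent

def from_shared_exponent(mantissas, exponent):
    return mantissas.astype(np.float32) * 2.0 ** exponent

x = np.random.randn(4, 4).astype(np.float32)
m, e = to_shared_exponent(x)
print("max quantisation error:", np.max(np.abs(x - from_shared_exponent(m, e))))
```

Integer multiply-accumulate units are much cheaper to replicate on a die than full floating-point ones, which is where formats like this buy their parallelism.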

But the main game here is that by adding AI abilities to those Xeons, Intel can talk to mainstream users about doing AI with familiar kit, rather than asking them to wrap their heads around GPUs. And it can point to Chainer and many other software investments to show that existing developers won't struggle to at least start playing with AI.

The company still has exotica up its sleeve. Barry Davis, general manager of Intel's “Accelerator Workload Group”, told El Reg that by the second half of 2018 we'll also see “Knights Mill”, the next-generation Xeon Phi optimised for AI. Details of the product are scarce, but Intel is talking up the fact it will be able to address up to 400GB of memory, far more than some GPUs.

Once everyday Xeons are good at AI, there will be little excuse not to consider them. More exotic products like Xeon Phi or FPGA-bonded Xeons can also run in the cloud, where users can try them out without capital expenditure.

By the time the ready-for-AI range is mature, Chainer will have been running on Intel hardware for about three years and will probably be rather improved thanks to the input Intel's support will have generated.

An improved Chainer won't, on its own, tip anyone into a decision to go Intel when contemplating AI. But bringing Chainer into Intel's world is just one among dozens or hundreds of such efforts. Some of those efforts are blindingly obvious billion-dollar acquisitions. Some are imperceptible nudges to useful open source projects. Others will be thoroughly obscure instructions issued to server-makers.

They'll all add up to an ecosystem designed to make Intel all-but-impossible to leave off a list of “vendors to consider when doing AI”.

Of course the world's not going to stand still and let Intel do this. But Chipzilla is confident it can dominate any rivals.

At this point it's tempting to point out that Intel is nowhere in mobile, a field in which it felt the modus operandi it developed with PC-makers would do the trick, but ended up being slaughtered by ARM.

Barry Davis thinks Intel has figured out why: his version of recent history says ARM always wanted to start at the edge of the network and work its way into the data centre. In the mobile field, Intel found itself late to the party and without the right friends once it arrived. The execs I met yesterday didn't dismiss the challenges ARM presents in AI, but feel that ARM is yet to become a significant data centre player and therefore isn't in a position to spearhead an ecosystem-creating challenge that will satisfy businesses and developers.

Of course Intel would say that, wouldn't it? Or is its confidence derived from deep statistical analysis of closely-observed events that let it infer likely outcomes with satisfying precision? ®
