Google unwraps its gateway drug: Edge TPU chips for IoT AI code

Custom ASICs make decisions on sensors as developers get hooked on ad giant's cloud

Google has designed a low-power version of its homegrown AI math accelerator, dubbed it the Edge TPU, and promised to ship it to developers by October.

Announced at Google Next 2018 today, the ASIC is a cutdown edition of its Tensor Processing Unit (TPU) family of in-house-designed coprocessors. TPUs are used internally at Google to power its machine-learning-based services, or are rentable via its public cloud. These chips are specifically designed for training neural networks and performing inference.

Now the web giant has developed a cut-down inference-only version suitable for running in Internet-of-Things gateways. The idea is you have a bunch of sensors and devices in your home, factory, office, hospital, etc, connected to one of these gateways, which then connects to Google's backend services in the cloud for additional processing.

Inside the gateway is the Edge TPU, plus potentially a graphics processor, and a general-purpose application processor running Linux or Android and Google's Cloud IoT Edge software stack. This stack contains lightweight TensorFlow-based libraries and models that access the Edge TPU to perform AI tasks at high speed in hardware. This work can also be performed on the application CPU and GPU cores, if necessary. You can use your own custom models if you wish.

The stack ensures connections between the gateway and the backend are secure. If you wanted, you could train a neural network model using Google's Cloud TPUs and have the Edge TPUs perform inference locally.

The goal is to perform as much AI inference as possible on the gateway itself, using incoming data from sensors and equipment. That means less information shuttled back and forth between gadgets and backend internet servers, which in turn means lower latency and faster decisions, less bandwidth consumed, and less data security risk.
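The bandwidth argument is easy to see with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not Google's numbers: compare streaming raw VGA video to the cloud for analysis versus sending only small detection events produced by local inference.

```python
# Illustrative numbers (assumptions for the sake of the sketch, not Google's figures)
frames_per_second = 30
bytes_per_frame = 640 * 480 * 3                     # one raw VGA RGB frame
raw_stream = frames_per_second * bytes_per_frame    # bytes/s if video is shipped to the cloud

bytes_per_event = 100                               # a small JSON label per detection
events_per_second = 2
local_inference = events_per_second * bytes_per_event  # bytes/s if only results leave the gateway

reduction = raw_stream / local_inference
print(f"{raw_stream / 1e6:.1f} MB/s vs {local_inference} B/s, roughly {reduction:,.0f}x less data")
```

Even with compressed video rather than raw frames, the gap remains several orders of magnitude, which is the whole pitch for doing the inference at the edge.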

We're told the Edge TPU, which uses PCIe or USB to interface with the host system-on-chip, can perform inference tasks on real-time video at up to 30 frames per second, using 8-bit and 16-bit integer precision. This suggests one primetime use for the hardware is analyzing camera feeds in your home or workplace for particular objects, people, movement, and so on.
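That integer precision implies quantized models: floating-point weights and activations are mapped to small integers via a scale and zero point, trading a little accuracy for much cheaper silicon. A minimal sketch of the standard affine scheme used by TensorFlow Lite's 8-bit quantization — the function names here are our own, not any Google API:

```python
def quantize(values, scale, zero_point):
    # Affine quantization: real_value ≈ scale * (q - zero_point),
    # with q clamped to the signed 8-bit range [-128, 127].
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(q_values, scale, zero_point):
    # Recover an approximation of the original real values.
    return [scale * (q - zero_point) for q in q_values]

weights = [0.5, -1.25, 3.0, 0.0]
scale, zero_point = 0.05, 0
q = quantize(weights, scale, zero_point)        # [10, -25, 60, 0]
approx = dequantize(q, scale, zero_point)       # close to the original floats
```

Picking the scale (and, for asymmetric ranges, the zero point) to cover a tensor's actual value range is what post-training quantization tooling does for you before a model is deployed to an accelerator like this.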

The search'n'ads goliath is working with NXP and Arm as well as Accton, Harting, Hitachi Vantara, Nexcom, and Nokia to produce the chips and gateways. A gateway developer kit – pictured below – using an Arm-based NXP system-on-chip, Wi-Fi controller, secure element, and Edge TPU on a module will apparently be ready to order by October this year.

We note it also appears to include HDMI, USB, and Ethernet interfaces, and presumably general-purpose IO pins.

"Edge TPU is Google’s purpose-built ASIC chip designed to run TensorFlow Lite ML models at the edge," said Google's IoT veep Injong Rhee in announcing the tech.

"When designing Edge TPU, we were hyperfocused on optimizing for 'performance per watt' and 'performance per dollar' within a small footprint. Edge TPUs are designed to complement our Cloud TPU offering, so you can accelerate ML training in the cloud, then have lightning-fast ML inference at the edge. Your sensors become more than data collectors—they make local, real-time, intelligent decisions."

This is essentially Google's answer to Internet-of-Things gateway hardware and backend services touted by Arm and separately Microsoft, with bonus inference hardware acceleration and pay-as-you-go Google cloud lock-in. Come for the developer kit, stay for the monthly software-as-a-service and online storage payments.

Of course, Arm and other chip designers have their own IoT and gadget-grade machine-learning accelerator offerings. What's interesting here is Google letting people get their hands directly on a member of the TPU family – a line of custom chips that until now have been locked away deep inside the advertising beast's data centers.

You can find a few more details from Google here, here, and here. ®

We'll be examining machine learning, artificial intelligence, and data analytics, and what they mean for you, at Minds Mastering Machines in London, between October 15 and 17. Head to the website for the full agenda and ticket information.
