Updated

It appears Google has quietly built an in-house processor with close ties to parallel computing and networking.
Evidence of the CPU, destined for internal use only, emerged today in source code patches for the LLVM C/C++ compiler, allowing programmers to produce executables for the hardware. Not that you can get your hands on any.
Getting the patches accepted into LLVM, though, will make life much easier for Google staff, as it simplifies keeping their toolchain in sync with the main LLVM codebase.
Looking at the specs, the processor core, dubbed "Lanai", is relatively simple – it's more like a well-equipped microcontroller and unlikely to run heavy compute workloads on its own. However, it could be a building block in a massively parallel computer.
Lanai is described as a simple in-order 32-bit processor with 32 32-bit registers, including: two fixed-value registers (one probably hardwired to zero); four state registers, including the program counter, stack pointer, and frame pointer; and two registers reserved for threading support. There is no floating-point hardware, so it won't be juggling tasks involving lots of math.
Google software engineer Jacques Pienaar said the blueprints for Lanai were derived from the textbook Parallel Computer Architecture: A Hardware / Software Approach [PDF], which describes how to build machines that process huge amounts of data efficiently in parallel.
We've heard that Google is using customized Nvidia chips, to some degree, for its machine-learning systems. The web giant is also toying with ARM and POWER architectures in its data centers, and poking around RISC-V, too. We've known for some time, therefore, that Google is exploring the world of chip design; still, it's eyebrow-raising to spot such efforts in public.
"This is internal hardware for us, so there's not a lot [of information] we can share, and you can't really grab a version of the hardware," said Googler Chandler Carruth.
"We're working on the backend a bunch, and it didn't make sense to keep it walled off. Especially if there is anything that can be reused in other backends and/or if there is any common infrastructure we need, this makes it easy to test."
Although the source code updates make no mention of a vendor, the Googlers are using Myricom's LANai linker, suggesting the Lanai we've glimpsed today is a custom spin of Myri's high-end network controllers of the same name. In 2013, Myricom's assets were bought by Massachusetts-based CSPi, which builds hardware for hyper-scale cloud providers, and hyper-converged compute and storage hardware for data centers.
Google's Lanai is likely a heavily customized programmable network controller based on Myricom's designs. Its purpose would be to build intelligence into the fabric of the internet giant's data centers, perhaps to weave a complex software-defined network for its server warehouses.
Spokespeople for Google and CSPi were not available for immediate comment. ®
Updated to add
A well-placed industry source familiar with Myricom has confirmed that Google's Lanai is derived from Myri's technology. We're told the web giant obtained the designs in 2013, and it appears to have spent the past few years tailoring the blueprints for its hyper-scale networking needs.
Google engineers are actively working on developing code for the hardware, which is excellent at accelerating memcache workloads. Memcache stores objects in huge pools of memory distributed across thousands of servers, and is a crucial component in many large-scale services.
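To see why distribution matters here: a memcache client hashes each key to pick which of the many servers holds that object. Below is a minimal sketch of one common scheme, consistent hashing, in Python – the server names, ring size, and replica count are invented for illustration, not anything Google or Myricom has disclosed:

```python
import hashlib
from bisect import bisect

# Hypothetical server pool -- names are invented for illustration only.
SERVERS = [f"cache-{i:02d}.example.internal" for i in range(8)]

def _hash(value: str) -> int:
    """Stable 64-bit hash of a string (MD5 keeps it portable across runs)."""
    return int.from_bytes(hashlib.md5(value.encode()).digest()[:8], "big")

def build_ring(servers, replicas=100):
    """Place each server at `replicas` points on a sorted hash ring."""
    return sorted((_hash(f"{s}#{r}"), s) for s in servers for r in range(replicas))

def server_for(ring, key: str) -> str:
    """Walk clockwise from the key's hash to the next server point."""
    points = [p for p, _ in ring]
    idx = bisect(points, _hash(key)) % len(ring)
    return ring[idx][1]

ring = build_ring(SERVERS)
print(server_for(ring, "user:1234:profile"))  # always maps to the same server
```

The appeal of the ring over a plain `hash(key) % num_servers` is that adding or removing one server remaps only a fraction of the keys, rather than reshuffling nearly all of them.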
"In 2012, Google was searching for ultra low-latency networking technology," our source, who spoke on condition of anonymity, said.
"Google invited several Myricom engineers in to discuss this, and find out more about Myricom’s new silicon, due out in early 2013. By March 2013, Google had tendered an offer to Myricom to acquire its soon-to-be-released 10/40GbE controller chip, along with a number of its PhDs who worked on the hardware and software architecture.
"Little is publicly known regarding the use of the chip."
Well, until now.