Power9: Google gives Intel a chip-flip migraine, IBM tries to lure big biz

The CPU arch that refuses to die

OpenPower Summit IBM's Power9 processor, due to arrive in the second half of next year, will have 24 cores, double that of today's Power8 chips, it emerged today.

Meanwhile, Google has gone public with its Power work – confirming it has ported many of its big-name web services to the architecture, and that rebuilding its stack for non-Intel gear is a simple switch flip.

There was a lot announced at this morning's OpenPower Summit in San Jose, California. Here's what went down:

Big Blue teases Power9 details

Talk about core war. Intel announces a bunch of 22-core Xeon E5 v4 server chips, and a week or so later, IBM says its next big-iron chip – the Power9 – will have 24 cores.

Big Blue teased out a few more details about its processor for the first time today (we last saw a roadmap for the Power family way back in August). The Power9 will be a 14nm high-performance FinFET product fabbed by GlobalFoundries. It is directly attached to DDR4 RAM, talks PCIe gen-4 and NVLink 2.0 to peripherals and Nvidia GPUs, and can chuck data at accelerators at 25Gbps.

IBM says the design is optimized for two-socket scale-out servers, hence the name Power9 SO, and includes on-chip acceleration for compression and encryption.

OpenPower opened up ...

The chip is aimed at big biz and supercomputers crunching analytics, big data, machine learning, and that sort of stuff. Make no mistake: Intel has the data center compute market crushed; Power is still plugging away as a niche architecture. The Power9 is due to arrive in 2017, and will be the brains of the US Department of Energy's Summit and Sierra supercomputers.

Don't forget about IBM's OpenPower Foundation, which licenses the CPU architecture, server hardware and software blueprints to the world. Chinese companies are preparing to launch their own Power8 and 9 chips – dubbed "partner chips" – using the OpenPower blueprints in 2018 to 2020. Those will be built out of 7nm to 10nm gates.

So Uncle Sam is spinning up Power9 supercomputers next year, and then the year after China will have its own supply of Power8 or 9 processors to fill up its racks. And yet, there's a ban on supplying high-end Intel Xeons to Chinese supercomputer builders. Either the US government hasn't thought to outlaw the export of CPU blueprints, or Big Blue's technology in foreign hands isn't seen as a strategic threat to national security.

Summit's peak performance should be 300 PFLOPS, thrashing China's leading 55 PFLOPS Tianhe-2, but a good chunk of the American system's performance will come from the Nvidia Volta GPUs rather than the Power9s.


Google ports its big-name web services to Power

Google loves to keep its options for suppliers open, and like any other moneybags hyper-scale cloud provider, it has the cash to splash on experiments with non-Intel-x86 architectures.

We know it's toying with 64-bit ARMv8 cores, and now Power chips. This isn't too much of a surprise because Google is a founding member of the OpenPower Foundation.

Google says it has ported many of its big-name web services to run on Power systems; its toolchain has been updated to output code for x86, ARM or Power architectures with the flip of a configuration flag. We can imagine a shedload of Google's internal source code is rather portable, and cross-compiling it isn't beyond its programmers. Indeed, Google senior director Gordon MacKean said in 2015 that the cloud goliath strives to keep its software platform agnostic. For one thing, targeting multiple architectures prevents bit rot by weeding out esoteric bugs.
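Google hasn't published its internal toolchain, so here's a minimal hypothetical sketch of the effect being described – one flag, same source, different compiler back end – using standard GNU cross-compilers. The wrapper script and the `TARGET_ARCH` variable are our invention; the compiler triplet names are the ones Debian and Ubuntu ship for cross-building.

```shell
# Hypothetical build wrapper: pick a toolchain from a single config flag.
# Not Google's actual tooling - just an illustration of flag-switched targets.
TARGET_ARCH="${TARGET_ARCH:-ppc64le}"   # x86_64, aarch64 or ppc64le

case "$TARGET_ARCH" in
  x86_64)  CC=gcc ;;                        # native x86-64 build
  aarch64) CC=aarch64-linux-gnu-gcc ;;      # 64-bit ARMv8 cross-build
  ppc64le) CC=powerpc64le-linux-gnu-gcc ;;  # little-endian Power8/9 cross-build
  *) echo "unknown arch: $TARGET_ARCH" >&2; exit 1 ;;
esac

echo "building with $CC"
# "$CC" -O2 -o app app.c   # same source tree, different target architecture
```

Running the wrapper with `TARGET_ARCH=ppc64le` selects the Power cross-compiler, which is the "modify a config file and off they go" workflow Mahony describes below – assuming, of course, the code itself avoids architecture-specific assumptions.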

Given the rate of increase in use of Google's services, the ad giant knows it has to try out competing technologies to ensure it's using the best possible combinations of hardware and software to meet demand – it has to be sure it's getting the best bang for the buck, and that requires testing and experimentation.

"A lot has changed at Google since I joined nine years ago," said Maire Mahony, a Google engineering manager and an OpenPower Foundation director.

"Search could find just under a trillion web addresses, now that's up to 60 trillion web addresses. Gmail has more than a billion active users, more than double the users we had in 2012. YouTube had seven hours of video uploaded every minute, now YouTube has 400 hours of video uploaded per minute. The demand on compute has been relentless, and I can't see it abating any time soon.

Scaling problems ... Mahony's slides at the OpenPower Summit

"Compute technology development is at a crossroads. The cost of making transistors smaller is increasing, and all of this overhead makes it more challenging for us to deliver on that equation of performance per TCO dollar. We need to have a different approach. Google is backing the vision that underpins the OpenPower Foundation.

"That vision is to build scale-out server solutions based on OpenPower. We're really excited where this platform will take us."

You could hear the screams from Intel's campuses in Oregon all the way down here in San Jose.

"We have ported our infrastructure onto the Power architecture. What that means is that our toolchain supports Power; for our Google developers, enabling Power for their software applications is simply a matter of modifying a config file and off they go," she added.

"Everyone needs a second source," shrugged an Intel staffer over coffee when it emerged Google was testing Qualcomm's ARM server-grade chips. Well, here's a third source. Now it must be said that Google appears to be assessing the Power architecture at this stage – the vast majority of its systems are Intel-driven.

However, IBM's architecture is enough of a draw for the web giant that it has added support for the chips to its toolchain, so that a shift from Intel is a recompile away. The highly secretive Google is rarely this open about its internal systems.

Which leads us into news that broke an hour before Mahony took to the stage: Google and Rackspace are working together on Power9 server blueprints for the Open Compute Project. These designs are compatible with the 48V Open Compute racks Google and Facebook are working on.

The blueprints can be given to hardware factories to turn out machines relatively cheaply, which is the point of the Open Compute Project: driving down costs and designing hardware to hyper-scale requirements. Rackspace will use the systems to run Power9 workloads in its cloud.

The system itself is codenamed Zaius: a dual-socket Power9 SO server with 32 DDR4 memory slots, two NVLink slots, three PCIe gen-4 x16 slots, and a total core count of 44. What's to like? For one thing: high-speed NVLink interconnects between CPUs and Nvidia GPU accelerators, which Google likes to throw its deep-learning AI code at.

Rackspace also announced the arrival of its Power8 Barreleye servers – you can find out more here on our sister site, The Next Platform.

Intel Inside meets Power

The OpenPower Foundation has rolled out an "OpenPower Ready" branding for Power systems that meet certain criteria, so that buyers know what they're getting into. It sorta reminded us of Intel Inside.

A vendor requests the right to stick the badge on its gear, and either claims it meets all the necessary requirements, demonstrates compliance at an event, or gets a third party to verify it. If accepted, the kit gets the badge and goes into the foundation's online catalog of gear that's been given the thumbs up. And now you know. ®
