Microsoft 'Catapults' geriatric Moore's Law from CERTAIN DEATH
FPGAs DOUBLE data center throughput despite puny power pump-up, we're told
Microsoft has found a way to massively increase the compute capabilities of its data centers, despite the fact that Moore's Law is wheezing towards its inevitable demise.
In a paper to be presented this week at the International Symposium on Computer Architecture (ISCA), titled A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services, a troupe of top Microsoft Research boffins explain how the company has dealt with the slowdown in single-core clock-rate improvements that has occurred over the past decade.
To get around this debilitating problem – more on this later – Microsoft has built a system it calls Catapult, which automatically offloads some of the advanced tech that powers its Bing search engine onto clusters of highly efficient, low-power FPGA chips attached to typical Intel Xeon server processors.
Think of FPGAs – field-programmable gate arrays – as chips whose circuits can be customised and tweaked as required, allowing crucial tasks to be transferred away from the Xeons and instead accelerated in FPGA hardware.
This approach may save Microsoft from a rarely acknowledged problem that lurks in the technology industry: processors are not getting much faster.
Wait. What?
For those not familiar with the chip industry, a primer. For the past 50 years, almost every aspect of our global economy has been affected by Moore's Law, which states that the number of transistors on a chip of the same size will double every 18 months – or so – resulting in faster performance and better power efficiency.
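To get a feel for how quickly that doubling compounds, here's a back-of-the-envelope sketch, assuming the popular 18-month figure (the exact cadence has varied over the years):

```python
# Back-of-the-envelope compounding of an assumed 18-month doubling period.
def transistor_growth(years, doubling_period_years=1.5):
    """Multiplier on transistor count after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 10):
    print(f"After {years:2d} years: ~{transistor_growth(years):.0f}x the transistors")
# After  3 years: ~4x the transistors
# After  6 years: ~16x the transistors
# After 10 years: ~102x the transistors
```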
One slight problem: Moore's Law is not, in fact, a law. Instead, it was an assertion made by Gordon Moore – who went on to co-found Intel – in a 1965 article that the semiconductor industry got rather carried away with. In the past ten years, the salubrious effects of Moore's Law have started to wane, because although companies are packing more and more transistors onto their chips, the performance gains that those transistors bring with them are not as great as they were during the law's halcyon days.
Intel has yoked its entire business to the successful fulfilment of Moore's Law, and proudly announces each new boost in transistor counts. And, yes, those "new" transistors can help to increase a compute core's all-important instructions-per-cycle (IPC) metric – improved branch prediction, larger caches, more-efficient scheduling, beefier buffers, whatever – but the simple fact is that although chips have gone multi-core and are getting better at multi-tasking, no great new discovery is making the individual cores themselves much faster.
As AMD CTO Joe Macri recently told us, "There's not a whole lot of revolution left in CPUs." He did, however, note that "there's a lot of evolution left."
Microsoft's Catapult is a bit of both.
Programmable software, meet programmable hardware
Under new chief executive Satya Nadella, Microsoft is throwing billions of dollars at massive data centers in its attempt to become a cloud-first company. Part of that effort – and that investment – is figuring out how to keep data-center compute performance improving at a consistent clip.
The solution that Microsoft Research has come up with is to pair field-programmable gate arrays with typical x86 processors, then let some data-center services such as the Bing search engine offload certain well-understood operations to the arrays.
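To give a flavour of that division of labour, here's a minimal, hypothetical sketch – the names fpga, is_available and score below are our own illustrative stand-ins, not Microsoft's actual Catapult interface. The control-heavy code stays on the x86 host, while the hot, well-understood inner loop is handed to the attached FPGA, with a software fallback when no accelerator is present:

```python
# Hypothetical sketch of CPU/FPGA work-splitting; the fpga object and its
# is_available()/score() methods are illustrative, not Microsoft's real API.
def score_documents(query_features, documents, fpga=None):
    """Score candidate documents for a query, offloading to an FPGA if attached."""
    if fpga is not None and fpga.is_available():
        # Hot, well-understood inner loop runs on the reconfigurable fabric
        return fpga.score(query_features, documents)
    # Software fallback: the same scoring done on the Xeon
    return [sum(f * w for f, w in zip(query_features, doc_weights))
            for doc_weights in documents]

# With no accelerator attached, the pure CPU path is taken:
print(score_documents([0.5, 1.0], [[2.0, 3.0], [1.0, 0.0]]))  # [4.0, 0.5]
```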
To say that the performance improvements in this approach have been noticeable would be a gross understatement. Microsoft tells us that a test deployment on 1,632 servers was able to increase query throughput by 95 per cent, while only increasing power consumption by 10 per cent.
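In rough performance-per-watt terms – a quick sanity check using the quoted figures, not numbers taken from the paper itself – that works out to roughly a 1.8x improvement:

```python
# Rough performance-per-watt arithmetic from the quoted test-deployment figures.
throughput_gain = 1.95  # 95 per cent more query throughput
power_increase = 1.10   # 10 per cent more power drawn
print(f"~{throughput_gain / power_increase:.2f}x throughput per watt")  # ~1.77x
```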
Though FPGA technology is well understood and used widely in the embedded technology industry, it's rare to hear of it being paired with standard off-the-shelf CPUs for accelerating web-facing software – until now, that is.
"We're moving into an era of programmable hardware supporting programmable software," Microsoft Research's Doug Burger told The Register. "We're just starting down that road now."
If Microsoft has indeed figured out how to almost double the performance of its computers while only paying a tenth more in electricity for large-scale data center tasks – and we see no reason to doubt them – that's not only a huge saving, but also insulates the company from the slowdown in run-of-the-mill CPUs.
"Based on the results, Bing will roll out FPGA-enhanced servers in one data center to process customer searches starting in early 2015," Derek Chiou, a principal architect of Bing, said in a statement emailed to El Reg.
"We were looking to make a big jump forward in data center capabilities. It's an important area," Microsoft Research's Doug Burger explained to us.
"We wanted to do something that we thought could put us on a path that makes some really big leaps. Rather than banking on scaling to many, many more cores, let's take a different path – what can we do in hardware? We think specialization is going to be the next big thing."
Microsoft isn't doing this on a hunch. Burger co-wrote a paper [PDF] in 2011, Dark Silicon and the End of Multicore Scaling, which predicted that "left to the multicore path, we may hit a 'transistor utility economics' wall in as few as three to five years, at which point Moore's Law may end, creating massive disruptions in our industry."
So far, there are few signs to the contrary. [And if you want a real horror story, take a gander at the slow development of EUV lithography — Ed.]