Wow, what an incredible 12 months: 2017's data center year in review
Predictions of the present past from today's future, or something
Comment The data center market is hot, especially now that we are getting a raft of funky new stuff, from promising non-Intel chips and system architectures to power and cooling optimizations.
Since we're all thinking several quarters ahead anyway, we're practically in 2018. So from that point of view, we may as well look back at 2017 and tell you how it all went down and why. Thus, here is my “2017 Data Center Year in Review,” with ten byte-sized inevitable outcomes.
Data center optimization is here
Data centers are information factories with lots of components and moving parts. There was a time when companies themselves grew complex enough to fuel the massive enterprise resource planning market; managing everything in the data center is now in a similar place. Automating, monitoring, troubleshooting, planning, optimizing, cost-containing, reporting, etc, is a giant task, and we were happy to see new apps in this area.
Data center infrastructure management was billed as the cool new way to provide visibility into and control of IT. Everyone realized how badly they needed it and wondered how they did without it. Some day, it will be one cohesive thing, but for now, because it’s such a big task, there are several companies addressing different parts of it.
Azure will grow faster than AWS
Cloud is the big wave, of course, and almost anything that touches it is on the right side of history. So, it was little surprise that private and hybrid clouds grew nicely, even tempering the growth of public clouds. But the growth of public clouds continued to impress, despite increasing recognition that they are not the cheapest option.
AWS led again, capturing most new apps. Azure, however, grew faster, on the strength of landing new apps but also of bringing along existing enterprise apps, where Microsoft maintains a significant footprint.
Moving Exchange, Office, and other apps to the cloud, in addition to operating lots of regional data centers and having lots of local feet on the ground, must have helped.
Some of the same dynamics helped Oracle show a strong hand and get close to Google and IBM, and large telcos stayed very much in the game. Smaller players persevered and even grew, but they also started to realize that public clouds are their supplier or partner, not their competition! It was cheaper for them to OEM services from bigger players, or offer joint services, than to build and maintain their own public cloud.
Great Wall of persistent memory will become a thing
Just as Big Data became the hottest thing in the enterprise, we found that the most expensive part of computing is moving all that data around. Figures, right? Naturally, we started seeing in-situ processing: instead of data going to compute, compute would go to data, processing it locally wherever the data happens to be.
But then the gap between CPU speed and storage speed separated apps from their data. Memory became the bottleneck. In came storage-class memory (mostly flash, with a nod to other promising technologies), getting larger, faster, and cheaper.
So, we started seeing examples of apps using a Great Wall of persistent memory, built by hardware and software solutions that bridge the size/speed/cost gap between traditional storage and DRAM. Eventually, we expect programming languages to naturally support byte-addressable persistent data.
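To make the idea concrete, here is a minimal sketch of byte-addressable persistent data, emulated with an ordinary memory-mapped file (the file name and layout are illustrative; a real persistent-memory stack would map NVDIMM media directly, via a DAX mount, rather than going through the page cache):

```python
import mmap
import os
import struct

PATH = "pmem_counter.dat"  # hypothetical backing file, for illustration only

# Create and size the backing store once.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)
    # Plain loads and stores on mapped memory -- no read()/write() calls.
    # That is what "byte-addressable" buys you: data structures live in
    # place, persistently, at memory speed.
    (count,) = struct.unpack_from("<Q", mem, 0)
    struct.pack_into("<Q", mem, 0, count + 1)
    mem.flush()  # force the update to media, akin to a cache-line flush
    mem.close()
```

Run it twice and the counter survives the process exiting. Language-level support would make persistent structures like this first-class, instead of hand-packed bytes behind an explicit flush.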
System vendors will announce racks, not servers
Vendors already configured and sold racks, but they often populated them with servers designed as if they'd be used stand-alone. Vendors with rack-level thinking were doing better, because designing the rack rather than the single node let them add value at the rack level while stripping what's no longer needed out of the individual server nodes.
So server vendors started thinking of a rack, not a single-node, as the system they sell. Intel’s Rack Scale Architecture continued to be on the right track, a real competitive advantage, and an indication of how traditional server vendors must adapt. The server rack became the next level of integration and is now what a “typical system” looks like. Going forward, multi-rack systems are where server vendors have a shot at adding real value. HPC vendors have long been there.
Server revenue growth will be lower than GDP growth
Traditional enterprise apps – the bulk of what runs on servers – showed that they had access to enough compute capacity already. Most of that work is transactional, so their growth is correlated with the growth in GDP, minus efficiencies in processing.
New apps, on the other hand, are hungry, but they are much more distributed, more focused on mobile clients, and more amenable to what we call high-density processing: algorithms with a high ops-per-byte ratio running on hardware that provides similarly high ops-per-byte capability – ie, compute accelerators like GPUs, FPGAs, vector processors, manycore CPUs, and ASICs.
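A back-of-envelope calculation shows what that ratio looks like in practice (the problem size is illustrative, and the byte counts assume each array moves through memory exactly once):

```python
def ops_per_byte(ops, bytes_moved):
    """Arithmetic intensity: operations per byte of memory traffic."""
    return ops / bytes_moved

n = 4096     # illustrative problem size
elem = 4     # bytes per float32 element

# Vector add c = a + b: one op per element, three n-element arrays moved.
vec = ops_per_byte(n, 3 * n * elem)

# Dense matmul C = A x B on n x n matrices: ~2n^3 ops, three n^2 arrays.
mat = ops_per_byte(2 * n**3, 3 * n * n * elem)

print(f"vector add: {vec:.3f} ops/byte")  # memory-bound: a fraction of an op per byte
print(f"matmul:     {mat:.0f} ops/byte")  # compute-bound: hundreds of ops per byte
```

The vector add barely breaks a tenth of an op per byte, so a faster chip just waits on memory; the matmul does hundreds of ops per byte, which is exactly the profile accelerators are built for.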
On top of that, there was more in-situ processing: processing the data wherever it happens to be, locally, vs sending it around to, say, the backend. This was made easier by the significant rise in client-side computing power and more capable switches and storage nodes that can do a lot of local processing.
We also continued to see cloud computing and virtualization eliminate idle servers and increase the utilization rates of existing systems. Finally, commoditization of servers and racks, driven by fewer-but-larger buyers and standardization efforts like the Open Compute Project, put pressure on server costs and continued to limit the areas in which server vendors can add value. The old server adage, “I know how to build it so it costs $1m, but I don’t know how to build it so it’s worth $1m,” was never truer.
These all combined to keep server revenues in check. We saw 5G's wow speeds but modest roll-out, and thought it could drive a jump in video and some server-heavy apps, but that would have to wait.