Wow, what an incredible 12 months: 2017's data center year in review

Predictions of the present past from today's future, or something

Comment The data center market is hot, especially now that we are getting a raft of funky new stuff, from promising non-Intel chips and system architectures to power and cooling optimizations.

Since we're all thinking several quarters ahead anyway, we're practically in 2018. So from that point of view, we may as well look back at 2017 and tell you how it all went down and why. Thus, here is my “2017 Data Center Year in Review,” with ten byte-sized inevitable outcomes.

Data center optimization is here

Data centers are information factories with lots of components and moving parts. There was a time when companies themselves grew much more complex, and the need to manage that complexity fueled the massive enterprise resource planning market. Managing everything in the data center is now in a similar place. Automating, monitoring, troubleshooting, planning, optimizing, cost-containing, reporting, and so on is a giant task, and we were happy to see new apps in this area.

Data center infrastructure management was billed as the cool new way to provide visibility into and control of IT. Everyone realized how badly they needed it and wondered how they did without it. Some day, it will be one cohesive thing, but for now, because it’s such a big task, there are several companies addressing different parts of it.

Azure will grow faster than AWS

Cloud is the big wave, of course, and almost anything that touches it is on the right side of history. So, it was little surprise that private and hybrid clouds grew nicely and they even tempered the growth of public clouds. But the growth of public clouds continued to impress, despite increasing recognition that they are not the cheapest option.

AWS led again, capturing most new apps. However, Azure grew faster, on the strength of landing new apps but also bringing along the existing apps, where Microsoft maintains a significant footprint.

Moving Exchange, Office, and other apps to the cloud, in addition to operating lots of regional data centers and having lots of local feet on the ground, must have helped.

Some of the same dynamics helped Oracle show a strong hand and get close to Google and IBM, and large telcos stayed very much in the game. Smaller players persevered and even grew, but they also started to realize that public clouds are their supplier or partner, not their competition! It was cheaper for them to OEM services from bigger players, or offer joint services, than to build and maintain their own public cloud.

Great Wall of persistent memory will become a thing

Just as Big Data became the hottest thing in the enterprise, we found that the most expensive part of computing was moving all that data around. Figures, right? Naturally, we started seeing in-situ processing: instead of data going to compute, compute would go to data, processing it locally wherever the data happens to be.

But then the gap between CPU speed and storage speed separated apps and data. Memory became the bottleneck. In came storage class memory (mostly flash, with a nod to other promising technologies), getting larger, faster and cheaper.

So, we started seeing examples of apps using a Great Wall of persistent memory, built by hardware and software solutions that bridge the size/speed/cost gap between traditional storage and DRAM. Eventually, we expect programming languages to naturally support byte-addressable persistent data.
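Byte-addressable persistence can be sketched today with ordinary memory-mapped files, the closest analogue most systems already offer: an app updates a value with plain loads and stores at a byte offset, and the change survives the process. This is a minimal illustration under our own assumptions (the file name and layout are made up), not any vendor's persistent-memory API:

```python
import mmap
import os
import struct

PATH = "counter.bin"  # hypothetical backing file standing in for persistent memory

# Create the backing file once, sized to hold a single 64-bit counter.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 8)             # map the file into the address space
    (count,) = struct.unpack_from("<Q", buf, 0)  # read the counter at byte offset 0
    struct.pack_into("<Q", buf, 0, count + 1)    # update it in place, byte-addressably
    buf.flush()                                  # force the store out to durable media
    buf.close()
```

With true persistent memory, language support would make the flush-and-layout bookkeeping above implicit, which is what we expect programming languages to eventually provide.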

System vendors will announce racks, not servers

Vendors already configured and sold racks, but they often populated them with servers that were designed as if they'd be used stand-alone. Vendors with rack-level thinking were doing better, because designing the rack rather than the single node let them add value at the rack level while stripping unneeded components, and cost, from the individual server nodes.

So server vendors started thinking of a rack, not a single-node, as the system they sell. Intel’s Rack Scale Architecture continued to be on the right track, a real competitive advantage, and an indication of how traditional server vendors must adapt. The server rack became the next level of integration and is now what a “typical system” looks like. Going forward, multi-rack systems are where server vendors have a shot at adding real value. HPC vendors have long been there.

Server revenue growth will be lower than GDP growth

Traditional enterprise apps – the bulk of what runs on servers – showed that they had access to enough compute capacity already. Most of that work is transactional, so their growth is correlated with the growth in GDP, minus efficiencies in processing.

New apps, on the other hand, are hungry, but they are much more distributed, more focused on mobile clients, and more amenable to what we call high-density processing: algorithms that have a high ops/bytes ratio running on hardware that provides similarly high ops/byte capability – ie, compute accelerators like GPUs, FPGAs, vector processors, manycore CPUs, and ASICs.
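The ops/bytes ratio described above is what the HPC world calls arithmetic intensity. A back-of-the-envelope sketch (the function name and assumptions, ideal caching and 4-byte floats, are ours) shows why large dense kernels suit accelerators:

```python
def matmul_ops_per_byte(n: int, bytes_per_elem: int = 4) -> float:
    """Arithmetic intensity of a dense n x n matrix multiply C = A @ B."""
    flops = 2 * n ** 3                         # n^3 multiply-add pairs
    bytes_moved = 3 * n * n * bytes_per_elem   # read A and B, write C (ideal caching)
    return flops / bytes_moved

# Intensity grows linearly with n: a 1024 x 1024 multiply does roughly
# 170 ops per byte moved, far above what a CPU's memory system needs fed.
```

Apps whose intensity stays low, by contrast, are bound by data movement no matter how many cores or accelerators are thrown at them, which feeds back into the in-situ processing trend.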

On top of that, there was more in-situ processing: processing the data wherever it happens to be, locally, vs sending it around to, say, the backend. This was made easier by the significant rise in client-side computing power and more capable switches and storage nodes that can do a lot of local processing.

We also continued to see cloud computing and virtualization eliminate idle servers and increase the utilization rates of existing systems. Finally, commoditization of servers and racks, driven by fewer-but-larger buyers and standardization efforts like the Open Compute Project, put pressure on server costs and continued to limit the areas in which server vendors can add value. The old adage in servers: “I know how to build it so it costs $1m, but don’t know how to build it so it’s worth $1m” was never more true.

These all combined to keep server revenues in check. We saw 5G's wow speeds but modest roll-out, and thought it could drive a jump in video and some server-heavy apps, but that would have to wait.

