Google Cloud takes a gap year. It may come back with very different ideas

Twelve months without refreshment can break an addiction

Opinion Taking a look at the latest financial results from Google/Alphabet made some of us do a double-take ... and not because of the $40bn+ in ad revenue.

If you read closely, you'll see that Google Cloud has reduced its habitual loss by extending the operational lifespan of its cloud servers by a year, and by stretching out some of its other infrastructure for even longer.

So what, you might say, wearily playing along in the office with hardware that gets refreshed less often than an octogenarian teetotaller. But this is Google Cloud, one of the headline players in the most important enterprise IT market of our time.

If it's saying that it's improving its competitive offer by not bothering to upgrade its core CPU farm, that says a lot about the cloud, the processor market, and the future of both.

You can see the cloud as it is sold to you, easing the capex/opex ratio, adding flexibility, dialled-in scale, and performance while reducing managerial overhead. In a different light, it's also a fantastic experiment in abstracting what IT actually means in business: paying other people to worry about all the boring stuff on your behalf.

Security, energy, hardware tending, meeting demand at a global scale – or just giving you an instant few cores of server to run up an idea or proof of concept without you having to buy so much as a multiway plug.

So when Google says in effect it doesn't care about upgrading CPUs this time around, you can believe it. Issues like the chip shortage and global economic uncertainty will factor into the decision, but reports from the front line of the server industry indicate that if you've got the clout, you get your share. Google is not a bit player on the market; it could push ahead with its upgrade cycle with some adjustments if it wanted to, and say as much, but no. It is opting out.

This is even more significant because Google is one of the most processor-focused providers. It reveals the processors it uses for different classes of task, sometimes even letting you pick the ones you want, and sometimes they'll even be in the region and the available configuration that you fancy.

Compare that to the choices offered by Amazon's AWS EC2, which amount to the number of cores per instance and whether you want multithreading. That's it, and that's much more typical of cloud service providers (CSPs). For most workloads, these firms don't compete on CPU. Storage tiers get the full treatment – latency versus capacity versus cost – but compute performance? Acceptable is good enough. You will get virtual machines running on virtual CPUs, and you will like it.

This leaves the chip companies with some hard questions. They really can't shake the "performance" metric as the drug of choice, and it's still an easy sell to investors.

Headline numbers look good, HPC is always a happy place to be, and you can find plenty of other places where you need lots of performance grunt. General-purpose CPUs have to face off against GPUs and other hardware-optimised silicon there – although massively parallel tasks mostly don't care about x86's legacy.

And, as Apple has proven with its M1 architecture, x86 legacy doesn't have to count for that much elsewhere these days. It's not that CSPs and data centres are gagging for M1s, which work so well because they are so highly evolved for Apple's market.

The x86 emulation overhead is perfectly bearable there while the ecosystem catches up with native versions; acceptable is good enough, and the path forward is clear.

But CSPs aren't gagging for the latest x86 magic either; they'll happily take it at the right price and at the right time, but they'll also happily leave it for a while. That leaves a gap, and it's suddenly a much more interesting one. MacBook owners like battery life, but CSPs really don't like the new era of accelerating energy costs.

Chancing your Arm

The Arm-ification of servers at scale has been predicted a few times now, although it's never been quite clear how you get there from here. The M1, however, is a great proof of concept, and the energy bills, specifically, are a great motivator to pay attention.

It is easy now to imagine what the M1's cloud cousin would look like: a system-on-chip with a set of computing cores that are intrinsically efficient – and even more so with the right workload – very tightly coupled to IO and integrated memory. Instead of being tuned for an Apple machine, the SoC would work very well for a particularly configured VM, and acceptably for others. There would be nothing here beyond the talents of a competent design team, no matter where they work.

With a sea of these, a CSP could offer a new, performant, and very competitively priced tier that rewards workloads optimised for the native, highly efficient modes, yet remains competitive for legacy tasks that would otherwise be happy on the hardware already in the racks.

The CSPs would get enough wriggle room to price-nudge the clientele into the low-energy workload domain while still picking up a bit more margin.

The world has already moved into the sort of containerised, multi-platform, open-dev, automated regime with the necessary tools and techniques for making apps for such an architecture. That means not much novel engineering would be needed at the codeface.

The motivation is there, the methods are at hand, and the barriers to transition are much reduced. Maybe Google's gap year is an indication that business will not resume as usual. ®

Biting the hand that feeds IT © 1998–2022