SDI wars: WTF is software defined infrastructure?

This time we play for ALL the marbles


Sysadmin blog The Software Defined Infrastructure (SDI) war is coming, and it will reshape the information technology landscape like nothing has since the invention of the PC itself.

It consists of sub-wars, each important in its own right, but the game is bigger than any of them.

We have just been through the worst of the storage wars. The networking wars are almost in full swing. The orchestration and automation wars are just beginning and the predictive analytics wars can be seen on the horizon.

Each of these wars would be a major event unto itself. Billions upon billions of dollars will change hands. Empires will rise and startups will fall. Yet despite all of that, each of those wars is a tactical skirmish compared to the strategic – and tactical – war that is only just beginning.

The SDI war is to be the net result of all of the sub-wars listed above, as well as several smaller ones that are mostly irrelevant. The SDI war is the final commoditisation of servers – and entire data centres – in one last gasp to counter the ease of use of public cloud computing and the inflated expectations brought about by the proliferation of walled-garden smartphone and tablet technology.

What's in an SDI block?

The SDI wars will not focus on storage, networking or compute, but on radically changing the atomic element of computing consumed. Instead of buying "a server" or "an array", loading it with a hypervisor, then backups, monitoring, WAN acceleration and so forth, we will buy an "omni-converged" compute unit. I shall dub this an SDI block until someone comes up with a better marketing buzzword.

When the dust settles, an SDI block will contain – but by no means be limited to – the following key elements:

  1. A server that will provide compute resources (CPU, RAM, GPU, etc).
  2. Distributed storage resources. Full inline deduplication and compression are no longer optional (think server SANs).
  3. Fully automated and integrated backups – application aware, auto-configuring, auto-testing. This new generation will be as close to "zero-touch" as is possible.
  4. Fully automated and integrated disaster recovery, with the same application-aware, auto-configuring, auto-testing, as-close-to-"zero-touch"-as-possible approach as backups.
  5. Fully integrated hybrid cloud computing, with resources in the public cloud consumed as easily as local ones, and the ability to move between multiple cloud providers based on cost, data sovereignty requirements, or latency/locality needs. The providers that want to win the hybrid cloud portion of the exercise will build in awareness of privacy and security, allowing administrators to easily select not only geo-local providers but also those known to have zero foreign legal attack surface, and clearly differentiating between them (a sketch of such provider selection follows this list).
  6. WAN optimisation technology.
  7. A hypervisor or hypervisor/container hybrid running on the metal.
  8. Management software to allow us to manage the hardware (via IPMI) and the hypervisor.
  9. Adaptive monitoring software that will detect new applications and operating systems and automatically monitor them properly. This means alerting systems administrators only when something actually needs attention, not flooding their inboxes with so much crap that they stop paying attention (see the sketch after this list). Adaptive monitoring will emphatically not require manual configuration.
  10. Predictive analytics software that will determine when resources will exceed capacity, when hardware is likely to fail, or when licensing can no longer be worked around (a toy forecast appears after this list).
  11. Automation and load maximisation software that will make sure the hardware and software components are used to their maximum capacity, within the existing hardware and licensing bounds.
  12. Orchestration software that will not only spin up groups of applications on demand or as needed, but will provide an app store-like (or Docker-like, or public cloud-like) experience for selecting new workloads and getting them up and running on your local infrastructure in just a couple of clicks.
  13. Autobursting which, as an adjunct of orchestration, will intelligently decide between hot-adding capacity to legacy workloads (CPU, RAM, etc) and spinning up new instances of modern burstable applications to handle load (sketched below). It would, of course, then scale them back down when possible.
  14. Hybrid identity services that work across private infrastructure and public cloud spaces. They will not only manage identity but provide complete user experience management solutions that work anywhere.
  15. A complete software-defined networking stack, including layer 2 extension between data centres as well as the public and private cloud. This means that spinning up a workload will automatically configure networking, firewalls, intrusion detection, application layer gateways, mirroring, load balancing, content distribution network registration, certificates and so forth.
  16. Chaos creation, in the form of randomised automated testing for failure of all non-legacy workloads and infrastructure elements, to ensure that the network still meets requirements (a minimal example follows).
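
To make item 5 concrete, here is a minimal Python sketch of provider selection under those constraints: filter on data sovereignty and foreign legal exposure first, then pick the cheapest provider that meets the latency bound. The provider records and field names are invented for illustration.

    # Hypothetical provider catalogue; every field here is an assumption.
    PROVIDERS = [
        {"name": "local-dc", "region": "CA", "foreign_legal_exposure": False,
         "latency_ms": 4, "cost_per_hr": 0.09},
        {"name": "us-hyper", "region": "US", "foreign_legal_exposure": True,
         "latency_ms": 28, "cost_per_hr": 0.04},
        {"name": "eu-cloud", "region": "DE", "foreign_legal_exposure": True,
         "latency_ms": 110, "cost_per_hr": 0.03},
    ]

    def pick_provider(allowed_regions, max_latency_ms, allow_foreign_legal=False):
        # Hard constraints first: sovereignty, legal exposure, latency.
        eligible = [p for p in PROVIDERS
                    if p["region"] in allowed_regions
                    and p["latency_ms"] <= max_latency_ms
                    and (allow_foreign_legal or not p["foreign_legal_exposure"])]
        # Then optimise on cost among whatever survives the filters.
        return min(eligible, key=lambda p: p["cost_per_hr"]) if eligible else None

    print(pick_provider({"CA", "US"}, max_latency_ms=50))  # -> local-dc wins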
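
Item 9 is easier to show than to describe. A hedged sketch, assuming a hypothetical inventory API that can enumerate running services and read their metrics; only an actual threshold breach ever reaches a human:

    import time

    # Default check profiles per service type; thresholds are invented examples.
    KNOWN_PROFILES = {
        "mysql": {"check": "replication_lag_s", "alert_above": 30},
        "nginx": {"check": "error_rate", "alert_above": 0.05},
    }

    def page_admin(svc_id, check, value):
        print(f"ALERT {svc_id}: {check}={value}")  # stand-in for a real pager

    def discover_and_watch(inventory):
        monitored = {}
        while True:
            # Auto-discover: any newly seen service gets a sensible default check.
            for svc in inventory.running_services():  # hypothetical call
                if svc.name in KNOWN_PROFILES and svc.id not in monitored:
                    monitored[svc.id] = KNOWN_PROFILES[svc.name]
            # Alert only on actionable breaches -- no inbox flooding.
            for svc_id, profile in monitored.items():
                value = inventory.read_metric(svc_id, profile["check"])
                if value > profile["alert_above"]:
                    page_admin(svc_id, profile["check"], value)
            time.sleep(60)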
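
For the capacity half of item 10, the simplest possible version is a straight-line forecast: fit recent usage samples and extrapolate to the day the pool fills. Real products would use far richer models; the numbers below are invented.

    def days_until_full(samples, capacity):
        # samples: list of (day, used) pairs; fit a least-squares straight line.
        n = len(samples)
        sx = sum(d for d, _ in samples)
        sy = sum(u for _, u in samples)
        sxx = sum(d * d for d, _ in samples)
        sxy = sum(d * u for d, u in samples)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n
        if slope <= 0:
            return None  # usage flat or shrinking: no exhaustion in sight
        return (capacity - intercept) / slope

    usage_tb = [(0, 41.0), (7, 44.2), (14, 47.9), (21, 51.5)]  # weekly samples
    print(days_until_full(usage_tb, capacity=80.0))  # roughly 78 days at this rate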
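
Item 13 boils down to one decision: can the workload scale out, or can it only scale up? A sketch, assuming a hypothetical Workload object that exposes that distinction:

    HIGH_CPU, LOW_CPU = 85, 30  # percent; thresholds are invented examples

    def handle_load(workload, cpu_pct):
        if cpu_pct > HIGH_CPU:
            if workload.scales_out:                  # modern, burstable app
                workload.add_instance()              # burst: spin up another copy
            else:                                    # legacy: bigger is the only move
                workload.hot_add(vcpus=2, ram_gb=8)  # hot-add CPU and RAM in place
        elif cpu_pct < LOW_CPU and workload.instances > workload.min_instances:
            workload.remove_instance()               # scale back down when idle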
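
And item 16, in the spirit of Netflix's Chaos Monkey: kill a randomly chosen non-legacy instance and verify the service still meets its requirements. Every API here is an assumption:

    import random
    import time

    def chaos_round(inventory, check_slo):
        candidates = [w for w in inventory.workloads() if not w.legacy]
        if not candidates:
            return
        victim = random.choice(candidates)
        victim.kill_instance()             # inject the failure
        time.sleep(120)                    # give orchestration time to react
        if not check_slo(victim.service):  # do we still meet requirements?
            raise RuntimeError(f"{victim.service} did not survive chaos testing")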

What's the point?

The ultimate goal is true stateless provisioning: the "golden master" concept so familiar to those running Virtual Desktop Infrastructure (VDI), brought to all workloads.

So you want a MySQL database tuned for the SDI block you are running? The orchestration software will deploy a golden master pre-configured and pre-tested to run optimally on that hardware. Your data and customisations are kept separate from the OS and the application itself. When the OS and app are updated, the image is replaced by the vendor; you simply restart the VM and you're good to go.
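
As a hedged sketch of what that separation might look like, assuming a hypothetical orchestrator.deploy() call: the OS and application live in a vendor-maintained read-only image, while data and site customisations live on volumes that survive image updates.

    def deploy_mysql(orchestrator, block_profile):
        # Image name, deploy() signature and volume names are all assumptions.
        return orchestrator.deploy(
            image="vendor/mysql-golden:latest",  # replaced wholesale on update
            read_only=True,                      # the image itself is never mutated
            volumes={
                "/var/lib/mysql": "data-vol-01",     # your data, kept separate
                "/etc/mysql/conf.d": "site-config",  # your customisations
            },
            tuning=block_profile,  # settings pre-tested for this SDI block
        )

When the vendor ships a new image, nothing under the mounted volumes changes; restarting the VM is the whole upgrade.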

All monitoring, backups, networking, storage configuration and so forth will simply take care of themselves. Resources will be allocated dynamically based on the hardware available and the constraints placed by systems administrators on what can be sent to which clouds and when.

Unlike the public cloud, this won't be available only to new workloads coded from the ground up. Legacy workloads are here to stay, and SDI blocks are all about instrumenting them as fully as possible and giving them as much cloud-like simplicity as their aged designs allow.

