SDI wars: WTF is software defined infrastructure?

This time we play for ALL the marbles


Sysadmin blog The Software Defined Infrastructure (SDI) war is coming, and it will reshape the information technology landscape like nothing has since the invention of the PC itself.

It consists of sub-wars, each important in its own right, but the game is bigger than any of them.

We have just been through the worst of the storage wars. The networking wars are almost in full swing. The orchestration and automation wars are just beginning and the predictive analytics wars can be seen on the horizon.

Each of these wars would be major events unto themselves. Billions upon billions of dollars will change hands. Empires will rise and startups will fall. Yet despite all of that, each of those wars is a tactical skirmish compared to the strategic – and tactical – war that is only just beginning.

The SDI war is to be the net result of all of the sub-wars listed above, as well as several other smaller ones that are mostly irrelevant. The SDI war is the final commoditisation of servers – and entire datacenters – in one last gasp to counter the ease of use of public cloud computing and the inflated expectations brought about by the proliferation of walled garden smartphone and tablet technology.

What's in an SDI block?

The SDI wars will not focus on storage, networking or compute, but on radically changing the atomic element of computing consumed. Instead of buying "a server" or "an array", loading it with a hypervisor, then backups, monitoring, WAN acceleration and so forth, we will buy an "omni-converged" compute unit. I shall dub this an SDI block until someone comes up with a better marketing buzzword.

When the dust settles, an SDI block will contain – but by no means be limited to – the following key elements (a rough sketch of how these pieces might compose follows the list):

  1. A server that will provide compute resources (CPU, RAM, GPU, etc).
  2. Distributed storage resources. Fully inline deduplication and compression are no longer optional (think server SANs).
  3. Fully automated and integrated backups – application aware, auto-configuring, auto-testing. This new generation will be as close to "zero-touch" as is possible.
  4. Fully automated and integrated disaster recovery. Application aware, auto-configuring, auto-testing. This new generation will be as close to "zero-touch" as is possible.
  5. Fully integrated hybrid cloud computing, with resources in the public cloud consumed as easily as local ones, and the ability to move between multiple cloud providers based on cost, data sovereignty requirements or latency/locality needs. The vendors who want to win the hybrid cloud portion of the exercise will build in awareness of privacy and security, allowing administrators to easily select not only geo-local providers but also those known to have zero foreign legal attack surface, and will clearly differentiate between them.
  6. WAN optimisation technology.
  7. A hypervisor or hypervisor/container hybrid running on the metal.
  8. Management software to allow us to manage the hardware (via IPMI) and the hypervisor.
  9. Adaptive monitoring software that will detect new applications and operating systems and automatically monitor them properly. This means only alerting systems administrators when something actually needs attention, not flooding their inboxes with so much crap they stop paying attention. Adaptive monitoring will emphatically not require manual configuration.
  10. Predictive analytics software that will determine when resources will exceed capacity, when hardware is likely to fail, or when licensing can no longer be worked around.
  11. Automation and load maximisation software that will make sure the hardware and software components are used to their maximum capacity within existing hardware and licensing bounds.
  12. Orchestration software that will not only spin up groups of applications on demand or as needed, but will provide an app store-like (or Docker-like, or public cloud-like) experience for selecting new workloads and getting them up and running on your local infrastructure in just a couple of clicks.
  13. Autobursting, as an adjunct of orchestration, will intelligently decide between hot-adding capacity to legacy workloads (CPU, RAM, etc) and spinning up new instances of modern burstable applications to handle load. It would, of course, then scale them back down when possible (a rough sketch of that decision appears after this list).
  14. Hybrid identity services that work across private infrastructure and public cloud spaces. They will not only manage identity but provide complete user experience management solutions that work anywhere.
  15. Complete software defined networking stack, including layer 2 extension between data centres as well as the public and private cloud. This means that spinning up a workload will automatically configure networking, firewalls, intrusion detection, application layer gateways, mirroring, load balancing, content distribution network registration, certificates and so forth.
  16. Chaos creation in the form of randomised automated testing for failure of all non-legacy workloads and infrastructure elements to ensure that the network still meets requirements.
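To make that shopping list a little more concrete, here is a rough sketch – in Python, purely for illustration – of what a declarative descriptor for such a block might look like if those pieces were composed behind one API. Every class and field name below is invented for this article; no shipping product exposes this exact model.

    # Hypothetical declarative SDI block descriptor. All names invented for
    # illustration; the point is only that compute, storage, data protection,
    # cloud policy, monitoring and networking become one purchasable unit.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ComputeSpec:
        cpu_cores: int
        ram_gb: int
        gpus: int = 0

    @dataclass
    class StorageSpec:
        capacity_tb: float
        inline_dedupe: bool = True       # point 2: no longer optional
        inline_compression: bool = True

    @dataclass
    class DataProtectionSpec:
        app_aware_backups: bool = True   # point 3: zero-touch backups
        auto_tested_dr: bool = True      # point 4: zero-touch DR

    @dataclass
    class CloudPolicy:
        allowed_providers: List[str]             # e.g. ["local", "geo-local-provider"]
        data_sovereignty: str = "domestic-only"  # point 5: legal attack surface
        burst_on_pressure: bool = True           # point 13: autobursting

    @dataclass
    class SDIBlock:
        compute: ComputeSpec
        storage: StorageSpec
        protection: DataProtectionSpec
        cloud: CloudPolicy
        monitoring: str = "adaptive"      # point 9: no manual configuration
        networking: str = "sdn-overlay"   # point 15: L2 extension, firewalls, etc.

    block = SDIBlock(
        compute=ComputeSpec(cpu_cores=64, ram_gb=512),
        storage=StorageSpec(capacity_tb=40),
        protection=DataProtectionSpec(),
        cloud=CloudPolicy(allowed_providers=["local", "geo-local-provider"]),
    )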
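The autobursting decision from point 13 boils down to something like the toy logic below. The thresholds and the Workload type are assumptions made for this sketch, not anyone's shipping heuristics.

    # Hypothetical autobursting logic: under load, either hot-add resources to
    # a legacy VM or scale out a modern, burstable application, then scale
    # back down when pressure eases. Thresholds are arbitrary examples.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        cpu_utilisation: float    # 0.0 - 1.0
        supports_scale_out: bool  # can we simply add instances?
        instances: int = 1
        vcpus: int = 4

    def autoburst(w: Workload, high: float = 0.85, low: float = 0.30) -> Workload:
        if w.cpu_utilisation >= high:
            if w.supports_scale_out:
                w.instances += 1      # modern app: spin up another instance
            else:
                w.vcpus += 2          # legacy app: hot-add CPU to the VM
        elif w.cpu_utilisation <= low:
            if w.supports_scale_out and w.instances > 1:
                w.instances -= 1      # scale back down when possible
            elif not w.supports_scale_out and w.vcpus > 2:
                w.vcpus -= 2
        return w

    web = autoburst(Workload("web-tier", 0.92, supports_scale_out=True))
    erp = autoburst(Workload("erp-vm", 0.91, supports_scale_out=False))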

What's the point?

The ultimate goal is true stateless provisioning: the "golden master" concept so familiar to those employing Virtual Desktop Infrastructure (VDI), brought to all workloads.

So you want a MySQL database tuned for the SDI block you are running? The orchestration software will deploy a golden master pre-configured and pre-tested to run optimally on that hardware. Your data and customisations are kept separate from the OS and the application itself. When the OS and app are updated, the vendor alters the image; you simply restart the VM and you're good to go.
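Here is a self-contained toy sketch of that idea. Everything in it – the class names, the version strings – is invented for illustration; the only point is that data and configuration live outside the image, so a vendor update is just an image swap and a restart.

    # Toy "golden master" model: the image is immutable and vendor-maintained,
    # while data and site-specific configuration live on separate volumes.
    from dataclasses import dataclass

    @dataclass
    class GoldenMaster:
        name: str
        version: str
        tuned_for: str  # hardware profile the vendor pre-tested it on

    @dataclass
    class Deployment:
        image: GoldenMaster
        data_volume: str     # survives image updates
        config_volume: str   # site customisations, also outside the image

        def update_image(self, new_image: GoldenMaster) -> None:
            # "Patching" is swapping the immutable image and restarting;
            # data and config volumes are untouched.
            self.image = new_image
            self.restart()

        def restart(self) -> None:
            print(f"restarting {self.image.name} {self.image.version}")

    db = Deployment(
        image=GoldenMaster("mysql", "8.0", tuned_for="sdi-block-v1"),
        data_volume="mysql-data",
        config_volume="mysql-config",
    )
    db.update_image(GoldenMaster("mysql", "8.1", tuned_for="sdi-block-v1"))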

All monitoring, backups, networking, storage configuration and so forth will simply take care of themselves. Resources will be allocated dynamically based on the hardware available and the constraints placed by systems administrators on what can be sent to which clouds and when.
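Those administrator constraints amount to a placement policy. A toy version – again, with every name invented for this sketch – might pick the cheapest location that satisfies the sovereignty and latency rules:

    # Toy constraint-driven placement: filter candidate locations by the
    # administrator's rules, then prefer the cheapest compliant one.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Location:
        name: str
        domestic: bool        # zero foreign legal attack surface?
        latency_ms: float
        cost_per_hour: float

    @dataclass
    class PlacementRules:
        domestic_only: bool = True
        max_latency_ms: float = 20.0

    def place(locations: List[Location], rules: PlacementRules) -> Optional[Location]:
        candidates = [
            l for l in locations
            if (l.domestic or not rules.domestic_only)
            and l.latency_ms <= rules.max_latency_ms
        ]
        return min(candidates, key=lambda l: l.cost_per_hour) if candidates else None

    choice = place(
        [Location("local", True, 1.0, 0.12),
         Location("geo-local-cloud", True, 8.0, 0.09),
         Location("foreign-hyperscaler", False, 15.0, 0.05)],
        PlacementRules(),
    )
    print(choice.name if choice else "no compliant location")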

Unlike the public cloud, this won't be available just to new workloads coded from the ground up. Legacy workloads are here to stay, and SDI blocks are all about instrumenting them as fully as possible and giving them as much cloud-like simplicity as their aged designs allow.
