If any idiot can do it, we're heading in the right direction

I am a simple man – and that's the way I like it

Sysadmin Blog The enemy of success is complexity. Although I am in general a fan of the concept of intricately intertwined Rube Goldbergian nonsense, my life thus far could be summed up as learning the value of simplicity face first. IT is all about complexity, and unpicking which combination of barely functional crap is least likely to go boom is not as straightforward as the chattering masses of the internet would have us all believe.

Let's take storage as an example. I accept as true the axiom "if your data does not exist in more than one location, then it does not exist". I have had hard drives die, run SSDs out past their write life, and Jibbers only knows if Schrödinger's tape drive will read the tapes I feed it.

In order to get my data to live in more than one place I can approach the problem in one of two ways. If my needs are modest then the solutions are simple. If I have a fixed data set and am in no particular hurry to get to the backup copy if the primary dies then I can simply make one copy once, putting one of the copies in a different place than the original. There are umpteen ways to accomplish this task simply.
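As a sketch of just how simple the simple case can be, here is a one-shot "copy it somewhere else and check the bytes arrived intact" in Python. The paths and function names are mine, purely illustrative; any of the umpteen tools (cp, rsync, robocopy) does the same job.

```python
import hashlib
import shutil
from pathlib import Path


def sha256sum(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def copy_and_verify(src: Path, dst: Path) -> bool:
    """Copy src to dst, then confirm both copies hold identical bytes."""
    shutil.copy2(src, dst)  # copy2 preserves timestamps as well as data
    return sha256sum(src) == sha256sum(dst)
```

The verification step matters: a backup you never check is a backup you only hope exists.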

As soon as I become more demanding, complexity increases. If I write to my primary storage after making the initial copy, then at some point I have to copy the new data. This can require a new storage device to send away, or somehow updating the backup device already sent away.
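A minimal sketch of that "copy only the new data" step, again with hypothetical names: skip any file whose size and modification time say it hasn't changed since the last run. This size-and-mtime shortcut is the same approximation many real backup tools lean on to avoid rereading every byte.

```python
import shutil
from pathlib import Path


def incremental_copy(src_dir: Path, dst_dir: Path) -> list[Path]:
    """Copy only files that are new or changed since the last run."""
    copied = []
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_dir / src.relative_to(src_dir)
        if dst.exists():
            s, d = src.stat(), dst.stat()
            # Same size and no newer mtime: assume unchanged, skip it.
            if s.st_size == d.st_size and s.st_mtime <= d.st_mtime:
                continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 carries the mtime across
        copied.append(src)
    return copied
```

Run it twice in a row and the second pass copies nothing; and every refinement beyond this – deletions, renames, moved files, files changed without their mtime moving – is exactly where the complexity starts piling up.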

If I want to get access to the backup copy of my data quickly when things go sideways, I need to ensure there is a means to do so. If I want realtime protection of my data as it is written then I've entered a whole other realm of complexity.

On the simple side of things, I could copy data to a drive and put it in a safe deposit box. On the complex side, I'm engaged in arguments about discrete SANs versus hyperconvergence and using terms like "data locality" and "incremental forever".

The complexity burden

When we're out in the weeds arguing about storage complexities the problem metastasises. Nerds very easily get caught up examining individual trees for pests and lose sight of their location in the forest. In our storage example, this manifests itself in forum wars where nerds argue about "data paths" and the relative complexity of underlying technologies that practitioners and implementers of those technologies simply don't play with.

In practice, information technology is littered with abstraction layers and other hidden complexities. While it's sure fun to debate these things at an academic level, it behooves us to remember that none of this actually matters to the folks with boots on the ground.

If and when the vendor does an adequate job of abstracting away complexity, then for all intents and purposes it is gone. Inefficiency can be masked by simply throwing egregiously excessive amounts of hardware at something. Eventually enough inefficiencies will build up in the design that someone will come along, redesign the whole thing from scratch and build a better, faster mousetrap that they can sell cheaper.

That's IT: every single aspect of it is a cyclical masking of complexity until we ultimately master it enough to truly commoditise it.

What's absolutely critical about all of this is that it's the outcomes produced that matter. If a hyperconverged whatsit is just as fast and reliable as a fibrechannel whosit, but cheaper and easier to set up and manage, then you buy the hyperconverged thingamabob. The relative "under-the-hood complexity" doesn't matter. Only the outcomes.

Relative complexity

After a recent article, several people asked me why I cared about twinaxial Direct Attach Cables (DACs). As they saw it, fibre was simply "better", and my usage of anything else baffled them.

I prefer DACs when and where I can use them because they're simpler. In my experience, they take a heck of a beating and keep on working. Any idiot can plug them in. I can walk people through unplugging one device and plugging it into another over the phone. If the new cleaner decides to clean the racks and bangs into them, pulls on them or gets caught in them, they don't tend to break or come undone.

A single fibre link has seven things I need to worry about: two transceiver modules, one fibre cable, and four rubber caps. Someone will inevitably lose the rubber caps so I need to stock spares. I also need a widget to clean my transceivers and cables.

So now, in order to link server A into switch B, the poor soul on the other end of the scratchy phone line at 4am has to keep track of a backpack full of gear. With DACs, linking a server to a switch means keeping track of exactly one thing: the DAC.

Oh, sure, fibre works over greater distances. But when all you need is to plug four servers into a switch, none of that matters. The concerns of someone tasked with campus-wide wiring don't burden my SMBs, and I don't need realtime metro-area high availability storage for my personal video collection.

Everyday life is increasingly complex. Picking up the phone and dialling a number became finding the phone, swiping sideways, entering a password, opening the phone app, selecting the dialpad, dialling and then hitting send. Let's not add to it by burdening ourselves – and others – with more IT complexity than is absolutely required.
