Architecting for IT service delivery

Service by design? Really?


A few years back I was involved in a project that turned out far more interesting than I expected. The plan was to write a training course about a software development methodology. As you can see, it did start from a reasonably low point in terms of interest – but it quickly evolved into a much more worthwhile exercise.

The course in question documented a Sun Microsystems internal approach known as “3DM”, or 3-dimensional methodology. For those familiar with the Rational Unified Process (RUP), it aimed to extend RUP to cover how applications should be deployed to meet service criteria such as scalability, availability and so on.

In fact, like all good approaches, 3DM was not based on theory. Rather, it distilled best practices learned by consultants in the field around when and where to adopt clustering, load balancing, replication, failover and other such constructs.

It’s all good stuff, and the general lesson is that good practice is out there. This isn’t the place to document the whys and wherefores, not only because the truth is ‘out there’ but also because it tends to depend on the hardware and software involved.

But surely, says the outsider, IT is going to be more about virtualised machines running modular applications on industry-standard servers? Doesn’t that mean that IT gets simpler and simpler, reducing any dependency on good design?

Sadly, no. Despite suggestions from some quarters that IT is getting ever simpler, the need for skills to build reliable, scalable systems is as pressing today as it ever was.

Good systems design always was, and still remains, a constant battle between the theoretically possible and the actually practical. Today’s IT systems can be impossibly complex, running layer upon layer of barely compatible software, linking together older and newer systems that were never meant to be linked. From the outside in, IT may look like a Ford Mondeo – standardised to the point of being impossibly dull. To the engineers working on the inside however, IT is more like a Morgan, with each individual part, each connection and configuration item custom made.

As a result we can bandy around terms like ‘failover’ regardless of their actual practicality. Failover is nominally about taking a single workload and getting it up and running on another server, but in reality there is often a complex web of dependencies between server, network and storage hardware that can be difficult to unravel, never mind replicate. Are both servers (source and target) identically configured? Do they share the same network connectivity with the storage? Is the storage itself configured correctly to support the application in question? And so on and so on.
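
To make that web of dependencies a little more concrete, here is a minimal, purely illustrative sketch of the sort of pre-flight checks a failover process might run before moving a workload between servers. The server attributes and example values are assumptions for illustration only, not taken from any particular product:

    # Illustrative sketch only: pre-flight checks a failover process might run
    # before moving a workload from one server to another. Attribute names and
    # example values are hypothetical assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        cpu_arch: str
        os_version: str
        vlans: set = field(default_factory=set)        # network connectivity
        storage_luns: set = field(default_factory=set)  # visible storage volumes

    def failover_blockers(source: Server, target: Server) -> list:
        """Return reasons why failing over from source to target may not work."""
        blockers = []
        if source.cpu_arch != target.cpu_arch:
            blockers.append("CPU architectures differ")
        if source.os_version != target.os_version:
            blockers.append("OS versions differ")
        missing_vlans = source.vlans - target.vlans
        if missing_vlans:
            blockers.append(f"target missing VLANs: {sorted(missing_vlans)}")
        missing_luns = source.storage_luns - target.storage_luns
        if missing_luns:
            blockers.append(f"target cannot see storage LUNs: {sorted(missing_luns)}")
        return blockers

    if __name__ == "__main__":
        src = Server("app01", "x86_64", "RHEL 8.6", {"vlan10", "vlan20"}, {"lun-a", "lun-b"})
        tgt = Server("app02", "x86_64", "RHEL 8.4", {"vlan10"}, {"lun-a"})
        for reason in failover_blockers(src, tgt):
            print("BLOCKER:", reason)

Even this toy version surfaces mismatches (OS version, a missing VLAN, an invisible LUN) that would sink a real failover – and a real environment adds firmware levels, drivers, licensing and application state on top.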

Virtualisation may help answer some of these questions of course, but only if it is considered architecturally, which brings us to the nub of the matter. Part of the challenge is that we’re not building in good service by design – it’s just not being costed into the business cases for new systems and applications, as we’ve seen in several research studies. As a result, such things as failover have to be bolted onto systems and applications after the event, rather than being built in from the start.

As with many complex problems, the temptation might be to offload the complexity onto a third party, which is one reason perhaps why the interest in hosted services is growing.

However, unless you have worked out a way to offload the more complex stuff, you might just be adding to the problem. In the outsourcing wave, many reported the issue of outsourcing the best guys, leaving the dross behind (in some cases, to run the contracts). Might we end up with a similar issue with third party hosting, in that the easier systems will migrate, leaving the complexity behind, and creating an integration challenge that now runs across the firewall?

One thing’s for sure. If we are going to achieve any state of IT nirvana any time soon, some pretty fundamental shifts are going to be required in terms of the role of good design. Clearly best practice exists, if we choose to take it up and work through the short term pain and additional costs required to get things onto a firmer footing.

Or perhaps the only realistic option is to keep going with the candle-wax and string, patching things together as they go wrong and adding new layers of technology and complexity on a regular basis. Perhaps we secretly prefer it this way – just as a Morgan may suffer from the foibles of being custom built, its design takes into account the subsequent need to be tinkered with. To change this would require a major shift in mindsets and behaviours at all levels, without which it is difficult to see how any nirvana state of service delivery could ever be achieved. If you feel any different, do tell. ®

