Pushing service delivery beyond the enterprise boundary

No more need for delivery managers? Pah!


Workshop So far in this series we’ve been considering all things service delivery, with an emphasis on how the various elements of IT infrastructure can be managed as a single whole.

IT infrastructure isn’t what it once was, however, as demonstrated by the increasing range of ‘cloud-based’ options available from third parties.

The word ‘service’ has been adopted, if not hijacked, by terms such as software as a service, platform as a service, infrastructure as a service and so on. The central question is: if such externally hosted services become more prevalent, does the role of the service delivery manager become redundant?

Let’s get one thing straight: while some vendors talk about ‘the journey to the cloud’ as if it is a new place that leaves existing environments behind, we have seen no evidence of any wholesale shift of application workloads from internally managed platforms. However, our research does suggest hosted services add to the range of options for new workloads. Not only does this lead to trade-off decisions about where processing should take place, it also implies that future IT environments will incorporate both internal and external services, all of which will need to be “managed”.

From a service delivery standpoint, the challenge is to keep service delivery going wherever processing takes place. Some issues around service delivery are not going to be so different from those caused by the fragmented environments we have today, as discussed in the previous article in this series. There are, however, some additional considerations we need to take into account.

First and foremost are questions of where information resides and how it is protected, particularly if compliance or security constraints apply to the data held in the service provider’s environment. Regulation may restrict where some kinds of data may reside, or at least mandate that you know where it is stored – a problem with those service providers whose infrastructure crosses national boundaries.
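To make the point concrete, here is a minimal sketch, in Python, of the kind of check a service delivery team might run against its own records of where hosted data sits. The dataset names, regions and the policy itself are hypothetical illustrations rather than real provider metadata.

```python
# Minimal sketch: flag datasets whose hosting region falls outside the
# jurisdictions allowed by policy. Dataset names, regions and the policy
# itself are hypothetical illustrations, not real provider metadata.

ALLOWED_REGIONS = {
    "customer-pii": {"eu-west", "eu-central"},   # e.g. data that must stay in the EU
    "marketing-analytics": {"eu-west", "us-east"},
}

hosted_datasets = [
    {"name": "customer-pii", "provider": "ProviderA", "region": "us-east"},
    {"name": "marketing-analytics", "provider": "ProviderB", "region": "eu-west"},
]

def residency_violations(datasets, policy):
    """Return datasets stored outside the regions their policy permits."""
    return [
        d for d in datasets
        if d["region"] not in policy.get(d["name"], set())
    ]

for violation in residency_violations(hosted_datasets, ALLOWED_REGIONS):
    print(f"{violation['name']} is held in {violation['region']} "
          f"by {violation['provider']} - outside the permitted regions")
```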

Second, we have the architectural and commercial implications of data processing, transport and storage. Where high volumes of data are concerned, for example, it is generally faster and cheaper to bring the processing to the data rather than vice versa. Routinely shipping huge volumes of information into a hosted compute cloud for number crunching can work out more expensive and time-consuming than building a private cloud infrastructure to do the same job. And given that many cloud-based services today rely on an individual making a purchase with a credit card, keeping visibility of what services cost can also be an issue.
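A rough worked example helps illustrate the trade-off. The sketch below compares the cost and time of shipping a large dataset to a hosted compute cloud against processing it in-house; every price, data volume and link speed in it is a placeholder assumption to be replaced with your own figures.

```python
# Back-of-envelope comparison of shipping data to a hosted compute cloud
# versus processing it where it already lives. All prices, data volumes and
# link speeds below are placeholder assumptions - substitute your own.

data_tb = 50                      # volume to crunch per run, in terabytes
runs_per_month = 4

egress_cost_per_gb = 0.09         # assumed network transfer charge ($/GB)
cloud_compute_cost_per_run = 400  # assumed cost of the hosted number-crunching
onprem_cost_per_run = 900         # assumed amortised cost of doing it in-house

link_gbps = 1.0                   # available bandwidth to the provider

transfer_gb = data_tb * 1024
transfer_cost = transfer_gb * egress_cost_per_gb
transfer_hours = (transfer_gb * 8) / (link_gbps * 3600)

cloud_monthly = runs_per_month * (transfer_cost + cloud_compute_cost_per_run)
onprem_monthly = runs_per_month * onprem_cost_per_run

print(f"Each run moves {transfer_gb:,.0f} GB, taking ~{transfer_hours:.1f} hours")
print(f"Hosted option:   ${cloud_monthly:,.0f}/month (transfer dominates)")
print(f"In-house option: ${onprem_monthly:,.0f}/month")
```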

Third, particular attention needs to be paid to data security and access. Beyond the obvious question of whether a cloud service is secure per se, there is the question of policy management. The last thing you need, for example, is the overhead and risk of maintaining one set of policies and rules in-house, then another for each provider you make use of. Again, this is a management issue, and while standards and best practices remain immature, you will need to be careful that your own policies are being adhered to.
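One way to keep a single rule book is to express the corporate policy once and check each environment against it. The following is a minimal sketch of that idea; the rule set and the per-provider settings shown are invented for illustration, not drawn from any real provider’s configuration.

```python
# Minimal sketch of keeping one set of access rules and checking it against
# each environment, rather than maintaining separate rules per provider.
# The rule set and the per-provider state shown here are invented examples.

corporate_policy = {
    "mfa_required": True,
    "max_password_age_days": 90,
    "public_sharing_allowed": False,
}

provider_settings = {
    "in-house": {"mfa_required": True, "max_password_age_days": 60,
                 "public_sharing_allowed": False},
    "SaaS-CRM": {"mfa_required": False, "max_password_age_days": 365,
                 "public_sharing_allowed": False},
}

def policy_drift(policy, environments):
    """List where each environment deviates from the corporate policy."""
    drift = {}
    for env, settings in environments.items():
        issues = []
        for key, required in policy.items():
            actual = settings.get(key)
            if isinstance(required, bool):
                if actual != required:
                    issues.append(f"{key}: expected {required}, found {actual}")
            elif actual is None or actual > required:
                issues.append(f"{key}: limit {required}, found {actual}")
        if issues:
            drift[env] = issues
    return drift

for env, issues in policy_drift(corporate_policy, provider_settings).items():
    print(env, "->", "; ".join(issues))
```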

A final question is how information flows between service providers, or between in-house and external systems. In one study, for example, IT professionals were generally more comfortable with the SaaS proposition for discrete applications such as sales force automation, and less certain about solutions with many integration touch points, such as ERP. At the root of this were concerns about physical integration (development, maintenance and support of interfaces) and the associated question of who is responsible for what when problems arise. Issues with visibility and policy management across domains were also highlighted.

These considerations suggest the need for appropriate due diligence before signing up to externally hosted services, and they make it pretty clear that the need to keep control of things increases rather than decreases when operating a hybrid environment. Tracking, maintaining and monitoring a mix of in-house and externally provided assets in a coherent way requires practices and tools that support a comprehensive view of both internal and externally sourced components, complete with all of their dependencies.
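As a simple illustration of what such a comprehensive view needs to support, the sketch below models a handful of in-house and hosted components and traces which services are affected when an external one fails. The component names, sources and dependencies are made up for the example.

```python
# Minimal sketch of a single dependency view spanning in-house and hosted
# components, so the impact of an external outage can be traced back to the
# services that rely on it. Component names and sources are illustrative only.

components = {
    "web-storefront":  {"source": "in-house", "depends_on": ["order-service", "hosted-crm"]},
    "order-service":   {"source": "in-house", "depends_on": ["hosted-payments"]},
    "hosted-crm":      {"source": "external", "depends_on": []},
    "hosted-payments": {"source": "external", "depends_on": []},
}

def impacted_by(failed, graph):
    """Walk the dependency graph and return everything that (directly or
    indirectly) relies on the failed component."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for name, meta in graph.items():
            if name in impacted or name == failed:
                continue
            if failed in meta["depends_on"] or impacted & set(meta["depends_on"]):
                impacted.add(name)
                changed = True
    return impacted

print("If hosted-payments fails, affected services:",
      sorted(impacted_by("hosted-payments", components)))
```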

To add to the complexity, there will be parts of the external environment that can’t be viewed – it’s unlikely that Salesforce.com wants to give you a live feed of server status, for example. Monitoring these ‘black holes’ of service management becomes part and parcel of keeping tabs on the environment as a whole. Of course, the degree to which you might want to, or be able to, monitor and manage assets in a service provider environment is debatable, particularly if the right service level agreements are in place (though we realise there is a big due diligence question here, particularly given the level of immaturity around service level guarantees from providers).
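Even where the provider’s internals are invisible, the service can still be measured from the outside. Here is a minimal sketch that probes a status endpoint and keeps a running availability figure to set against the contracted service level; the URL and the 99.5 per cent target are assumptions, not any particular provider’s published figures.

```python
# You can't see inside the provider's infrastructure, but you can measure the
# service from the outside. Minimal sketch: probe a (hypothetical) status
# endpoint and keep a running availability figure to compare against the
# contracted SLA. The URL and the 99.5% target are assumptions.

import time
import urllib.request

STATUS_URL = "https://status.example-provider.com/health"  # hypothetical
SLA_TARGET = 0.995

results = []  # True for each successful probe, False otherwise

def probe(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

for _ in range(3):                 # in practice this would run on a schedule
    results.append(probe(STATUS_URL))
    time.sleep(1)

availability = sum(results) / len(results)
print(f"Measured availability: {availability:.1%} "
      f"({'within' if availability >= SLA_TARGET else 'below'} the {SLA_TARGET:.1%} SLA)")
```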

One element that needs managing across the on-premises and service provider domains, however, is provisioning and resource allocation. This is particularly the case if you want to take advantage of hybrid configurations and overflow capabilities – for example, if a web application runs out of resources in your data centre because of an unanticipated peak in demand, you may need the capability to provision virtual servers in the hosted domain and move workloads onto them with ease, as sketched below. The caveat, again, is that visibility of the cost of external resources becomes increasingly difficult if personal or corporate credit cards are relied upon as the payment mechanism – a factor that many service providers are only just starting to take into account.
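As a sketch of the overflow decision itself: the example below works out how many extra servers a demand peak requires, ‘provisions’ them through a stand-in function, and records the estimated cost centrally. The capacity figures, hourly rate and the provision_hosted_server() helper are all hypothetical.

```python
# Minimal sketch of an overflow decision: when in-house capacity is exhausted
# by a demand peak, request extra virtual servers in the hosted domain and
# record what they will cost. The capacity figures, per-hour price and the
# provision_hosted_server() helper are all hypothetical.

ONPREM_CAPACITY = 20          # virtual servers available in the data centre
HOSTED_HOURLY_RATE = 0.50     # assumed price per hosted server-hour
REQUESTS_PER_SERVER = 500     # rough throughput of one server

def provision_hosted_server(n):
    """Stand-in for a call to the provider's provisioning API."""
    print(f"Provisioning {n} hosted server(s)...")
    return [f"hosted-{i}" for i in range(n)]

def handle_peak(requests_per_minute, peak_hours):
    servers_needed = -(-requests_per_minute // REQUESTS_PER_SERVER)  # ceiling division
    overflow = max(0, servers_needed - ONPREM_CAPACITY)
    if overflow == 0:
        print("Peak absorbed in-house; no hosted capacity required")
        return []
    hosted = provision_hosted_server(overflow)
    estimated_cost = overflow * HOSTED_HOURLY_RATE * peak_hours
    print(f"Burst of {overflow} hosted servers for ~{peak_hours}h, "
          f"estimated at ${estimated_cost:.2f} - log it centrally, "
          f"not against someone's credit card")
    return hosted

handle_peak(requests_per_minute=14_000, peak_hours=6)
```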

In conclusion, any notion of cloud computing devaluing the role of service delivery managers is a bit of a myth. Perhaps there will be less need for some routine manual activities, but the impact of externally provided services is to make things more complex, not simpler, and management skills will need to respond accordingly. Indeed, the opportunity is to look for places where the management of less business-critical capabilities, or the more mundane elements of low-level infrastructure, can be offloaded, freeing up valuable human resources for the higher-order challenges that will undoubtedly emerge. ®

