Hybrid cloud: The 'new' but not-new IT service platform

The combination effect

Today, the term hybrid IT is typically used when talking about bridging IT on multiple premises. But this is an oversimplification. Buried deep within any hybrid IT discussion will be a need to talk about standards, compliance and some difficult decisions about how we even conceptualize our approach to IT.

As a marketing term “hybrid-anything” means the integration of two things that were previously separate. A hybrid storage array contains both flash and magnetic media. Hybrid WAN networking is a network topology containing more than one connection type, for example MPLS and an internet-based tunnel.

In 2017, however, when we talk about "hybrid IT" we're talking about a combination of on-premises IT and public cloud IT. Sometimes we might even throw in service-provider hosted IT as well. But we should always be clear what sort of hybridization we are talking about.

IT is already hybrid

Giving a special marketing label to multi-premises and/or multi-provider IT sets it apart. Without even digging into the label itself, we're conditioned to think of it as something different, something that requires special consideration.

Throwing the word "hybrid" at it implies that we should think of multi-premises IT as a novelty bringing dramatic changes, especially in ease of use. Just as smooshing a PDA, an MP3 player and a mobile phone into the smartphone changed the world, hybrid IT will change the face of IT!

However, there is nothing special, novel or unique about hybrid IT. It isn't something that you should consider doing. It isn't something you need to draft long term plans for. Hybrid IT is, except in exceptionally niche cases, something you are already doing.

You may want to call it multi-premises IT, multi-provider IT or hybrid IT. I simply call it IT. It's the people who only do on-premises IT or public cloud IT that are weird. The overwhelming majority of organisations that use some combination of public, private and service provider solutions are perfectly normal.

That said, just because hybrid IT is common doesn't mean it's being practised efficiently. There are still a lot of things to learn about the how and why of it.


IT services can be broken into three broad categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). I know you've heard those categories before; they were standard marketing shtick for public cloud providers for the better part of ten years.

IaaS has no special sauce. Push button, receive operating system. It doesn't matter whether you're on premises, in AWS or a service provider cloud, it's just a VM with an OS in it. PaaS requires a little bit more attention, but excepting a few version constraints, a LAMP server is a LAMP server, regardless of the provider. Hadoop is Hadoop, no matter who is standing it up.

IaaS and PaaS are easy to do as a multi-provider affair. If one scripts the bits that need to be built on top of a bare OS or a standardized platform, then shopping around for whoever provides the lowest cost, lowest latency and/or best service is easy. IT infrastructure becomes a true commodity.
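A minimal sketch of that scripting idea, under the assumption that everything above the bare OS is expressed as an ordered list of steps. The `run` callable here is a stand-in for whatever transport reaches the VM (SSH, cloud-init, a config agent); the package list is illustrative, not a complete LAMP build.

```python
# Provider-agnostic provisioning sketch: the same ordered, scripted build
# runs unchanged on-premises, on AWS, or at a service provider, because
# nothing in it references a specific cloud.

LAMP_STEPS = [
    "apt-get update",
    "apt-get install -y apache2 mariadb-server php libapache2-mod-php",
    "systemctl enable --now apache2 mariadb",
]

def provision(run, steps):
    """Apply each step via the provided transport; order matters."""
    for step in steps:
        run(step)

# For illustration, capture the commands instead of executing them.
executed = []
provision(executed.append, LAMP_STEPS)
print(len(executed))  # 3 steps, identical regardless of provider
```

Because the provider only supplies "push button, receive operating system", swapping providers means swapping the transport, not the build.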

SaaS is different. SaaS has traditionally been something that is consumed as a contract with the SaaS developer directly, with the platform they run it on never part of the discussion. One would get a Dropbox subscription, but wouldn't specify "Dropbox on Google Cloud".

This is starting to change. Smaller vendors are taking advantage of the marketplaces offered by public, private and service provider clouds. Salesforce, for example, is large enough to bully customers into ignoring which cloud provider they use, but provides services on all the major public clouds.

For those who want to keep all their public cloud IT restricted to one provider, this is great news; SaaS providers are falling in line, and tight integration between workloads can occur with minimal latency and a single bill to pay. For those who wish to spread their workloads across multiple providers it is also great news: chances are good that, with a little effort, infrastructure provider resiliency can be achieved for all workloads.

Our reliance on IaaS or PaaS for cross platform capabilities is decreasing.

Cloud brokerage

IaaS and PaaS are not inherently bad. There is something to be said for the old-school hybrid IT dream of lighting up a workload on-premises and simply migrating it, unchanged, to a service provider or public cloud provider. Ravello, tragically bought by Oracle, was really good at this. VMware is increasingly able to do this with its cloud offerings. Other providers are expected to deliver on this in 2017.

This has created a market for cloud brokers: third parties that will find you the best place to run your workloads based on criteria you select. These criteria can be price, latency, data sovereignty, data locality, regulatory certification and so forth.

As provider prices and capabilities change, the cloud broker will advise clients to move workloads. Depending on how deeply integrated the cloud broker's software is with the customer's infrastructure, they may even be able to trigger workload migrations for the customer.
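The core of that placement decision can be sketched as a weighted scoring pass over candidate providers. This is a hypothetical illustration, not any real broker's algorithm; the provider names, prices and latencies are invented, and data sovereignty is modelled as a hard pass/fail gate rather than a weighted factor.

```python
# Hypothetical cloud-broker placement sketch: score each provider against
# customer-weighted criteria, discard any that fail hard constraints, and
# recommend the best remaining candidate. All figures are illustrative.

PROVIDERS = {
    "provider-a": {"price": 0.09, "latency_ms": 40, "in_region": True},
    "provider-b": {"price": 0.07, "latency_ms": 55, "in_region": True},
    "provider-c": {"price": 0.05, "latency_ms": 120, "in_region": False},
}

def score(p, weights, require_in_region=True):
    """Higher is better; None means the provider is disqualified outright."""
    if require_in_region and not p["in_region"]:
        return None  # fails data sovereignty, price is irrelevant
    # Lower price and latency are better, so negate the weighted sum.
    return -(weights["price"] * p["price"] + weights["latency"] * p["latency_ms"])

def recommend(providers, weights, require_in_region=True):
    scored = {name: s for name, p in providers.items()
              if (s := score(p, weights, require_in_region)) is not None}
    return max(scored, key=scored.get)

weights = {"price": 100, "latency": 0.1}
print(recommend(PROVIDERS, weights))  # provider-b: cheap, and in-region
```

Note that the cheapest provider loses on sovereignty: this is exactly the sort of trade-off a broker re-evaluates as prices and capabilities change.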

It is, of course, possible to build a virtual cloud broker as a software widget. This could live on the customer's premises, or in a hosted instance they control. It could read publicly available data, or receive updates from the cloud broker's service. While this is great for customer control over workload placement, it may not be the optimal approach.

Cloud brokers that more directly control the placement of customer workloads gain the ability to bargain collectively on behalf of their clients. While a government or a Fortune 500 company may be able to command discounts or concessions from a service provider or public cloud provider, no SME or mid-market company can.

A cloud broker advocating for thousands of smaller organisations, however, can start commanding discounts and concessions from vendors of all sizes. Some next-generation "workload brokers" are seeking to exercise that capability in both directions: squeezing hosting providers when placing workloads off site, and squeezing tin providers when buying on-premises infrastructure for local placements.

In a world of hyperscale providers, banding together makes good sense. The bit where SaaS applications are also starting to be available on multiple platforms is plugging the last hole in this approach. Elimination of platform exclusivity puts the power back in the hands of the customers instead of hosted infrastructure providers.

The need for standards

The ability to easily move workloads from A to B is a prerequisite to play the multi-premises IT game properly. As a bare minimum there needs to be a way to get data and configurations from one place to another.

This begins a discussion about standards. Consider for a moment what it takes to move an IaaS workload around. The easiest way to do this is to simply move VMs from A to B. That isn't always possible, or easy, so oftentimes workloads are rebuilt on the destination infrastructure rather than migrated. How you go about this depends on whether or not the workloads in question are template-based or recipe-based.

Recipe-based workloads light up a blank VM, inject an operating system and an agent. The agent checks with a master server to see what configuration it should have, pulls that down, installs relevant applications, attaches data storage and applies application configurations to use that data. Everything is scripted and automated, and when it works it's lovely. For recipe-based workloads to work across providers, however, a common recipe infrastructure needs to exist with each provider.
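The recipe-based flow described above can be sketched as a tiny pull agent. This is a toy model, assuming the "master server" is just a catalog of desired state keyed by role; in practice that role is filled by a config-management server (the Puppet/Chef/Salt family works broadly this way), and the role name and catalog contents here are invented.

```python
# Hedged sketch of a recipe-based workload: a freshly booted VM runs an
# agent that pulls its desired state from a master catalog, then computes
# the actions needed to converge on it.

MASTER_CATALOG = {  # hypothetical desired state, keyed by role
    "web": {
        "packages": ["nginx", "php-fpm"],
        "storage": "/dev/datavol -> /srv/www",
        "config": {"worker_processes": 4},
    },
}

def agent_converge(role, installed, catalog=MASTER_CATALOG):
    """Pull this node's recipe and return the actions needed to converge."""
    recipe = catalog[role]
    actions = [f"install {pkg}" for pkg in recipe["packages"]
               if pkg not in installed]          # idempotent: skip what's there
    actions.append(f"attach {recipe['storage']}")
    actions.append(f"apply config {recipe['config']}")
    return actions

print(agent_converge("web", installed={"php-fpm"}))
# Only nginx needs installing; storage and config are always (re)applied.
```

The cross-provider catch the article describes falls straight out of this sketch: the agent is useless unless an equivalent `MASTER_CATALOG` service is reachable from every provider you might place the workload on.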

Templates are more traditional. A golden master image is built and curated. It is copied, the copy is generalized, and some post-clone scripts might run to make customization easier. Usually manual intervention is required to finalize things before the workload is ready to go. Some template approaches use a more recipe-like final stage to remove the manual intervention.

Standards really help here.

PaaS tries to eliminate this by removing the need to light up the whole OS, providing instead a pre-canned environment with several basic applications. Success depends again on standardization; how alike are PaaS offerings by different providers?

SaaS should "just work" between different providers, but often doesn't. There are big gaps in functionality between QuickBooks Online and QuickBooks on-premises, for example. Some vendors simply have different teams working on the solutions built for different platforms, and the offerings diverge over time.

Knock-on effects

In addition to raw workload compatibility and implementation standards, you need to standardise governance. This can be problematic as different providers have different approaches to management, monitoring and alerting. On premises you might be able to encrypt all workloads at the array level, for example, but need to rely on a per-workload encryption solution in the public cloud, with a completely different approach to key management.

As multi-premises and multi-provider IT expands its reach within organisations, IT governance shifts from merely worrying about whether workloads are online and functional to tracking where workloads live, what the data protection and privacy implications of workload placement are, and other risk and compliance concerns.

In 2005, with entirely on-premises IT that had patch schedules under complete control of organizational IT teams, for example, it was quite rare to include concerns about a third party API change in one's risk assessment.

Just as cumulative updates remove a great deal of fine-grained control over patching, increasing the likelihood of an update breaking a third-party application before a fix is available, APIs are a weakness too. Today, a provider API change could strand a significant number of an organization's workloads, wreak havoc with application integration or even break encryption.


Hybrid IT is what we're doing right now, today. Collectively, we will only increase the diversification of workload placement with time.

In order to minimize risk, increase efficiency and generally not make complete fools of ourselves we're going to have to start working with third parties to provide a layer of "glue" between our on-premises IT efforts and those bits of our business we've entrusted to others. These third party providers might be cloud brokers, amplifying our bargaining power and finding us better prices.

These third-party providers might also be more traditional consultants or channel partners, helping us as they always have: figuring out how to glue all these various techno-whatsits together. Only now with more cloud, APIs and, sometimes, uncertainty than ever before.
