At long last, Google's cloud has an on-premises extension. That extension is... Scale Computing? The cloud giant and hyperconverged infrastructure (HCI) vendor have said they will build a service with some interesting potential.
The collaboration started a couple of years ago when Scale was approached by the Chocolate Factory to work with them on a new project. Google came to Scale on a purely nerd-to-nerd basis, as some staff had been impressed by the upstream code contributions that Scale had been making. The result is nested virtualisation support within Google Cloud Platform (GCP).
This allows Scale to virtualise an instance of its hyperconvergence software. Combined with some software-defined networking wizardry and an on-premises virtual machine, this GCP/Scale instance can be used by a customer's existing on-premises cluster for backups and disaster recovery, just as they would use a traditional physical appliance for cluster-to-cluster backups.
Because the GCP instance is just a hypervisor – along with any virtual machines it hosts – running inside a Google Cloud VM, that VM can be resized as needed. This means that while serving as nothing more than a backup point, the instance could be set to consume minimal CPU and RAM with a large disk. If required to assume a disaster recovery role, the amount of RAM and CPU could be increased until it could run all the required VMs.
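To make the resizing idea concrete, here is a minimal sketch of how an operator might script that role change with the `gcloud` CLI. The sizing profiles, machine-type choices, and instance name are assumptions for illustration – nothing here is Scale's or Google's actual tooling; only the `gcloud compute instances set-machine-type` verb is real GCP.

```python
# Illustrative sketch only: the machine types and instance name below are
# hypothetical, chosen to show the backup-vs-DR sizing trade-off.

# Hypothetical sizing profiles: tiny for a backup target (storage does the
# work), beefy for disaster recovery (must actually run the customer's VMs).
PROFILES = {
    "backup": "e2-small",
    "disaster-recovery": "e2-standard-16",
}

def resize_command(instance: str, zone: str, role: str) -> str:
    """Build the gcloud command that would resize an instance for a role."""
    machine_type = PROFILES[role]
    return (
        f"gcloud compute instances set-machine-type {instance} "
        f"--zone {zone} --machine-type {machine_type}"
    )

print(resize_command("scale-hci-node", "us-central1-a", "disaster-recovery"))
```

Note that GCP only allows a machine-type change while the instance is stopped, so a real failover script would stop the VM, resize it, and start it again.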
From Scale's standpoint, this announcement is a significant achievement. And to be fair, the work they put into creating a push-button simple layer 2 networking bridge between an on-premises Scale cluster and a GCP instance is worthy of a little praise. It's not an easy thing. The Google Cloud instance appears to the customer as though it were part of the same subnet they use internally. Compared to that, virtualising an instance of their own software is child's play.
For Google, it is a nice first step towards a fully functional hybrid cloud.
But – and it's a big but – Scale Computing and GCP have a very long way to go before they are even in the same league as full-featured offerings like Azure Stack.
Of greater interest to me are the wider implications of qualifying this technology for general availability. Doing a first rollout with Google is reasonably straightforward, because Google uses KVM, and nested virtualisation with KVM can be accomplished with almost zero overhead compared to running on bare metal. It is one of the things KVM is very, very good at.
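On a KVM host, nested virtualisation is gated by a kernel module parameter, so checking whether a given host could run a nested hypervisor at all is a one-liner. A minimal sketch, assuming standard sysfs paths (this is just a probe, not Scale's qualification process):

```python
# Check whether nested virtualisation is enabled on a KVM host by reading
# the 'nested' parameter of whichever KVM module (Intel or AMD) is loaded.
from pathlib import Path

def nested_kvm_status() -> str:
    """Return the raw parameter value ('Y'/'1' enabled, 'N'/'0' disabled),
    or 'unknown' if neither kvm_intel nor kvm_amd is loaded."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            return param.read_text().strip()
    return "unknown"

print(nested_kvm_status())
```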
In theory, any service provider willing to stand up a KVM-based environment with the right configuration could also support a Scale hybrid infrastructure. While Amazon uses Xen and Microsoft uses Hyper-V, in theory each of these hypervisors could be set for nested virtualisation.
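For a KVM-based provider, the key configuration step is exposing the host CPU's virtualisation extensions (Intel VMX or AMD SVM) to the guest so the nested hypervisor can use them. With libvirt, a domain definition fragment along these lines would do it – illustrative only, not Scale's actual requirements:

```xml
<!-- Fragment of a libvirt domain definition: pass the host CPU, including
     its VMX/SVM virtualisation extensions, straight through to the guest
     so a nested hypervisor can run inside it. -->
<cpu mode='host-passthrough'/>
```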
I have no real idea what the efficiency of such a solution would be, and it would be quite interesting to see whether the finely tuned versions of these hypervisors run by the cloud giants could run Scale's nested hypervisors anywhere near as close to bare-metal speed as Google's KVM-based cloud does.
Smaller service providers may of course run whichever hypervisor they wish. There are service providers around the world with significant investments in KVM-based clouds. Most clouds running OpenStack, for example, run KVM. Down the road, this could lead to smaller regional service providers with an existing business of being managed service providers for small businesses standing up Scale-based hybrid clouds.
I find this Scale/GCP tie-up interesting not so much because of Google's involvement, but because of the non-Google things that can be done.
Hybrid infrastructure for the rest of us
Scale is best known as an HCI vendor that targets the small business market. Instead of taking an enterprise-focused solution, crippling it, and offering it at a still-unaffordable price, Scale built its product for small businesses and decided it would add enterprise-class features as and when customers asked for them.
Unofficially, Scale's target market is the business with one or two sysadmins who have only ever really administered Windows, are afraid of the command line, and just don't want to have to bother with infrastructure. This has resulted in a robust and largely automated solution with very few nerd knobs to twiddle. It does the job, and generalist admins who don't have the time to become specialists can't get themselves into too much trouble using it.
As I see it, this makes Scale an attractive option for managed service providers. Channel players of all sizes are looking for a business model that allows them to survive in a world increasingly dominated by public cloud providers in general, and Amazon's AWS in particular. The "feed your customers to a public cloud provider" approach isn't viable in the long run, so a lot of channel partners are standing up their own clouds.
Scale's hybrid infrastructure offers service providers a couple of different models. A service provider can opt to serve as a backup and disaster recovery point for its existing customer base. Alternatively, the service provider could run the majority of the customer's workloads on its cloud and extrude a physical Scale Computing appliance onto the customer's premises to handle only those workloads that absolutely must remain on site.
The push-button layer 2 networking voodoo that Scale has created to make the GCP Scale instance appear on the same subnet as a customer's on-premises network can also work for service providers.
I won't exactly leap for joy at the idea that I can magically back up my workloads to Google's cloud. I don't trust the government to which Google is beholden, and Google rolls over for it. Regional service providers working hand-in-glove with SMBs to create hybrid infrastructures, however, is a concept of which I am a strong proponent.