There's a lot of talk – some might say hot air – about cloud computing, what it is and what it is not. Ask 10 people and you will probably get 15 answers.
Take the formal definition of cloud put forward by the National Institute of Standards and Technology (NIST), the agency of the US Department of Commerce that for more than a century has been obsessed with measurements and definitions.
It took 15 revisions and nearly three years for NIST to come up with its formulation of what constitutes a cloud. It was released in September 2011 and the grousing has continued ever since about how this, that and the other needs to be added to the definition.
NIST defines cloud as having five essential characteristics, three service models and four deployment models.
The five essential characteristics are on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service.
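Those characteristics are abstract, but a toy resource pool makes most of them concrete. The sketch below is purely illustrative – the class and method names are invented, not part of the NIST definition – but it shows on-demand self-service, resource pooling, rapid elasticity and measured service in a few lines:

```python
# Illustrative sketch only: a toy "cloud" demonstrating four of NIST's
# five essential characteristics. All names here are invented.

class CloudPool:
    def __init__(self, capacity_vcpus):
        self.capacity = capacity_vcpus      # resource pooling: shared capacity
        self.allocated = {}                 # vm_id -> vcpus currently in use
        self.usage_log = []                 # measured service: metered usage
        self._next_id = 0

    def provision(self, vcpus):
        """On-demand self-service: any caller can request capacity instantly."""
        if sum(self.allocated.values()) + vcpus > self.capacity:
            raise RuntimeError("pool exhausted")
        self._next_id += 1
        vm_id = f"vm-{self._next_id}"
        self.allocated[vm_id] = vcpus
        self.usage_log.append(("provision", vm_id, vcpus))
        return vm_id

    def release(self, vm_id):
        """Rapid elasticity: capacity goes back to the pool just as quickly."""
        vcpus = self.allocated.pop(vm_id)
        self.usage_log.append(("release", vm_id, vcpus))

pool = CloudPool(capacity_vcpus=8)
vm = pool.provision(vcpus=2)      # instant, no human in the loop
pool.release(vm)                  # scale back down on demand
```

The fifth characteristic, broad network access, is what the API discussion below is really about: none of this counts for much unless it is reachable over the network.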
The three service models – infrastructure, platform and software as a service – cloudify the infrastructure, platform or application software layers and expose them to customers as services with the characteristics above, at increasing levels of abstraction from the underlying servers, storage, switching and systems software.
NIST recognises private clouds (built for exclusive use), public clouds (run by a service provider with capacity and services shared by multiple tenants), and community clouds (organised around a group of users rather than a particular technology).
Hybrid cloud, in the NIST definition, is a composition of two or more distinct cloud infrastructures that offers cloud bursting, load balancing and portability across different kinds of clouds.
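Cloud bursting is the easiest of those to picture: a scheduler places workloads on the private cloud until it is full, then "bursts" the overflow onto public capacity. A minimal sketch, with all pool names and sizes invented for illustration:

```python
# Toy hybrid-cloud placement: burst to public capacity when private is full.
# Purely illustrative; real schedulers weigh far more than vCPU counts.

def place_workloads(workloads, private_capacity, public_capacity):
    """Return {workload: pool} placements, preferring the private pool."""
    placements = {}
    private_free, public_free = private_capacity, public_capacity
    for name, vcpus in workloads:
        if vcpus <= private_free:           # keep it inside the firewall
            placements[name] = "private"
            private_free -= vcpus
        elif vcpus <= public_free:          # burst to the public cloud
            placements[name] = "public"
            public_free -= vcpus
        else:
            placements[name] = "rejected"   # no capacity anywhere
    return placements

demand = [("web", 4), ("batch", 6), ("dev", 2)]
print(place_workloads(demand, private_capacity=8, public_capacity=8))
# → {'web': 'private', 'batch': 'public', 'dev': 'private'}
```

Portability is the hard part hiding in that sketch: bursting only works if the workload runs unchanged on both infrastructures.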
Virtualization – by which we mean abstracting server compute and memory capacity, as well as networking I/O and storage capacity, whether residing in those servers or in external arrays – is obviously the key means of enabling resource pooling.
It's API hour
"But there is more to it than that," says Tony Campbell, director of OpenStack training and certification operations at Rackspace.
"With OpenStack, we are really big on APIs. We think that for it to be a cloud, everything has to be accessible via an API. This allows developers to write applications for desktops, mobile devices or whatever thin or thick clients they like because the APIs expose all of that functionality. So virtualization without an API – not cloud."
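Campbell's point is that the provisioning interface itself must be a network API, so any client – thick, thin, mobile or script – can drive it. The sketch below is the smallest possible illustration using only the Python standard library; the endpoint and payload shape are invented, and real cloud APIs (OpenStack's included) are far richer REST services:

```python
# Illustrative only: a toy HTTP "cloud API". Any HTTP client can now
# provision a "VM" -- which is Campbell's test for what makes a cloud.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVERS = {}  # vm name -> flavor; our toy inventory

class CloudAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # POST with a JSON body provisions a "VM" and returns its state.
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        SERVERS[req["name"]] = req["flavor"]
        body = json.dumps({"status": "ACTIVE", "name": req["name"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), CloudAPI)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A three-line client: a dashboard or mobile app would use the same call.
url = f"http://127.0.0.1:{server.server_port}/servers"
payload = json.dumps({"name": "web-1", "flavor": "small"}).encode()
reply = urllib.request.urlopen(urllib.request.Request(url, payload)).read()
print(json.loads(reply))  # → {'status': 'ACTIVE', 'name': 'web-1'}
server.shutdown()
```

Swap the dictionary update for a call into a hypervisor and the shape of the argument is unchanged: the API, not the virtualization underneath, is what the clients see.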
In Campbell's augmented definition, elasticity, or the ability to fire up more virtual machines or fatter ones on a hypervisor, is not sufficient.
"The cloud has spoiled us," he says. "We know we can click on a dashboard and instantly have access to more resources. And we are addicted to that. Standing up bare metal, installing a hypervisor and releasing virtual machines on it – and that process taking several days – is no longer acceptable."
So speeding up virtualization and access to virtual CPU, memory, I/O, and storage capacity is, for some, also part of the cloud definition.
VMware, which is trying to extend its dominance in x86 server virtualization into a similar juggernaut position in cloud computing with its vCloud Suite, wants to add network and storage virtualization to the definition of what comprises a cloud.
"Virtualization is simply the abstraction of compute and memory, and in its current instantiation at the cluster level. Cloud computing – done right – is about going beyond those two constraints to the full set of data centre services," says Neela Jacques, director of product marketing for VMware's cloud infrastructure suite.
"You truly have to virtualize networking and storage arrays. We have to take the concepts that started with virtualization and take them up to the nth level – being able to load balance across clusters and going beyond just compute and memory."
To some people, says Jacques, cloud is something different from what NIST, Rackspace, VMware and their peers would generally agree on. Vendors with expertise in system management and provisioning tools want to solve the complexity that exists in all data centres – the very reason companies are willing to engage with cloud computing in the first place.
They want to hide that complexity behind one thin layer that sits between the end-user and the disparate infrastructure, then script all of the resources underneath to work together.
"If you are a management vendor, cloud looks an awful lot like management," says Jacques.
"They have a CMDB (configuration management database), they have extensive orchestration, they have a support desk and a catalogue already. Basically, they go with what they know.
"What management vendors would like you to swallow is the idea that the world is complex, it is always going to be complex and you are not going to be able to simplify it, so what you should do is buy a scripting platform so you can provision to any one of those things."
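The "thin layer plus scripts" approach Jacques describes is essentially a driver model: one provisioning interface, a back-end driver per infrastructure, and scripts that only ever talk to the interface. A sketch, with every class and driver name invented for illustration:

```python
# Illustrative sketch of the management-layer approach: callers never see
# which back-end does the work. All names here are invented.

class Driver:
    def provision(self, name):
        raise NotImplementedError

class VMwareDriver(Driver):
    def provision(self, name):
        return f"vmware:{name}"      # stand-in for a vCenter API call

class EC2Driver(Driver):
    def provision(self, name):
        return f"ec2:{name}"         # stand-in for an EC2 API call

class ManagementLayer:
    """The thin layer: one interface in front of many infrastructures."""
    def __init__(self, drivers):
        self.drivers = drivers

    def provision(self, backend, name):
        return self.drivers[backend].provision(name)

mgmt = ManagementLayer({"vmware": VMwareDriver(), "ec2": EC2Driver()})
print(mgmt.provision("ec2", "web-1"))  # → ec2:web-1
```

The trade-off Jacques is gesturing at is visible even here: the layer papers over differences between back-ends rather than removing them, so every new infrastructure means another driver to write and maintain.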
VMware has made investments to shore up its position in the hybrid cloud arena, notably the acquisition of DynamicOps in July. It is unabashed about wanting to be the dominant cloud provider, and about the fact that for most customers today, hybrid cloud means VMware inside the firewall and Amazon EC2 outside it.
"When we talk about cloud, customers have a basic virtualized environment and we want to make that environment better," says Jacques.
"That means increasing performance so more workloads can move onto hypervisors, and supporting new technologies like SR-IOV. It is already superior to the physical world but we have to make it easier, which is what the vCloud Suite is all about.
"For VMware, the biggest impact we can have is to deliver the best platform for all apps, and that is where we put 80 per cent of our efforts. We recognise, however, that people have environments beyond that and we are making investments via what we are building as well as acquisitions to cover more of them."