El Reg guide to the Private Cloud

How to decide what's right for the business

Anyone researching private cloud runs up against a wall of definition and counter-definition. “Private cloud isn’t real cloud computing,” according to one vendor, in phrasing redolent of Monty Python’s Life of Brian.

This debate will run and run, but much of the discussion about private cloud computing to date concerns the more efficient use of infrastructure.

The theory goes that business departments gain easier access to computing power and storage, as onerous hardware procurement cycles are replaced by a simple allocation of virtual resources.


It sounds marvellous. However, like other quasi-utopian visions based on distribution of abundant resources, it is hard to achieve.

But in principle the benefits of the public cloud can also be achieved by in-house systems - if they are built in the right way.

Fundamentally, the model is predicated on applications, rather than hardware, driving the provision of virtual machines. Just as Amazon, Microsoft or IBM can offer hosting facilities based on the rental of virtual machines, so can, well, anybody, including internal IT.

The theoretical benefits are all that you would expect: improved service delivery, reduced infrastructure and licensing costs, better use of assets, ability to scale according to application needs, and so on. What’s not to like?

Nothing going on but the rent

The idea of a pay-per-use model for IT provisioning and delivery is nothing new. It has gone under different names, such as cross-charging, and features as an element of most best-practice frameworks, including ITIL and COBIT.

Maturity models for IT best practice generally have a 'Stage 4' in which IT can be delivered dynamically, adaptively or on demand, depending on who you ask.


The principles may be there, but the practice – as anyone who has tried it knows – is difficult to achieve. Wave upon wave of infrastructure technologies has appeared, from blade servers to grid computing, each promising dynamic IT nirvana.

Look around and we find that we are still pretty much where we were ten or 15 years ago.

What's different today is the adoption of application-driven server virtualisation, which may hold the key to making dynamic IT work.

Virtual machines are only one part of the answer: we are also seeing server, storage and network architectures designed from the ground up with virtualised resource pools in mind, as well as enhancements to management and support tools.

Cloud may indeed be hyped up in terms of what it can achieve for organisations today, but it is nonetheless catalysing innovation in the industry.

Safe niches


These are early days for adopters as well. Few organisations can afford to take a rip-and-replace approach to their IT systems, however compelling the white papers might be. Even if they had the cash, the risks of such a large-scale transition can soon dampen ambitions.

Unsurprisingly, there is more interest in niche areas – deploying new infrastructure for business analytics, or development and testing, or pharmaceutical research, for example, while leaving core infrastructure well alone.

The advantage of such a niche-oriented approach is not only that risks can be kept under control, but also that investment can be drawn from planned new build rather than from harder-to-justify legacy replacement.

A well-bounded private cloud model offers a better starting point for dynamic provisioning. Instead of the conversation going: “We are going to completely change the way in which we allocate IT resources, and you will have to cope with the consequences,” it becomes: “We’re implementing a private cloud for use by development and test. If you would also like to access some low-cost resource, then here’s the procedure.”
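
What might that procedure look like in practice? As a minimal sketch, assuming a hypothetical internal provisioning API – the endpoint, field names and lease mechanism below are invented for illustration, not any particular product's interface – a self-service request might be as simple as:

```python
# Minimal sketch of a self-service request to a hypothetical internal
# provisioning API. Endpoint and field names are invented for illustration.
import requests

request = {
    "requester": "jane.doe",       # recorded for inventory and chargeback
    "environment": "dev-test",     # the bounded pool this request draws from
    "vm_count": 2,
    "vcpus_per_vm": 4,
    "memory_gb": 8,
    "lease_days": 30,              # resources reclaimed when the lease expires
}

resp = requests.post("https://cloud.internal.example/api/v1/requests",
                     json=request, timeout=10)
resp.raise_for_status()
print("Request ticket:", resp.json()["ticket_id"])
```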

It may be that such pockets of private cloud grow and expand, accommodating more users and subsuming broader elements of IT.


Who’s for the Private Cloud?

We are working neither with a perfected set of tools nor on greenfield sites. Provisioning is the fundamental weak link: getting it right is the clearest possible measure of success.

Equally, get it wrong and the symptoms (and cost impacts) will be obvious: proliferation of virtual machines, no visibility of who is using what and why, difficulty in responding to problems, and long lead times for allocating resources.

Against this background, when does private cloud make sense?

According to some, there is no point in running anything internally because everything can be done far more cheaply in the public cloud. And yes, it is true that even large enterprises lack the buying power of cloud service providers, so building a private cloud infrastructure inevitably costs more than renting someone else’s.


Equally, some tasks are obviously better candidates for a public cloud than a private one: one-off processing jobs not involving sensitive data, website back-ends for services which face unpredictable demand, and so on.

Cost is the main concern, but it is not the only factor for such tasks: data sensitivity, architectural suitability and management capability all come into play, as we shall see.

First, given that entire sectors, such as finance and government, are unprepared to trust confidential data to third parties, it’s a bit rich for pundits to say they should just get on with it. They are not ready to do that yet, confirms Tim Bullock, head of IT at BNP Paribas’ offshore operation, which has spent the past two years implementing a private cloud.

“In financial services, there is a focus on ‘Where’s my data?’” says Bullock. “We were moving from a very physical infrastructure environment, a lot of which needed replacing.

"A private cloud may have been initially more expensive than if we’d worked with a cloud provider, but we have achieved many of the benefits that cloud architecture can bring.”

Reassuringly expensive

Private cloud is not like-for-like cheaper than public cloud, but it stacks up pretty well against the alternative of running internal IT systems as physical silos.

In particular, BNP Paribas has flipped the balance: where the infrastructure team once spent two-thirds of its time on business-as-usual activity and one-third on helping to move the business forward, the proportions are now reversed.

“We’ve gone from being a team of plumbers to an agile, business-focused group,” says Bullock.

“The infrastructure teams are far more engaged with the business. As a result IT is seen as less of a cost centre and is better appreciated.”

Of course there will always be workloads that it doesn’t make sense to migrate into the private cloud. Some legacy applications, for example, designed with all the architectural elegance of a hippo, would be no better off running in a private cloud even if it were cost-effective to port them. They might as well stay where they are.

Pain of separation

Equally, some organisations may face tighter restrictions on data governance – government departments, for example, or others where legislation or risk assessments dictate that IT systems be kept separate.


But there is a place for building a secure private cloud exclusively for such information processing.

Other workloads may not be candidates at first but may eventually become so. At BNP Paribas for example, the private cloud was initially deployed to accommodate general-purpose IT facilities such as Windows and Linux-based applications, but attention has turned to more architecturally constrained areas such as the Citrix-based thin-client environment and the database servers.

“We’re investigating migrating them in,” says Bullock.

Few organisations are as advanced as BNP Paribas, and private cloud deployments seen elsewhere have tended to avoid mission-critical business systems, focusing instead on unpredictable workloads, one-off jobs and transient applications.

Where's my crystal ball?

“The key environments for private cloud we are seeing now tend to be project-based,” says Andi Mann, vice-president of virtualisation product marketing at CA Technologies.

There’s plenty that fits into this category, of course, from engineering systems and one-off analytics jobs through test environments to web and collaboration services.

Predictability, or rather the lack of it, is the key here: workloads that are harder to define up front are seen as candidates for the private cloud.

Unsure how many people are going to be using the service? Need to build something, then perhaps take it down and build something similar once you’ve worked out the details? Want to run a job quickly, without jumping through procurement hoops that would make it unfeasible anyway? Then we’ll see what the private cloud can do for you.

Best practice makes perfect

Die-hard IT service management experts may say they have seen it all before, and to an extent, they would be right.

Over the next couple of years, we will no doubt see best practice re-emerge around keeping inventory. Already conversations are returning to topics such as the role of the configuration management database as the central hub of IT activity.

Event management, problem management, patch management and other such best practices will probably follow suit.

It is dangerous, however, to assume that the best practices founded on large-scale systems will automatically translate to more dynamic environments.

If speed of provisioning becomes the main indicator of success in more agile IT environments, practices need to be directed towards keeping things moving. At the same time, environments must be kept under control if responsiveness is not to suffer later.

After all, IT exists to help the business and not IT managers, however tempting it might be to think otherwise.

The brawl over virtual sprawl


As anyone who has managed a large-scale IT environment knows, it was tricky enough to keep tabs on everything even before it was possible to create virtual hardware assets out of the ether.

Virtual servers, virtual disks, virtual networks and their associated ports, virtual devices and cloned copies of software packages have all become ridiculously easy to create, without becoming any easier to manage.

Chaos can quickly ensue if controls are not in place. The phrase “VM sprawl”, coined in the past decade, encapsulates the main symptom – large numbers of virtual machines created by their own users, with no one keeping a proper account of who is using them, how important they are and what their software configurations are.

The last point is important, as it affects software licensing. Some licences are device-based (virtual or otherwise), and therefore still apply whether or not the virtual machine is switched on.

It is not hard to picture an organisation falling foul of its licensing regime, perhaps by cloning a virtual machine ten times to test different configuration tweaks. Just because you can do it doesn’t mean that you should.
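
The arithmetic of that trap is trivial but easily overlooked. A toy calculation, with an invented per-device price:

```python
# Toy illustration of the device-based licence trap: ten clones mean
# ten more licensable devices, whether or not they are powered on.
per_device_licence = 500   # cost per licensed device, invented figure
clones = 10                # the VM cloned ten times for config tests
print(f"Extra licence liability: £{clones * per_device_licence}")  # £5000
```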

The answer lies in setting up and enforcing IT management controls. BNP Paribas, for example, puts controls in place up front to prevent a free-for-all. “We control it tightly,” says Bullock.

“We have a limit on who can create virtual machines and keep an inventory of them all. Management of the environment is important and we treat virtual machines just the same as physical machines.”
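
By way of illustration, the controls Bullock describes might take roughly the following shape: restrict who may create virtual machines, and refuse to create one without an inventory record. This is a hypothetical sketch, not BNP Paribas’ actual tooling.

```python
# Hypothetical sketch of VM creation controls: a whitelist of creators
# plus a mandatory inventory record for every machine.
from dataclasses import dataclass, field
from datetime import date

AUTHORISED_CREATORS = {"infra-team", "release-mgmt"}

@dataclass
class VMRecord:
    name: str
    owner: str     # who is using it
    purpose: str   # why it exists
    config: str    # software configuration, also needed for licence tracking
    created: date = field(default_factory=date.today)

inventory: dict[str, VMRecord] = {}

def create_vm(requested_by: str, record: VMRecord) -> None:
    """Register a VM; refuse unauthorised creators and duplicate names."""
    if requested_by not in AUTHORISED_CREATORS:
        raise PermissionError(f"{requested_by} may not create VMs")
    if record.name in inventory:
        raise ValueError(f"{record.name} is already registered")
    inventory[record.name] = record   # no record, no machine

create_vm("infra-team", VMRecord("build-01", "dev team", "CI builds", "linux-gcc"))
```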

At first glance, it might appear that BNP Paribas has missed the point. Isn’t that just making things less flexible?

Not so, says Bullock, adding that the key is knowing which controls really matter: focus on keeping a clear picture of who is running what, for example, and the rest will follow.

Capacity planning

Even if the right management controls are in place, a fundamental dilemma emerges concerning capacity planning.

There are two opposing theories: first, that private cloud architectures enable optimum use of resources, for example, the ability to run existing server infrastructure at 80 per cent utilisation.

The second is that you can provision new virtual machines on demand, based on requirements. But hang on – if your infrastructure is already running close to maximum utilisation, surely there will be no spare capacity for anything else?

You could always buy more servers, but there’s the risk that they will sit idle, wasting money.
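
Some back-of-the-envelope arithmetic, with made-up figures, shows how quickly the two goals collide:

```python
# Utilisation versus headroom, with invented numbers.
total_vcpus = 1000                # private cloud capacity
target_utilisation = 0.80         # the "efficient" steady-state target

committed = total_vcpus * target_utilisation   # 800 vCPUs in steady use
headroom = total_vcpus - committed             # only 200 vCPUs left over

burst_request = 300               # an on-demand job arrives
if burst_request > headroom:
    shortfall = burst_request - headroom
    print(f"Burst cannot be met internally; short by {shortfall:.0f} vCPUs")
    # options: pre-empt lower-priority jobs, buy servers that may later
    # sit idle, or overflow the work to a hosted or public cloud
```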

In addition, not all processing tasks are created equal. It is easy to envisage a private cloud that is fully utilised, but running a significant proportion of jobs that are not mission-critical.

At this point the problem becomes political. No one wants to be the person who has to ask the sales department and the R&D department which of them has the more important analytics tasks.


The answer could lie in chargeback, a system by which business units are informed about and perhaps even invoiced for the processor time they use. Chargeback is possible only with a clear picture of who is using what, but as we have seen, that is already necessary to avoid VM sprawl.

With chargeback, it is also possible to compare the costs of private resources with those available in hosted or public clouds. Running a virtual machine on an internal server you already own requires little additional funding; paying to rent somebody else’s hardware while that server sits idle is far harder to justify.
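
As a sketch, with invented rates, a chargeback report that also benchmarks against a public cloud price might look like this:

```python
# Hypothetical chargeback: turn usage records into per-unit bills and
# compare against a public cloud rental rate. All rates are invented.
INTERNAL_RATE = 0.03   # per vCPU-hour, amortised cost of owned kit
PUBLIC_RATE = 0.05     # per vCPU-hour, hypothetical rental price

usage = {               # vCPU-hours consumed this month
    "sales-analytics": 12_000,
    "rnd-simulation": 20_000,
}

for unit, vcpu_hours in usage.items():
    internal = vcpu_hours * INTERNAL_RATE
    public = vcpu_hours * PUBLIC_RATE
    print(f"{unit}: internal £{internal:,.2f} vs public £{public:,.2f}")
```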

Putting these pieces together starts to resolve the on-demand dilemma. First, some level of capacity planning is necessary to put a private cloud in place, taking into account current and future workloads, mission-critical or otherwise.

Best of both worlds

Second, delivery of services from the private cloud requires a high level of IT management control – flexibility does not happen by accident. Chargeback can build on top of this to raise the visibility of costs, helping business units to set their own workload priorities.

Finally, even if your organisation wants to keep most processing in-house, it makes sense to see the private cloud as just one place for workloads to be run. Knowing that a bigger pool of resource is available if internal resources reach their limits enables more rigorous capacity planning.

We are not yet at a point where workloads can be moved between private and public clouds without intervention. Equally, each model has strengths and risks that need to be taken into account.

Nonetheless, IT organisations can already set the wheels in motion to deliver the best of both worlds. ®
