Face the future with a private cloud

Coping with demand

You run an enterprise data centre, you are facing hardware refreshes, and you are wondering whether private cloud is the way to go. But is it? And if it is, how do you get there?

The private cloud in this sense is a data centre that feeds its resources on demand to users inside the enterprise, via a metered or chargeback-for-use business model.

The on-demand aspect means that you won't prescribe what resources you make available to your users; they will demand resources from you and want them turned on or turned off fast.

You must be able to cope with the peaks in demand likely to come your way. If your in-house resources reach a limit you can go outside, to the public cloud, for relief. This is important because your current assumptions about network bandwidth, server and storage use and growth may be quite inadequate.
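
As a rough illustration of that burst-out decision, here is a minimal sketch; the resource names, capacities and the 85 per cent threshold are invented for the example, not a recommendation.

```python
# Illustrative sketch only: the capacities, threshold and the idea of a single
# utilisation figure per resource are assumptions for the example.

IN_HOUSE_CAPACITY = {"cpu_cores": 2048, "storage_tb": 500, "network_gbps": 160}
BURST_THRESHOLD = 0.85  # burst to the public cloud above 85% utilisation (assumed)

def should_burst(current_usage: dict) -> list:
    """Return the resources whose utilisation suggests going outside for relief."""
    overloaded = []
    for resource, capacity in IN_HOUSE_CAPACITY.items():
        utilisation = current_usage.get(resource, 0) / capacity
        if utilisation > BURST_THRESHOLD:
            overloaded.append((resource, round(utilisation, 2)))
    return overloaded

# Example: a demand spike pushes compute past the threshold
print(should_burst({"cpu_cores": 1900, "storage_tb": 300, "network_gbps": 90}))
# -> [('cpu_cores', 0.93)]
```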

Living on the edge

Let’s think about end-user devices. Tablets and smartphones are taking their place alongside the desktop PC and notebook as edge devices in enterprise networks.

The edge-device population is growing fast, and that means more traffic, more interactions for your servers to process and more data for your storage to hold.

This traffic grows as the number of devices rises and as what they do becomes more resource-intensive. Tablets will be used for one-to-one presentations and for accessing corporate material.

A recent report carried out by Google and market research outfit Ipsos OTX showed 48 per cent of smartphone users watch videos on their smartphones. Those videos could be in-house webinars provided by your business.

Your servers will run more applications because increasingly they will be enhanced with flash storage, giving applications faster access to data that is currently read from local or networked disk.

The flash will cut both disk and network latency, leaving applications racing through their code to the next I/O faster, finishing transactions faster, and so enabling the servers to run more applications.
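
A back-of-the-envelope model shows why lower I/O latency lets a server get through more work: if each transaction waits on a handful of I/Os, cutting the per-I/O wait multiplies what a single worker thread can complete. The latencies and I/O counts below are illustrative, not measured.

```python
# Rough model: transactions per second per worker thread, assuming each
# transaction issues a fixed number of I/Os and waits for each in turn.
# The figures are illustrative only.

def tps_per_thread(io_per_txn: int, io_latency_ms: float, cpu_ms: float = 1.0) -> float:
    txn_time_ms = cpu_ms + io_per_txn * io_latency_ms
    return 1000.0 / txn_time_ms

disk = tps_per_thread(io_per_txn=5, io_latency_ms=5.0)    # networked disk
flash = tps_per_thread(io_per_txn=5, io_latency_ms=0.2)   # server-side flash

print(f"disk: {disk:.0f} txn/s, flash: {flash:.0f} txn/s per thread")
# disk: ~38 txn/s, flash: ~500 txn/s per thread
```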

Feed me, feed me

The net effect on your storage arrays is that they have to feed ever-hungrier servers. Today they might feed 500GB of data a day to four virtual machines in a server. Tomorrow they could be feeding 1TB of data a day to six virtual machines in a flash-enhanced server, or more.
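
Turning those daily volumes into sustained throughput targets is simple arithmetic, and worth doing before buying arrays. The peak multiplier below is an assumed planning factor you would replace with your own measurements.

```python
# Convert per-server daily data volumes into array throughput targets.
# The peak multiplier is an assumed planning factor, not a measured value.

def required_mb_per_sec(gb_per_day: float, peak_multiplier: float = 3.0) -> float:
    average = gb_per_day * 1024 / (24 * 3600)   # GB/day -> MB/s average
    return average * peak_multiplier            # size for peaks, not averages

print(f"today:    {required_mb_per_sec(500):.1f} MB/s per server at peak")
print(f"tomorrow: {required_mb_per_sec(1000):.1f} MB/s per server at peak")
```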

The upshot is that your network edge device population will grow and your users will send and receive more data per device. Your servers will be more powerful, run more applications, and send and receive more data to and from storage arrays, which need to be able to hold more data.

And all this data must flow through network pipes big enough and fast enough to avoid traffic snarl-ups.

Rise and fall

How do you respond to a situation in which demand on the three main data centre resource items will rise overall, maybe enormously and with unpredictable spikes in demand?

Here are some points to bear in mind.

Get boxes – servers, storage arrays, network devices – that are big enough to be carved up into several virtual pieces: servers that can run several virtual machines, for example, or network links that can be sub-divided.

Get scale-out capability so that if, as expected, your requirements for a resource increase, you can add another box, wire, switch or array alongside the first, and then another and another.

That way you avoid the dreaded forklift upgrade, with one new box replacing the undersized one. Embrace open standards with vigour and enthusiasm.
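
As a sketch of what scale-out planning looks like in practice, the point is that you add units rather than replace them; the node size and demand figures here are invented for the example.

```python
# Toy scale-out model: meet growing demand by adding identical nodes
# rather than replacing the estate. Node size and demand figures are invented.

import math

NODE_CAPACITY_TB = 50      # usable capacity of one scale-out node (assumed)
current_nodes = 4

def nodes_needed(demand_tb: float) -> int:
    return math.ceil(demand_tb / NODE_CAPACITY_TB)

for demand_tb in (180, 260, 400):   # projected demand in TB
    total = nodes_needed(demand_tb)
    print(f"{demand_tb} TB needs {total} nodes: add {max(0, total - current_nodes)}")
```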

What does this mean in practice?
For servers, it means taking the x86 architecture route and mainstream virtualisation (VMware, Hyper-V or Xen, with Oracle at a pinch), plus mainstream operating systems.

Remember that converged server/storage/networking/software stacks lock you in unless they allow alternative components to be used. Converged-stack templates are more open than converged stacks from multi-vendor alliances such as VCE, and those in turn are more open than stacks from a single vendor such as Oracle.

Faster on the flat

It makes sense to get multicore, multi-socket servers with as much memory as you can cram in because these run the most virtual machines and you will certainly be running more virtual machines tomorrow than you did yesterday.
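
A rough sizing sketch shows why memory usually caps virtual machine density before CPU does; the per-VM sizes and overcommit ratios here are assumptions used to illustrate the arithmetic, not recommendations.

```python
# Illustrative VM-density estimate for a candidate server configuration.
# Per-VM sizes and overcommit ratios are assumptions, not recommendations.

def max_vms(sockets, cores_per_socket, ram_gb,
            vcpus_per_vm=2, ram_per_vm_gb=8,
            cpu_overcommit=4.0, hypervisor_ram_gb=8):
    by_cpu = int(sockets * cores_per_socket * cpu_overcommit / vcpus_per_vm)
    by_ram = int((ram_gb - hypervisor_ram_gb) / ram_per_vm_gb)
    return min(by_cpu, by_ram)

print(max_vms(sockets=2, cores_per_socket=8, ram_gb=128))   # RAM-bound: 15
print(max_vms(sockets=2, cores_per_socket=8, ram_gb=512))   # CPU-bound: 32
```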

Get the latest servers if you can, as they use less power and are easier to manage.

For Ethernet, embrace 10GbE and prepare to use 40GbE. It is certain that you will be pumping more bits through your Ethernet links in the future than you do now. If your shop runs Fibre Channel storage networking, consider the lossless, deterministic flavour of Ethernet (data centre bridging) that can carry the Fibre Channel over Ethernet (FCoE) protocol.

It seems certain that Ethernet networks will flatten, with faster packet transits at the Layer 2 level because there is less need for Layer 3 supervision, which slows things down.

Network latency is something that needs watching and reducing, and your Ethernet device vendors should be supportive of this.

They should also be supportive of network virtualisation, which will be enabled by having the physical network managed by software, just as we do with server virtualisation.

Software-defined networking, using emerging standards such as OpenFlow to rearrange network paths and their capacity in real time, will make your network more responsive to your needs.
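
To make the idea concrete, here is a deliberately simplified sketch of the kind of decision an SDN controller might take; the flow-rule structure and thresholds are invented for this example, and a real controller would speak OpenFlow (or similar) to the switches rather than juggling Python objects.

```python
# Simplified illustration of SDN-style path control. The FlowRule structure
# and the rebalancing logic are invented for this example; a real controller
# would push rules to switches via OpenFlow or a similar protocol.

from dataclasses import dataclass

@dataclass
class FlowRule:
    match_dst: str        # destination subnet this rule matches
    out_port: int         # switch port to forward on
    rate_limit_mbps: int  # bandwidth allocated to the path

def rebalance(rules: list[FlowRule], link_utilisation: dict[int, float]) -> list[FlowRule]:
    """Move flows off ports running hotter than 80 per cent utilisation."""
    spare_ports = [p for p, u in link_utilisation.items() if u < 0.5]
    for rule in rules:
        if link_utilisation.get(rule.out_port, 0) > 0.8 and spare_ports:
            rule.out_port = spare_ports.pop(0)   # re-route in (near) real time
    return rules

rules = [FlowRule("10.1.0.0/16", out_port=1, rate_limit_mbps=2000)]
print(rebalance(rules, {1: 0.92, 2: 0.30}))
```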

Fibre Channel will have a 32Gbps version in the future, but 40GbE will be faster than that. And 100GbE will be much faster than a 64Gbps Fibre Channel standard, so it makes sense to have an FCoE gateway that lets you bridge from Fibre Channel if it becomes necessary.

Through a glass darkly

For storage, the recommendation must be to head towards scale-out and unified storage arrays with strong support for VMware's VAAI (vStorage APIs for Array Integration).

It seems logical that server virtualisation admin staff will be the front-end focus point for storage provisioning and protection in the future, with arrays doing what is requested by VMware or Hyper-V.
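
As an illustration of why that array-side integration matters, here is a toy comparison of cloning a virtual disk with and without an array offload primitive (the idea behind VAAI's Full Copy). The functions and figures are invented for illustration; this is not any vendor's API.

```python
# Toy comparison of host-side copy versus array-offloaded copy. The functions,
# link speeds and copy rates are invented for illustration only.

def host_side_clone(vmdk_gb: float, host_link_gbps: float = 10) -> float:
    """Data is read to the host and written back: it crosses the wire twice."""
    return 2 * vmdk_gb * 8 / host_link_gbps          # seconds, roughly

def array_offloaded_clone(vmdk_gb: float, array_copy_gbps: float = 40) -> float:
    """The hypervisor asks the array to copy internally; no host traffic."""
    return vmdk_gb * 8 / array_copy_gbps             # seconds, roughly

print(f"host-side: {host_side_clone(100):.0f}s, offloaded: {array_offloaded_clone(100):.0f}s")
```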

As data-centre components such as servers, arrays and network switches multiply like rabbits, it will be increasingly necessary to concentrate their management in one place and as much as possible through one pane of glass.

Scale-out storage will avoid forklift upgrades and unified storage will avoid separate block-access and file-access silos.

What about cloud storage?

Go for standard access methods and rock-solid service-level agreements with reputable suppliers. Having the cloud as a tier of storage managed by your storage arrays is a distant prospect right now but might eventually come to pass.
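
"Standard access methods" in practice usually means an S3-style object API. Here is a minimal sketch using boto3 against an assumed S3-compatible service; the endpoint URL, bucket name, keys and credentials are placeholders for the example.

```python
# Minimal object put/get against an S3-compatible cloud store using boto3.
# Endpoint URL, bucket name, object key and credentials are placeholders.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",  # assumed S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="archive-tier", Key="backups/db-dump.tar", Body=b"...")
obj = s3.get_object(Bucket="archive-tier", Key="backups/db-dump.tar")
print(len(obj["Body"].read()), "bytes retrieved")
```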

What about backup?

Tape’s role is stabilising as the archive medium for data, and there is only one open format: LTO. You will not go wrong by embracing LTO and using modern libraries with integrity-checking facilities to weed out failing media.
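
Whatever the library does at the media level, it is worth keeping application-level checksums as well; a minimal sketch of recording a digest at archive time and re-verifying it after restore (the file paths are placeholders).

```python
# Application-level integrity check for archived files: record a digest at
# archive time and compare it after restore. File paths are placeholders.

import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

recorded = sha256_of("/archive/staging/finance-archive.tar")   # at archive time
restored = sha256_of("/restore/finance-archive.tar")           # after restore
print("intact" if recorded == restored else "corrupted on tape or in transit")
```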

Leave the door open

There are changes in the backup area with a switch to snapshots and replication and away from pure backup software.

Don’t let yourself get locked into protection products with limited capabilities and restricted coverage.

At the overall level, the prospect we see is data-centre management software that manages virtualised servers, storage and networking, and provisions their resources to applications dynamically while also managing data protection, security and external cloud resources.

Having a single pane of glass through which to monitor, manage and operate a virtualised data centre is the concept that suppliers such as VMware, Microsoft, HP, Dell and others are formulating.

This is the way a data centre could become a private cloud supplying resources on demand to its users, monitoring their usage and charging them for it. The data centre will be more responsive and better able to scale up resources as needed and to utilise resources effectively.
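
The metering and chargeback part is conceptually simple; a sketch with an invented rate card and invented usage records shows the shape of it.

```python
# Toy chargeback calculation: meter usage per department and price it.
# The rate card and usage records are invented for the example.

RATES = {"vm_hours": 0.05, "storage_gb_month": 0.10, "egress_gb": 0.08}  # currency units

usage = {
    "finance":   {"vm_hours": 12000, "storage_gb_month": 4000, "egress_gb": 250},
    "marketing": {"vm_hours": 3000,  "storage_gb_month": 9000, "egress_gb": 1200},
}

def monthly_bill(dept_usage: dict) -> float:
    return sum(dept_usage[item] * RATES[item] for item in dept_usage)

for dept, record in usage.items():
    print(f"{dept}: {monthly_bill(record):.2f}")
```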

There will be many more servers, storage arrays and network devices in the private cloud data centre, but managing them will be easier and the inherent complexities will be simplified through a central management facility with automated commands rippling out through the virtualised infrastructure.

It sounds like nirvana. It isn’t here yet, but there is a very good chance that it is coming. ®
