If you absolutely must do a ‘private cloud’ thing, here's how
Mythical (yet important) conversations with The Finance Guy
I dislike the term “private cloud”. As far as I'm concerned, there's really no difference between what I'd call a “traditional” homespun virtualised infrastructure and what they call “private cloud” these days.
As Gartner puts it, private cloud is: “A form of cloud computing that is used by only one organisation, or that ensures that an organisation is completely isolated from others.”
Frankly, you could remove the word “cloud” and not notice the difference.
And don't think that the word “private” means your cloud is “on-premises”. Your systems could well live in a data centre somewhere in the country, or indeed the world.
The bottom line, and what we're really talking about, is that you are the one in control of this cloud – and nobody else is sharing the infrastructure.
All this is important, because what matters are the concepts – not the funky new names for them. So, if you do opt for private, what strategy should you adopt?
We're going to talk later about connecting the private cloud and public cloud worlds together, but before we do let's look at how you can make your private world best suited to making this happen. In short: virtualise it, and define your forward strategy to have virtualisation at its core.
Imagine you have, say, a cool piece of kit that lets you present a storage area in the public cloud to the servers in your private cloud. Chances are that it'll be a CIFS or NFS share, and if you have a vast pile of physical servers that means mounting the storage at OS level on each of the boxes.
Change the IP address or how it's presented and you have an admin task on your hands – or, at the very least, a need to dig out your Idiot's Guide to PowerShell and put some scripts together.
Map that onto a virtual infrastructure and life's a whole lot different: point the hypervisor's storage layer at the share and you can present it to any or all of your VMs as a native drive. If something changes at the presentation end, you have one thing to change and the virtual servers will neither see nor care about the change.
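The contrast above can be sketched in a few lines of code. This is purely conceptual – all the class and server names are illustrative, not any real hypervisor's API – but it shows why repointing a share is an N-box admin task on physical kit and a one-line change on a virtualised estate:

```python
# Conceptual sketch (hypothetical names throughout): count the configuration
# changes needed when a cloud share's address changes, depending on whether
# each server mounts it directly or the hypervisor presents it.

from dataclasses import dataclass, field

@dataclass
class PhysicalEstate:
    """Each physical server holds its own mount entry for the share."""
    mounts: dict = field(default_factory=dict)  # server name -> share address

    def repoint_share(self, new_address: str) -> int:
        for server in self.mounts:
            self.mounts[server] = new_address
        return len(self.mounts)  # one change per box

@dataclass
class VirtualEstate:
    """The hypervisor's storage layer holds the single mount entry;
    VMs see a native drive and never reference the share address."""
    datastore_address: str = ""

    def repoint_share(self, new_address: str) -> int:
        self.datastore_address = new_address
        return 1  # one change, however many VMs sit on top

physical = PhysicalEstate(mounts={f"srv{i:02}": "10.0.0.5:/export" for i in range(40)})
virtual = VirtualEstate(datastore_address="10.0.0.5:/export")

print(physical.repoint_share("10.0.0.9:/export"))  # 40 edits, one per server
print(virtual.repoint_share("10.0.0.9:/export"))   # 1 edit at the hypervisor
```

In real life the per-box edits would be mount commands or fstab entries pushed out by script, but the arithmetic is the same.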
Unless you have specific needs for big, stand-alone servers you should be looking at virtualisation anyway, regardless of how cloudy your world is; the benefits are sizeable and numerous, though we won't dwell on them here as we ought to stay on-message.
The directory service is a core database, usually implemented as a distributed but tightly integrated set of servers that store profiles and authentication information for the users and computers on your network. These days, the chances are you use Microsoft's Active Directory, since competition such as Novell's NDS/eDirectory never quite managed to dent AD's ubiquity.
Directory services are absolutely core to making systems manageable and secure, but sadly most businesses have factions that don't quite comprehend this.
Let's have a quick fictional-but-realistic example of a common conversation:
Finance Guy: We're going live next week with the new cloud-based expenses tracking system.
Systems Guy: Really? I'd not heard about that. We'd better talk about how we're going to authenticate the users against our Active Directory.
FG: Oh, we don't need to do that – it has its own user database.
SG: Maybe, but that means everyone needs another user ID and password, and we'll need to change our procedures so that leavers have their account deleted and our support guys know how to support the system.
FG: Oh, you don't need to do that; my team will do the user admin and support the program.
SG: But you don't know when people leave, so how can you?
FG: Oh, we've spoken to HR and they're going to add us to the “staff changes” mailing list.
...and so on.
Hands up if you're a systems person and you've never had this conversation or one like it. Nope, can't see any hands.
The only way to avoid this discussion is to make directory integration the easiest option, and there are three ways to go about it – all of which aim to make a directory service available for cloud-based apps to authenticate against.
The differences are twofold: simplicity and security concerns.
- One way to go is to put your master directory service in the cloud, and to access it from both your public and private services.
When I've written about this in the past, or seen it suggested within companies I've worked with, it has elicited a gasp from management: they're nervous about the idea of keeping the master user database out there on the internet.
In reality, if you set up the firewalling in the cloud service properly the risk can be made no worse than many in-house setups, but it can be hard to persuade management of this, so it scores low on the “security” factor; it's pretty simple to do though.
- Option two is to have the directory service master hosted internally in your private cloud setup, with replication out into the cloud-based service.
The replica's in the cloud and hence easy to point public cloud services at, while the master is safe in your private cloud. Of course, the public cloud version can be read-only so intruders can't do nasty stuff to it, and of course you'd firewall it so that only the desired systems could interrogate it.
This is easier to sell to management security-wise, but you have to work a little harder on integration with the private systems; this option's a common compromise.
- The third approach is to keep it all within your private cloud – or just about, at any rate. Run an internal directory service as usual, but put a read-only version in your edge DMZ so cloud-based entities can authenticate against it.
Your non-techie management will like this, and it's not too hard to do except for making it resilient to failures of your Internet connection.
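The read-only replica idea at the heart of options two and three can be modelled in a few lines. This is a conceptual sketch – the class names and the plain-dict "directory" are invented for illustration, not any directory product's API – but it captures the two properties that matter: the public-facing copy can answer authentication queries, and it refuses writes:

```python
# Conceptual sketch: a master directory kept in the private cloud, with a
# point-in-time, read-only replica pushed outward for public cloud apps.
# All names are illustrative, not a real directory service's interface.

class DirectoryMaster:
    def __init__(self):
        self._users = {}  # username -> credential hash

    def set_user(self, username: str, credential_hash: str) -> None:
        self._users[username] = credential_hash

    def remove_user(self, username: str) -> None:
        self._users.pop(username, None)

    def replicate(self) -> "ReadOnlyReplica":
        # Replication is push-only, from master to replica; the master
        # never accepts queries from the public side.
        return ReadOnlyReplica(dict(self._users))

class ReadOnlyReplica:
    def __init__(self, snapshot: dict):
        self._users = snapshot

    def authenticate(self, username: str, credential_hash: str) -> bool:
        return self._users.get(username) == credential_hash

    def set_user(self, *args) -> None:
        # Writes are refused: an intruder who reaches the replica
        # cannot tamper with the directory contents.
        raise PermissionError("replica is read-only")

master = DirectoryMaster()
master.set_user("alice", "hash-of-alices-password")
replica = master.replicate()
print(replica.authenticate("alice", "hash-of-alices-password"))  # True
```

In a real deployment the "refuses writes" part comes from the directory product's read-only replica mode plus firewalling, not application code, but the shape of the trust boundary is the same.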
There is a fourth alternative, but it's only really relevant if you have a very small number of public cloud applications: you implement a data feed on a per-application basis that delivers encrypted data to the application host so it can authenticate users itself.
It's an option (and one I've implemented a couple of times in the past, generally reluctantly), but consider it only as a last resort: it usually involves regular automated data uploads (so new users can't access the application until the next upload runs), and it leaves you with proprietary plumbing to manage on top of your normal workload.
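The staleness problem with that fourth approach is worth making concrete. The sketch below is illustrative only – the names are invented, and a real feed would be encrypted in transit and at rest rather than just carrying password hashes – but it shows why a batch export lags behind reality between uploads:

```python
# Conceptual sketch of the per-application data feed: a scheduled export of
# user records that the application host imports for its own authentication.
# Names and the salting scheme are illustrative, not a recommendation.

import hashlib
import json

def build_feed(directory: dict) -> str:
    """Serialise the current user list (with hashed credentials) for upload."""
    records = [
        {"user": u, "hash": hashlib.sha256(("per-user-salt" + pw).encode()).hexdigest()}
        for u, pw in directory.items()
    ]
    return json.dumps(records)

# The overnight upload runs at 02:00 and snapshots the directory as it stands.
directory = {"alice": "s3cret", "bob": "hunter2"}
feed = build_feed(directory)

# At 09:00 a new starter, carol, joins, and bob leaves the company...
directory["carol"] = "welcome1"
del directory["bob"]

# ...but the application host is still working from the overnight feed:
uploaded = {r["user"] for r in json.loads(feed)}
print("carol" in uploaded)  # False: carol can't log in until the next upload
print("bob" in uploaded)    # True: the leaver still authenticates until then
```

Both gaps – the locked-out starter and the still-active leaver – persist until the next scheduled upload, which is exactly the drawback described above.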