If you absolutely must do a ‘private cloud’ thing, here's how

Mythical (yet important) conversations with The Finance Guy


I dislike the term “private cloud”. As far as I'm concerned, there's really no difference between what I'd call a “traditional” homespun virtualised infrastructure and what they call “private cloud” these days.

As Gartner puts it, private cloud is: “A form of cloud computing that is used by only one organisation, or that ensures that an organisation is completely isolated from others.”

Frankly, you could remove the word “cloud” and not notice the difference.

And don't think that the word “private” means your cloud is “on premises”. Your systems could well live in a data centre somewhere in the country, or indeed the world.

The bottom line, and what we're talking about here, is that you are the one in control of this cloud – and nobody else is sharing the infrastructure.

All this is important, because what matters are the concepts – not the funky new names for them. So, if you do opt for private, what strategy should you adopt?

Virtualisation

We're going to talk later about connecting the private cloud and public cloud worlds together, but before we do let's look at how you can make your private world best suited to making this happen. In short: virtualise it, and define your forward strategy to have virtualisation at its core.

Imagine you have, say, a cool piece of kit that lets you present a storage area in the public cloud to the servers in your private cloud. Chances are that it'll be a CIFS or NFS share, and if you have a vast pile of physical servers that means mounting the storage at OS level on each of the boxes.

Change the IP address or how it's presented and you have an admin task on your hands – or, at the very least, a need to dig out your Idiot's Guide to PowerShell and put some scripts together.
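To make that concrete, here's a minimal sketch of the sort of remediation script you'd end up writing, assuming Windows servers, an SMB/CIFS share and WinRM remoting enabled; the server list, drive letter and share path are all invented:

    # Re-point a share on every physical server after the presentation
    # end changes; reads one hostname per line from a text file
    $servers = Get-Content -Path 'C:\ops\servers.txt'

    foreach ($server in $servers) {
        Invoke-Command -ComputerName $server -ScriptBlock {
            # Drop the stale mapping, then mount the share at its new address
            Remove-SmbMapping -LocalPath 'S:' -Force -ErrorAction SilentlyContinue
            New-SmbMapping -LocalPath 'S:' -RemotePath '\\10.0.5.20\clouddata' -Persistent $true
        }
    }

Multiply that by every change at the presentation end and the admin overhead adds up quickly.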

Map that onto a virtual infrastructure and life's a whole lot different: point the hypervisor's storage layer at the share and you can present it to any or all of your VMs as a native drive. If something changes at the presentation end, you have one thing to change and the virtual servers will neither see nor care about the change.
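As a hedged illustration of the difference, assuming a VMware shop with PowerCLI to hand (the vCenter address, datastore name and NFS paths are invented), the one-time equivalent looks more like this:

    # Mount the NFS export once, at the hypervisor layer, on each host;
    # the VMs on top see a datastore and never notice presentation changes
    Connect-VIServer -Server 'vcenter.example.com'

    foreach ($esxHost in Get-VMHost) {
        New-Datastore -Nfs -VMHost $esxHost -Name 'cloud-share' `
            -NfsHost 'nas.example.com' -Path '/export/clouddata'
    }

One change at the presentation end means one change here, rather than a visit to every box.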

Unless you have specific needs for big, stand-alone servers you should be looking at virtualisation anyway, regardless of how cloudy your world is; the benefits are sizeable and numerous, though we won't dwell on them here as we ought to stay on-message.

Directory services

The directory service is a core database, usually implemented as a distributed but tightly integrated set of servers that store profiles and authentication information for the users and computers on your network. These days, the chances are you use Microsoft's Active Directory, since competition such as Novell's NDS/eDirectory never quite managed to dent AD's ubiquity.

Directory services are absolutely core to making systems manageable and secure, but sadly most businesses have factions that don't quite comprehend this.

Let's have a quick fictional-but-realistic example of a common conversation:

Finance Guy: We're going live next week with the new cloud-based expenses tracking system.

Systems Guy: Really? I'd not heard about that. We'd better talk about how we're going to authenticate the users against our Active Directory.

FG: Oh, we don't need to do that – it has its own user database.

SG: Maybe, but that means everyone needs another user ID and password, and we'll need to change our procedures so that leavers have their account deleted and our support guys know how to support the system.

FG: Oh, you don't need to do that; my team will do the user admin and support the program.

SG: But you don't know when people leave, so how can you?

FG: Oh, we've spoken to HR and they're going to add us to the “staff changes” mailing list.

...and so on.

Hands up if you're a systems person and you've never had this conversation or one like it. Nope, can't see any hands.
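The cost of losing that argument is all the manual churn the conversation describes. Get the app authenticating against the directory instead and deprovisioning collapses to one operation – a minimal sketch, assuming Active Directory and the RSAT ActiveDirectory module, with an invented username:

    Import-Module ActiveDirectory

    # Disable the leaver's account once; every AD-integrated system,
    # cloud-based or otherwise, locks them out immediately
    Disable-ADAccount -Identity 'jbloggs'
    Set-ADUser -Identity 'jbloggs' -Description "Left $(Get-Date -Format 'yyyy-MM-dd')"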

The only way to avoid this discussion is to make directory integration the easiest option, and there are three ways to go about it – all of which aim to make a directory service available for cloud-based apps to authenticate against.

The options differ on two counts: simplicity and security.

  • One way to go is to put your master directory service in the cloud, and to access it from both your public and private services.

    When I've written about this in the past, and also when I've seen it suggested within companies I've worked with, it's elicited a gasp from management because they're concerned by the idea of keeping the master user database out there on the internet.

    In reality, if you set up the firewalling in the cloud service properly the risk can be made no worse than that of many in-house setups, but it can be hard to persuade management of this, so it scores low on the “security” factor; it's pretty simple to do, though.

  • Option two is to have the directory service master hosted internally in your private cloud setup, with replication out into the cloud-based service.

    The replica's in the cloud and hence easy to point public cloud services at, while the master stays safe in your private cloud. The public cloud copy can be made read-only so intruders can't do nasty stuff to it, and of course you'd firewall it so that only the desired systems can interrogate it.

    This is easier to sell to management security-wise, but you have to work a little harder on integration with the private systems; this option's a common compromise (there's a sketch of the read-only replica setup after this list).

  • The third approach is to keep it all within your private cloud – or just about, at any rate. Run an internal directory service as usual, but put a read-only version in your edge DMZ so cloud-based entities can authenticate against it.

    Your non-techie management will like this, and it's not too hard to do except for making it resilient to failures of your internet connection.
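For options two and three, the moving part is a read-only replica. In an Active Directory world that's a read-only domain controller (RODC); a hedged sketch of standing one up, with the domain name, site name and credentials all invented:

    Import-Module ADDSDeployment

    # Promote a server in the cloud (or DMZ) site to a read-only DC;
    # the writable masters stay inside the private cloud
    Install-ADDSDomainController `
        -DomainName 'corp.example.com' `
        -ReadOnlyReplica `
        -SiteName 'CloudDMZ' `
        -InstallDns:$true `
        -Credential (Get-Credential 'CORP\admin')

You'd still firewall it so that only the known application hosts can bind to it.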

There is a fourth alternative, but it's only really relevant if you have a very small number of public cloud applications: you implement a data feed on a per-application basis that delivers encrypted data to the application host so it can authenticate users itself.

It's an option (and one I've done a couple of times in the past, generally reluctantly), but consider it only as a last resort: it usually involves regular automated data uploads (so new users can't access the application until the next upload runs), and it means you have proprietary stuff to manage on top of your normal workload.
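If you're forced down that road anyway, the feed itself needn't be exotic. A last-resort sketch, assuming Active Directory on your side; the certificate subject, file paths and the vendor's upload endpoint are all assumptions:

    Import-Module ActiveDirectory

    # Export the accounts the application needs to know about
    Get-ADUser -Filter 'Enabled -eq $true' -Properties mail |
        Select-Object SamAccountName, mail |
        Export-Csv -Path 'C:\feeds\users.csv' -NoTypeInformation

    # Encrypt the export with the vendor's public certificate (CMS format)
    Protect-CmsMessage -Path 'C:\feeds\users.csv' -To 'CN=expenses-app-feed' `
        -OutFile 'C:\feeds\users.csv.cms'

    # Ship it to the (hypothetical) upload endpoint, on a schedule
    Invoke-RestMethod -Uri 'https://expenses.example.com/api/userfeed' `
        -Method Post -InFile 'C:\feeds\users.csv.cms'

Schedule it as often as the vendor will tolerate; the gap between uploads is exactly the window in which new joiners can't log in.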
