Avoid the dreaded auditor's smirk: Smart policies and procedures for the hybrid cloud
Make it easier on everyone
When you get to a certain age, and you've been in the IT industry for enough years, you start to get an idea of what auditors are looking for when they descend on you and ask you pointed questions about your systems.
And I don't just mean security auditors: if your company has an annual financial audit, the team that comes to ask you stuff wants to know about systems security as well as whether the financial numbers add up – because they need to be sure the systems are sufficiently well controlled for you and them to be able to rely on the information those systems contain and report.
And to demonstrate security, you don't just need to show that at that point in time you have no user accounts belonging to people who've left and that you enforce complex passwords for your users: you need to demonstrate that you have processes and policies in place that ensure system security is an ongoing concern and has year-round attention.
If you have a hybrid infrastructure you need to be particularly careful when you define and implement your policies and procedures, because even if you have a generic management suite that you can use to control both the private and public cloud elements of your world, the chances are you'll have to have some special items – particularly in the procedures – for the public cloud element.
We'll look at the top five policy areas you need to give attention to if you're to keep the auditors happy. Oh, and of course if the auditors are happy it means you're running systems with a good level of protection against attack and upon which you can rely – which is a big deal for you too. Incidentally, we'll work on the premise that you're able to mandate contractually that all staff and contractors must adhere to the policies you adopt as a company.
The access control policy is absolutely core to your security regime. It specifies how you'll control access (role-based profiles are a good way to go – you shouldn't be defining privileges for individual user accounts) and also mandates how the systems work with regard to password complexity, how frequently password changes are enforced, and so on.
You can also mandate in your access control policy that any new systems that are installed must follow particular security standards – my favourite is to insist that they're able to authenticate users against the corporate directory service instead of having their own stand-alone user database. If you're introducing a new access control policy it's often impossible (read: uneconomic) to retro-fit it to all your existing systems; be pragmatic, accept this, and insist on retro-fitting only the key systems you can afford to treat.
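The role-based approach is easy to sketch: privileges hang off roles, and users only ever acquire privileges by being assigned a role. The role and permission names below are hypothetical examples for illustration, not drawn from any particular directory service or IAM product:

```python
# Minimal sketch of role-based access control: privileges attach to
# roles, never directly to individual user accounts.

ROLE_PERMISSIONS = {
    "finance-clerk": {"ledger:read", "ledger:write"},
    "finance-auditor": {"ledger:read", "audit-log:read"},
    "sysadmin": {"ledger:read", "user:manage", "audit-log:read"},
}

USER_ROLES = {
    "alice": {"finance-clerk"},
    "bob": {"finance-auditor"},
}

def has_permission(user: str, permission: str) -> bool:
    """A user holds a permission only via the roles assigned to them."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

The auditor-friendly property is that there's exactly one place – the role definitions – to review, rather than a privilege list per account.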
Starters, movers and leavers
It's not often you see an organisation make a good job of managing system access for people who join the company, people whose roles change, and people who leave. From a security point of view starters are actually not a big deal: the primary risk with not dealing correctly with new starters is that you don't get their user accounts set up in time, so the company ends up looking foolish.
It's leavers that you need to be most mindful of, because the hazards of allowing a leaver's login to remain active are very clear (particularly if they had remote access to your systems). Dealing with personnel changes is generally both long-winded (it'll involve numerous departments, from HR and payroll to the premises department for physical access and the IT team for system access) and tedious, which means it's very easy to become complacent about it – particularly when it comes to contractors whose involvement (and premature departure, should it happen) often side-steps the HR team. A clear, strong starters/movers/leavers procedure is key, then.
Movers are fun, incidentally: you need to be careful that the policy and procedure ensure you remove their old privileges and grant their new ones. The common problem is that when someone moves within the organisation they'll often have a “soft” hand-over, where they help out in their own role whilst getting to grips with their new post. Make sure the policy puts hard time limits on the old privileges being blown away, or it'll get forgotten.
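One way to make the hard time limit stick is to attach an expiry date to the old role at the moment of the move, so the grant lapses by itself rather than waiting for someone to remember. A minimal sketch – the field names are illustrative, not from any real IAM system:

```python
# Sketch of a mover's "soft" hand-over with a hard deadline: the old
# role carries an expiry date that is checked on every access decision,
# so expired grants simply stop counting -- nothing to clean up by hand.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RoleGrant:
    role: str
    expires: Optional[date] = None  # None = permanent grant

def active_roles(grants: list, today: date) -> set:
    """Return only the roles whose grants are still in force today."""
    return {g.role for g in grants if g.expires is None or today <= g.expires}

# A mover mid hand-over: old post lapses on a hard date, new post is live.
grants = [
    RoleGrant("finance-clerk", expires=date(2024, 3, 31)),  # old role
    RoleGrant("sysadmin"),                                  # new role
]
```

Before the deadline both roles are active; afterwards the old one vanishes without anyone lifting a finger.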
In a hybrid installation the transfer of data between the on-premises and public cloud installations needs scrutiny. Happily this is a long way from rocket science – it's just a case of selecting encryption that is both acceptable security-wise and achievable technology-wise, and then doing it. (There's really no point defining a policy that would make Lt. Commander Montgomery Scott start spluttering about the laws of physics.) So it's about authentication between the private and public cloud installations, strong encryption as traffic transits the link (it'll usually be a VPN), matching ingress and egress rules at each end, and regular reviews of keys and certificates.
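That last item – regular reviews of keys and certificates – is the one that tends to slip, and it's trivial to automate. A sketch, assuming a simple inventory of credentials with their issue dates and a hypothetical 90-day rotation period (your policy will set its own):

```python
# Sketch of a key/certificate review: flag any credential issued longer
# ago than the rotation period the policy mandates. The 90-day figure
# and the inventory format are assumptions for illustration only.
from datetime import date, timedelta

ROTATION_PERIOD = timedelta(days=90)  # hypothetical policy value

def overdue_credentials(inventory: dict, today: date) -> list:
    """Return, sorted, the names of credentials due for rotation."""
    return sorted(
        name for name, issued in inventory.items()
        if today - issued > ROTATION_PERIOD
    )

inventory = {
    "vpn-gateway-cert": date(2024, 1, 2),   # site-to-site VPN endpoint
    "s2s-psk": date(2023, 6, 15),           # pre-shared key for the link
}
```

Run something like this on a schedule and the "regular review" becomes a report to act on rather than a diary entry to forget.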
Secure disposal of data is a common policy in a private installation, but it doesn't entirely fit the public cloud components because you just don't have the low-level access to the hardware. If you're decommissioning a server in your own data centre you'll ensure that the policy states that its storage is overwritten multiple times to a particular military-grade standard, or that it's degaussed in a suitably strong magnetic field, or that it's physically mashed into pieces no larger than particular dimensions. None of these options is open to you in the public cloud.
What you need to do, then, is to work on the assumption that when you delete cloud-based data, it will actually continue to exist forever in some form. What can you do about this? Encrypt it.
So your secure disposal policy interfaces with the policies and procedures you should also have around how you architect systems in your infrastructure. I mentioned earlier that you sometimes have to mandate things differently in the public cloud than in the private world, and this is one of those examples. Mandate an encryption algorithm that's sufficiently strong to give good assurance that decrypting the data post-"deletion" will be an intractable problem: you can't entirely protect against decryption, but you should do what you can.
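This approach is often called crypto-shredding: encrypt before the data ever reaches the cloud, keep the key on-premises, and make "secure disposal" mean destroying the key. A minimal sketch of the principle, with a one-time pad standing in for a real cipher – in practice you'd mandate something like AES-GCM via a vetted library (the `cryptography` package, say); the pad is just the simplest way to show that no key means no plaintext:

```python
# Sketch of crypto-shredding: the cloud only ever holds ciphertext,
# so destroying the on-premises key "disposes" of every copy at once.
# A one-time pad stands in for a production cipher here.
import os

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data with an equal-length key (encrypts and decrypts alike)."""
    return bytes(a ^ b for a, b in zip(data, key))

plaintext = b"quarterly ledger extract"
key = os.urandom(len(plaintext))   # stays in your private key store
ciphertext = xor(plaintext, key)   # this is what the cloud provider holds

# Normal access: decrypt with the retained key.
assert xor(ciphertext, key) == plaintext

# "Disposal": destroy the key. Whatever copies of the ciphertext the
# provider still holds are now unrecoverable.
key = None
```

The policy consequence: disposal stops being a request you make of the cloud provider and becomes an operation you perform entirely on your own side.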
Finally, mandate proper change management and have a robust procedure for making it happen. Those who read what I write from time to time will know this is one of my hobby-horses, but that's primarily because proper change management is always, without exception, a good thing. User interfaces on public cloud systems are superbly designed and pretty straightforward for even non-experts to use … which means you often get non-experts doing maintenance on your public cloud world that they'd need more skill to carry out in the private installation. A strong change management regime forces every change – whoever makes it – to be properly considered before it happens, which is your best defence against issues caused directly by change. ®