Hybrid infrastructures – where you combine on-premise equipment with systems that sit in a public cloud installation – have their own particular foibles when it comes to management. It's really not so hard, though – here are 10 things to think about when you're looking at the security aspects of managing your hybrid world.
Integrate as much as you can
A hybrid setup is a single thing – it's one infrastructure that happens to live in two different types of installation. If you have two separate worlds doing different stuff (an on-premise infrastructure and stand-alone cloud-based Web services, say) then that's not a hybrid setup. As you have a single installation you should try your hardest to manage it as such, rather than needing different tools for each element of the setup.
Chances are that you're never going to get 100 per cent of the way there due to the peculiarities of different public cloud installations, but at the very least you should have everything from the virtual server level upwards under a single management GUI, and if you can look further down the stack using specific tools that are able to peek into the underlying infrastructure of the cloud provider then do so.
Keep your management connections separate
Management and data are two separate things, and it's crucial for security that they exist separately and cannot interfere with each other. It's a cardinal sin to allow an attacker who has compromised one of your servers to hop over to the management layer and start playing with the setup of your virtual infrastructure. So take some simple but essential precautions: prohibit management access from the production server network, and ensure that different credentials are used for management-layer access and server administration.
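The "prohibit management access from the production network" rule is usually enforced in a firewall or security group, but the decision itself is simple enough to sketch. Here's a minimal illustration in Python using the standard `ipaddress` module; the subnet ranges are entirely hypothetical – substitute your own addressing plan.

```python
import ipaddress

# Hypothetical ranges -- substitute your own addressing plan.
MANAGEMENT_NET = ipaddress.ip_network("10.99.0.0/24")   # dedicated admin VLAN
PRODUCTION_NET = ipaddress.ip_network("10.10.0.0/16")   # production server network

def may_manage(source_ip: str) -> bool:
    """Permit management-plane access only from the admin VLAN,
    and explicitly refuse it from the production server network."""
    addr = ipaddress.ip_address(source_ip)
    if addr in PRODUCTION_NET:
        return False
    return addr in MANAGEMENT_NET

print(may_manage("10.99.0.17"))  # admin workstation -> True
print(may_manage("10.10.4.2"))   # compromised production server -> False
```

The explicit production-network deny before the management-network allow matters: if the two ranges ever overlap through a renumbering mistake, the setup fails closed rather than open.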
Consider how to deal with availability issues
If someone attacks your private cloud with a denial-of-service (DoS) attack you have a few options open to you – including hopping down to the comms room or data centre and hitting the big red switch or pulling out a LAN cable. You don't have that option in a cloud setup, so make sure you have a route into the management suite in the event of an availability problem. And this includes attacks on the network in which your technical team live – if your office catches fire and it's the only IP range permitted to manage the public cloud infrastructure, that's something of a problem.
Use (or emulate) out-of-band management
Taking this a step further, even in the private side of your setup you may not have ready access to the data centre: maybe it's a couple of hundred miles away, for instance. So have an analogue connection and appropriate hardware in each of your private installations to give you a last-resort dialup connection; you can get some very funky all-in-one KVM/console server devices these days and although they're not cheap, they're worth the investment.
And on the public cloud side, consider every possible DoS attack and make sure that your management path isn't susceptible to them: generally you'll be fine because the management layer is independent of the CPU/RAM usage of the server operating systems, but take the time to make sure this is the case.
Think about authentication

For each of the virtual machines you spin up on the public cloud side of your setup, consider how you're authenticating its users. In many cases you'll be creating a digital certificate that you lodge on whichever machine you're using to connect to the server, and you're not prompted for any password when you connect. Consider whether this is enough, and if you decide it's not (which you probably will) apply enough further authentication to make you comfortable with the security of that approach.
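One common "further authentication" factor to layer on top of a key or certificate is a time-based one-time password. As a sketch of how little machinery that involves, here's a stdlib-only Python implementation of RFC 4226 HOTP and its RFC 6238 time-based variant TOTP – the same scheme used by common authenticator apps. The secret shown is the RFC test secret, not something to deploy.

```python
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 time-based variant: HOTP over a 30-second counter."""
    t = int(time.time()) if at is None else at
    return hotp(secret, t // step)

# RFC 6238 test secret; at T=59 seconds the time counter is 1.
print(totp(b"12345678901234567890", at=59))  # prints 287082
```

In practice you'd use the cloud provider's or a vendor's MFA offering rather than rolling your own, but the point stands: a second factor costs very little and closes the "stolen laptop with a passwordless key on it" hole.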
Implement change management

Change management is absolutely crucial for secure operation. Remember that security isn't just about keeping data secret – it's also about ensuring your systems are available and that you're able to rely on the integrity of the data that sits on them. I've worked with various companies in recent years where we've started with no formal change management process and have introduced a formal regime, and in all cases systems have become more reliable and have suffered far fewer "why is it configured like that, and who changed it?" incidents. With the added complexity that hybrid setups bring, control over change is even more important than in a private-only setup.
Use a management suite
One answer to the above is to use a package that puts a layer of separation and control between you and the kit you're managing. Instead of authenticating directly to the target machine and managing it natively, put in an interim layer that acts as a proxy for all management connections. It authenticates anyone trying to connect, permits connections only to those who are legitimately trying to make management connections to each server, and can present your whole hybrid setup in a single interface to make it feel unified.
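The authorisation decision such a proxy makes can be reduced to a very small check. Here's a hypothetical sketch in Python – in a real management suite the per-admin target lists would come from the suite's own directory, not a hard-coded dict.

```python
# Hypothetical per-admin target lists -- in a real suite this would
# come from the proxy's directory service, not a hard-coded dict.
PERMITTED_TARGETS = {
    "alice": {"web-01", "web-02"},
    "bob":   {"db-01"},
}

def authorise(user: str, target: str) -> bool:
    """The proxy authenticates the admin first, then permits a
    management connection only to servers that admin may touch."""
    return target in PERMITTED_TARGETS.get(user, set())

print(authorise("alice", "web-01"))  # True
print(authorise("bob", "web-01"))    # False -- not on bob's list
```

Note the default of an empty set: an unknown user is denied everything, which is the fail-closed behaviour you want from a management chokepoint.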
And most importantly ...
Log the management sessions
… the proxy layer allows you to log everything that goes on – not just who initiates what sessions but even a command-by-command playback of all the instructions that were given during the management session. Some of the packages will even show you a video-style playback of a Windows remote access session.
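What "command-by-command playback" means in practice is that the proxy appends every instruction to a timestamped transcript. A minimal sketch, with hypothetical field names:

```python
import datetime
import json

def log_command(session_log: list, user: str, target: str, command: str) -> None:
    """Append one management command to the session transcript so the
    whole session can be replayed command by command afterwards."""
    session_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "target": target,
        "command": command,
    })

log = []
log_command(log, "alice", "web-01", "systemctl restart nginx")
log_command(log, "alice", "web-01", "tail -n 50 /var/log/nginx/error.log")
print(json.dumps(log, indent=2))
```

Real products capture this at the protocol level (SSH channel or RDP stream) rather than per command, but the resulting artefact is the same: an ordered, attributable record of who did what, where, and when.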
… and review the logs
And if you're logging what's going on, you'd be an idiot not to have a scheduled regime of log review. As with all logs, there's absolutely no point storing something if you're never going to look at it. Check the logs regularly and properly, and collate them against your change management records, holiday calendar, and so on. These logs will be useful for after-the-fact analysis if you need to figure out what went wrong in an outage or security incident, but treat that as the last resort: they bring huge value as part of day-to-day operation too. ®
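Collating session logs against change records is easy to automate. As a sketch, assuming hypothetical session and change-window records: any management session that falls outside every approved change window is a "who changed it, and why?" candidate for review.

```python
from datetime import datetime

# Hypothetical approved change windows and a session log to review.
CHANGE_WINDOWS = [
    (datetime(2019, 11, 4, 22, 0), datetime(2019, 11, 5, 2, 0), "CHG-1021"),
]

SESSIONS = [
    {"user": "alice", "target": "db-01",  "start": datetime(2019, 11, 4, 23, 15)},
    {"user": "bob",   "target": "web-01", "start": datetime(2019, 11, 6, 14, 40)},
]

def unexplained_sessions(sessions, windows):
    """Flag management sessions that fall outside every approved
    change window -- the candidates for human review."""
    flagged = []
    for s in sessions:
        if not any(start <= s["start"] <= end for start, end, _ in windows):
            flagged.append(s)
    return flagged

for s in unexplained_sessions(SESSIONS, CHANGE_WINDOWS):
    print(f"review: {s['user']} -> {s['target']} at {s['start']}")
```

Run daily, a report like this turns the log archive from a forensic dead-weight into the day-to-day operational tool the article argues for.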