Modernise and prosper: It's time to imbibe server orchestration
Live long by transporting what you know to a virtual environment
Server farm deployment used to be a simple enough process, sometimes manual, sometimes automated. The odd shell script here and there, some basic file manipulation, PXE boot and a few other automation tools were the order of the day. It was workable, at best.
As virtualisation technology matured, so did the automation of virtual deployments. Virtualisation gave access to accelerated deployment tools and techniques such as deployment from templates and cloning. An administrator could do a bit of post-deployment batch scripting, but that was usually the limit of modification without jumping through hoops.
Fast forward to the last few years and cloud and elastic compute have driven automated deployment to new levels of efficiency and ease. The administrator of old needs to reskill and understand that the days of performing manual deployments are almost over.
Orchestration and workflows are the new way forward for all but the smallest of enterprises. To be clear, to fully utilise an orchestration solution and see the returns, a company needs to be of a certain size and do more than a few deployments a week. Orchestration comes into its own when you have hundreds or thousands of virtual machines per week.
Just about every cloud offering has orchestration capabilities. But what is orchestration exactly?
A good working definition is: a way to automate the management of systems, software and services. The key words here are “automate” and “management”. Put simply, orchestration automates server builds, then takes the process further by building business logic and intelligence into the deployment.
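To make that concrete, here is a minimal sketch of the idea in Python. All the names and values are hypothetical – a real orchestrator would call out to IPAM, DNS and hypervisor APIs – but it shows the shape: the build steps are chained automatically, and business logic (here, a self-service sizing limit) lives inside the workflow itself.

```python
# Hypothetical orchestration workflow: each build step is a function,
# and business logic (the sizing check) is baked into the flow itself.

def allocate_ip(pool):
    """Take the next free address from a pre-provisioned IP pool."""
    return pool.pop(0)

def register_dns(hostname, ip, dns_table):
    """Create the DNS entry up front, so it is never a bottleneck later."""
    dns_table[hostname] = ip

def deploy_vm(hostname, cpus, ram_gb):
    # In a real orchestrator this would call the hypervisor's API.
    return {"host": hostname, "cpus": cpus, "ram_gb": ram_gb, "state": "running"}

def provision(hostname, cpus, ram_gb, pool, dns_table):
    # Business logic in the workflow: oversized requests need sign-off.
    if cpus > 8 or ram_gb > 64:
        raise PermissionError("Request exceeds self-service limits; needs approval")
    ip = allocate_ip(pool)
    register_dns(hostname, ip, dns_table)
    vm = deploy_vm(hostname, cpus, ram_gb)
    vm["ip"] = ip
    return vm

pool = ["10.0.0.10", "10.0.0.11"]
dns = {}
vm = provision("web01", cpus=4, ram_gb=16, pool=pool, dns_table=dns)
print(vm["ip"])  # 10.0.0.10
```

The point is not the individual steps – any admin can script those – but that the sequencing, the prerequisites and the approval rules are all encoded once and applied identically on every run.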
Orchestration can also mean less paperwork and less manual interaction. A good example of this is in a well-configured orchestration environment, where all the requirements to stand up a server (IP pools, DNS entries and such) are all available ahead of time. Not having these potential bottlenecks can slash provisioning time from days to hours, or even quicker.
A very positive effect of automation is that it removes human error and deploys with complete consistency. Humans make mistakes; orchestrated deployments may still fail, but that is usually because some subsystem is broken rather than being the fault of the deployment process itself – or some badly factored code, of course.
A good example of orchestration in use is providing the ability to self-provision. Within an orchestration-enabled organisation, approved users such as project managers or nominated staff should be able to utilise a self-service portal. This portal should allow them to select the type of server desired from a well-stocked service catalogue. Any customisation required can then be added, such as being able to take the default specification for a virtual server and modify it by adding additional CPU, memory or disk capacity as needed.
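A service catalogue of that sort can be sketched very simply. The catalogue items and sizes below are invented for illustration: each entry carries a default specification, and the requester's customisations are layered on top.

```python
# Hypothetical service catalogue: default specs plus requester overrides.
CATALOGUE = {
    "small-linux": {"cpus": 2, "ram_gb": 4,  "disk_gb": 40},
    "postgres-db": {"cpus": 4, "ram_gb": 16, "disk_gb": 200},
}

def request_server(item, **overrides):
    """Start from the catalogue default, then apply any customisation."""
    spec = dict(CATALOGUE[item])  # copy the default specification
    spec.update(overrides)        # requester adds CPU, memory or disk
    return spec

# Default small Linux server, but with the memory doubled by the requester:
spec = request_server("small-linux", ram_gb=8)
print(spec)  # {'cpus': 2, 'ram_gb': 8, 'disk_gb': 40}
```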
Orchestration isn’t solely used for machine deployment. It can be used to offload service catalogue items such as power cycles, snapshot requests, password resets and such like from the IT service provider to local administrators or team leaders, for example. Such tools are critical in a multi-tenanted environment.
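Delegation of that kind usually amounts to a role-based check in front of each catalogue action. A minimal sketch, with entirely hypothetical roles and actions:

```python
# Hypothetical delegation table: which tenant roles may run which
# day-two actions without raising a ticket with the IT service provider.
ALLOWED = {
    "team-lead": {"power-cycle", "snapshot", "password-reset"},
    "developer": {"snapshot"},
}

def run_action(role, action, vm):
    """Queue the action only if the role is permitted to perform it."""
    if action not in ALLOWED.get(role, set()):
        raise PermissionError(f"{role} may not perform {action}")
    return f"{action} queued for {vm}"

print(run_action("developer", "snapshot", "web01"))  # snapshot queued for web01
```

In a multi-tenanted environment the same table would also be keyed by tenant, so one customer's team leader cannot power-cycle another customer's machines.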
Underlying orchestration logic can also be configured to simplify complex issues that tend to consume precious admin cycles. An example of such an issue in a large environment is that the virtual machine requester may well not know which VLANs are required. It is understandable, as they are not system admins.
Some orchestration environments get around this issue by allowing a requester to mimic existing server network settings. Logic can be built into the workflow to restrict the available machines based on type. An example would be that a Postgres DB server shouldn’t be able to live on the same VLAN as the webservers, for obvious infosec policy reasons.
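That restriction logic is straightforward to express in a workflow. The policy table below is invented for illustration, but it captures the rule just described: the portal only offers a requester the VLANs their server type is permitted to join, so a database server can never be dropped onto the web tier's network.

```python
# Hypothetical VLAN policy: which network segments each server role
# is allowed to join. The requester never has to know VLAN details.
VLAN_POLICY = {
    "web":      {"vlan-dmz"},
    "postgres": {"vlan-data"},
}

def vlans_for(server_type, available_vlans):
    """Offer only the VLANs this server type may join."""
    return sorted(v for v in available_vlans if v in VLAN_POLICY[server_type])

available = ["vlan-dmz", "vlan-data", "vlan-mgmt"]
print(vlans_for("postgres", available))  # ['vlan-data']
```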