Continuous delivery: What works (and what doesn't)
Software on the assembly line
The notion that we might just as well automate everything is common in the perpetually dynamic world of continuous delivery.
What does continuous delivery (of software applications) actually entail, and is it a practical solution all the time?
It is effectively like refuelling your car while driving. Or perhaps a better analogy would be a factory assembly line that produces cream cakes. While new raw materials are pumped into hoppers and tanks feeding the manufacturing process, the assembly line keeps moving to produce more cakes.
Everything is continuous unless quality control or management decides to stop the production line because of errors, sub-standard products or for maintenance.
Let them eat cake
It is the same for software as for cream cakes: the assembly line (the programming team) works constantly to feed raw materials (blood, sweat and code) into the production plant (revision control system) so that the cake mix (automated integration phases) can keep producing new end product.
Cake tasting by batch (static analysis) is joined by higher-level quality control (unit testing) and the finished product is delivered to the cake shop (the pre-production or production environment) continuously.
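Stripped of the icing, the analogy maps onto a pipeline of ordered, automated stages where a failure anywhere stops the line. The sketch below is only an illustration of that ordering; the individual stage functions are hypothetical stand-ins for whatever real tools a team wires in.

```python
# Simplified sketch of the continuous delivery "assembly line".
# Each stage function is a hypothetical stand-in for a real tool;
# a failure at any stage stops the line, as on the factory floor.

def integrate(commit: str) -> str:
    """Merge the commit and produce a candidate build (the cake mix)."""
    return f"build-of-{commit}"


def static_analysis(build: str) -> bool:
    """Batch tasting: inspect the code without running it."""
    return True


def unit_tests(build: str) -> bool:
    """Higher-level quality control: exercise the build's behaviour."""
    return True


def deliver(build: str, environment: str) -> None:
    """Hand the finished product to the cake shop (pre-production)."""
    print(f"Delivering {build} to {environment}")


def assembly_line(commit: str) -> None:
    build = integrate(commit)
    if not static_analysis(build) or not unit_tests(build):
        raise SystemExit("Production line stopped: quality control failed")
    deliver(build, "pre-production")


if __name__ == "__main__":
    assembly_line("abc123")
```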
This all works fine in principle, unless of course the consumer really wanted bacon sandwiches in the first place.
It is even worse if the consumer has a gluten allergy that precludes eating cakes anyway. In other words, the user is more important than the process, however efficient it is.
Even if we accept all the above, what processes should still be manual and why? Moreover, how do we mitigate risk throughout the lifecycle of a continuous delivery project?
Websecurify, a vulnerability scanning company, explains that during the test phases of a continuous delivery project we should see automated security testing tollgates employed to identify vulnerabilities.
“If a critical vulnerability is identified, the process is stopped and feedback delivered to the development team. The pipeline cannot complete before the critical issues are remediated, therefore ensuring better security,” says the firm.
Websecurify further details the difference between static (white-box) automated security testing, which works on the application source code, and dynamic (black-box) analysers, which perform real-time tests simulating an actual attack.
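A minimal sketch of such a tollgate follows, assuming hypothetical run_static_scan and run_dynamic_scan helpers wrapped around whatever white-box and black-box scanners the pipeline actually uses; the point is simply that critical findings fail the stage and stop the pipeline.

```python
# Hypothetical security tollgate for a continuous delivery pipeline.
# run_static_scan and run_dynamic_scan are stand-ins for whichever
# white-box and black-box scanners the pipeline actually invokes.
import sys


def run_static_scan(source_dir: str) -> list[dict]:
    """Pretend white-box scan of the application source code."""
    # In a real pipeline this would shell out to a static analyser.
    return []


def run_dynamic_scan(staging_url: str) -> list[dict]:
    """Pretend black-box scan simulating an attack on a running instance."""
    # In a real pipeline this would drive a dynamic scanner against staging.
    return []


def security_tollgate(source_dir: str, staging_url: str) -> None:
    findings = run_static_scan(source_dir) + run_dynamic_scan(staging_url)
    critical = [f for f in findings if f.get("severity") == "critical"]
    if critical:
        # Stop the pipeline and feed the details back to the developers.
        for f in critical:
            print(f"CRITICAL: {f.get('title')} in {f.get('location')}")
        sys.exit(1)  # a non-zero exit fails the build stage
    print("Security tollgate passed")


if __name__ == "__main__":
    security_tollgate("src/", "https://staging.example.com")
```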
Security attacks are not the only continuous delivery risk and vulnerability: we need to examine application robustness and carry out stringent debugging procedures from front to back. But pure-play security is not a bad place to start.
So is continuous delivery ever a mistake? Why keep pushing releases into production if the software is not effective and efficient in the eyes of the users?
Too much too young
Is there a risk of too much too fast for customers – and shouldn’t users be in charge anyway?
ThoughtWorks chief scientist Martin Fowler has commented that a state of continuous delivery is achieved when the software team prioritises keeping the software deployable over working on new features.
This is a state in which push-button deployments of any version of the software can be channelled to any environment (or platform) on demand. This goes some way to putting users in charge, but not the whole way by any means.
Fowler and his team remind us that, in terms of programming activity, continuous delivery (a term coined by ThoughtWorks) comes down to a process of building executables and running automated tests on those executables to detect problems.
From that point we can push the executables into production (or at least pre-production or production-like) environments as they start to work. The theory is that this allows the team to build incremental extensions to the way software works on the basis of what the users have requested in the first place.
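In practice that description boils down to a registry of versions that have already been built and tested, plus a deploy step that can target any environment on demand. The sketch below is only illustrative; the ARTEFACTS store, the environment names and the deploy helper are all hypothetical stand-ins for a team's real artefact repository and deployment tooling.

```python
# Hypothetical push-button deployment: any built version, any environment.
# The artefact paths and environment names are illustrative only.
ARTEFACTS = {
    "1.4.2": "builds/app-1.4.2.tar.gz",
    "1.5.0": "builds/app-1.5.0.tar.gz",
}

ENVIRONMENTS = {
    "pre-production": "staging.example.com",
    "production": "prod.example.com",
}


def deploy(version: str, environment: str) -> None:
    """Deploy a previously built and tested artefact on demand."""
    artefact = ARTEFACTS[version]        # only versions that passed the pipeline
    host = ENVIRONMENTS[environment]     # any target environment
    # Stand-in for copying the artefact and restarting the service.
    print(f"Deploying {artefact} to {host} ...")


if __name__ == "__main__":
    deploy("1.5.0", "pre-production")    # push-button: pick a version, pick a target
```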
Connecting users back to the continuous delivery chain via a process of diligent requirements gathering is key to making sure that it is not a case of too much too fast for customers.
This is when continuous delivery Nirvana is achieved: it is not just a question of continuous delivery, but also one of continuous requirements analysis and user satisfaction. This particular Nirvana is not easily gained without concentrated meditative effort.
As continuous delivery practice lead for Europe at ThoughtWorks, Kief Morris echoes that sentiment.
"The most important places to have manual steps in the delivery process are reviewing feedback and data about how people are using the software, deciding what changes to implement, and implementing the changes,” he says.
"The purpose of automation in continuous delivery is to allow the team to focus their attention on what to build, rather than spending their time pushing bits onto servers."