Step one: Reduce code complexity
Koopman makes the point that improving software quality is largely a matter of observing best practices.
These include reducing code complexity, using static analysis tools and compiling with zero warnings, rigorously checking real-time code scheduling, testing the software adequately, and using basic tools including configuration management, version control and bug tracking.
CAST Research Labs reported in 2017 on application software health, based on 1,850 "large, multi-layer, multi-language business applications" across 329 organisations and eight countries – more than a billion lines of code. The report is based on five health factors: robustness, security, efficiency, changeability (how difficult it is to modify the code) and transferability (how difficult it is to understand for a developer new to the code).
The CAST report is encouraging, in that overall mean scores were at 3.0 or above for all categories, on a scale of 0 to 4, with security best at 3.22 and transferability worst at 3.0. This decent level of quality reflects the fact that these are in general mission-critical applications in well-resourced sectors.
The conclusions are still worth reading. Security scores had a wide variation, so lack of secure coding practice remains a problem. In terms of team sizes, CAST reckons teams of more than 20 developers achieved poor scores and suggests optimal team size is more like 10 people or fewer.
Another point of interest is that the methodology behind the best scoring projects was neither Agile (emphasis on iterative development) nor Waterfall (emphasis on up-front planning), but a hybrid approach with extensive up-front analysis followed by short iterative coding sprints.
CAST also makes the point that software architecture involving "multiple components spread across several layers of the application" is harder to assess than code quality at the level of a class or method; yet it is structural quality that accounts for most operational problems.
Slow coding class
Writing bulletproof code is slower and therefore more expensive at the time of development, which is one reason why software quality remains so variable. It is well known that the cost of fixing a defect increases the later it is found, though putting generic figures on the difference is difficult because it varies so much.
In a carefully architected DevOps process for a web application, where a code change can be made, tested automatically and deployed into production rapidly, the cost of fixing a bug found late may not be too bad. At the other extreme is a case like the 1996 explosion on launch of the Ariane 5 rocket, caused by a 64-bit floating-point value being converted to a 16-bit signed integer when the number was too large to fit. The immediate cost of the bug was around $500m.
An extreme example, but what about the everyday? So-called "poor-quality" software is costing the US economy $2.26 trillion, excluding technical debt, according to the Consortium for IT Software Quality (CISQ). Breaking that down, losses from software failures account for 37.46 per cent of that figure, and the task of finding and fixing defects for 16.87 per cent. The figures are derived from, among other factors, lost business and wasted investment.
Identifying and fixing a bug before it makes it into production is a priority, as the cost of fixing it, or dealing with its aftermath, grows the longer it lives on.
Bugs that make it through to production can therefore have severe long-term costs. If the bug is in software or firmware on which other software depends, such as an API, then third-party developers may have to code around the bug. Then a compatibility problem appears, since fixing the bug may break that other software.
One key difference between today's internet-connected world, and the early days of software development, is that deploying patches is easy and generally automated. Tracking down bugs that have made it into production is also easier, thanks to techniques such as prompting users to submit crash data back to the developers, or "flight recorder" agents that capture the application state and log exactly what code was executing at the time of a crash. Obvious defects are mitigated by being found and fixed quickly.
Decades after McConnell and despite numerous changes along the way, the frustration of software quality remains this: the knowledge and tools needed to write solid code exist, but human factors, including finance, deadlines, mismanagement, skills shortages and the challenge of dealing with legacy code and systems, mean that code quality remains uneven.
Given the huge and growing importance of software, the continuing prevalence of bugs is both sobering and disturbing. Implementing systems to minimise the burden of deploying fixes helps for sure, but it is effort to improve software quality at source that will yield the biggest benefit. ®