Software testing is still a controversial subject – everybody agrees that it is a "good thing", but it is frequently the first bit of the process to get cut when deadlines bite.
After all, those sneaky testers are really responsible for the bugs, aren't they? Our software is just fine until strangers start poking around inside it, trying to stop it going out to our eager users. Mind you, I was taken aback once when I asked some people in a bank why they imposed silly deadlines on the IT group I worked for – and was told that they didn't expect our software to work anyway.
So, they said, they'd rather get it, broken, a year before they needed it, with a year to iron out the bugs before it was deployed; than get it just before they really needed it and risk disrupting operational business systems with broken software.
That was then, and it represented a very expensive approach to testing, but things don't seem to have improved much. Natalia Juristo and Ana M. Moreno (Universidad Politécnica de Madrid) and Wolfgang Strigel (QA Labs), the guest editors of the July/August 2006 issue of IEEE Software (featuring Software Testing Practices in Industry), can still say: "Despite...obvious benefits, the state of software testing practice isn't as advanced as software development techniques overall. In fact, testing practices in industry generally aren't very sophisticated or effective. This might be due partly to the perceived higher satisfaction from developing something new...Also, many software engineers consider testers second-class citizens."
This highlights the fact that many of the issues with testing derive from a failure of process and the people carrying out the process, rather than from failures in technology or the supply of tools. After all, it is well known that defects are cheaper to remove the earlier that you find them, and cheapest of all if they're never introduced in the first place. But what chance is there of producing defect-free code if the most enjoyable part of the development process, for many programmers, is hunting bugs?
Unfortunately, you will only ever find some of the bugs in your code – as Myers pointed out in chapter one of The Art of Software Testing, it is "impractical, often impossible, to find all the errors in a program". So the more bugs you introduce (typically by guessing at a fix in the hope of hitting on the right answer while debugging), the more bugs there will be in the delivered product.
And yet, increasing legal regulation and growing concern with security issues make defects in delivered systems increasingly unacceptable. It is unlikely that "more of the same" will work any better than it ever has, so perhaps it is time to try radically new approaches to managing defect removal; mathematically based approaches might take some of the human issues out of the equation.
Bayesian Analysis and Formal Methods are examples of such approaches. Both are mature enough to be taken seriously, but neither is yet widely employed in mainstream software development. Perhaps they should be.
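To give a flavour of the Bayesian style of reasoning, here is a minimal sketch of how Bayes' rule can quantify confidence that residual defects remain after a run of passing tests. The function name, the prior (a 50% chance a defect remains) and the per-test detection rate (10%) are all illustrative assumptions, not figures from any study.

```python
# A minimal sketch of Bayesian defect estimation. The prior and
# detection rate below are illustrative assumptions only.

def p_defective_after_passes(prior_defective, detect_per_test, passes):
    """Posterior probability that the software still contains a defect
    after `passes` consecutive passing tests, via Bayes' rule.

    Model: if the software is defective, each independent test exposes
    the defect with probability `detect_per_test`; if it is defect-free,
    every test passes.
    """
    # Likelihood of observing `passes` passes under each hypothesis
    p_passes_if_defective = (1.0 - detect_per_test) ** passes
    p_passes_if_clean = 1.0
    # Bayes' rule: P(defective | evidence)
    numerator = prior_defective * p_passes_if_defective
    denominator = numerator + (1.0 - prior_defective) * p_passes_if_clean
    return numerator / denominator

if __name__ == "__main__":
    # Belief that a defect remains, after 0, 10 and 50 passing tests
    for n in (0, 10, 50):
        print(n, round(p_defective_after_passes(0.5, 0.1, n), 4))
```

The point of the sketch is that testing never drives the posterior to zero – it only shrinks it – which is a precise restatement of Myers' observation, and a basis for deciding rationally when further testing stops paying for itself.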