Formal methods, by David Norfolk
A requirements specification is, ideally, a rigorous logical definition of a business process, while code is an unambiguous statement of program logic; so, in principle, you can compare the two mathematically and prove (subject to Gödel and Turing, I suppose) that the code satisfies the corresponding requirements. If the maths is right, there's no need to test against spec.
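To make that concrete in miniature (my illustration, not Praxis'): suppose the requirement is "the operation increases x by exactly one" and the code is the assignment x := x + 1. The claim that the code meets the requirement can be written as a Hoare triple, and discharging it is an exercise in logic rather than in running tests:

```latex
% Requirement: the operation increases x by exactly one.
% Code: the assignment  x := x + 1.
% Hoare triple: if x has some value x_0 before the assignment,
% then x = x_0 + 1 holds afterwards -- provable from the
% semantics of assignment, with no test cases involved.
\{\, x = x_0 \,\} \quad x := x + 1 \quad \{\, x = x_0 + 1 \,\}
```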
This use of "formal methods" is usually thought, by the general public, to work only for trivially small pieces of code, but Praxis High Integrity Systems (Praxis-HIS or, increasingly, just Praxis; a UK consultancy in Bath, specialising in security and safety critical applications) asked, some years ago: is proof more cost-effective than testing for industrial-scale programs?
It came up with the answer that "proof appears to be substantially more efficient at finding faults than the most efficient testing phase". This implies, of course, that you use both proof and testing on the project, each where it is appropriate: proof may be the more cost-effective way of finding certain kinds of error, but it cannot find them all.
I was impressed, some time ago, by the way in which Praxis used its pragmatic combination of formal methods and conventional testing on SHOLIS (the Ship Helicopter Operating Limits Information System) for the UK MOD. See Is Proof More Cost-Effective Than Testing? by Steve King, Jonathan Hammond, Rod Chapman and Andy Pryor, IEEE Transactions on Software Engineering, vol 26, no 8, Aug 2000, here.
I recently went back to Praxis to see how this approach has developed. In the world of formal methods, simply remaining in business with an expanding customer-base is a measure of success, which Praxis has certainly achieved.
What Praxis now has is a named, documented process, "Correctness by Construction" (CbyC): build it right in the first place (instead of the more usual "construction by correction", that is, build it wrong and fix the errors afterwards).
This appears to work: at one level, Praxis now seems able to offer a warranty on its software, for any departures from spec; at another level, its programmers don't bother to use code debuggers, because the code is correct as delivered (you still need acceptance testing, but to show that the system works rather than to find errors).
There is technology behind this – a special language, SPARK, that supports formal verification; and smart tools to compare the formal spec with the SPARK code and to verify the code for completeness, logical consistency and so on – but the technology isn’t the main thing.
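SPARK itself is an annotated subset of Ada whose contracts are discharged statically, before the program ever runs. As a loose, runnable analogue (illustrative only – the names are mine, and real SPARK proves these conditions with a theorem prover rather than checking them at run time), the same idea can be sketched as explicit pre- and postconditions:

```python
# A loose analogue of a SPARK-style contract. In SPARK the tools prove,
# statically, that every caller satisfies the precondition and that the
# body establishes the postcondition; here the conditions are merely
# checked when the code runs.

def increment(x: int) -> int:
    """Return x + 1, under an explicit contract."""
    # Precondition: the caller must rule out overflow of a 32-bit word.
    assert x < 2**31 - 1, "precondition violated: x would overflow"
    result = x + 1
    # Postcondition: the result is exactly one more than the input.
    assert result == x + 1, "postcondition violated"
    return result

print(increment(41))  # prints 42
```

The point of the contract is that it restates the spec next to the code, so a tool (or a reviewer) has something precise to verify the body against.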
Praxis chief technical officer (software engineering) Peter Amey points out that Microsoft has superb technology, using similar mathematics to that behind CbyC, to help identify bugs in, say, device drivers as part of a certification process; but how much more cost-effective to supply formal device driver interface specs and build the drivers correctly in the first place, rather than certify them after they're built.
The CbyC principles are described in a paper on Praxis' latest project for the NSA, Engineering the Tokeneer Enclave Protection Software, published in the proceedings of the 1st IEEE International Symposium on Secure Software Engineering (ISSSE '06), March 2006. Roughly speaking, the principles are:
- Use a programming language with unambiguous static and dynamic semantics to "write it right" in the first place. No, probably not C++ (see, for example, Dominic Connor here).
- Take small steps, semantically, during development, so it is easier to check them for correctness.
- Each step in development should have a defined purpose, and should express information or embody decisions that are not made elsewhere (expressing the same information in several places risks introducing errors through inconsistency – the classic "duplicate data" issue).
- "Check here before going there" – verify each design step (usually against a prior design deliverable) immediately, before proceeding; and write deliverables in a simple way that facilitates review.
- Document, at the time, why you are doing something and why you are confident that design decisions have been implemented correctly, not just what you've done – apart from helping with future maintenance, this encourages you to find errors early.
- Use the most appropriate tool for the job when verifying deliverables. This may be tool-supported proof, static code analysis – or informal peer review. Don't, for example, use formal methods religiously if the answer can be obtained from reviewing a prototype with users.
- Constantly think about and question what you are doing – and encourage discussion with the various stakeholders.
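The "check here before going there" principle, in particular, can be sketched in code. One informal version (my example, not a Praxis deliverable) is to keep an obviously-correct executable specification alongside the refined implementation, and to check each refinement against it immediately, before moving on – Praxis would use proof or review at this point; a spot check is the informal end of the same spectrum:

```python
# "Check here before going there": the prior deliverable is an
# obviously-correct (if slow) executable specification; the new
# development step is a refined implementation, verified against
# the spec before development proceeds. Names are illustrative.

def sorted_spec(xs):
    """Specification: the obviously-correct prior deliverable."""
    return sorted(xs)

def sorted_impl(xs):
    """Refinement step: an insertion sort, written for the target."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Verify this step now, against the prior deliverable, before
# building anything on top of it.
for case in ([], [3, 1, 2], [5, 5, 0]):
    assert sorted_impl(case) == sorted_spec(case)
```

Because each step is small and checked against the one before it, an error is caught at the step that introduced it, which is exactly where it is cheapest to fix.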