The "old dog" in this case is Marcus Ranum, inventor of the proxy firewall and the man who implemented the first commercial firewall product. He’s now the CSO at Tenable Network Security, the company that produces the Nessus security scanner, and author of the book The Myth of Homeland Security.
He's also on the technical advisory board (TAB) for Fortify Software - which gets him the chance to play with the software for nothing, with a bit of free support thrown in (rather like being a journalist).
"After a bit of thinking," he says, "I cooked up the idea of taking Fortify's toolset and running it against my old Firewall Toolkit (FWTK) code from 1994, to see if my code was as good as I thought it was. As it happens, that's a pretty good test, because there are a couple of problems in the first version of the code that I already knew about - it would be interesting to see if Fortify was able to find them."
That's pretty brave of him - most programmers take a let-sleeping-dogs-lie attitude to their legacy code, if they can. You can read about the whole experience here, and it makes interesting reading.
There are some real insights, such as:
- “The more complicated the program is, the harder it is to get it right”; and the killer that follows on from this,
- “It's really hard to tell the difference between a program that works and one that just appears to work”.
OK, these may sound like common sense, especially the first one, but how common is common sense - how many companies actually assess complexity as an input to risk-based testing estimates? And, if your errors do cause your program to crash, then you're very lucky - you've just had the “scope of impact” of your mistakes limited.
Rattling the Saber
Marcus's insights seem to come largely from poking code using an old C interpreter tool called Saber-C (now called CodeCenter; there’s an introduction to Saber-C here), which fully simulates the runtime environment instead of running the program and then monitoring it, as a normal debugger does.
Marcus says: "Saber-C/CodeCenter is, to me, a tragic story - here was an absolutely terrific product, but the market just doesn't seem to have wanted it; I'm dumbfounded when I see programmers still debug code using printf() statements”. He sums up his "old tricks" as realising:
"...that there is no excuse for writing unreliable software. Humility - and an acceptance that you can and do make mistakes - is the key to learning how to program defensively. Programming defensively means bringing your testing process as close as possible to your coding so that you don't have time to make one mistake and move on to another one. I also learned that code that I thought was rock-solid was actually chock-full of unnoticed runtime errors that worked right 99% of the time, or were reliable on one architecture but not another..."
And so, what new trick did Fortify teach him? That working closely with software code for a long time doesn't mean that you've found all its defects, it really doesn't. Automated (and therefore entirely egoless) code analysis does find errors you didn't expect to find, even in long-established and respected software. An example Marcus quotes:
"Ouch!! I am trying to stuff a string that is potentially up to 2048 characters into mbuf which is 512. What was my mistake? Simple: I relied on a function that I had written years before, and had forgotten that it would potentially return strings that were as large as the original input. This kind of mistake happens all the time. I'm not making excuses for my own bad code - the point is that it's absolutely vital to go through your code looking for this kind of mistake. I know that, if you had asked me at the time when I wrote it, I would have sworn that "It was written with insane care; there are no unchecked inputs!" And I'd have been wrong. This was a fairly subtle mistake that I overlooked for 3 years while I was actively working on the code, and Fortify found it immediately."
Interestingly, Marcus found many "false positives" using Fortify too - but on investigating these, most turned out to be harmless in practice only because data that could trigger them couldn't reach those parts of the code. But what if just one of the myriad paths to a potential issue gets overlooked? He now thinks that this is a programming style issue: “I realized that it's important to keep the controls close to the danger-points,” he says, “rather than letting them be scattered all over the code.”
Marcus's final conclusion, after finding lots of defects (only one of which he knew about) in code that had been in wide use for 10 years, is: “when you consider that code-bases like Microsoft's and Oracle's can number in the tens of millions of lines, it's simply not realistic to expect manual code-audits to be effective”. Then he points out that the bad guys can use these tools too - but if we can just get our code to the stage where it takes months rather than hours to find a new exploit in it, then the world will be a better place. So, go on - read his article when you have a moment… ®