
Analysis, design, and never the twain shall meet

Jumping the chasm

Not many people know how to get from analysis to design. Software agilists “solve” the problem by blurring the distinction between the two. There has to be a better way...

It’s a familiar scenario to many. Deep in the basement where coders dwell, Hero Programmer #1138 stops at an apparent impasse and thinks: “Oh hang on, we didn’t agree on what happens if the user cancels at this point. In fact, we didn’t even talk about it. Well, I’ll just roll back the whole transaction and pop up a warning dialog taunting the user. But we also didn’t talk about transactions for this part of the system, so they’re not factored into the design. Well, here goes, let’s start coding!”
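
To make the scenario concrete, here is a rough Java sketch of the kind of fix that gets bolted on. Every name in it is invented for illustration; the point is the improvised rollback and the taunting dialog, decided by one programmer with nobody else in the room:

import java.sql.Connection;
import java.sql.SQLException;
import javax.swing.JOptionPane;

// Hypothetical sketch: none of these names come from a real project.
public class OrderScreen {

    private final Connection connection; // JDBC connection, opened elsewhere

    public OrderScreen(Connection connection) {
        this.connection = connection;
    }

    // Called when the user hits Cancel: behaviour nobody ever discussed.
    public void onCancelClicked() {
        try {
            // Transactions were never part of the design for this screen,
            // so the rollback is improvised on the spot.
            connection.rollback();
        } catch (SQLException e) {
            // Swallow the failure: another decision made without sign-off.
        }
        // The warning dialog taunting the user, also invented then and there.
        JOptionPane.showMessageDialog(null,
                "Cancelled. Everything you just typed has been thrown away.");
    }
}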

The programmer, bless his cotton socks, makes project-shaking decisions then and there, without a customer or analyst in sight, to retro-design new functionality into the system. Programmers are great at making technical decisions, but they’re not the right people to be defining and signing off on new requirements. Yet this scenario happens, a lot. It’s the way in which many projects wobble and stumble towards eventual completion. Then the bug hunt begins.

The project’s development process (if there is a defined process at all) must have some serious shortcomings if great swathes of functionality can be missed. In the agile world, where flexibility is sometimes taken to extremes, these shortcomings are addressed by putting safety nets in place: an on-site customer (or team of analysts) at the programmers’ beck and call. Incidentally, this arrangement was partly why the infamous Chrysler “C3” project failed: such ever-available paragons seldom represent the real paying users well. And rethinking the design as new functions are added is dressed up as “evolutionary design”, propped up by further safety nets such as copious unit tests and pair programming.

Evolutionary design does have its place, but using it as a “process hack” to catch forgotten requirements just isn’t it. It’s much better to explore the requirements in depth first, looking at all the “rainy day scenarios”: the things that can go wrong, and the moments where the end-user steps off the trail and does something unexpected. Getting this right isn’t as difficult, or as time-consuming, as it sounds: but software agilists wouldn’t sell nearly as many books, or get to speak in public as often, if they admitted it.
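
To show what I mean, here is a minimal counterpart to the earlier sketch, again with made-up names. Because the “user cancels” rainy day scenario was captured up front, the cancel behaviour and the transaction boundary are part of the design rather than an afterthought:

// Hypothetical sketch: the alternate course "user cancels mid-checkout" was
// agreed with the customer during analysis, so the design has a home for it.
interface OrderRepository {
    void saveInTransaction(Order order); // commits or rolls back as one unit
    void saveDraft(Order order);         // keeps the user's work for later
}

interface UserNotifier {
    void show(String message);
}

class Order { } // stand-in entity; the details don't matter here

public class CheckoutController {

    private final OrderRepository orders;
    private final UserNotifier notifier;

    public CheckoutController(OrderRepository orders, UserNotifier notifier) {
        this.orders = orders;
        this.notifier = notifier;
    }

    // Basic course: confirm the order inside a single, planned transaction.
    public void confirmOrder(Order order) {
        orders.saveInTransaction(order);
        notifier.show("Thanks, your order has been placed.");
    }

    // Alternate course, agreed in advance: keep a draft and stay polite.
    public void cancelOrder(Order order) {
        orders.saveDraft(order);
        notifier.show("Your order has been saved as a draft.");
    }
}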

The reason software agility appears to be such a saviour is that it skirts around the need to get from requirements to source code by blurring the distinction between them. But – and here’s the thing – getting from requirements to working, maintainable source code really isn’t that difficult, if you do it right. And if you get it right, you don’t need those process hacks. I’ll have a go at explaining how in the rest of this article.

If it isn’t too shameless to plug my own book, the process I describe here is illustrated in more detail, with oodles of examples and exercises, in Use Case Driven Object Modeling with UML: Theory and Practice (co-written with Doug Rosenberg, and published later this month). It’s adapted and refined from Ivar Jacobson’s approach to OO analysis and design, using use cases, robustness diagrams and sequence diagrams.
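
For a flavour of the middle step: robustness analysis sorts the objects in each use case into boundary objects (the screens the actor touches), control objects (the use case’s verbs) and entity objects (the domain classes), and those map fairly directly onto the classes and messages in the sequence diagrams. Here is a rough, hypothetical Java illustration, not lifted from the book, of how the three stereotypes tend to land in code:

// Boundary object: the screen the actor interacts with.
class LoginPage {
    private final LoginController controller = new LoginController();

    void submit(String username, String password) {
        if (controller.validate(username, password)) {
            // navigate to the home page...
        } else {
            // redisplay the page with an error message...
        }
    }
}

// Control object: the use case's logic, the "verbs" of the behaviour.
class LoginController {
    boolean validate(String username, String password) {
        UserAccount account = UserAccount.findByName(username);
        return account != null && account.passwordMatches(password);
    }
}

// Entity object: the domain class, usually persistent.
class UserAccount {
    static UserAccount findByName(String username) { return null; /* stub */ }
    boolean passwordMatches(String password) { return false; /* stub */ }
}

That is roughly all a robustness diagram is: a first-cut allocation of behaviour to objects, done while the requirements are still on the table rather than deep in the coders’ basement.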
