One programmer's unit test is another's integration test

Word games

The question of what units you are working with is one that will, at one time or another, have plagued anyone who has studied a science or a branch of physical engineering.

Teachers go to great lengths to make sure students remember to specify their units. It is not enough to say that the answer is 42. Forty-two what? 42 metres? 42 electronvolts? 42 furlongs per fortnight? Without a clear understanding of what units are involved, certain results and claims can be meaningless, misleading or simply expensive.

And so it is with software testing.

When it comes to programmer testing (the slightly ambiguous term used to describe the act of programmers testing, rather than the act of testing programmers), the questions normally fall into three categories: why, how, and what.

The question of why programmers should test their own code should not really need asking, but some folk seem to think that all forms and levels of testing should be handled only by people whose sole role is testing. This view is founded on various misconceptions about the roles, nature, and economics of software development. It has nothing to do with agile, fragile, or any other kind of development - it is simply a matter of professional responsibility.

The question of how to test covers everything from the use of automated testing to how testing relates to other development activities, a relationship that gives rise to the distinction between test-driven development and development-driven testing. The question of what to test, however, is largely independent of approach, and it is the one we're going to focus on here.

There are many things that can be tested about a software system, so what should be tested? There is a surprisingly simple answer to this question: anything that is considered significant, directly or indirectly, for the successful development and acceptance of the software. If performance is a critical feature of a given application, there should be tests for performance.

If scalability matters, there should be tests, and the equipment to run them, dedicated to that cause. If usability is supposed to be the unique selling point of a shrink-wrapped application, an empirical approach founded on usability testing should be employed rather than the usual usability conjecture. If code quality matters, there should be tests that are code-centric rather than system-centric, as well as static analysis and peer review. And so on.
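To make the contrast between code-centric and system-centric testing concrete, here is a minimal sketch in Python of what a code-centric unit test and a rough performance check might look like. The ShoppingCart class, its methods and the 200-millisecond budget are illustrative assumptions for the sake of the example, not taken from any real project.

```python
import time
import unittest


class ShoppingCart:
    """Illustrative class under test - a stand-in, not from any real codebase."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


class CodeCentricTest(unittest.TestCase):
    """A code-centric test: it exercises one class in isolation."""

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 2.50)
        self.assertEqual(cart.total(), 15.00)


class PerformanceCheck(unittest.TestCase):
    """A crude performance test: only worth writing if speed is
    actually significant for the application in question."""

    def test_totalling_a_large_cart_is_quick_enough(self):
        cart = ShoppingCart()
        for i in range(100_000):
            cart.add(f"item-{i}", 1.0)
        start = time.perf_counter()
        cart.total()
        elapsed = time.perf_counter() - start
        # The 200 ms budget is an assumed figure for illustration only.
        self.assertLess(elapsed, 0.2)


if __name__ == "__main__":
    unittest.main()
```

The first test says nothing about the system as a whole; it demonstrates that the behaviour of one piece of code matters. The second only earns its keep if speed genuinely matters, which is exactly the point: the tests you write reveal what you consider significant.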

These different kinds of tests differ in their degree of automation, their scale, and the roles responsible for them, but they all share a common aim: a test is a way of demonstrating that something is important. It shows you care.

Conversely, the absence of a particular kind of test or measure indicates that a particular aspect is not seen as critical. For applications that are not performance critical, having a battery of performance tests would not offer a particularly useful return on investment.

We can also use this way of thinking about tests as a way of deconstructing a project's actual priorities, as distinct from its advertised priorities. If someone on a project states that meeting customer requirements is the most important thing, but there are no tests defined with respect to the requirements, then meeting customer requirements is not actually as important as they would like you to believe.
