How to counter premature optimisation

Is optimisation the root of all evil - or, at least, of many bugs?

There was a time in my career, in the 1960s, when optimisation wasn't optional. System memories were measured in kilobytes, instruction times in tens or hundreds of microseconds. We planned our programs around those limited resources.

No longer. In fact, program optimisation is rarely needed these days despite programs that are several orders of magnitude larger. It’s been less expensive to pay for more computer capacity than for more programmer time for at least a decade. Yet programmers still spend time optimising even before there is a demonstrated need for it.

This causes a number of problems - let me count the ways:

  1. It wastes programmer time.
  2. It makes code more complex, leading to maintenance problems.
  3. It introduces subtle bugs.
  4. It increases debugging time.
  5. It delays initial production use of the program.
  6. It often becomes irrelevant after the next processor upgrade.
  7. It makes modification of the optimised parts of the application very difficult, even by the original programmer.
  8. In rare cases, the optimised version discovers previously unknown hardware glitches.
  9. Optimised code can depend on undocumented side effects, failing when the hardware is updated.
  10. Optimised code is often the basis for bragging competitions amongst top programmers, unfortunately spreading the infection.

I’ve personally made most of the mistakes listed above, starting when they were a necessity and continuing past when they were even useful. One personal instance comes to mind:

In 1967, I was working as a systems analyst on a CDC 3300, with cards for both source and object code. When I ran a compile, the source code was punched out in a compressed format. It was easier to handle, but decompressing it was really slow. I replaced the Fortran decompressor with assembler, and it ran six times as fast.

Getting it to work, installing it in the system, building a standard command card and bragging to the staff about how fast it was took about two weeks. Three months later, the centre added extra 7.5MB disks for the programmers. Source code was kept on disk, edited in batch or online, and the decompressor was history.

Saving a minute per compile never came close to repaying the hundred hours or so that I had invested - at that rate it would have taken some 6,000 compiles just to break even. I didn't learn that lesson for another nine years.

Optimisation good practice

The following quote is right to the point:

Rules of Optimisation (from M.A. Jackson):

  • Rule 1: Don't do it.
  • Rule 2 (for experts only): Don't do it yet.
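As a modern illustration of Rule 2 - not part of Jackson's advice, just a minimal sketch using Python's standard cProfile module - the idea is to profile a suspected hot spot first and let the measurements decide whether any rewrite is worth the programmer time. The summarise function and its workload below are hypothetical, invented purely for the example.

    import cProfile
    import pstats


    def summarise(records):
        """Hypothetical hot path: naive string building in a loop."""
        report = ""
        for name, value in records:
            report += f"{name}: {value}\n"  # suspected, but unproven, bottleneck
        return report


    if __name__ == "__main__":
        records = [(f"item{i}", i) for i in range(50_000)]

        profiler = cProfile.Profile()
        profiler.enable()
        summarise(records)
        profiler.disable()

        # Rule 2 in practice: only if these numbers show a real problem
        # does any optimisation effort begin.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

If the profile shows the suspect loop is negligible next to everything else the program does, Rule 1 applies and the code stays as it is.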

When hardware performance accelerated during the 1980s, most expert programmers continued their optimising habits, to the net detriment of overall software delivery and reliability. Only in hindsight did it become clear that optimisation was, in most cases, no longer a useful practice. But perhaps it was fun…

Today, the key list of development deliverables often doesn't even mention program performance, because the systems available can easily scale to extremely high capacities. Only the very largest workloads, such as those at the IRS and Social Security, still need optimisation techniques, and mostly for peripheral usage rather than for the processor.

At the lower end of system size - small and medium business - new techniques of virtualisation and the replication of software across an array of identical servers can provide the same expansion capability that has existed in the mainframe world for years, at much lower incremental cost.

Optimisation hasn't disappeared, but like the industry, it has enlarged its scope. We now optimise software systems for throughput, hardware for reliability, and clusters for power and air-conditioning efficiency. And, perhaps, to take advantage of multicore processors. But this is usually done for us by the vendors rather than by individual application programmers.

The overall capability of an IT installation as a whole is now viewed as providing a competitive edge in the marketplace. That capability takes in in-house software costs as well as purchased software, equipment, floor space, power requirements and operational personnel. Optimising this whole environment against company performance and profitability is now the usual measure of effectiveness in the executive suite.

Optimisation hasn't gone away; it has grown up.

Bill Nicholls is an IT industry veteran: with Univac from 1964, with Weyerhaeuser until 1985, and since then a software developer and writer for Byte and Byte.com.
