Opinion: Give up on the notion that computer security can be improved by putting more people in prison, argues SecurityFocus columnist Jon Lasser.
The war on hackers is failing for the same reason the war on drugs failed: Most individuals can control themselves, but there is a substantial group of people for whom no legal penalties will be enough to discourage their behavior.
The temptation to "beat the system" felt by hackers and crackers, and even by ordinary computer users, can be enormous. People will succumb to the temptation to pirate copyrighted material, to disable copy protection on software, and to break into other people's computer systems.
Meanwhile, the costs of the war on hackers are unreasonable: the PATRIOT Act, the DMCA, and similar bills now working their way through state legislatures will cause irreparable harm to the rights of all Americans -- and those costs alone likely exceed the benefits these laws offer.
That's why I think it's time to adopt a "harm reduction" approach to computer security. Traditionally, harm reduction is a strategy applied to illegal drug use, as an alternative to an unwinnable war on drugs. It's an approach that acknowledges the reality of drug abuse, and seeks to reduce the dangers posed by those drugs, both to the users and to society at large.
For example, the spread of HIV and hepatitis is one serious consequence of drug abuse. A harm reduction approach implements needle exchange programs to limit the spread of disease. It also treats drug addiction as a medical rather than legal problem, acknowledging that people have flaws.
Let's make the same concession in computer security: People will never be perfect, and software will never be perfect. So how can we reduce the harm caused to our information security by crackers? In my last column, I proposed several harm reduction strategies, including writing in safer programming languages and using tools like Immunix's StackGuard.
Since then, several readers have steered me towards ProPolice, an extension of the StackGuard system that can be integrated into a Linux or BSD system. OpenBSD 3.3, slated to be formally released on May 1st, integrates this and several other strong technologies to guard against buffer overflows.
Virtual Servers
OpenBSD also uses chroot more widely. Chroot confines an application to a small portion of the filesystem. Though chroot was not originally designed as a security technique -- it was written to make the BSD install process work more smoothly -- it lets network daemons run as an unprivileged user in a part of the directory tree that limits access to critical files. If a cracker gains remote access through a chrooted daemon, it is difficult (though not impossible) to gain control of the rest of the system.
Web servers, name servers, and database servers can all be chrooted with relative ease. However, few Linux and Unix installations do so by default. Using chroot is not difficult, and will reduce the risk to your system from running network daemons. (Of course, if you don't need that particular daemon, it's better to simply turn it off.)
A step far beyond chroot is to run a virtual private server using User-Mode Linux (UML). User-mode Linux runs a separate Linux kernel as an individual process on a running Linux system, and can use a specially-formatted file as a complete file system. If a hacker breaks into a Sendmail daemon running on a User-Mode Linux system and acquires root privileges, then that hacker controls only the virtual private server and the single file system on which it is being run.
UML can be somewhat tricky to set up and configure properly, and harder still to manage well: if you're running many UML instances on a single system, keeping them all patched takes real work. However, each virtual server can easily be reinstalled remotely, with little to no risk of rendering the server unbootable.
Hackers who have broken into a UML server can be carefully tracked and monitored. For this reason, as well as due to the ease of reinstallation, people running honeypots often use UML.
Again, it may be possible to "break out" of a UML process and gain control of the host system, but this is an order of magnitude more difficult than breaking out of a chrooted jail. It is difficult enough that only the most experienced crackers are likely to be able to do so. Because the partition file can be backed up or mounted outside of the system, and because a system can run an arbitrary number of UML processes, a reinstall can occur very rapidly following an intrusion into the system.
Chroot and UML will not stop hackers from breaking into systems. However, they will reduce the damage a hacker can do following a break-in. They can also make it easier to recover following an intrusion. All this without damaging the civil liberties of all Americans.
Jon Lasser is the author of Think Unix (2000, Que), an introduction to Linux and Unix for power users. Jon has been involved with Linux and Unix since 1993 and is project coordinator for Bastille Linux, a security hardening package for various Linux distributions. He is a computer security consultant in Baltimore, MD.