Updated A presentation at Black Hat last week by Tim Mullen of AnchorIs, offering a novel treatment for the Nimda worm, has caused considerable controversy because it involves taking unauthorized action against the offending box.
Mullen has come up with two possible ways of shutting down the bandwidth-hungry attacks when an infected IIS box attempts to spread the worm, each with its own advantages and problems. Method one places a bit of harmless code in the boot sequence which simply precludes Nimda from loading. The advantage here is that the machine will be made harmless without interfering with any functionality or damaging any files. The disadvantage is that it involves privilege escalation and requires a remote re-boot, which are a bit aggressive, however therapeutic they may be. The sudden re-boot could also be problematic where cached writes are common and RAM drives are in use, though supposedly Windows will handle them gracefully as it shuts down.
A Reuters hack unfortunately stated last week that the remedy would immobilize the machine until it's re-started, but this isn't correct. It does nothing except copy a bit of code and has no effect until the box is re-started.
Method two simply blocks outbound port 80. The advantage is that it's somewhat less intrusive than number one; the disadvantage is that it makes a configuration change and actually impedes functionality, however slightly. One thinks of poor Harry Homeowner with his Win2K machine running a Web server which he doesn't even know exists. Now he can't use his browser and he has no idea what to do next. Mullen says he'll pop up a console telling the poor sap what's happening and explaining how to address it.
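Blocking outbound port 80 amounts to a single firewall rule. Mullen's actual mechanism on a Win2K-era box isn't described, so as a purely illustrative sketch, here is how such a block would be expressed with the modern Windows `netsh` syntax (the rule name is a hypothetical label; the command is built but not executed):

```python
# Illustrative only: a Windows Firewall rule blocking outbound TCP
# port 80, expressed via netsh. This is NOT Mullen's tool -- his
# exact mechanism on Win2K isn't public. The rule name is made up.
BLOCK_RULE = [
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=BlockOutboundHTTP",   # hypothetical rule label
    "dir=out",                  # outbound traffic only
    "action=block",
    "protocol=TCP",
    "remoteport=80",            # the port Nimda spreads over
]

def build_block_command() -> str:
    """Return the firewall command as a single string, without running it."""
    return " ".join(BLOCK_RULE)
```

A real tool would run this with administrative privileges and, as Mullen suggests, pop up an explanation so the owner knows why his browser has stopped working.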
He's considering both methods and working on a tool, not entirely certain which approach will ultimately prove to be the most effective and least problematic. I'm not a lawyer but I play one in my column, and so far as I know, both methods are illegal -- I don't believe US law makes an exception for helpful intrusions. And legal or not, a large number of people will be extremely resentful of someone else installing code or affecting configurations without their knowledge or consent, though it may be in their best interest. Indeed, an infected machine wastes its owner's bandwidth as well as that of others it attacks.
After being hammered by hundreds of infected hosts whose unresponsive admins laugh off email pleas or reply with open contempt, Mullen's concluded that the last resort is simply to shut off the attacks as gently as possible. He justifies taking action as nothing more than old-fashioned self-defence.
It's not vigilantism, he reckons, because the tool he's contemplating will simply listen for an attack and respond only when the characteristics of Nimda (and perhaps Code Red) are evident. He also plans to rig it so that the tool will not affect a machine that's not infected. Thus if an infected box is coming through a proxy server, the proxy won't be affected in any way unless it too happens to be infected.
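Listening for the characteristics of Nimda and Code Red is straightforward, because both worms send distinctive probe strings to the web servers they attack. The patterns below are the well-documented request signatures from public worm analyses; the function and its name are illustrative assumptions, not details of Mullen's tool:

```python
import re

# Probe strings characteristic of Nimda (root.exe/cmd.exe requests,
# Unicode directory traversal) and Code Red (the .ida overflow).
# Patterns are drawn from public worm write-ups; the matcher itself
# is an illustrative sketch, not Mullen's actual tool.
WORM_SIGNATURES = [
    re.compile(p, re.IGNORECASE) for p in (
        r"/scripts/root\.exe",      # Nimda backdoor probe
        r"/msadc/root\.exe",        # Nimda backdoor probe
        r"cmd\.exe\?/c\+",          # Nimda command execution attempt
        r"%c0%af|%c1%1c|%255c",     # Nimda Unicode traversal sequences
        r"default\.ida\?[NX]+",     # Code Red overflow request
    )
]

def looks_like_worm(request_line: str) -> bool:
    """Return True if an HTTP request line matches a known worm probe."""
    return any(sig.search(request_line) for sig in WORM_SIGNATURES)
```

Responding only when one of these signatures fires is what lets the tool ignore clean machines: ordinary requests relayed through an uninfected proxy match none of the patterns.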
While no one would deny that people have a right to defend themselves, this foot-in-the-door approach does bring up the proverbial 'slippery slope' in two ways. We have to wonder what should and shouldn't constitute an attack (as opposed to a mere annoyance), and what degree of interference should be permissible in reply.
Last year an unusually savvy US District judge found that port scanning is not an attack and is therefore legal. Nevertheless it's often perceived as an attack, or as a precursor to one, and many people resent it, or simply fear it. Can they claim to be defending themselves even though the action they're replying to is legal?
We also have to wonder if the attacker's intent should matter. Someone infected with a worm or virus is clearly innocent. They may be causing problems for others, all right, but they have no intention of doing so. Are they a threat, or a victim themselves? Is their personal property now subject to interference from others? And if so, does their intent or lack thereof alter the equation?
And what about liability? Essentially, legally allowing someone to take unauthorized action on our machines implies that we're liable for the trouble they cause. Should we be liable for unintentional problems we cause with a poorly-defended machine which we choose to connect to the Internet? Does it matter that we wouldn't be susceptible to Nimda or Code Red if IIS had been designed and shipped in a more secure manner? Isn't Microsoft liable for selling a faulty product? In the real world, if a person causes damage with, say, his car, due to a design flaw, it's the car manufacturer we hold responsible, not the driver.
Isn't making users responsible for their 'problem machines' just like holding drivers responsible for unsafe cars? And isn't requiring users to know what they're doing security-wise just like requiring drivers to be auto mechanics? A car should be safe to drive; a Windows box should be safe to connect to the Net.
And on the other end, if a person defending himself causes unintentional damage to the machine that's attacking his own, is he liable? If so, the risk of litigation might well discourage the practice of defensive hacking even if it should be legal. And let's face it; installing code on someone's machine without their cooperation or consent is a hack. It may be beneficial; one may even be grateful; but that doesn't change its essential nature. Suppose I remotely install a harmless binary on your machine which improves its performance? You may dig the result, but you've still been hacked.
Now, Mullen's not talking about taking this sort of action except where self-defence can be invoked. But he is talking about hacking other people's machines, however benignly. This raises a number of questions which no doubt The Register's beloved readers will be pleased to weigh in on. ®
In a previous version I reported incorrectly that the Reuters hack mentioned above is with the Associated Press. I apologize to the AP, and to our readers, for my carelessness.