Opinion: A security company cleverly tricks hackers into compromising one of its distribution sites. Really.
Jim Fiebig once said that no one should be allowed to play the violin until they have mastered it. It is a humorous paradox, and for those who have been in the proximity of a fledgling violinist, one with merit.
As clever as this conundrum is, it illustrates a mindset that is pervasive in the security industry: We are not allowed to make mistakes.
It only makes sense; as with a master musician, security professionals are expected to be, well, security professionals. As such, our peers hold us to a high standard -- we are not supposed to get infected with worms or viruses. We are not supposed to fall prey to social engineering. And we are not supposed to get hacked.
The truth, of course, is that we all make mistakes -- all of us. For me, it is an important part of my learning process. I derive more value from identifying the things I've done wrong than from celebrating the things I've done right, as difficult as that may be. It all goes to building experience, which, as the old saying goes, is something you don't get until just after you need it.
But to learn from a mistake, you've got to admit to it first.
Internet Security Systems recently suffered a breach that resulted in the defacement of one of their corporate Web servers. The hacker group known as USG reportedly used a WebDAV exploit (for which a patch was already available) to deface the "X-Force Internet Watch" site with an anti-war political message.
It was a serious embarrassment for ISS. After all, here we have a leading security company, one that provides products to identify vulnerable systems as well as ones to supposedly prevent breaches in the first place, getting hacked with a public exploit.
Many took joy -- an almost perverse pleasure -- in the irony of the event. I too poked fun at them in IMs to different colleagues, but it was all in the spirit of a good roasting. "ISS got hacked, ha ha ha."
But in all seriousness, ISS is a large technology company -- over 1,200 employees in 22 countries -- so one can imagine how vast their computer network would be. Personally, I expect such companies to get hacked, particularly when the company researches software vulnerabilities; they probably have vulnerable machines all over the place for testing.
I fully expected ISS to respond with a "we have identified the administrator that failed to patch the system in question, and have forced him to drink buttermilk while watching home movies of Janet Reno in a leather teddy. We are confident that this will not happen again."
But they didn't.
Instead, ISS revealed that the hacked site, the one from which students and universities around the world downloaded free versions of BlackICE to protect themselves from hackers, was in reality a cleverly disguised, purposefully vulnerable honeypot, strategically placed in this hostile environment to collect and analyze the actions of evil hackers.
Well, that explains everything.
ISS didn't get hacked by a public exploit; it turns out they willfully and purposefully put trusting end-users at risk by allowing them to download binaries that could have been compromised, in order to conduct research and improve a product they sold to people they were really concerned about.
Having a single Web server defaced is, at worst, an isolated, administrative-level event; and for the most part, it is expected. But perpetuating a culture of putting innocent users at risk for a self-serving venture is systemic.
Given the situation, I would have expected to see some stronger legal protection for ISS mentioned in the license agreement users had to click through before downloading BlackICE. But you won't find any.
It almost makes me wish the honeypot revelation was exaggerated: that rather than acknowledge a genuine breach and admit a hack, a not-too-swift PR flack came up with the idea of calling the system a honeypot, misleading the media, their customers, and their global stockholders via an ill-considered ruse.
But there's no evidence of that, is there?
My recommendation: if you are considering deploying a honeypot to gather attack data, don't do it on a production box. Honeypots catch bees -- don't kill the flowers in the process.
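To make the point concrete, here is a minimal sketch of what a properly isolated, low-interaction honeypot might look like -- a hypothetical illustration, not ISS's actual setup or any particular product. It listens on an otherwise-unused port, logs each connection attempt and the first bytes sent, and serves nothing, so there is no production content for an attacker to tamper with and no binaries for trusting users to download.

```python
# Hypothetical low-interaction honeypot sketch. It serves no real content:
# it only records who connects and what they send, then closes the socket.
# Deploy on a dedicated, isolated host -- never on a production server.
import socket
import datetime


def record_probe(addr, data):
    """Build a log entry for one connection attempt."""
    return {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": "%s:%d" % addr,          # addr is a (host, port) tuple
        "first_bytes": data[:64].hex(),    # hex-encode so logs stay printable
    }


def run_honeypot(host="0.0.0.0", port=8080, max_probes=10):
    """Accept up to max_probes connections, logging the first bytes of each."""
    probes = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_probes):
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(2.0)  # don't let a silent scanner hang us
                try:
                    data = conn.recv(1024)
                except socket.timeout:
                    data = b""
                probes.append(record_probe(addr, data))
    return probes
```

The design choice is the whole lesson: because this listener shares nothing with a real distribution site, a compromise of the honeypot host risks only the honeypot -- which is exactly the separation the BlackICE download server lacked.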
We all make mistakes. But as described by ISS, this incident was willful. For the example it provides, I would rather have been lied to.
SecurityFocus columnist Timothy M. Mullen is CIO and Chief Software Architect for AnchorIS.Com, a developer of secure, enterprise-based accounting software. AnchorIS.Com also provides security consulting services for a variety of companies, including Microsoft Corporation.