In late October 2001, Microsoft Security Manager Scott Culp published a missive calling for 'responsible disclosure' of security vulnerability information on the Internet, claiming that it is the public availability of such information that makes major Internet security problems or cyber-terrorist events possible. His commentary was well-received by large commercial companies and security vendors, and panned by nearly everyone else.
During his discourse, Culp joined today's sensational security bandwagon by coining the term "information anarchy" to describe what would happen without 'responsible security discussions' held in controlled environments, away from where cyber-criminals might learn some new trick to cause electronic mischief or mayhem. First we had the White House (the most powerful government in the world) seeking to prevent an "Electronic Pearl Harbor" through any number of government initiatives. Now we have Microsoft (the most powerful monopoly in the electronic world) seeking to prevent "Information Anarchy" through any number of corporate initiatives. Perhaps "Information Anarchy" is a term intended to imply that information really doesn't want to be free, or can't be free and safe (thus attacking the legitimacy of the open source software movement), and must therefore be restricted through invasive software, policies, or law? Or is Culp simply trying to get a term into the New Hacker's Dictionary?
In his missive touching on several recent (and nearly exclusively Microsoft-based) security incidents, Culp noted that "the relationship between information anarchy and the recent spate of worms is undeniable. Every one of these worms exploited vulnerabilities for which step-by-step exploit instructions had been widely published. But the evidence is more conclusive than that. Not only do the worms exploit the same vulnerabilities, they do so using the same techniques as were published - in some cases even going so far as to use the same file names and identical exploit code. This is not a coincidence. Clearly, the publication of exploit details about the vulnerabilities contributed to their use as weapons." (That such information can be used for defense as well as attack never seems to have occurred to him.)
In other words, Microsoft is saying "Please don't publish anything about security flaws you find in our products. All this does is spread viruses, and makes us and our products look flawed, exploitable, and bad." Or, to paraphrase George Orwell, "your ignorance is our strength."
Culp declined to address the truisms that any networked Windows machine is inherently vulnerable, more so than most other operating systems (regardless of price or vendor) and that the only secure Microsoft software is what's still shrink-wrapped in the warehouse.
History shows that Microsoft's Windows is the most widely deployed operating system in the world, running everything from home computers to defense systems and critical infrastructure. These are the same products that are proven insecure, unstable, and dangerous on a monthly basis, causing well-known analysts to voice grave warnings over continued blind dependence on Microsoft products. The first warning came in a 1998 Computerworld article where Paul Strassman of the National Defense University in Washington, DC, rightly observed that:
Microsoft's dominance in operating systems represents a new threat to the national security of our information-based society. The government is trying hard to contain the expanding power of Microsoft by antitrust litigation that would prove present harm to consumers. That's insufficient. The government also should address the risks from information warfare attacks on a largely homogeneous systems management environment. Inevitably, infoterrorists and criminals will take advantage of flaws in the gigantic Microsoft operating systems that are on their way to becoming the engines for running most of our information infrastructure....An all-encompassing operating system bares itself to hostile exploitation of paralyzing security flaws. The presence of a fatal defect is unavoidable, as the complexity of Microsoft systems expands to bizarre proportions with each new release. It's the search for such a fault that occupies the minds of some of the brightest computer experts. Finding a crack through which one could induce mayhem with only a few keystrokes would be worth a great deal of money, especially when supporting an act of terrorism....No agricultural expert would suggest that only one crop, using the identical seed strain, be planted in Kansas, Ohio, Illinois and Iowa. "Monocultures," as biologists call them, are just too vulnerable to pests, disease and an unprecedented combination of ecological conditions. The Irish potato famine, for example, was caused by reliance on a single strain of potato.
Strassman's comments were incorporated into my 2000 missive "Microsoft: A Proven Danger to National Security" and recently echoed in Oliver Morton's December 2001 Wired article where he states that our de facto standardization on Microsoft products has the very real potential to be a national -- or international -- security issue, not simply an anti-trust one.
Given its track record, one has to wonder whether the company is genuinely concerned with addressing software security or simply trying to convince the world that its products are secure enough for the public to entrust their private data to Microsoft's .NET system, the software monopoly's new business model. As it stands now, nobody in their right mind would use .NET or rely on Microsoft Passport for any significantly important service, and that's probably what is driving the company's out-of-the-blue emphasis on security. After all, its image as a purveyor of secure, reliable software is lackluster at best, given the almost-comical nature and frequency of its security bulletins. As to its ability to serve as a reliable data services provider - the basis of the .NET strategy - we must remember that the monopoly suffered a humiliating network outage across its entire line of Internet properties earlier this year through a network architecture oversight that any second-year engineering student knows about. Not a very good way to entice new customers to join the as-yet-undetermined-but-definitely-proprietary .NET gravy train on which the company is staking its future.
Microsoft is using the security hysteria resulting from September 11 to market its newfound security ideas, conducting a pre-emptive marketing strike in the perfect medium for its message to take root and grow within corporate America. What remote cave has the company been living in for the past six years that security suddenly appears so critical to address? Security expert Simple Nomad correctly observes that "economically and politically this is a great time to start this [program] from Microsoft's perspective. Under the guise of preventing cyber-terrorism, anyone who opposes this is considered 'un-American.'"
During a security conference this week in Mountain View, California, Microsoft's Security Manager Scott Culp released an overview of the firm's plans for dealing with security information that expanded on his October missive to the Internet community. His briefing outlined a Microsoft Security Framework to allegedly facilitate more responsible security interaction and vendor involvement in resolving problems. This is accomplished by creating yet another vendor-biased "club" to restrict the discussion of vulnerability information away from the public and the evil lurking around every hub, router, and switch on the Internet. Charter members of this club are Microsoft and the major security software vendors Bindview, Foundstone, Guardent, @Stake, and Internet Security Systems. (It should be noted that despite public statements to the contrary, some of these firms have employed, and continue to employ, "black hat" hackers to research and develop the security products they sell to large enterprises.)
What's ironic is that Microsoft is reinventing the wheel, taking existing work in the field and warping it to fit its own proprietary image. Rain Forest Puppy's RFPolicy on vulnerability disclosure has floated around security circles for quite some time and more than adequately addresses how responsible disclosure can be accomplished. Yet nobody seems to care about this existing document. Is it because a "hacker" wrote it - because it doesn't bear the author's Real Firstname Lastname or come from a commercial entity - that it can't be trusted or deemed adequate? Once again, industry is looking to create the illusion of real security while enacting the farthest thing from it.
The CERT/CC did a similar thing earlier this year regarding how it released vulnerability information to the public, and even TruSecure's self-monikered "Surgeon General" Russ Cooper floated a similar "Vulnerability Club" concept with the intention of moving the discussion of vulnerabilities out of the public eye. On the CERT/CC action, recognized security consultant and pundit Brian Martin notes that "when CERT finally manages to release an advisory, it is vague and offers no technical details about the vulnerability. This prevents some administrators from being able to mitigate the risk with an efficient and effective solution. Essentially, it forces administrators to make drastic changes to their network, break necessary functionality, wait for a patch that may be weeks away, or audit tens of thousands of lines of source code to find out exactly where the problem is and if it truly affects them. Administrators are further burdened with trying to convince management or developers of the necessity for downtime without any facts to justify it."
According to Culp's PowerPoint slides, a long-term objective is for the MS Security Framework to be embraced by a "critical mass" within the computer community. In other words, Microsoft Windows customers, the largest and most 'critical mass' of computer users that matters to them.
Members of this Microsoft Security Framework will pledge to ensure their tools are used "only for lawful purposes." Culp proposes that members take steps such as restricting a network vulnerability scanner to a set of hard-coded target IP addresses, or developing restrictive licenses and product distribution channels. A totally useless concept. This does nothing to address the many freeware, shareware, or non-Framework companies that develop security and network administration products, not to mention the hacking or intentional misuse of legitimately-licensed software. And anyone who's had to re-acquire an ISS network scanning key for a new address range at their company knows this is more trouble than it's worth. There is also a danger to smaller security firms here. It's possible for large, diversified security software vendors (many with their own professional services business units) to exploit information gleaned from "per-use, per-site software licensing" to generate new business by approaching the firm whose IP address range is being scanned by their software and undercutting the smaller firm's bid for that same work. (Talk about knowing your competition!)
Reporting and discussion of vulnerability information would be restricted to members during a grace period; however, that restriction does not apply to sharing information with law enforcement, infrastructure protection entities, or "other communities in which enforceable frameworks exist to deter onward uncontrolled distribution." This is a direct attack on the proven value of public-access, full-disclosure lists like BUGTRAQ and VULN-DEV, free community resources that have proven useful to the security community on several occasions. We know that the 'infrastructure protection entities' Culp mentions include the FBI's National Infrastructure Protection Center (NIPC), whose information-sharing programs are lackluster at best, providing little if any useful information even to authorized users.
In short, trust us, we know what you need to know; we'll tell you what you need to know, as long as we think you should know it. Just trust us!
Elias Levy, security expert and former moderator of the BUGTRAQ list, considers this Framework akin to an "Information Cartel" designed to improve the image of software vendors by withholding potentially embarrassing information that could adversely impact sales. Simple Nomad also noted that the controversial Digital Millennium Copyright Act (DMCA) could be invoked by Microsoft against independent researchers and non-Framework members publishing vulnerability information about its products, just as Adobe did this past summer. In that case, the company would join Adobe in using law and criminal procedure as poor replacements for quality control and effective software testing. Perhaps by joining the Framework you are immune from DMCA liability, provided you only report your vulnerabilities to Microsoft? Will security researchers be forced to join the Framework or be litigated out of business?
If the discussion of security vulnerabilities is restricted to such "clubs" and "cartels," only the criminals/hackers/terrorists will be discussing them outside of such circles via e-mail lists, forums, and conferences. One conspiracy theorist even posited that by keeping such knowledge exclusively in the hands of large companies with deep pockets, such vendors would be free to exploit it at will for assorted purposes. Sounds a bit like the loony "if we outlaw guns, only criminals will own them" argument, doesn't it?
eEye Security was criticized by major vendors for publishing vulnerability and exploit information about the Code Red worm plaguing Microsoft-based Web servers this past summer, even though it was the first company to provide free, useful public information (no marketing strings attached) with the technical details of this latest security problem. Many system administrators used this information to monitor and protect their systems well in advance of Microsoft, CERT, or NIPC acknowledging and addressing this particular exploit (one of a recurring series of should-have-been-already-addressed Web server buffer overflow exploits). Yet eEye was vilified and called irresponsible for releasing "too much information" that could help mischievous folks launch further attacks based on the now-public exploit code. As we're seeing in the post-September 11th world, information is a two-edged sword that can both help and hurt people. Only human arrogance would assume that hurtful information must be kept from the general public. This is a rehash of the old argument that cDc's Back Orifice was a dangerous freeware hacker tool, but that an identical commercial product from Microsoft, Symantec, or another vendor was acceptable for network administrators. It's not the source of a product or piece of information, it's how it is used that makes the difference.
One only has to remember the I-LOVE-YOU fiasco from last year. Remember the NIPC warning on the subject? Its first message was absolutely incredible and totally irresponsible for an entity with its given mission. Four hours later, the message was updated with more useful information, but by that time the security forums and lists were already abuzz with people reporting the virus signature and propagation methods, publishing temporary fixes, and attempting to reverse-engineer the thing to investigate it. Under Microsoft's Framework, the preferred method of dealing with such an incident is to keep folks in the dark and issue only the barest shred of useful information, if anything is released at all. Something somewhere, at some time, is going to attack you. Beyond that, we can't tell you more because we either don't know or don't want to give anyone any ideas. Trust us, we will get back to you as soon as possible.
As I wrote earlier this year, community efforts to restrict the open discussion of vulnerability information are akin to fiddling while the electronic Rome burns. Many system administrators would rather know immediately of potential problems or exploits affecting their systems than be at risk for thirty-plus days while vendors decide if, when, or how to address the problem. Without ongoing, immediate public discussion free from corporate spin control, the Internet community is placed at serious risk by being deliberately kept in the dark. With operating systems, vendors know customers are at their mercy; a company is not going to run out and change operating systems and applications without prolonged management discussion. Thus, most folks can't simply address the problem themselves; they are dependent on the vendor to provide assistance.
According to a recent Register article, security researcher Marc Slemko published a finding demonstrating that millions of Microsoft Passport users were open to an attack that could have revealed extremely sensitive personal and financial data. In the spirit of community service and awareness, Slemko published the exploit to the Net. (Microsoft Passport is a centralized single-sign on service for network servers and websites, the center of its vaunted and still nebulous .NET strategy.)
Microsoft immediately disabled Passport services until a workaround could be implemented. However, some critics argue that had MS handled it according to their new disclosure regime, all of those customers would have remained open to attack for up to a month - if not more - entirely unaware that their personal information was in serious danger. Yet this is the plan Microsoft is proposing in its Framework. Again, customer ignorance is Microsoft's strength.
The software monopoly also announced that, in light of several self-inflicted incidents of shoddy quality-assurance testing, it was reviewing how its software patch distribution program could be improved. It seems the company has a record of releasing patches that break systems when applied, thus causing more problems and headaches for administrators. We've seen numerous occasions where Microsoft released a 'patch to a patch to a patch' because things kept breaking with each new fix.
The implication is that if you're really concerned about a vulnerability report, under the Microsoft Framework, your options are to wait until the vendor blesses the report and offers remediation instructions, or disconnect your network from the Internet, thus shutting your business down. Knowing that the results of applying many of Microsoft's patches are worse than the problems they're being released to fix in the first place, the poor system administrator truly is in a Catch-22 situation.
On the question of corporate responsibility, let's examine how Apple Computer recently handled a very embarrassing and potentially serious software problem, albeit not a security one. The company released a major update to its popular MP3 player iTunes on a Saturday evening. When a major bug was found later that night, the company immediately removed the installer from its website, fixed the bug, and made a revised application (properly labeled 2.0.1) available within 24 hours. More strikingly, Apple admitted responsibility for its developer's mistake in configuring the software installer and offered to reimburse victims the price of Norton disk restoration software or the cost of DriveSavers recovery service for anyone who lost data to the bug. All this occurred over a weekend, too.
Would this ever happen to Windows users? Microsoft has never worked a security issue this quickly or accepted public responsibility for being anything other than a self-proclaimed 'great software company.' Any problems that occur with its software are the user's fault, and if they [users] lose data, well, they should have made a backup first, even if they were simply installing a word processor. Microsoft is by far the most notorious in its vulnerability announcements, legalese, and cover-their-tail security alerts. Nor has the company ever accepted responsibility or admitted it is a monopoly, despite the findings of a federal court.
Instead, the company introduces this new high-and-mighty, allegedly-moral, definitely-proprietary approach to security that's designed more to improve its public image and prepare its .NET marketplace than anything else, using recent worldwide interest in anything security-related to imply the need for such initiatives, interest, and compliance.
Releasing better products would go a long way toward preventing the constant patch triage that Microsoft admins face on a weekly basis. The problem is not the periodic misuse of vulnerability information in the public domain, but Microsoft's delusional position that its products aren't to blame for these recurring, high-profile security incidents. Novices can write code to exploit Microsoft products because Microsoft makes it so easy for them to do so. If the software monopoly effectively addressed the underlying root causes of its software problems instead of merely treating each symptom as it is reported, today's novices would not have historical blueprints from which to build new attacks exploiting similar vulnerabilities in Microsoft's products. Code Red was not a "new" exploit but the latest in a series of buffer overflow problems that have affected IIS for years.
Full disclosure forums serve as a community resource and a much-needed check and balance against the profit-motivated interests of vendors who prefer that their customers blindly continue purchasing and supporting their products, blissfully unaware of the potential dangers they are susceptible to each time they boot up or log on. Absent this objective and freely available mechanism, the Internet community is at the mercy of corporations to decide how, when, or if a given security problem will be addressed.
The scientist who creates the cancer-fighting gene (a good thing) could also use that knowledge to develop tailored genetic weapons (a bad thing)....It's not about responsible disclosure, it's about vendor accountability, quality assurance, and this loony, misguided belief that security through obscurity works.
© 2001 InfoWarrior.org, all rights reserved.
Richard Forno is Chief Technology Officer for a Dulles, Virginia firm providing information assurance support to the national security and intelligence communities.