Report Considering the publicity that has surrounded - and, despite new security-focused Service Packs, continues to surround - Windows security issues, Microsoft's determination to demonstrate that Linux is less secure than Windows shows a certain chutzpah. The company has, however, had some support here; Forrester, for example, provides some numbers that can be used to support the contention that Microsoft flaws are less severe, less numerous and fixed faster. And although there's a general readiness among users to believe that Windows is a security disaster area, there's also a reasonable amount of support for the view that Linux would suffer just as many security issues if it had anything like Windows' user base.
But what's the truth? For every claim there is, somewhere, a counterclaim. But until now there has been no systematic and detailed effort to address Microsoft's major security bullet points in report form. In a new analysis published here, however, Nicholas Petreley* sets out to correct this deficit, considering the claims one at a time in detail, and providing assessments backed by hard data. Petreley concludes that Microsoft's efforts to dispel Linux 'myths' are based largely on faulty reasoning and overly narrow statistical analysis. Even if you think you know this already (as we fear may be the case for numerous Register readers), we think you'll find it useful to be able to say why you know it, what the facts and the numbers really are, and where you can get the document to back up what you're saying. Appropriately enough, we're offering the report for free. You can browse through it here, and you can download it in PDF format here.
We encourage you all to grab a copy and give it a good read, but as a service for the fast fact junkies, we've produced a few bullet points of our own. All of these are clearly supported (unlike some similar efforts you might find elsewhere) by Nicholas' report - but don't just take our word for that; check it against the full report.
Myths and Facts
Myth Windows only gets attacked most because it's such a big target, and if Linux use (or indeed OS X use) grew then so would the number of attacks.
Fact When it comes to web servers, the biggest target is Apache, the Internet's server of choice. Attacks on Apache are nevertheless far fewer in number, and cause less damage. And in some cases Apache-related attacks have had their most serious effect on Windows machines. Attacks are of course aimed at Windows because of the numbers of users, but its design makes it a much easier target, and much easier for an attack to wreak havoc. Windows' widespread (and often unnecessary) use of features such as RPC meanwhile adds vulnerabilities that really need not be there. Linux's design is not vulnerable in the same ways, and no matter how successful it eventually becomes, it simply cannot suffer attacks on the scale, or inflicting the damage, that Windows does.
Myth Open Source Software is inherently dangerous because its source code is widely available, whereas Windows 'blueprints' are carefully guarded by Microsoft.
Fact This 'inherent danger' clearly has not manifested itself in terms of actual attacks. Windows-specific viruses, Trojans, worms and malicious programs exist in huge numbers, so if one gives any credence at all to this claim, one would do better to phrase it 'Open Source Software ought to be more dangerous'. But the claim itself hinges on the view - rejected by reputable security professionals - that obscurity aids security. Obscurity/secrecy can also make it more difficult for the vendors themselves to identify vulnerabilities in their own products, and can lead to security issues being neglected because they are not widely known. The Open Source model, on the other hand, facilitates widespread review and makes it easier to identify and correct flaws. Modular design principles support this, while the overall approach is far more in line with security industry thinking than is 'security through obscurity.'
Myth Statistics 'prove' that Windows has fewer, less serious security issues than Linux, that Windows issues are always fixed, and that they are fixed faster.
Fact Quite a broad collection of 'facts' exists in this category, but what they have in common is the (actual) fact that they are usually based on a single metric - a single aspect of measuring security. Claims that all Windows flaws get fixed are baffling when we consider that there are Microsoft Security Bulletins saying some flaws will never be fixed, and the existence of these also makes it tricky to understand how the fix rate could ever reach 100 per cent. In the case of Forrester, which produces the 100 per cent as the Windows result for one of several metrics, the figure is arrived at by tallying flaws and fixes within a specific period. On the same metric Red Hat 'comes second', on the basis that one flaw was not fixed within the period. This is a rickety base for Microsoft (not, note, Forrester) to build a security campaign on.
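The fragility of that 'fixed within the period' metric is easy to demonstrate with a toy calculation. The dates below are entirely hypothetical - only the window logic matters: a single patch landing one day outside the reporting window is all it takes to drop a vendor from 100 per cent to 'second place'.

```python
from datetime import date

def fix_rate(flaws, window_end):
    """Share of disclosed flaws patched on or before window_end."""
    fixed = sum(1 for disclosed, patched in flaws
                if patched is not None and patched <= window_end)
    return fixed / len(flaws)

# Ten hypothetical flaws: nine patched inside the window, one
# patched a single day after the cut-off.
flaws = [(date(2004, 1, d), date(2004, 2, d)) for d in range(1, 10)]
flaws.append((date(2004, 1, 10), date(2004, 3, 1)))

print(fix_rate(flaws, date(2004, 2, 28)))  # 0.9 - 'second place'
print(fix_rate(flaws, date(2004, 3, 1)))   # 1.0 - '100 per cent'
```

Note that nothing about the flaws themselves changed between the two calls; only where the reporting window happened to close.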
This aside, simply claiming that Windows is more secure than Linux because the time from discovery of vulnerability to release of patch is greater for Linux skips consideration of the importance of what gets fixed. A comparison of 40 recent security patches with reference to Windows Server 2003 and Red Hat Advanced Server AS v3 shows that Windows experienced the most severe security holes, while Red Hat had only a handful (four) which rated as critical. It is also arguable that Microsoft understates vulnerabilities in Windows Server, because some flaws are deemed not critical for Server on the basis of system defaults which are in many operational scenarios impossible to adhere to. For Red Hat, on the other hand, there is an argument that Petreley's analysis overstates the extent of critical vulnerabilities (Red Hat does not assign severity levels), and that very few of them would allow a malicious hacker to perform mischief at administrator level.
If we reality-check these conclusions against another scale, we find that vulnerability metrics used by the US Computer Emergency Readiness Team (CERT) return 250 results for Microsoft, with 39 having a severity rating of 40 or greater, and 46 for Red Hat, with only three scoring over 40. So simply making claims based on that one metric (as Steve Ballmer did, again, earlier this week) is like judging a hospital's effectiveness in dealing with emergency cardiac care from its average speed in dealing with all patients.
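As a quick sanity check on those proportions - the helper function below is ours, while the counts are the CERT-derived figures quoted above - the share of high-severity entries differs markedly between the two vendors, which is precisely the dimension a single time-to-patch metric throws away:

```python
def high_severity_share(total, high):
    """Fraction of a vendor's CERT entries rated high-severity."""
    return high / total

# Counts as quoted above: 39 of 250 Microsoft entries scored 40+,
# versus 3 of 46 for Red Hat.
ms = high_severity_share(250, 39)
rh = high_severity_share(46, 3)
print(f"{ms:.1%} vs {rh:.1%}")  # 15.6% vs 6.5%
```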
Reliance on a single metric is a major feature of Microsoft's Get the Facts campaign, and this is perhaps understandable if we consider what the campaign is. It is essentially a marketing-driven campaign intended to 'get the message across', with data used to back up the message (note that Microsoft would not necessarily disagree with us here). However, by their nature marketing campaigns push specific, favourable headline items and magnify their significance. They do not necessarily (or even usually) accurately reflect the underlying data, and frequently outrun it by some distance. And this process is actually easily illustrated by the Forrester report we linked to earlier on. Get the Facts pulls out the 100 per cent fix and fewest vulnerabilities bullets, while the report itself talks of its use of three metrics and (if we're doing headline items) also says: "ICAT classified 67% of Microsoft's vulnerabilities as high severity, placing Microsoft dead last among the platform maintainers in this [high severity] metric."
So here right on the front page of its 'data-backed' campaign, Microsoft has stripped a single metric out of the underlying data, paraphrased it and put it in the headline. You don't want to be doing this, so you really do want to read the report.
* Nicholas Petreley's former lives include editorial director of LinuxWorld, executive editor of the InfoWorld Test Center, and columnist for InfoWorld and ComputerWorld. He is the author of the Official Fedora Companion and is co-writing Linux Desktop Hacks for O'Reilly. He is also a part-time analyst for Evans Data and a freelance writer.