Benchmark battles – now Linux beats NT

A 'real life scenario' test by c't mag does a good job of putting things in context


In the wake of the latest round of NT versus Linux face-offs, German magazine c't has published the results of its own tests of the two operating systems. The c't tests, conducted by Jürgen Schmidt, were intended to assess the two rivals in the 'real life' situations Linux is supposed to be good at. Linux does a lot better than it has in the 'clash of armour' benchmarks we've seen so far, and Schmidt makes a number of eminently valid and sensible observations concerning the real operation of web servers. (Full c't report)

The tests pitted NT 4.0 and IIS against SuSE Linux 6.1 and Apache on a quad 450MHz Xeon Siemens server with two gigabytes of RAM, twin EtherPro 100 boards and a RAID system. One of the first 'real life' differences between the c't and Mindcraft tests was the use of RAID-5 rather than RAID-0. Mindcraft's use of the latter was aimed at raw performance, while c't went for a more realistic performance/stability compromise.

Schmidt also comments on the nature of the Mindcraft test: "Unlike for the Mindcraft test, which required the server to produce its pages through four 100-MBit interfaces, we decided on a more realistic scenario. How many web servers actually serve four of these network interfaces? The majority of web servers make do with a 10-MBit interface, and even in an intranet one 100-MBit board should be sufficient. This was the configuration we chose for our tests. To get an impression of maximum load behaviour anyway, we made the server prove it can handle two Fast Ethernet connections."

For serving a static HTML page of 4-8KB, the two came out roughly even at 4KB, with Linux slightly ahead at 8KB. Schmidt notes that neither operating system benefited to any great extent from multiple CPUs, although the Linux installation was running kernel 2.2.9, which handles SMP better than 2.2.5. Linux did, however, perform substantially better on random requests drawn from a pool of 1,000,000 4KB files: with 512 simultaneous requesters, NT managed to answer 30 per second, Linux 274.

In another test using a CGI Perl script, Linux delivered twice as many pages as NT on a single CPU, and 2.5 times as many with four CPUs. This isn't entirely surprising, as IIS' support for Perl isn't great. NT did, however, shine when using multiple network boards, and Schmidt comments: "Linux's comparatively bad results when tested with two network boards show that Mindcraft's results are quite realistic. NT and IIS are clearly superior to their free competitors if you stick to their rules."

In summary, he feels that "additional CPUs for plain web server operation with static HTML pages are a waste. Even with two Fast Ethernet lines there's only a moderate increase of less than twenty per cent." The server didn't need to work at full capacity, and the tests simulated conditions tougher than you'd expect in most real-life scenarios.

"In SMP mode, Linux still exhibited clear weaknesses. Kernel developers, too, admit freely that scalability problems still exist in SMP mode if the major part of the load comes through in kernel mode. However, if user mode tasks are involved as well, as is the case with CGI scripts, Linux can benefit from additional processors, too. These SMP problems are currently the target of massive development efforts."

In the most relevant, practical areas, Linux and Apache "are already ahead by at least a nose," and if the pages don't come directly from main system memory they're more clearly ahead. c't was also impressed by the level of support it received from the Linux community. Microsoft was slow to respond to requests for information, while "Emails to the respective [Linux] mailing lists even resulted in special kernel patches which significantly increased performance. We have, on the other hand, never heard of an NT support contract supplying NT kernels specially designed for customer problems." A very sensible report, and well worth reading in detail. ®
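
To give a feel for the kind of test described above, here is a minimal sketch of a static-file load generator: many concurrent requesters pulling randomly chosen small files and reporting overall throughput. This is not the harness c't used; the base URL, the file-naming scheme (file<N>.html), the pool size and the per-worker request count are all assumptions chosen purely for illustration.

```python
#!/usr/bin/env python3
"""Illustrative static-file load test, loosely in the spirit of the c't runs.

Assumptions (not from the article): the server exposes files named
file0.html ... file999999.html of roughly 4KB under BASE_URL.
"""
import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://testserver.example/files"  # hypothetical document tree
NUM_FILES = 1_000_000                         # pool of small files (assumed layout)
CONCURRENCY = 512                             # simultaneous requesters, as in the c't run
REQUESTS_PER_WORKER = 20                      # kept small so the sketch finishes quickly

def worker(_worker_id: int) -> int:
    """Fetch a handful of randomly chosen files; return how many succeeded."""
    ok = 0
    for _ in range(REQUESTS_PER_WORKER):
        url = f"{BASE_URL}/file{random.randrange(NUM_FILES)}.html"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
                ok += 1
        except OSError:
            pass  # failed or timed-out requests simply aren't counted
    return ok

if __name__ == "__main__":
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        completed = sum(pool.map(worker, range(CONCURRENCY)))
    elapsed = time.monotonic() - start
    print(f"{completed} requests in {elapsed:.1f}s ({completed / elapsed:.1f} req/s)")
```

A real benchmark would also control for caching, keep-alive behaviour and warm-up time; the point here is only to show what "512 requesters fetching random 4KB files" means in practice.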


Other stories you might like

  • Cuba ransomware gang scores almost $44m in ransom payments across 49 orgs, say Feds

    Hancitor is at play

    The US Federal Bureau of Investigation (FBI) says 49 organisations, including some in government, were hit by Cuba ransomware as of early November this year.

    The attacks were spread across five "critical infrastructure", which, besides government, included the financial, healthcare, manufacturing, and – as you'd expect – IT sectors. The Feds said late last week the threat actors are demanding $76m in ransoms and have already received at least $43.9m in payments.

    The ransomware gang's loader of choice, Hancitor, was the culprit, distributed via phishing emails, or via exploit of Microsoft Exchange vulnerabilities, compromised credentials, or Remote Desktop Protocol (RDP) tools. Hancitor – also known as Chanitor or Tordal –  enables a CobaltStrike beacon as a service on the victim's network using a legitimate Windows service like PowerShell.

    Continue reading
  • Graviton 3: AWS attempts to gain silicon advantage with latest custom hardware

    Key to faster, more predictable cloud

    RE:INVENT AWS had a conviction that "modern processors were not well optimized for modern workloads," the cloud corp's senior veep of Infrastructure, Peter DeSantis, claimed at its latest annual Re:invent gathering in Las Vegas.

    DeSantis was speaking last week about AWS's Graviton 3 Arm-based processor, providing a bit more meat around the bones, so to speak – and in his comment the word "modern" is doing a lot of work.

    The computing landscape looks different from the perspective of a hyperscale cloud provider; what counts is not flexibility but intensive optimization and predictable performance.

    Continue reading
  • The Omicron dilemma: Google goes first on delaying office work

    Hurrah, employees can continue to work from home and take calls in pyjamas

    Googlers can continue working from home and will no longer be required to return to campuses on 10 January 2022 as previously expected.

    The decision marks another delay in getting more employees back to their desks. For Big Tech companies, setting a firm return date during the COVID-19 pandemic has been a nightmare. All attempts were pushed back so far due to rising numbers of cases or new variants of the respiratory disease spreading around the world, such as the new Omicron strain.

    Google's VP of global security, Chris Rackow, broke the news to staff in a company-wide email, first reported by CNBC. He said Google would wait until the New Year to figure out when campuses in the US can safely reopen for a mandatory return.

    Continue reading
  • This House believes: A unified, agnostic software environment can be achieved

    How long will we keep reinventing software wheels?

    Register Debate Welcome to the latest Register Debate in which writers discuss technology topics, and you the reader choose the winning argument. The format is simple: we propose a motion, the arguments for the motion will run this Monday and Wednesday, and the arguments against on Tuesday and Thursday. During the week you can cast your vote on which side you support using the poll embedded below, choosing whether you're in favour or against the motion. The final score will be announced on Friday, revealing whether the for or against argument was most popular.

    This week's motion is: A unified, agnostic software environment can be achieved. We debate the question: can the industry ever have a truly open, unified, agnostic software environment in HPC and AI that can span multiple kinds of compute engines?

    Our first contributor arguing FOR the motion is Nicole Hemsoth, co-editor of The Next Platform.

    Continue reading
  • Sun sets: Oracle to close Scotland's Linlithgow datacentre

    Questions for tenants as Ellison's gang executes its OCI strategy

    Oracle's datacentre in Linlithgow, Scotland is set to close over the next few months, leaving clients faced with a cloud migration or a move to an alternative hosted datacentre.

    According to multiple insiders speaking to The Register, Oracle has been trying to move its datacentre clients to Oracle Cloud Infrastructure – with mixed results.

    The Linlithgow facility dates back to the days of Sun Microsystems, which opened a manufacturing plant there in 1990.

    Continue reading

Biting the hand that feeds IT © 1998–2021