‘Hit team’ drives MS plan to bludgeon Linux with benchmarks

And who's in charge? Step forward Jim Allchin, we reckon...


Today's Wall Street Journal reports that Microsoft has deployed a 'hit team' to deal with the threat from Linux. This isn't entirely surprising - in reality, you could see the hit team forming when the Halloween memos leaked - but it is surprising that the team's command structure is so easy to identify from what Microsoft tells the WSJ. This probably isn't evidence of the fabled kinder, gentler Microsoft; more likely the company has simply figured out that it'll be caught if it doesn't confess anyway.

The WSJ fingers Windows 2000 director of marketing Jim Ewel as running the team, which is somewhere under ten strong (we'll get back to that). But it seems pretty clear that reporting is direct to Jim Allchin, Grand Dragonlord of the OS and the man who's getting Win2k to market. Allchin is quoted as saying of Linux: "I have now upped the focus on it. I've got the performance team prepared to benchmark it every which way."

That seems to confirm several things. The hit team will be operating under Allchin, and the Microsoft effort against Linux is in reality somewhat larger than around ten people, because we've got the performance team on it as well, haven't we? And "benchmark it every which way" leaves a very clear Allchin fingerprint. The Halloween memos roughed out some initial counter-attack strategies Microsoft could use, and a few of these showed up in the Mindcraft test - the grouses about moving targets, for example. Another Halloween suggestion, a disruptive Microsoft move towards open source, has been noisily touted by Steve Ballmer for some months now (so he's on the team too), but it's obvious Allchin will have set up the first Mindcraft tests as part of the counter-attack.
The Mindcraft-Microsoft challenge to a second, allegedly more Linux-friendly, round of tests is of course the next phase of "benchmark every which way", and as the challenge combined a good deal of marketing spin with classic marketing comparison checkboxes, it doesn't take an Einstein to figure out that a certain Win2k director of marketing is likely to have been involved, and that the effort actually started last summer, at the latest.

And here's what appears to be an intriguing piece of marketing spin that seems to have crept into the WSJ piece. We all know the story of the original Mindcraft test, and we know about the complaints from Linus and others about its integrity. And we know about the challenge, issued just last week, in which Microsoft and Mindcraft said they'd agreed to all the demands of the Linux folks and were prepared to re-run the tests. We quote from today's WSJ: "Mindcraft reran the tests, this time with input from Mr. Torvalds and others. Linux did better the second time, but even Linux boosters admit that their operating system can't keep up with NT on bigger systems."

So when did Mindcraft rerun this test, then? And where does that leave the rerun it's proposing to do, if Linus et al agree? Well, presumably before Microsoft decided to issue the challenge, Mindcraft and/or Microsoft will have had another go at the tests to figure out NT's chances in a public battle. Odd that it should leak out like this, though. ®

Related Stories:
MS declares war on Linux
MS marketing spins on Linux
Can Linux avoid MS NT trap?
Tests show flaws in MS Linux study
MS memo outlines anti-Linux strategy
Second MS leak boosts Linux


