IBM witness – MS probes for conflict within IBM

Ah, the 'witness is a loose cannon' gambit...


MS on Trial Microsoft counsel Richard Pepperman, during the deposition of Garry Norris of IBM, revealed a document from Brian Conners, VP of IBM's consumer division, that said: "I would like to feature IE only and remove Netscape. This is contingent upon Microsoft supporting IGN [IBM Global Network] as an ISP. Also with IE 4.0 we will have a channel (1 or 2 of 10) to personalise." Norris said he had not previously known of this document.

Microsoft is clearly seeking to show that there was some dissent within IBM, and that the browser market was competitive (except that Microsoft does not like to recognise the existence of a browser market, of course). The purpose of Pepperman's revealing these details appeared to be so that Microsoft could be seen as a fair player in the negotiations, and to detract from the absolute power that Microsoft had over OEMs.

It became clearer as questioning proceeded that Pepperman wanted to establish that Microsoft was essential to IBM, and that it had accommodated at least some of IBM's particular contractual needs, even though IBM was Microsoft's most vigorous competitor at the time. Pepperman was keen to have on record an extract from an internal IBM memorandum (about a request for bonuses) that said: "During these negotiations, Microsoft provided IBM with concessions such as selective recovery, recovery CD distribution, open bay, EIAA, and the team negotiated more agreeable terms over onerous terms such as new OPK implementation and GUI restrictions."

Occasionally Norris showed some spark. When asked: "It was IBM's first priority, though, wasn't it, to sell and promote IBM solutions on IBM platforms?" he replied: "That's what companies normally do that make products for a profit, yes."

Microsoft's greatest fear must lie in Norris' direct examination, and in the documents produced at that time. It begins to look as though IBM is gaining confidence about its contractual relationship with Microsoft. Perhaps revenge will also be sweet, at least for those with a long-enough memory of the OS/2 saga. ®

Complete Register trial coverage


Other stories you might like

  • Warehouse belonging to Chinese payment terminal manufacturer raided by FBI

    PAX Technology devices allegedly infected with malware

    US feds were spotted raiding a warehouse belonging to Chinese payment terminal manufacturer PAX Technology in Jacksonville, Florida, on Tuesday, with speculation abounding that the machines contained preinstalled malware.

    PAX Technology is headquartered in Shenzhen, China, and is one of the largest electronic payment providers in the world. It operates around 60 million point-of-sale (PoS) payment terminals in more than 120 countries.

    Local Jacksonville news anchor Courtney Cole tweeted photos of the scene.

    Continue reading
  • Everything you wanted to know about modern network congestion control but were perhaps too afraid to ask

    In which a little unfairness can be quite beneficial

    Systems Approach It’s hard not to be amazed by the amount of active research on congestion control over the past 30-plus years. From theory to practice, and with more than its fair share of flame wars, the question of how to manage congestion in the network is a technical challenge that resists an optimal solution while offering countless options for incremental improvement.

    This seems like a good time to take stock of where we are, and ask ourselves what might happen next.

    Congestion control is fundamentally an issue of resource allocation — trying to meet the competing demands that applications have for resources (in a network, these are primarily link bandwidth and router buffers), which ultimately reduces to deciding when to say no and to whom. The best framing of the problem I know traces back to a paper [PDF] by Frank Kelly in 1997, when he characterized congestion control as “a distributed algorithm to share network resources among competing sources, where the goal is to choose source rate so as to maximize aggregate source utility subject to capacity constraints.”

    Continue reading
  • How business makes streaming faster and cheaper with CDN and HESP support

    Ensure a high video streaming transmission rate

    Paid Post Here is everything about how the HESP integration helps CDN and the streaming platform by G-Core Labs ensure a high video streaming transmission rate for e-sports and gaming, efficient scalability for e-learning and telemedicine and high quality and minimum latencies for online streams, media and TV broadcasters.

    HESP (High Efficiency Stream Protocol) is a brand new adaptive video streaming protocol. It allows delivery of content with latencies of up to 2 seconds without compromising video quality and broadcasting stability. Unlike comparable solutions, this protocol requires less bandwidth for streaming, which allows businesses to save a lot of money on delivery of content to a large audience.

    Since HESP is based on HTTP, it is suitable for video transmission over CDNs. G-Core Labs was among the world’s first companies to have embedded this protocol in its CDN. With 120 points of presence across 5 continents and over 6,000 peer-to-peer partners, this allows a service provider to deliver videos to millions of viewers, to any devices, anywhere in the world without compromising even 8K video quality. And all this comes at a minimum streaming cost.

    Continue reading

Biting the hand that feeds IT © 1998–2021