HP Labs chief Prith Banerjee departs

'Presided over demise of a once-great facility' - mole


Prith Banerjee, the head of HP Laboratories, has resigned for undisclosed reasons and is joining Zurich-based power and automation tech outfit ABB.

Banerjee, SVP of research and the HP Labs director, is officially leaving on 15 April. HP CEO Meg Whitman issued a letter about his departure yesterday, which says: "He will be assuming a role outside the company, which will be announced at a later date."

ABB announced Banerjee's appointment today.

HP Fellow Chandrakant Patel, director of the Sustainable Ecosystems Research Group, will step in as Banerjee's temporary replacement while a permanent successor is being sought.

Whitman said in her letter: "Chandrakant ... will continue to drive Labs forward during this transition, and I couldn’t be more pleased that he has agreed to assume this interim role."

Prith Banerjee

The CEO has plenty of praise for the former head: "Prith has been a strong contributor to HP’s product innovation and has substantially increased the visibility of Labs within the business. He’s led breakthrough research, including data de-duplication, flexible displays, the memristor and nano-technology sensors (CeNSE)." She mentions his passion for innovation too: it sounds like HP is sad to see him leave.

In contrast, we are told by someone familiar with the situation that Banerjee "presided over the virtual demise of a once very solid research facility. Word on the shop floor was that he was brought in (to replace the well-respected Dick Lampman) because ... he posed no threat to the then-CTO."

The CTO our source referred to was Shane Robison, who left in November last year after Whitman became CEO, following the ruthless cost-cutting and efficiency-led rule of her predecessor Mark Hurd.

Banerjee reduced the number of research projects at the Labs from around 150 to 30 or fewer in 2008 as part of Hurd's efficiency drive. Hurd had said that he wanted the Labs to produce technology that could be productised faster and more reliably. Doing research for the scientific interest alone became a no-no.

We asked HP if it had any comment on this; none was offered. ®

Biting the hand that feeds IT © 1998–2021