Right, we've bagged IBM’s x86 server biz... now what? – Lenovo

Firm mulls storage market, penetrates EMC VSPEX


Now that Lenovo has IBM’s x86 server business, it’s free to strike partnerships with storage suppliers. Does it view storage as a market into which it can expand?

Although not a storage vendor in the mainstream array sense, Lenovo does have a number of storage product lines:

  • PX2-300D and PX4-400D NVR video surveillance arrays – basically RAID-protected filers with Atom processors and LifeLine software providing small office NAS functionality
  • ThinkServer SA120 direct-attached storage – 12-bay, 3.5-inch-drive rack enclosures with four optional 2.5-inch SSD bays and up to two controllers
  • Lenovo-EMC desktop and network (rack enclosure/tower) storage – a legacy of the 2013 EMC-Lenovo deal
  • Lenovo VNX arrays

Lenovo is following on from IBM’s participation in EMC’s converged/integrated server/storage/networking VSPEX reference architecture scheme with two EMC-validated systems:

  • EMC VSPEX by Flex System for Private Cloud, supporting 200 to 1,000 virtual machines
  • EMC VSPEX by Flex System for VDI

They "feature Lenovo ThinkServer systems with the latest Intel processors, RackSwitch networking, EMC storage, and VMware virtualisation".

Lenovo is also involved in IBM’s Storwize V7000 arrays with, for example, the Lenovo 6195 Storwize V7000.

The questions El Reg is asking itself are:

  • Will Lenovo move into the networked storage array business?
  • Will Lenovo do VSPEX-like deals with other suppliers, such as NetApp?
  • Will Lenovo produce its own integrated server/storage/networking products?
  • Will Lenovo partner with hyperconverged systems suppliers by running their software on its servers?

That fourth question was prompted by the Maxta and SimpliVity deals with Cisco, which see their software running on UCS servers.

There may well be an agreement between IBM and Lenovo barring Lenovo from competing with IBM’s storage arrays for a certain period of time.

If there isn’t, or when it runs out, just how will server-shipping Lenovo view the storage market? Watch this space. ®
