Right, HPE. You've eaten your hyperconverged Simplivity breakfast. Will it blend?

C'mon... there has to be some aggregation aggravation

Interview How does Hewlett Packard Enterprise view hyperconverged infrastructure (HCI) now that it has bought and is digesting SimpliVity?

In particular, how do its hyperconverged products, with aggregated components, fit in with its Synergy composable infrastructure ideas? Synergy uses disaggregated components which are composed into platforms at run time.

We talked to Paul Miller, VP of marketing in its software-defined and cloud group, to explore HPE's thinking.

Does HPE support the view that HCI could become the on-premises IT mainstream architecture?

Software-defined infrastructure is quickly becoming the de facto standard for on-premises architectures. HCI fits into the software-defined infrastructure category, which spans from HCI to composable infrastructure.

Customers are looking for simplicity, agility, elasticity, security and predictability within their environment. Each customer will select their own path to meet their unique business requirements. For some customers, this will be 100 per cent HCI; others will use a mix of HCI and composable.

Does HPE think HCI must embrace hybrid IT and have a public cloud integration play?

Yes, hybrid IT is a given today. Customers are running multiple clouds that are offering a range of services across public and on-premises infrastructure. Many HCI solutions only extend an HCI silo to a single public cloud. This is not hybrid IT.

Project New Stack from HPE offers an open cloud approach to any cloud and any services in the cloud, married with on-premises HCI and composable infrastructure, thus unifying customers' hybrid experience.

Does HPE agree that HCI is a virtual SAN game but containers are coming?

We see customers wanting to run both virtual machines and containers within their HCI environments. With strong relationships with containerisation leaders like Docker, Kubernetes and Mesos, we are working to optimise containers on HCI and composable.

How does HCI relate to composable infrastructure?

HCI and composable are part of the software-defined infrastructure category. HCI provides simplicity, agility, elasticity, security and predictability for virtualized environments and composable provides the same for bare metal, virtual and containers. Note, composable runs both software-defined storage and traditional SAN storage.

Should HCI be an all-commodity hardware game with no added ASIC/FPGA components?

Customers want performance and simplicity; they do not ask whether the server their VM is running on in Azure has an FPGA in it. At the end of the day, it is about delivering a total solution to the customer at a compelling price point.

Microsoft deploys FPGAs in Azure servers, Intel paid $16.7bn for Altera, and the GPU market has exploded in the last several years. HPE SimpliVity leverages an FPGA to provide unique capabilities like predictable performance that we guarantee with always-on deduplication and compression. This allows HPE to guarantee performance of backup and recovery. No other HCI solution guarantees performance – period.
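For readers unfamiliar with the technique being accelerated here, always-on deduplication boils down to fingerprinting data chunks and storing each unique chunk only once, compressed. The sketch below is purely illustrative (fixed-size chunks, SHA-256 fingerprints, zlib compression are our assumptions, not SimpliVity's actual FPGA-backed implementation):

```python
import hashlib
import zlib

CHUNK_SIZE = 8192  # fixed-size chunking, chosen for simplicity


def dedupe_and_compress(data: bytes, store: dict) -> list:
    """Split data into chunks; keep each unique chunk once, compressed.

    Returns the ordered list of fingerprints ("recipe") needed to
    reconstruct the original data.
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()  # content fingerprint
        if fp not in store:                     # only new chunks hit storage
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its recipe of fingerprints."""
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)
```

The point of doing this inline on dedicated silicon rather than in software is that the hashing and compression cost is paid predictably on every write, which is what makes a performance guarantee plausible.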

How will NVMe drives, NVMe over fabrics and storage-class memory, such as Optane (3D XPoint), affect HCI systems?

These technologies will absolutely affect the HCI market. HCI systems need to evolve as the technology market changes. HPE SimpliVity was designed to be media agnostic from day 0 and has moved to a 100 per cent flash portfolio because that offers the best value to the customer today. As storage-class memory, high-speed interconnects and high-speed media continue to become more mainstream, HPE will continue our technology leadership in the HCI market.


In a way, we can view Synergy as dynamically composing hyperconverged infrastructure at run time and then returning the server, storage and networking hardware components to the resource pool when no longer needed.
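Conceptually, that compose-then-return cycle is just allocation against disaggregated pools. A minimal sketch, with entirely hypothetical class and method names (this is not Synergy's actual API):

```python
class ResourcePool:
    """Disaggregated hardware held as free capacity until composed."""

    def __init__(self, servers: int, storage_tb: int, nics: int):
        self.servers = servers
        self.storage_tb = storage_tb
        self.nics = nics

    def compose(self, servers: int, storage_tb: int, nics: int) -> dict:
        """Carve a logical system out of the free pool, or fail."""
        if (servers > self.servers or storage_tb > self.storage_tb
                or nics > self.nics):
            raise RuntimeError("insufficient free resources")
        self.servers -= servers
        self.storage_tb -= storage_tb
        self.nics -= nics
        return {"servers": servers, "storage_tb": storage_tb, "nics": nics}

    def decompose(self, system: dict) -> None:
        """Return a composed system's hardware to the free pool."""
        self.servers += system["servers"]
        self.storage_tb += system["storage_tb"]
        self.nics += system["nics"]
```

The contrast with HCI is that an HCI node's ratio of compute to storage is fixed at purchase time, whereas here the ratio is chosen per workload at compose time.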

But HCI is partly a response to the difficulties customers have in buying, installing, integrating, operating and managing server, storage, networking and system software components of multifarious separate application stacks. Buy one SKU and scale it out is the simplistic response to this.

If customers can get dynamically composable infrastructure stacks using generic commodity components then, in theory, the need for HCI appliances could go away. HPE has high hopes for Synergy accomplishing this. ®
