HPE Storage crows: All the array-slingers NVMe for my SCM

And adds containers to halfway hybrid house Cloud Volumes

HPE has made its storage arrays faster with Optane caching, while adding container support to its Nimble-based Cloud Volumes service to bridge on-premises kit with the AWS and Azure public clouds.

It is also beefing up InfoSight 3PAR and Nimble array management with – and many of you are already mouthing the words – machine learning.

AI and ML in InfoSight's sights

InfoSight is the Nimble array predictive analytics and management facility which HPE inherited with its $1.2bn acquisition of Nimble Storage in March 2017. It has since been extended to cover 3PAR arrays and is growing into a data centre management facility, covering servers, networking and, maybe, applications.

InfoSight on Nimble now includes the virtualization layer, with ML-driven guidance as to how best to optimise customers' environments and where to place their data. There is also a resource planner that helps optimise workload placement based on available resources, simplifying capacity planning and lowering the risk of disruption when deploying new workloads.

The 3PAR InfoSight facility now uses machine learning to self-diagnose performance bottlenecks. Unlike InfoSight generally, which is a cloud-delivered service, this is deployed on-premises, extending InfoSight to sites with restricted access to the cloud.

Cloud Volumes

HPE said its Cloud Volumes delivers an enterprise-grade, pay-as-you-go multi-cloud storage service with hybrid and multi-cloud mobility.

Last year, at the time of the Nimble acquisition, Nimble's Cloud Volumes stored block data for use by Amazon or Azure compute instances, and the block data was in the Nimble array, not in AWS or Azure. AWS or Azure cloud workloads mount an iSCSI LUN provided by the Nimble array through its NimbleOS. The array is located near the AWS or Azure instance, which treats it effectively as an in-cloud storage array.

A Cloud Volume looks like an EBS volume to an AWS compute instance, and an Azure-focused Cloud Volume similarly appears as native block storage to Azure compute instances.

This array is not provided by HPE through its GreenLake pay-as-you-use service but is a hosted service, possibly in the same co-location facility used by the public cloud provider. Thus it is local to that provider's region.

Data can be replicated between an on-premises Nimble array and a (remote) Cloud Volume array without incurring data movement costs from the public cloud provider. This helps customers migrate from on-premises IT to a hybrid cloud environment – a halfway house array.

HPE and Nimble are pitching this against AWS and Azure block-level cloud storage, and, as you'd imagine, are pushing those product features which they believe play well against the Amazon and Microsoft products – data durability, for one. HPE claimed Cloud Volumes are millions of times more durable than the 0.1-0.2 per cent annual failure rate published by Amazon Web Services for its EBS native cloud block storage.
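As a back-of-envelope check on what a claim like that means (a sketch, not HPE's or AWS's methodology): an annual failure rate can be restated as annual durability, and a million-fold AFR reduction adds roughly six "nines".

```python
# Sketch: restating an annual failure rate (AFR) as annual durability.
# Figures from the article: AWS publishes a 0.1-0.2 per cent AFR for EBS;
# "millions of times more durable" implies an AFR smaller by a factor of ~10^6.
def annual_durability(afr_percent: float) -> float:
    """Probability a volume survives one year, given its AFR in per cent."""
    return 1.0 - afr_percent / 100.0

ebs_afr = 0.2                           # per cent per year (AWS upper bound)
print(annual_durability(ebs_afr))       # 0.998 -> roughly "two nines"

improved_afr = ebs_afr / 1_000_000      # a million-fold AFR improvement
print(annual_durability(improved_afr))  # ~0.999999998 -> roughly "eight nines"
```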

HPE is expanding Cloud Volumes availability into the UK and Ireland in 2019 to serve UK and European customers requiring local cloud data access. It is adding support for the Docker and Kubernetes container platforms for DevOps and test/dev of cloud-native apps and hybrid cloud workloads, with a technical preview open for registration.

HPE Cloud Volumes has also completed SOC 2 Type 1 certification for customers with compliance controls, and offers HIPAA compliance for healthcare customers.

3PAR and Nimble getting SCM support

HPE is adding NVMe storage-class memory (SCM) drives – 750GB Intel Optane drives – to its 3PAR and Nimble arrays. These use 3D XPoint media, which is faster than flash, with 9-10μs latency versus flash's 90-100μs or so, and has higher endurance.

SCM has near-DRAM speed while being cheaper than DRAM, and enables greater capacity than DRAM's 12 x 128GB DIMMs-per-Xeon SP CPU limit. HPE brags about offering the "first" enterprise storage platform with SCM and NVMe; Pure and IBM have already added NVMe flash drive support, but not SCM support.
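For scale, the figures cited above work out as follows (a quick sketch using the article's numbers; real latencies vary by product and workload):

```python
# Back-of-envelope figures from the article: Xeon SP's DRAM ceiling
# (12 DIMM slots x 128GB DIMMs) and SCM's latency advantage over flash.
dimm_slots, dimm_gb = 12, 128
dram_ceiling_tb = dimm_slots * dimm_gb / 1024
print(f"DRAM ceiling per Xeon SP CPU: {dram_ceiling_tb} TB")        # 1.5 TB

scm_us, flash_us = 10, 100    # ~upper-bound latencies cited (microseconds)
print(f"SCM latency advantage over flash: ~{flash_us // scm_us}x")  # ~10x
```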

The company calls this Memory-Driven Flash and claimed it cuts latency by up to 2x and is up to 50 per cent faster than all-flash arrays with NVMe solid state drives, such as Dell EMC's PowerMax. It said it has optimised SCM to enable real-time processing for latency-sensitive applications and mixed workloads at scale.

The product extends memory semantics to large, persistent memory pools through storage-class memory over NVMe.

3PAR and Nimble Storage arrays are capable of delivering sub-300μs latency for near 100 per cent of all I/Os, averaging sub-200μs, it said.
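A latency claim of this shape is typically verified by recording per-I/O latencies and reporting the mean alongside a high percentile. A minimal sketch with synthetic numbers (illustrative only, not HPE's data):

```python
import random

# Synthetic per-I/O latencies in microseconds - illustrative only,
# roughly shaped to sit under a sub-300us / sub-200us-average claim.
random.seed(42)
latencies = [random.gauss(150, 30) for _ in range(10_000)]

latencies.sort()
mean_us = sum(latencies) / len(latencies)
p999_us = latencies[int(0.999 * len(latencies)) - 1]  # 99.9th percentile

print(f"mean: {mean_us:.0f}us  p99.9: {p999_us:.0f}us")
```

With these synthetic numbers the mean sits near 150μs and the 99.9th percentile near 240μs, which is the kind of profile the claim describes.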

Memory-Driven Flash marks an architectural shift to memory-driven storage – possibly an extended conclusion drawn from adding Optane caching to 3PAR and Nimble arrays, which don't yet support NVMe-over-Fabrics access from servers but rely on slower Fibre Channel or iSCSI.

SCM support for 3PAR will be available in December 2018, and is expected in 2019 for Nimble; both are non-disruptive upgrades.

We expect HPE to add SCM caching to its ProLiant servers in the not-too-distant future – the caching being general, metadata-specific, or for latency-sensitive and relatively small storage volumes.

Bits and pieces

Nimble Storage has added Peer Persistence, multi-site synchronous replication with automatic failover.

The Apollo 4200 server has had a Gen10 makeover, we were told, upgrading to Xeon SP processors with up to 24 cores each and DDR4 2666MT/s HPE SmartMemory for up to 66 per cent more memory bandwidth than before. It now supports up to six NVMe-connected 2.5-inch SSDs. Smart Array Gen10 Controllers provide up to 65 per cent better random and up to 25 per cent better sequential performance, and network controllers up to 100Gbit/s. It will be available globally on January 7, 2019.

HPE’s GreenLake offering provides storage delivered as a service (pay-per-use on premises), with 500PB delivered. It's expanding GreenLake for backup to include Veeam Software backup as part of the consumption-based model. ®
