HPE debuts storage-as-a-service platform based on a new storage array: Alletra

The race to keep up with the AWSes and Microsofts of this world.

HPE is trying to up its GreenLake public-cloud-like game by launching a storage-as-a-service (STaaS) platform with data service software abstraction layers operating on new Alletra storage arrays.

The set-up has a Data Services Cloud Console (DSCC) with API access to the Alletra arrays, so they can be managed in the same way a customer would look after AWS or Azure storage. This cannot be done with HPE's existing storage kit available via GreenLake, which is managed directly by storage admin staff.

The DSCC provides access to Cloud Data Services, a suite of software subscription services which store, protect and move block data on the all-NVMe flash Alletra systems, and replicate data to Cloud Volumes located on or near public cloud regional data centres.

Tom Black, HPE Storage SVP and GM, claimed: "HPE is changing the storage game by bringing a full cloud operational model to our customers' on-premises environments. Bringing the cloud operational model to where data lives accelerates digital transformation, streamlines data management, and will help our customers innovate faster than ever before."  

Alletra diagram: how it all works

What is Alletra?

HPE's Alletra is a re-imagining of the Primera and Nimble arrays – as the Alletra 9000 and 6000 product families respectively. They provide on-premises block-level access to all-NVMe flash storage in a 4U chassis, with management, capacity, and data services accessed through the online console as if users were consuming public cloud storage capacity and data services.

The Alletra 9000 is for mission-critical use while the Alletra 6000 is a mid-range product. The 9000 has a code-base derived from the Primera arrays while the 6000’s code-base comes from the Nimble arrays.

The 9000, with active-active clustering, has a 100 per cent availability guarantee and features automatic failover across active sites. It can deliver more than 2 million IOPS and support up to 96 SAP HANA nodes with its multi-node, parallel system design.

The HPE 9000 image is from an Alletra brochure, with the box actually labelled Primera A670. (The A670 is a four-controller node, 16-SSD slot chassis which can grow with expansion enclosures.) An Alletra 6000 image in the brochure uses the same Primera A670 shot

The 6000 has a six-nines (99.9999 per cent) availability level with app-aware backup and recovery, both on-premises and in the cloud.

Each of the two Alletra units comes as a sub-portfolio of systems, such as an Alletra 9080 or 6030, with varying capacities and performance. All are managed by the DSCC. Controller software upgrades are delivered from the HPE cloud and are invisible to users.

Deduplication increases the array's effective capacity and is backed by what HPE has termed its "Store More" pre-sales guarantee (a pledge to store more "data per raw terabyte of storage" than any other vendor's all-flash array). Users can set data reduction on or off at a volume level.
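
The arithmetic behind such guarantees is simple. Here is an illustrative worked example only: the 4:1 ratio is an assumed figure for demonstration, not an HPE number.

```python
# Illustrative only: how a data-reduction ratio turns raw capacity into
# effective capacity. The 4:1 ratio below is an assumed example figure.
raw_tib = 100            # raw flash capacity in TiB
reduction_ratio = 4.0    # assumed combined dedupe + compression ratio (4:1)

effective_tib = raw_tib * reduction_ratio
print(effective_tib)     # → 400.0
```

The "Store More" pledge, in these terms, is a claim that Alletra's reduction ratio beats rivals' on like-for-like data.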

HPE says Alletra arrays can support any workload, which The Reg took to mean applications running in bare metal servers, virtualized servers and containerised servers. In other words, Alletra can provision block storage to all three types of workload.

Business continuity is supported by pairing Alletra arrays across metropolitan distances and using synchronous replication to copy data. Two arrays then represent a single highly available repository to hosts.

Developers connect to Alletra via consistent APIs, whether on-premises or in the cloud. They can dynamically provision persistent volumes, expand or clone them, and take snapshots of data for reuse.

HPE claims Alletra offers seamless, simple data mobility across clouds, using replication to HPE Cloud Volumes, and says it operates in hybrid clouds by design. Data can be flexibly restored from Cloud Volumes to on-premises Alletra arrays with no egress fees.

Alletra array operations are monitored by HPE's InfoSight, whose predictive analytics aim to detect and fix problems without the need for support calls. InfoSight uses AI and machine learning and is integrated with the DSCC so users can track Alletra's status. HPE also promised support calls would go to "Level 3" experts who have access to InfoSight telemetry for a customer's arrays.

Data Services Cloud Console

This is based on the SaaS, cloud-native technology that underpins Aruba Central, and exposes an API usable by applications and by partner-led and custom-built data services. It can manage fleets of Alletra deployments, potentially thousands of systems across geographies.

DSCC screen grab from demo.

New hardware devices are connected to the network, powered up, and then auto-discovered and activated. Configuration parameters can be pre-defined and automatically deployed with no need for specialist admin involvement.

Cloud Data Services

A Data Ops Manager manages the data infrastructure from any device and provides self-service, on-demand, intent-based provisioning of services. HPE said it is "AI-driven", application-centric, and meant to optimise service-level objectives.

Infrastructure services can include deployment, configuration, management of individual devices and whole fleets, software upgrades that run as background processes, and resource-efficiency optimisation using machine learning.

Data services can include provisioning capacity (volumes), data access, protection, search and movement. Volume creation is a four-step process: select workload (eg SQL Server), define the number of volumes, set the volume size and select a host group.

The provisioning is called "intent-based" because capacity is provisioned in such a way as to optimise SLAs for the particular workload; it is neither manual nor LUN-centric.

We have yet to see a catalogue of the available cloud data services.

Cloud Volumes

The Cloud Volumes scheme has HPE arrays located physically close to public cloud regional centres, and hosted by HPE, so that public cloud-based apps can quickly access data on them. Data is replicated between the on-premises Alletra and near-cloud-located arrays. As far as a public cloud compute instance is concerned, the Cloud Volume is presented as a cloud block volume, an EBS volume in AWS for example.

The data isn't actually stored in the public cloud which is why there are no egress fees.

Alletra has container ecosystem integration with the HPE Container Storage Interface (CSI) Driver for Kubernetes. Developers can use the cloud for development and testing with data stored in Cloud Volumes.

The Data Services Cloud Console, cloud data services, and HPE Alletra will be available to order globally, direct and through channel partners, this month. These products and services are available through GreenLake subscription or through a perpetual licence model.

HPE did not provide Alletra data sheets or pricing but did say there is flat support pricing for the life of Alletra.


HPE has a vision to become an edge-to-cloud platform-as-a-service company. That means its servers have got to become part of the everything-as-a-service game too. Logically, the DSCC or an equivalent entity will need to provision servers as well as storage.

Existing HPE storage customers will need help in this transition to an Alletra-led future. Primera and 3PAR customers should have a migration facility available in a few months.

Nothing has been said by HPE about a 3PAR upgrade programme, nor about how the XP8, OEM'd from Hitachi, and HPE's MSA arrays fit into the scheme.

We understand the block storage focus will be expanded to include file level access, with HPE partners being given API access so they can integrate their file-based offerings with the DSCC. We envisage this could apply to Qumulo and WekaIO, for example.

Logically object storage access will also be embraced by HPE’s storage-as-a-service unified operations design, and we think partners such as Cloudian and Scality will be able to join in as well - eventually.

There is also, El Reg storage desk believes, a missing capacity play here, and disk storage will surely have a role to play. ®
