Hyper-active Pure goes bananas with new software
Also demo'ing end-to-end NVMe over fabrics to Cisco UCS servers
Pure Storage is launching a mega-slew of software, with some new hardware, as well as demo'ing an end-to-end NVMe over fabrics FlashStack at its annual Pure Accelerate conference in San Francisco.
There is a near hyper-converged feature for its FlashArray, the start of a wide-ranging public cloud integration story, fast object flash storage, the addition of file services, metro clustering, and the introduction of what it calls self-driving storage, using machine learning. It's wrapping this groaning buffet table of features, its largest software release yet, in three parcels:
- New or enhanced tier 1 storage, meaning FlashArray facilities
- Big Data moving to big intelligence - features for FlashBlade
- Self-driving storage - machine learning-driven features for admin staff
Pure boasts it has 25 new software features; here are the main ones, with some hardware bits and pieces too.
Tier 1 storage and FlashArray software
Purity FA v5 for FlashArray is characterised as combining traditional legacy tier 1 array reliability features with all-flash array features such as dedupe, compression and NVMe drive and fabric access speed. It's getting an ActiveCluster feature to link two data centre sites up to 150 miles apart in an active-active stretch cluster with transparent failover and zero recovery point and recovery time objectives (RPO and RTO).
Such clusters are needed, it says, for mission-critical enterprise data centres running software infrastructure such as Oracle databases, SAP, VMware, Hyper-V and SQL Server, and applications that must not fail.
The need to have a third entity watching the link between the two sites, and declaring which site becomes the main one should the link fail, is fulfilled by the Pure1 Cloud Mediator, which runs in a Pure data centre, so no extra hardware is needed. The ActiveCluster feature is included at no charge in Purity FA v5, with Pure pleased to tell us that Dell EMC's SRDF and NetApp's MetroCluster features can cost hundreds of thousands of dollars to implement.
It can be used to provide rack-level active clustering inside a data centre as well as linking separate data centres. A third data centre can be added using an asynchronous link, and that can be located anywhere on the planet.
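The mediator's job is essentially that of a tie-breaking witness: when the inter-site link drops, each site asks the mediator for permission to keep serving I/O, and only one wins, avoiding a split-brain. Here's a toy sketch of that race; the class and method names are our invention, not how the Pure1 Cloud Mediator is actually implemented:

```python
# Toy model of a tie-breaking mediator for a two-site active-active
# cluster. Names and logic are illustrative, not Pure's implementation.

class Mediator:
    """Grants survivor status to the first site that asks after a link failure."""
    def __init__(self):
        self.winner = None

    def request_survivorship(self, site: str) -> bool:
        # The first site to reach the mediator keeps serving I/O;
        # the other pauses until the inter-site link is restored.
        if self.winner is None:
            self.winner = site
        return self.winner == site


mediator = Mediator()
# The link between site A and site B fails; both race to the mediator.
a_continues = mediator.request_survivorship("site-a")
b_continues = mediator.request_survivorship("site-b")
print(a_continues, b_continues)  # site-a wins, site-b pauses
```

Because the mediator sits outside both sites, a failure of either site (or the link between them) never leaves both halves writing independently.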
Public cloudy weather coming
Pure is going big on snapshots, with an initial full snapshot followed by incremental forever snaps.
The Snap feature can snapshot local storage to a FlashArray and also to FlashBlade. The snapshots can also be moved to NFS targets, such as Data Domain arrays.
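Incremental-forever means the first snapshot captures every block and each later snap records only the blocks that changed since the previous one; restoring means replaying the chain. A toy illustration of the idea (not Pure's on-array format):

```python
# Toy incremental-forever snapshot chain: the first snap is a full copy,
# later snaps store only changed blocks. Illustrative only.

def take_snapshot(volume: dict, chain: list) -> dict:
    if not chain:                       # first snap: full copy
        snap = dict(volume)
    else:                               # later snaps: changed blocks only
        latest = restore(chain)
        snap = {blk: data for blk, data in volume.items()
                if latest.get(blk) != data}
    chain.append(snap)
    return snap

def restore(chain: list) -> dict:
    """Replay the chain in order to rebuild the latest volume image."""
    image = {}
    for snap in chain:
        image.update(snap)
    return image

chain = []
take_snapshot({"b0": "A", "b1": "B"}, chain)   # full snapshot
take_snapshot({"b0": "A", "b1": "C"}, chain)   # only block b1 changed
print(len(chain[1]))  # 1 block in the incremental snap
```

The payoff is that after the first full copy, each snap costs only the changed data, which is what makes shipping snapshots off-array practical.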
CloudSnap moves a Pure snapshot to Amazon Web Services using the S3 protocol. A portable snapshot format is used, and CloudSnaps can be rehydrated into native Amazon formats such as EBS, S3 or Glacier.
We could envisage snapshots of VMs from a data centre server going off to Amazon and then being instantiated to run in AWS, providing cloud-based backup, restore, migration and disaster recovery features.
SNAPDIFF is an openly available API for third parties to use, and Pure has a range of partners using it to send their data to its arrays, such as Actifio, Catalogic, Cohesity, Commvault, Rubrik and Veeam. We can imagine that could help data move onto its FlashBlade kit.
These snapshot features are included at no charge in the Purity FA v5 software and no special cloud gateway devices are needed.
VVOLs and QoS
Purity FA v5.0 also gets:
- Always-on quality of service (QoS)
- QoS performance classes (bronze, silver and gold) to sort out the noisy-neighbour problem
- Policy-driven QoS for multi-tenancy customers, such as service providers
- VVOL support with HA and stateless VASA provider hosted on array
- Instant VMFS <-> VVOL migration
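Class-based QoS of this sort typically means mapping each volume to a per-class performance ceiling, so one noisy neighbour can't starve everyone else. A hypothetical sketch, with the class names from the announcement but limits we've invented:

```python
# Toy per-class IOPS throttle illustrating bronze/silver/gold QoS.
# The ceilings are invented for illustration; Pure's policies will differ.

IOPS_LIMITS = {"bronze": 5_000, "silver": 20_000, "gold": 100_000}

def admit_io(qos_class: str, iops_this_second: int) -> bool:
    """Admit an I/O only while the volume is under its class ceiling."""
    return iops_this_second < IOPS_LIMITS[qos_class]

# A bronze noisy neighbour gets throttled; a gold volume does not.
print(admit_io("bronze", 6_000))   # False: over the bronze ceiling
print(admit_io("gold", 6_000))     # True: well under the gold ceiling
```

Policy-driven QoS for service providers is then just attaching one of these classes to each tenant's volumes rather than tuning limits by hand.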
Pure claims its VVOL implementation is the simplest to adopt, use and manage yet.
Quasi-hyper-convergence and file services
Developers and the like, Pure stresses, can run virtual machines and containers directly in FlashArray with the v5 software. They run in a quasi-sandbox with dedicated CPU and RAM resources, providing security and performance isolation.
Purity FA v5 app-running resource
The Run feature can be used to run analytics apps closer to the data in the array, or database and remote office appliances. Pure says developers could build their own custom protocol to achieve this. We should not view this in any way as a full hyper-converged infrastructure appliance (HCIA) capability.
The company is also introducing Windows file services in Purity v5, using the Run feature. It supports SMB 2 and 3 and NFS v3 and 4. It says it's good for use cases needing file access in a SAN situation, such as VDI user files and file sharing.
Purity FA v5 file services
There are plug-ins to Microsoft's management stack and customers bring their own Microsoft licence.
Big Data intelligence with FlashBlade features
Pure says FlashBlade, its unstructured all-flash data store, is ideal for AI, Big Data, the Internet of Things and associated edge computing applications. It claims these need ever more capacity and bandwidth to store fast-access data.
Purity FB v2 software for FlashBlade extends its scalability out to 75 blades, five times more than before, with an 8PB namespace. That would involve 5 x 4U chassis, 20U in total.
The system now features up to 8.5 million IOPS, 75GB/sec read bandwidth and 25GB/sec write bandwidth.
The v2 software supports SMB file access, LDAP, HTTP, IPv6, snapshots and a network lock manager. As well as SMB, FlashBlade also gets S3-based object storage support, a natural extension as FlashBlade is basically a large key-value-based object store internally anyway.
Pure says first-byte access is 10 times faster than on AWS S3. Basically, what we have here is an on-premises all-flash object store; of course it will be blindingly fast compared to disk-based object storage systems, but with flash-based pricing.
Machine learning-driven storage admin
For storage admins, Pure1 META is a machine learning-driven resource for managing a fleet of Pure Storage arrays. Pure says it's building a real-time global sensor network and currently records a trillion data points a day from the thousands of arrays its customers use. The idea is to match workload types with data point patterns, using machine learning, and so build up workload profiles embracing things like read and write IO size, bandwidth, IOPS, dedupe and compression rates, total capacity used and more.
If customer A is running workloads with known patterns, and customer B starts using the same workloads, then customer B's use of array resources can be predicted, array sizing (performance, capacity, bandwidth) worked out more intelligently, and the array's resources managed better.
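The matching step described here amounts to comparing a new workload's metric vector against a library of known profiles and picking the nearest one. A simplified sketch, with profile names and numbers entirely invented:

```python
import math

# Toy workload classifier: match observed array metrics to the nearest
# known workload profile. Profiles and values are invented examples.

PROFILES = {
    # (avg IO size in KB, IOPS in thousands, dedupe ratio)
    "oracle-oltp": (8, 150, 2.5),
    "vdi":         (4, 60, 8.0),
    "backup":      (256, 5, 3.0),
}

def classify(metrics: tuple) -> str:
    """Return the profile whose metric vector is nearest (Euclidean distance)."""
    return min(PROFILES, key=lambda name: math.dist(PROFILES[name], metrics))

print(classify((6, 55, 7.5)))  # nearest profile is "vdi"
```

Real systems would use many more features (and normalise them), but the principle of sizing by similarity to known workloads is the same.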
Pure is giving customers a global dashboard showing aggregated metrics on their Pure array estate:
Pure1 Global Dashboard slide
One possibility that comes to mind here is that workloads could be moved between arrays for better load balancing. Pure is also talking about real-time analytics being used to prevent issues affecting array operations. Known issues across its global estate will be identified with so-called fingerprints. Then individual array data can be searched in real-time for "fingerprint" presence.
If a match is found, the customer admin is notified, support services are notified and a ticket opened to start fixing the problem.
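The fingerprint search described above is essentially pattern-matching each array's telemetry against a library of known-issue signatures. A hypothetical sketch, with event and issue names we've made up:

```python
# Toy known-issue fingerprint scan: each fingerprint is the set of
# telemetry events that together indicate a known problem.
# Event and issue names are invented for illustration.

FINGERPRINTS = {
    "slow-drive-firmware": {"latency_spike", "drive_resets"},
    "fabric-congestion":   {"latency_spike", "rx_pause_frames"},
}

def scan(telemetry: set) -> list:
    """Return the name of every known issue whose events all appear."""
    return [name for name, events in FINGERPRINTS.items()
            if events <= telemetry]          # subset test

matches = scan({"latency_spike", "drive_resets", "temp_ok"})
print(matches)  # ['slow-drive-firmware']
```

A match is what would trigger the notification and support ticket; the value comes from the fingerprints being learned once, across the global fleet, and then checked everywhere.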
It says its META technology provides predictive intelligence for array issue detection and prevention, workload-based performance sizing and provisioning, and workload interaction intelligence. The marketing wrapper here is that storage management at customer estate level is becoming too complex for people to look after, and the storage has to become more like self-driving cars; a neat idea.
The net result should be more cost-effective arrays and array operation, and lower admin costs.
Pure has announced a new expansion shelf for FlashArray, the DirectFlash shelf, with native NVMeF support, meaning it's accessed using NVMe over fabrics across 50Gbit/s RoCE v2 Ethernet.
Pure DirectFlash shelf slide
The shelf can hold up to 512TB raw with a maximum of 28 DirectFlash modules.
Pure is also demo'ing end-to-end NVMe over fabrics to Cisco UCS servers using a 40Gbit/s RoCE v2 Ethernet link. This is pretty much the NVMeF version of FlashStack we envisaged in April.
It's only a demo, but the direction here is pretty clear.
There is also an intermediate-size blade for FlashBlade, with a 17TB capacity, fitting between the existing 8TB and 52TB blades.
Pure has provided a slide listing 27 items and their availability from this groaning buffet table of announcements. Good idea: listing them individually would be tedious.
Pure is busily introducing me-too software features, like the file, object and QoS ones, and doing so in a way that simplifies adoption and use, as well as extending its software to embrace the public cloud; did anyone mention a data fabric? In so doing, it's steadily eliminating the competitive knocks its rivals have been able to use against it.
- NetApp SolidFire-style noisy-neighbour QoS? Yep, got that.
- NetApp Data Fabric? Yep, got that.
- Dell EMC SRDF? Yep, got that.
- Nimble sensor-driven array management? Yep, got that.
- NAS features? Yep, got that.
- Object storage? Yep, got that.
- NVMe over fabrics? Yep, getting that.
The Pure FA v5 Run feature extends FlashArray towards the HCIA area. It is easy to imagine this being developed so that FlashArray could be morphed into an actual HCIA system, much as NetApp is doing with its SolidFire array.
Could Pure go in full-scale pursuit of Nutanix? It's not that inconceivable is it, Pure being such an ambitious company?
Overall, Pure is reaching out towards becoming a storage platform company. It has announced a blistering extended set of features that should both keep its customers happy and give its channel plenty of reasons to knock on new customers' doors and push their win rates higher. ®