Comment Enterprise performance data is moving off disk drives and onto flash drives - SSDs. A wave of disk array to all-flash array (AFA) migration is starting to wash across data centres as generations of disk drive arrays give way to ones built with NAND flash drives. The tipping point has arrived.
Why now? What has changed so that NAND is now seen as better than disk? One sign of the change is the appearance of second- and third-generation products from storage array vendors covering a wider range of use cases as the technology has matured. The latest all-flash arrays take advantage of technology developments that let them deliver faster data access than disk arrays without customers giving up the data management services they need to run their data centres and safeguard data.
The tipping point flash array change is based on:
- Better-than-disk performance, meaning access speed and endurance
- Price, both raw and effective after data reduction
- Data management services
- Confidence as pioneers have evolved v1 products
- SSDs exceeding HDD capacity
- Non-stop operations
Better than disk performance
Data comes off an SSD up to 500 times faster than disk because there is no need to wait for a read/write head to move across the disk platter surface to the right track location, and then wait a little longer for the disk’s rotation to bring the right track position under the head.
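The mechanical penalty can be sketched with back-of-the-envelope arithmetic: average rotational latency is half a revolution, and seek time adds to it. The figures below are illustrative assumptions for a typical 7,200rpm drive, not any vendor's spec; the up-to-500x figure applies to random I/O workloads where these delays dominate.

```python
# Back-of-the-envelope HDD vs SSD access latency (illustrative figures).
RPM = 7200                                # assumed nearline disk spindle speed
avg_seek_ms = 8.5                         # assumed average seek time
avg_rotational_ms = 0.5 * 60_000 / RPM    # half a revolution, on average
hdd_access_ms = avg_seek_ms + avg_rotational_ms

ssd_access_ms = 0.1                       # ~100 microseconds for a NAND read

print(f"HDD access: {hdd_access_ms:.2f} ms")
print(f"SSD access: {ssd_access_ms:.2f} ms")
print(f"Speed-up:   {hdd_access_ms / ssd_access_ms:.0f}x")
```

Even with these conservative numbers a single random read is over 100 times slower on disk; queueing under load widens the gap further.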
This speed aspect of performance has never been in doubt. What has been in doubt is whether the fresh-out-of-the-box (FOB) performance lasts once blocks of flash cells have to be rewritten, and how long the flash drive itself will last before the cells wear out. Every time a block of cells is written, some portion of that block’s life is used up.
Controller developments have reduced the number of writes a flash drive has to endure - lowering write amplification - and over-provisioning provides a buffer of spare blocks to call into play when existing blocks wear out. Enterprise flash drives now come with five-year warranties and sufficient lifetime terabytes-written (TBW) ratings to make enterprise use perfectly safe.
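How a TBW rating and write amplification combine into a lifetime projection can be sketched as below. All figures - capacity, drive-writes-per-day rating, workload and amplification factor - are illustrative assumptions, not a specific product's spec.

```python
# Sketch: projecting SSD lifetime from a TBW rating and write amplification.
# All numbers are illustrative assumptions, not a real product's datasheet.
capacity_tb = 3.84
dwpd = 1.0                        # assumed drive-writes-per-day rating
warranty_years = 5
tbw_rating = capacity_tb * dwpd * 365 * warranty_years   # TB the drive is rated for

host_writes_tb_per_day = 2.0      # what the application actually writes
write_amplification = 1.5         # extra NAND writes per host write
nand_writes_per_day = host_writes_tb_per_day * write_amplification

lifetime_years = tbw_rating / (nand_writes_per_day * 365)
print(f"TBW rating: {tbw_rating:.0f} TB")
print(f"Projected lifetime: {lifetime_years:.1f} years")
```

The point of the controller work is the write_amplification term: pushing it towards 1.0 directly stretches the projected lifetime.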
Price

Four things have lowered the cost of flash in $/GB terms: lithography shrinks, bits per cell, controller advances, and 3D NAND.
First, smaller cell geometry means more cells per die and more dies per wafer, meaning lower manufacturing cost per die. The second is the addition of bits to a flash cell: the initial SLC (single-level cell, 1 bit/cell) has given way to MLC (multi-level cell, 2 bits/cell), and that has expanded to TLC (triple-level cell, 3 bits/cell).
Traditionally, flash endurance reduces as we move from SLC to MLC and on to TLC, and also reduces as cell lithography shrinks. Controller firmware and software developments to extract better signals from cells and reduce the number of cell writes mean that these unwelcome attributes can be controlled and countered.
Stacking layers of flash one above the other, which is what 3D NAND achieves, means that a flash die can hold even more data. A Samsung 48-layer die holds 256Gbits, with a 64-layer die storing 512Gbits. It is this which has latterly led the way to 15.36TB SSDs, exceeding even nearline 3.5-inch disk drives in capacity.
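The bits-per-cell and layer-count gains multiply together, which is why capacities have jumped so sharply. A minimal sketch, with a hypothetical cell count per layer (real dies differ):

```python
# How bits-per-cell and 3D layer count compound die capacity.
# cells_per_layer is a hypothetical figure, not a real die's geometry.
cells_per_layer = 8 * 2**30           # assume 8 billion cells in one layer

def die_capacity_gbit(bits_per_cell, layers):
    return cells_per_layer * bits_per_cell * layers / 2**30

print(die_capacity_gbit(1, 1))        # planar SLC baseline -> 8.0 Gbit
print(die_capacity_gbit(3, 64))       # 64-layer TLC -> 1536.0 Gbit
```

Moving from planar SLC to 64-layer TLC multiplies capacity 192-fold on the same cell array in this sketch, without shrinking the lithography at all.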
At first flash was only affordable once data reduction, meaning deduplication and compression, had been taken into account. These were of variable efficacy depending upon the data type involved.
As SLC has transitioned to MLC and on to TLC, and 2D planar NAND has evolved to 3D NAND, raw flash capacity prices are dropping below performance disk prices, though not capacity or nearline disk prices. There is a premium to be paid for speed in accessing such vast amounts of storage, after all.
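The effective-versus-raw price point reduces to one division: effective $/GB is the raw price divided by the achieved reduction ratio. The prices and ratios below are illustrative assumptions, not current market figures.

```python
# Effective $/GB after data reduction = raw $/GB / reduction ratio.
# All prices and ratios are illustrative assumptions.
raw_flash_per_gb = 0.40
raw_perf_disk_per_gb = 0.25       # assumed 10K/15K performance disk price
raw_nearline_per_gb = 0.03        # assumed nearline disk price

for ratio in (1.0, 3.0, 5.0):     # dedupe+compression ratios vary by data type
    effective = raw_flash_per_gb / ratio
    print(f"{ratio:.0f}:1 reduction -> ${effective:.2f}/GB effective")
```

With these assumed figures, a 3:1 ratio already undercuts performance disk, while even 5:1 leaves flash above nearline disk - matching the pattern described above.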
Data management services
Drive array vendors are adding data management services to newly acquired or developed flash array technology - EMC with XtremIO, for example - while vendors who have added flash media and management to existing array product lines, such as Fujitsu and HPE, have inherited this capability. Fujitsu stresses that flash array management should also cover disk array management, as this greatly eases flash array adoption.
It also stresses, as do others, that the CPU-intensive deduplication technique should be applied selectively, and only to data that will benefit from it. This contrasts with the approach taken by Pure Storage and others, who try to dedupe every piece of data.
AFAs with snapshots, mirroring, thin provisioning and so forth - the gamut of disk drive array data services - enable storage admins to adopt AFAs easily and smoothly. Data is managed and protected pretty much as before, and data access is far faster.