Comment Is the flash storage business a hype-filled wonderland or is flash-based technology making real inroads into IT?
Flash arrays provide much faster access to data because of SSDs' lower latency compared with disk drives, but they are more expensive to buy. The extra cost can be justified by a potentially lower total cost of ownership over five years, taking into account power, cooling and rack space.
You could also look at cost per storage transaction, if that is factored into your budget calculations.
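The five-year TCO argument can be sketched as a back-of-the-envelope calculation. All the figures below are made-up illustrative assumptions, not vendor pricing; the point is only that running costs can offset a higher purchase price:

```python
# Illustrative five-year TCO comparison between an all-flash and an
# all-disk array. Every number here is an assumption for illustration.
def five_year_tco(purchase, annual_power_kwh, kwh_cost,
                  rack_units, ru_annual_cost, years=5):
    """Purchase price plus power/cooling and rack space over the period."""
    energy = annual_power_kwh * kwh_cost * years
    space = rack_units * ru_annual_cost * years
    return purchase + energy + space

# Hypothetical flash array: dearer to buy, cheaper to run and house.
flash = five_year_tco(purchase=140_000, annual_power_kwh=4_000,
                      kwh_cost=0.15, rack_units=3, ru_annual_cost=500)
# Hypothetical disk array: cheaper to buy, but more power and rack space.
disk = five_year_tco(purchase=120_000, annual_power_kwh=12_000,
                     kwh_cost=0.15, rack_units=12, ru_annual_cost=500)
print(f"flash: ${flash:,.0f}  disk: ${disk:,.0f}")
```

With these assumed figures the flash array comes out cheaper over five years despite the higher up-front price; real results depend entirely on local power tariffs, rack costs and workload.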
New all-flash arrays from startups don't have the same breadth and maturity of data management services that traditional-style arrays have. When SSDs are added to these existing array architectures, and the controller software is updated to use them well, such hybrid arrays can provide an attractive middle way between slower all-disk arrays and costlier, probably faster, all-flash arrays that lack data services.
Modern array management of SSDs, and newer SSDs themselves, ensure longer endurance than earlier devices by minimising the write rate to individual blocks on the flash chips. Flash arrays can provide greater IO rates than disk drive arrays, and do so in a smaller space, with no need for multiple spindles to increase the overall IO rate.
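The endurance point can be made concrete with some rough arithmetic. The figures below are illustrative assumptions, not a spec for any particular drive, and they assume the controller's wear-levelling spreads writes evenly across the chips:

```python
# Rough SSD working-life estimate (illustrative figures only).
def endurance_years(capacity_tb, pe_cycles, write_amplification,
                    daily_writes_tb):
    """Years until rated program/erase cycles are exhausted, assuming
    even wear-levelling across all blocks."""
    # Total host data writable over the drive's life, in TB.
    total_writes_tb = capacity_tb * pe_cycles / write_amplification
    return total_writes_tb / daily_writes_tb / 365

# Assumed: 1.6TB drive, 3,000 P/E cycles (MLC-class), write
# amplification of 2, and a workload writing 2TB per day.
years = endurance_years(1.6, 3000, 2.0, 2.0)
print(f"~{years:.1f} years")
```

Lowering the write rate seen by each block, whether through better controller wear-levelling or more over-provisioning, directly stretches that lifetime, which is why array-level write management matters so much.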
IDC Research Director Eric Burgener said: "Vendors are aggressively flash-optimising their offerings to provide improved performance, longer endurance, higher reliability, and a lower effective cost per gigabyte. The most successful vendors will be those that can make a smooth transition from the traditional, dedicated application model to mixed workload consolidation.”
Hybrid arrays, featuring flash for performance and disk for capacity, have a wider starting workload match than flash arrays focused on sheer performance, especially where the hybrids have a rich set of data management services.
A rule of thumb approach might be to add flash to traditional arrays if they need a performance uplift added to their existing storage capacity, but choose a dedicated all-flash array if you need to prioritise faster data access more than storage capacity and operation under an existing data management services umbrella.
Developments to cut flash cost
The main disadvantage of flash, its cost, is being addressed by chip manufacturers and array suppliers. MLC (2bits/cell) flash is now mainstream, and SATA and SAS SSDs using such chips are commonplace. The arrays housing such SSDs typically add inline data deduplication and compression to increase their effective capacity.
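The effect of inline data reduction on cost can be sketched simply. The ratio and prices below are illustrative assumptions; real reduction ratios vary widely by workload:

```python
# Effective $/GB after inline deduplication and compression
# (illustrative figures). A 4:1 combined reduction ratio quadruples
# usable capacity and divides effective cost per gigabyte by four.
def effective_cost_per_gb(raw_tb, price_usd, reduction_ratio):
    effective_gb = raw_tb * 1000 * reduction_ratio
    return price_usd / effective_gb

# Assumed: 10TB raw flash at $40,000, i.e. $4.00/GB raw,
# with a 4:1 data reduction ratio.
flash_cost = effective_cost_per_gb(raw_tb=10, price_usd=40_000,
                                   reduction_ratio=4.0)
print(f"${flash_cost:.2f}/GB effective")
```

This is the mechanism behind claims that flash's effective $/GB is closing on that of fast disk: the raw media is dearer, but dedupe and compression multiply what each raw gigabyte can hold.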
Flash device controllers have become more efficient and need less hidden over-provisioned capacity to ensure an acceptable drive working life, which lowers SSD cost.
PCIe flash, with faster access over the PCIe bus and no SATA or SAS protocol conversion, is widely available. Its use is expected to grow strongly as the standard NVMe driver is widely adopted, removing the need for manufacturer- and device-specific PCIe drivers.
3D chip-building techniques have been developed, layering ordinary 2D planar flash to build 32- or 48-layer chips with larger capacities but no increase in footprint. The associated use of larger cell geometries means that 3bits/cell TLC flash can have enterprise-class endurance. Samsung has demonstrated a 15TB SSD using this technology, and it is expected to be widely available and used in 2016/17, probably surpassing the maximum disk drive capacity available, meaning >10TB drives.
Increasing non-volatile memory speed
Turning from capacity to performance matters, Intel and Micron are bringing their 3D XPoint memory to market in 2016. This non-volatile, but not flash, memory will be byte-addressable, not block-addressable as is the case with flash. It is claimed to be 1,000 times faster than flash, with 1,000 times the endurance. It will not be as expensive as DRAM but will, obviously, cost more than flash.
The thinking is that it will be used as persistent memory rather than storage, and so make applications in servers run faster.
A different technology, NVMe over fabrics, will provide RDMA (Remote Direct Memory Access), meaning PCIe bus-class access speeds between servers and external flash arrays, mounting a huge assault on data access latency.
Basically, flash and flash-type technologies promise to banish both disk latency and storage array network access latency to history, while flash's $/GB cost continues to fall.
Already, flash arrays are taking over the SPC-1 (random IO) and SPC-2 (throughput) storage benchmarks. Fujitsu's DX600 S3 all-flash array snagged a good SPC-1 score, beating the mid-range array competition in the sheer IOPS stakes with 320,206.35 IOPS, and coming second to a 3PAR 7400 on its $1.54/IOPS cost, based on list pricing.
An all-flash 3PAR 20850 now tops the SPC-2 benchmark list.
Technology developments in how to use flash are ongoing. Fujitsu Labs has developed a way for in-memory database software to send read and write commands directly to flash chips in an SSD, enabling parallel access to those chips instead of sequential access through the SSD controller.
It developed a software-controlled PCIe SSD, with 16 control channels and 256 on-board flash chips, which delivered a massive 5.5GB/sec of bandwidth. We could see a product as soon as 2017.
Maturing flash array designs
Early flash arrays from established vendors more or less put SSDs in disk drive slots and treated them as faster disks. Newer designs have gone beyond that and have updated the array operating systems to fully exploit the potential of flash and fit better into IT environments. For example, Fujitsu’s DX200F:
- Uses the same management system as other Fujitsu DX array family members
- Uses existing cluster functionality for high-availability
- Uses the DX RAID architecture
- Has quality of service and thin-provisioning features
- Has synchronous and asynchronous remote copy functionality
While possessing these family features, it provides up to 760,000 IOPS and 12GB/sec bandwidth, with write latency of 88 microseconds and read latency of 180 microseconds.
This system can fit comfortably within existing Fujitsu DX environments and be managed from the same central management console.
Data, as it ages and its access rate slows, can be moved off the DX200F onto hybrid or all-disk DX systems, leaving space for newer, high-access rate data. As the data ages more it can be moved to tape for archiving because of regulatory needs or other considerations.
IT directors with an eye to the data life cycle will appreciate this, treating the data centre partially as a data flow process, and optimising data placement in storage tiers suited to its access rate and importance over time.
It can be argued that Violin Memory was a pioneer in this specialised flash module sphere, with its VIMM (Violin In-line Memory Module) flash drives, and this baton has also been taken up by DSSD.
Flash system take-up
Suppliers of all-flash arrays are reporting good business. All existing mainstream storage suppliers are reporting double-digit year-on-year quarterly growth in their flash storage businesses.
Monolithic and dual-controller architecture disk and hybrid flash/disk arrays from Dell, EMC, HP, IBM and NetApp are showing declining or flat revenues while all-flash arrays from these suppliers are growing at significant double-digit rates year-on-year and even quarter-on-quarter, such is the demand. The AFA startups, apart from special case Violin, are also seeing strong revenue growth.
All-flash array supplier Pure Storage has reported beat-the-street quarters, while Nimble Storage, with its hybrid arrays, grew far less than expected in its latest quarter, widely ascribed to its lack of an all-flash offering.
The conclusion is that there has been huge latent demand for faster access to storage, driven by server virtualisation and multi-socket, multi-core CPUs increasing the IO capacity of servers. But storage arrays couldn’t satisfy it, and the servers and their running applications had to endure IO waits while the storage arrays struggled to keep up.
Having all-flash arrays means the server's potential is unleashed, putting compute and IO back in balance. And having all-flash and hybrid arrays share the same management facilities and data services as the disk array estate means that customers' data centre management resources are not overstretched in coping with the newer flash storage systems and sub-systems.
Customers are steadily moving latency-sensitive workloads to all-flash arrays, because the effective $/GB cost of flash, after data reduction, is now at or below 15K rpm disk drive costs and approaching 10K rpm drive costs. Once power, cooling and rack space savings are added in, the total cost of ownership of flash arrays can be significantly less than that of traditional arrays.
We're not seeing a frantic rush to flash, rather a strengthening trend which will cumulatively cut into traditional array sales more and more over the next few years.
Entire all-flash data centres will not be common because bulk secondary data is better stored on disk from a TCO point of view. There is also still a place for tape archives where large amounts of reference and/or compliance data needs to be stored for potential future access.
Consultant Enrico Signoretti sums the situation up like this:
- Enterprise + primary data only = All-Flash
- Enterprise + complex projects = Hybrid (it’s not unusual to see scenarios with a primary site all-flash and a secondary site hybrid)
- Tiering/caching is primary choice for secondary system
- Small/medium organisations = hybrid (but not a lot of tiering/caching) - it's more an all-flash + hybrid in a single box.
As ever, IT choices need to be made in a balanced way, and we can be glad that, today, we have more choices than before, when it was just disk or tape. Now we have all-flash, hybrid flash, tiers of disk, all-disk, tape and the cloud. It means we can apply storage in a more granular way to workloads, resulting in better balanced and more cost-effective systems. Thank flash for that. ®