Nice guy NetApp's adopting 'disruptive' tech non-disruptively

Gently easing us into the NVMe-over-Fabrics and storage-class memory future

Analysis NetApp will bring disruptive NVMe-over-Fabrics technology to its customers in a non-disruptive way.

NetApp chief evangelist Jeff Baxter gave a presentation at the Flash Memory Summit last week explaining this.

He said applications such as artificial intelligence, machine learning and real-time analytics demand lower latency and higher performance, which means faster fabrics and faster media. On the media side, NVMe SSDs and storage-class memory (SCM) such as 3D XPoint (and Samsung Z-NAND) are becoming available.

Faster fabric interconnects such as 25/50/100Gbit Ethernet, 32Gbit/s Fibre Channel and NVMe-over-Fabrics (NVMe-oF) are raising network access speeds to better exploit the faster media on the way. NetApp will initially use the new media selectively, then scale and fully integrate it, before moving to broader adoption and optimised media.

He said NetApp has shipped more than 6PB of NVMe media and has a clear view of how enterprise adoption of NVMe-oF and SCM will unfold.

NetApp's framework for NVMe-oF and SCM

An NVMe-oF network links a server CPU to a storage array CPU, which uses its SCM as a cache. The new media is deployed selectively here, as a cache, for what Baxter describes as maximum impact; it is also used as persistent memory in the server.
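As a rough illustration of that layering, here is a minimal Python sketch of an array controller that fronts its NAND capacity with an SCM read/write cache. The class and method names are invented for illustration; this is not NetApp code, just the general caching idea under those assumptions.

```python
# Toy model: SCM acts as a cache in front of NAND flash capacity.
# Names here are hypothetical, not NetApp APIs.

class ScmCachedArray:
    """Array controller that checks an SCM cache before touching NAND media."""

    def __init__(self, scm_capacity_blocks: int):
        self.scm_cache = {}                  # block -> data held in SCM
        self.scm_capacity = scm_capacity_blocks
        self.nand_store = {}                 # block -> data on NAND SSDs

    def read(self, block: int) -> bytes:
        # Hot blocks are served from SCM at far lower latency than NAND.
        if block in self.scm_cache:
            return self.scm_cache[block]
        data = self.nand_store.get(block, b"")
        self._promote(block, data)           # populate the cache on a miss
        return data

    def write(self, block: int, data: bytes) -> None:
        # Writes land in SCM first (acknowledged quickly), then sit on NAND.
        self._promote(block, data)
        self.nand_store[block] = data

    def _promote(self, block: int, data: bytes) -> None:
        if len(self.scm_cache) >= self.scm_capacity:
            self.scm_cache.pop(next(iter(self.scm_cache)))   # naive eviction
        self.scm_cache[block] = data
```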

Enterprise data management, a NetApp strength, continues as before.

Over time the new media moves down from the cache layer to become data-storage drives in the array itself, producing a next-generation solid-state NetApp array.

NetApp veep Ravi Kavuri writes:

From a storage system perspective, NVMe-oF will be deployed in two contexts: front end (from server to storage system) and back end (from storage system to NVMe device). Along with the current Fibre Channel front-end and back-end SAS/SATA choices, many combinations are possible. SCM media will initially be used as a read/write cache to provide significantly lower latency than available with today's NAND flash SSDs.

As the price of SCM media comes down and it becomes a viable option for more applications, you'll be able to create a pool of SCM storage. Such storage will deliver consistent low latency that is an order of magnitude faster than today's shared storage. NetApp will roll out some of these new technologies over time as they mature while protecting investment in existing storage technologies.
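To put that "order of magnitude" claim in perspective, here is some back-of-the-envelope arithmetic. The latency figures are generic ballpark assumptions chosen for illustration, not NetApp or vendor measurements.

```python
# Illustrative latency arithmetic only; both figures below are assumptions.

shared_array_read_us = 500   # assumed end-to-end read latency of a typical shared array today
scm_pool_read_us = 50        # assumed end-to-end read latency of an SCM pool over NVMe-oF

speedup = shared_array_read_us / scm_pool_read_us
print(f"Shared array: ~{shared_array_read_us} us, SCM pool: ~{scm_pool_read_us} us")
print(f"Roughly {speedup:.0f}x lower latency")   # ~10x, i.e. an order of magnitude
```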

What Baxter doesn't seem to be saying with these slides is that we will see NVMe-oF access to NVMe drives in the all-flash FAS arrays. Instead, as we see it, Optane memory will be used as the array controller cache and will be the target for incoming NVMe-oF access requests.

For NetApp, with its Data ONTAP array OS, this would seem a more practical way to speed access to array contents over a fast network connection. The alternative, allowing accessing servers direct RDMA access to the array's NVMe drives over NVMe-oF, would cut ONTAP and its data management out of the data path and raise doubts over its role.
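To make that distinction concrete, here is a purely conceptual Python sketch of the two access models. The class and function names are hypothetical and nothing here reflects ONTAP internals; it only shows where the data-management layer sits in each path.

```python
# Conceptual sketch of the two access models discussed above (hypothetical names).

class NvmeDrive:
    """Stand-in for an NVMe SSD inside the array."""
    def read_block(self, lba: int) -> bytes:
        return b"raw block data"

class ArrayController:
    """Controller-mediated path: the array OS stays in the data path."""
    def __init__(self, drive: NvmeDrive):
        self.drive = drive

    def read(self, lba: int) -> bytes:
        data = self.drive.read_block(lba)
        # Snapshot, dedupe and replication bookkeeping would hook in here,
        # because every I/O passes through the controller.
        return data

def controller_mediated_read(controller: ArrayController, lba: int) -> bytes:
    # The NVMe-oF target is the controller (e.g. fronted by an Optane cache),
    # so the array OS and its data services see the request.
    return controller.read(lba)

def direct_rdma_read(drive: NvmeDrive, lba: int) -> bytes:
    # The host reaches the drive directly over the fabric; the array OS
    # never sees the I/O, so its data services cannot apply on this path.
    return drive.read_block(lba)
```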

Kavuri writes: "We're making sure that our customers can capitalize on these technologies without having to rip and replace their infrastructures or sacrifice the NetApp data management features on which they rely daily."

If NetApp can maintain equivalent data access speed then its customers should be happy, especially if it has a realistic roadmap to, say, Optane SSDs that improve things again. ®
