
Fabric maths: Pure + Cisco = end-to-end NVMe

FlashStack in pole position

Analysis Pure and Cisco could build an end-to-end NVMe FlashStack using Pure's NVMe-equipped FlashArray//X and Cisco's NVMe over Fibre Channel networking.

There are three ingredients contributing to this window of opportunity:

  1. Pure's FlashArray//x, which uses NVMe-accessed DirectFlash modules and an NVMe-tuned //X70 controller
  2. Cisco's MDS 9700 Director with 32Gbps 48-port switching module
  3. Cisco UCS C-Series servers with 32Gbps HBAs

The Cisco gear supports NVMe over Fibre Channel and the Pure array supports NVMe drives. Put the two together and we have an end-to-end NVMe over fabrics system offering sub-millisecond latency to servers accessing a shared storage array.

FlashStack is the combined Cisco and Pure converged infrastructure (CI) template offering, similar in intent to the popular FlexPod CI system, which is built from Cisco servers and networking and NetApp storage. An obvious possibility is that an end-to-end NVMe FlashStack reference design could be built from the three ingredients above.

Among the IT incumbents, only Dell EMC has tried doing this, with its DSSD array, which was shut down a couple of weeks ago.

Several startups are in the NVMe-over-fabrics-accessed storage array space, such as E8, Excelero, Mangstor and Pavilion Data Systems, as well as Apeiron, which has a similarly fast array using hardened Ethernet. All of these systems need RDMA over Ethernet, and thus represent a new approach for existing Fibre Channel SAN users looking to upgrade to NVMe over fabrics.

A combined Cisco and Pure approach, adding NVMe to Fibre Channel, looks less of a departure from current practice.

All the main existing storage array vendors are adding NVMe drives and, eventually, NVMe over fabric access to their arrays. But while they are still retrofitting NVMe drives to their arrays, Pure has already done it, and NVMe over Fibre Channel is suddenly sitting there, waiting to be used.

The FlashArray//x will enter GA in the third quarter. Come on, Pure and Cisco, craft an end-to-end NVMe over fabric FlashStack using it, and out-innovate every other CI supplier, including Dell EMC. Wouldn't that be a pure joy to behold? ®

 
