NVMe over Fabrics array startup E8 is adding Optane drive support and providing shared writeable, multi-appliance volumes for up to 96 servers.
E8’s technology has servers and their appliance connected by an NVMe over Fabrics scheme using Ethernet cabling.
It involves agent software in each accessing server directly accessing individual drives in the appliance’s chassis, rendering the chassis pretty much just a bunch of flash (JBOF) drives.
Its 2U, 24-drive-slot E8‑D24 appliance, with dual-port 6.4TB NVMe SSDs (153.6TB raw), provides a claimed 10 million IOPS with 100 microsecond latency. This is now joined by the E8‑X24 centralised NVMe appliance, which supports dual-port Optane SSDs.
Intel’s single-port DC P4800X Optane drive, using 3D XPoint technology, supplies 375GB of capacity, so an E8‑X24 will have 9TB of raw storage-class memory capacity.
The P4800X has a latency of less than 10 microseconds, so we would expect the E8‑X24 array’s latency to be slightly more than that; say, seven to eight times lower than the E8‑D24’s 100 microseconds.
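A quick back-of-the-envelope check of the figures above (the drive count, per-drive capacity and the two latency claims are taken from the article; the 12.5 microsecond E8‑X24 figure is our assumption for "slightly more than" the drive's 10 microseconds, not a measured number):

```python
# Sanity-check the E8-X24 capacity and latency claims quoted above.
DRIVE_SLOTS = 24                  # E8-X24 is a 24-slot 2U appliance
OPTANE_CAPACITY_GB = 375          # Intel DC P4800X per-drive capacity

raw_capacity_tb = DRIVE_SLOTS * OPTANE_CAPACITY_GB / 1000
print(f"E8-X24 raw capacity: {raw_capacity_tb} TB")      # 9.0 TB

D24_LATENCY_US = 100              # claimed E8-D24 latency
X24_LATENCY_US = 12.5             # assumed: "slightly more than" the P4800X's 10 us

ratio = D24_LATENCY_US / X24_LATENCY_US
print(f"Latency improvement: ~{ratio:.0f}x")             # ~8x
```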
Zivan Ori, the founding CEO of E8, said: “E8 uses dual-ported Optane in HA enclosure because it’s always been our belief that people putting such an investment into Optane will want to have it highly available. The dual-port Optane will be announced soon by Intel (its schedule is slightly later than single-port Optane), and our performance results will follow later.”
Apeiron has compared the performance of Optane and NVMe flash drive versions of its ADS1000 storage appliance. Global sales and marketing exec veep Jeff Barber said: “With a 70/30 read/write mix, Optane with Apeiron produces only 13 microseconds of latency vs. 500 for NAND. Large block writes are 10x NAND performance, with 25x less latency.”
Ori said: “Apeiron used the single-port Optanes and as such is a non‑HA JBOF, something we believe is both (a) a commodity (b) not very useful for customers: if you put such a box and lose it because of a single failure, that’s $100K of SSDs lost.”
Shared writable volumes
One of the issues with NVMe over Fabrics arrays is whether they are true SANs and offer storage that is shared amongst users. Traditionally, dual-controller arrays have software which pools the array’s drives into a single sharable storage space, cut up into LUNs available to accessing servers.
NVMeoF arrays typically offer direct, RDMA-like access to single drives. What E8 has now done is develop shared writable volumes. The volume-level work is done by its server-resident agents, as some degree of co‑ordination is necessary.
Ori said: “It’s a feature we’re very proud of,” and performance is approximately the same as when accessing a single-user volume: “It’s not noticeably detected on performance.”
He said E8 uses RAID 6, and a four-appliance set treated as a single volume would have a single RAID group. E8 supports RAID 5 and 0 but prefers RAID 6 for its high performance and reliability. Ori said: “It’s distributed RAID 6 and we’re very proud of that.”
A RAID group has to be homogeneous in terms of drives, and you can dedicate RAID groups to specific compute clusters. The system lets you mix and match flash and Optane within a single appliance. Drives can be hot-added to a RAID group and you can add new RAID groups to reduce failure domains.
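E8 doesn't publish the layout details of its distributed RAID 6, but the capacity arithmetic for a conventional double-parity scheme is simple: each stripe gives up two strips to P and Q parity. A sketch, assuming that standard layout (E8's implementation may differ):

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable capacity of a double-parity (RAID 6) group:
    two drives' worth of space in every group goes to P and Q parity."""
    if drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drives - 2) * drive_tb

# A full E8-D24 shelf of 6.4TB NVMe SSDs as one RAID 6 group:
print(round(raid6_usable_tb(24, 6.4), 1))   # 140.8 TB usable of 153.6 TB raw
```

This also shows why E8 would prefer RAID 6 over RAID 5 here: with 24 large drives in a group, tolerating a second failure during a long rebuild costs only one extra drive's worth of capacity.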
Shared writable volumes support simultaneous read/write access by up to 96 concurrent writers to the same volume – i.e. 96 servers accessing an E8 storage system. This, E8 says, delivers the best performance possible for parallel processing architectures such as clustered file systems and clustered databases.
Our understanding is that this means software like IBM Spectrum Scale (GPFS) and systems such as Oracle RAC (Real Application Clusters); E8’s tech works seamlessly with them in applications such as real-time analytics and transactional databases.
In a SAS/STATS test, an E8 system built on GPFS with shared write access provided a 4x performance improvement compared to local NVMe SSDs using a local file system (XFS). The improvement is irrespective of the number of E8 systems used.
E8 is also adding multi-appliance volumes with up to 4 nodes tested; Ori said that enables half a petabyte or so of shared and extremely high-speed storage. Customers can add multiple E8 Storage appliances, taking advantage of increasingly higher-capacity SSDs without impacting previously stored data.
The appliances themselves are not clustered, with E8 calling it stacking. Ori said access times remain constant as such stacking scales out.
E8 Storage clients can now access volumes across multiple E8 storage appliances. Ori said customers can set up one huge shared LUN or multiple smaller ones down to single drive volumes.
This allows both scaling of capacity and tiering of storage, with tiers encompassing Optane SSDs and NAND SSDs.
You could use an Optane tier for log files and flash drives for the bulk data. E8 says dedicated network links ensure network bandwidth is provided to each tier of storage.
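The log-versus-bulk placement described above can be sketched as a simple routing policy. This is purely illustrative: the volume paths and the `choose_tier` policy are our assumptions, not E8's API or configuration format.

```python
# Hypothetical illustration of the tiering idea: route latency-critical
# log writes to an Optane-backed volume and bulk data to a NAND-backed one.
# Volume names and the routing policy are assumptions, not E8's actual API.
TIERS = {
    "optane": "/dev/e8/optane-logs",   # low-latency tier for log files
    "nand":   "/dev/e8/nand-bulk",     # high-capacity tier for bulk data
}

def choose_tier(io_kind: str) -> str:
    """Pick a backing volume: log appends go to Optane, all else to NAND."""
    return TIERS["optane"] if io_kind == "log" else TIERS["nand"]

print(choose_tier("log"))    # /dev/e8/optane-logs
print(choose_tier("data"))   # /dev/e8/nand-bulk
```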
This is a major hardware and software release for E8, and the Tel Aviv Stock Exchange is evaluating the system.
Ori said: “Dual Port Intel Optane SSDs are a perfect fit for our shared NVMe solution, and E8 Storage allows most innovative customers like the Tel Aviv Stock Exchange to introduce these new SSDs into their applications without any modification to their existing infrastructure.”
Uri Shavit, SVP, CIO in the IT and Operations Department, at the Exchange, said: “E8 Storage allows us to build a shared storage solution with latency that rivals in‑memory database clusters, but unlike in‑memory solutions it allows us to scale capacity easily as well as share volumes between many nodes in the Tel Aviv Stock Exchange cluster. [It] has a potential in the field of high-frequency trading and represents a new breed of storage product that we have not seen before.”
Shared access using NVMe over fabrics to a 9TB pool of Optane storage could make people like high-frequency financial traders quite keen. An Optane SAN – who would have thought that was coming? ®