
Excelero gets in a right non-volatile mesh over SSD-server connection

Mesh startup wears NVMe-over-fabric-style hat

Analysis Excelero is working on its new NVMesh software to connect shared NVMe SSD storage with accessing servers and their applications.

The aim is to deliver a centralised, petabyte-scale, block storage pool with local, directly-connected NVMe SSD access speeds, using commodity server, storage and networking hardware.

The company was founded in 2014 by CEO Lior Gal, CTO Yaniv Romem, VP Engineering Ofer Oshri and Chief Scientist Omri Mann.

Various patent filings are associated with these people. The company received $20 million in funding from Battery Ventures and Square Peg Capital in 2015.

The NVMesh software includes:

  • NVMesh intelligent client block driver, which runs on the accessing servers that need to use NVMesh logical block volumes
  • NVMesh target module, which runs on the shared SSD storage systems to validate initial client-drive connections, but which sits outside the data path
  • Central storage management module with RESTful API and web-based GUI to control system configuration.

The storage management module interfaces with a Docker Persistent Volume plugin and OpenStack’s Cinder drivers.
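As a rough illustration of how that management layer might be driven, here is a minimal sketch of provisioning a logical volume over a RESTful API, in the spirit of the management module described above. The host, port, endpoint path, payload fields and credentials are all illustrative assumptions, not documented NVMesh API calls.

```python
import requests

# Hypothetical NVMesh management endpoint: host, port and path are assumptions,
# not a documented Excelero API.
MGMT_URL = "https://nvmesh-mgmt.example.com:4000/volumes"

# Illustrative request body for a 1TiB logical block volume (field names assumed).
volume_spec = {
    "name": "vol01",
    "capacityGiB": 1024,
}

# Ask the central management module to create the volume.
resp = requests.post(MGMT_URL, json=volume_spec, auth=("admin", "admin"), verify=False)
resp.raise_for_status()
print("Created volume:", resp.json())
```

A Docker Persistent Volume plugin or Cinder driver would sit on top of calls like this, translating container or OpenStack volume requests into management API operations.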

NVMesh scheme

In a traditional or disaggregated deployment the storage system works like an array (a server with local NVMe drives) which connects to the accessing servers over InfiniBand or Ethernet. The array part runs the target module software, with accessing clients running the client block driver code. The initial software release supports up to 128 clients, running Linux distributions with kernel v3.x or later.
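As a minimal sketch, a pre-deployment check of those two client-side constraints (kernel v3.x or later, no more than 128 clients) might look like this; the limits come from the paragraph above, while the helper functions are purely illustrative.

```python
import platform

MAX_CLIENTS = 128        # client limit stated for the initial NVMesh release
MIN_KERNEL_MAJOR = 3     # Linux kernel v3.x or later is required

def kernel_supported() -> bool:
    """Return True if the running Linux kernel is v3.x or newer."""
    release = platform.release()           # e.g. "4.4.0-142-generic"
    return int(release.split(".")[0]) >= MIN_KERNEL_MAJOR

def can_add_client(current_clients: int) -> bool:
    """Return True if another accessing server can join the pool."""
    return current_clients < MAX_CLIENTS

print("Kernel OK:", kernel_supported())
print("Room for another client:", can_add_client(100))
```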

Excelero also supports a hyper-converged deployment mode with virtualised block storage across the component server nodes. In this case both the intelligent client and target module pieces of software run on the same servers.

Excelero NVMesh deployment styles

The hyper-converged deployment style is, as far as we know, unique, and opens up potential partnerships with hyper-converged infrastructure appliance (HCIA) vendors wanting to offer higher-performance systems.

The company will certify specific suppliers' hardware products. For InfiniBand and Ethernet, Mellanox CX-3-based NICs and QLogic FastLinQ 45000 Series controllers are certified by Excelero. The Ethernet method uses RoCE (RDMA over Converged Ethernet), so RDMA-enabled network interface cards are required.
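Because the Ethernet path relies on RoCE, the client driver is only useful on hosts whose RDMA NIC drivers are loaded. On Linux, registered RDMA devices appear under /sys/class/infiniband, so a generic presence check (not an Excelero tool) is a quick way to vet a host, as sketched below.

```python
from pathlib import Path

RDMA_SYSFS = Path("/sys/class/infiniband")   # standard Linux location for RDMA devices

def rdma_devices():
    """List RDMA-capable devices registered with the kernel, e.g. mlx4_0 or qedr0."""
    if not RDMA_SYSFS.is_dir():
        return []
    return sorted(p.name for p in RDMA_SYSFS.iterdir())

devices = rdma_devices()
if devices:
    print("RDMA devices found:", ", ".join(devices))
else:
    print("No RDMA devices registered; RoCE/InfiniBand transport will not work")
```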

The NVMe drives must use the NVMe v1.0a specification or above, and Intel, HGST and Samsung (e.g. the PM953) NVMe SSDs have been tested.

Data access latency can be lower than 100µs for reads and 30µs for writes, with millions of IOPS, and the target hosts bear no NVMesh compute load. The NVMesh scheme adds about 5µs of network latency on top of the raw NVMe SSD latency.
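In other words, remote access latency is roughly the drive's native latency plus the ~5µs mesh overhead. A back-of-the-envelope check, using assumed native figures of 90µs per read and 20µs per write purely for illustration, lands under the quoted 100µs and 30µs numbers.

```python
NETWORK_OVERHEAD_US = 5      # extra latency NVMesh adds per access, per the figures above

# Illustrative native NVMe SSD latencies; assumed values, not Excelero measurements.
native_read_us = 90
native_write_us = 20

remote_read_us = native_read_us + NETWORK_OVERHEAD_US     # ~95µs, under the 100µs claim
remote_write_us = native_write_us + NETWORK_OVERHEAD_US   # ~25µs, under the 30µs claim

print(f"Remote read ~{remote_read_us}µs, remote write ~{remote_write_us}µs")
```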

The system scales IOPS performance linearly, as this table shows:

Workload                     | Remote IOPS, 1 converged node | Remote IOPS, 10 converged nodes
100% read @ ~200μs average   | 1,191,400                     | 11,450,700
100% write                   | 255,934                       | 2,403,379
100% read @ ~100μs average   | 479,600                       | 4,787,319
100% write @ ~40μs average   | 216,737                       | 2,153,192
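Using the table's own numbers, a quick check of the 10-node versus 1-node ratio comes out at roughly 9.4x to 10x across the four workloads, which is what the near-linear scaling claim amounts to.

```python
# (1-node IOPS, 10-node IOPS) pairs copied from the table above.
results = {
    "100% read @ ~200µs":  (1_191_400, 11_450_700),
    "100% write":          (255_934, 2_403_379),
    "100% read @ ~100µs":  (479_600, 4_787_319),
    "100% write @ ~40µs":  (216_737, 2_153_192),
}

for workload, (one_node, ten_nodes) in results.items():
    scaling = ten_nodes / one_node
    print(f"{workload}: {scaling:.1f}x with 10 nodes (10.0x would be perfectly linear)")
```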

Add Excelero to your mental list of NVMe fabric-class shared arrays offering local NVMe flash drive access speed: Apeiron, E8, EMC DSSD, Mangstor, and Pavilion Data Systems, with HPE, Kaminario and Tegile committed to adopting the technology, Pure Storage nicely positioned to do so, and NetApp offering encouraging words.

Check out a Philippe Nicolas blog post about Excelero here and a Mellanox white paper here. Expect Excelero to emerge from stealth in one or two quarters. ®
