
Hyper-scaling multi-structured data? Let's count the ways

Molluscoid magic is just the start

Infinidat

This is a classical-style networked array re-invented for the hyper-scale, flash and multi-structured data era, and sold as a HW/SW combination; no open source here.

Infinidat is a Moshe Yanai startup founded in 2011. Yanai invented Symmetrix, EMC's monolithic array now in its VMAX 3 incarnation, and was also involved with XIV. The Israeli company is still in stealth mode and claims its technology can provide SAN, file and object storage in parallel, with a single management graphical user interface.

Infinidat rack

The initial product is the IZBox G300 filer.

Its features include:

  • 1.5PB - 2.88PB raw capacity per rack
  • 1.1PB - 2.1PB usable capacity
  • 99.99999 per cent uptime
  • Self-healing architecture
  • Double-parity RAID
  • End-to-end data verification
  • Triple-active redundant nodes
  • Up to 2.3TB of DRAM
  • Up to 38TB of secondary NAND cache per controller
  • More than 750k IOPS per controller

It has an N-way architecture with three nodes per rack. Each node is a server with DRAM, functioning as a memory cache, and a significant amount of SSD acting as a global cache; there is a total of over 86TB of DRAM and flash across the three nodes. Each of the three nodes has access to eight drive shelves holding a total of 480 6TB disk drives. Small sections of each disk, rather than whole drives, are treated as RAID members.

An Infinidat datasheet (registration required) says: "As data comes into the system it is aggregated into 14 SATA-optimized sections, each with its own DIF (data integrity field) and lost-write protection field. These protect the data from logical corruptions as well as disk-level errors. Infinidat then adds two parity sections to complete the RAID stripe. The data is then sent to a group of 16 disks, with each RAID stripe always landing in different disks... you can recover from a double 6TB-disk failure back to protected mode in less than 10 minutes."
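That 14+2 layout is, in effect, a declustered double-parity scheme. The Python sketch below is our own minimal illustration of the idea, not Infinidat's implementation: the section size, the parity maths (a plain XOR stands in for both parity sections) and the random drive selection are all assumptions made for the example.

```python
import os
import random
from functools import reduce

SECTION_SIZE = 64 * 1024    # assumed section size, for illustration only
DATA_SECTIONS = 14          # per the datasheet: 14 data sections per stripe
PARITY_SECTIONS = 2         # plus two parity sections
POOL_SIZE = 480             # 6TB drives in the rack

def build_stripe(chunk: bytes):
    """Split incoming data into 14 equal sections and append two parity sections.
    Real double-parity RAID uses two independent codes (P = XOR, Q = Reed-Solomon
    style); both are simple stand-ins here, to show layout rather than the maths."""
    chunk = chunk.ljust(DATA_SECTIONS * SECTION_SIZE, b"\0")
    sections = [chunk[i * SECTION_SIZE:(i + 1) * SECTION_SIZE]
                for i in range(DATA_SECTIONS)]
    p = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sections)
    q = p  # placeholder for the second, independently computed parity
    return sections + [p, q]

def place_stripe(stripe, pool_size=POOL_SIZE):
    """Send the 16 sections to 16 distinct drives picked from the whole pool,
    so successive stripes land on different disk sets and a rebuild is spread
    across every drive rather than hammering one hot spare."""
    disks = random.sample(range(pool_size), len(stripe))
    return list(zip(disks, stripe))

placement = place_stripe(build_stripe(os.urandom(SECTION_SIZE * DATA_SECTIONS)))
print(sorted(disk for disk, _ in placement))  # 16 distinct drive indices
```

Spreading stripes across all 480 drives is what makes the claimed sub-10-minute rebuild from a double 6TB-drive failure plausible: every surviving drive contributes a small slice of the reconstruction work.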

The nodes are interconnected by InfiniBand, with hosts accessing the system via Fibre Channel and Ethernet. Disk drives are SAS-connected.

Infinidat promises a disruptively low price point which, considering options like Ceph are open source, is just as well.

DSSD

Here is another classical proprietary supplier's networked array. DSSD is EMC's acquired all-flash technology that equates to rack-scale storage that will provide block, file, object and Hadoop unstructured data access. Flash drives are hooked up with a PCIe fabric. Each drive has its own controller and DRAM is used to cache the flash.

The system will be launched later this year. It appears file and block access semantics will be layered on top of a base access layer. EMC Information Infrastructure president Chad Sakac has said: "DSSD doesn’t require any of those file/block semantics between the flash read/write model. It can expose this via libHDFS or object semantics, or directly mapping to key value stores (with a PCIe/NVMe connection). If you want direct memory mapping over RDMA and over direct PCIe NVMe, it can do that too!"
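To make the layering idea concrete, here is a minimal Python sketch of a base access layer with block and key-value "personalities" stacked on top. It is purely illustrative; the class and method names are ours, not DSSD's, and the real system addresses flash over a PCIe/NVMe fabric rather than a Python dictionary.

```python
from typing import Dict

class FlashStore:
    """Stand-in for the base flash access layer: raw objects addressed by id."""
    def __init__(self):
        self._cells: Dict[bytes, bytes] = {}

    def write(self, obj_id: bytes, data: bytes) -> None:
        self._cells[obj_id] = data

    def read(self, obj_id: bytes) -> bytes:
        return self._cells.get(obj_id, b"")

class BlockAdapter:
    """Block 'personality' layered on the base store: fixed-size LBAs."""
    BLOCK = 4096
    def __init__(self, store: FlashStore):
        self.store = store
    def write_block(self, lba: int, data: bytes) -> None:
        self.store.write(lba.to_bytes(8, "big"), data[:self.BLOCK])
    def read_block(self, lba: int) -> bytes:
        return self.store.read(lba.to_bytes(8, "big"))

class KeyValueAdapter:
    """Key-value 'personality': maps application keys straight to objects,
    skipping block and file translation entirely."""
    def __init__(self, store: FlashStore):
        self.store = store
    def put(self, key: str, value: bytes) -> None:
        self.store.write(key.encode(), value)
    def get(self, key: str) -> bytes:
        return self.store.read(key.encode())

store = FlashStore()
KeyValueAdapter(store).put("user:42", b"profile-bytes")
BlockAdapter(store).write_block(7, b"\x00" * 4096)
```

The point Sakac appears to be making is that the key-value path bypasses the block and file translation layers altogether, which is where much of the latency in a conventional storage stack lives.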

This multi-structured data system is aimed, we think, at mission-critical structured data held in blocks, as well as structured and semi-structured files and objects. Like Infinidat, it is a hardware- and software-centric design with some, but not all, commodity components; PCIe fabrics and InfiniBand are not generally classed as commodity hardware.

ScaleIO

This is EMC's view of how to implement a virtual SAN at hyper-scale using not exactly open source software but with a nod in that direction. The software is freely available (with no support from EMC) and EMC will be offering a paid-for supported version.

ScaleIO is software running on commodity x86 server hardware. Accessing hosts' requests, issued with file system semantics, are handled by ScaleIO data clients, software constructs which get their services from ScaleIO data servers presenting block volumes.

Data is stored in blocks across multiple nodes, loosely coupled together, with node counts intended to go past 1,000, well past. The nodes use disk and flash for persistent storage and flash and memory for caching. Sakac says ScaleIO goes well beyond VMware's VSAN in terms of scalability and is intended for transaction workloads.
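As a rough illustration of what "blocks spread across more than 1,000 loosely coupled nodes" looks like, here is a toy placement function in Python. The hash-based mapping and two-copy mirroring are assumptions for the example; the article does not describe ScaleIO's actual placement algorithm.

```python
import hashlib

class VirtualSAN:
    """Toy model of spreading a volume's blocks across many data-server nodes."""
    CHUNK = 1024 * 1024  # assumed 1MB placement unit

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def owners(self, volume: str, chunk_index: int):
        """Return a (primary, mirror) pair of data-server nodes for one chunk."""
        digest = hashlib.sha1(f"{volume}:{chunk_index}".encode()).digest()
        primary = int.from_bytes(digest[:4], "big") % len(self.nodes)
        # offset in 1..n-1 guarantees the mirror lands on a different node
        offset = 1 + int.from_bytes(digest[4:8], "big") % (len(self.nodes) - 1)
        return self.nodes[primary], self.nodes[(primary + offset) % len(self.nodes)]

san = VirtualSAN([f"sds-{i:04d}" for i in range(1200)])  # node counts past 1,000
print(san.owners("vol1", 0))  # a (primary, mirror) pair of node names
```

Computed, table-free placement of this kind is one common way scale-out designs avoid a central data-path bottleneck; whether ScaleIO does exactly this is not something the article states.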

But he says: "ScaleIO smokes Ceph for transactional use cases in every dimension: ease of use, performance, latency, failure behaviours."

Sakac notes: "Ceph is really first and foremost an object stack...but we see a lot of customers trying to make Ceph work as a transactional storage model. When we would ask “why, when it’s so hard to get working and the performance is really, really bad?” – the answer tended to be “well, it’s easy to get, and openly available”.

"Now ScaleIO is easy to get and openly available as well. Oh, and it costs less than Ceph Enterprise if you want to compare the TCO inclusive of support."

Two SW-centric and two HW/SW-centric designs

Both the DSSD and Infinidat systems are classic proprietary hardware/software products offering unified storage at hyperscale. Since they compete with hyperscale unified storage software, their pricing has to be comparable.

ScaleIO and Ceph are both freely available, software-centric products using commodity hardware. They set a price bar for the proprietary products. With EMC (ScaleIO) and Fujitsu (Ceph) providing paid-for, supported versions, business customers can get the kind of support and hardware configurations they are used to without having to go the DIY route of free software and community-supported open source.

All four products offer unified storage at scale and they may bring back together the separated block, file, object and unstructured data silos that have tended to sprout in recent years. But don't bet on it, not yet; all four are relatively recent, still-developing products whose final shape and direction are not yet clear. ®
