As the general big data trend gathers momentum, the various object storage suppliers are trumpeting their technology's advantages over file systems, saying they can store and protect vast volumes of data more efficiently and with faster access.
Henry Baltazar, a senior analyst at The 451 Group, says: "Object storage, and cloud storage in particular, needs to progress beyond 'cheap and deep' to become a larger part of enterprise and service provider markets."
Here's an object storage progress report, with suppliers covered in alphabetical order:
EMC's Atmos has become the de facto object storage market leader, at least in terms of perception. Earlier this year EMC announced Atmos release 2.0, which it claims is five times faster and 65 per cent more efficient than its predecessor.
We have been told, but not by EMC, that there are of the order of 200 EMC Atmos customers with about 60PB of deployed capacity. Atmos quarterly revenues are showing more than 100 per cent annual growth.
For example, in May EMC said Atmos revenues had more than doubled over the year. In July, CEO Joe Tucci said in the third quarter earnings call: "If you look at the newer products we have – the Greenplum, Atmos, Isilon – I can tell you that each of those products quarter on quarter, year on year, Q2 of 2011 over Q2 2010, more than doubled – all three of them."
Caringo, whose CAStor object storage product was OEM'd by Dell as the DX6000 object storage array, has made CAStor available for OpenStack.
A Caringo spokesperson said: "OpenStack is an open-source cloud computing platform. One of the components offered is an object store known as Swift. Because Swift is based on a file system there are limitations in scalability and performance. There are also limitations in setting data protection policies and replication to a remote site for DR or compliance purposes is not supported. These limitations are keeping OpenStack from being adopted by organisations needing enterprise-grade data protection and scalability."
The CAStor integration maintains OpenStack Swift accounts, containers, authentication and access controls, and enables seamless integration of CAStor with OpenStack's compute and imaging components. There are several advantages, at least according to Caringo. For example, data integrity can be achieved with just two replicas, which, according to the company, "sav[es] over 30 per cent in hard drive costs and data centre space when compared to Swift object servers". The beta version of CAStor for OpenStack is available now.
DataDirect Networks (DDN), which is rapidly moving out of its high-performance computing niche, has announced v2.0 of its Web Object Scaler (WOS) product, claiming performance 70 per cent faster than Amazon S3 and more than 100 times faster than EMC's Atmos: retrieving up to 55 billion objects a day and writing up to 23 billion. It supports hundreds of locations managed through a single console, with a capacity of more than 20PB. Interfaces for Apple iPads and iPhones and for NFS are now available, on top of Amazon S3 and WebDAV.
DDN marketing VP Jeff Denworth said: "WOS 2.0 can do 2 million writes per second, and 8 million reads. This is with SAS disks and we haven't been able to figure out where the ceiling exists with eMLC (flash). Note as well [that] we're supporting SAS/SATA/SSD with policy-based data placement options as part of the WOS system."
WOS 2.0 has ObjectAssure "erasure-code based, declustered data protection to lower storage costs and minimise request latency to less than 40 milliseconds response for small object writes," says Denworth.
Erasure-coding adds extra data to a block of data that is derived from the original data, and can be used to reconstruct it if some of the original data is lost. Reed-Solomon encoding is one method of doing this. The whole idea is to protect data without undergoing time-consuming RAID rebuilds of failed disks.
It can provide RAID-equivalent levels of data protection without RAID rebuilds or replication, and is being developed for environments in which multi-site collaboration and replication are not requirements. With it, each WOS node can withstand up to two concurrent drive failures without loss of data or data availability, which is reminiscent of RAID 6.
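The idea can be sketched in miniature with a single XOR parity block. This is a toy illustration, not DDN's actual implementation: production schemes such as ObjectAssure use Reed-Solomon codes over Galois fields, which generalise the same XOR trick to survive two or more simultaneous failures. The function names here are purely illustrative.

```python
# Toy erasure-coding sketch: one XOR parity block protects k data blocks,
# so any single lost block can be rebuilt from the survivors plus parity.
# Reed-Solomon codes extend this to tolerate multiple concurrent failures.

def make_parity(blocks: list[bytes]) -> bytes:
    """Compute a parity block as the byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one missing data block: XOR of survivors and parity."""
    return make_parity(surviving + [parity])

data = [b"obj-part-A", b"obj-part-B", b"obj-part-C"]
parity = make_parity(data)

# Simulate losing the middle block and recovering it without a RAID rebuild:
recovered = reconstruct([data[0], data[2]], parity)
assert recovered == data[1]
```

Because reconstruction only reads the surviving blocks for the object in question, there is no array-wide rebuild of a failed disk, which is where the latency and cost savings come from.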
For other sites, WOS 2.0 introduces asynchronous replication, "which enables users to capture and commit objects to storage faster than previously possible, dramatically increasing performance for big files and big data sets." It introduces cloud storage management capabilities such as multi-tenancy, bill-back, encryption, and per-tenant reporting. WOS 2.0 is available now and targeted at world-wide cloud storage service providers, content-intensive Web 2.0 organisations, geospatial and signal intelligence organisations and world-wide collaborative research partnerships.
Denworth said: "There's no hard limit to the system we've designed. We double the size of the cluster, which we did going from v1 to v2.0, and the performance doubles since it's scaling linearly, and voilà... [We're] expecting to quadruple this [performance] by the next release... with 32 million read IOPS and 8 million write IOPS, and support for 1 trillion objects as we ramp up the test and delivery harness."
Well, there's big data and then there's BIG Data.
By the way, El Reg understands that applications can run under KVM directly on some DataDirect storage arrays. This is where applications need the fastest possible access to data for, say, some kind of filtering in HPC scenarios. This is very similar to what EMC is doing by hosting apps inside virtual machines running on its VMAX, VNX and Isilon arrays.