Big Data storage of the future: Fat spinning tubs smothered in NVRAM gravy

No more tiers, vows one storage guru as industry places its bets


Jean-Luc Chatelain, EVP at one of the major high-capacity storage firms, says he sees storage tiers collapsing, leaving only server non-volatile memory (NVRAM) and massively fat spinning data tubs of up to 64TB, and rendering tape irrelevant. But how do we get to this point?

Chatelain - aka JLC - is the exec who heads up strategy and technology for privately held DataDirect Networks. DDN supplies huge great drive arrays that suck in and pump out data at great rates for supercomputing, high performance computing, media and entertainment, pharma and geo-science apps - all the usual HPC suspects. It has been having a great time doing this and has announced a $100m R&D investment in exascale computing storage.

DDN has also started running application software, such as file systems, inside its storage arrays, opening the door to running other data-munching software that needs low-latency access to mountains of data on its array controllers - which are, in effect, embedded x86 servers.

Chatelain, speaking in a personal capacity rather than as a representative of the company, said HPC storage is currently confined to a set of niche vertical markets, but he believes the onrush of Big Data-style processing into general business and public sector organisations is going to make it a horizontal activity. That should bring a concomitant need for HPC-style storage to enable the real-time Big Data analytics processing users will want, which in turn provides a big opportunity for storage vendors with the right Big Data analytics storage products - step forward DDN.

Chatelain highlights DDN's WOS (Web Object Scaler) as a clusterable, highly scalable object storage array that's in use today in massive Big Data applications, including defence intelligence analytics work.
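
Object storage, for anyone unfamiliar with the model, trades a hierarchical file system for a flat namespace of IDs mapped to immutable blobs plus metadata, typically spread and replicated across many nodes. The sketch below is a generic, minimal illustration of that model in Python - the names (ToyObjectStore, put, get) are invented for the example and have nothing to do with DDN's WOS API.

```python
import hashlib

class ToyObjectStore:
    """Toy object store: a flat namespace of IDs mapped to immutable blobs plus metadata.
    Real systems also replicate or erasure-code each object across nodes; this sketch doesn't."""

    def __init__(self):
        self.objects = {}                     # object_id -> (blob, metadata)

    def put(self, blob, metadata=None):
        # Objects are addressed by an ID derived from the content, not by a directory path.
        object_id = hashlib.sha256(blob).hexdigest()
        self.objects[object_id] = (blob, dict(metadata or {}))
        return object_id

    def get(self, object_id):
        blob, _metadata = self.objects[object_id]
        return blob

store = ToyObjectStore()
oid = store.put(b"sensor readings ...", {"source": "survey-7", "format": "raw"})
assert store.get(oid) == b"sensor readings ..."
```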

He thinks the right storage products of the future will need to do two things: handle the huge volumes of data involved, and provide exceedingly low-latency access to the working subsets of it. That's where Chatelain sees much bigger data tub drives and much faster non-volatile storage memory coming in.

We can summarise his ideas like this:

Starting in 2014 and gathering pace in 2016, we're going to see two tiers of storage in Big Data/HPC-class systems. There will be storage-class memory built from NVRAM - post-NAND stuff - in large amounts per server to hold the primary, in-use data, complemented by massive disk data tubs with form factors of up to 8.5 inches, spinning relatively slowly at 4,200rpm. They will render tape operationally irrelevant, he says, because they could hold up to 64TB of data with a 10ms access latency and 100MB/sec of bandwidth.
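
Some back-of-envelope arithmetic with those figures shows why such a drive is pitched at archiving rather than performance work: streaming a full 64TB off at 100MB/sec takes about a week, and a 10ms access latency caps random access at roughly 100 IOPS. The quoted numbers are Chatelain's; the sums below are ours.

```python
# Chatelain's figures for the hypothetical 64TB archive drive
capacity_tb = 64
bandwidth_mb_s = 100          # sustained streaming bandwidth
latency_ms = 10               # per-access latency

capacity_mb = capacity_tb * 1_000_000                   # decimal TB -> MB
full_read_days = capacity_mb / bandwidth_mb_s / 86_400  # seconds -> days
print(f"Streaming the whole drive: ~{full_read_days:.1f} days")   # ~7.4 days

random_iops = 1000 / latency_ms                         # latency-bound random access
print(f"Rough random-access ceiling: ~{random_iops:.0f} IOPS")    # ~100 IOPS
```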

This idea of much higher-capacity disk drives attacking tape in the online archive space has a surface appeal: the disk drive manufacturers could well like the idea of replacing performance-disk manufacturing volume lost to flash with new online archive disks that take share from tape reels.

Gartner analyst Valdis Filks says tape has a unique advantage: offline files can't be corrupted or deleted. It's the safety net enterprises need. He says the big, fat disk ideas remind him of IBM's old SLED (Single Large Expensive Drive) - the 3390 from 1989, now discontinued.

What do others think of JLC's ideas?

James Bagley of Storage Strategies Now said:

With regards to persistent memories other than flash, I think his timetable is too aggressive, since the only real alternative is MRAM and Everspin is just starting to sample 64Mb parts, while 64Gb 20nm flash parts are flooding the market from Micron and Toshiba.

Everspin has an aggressive plan to continue to shrink lithographies but they have a long way to go, current parts are around 120nm cell size. I’m pretty bullish on MRAMs taking a piece of the server and controller NVRAM market over the next 2-3 years but don’t see it displacing flash in the typical cache and top tier. LSI’s 12Gb SAS controllers will likely use the Everspin chips.

We are in agreement with Jean-Luc that object storage is going to dominate many applications because of the unbridled growth of unstructured data.

With regard to a coordinated attack on tape by HDD, he is probably correct, but tape will still be around for my grandchildren.

Josh Krischer of Josh Krischer and Associates thought the NVRAM ideas were good, seeing new NVRAM products being:

  • Next Generation SSDs – Storage Class Memory (SCM)
  • Cost within 10x of enterprise disk
  • Performance within 3x of DRAM
  • Endurance superior to NAND.

He said: "In my opinion there will be [a] new type of SSD based on Storage Class Memory (SCM). [It's] not clear which technology will win but one (or two) out of MRAM, FeRAM, Racetrack, Organic and Polymer, and Resistive RAM (RRAM)... It will be a new storage tier between the memory and the Flash SSDs or lower performance SCM which will replace the flash technology."

He noted that in November Everspin Technologies had announced the industry's first Spin-Torque Magnetoresistive RAM (ST-MRAM) chip, due to ship in 2013.

Krischer agrees big spinning disk data tubs will be needed but can't see 8.5-inch form factor disks coming and replacing tape:

Why … kick a dead horse? Tape is not [a] growing business. The smart tape vendors, like Fujitsu with CentricStor, are not enjoying great success. I bet on “cheap” disks with … mirroring in de-clustered RAID [or] Erasure Coding. [For example] 2.5-inch HDDs (2020 - 12TB, 3.5-inch - 60TB), more platters, all SAS.
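
For readers who haven't met the terms, erasure coding and de-clustered RAID spread an object across many cheap disks as data shards plus computed parity, so a lost disk can be rebuilt from the survivors. The Python below is a minimal single-parity (XOR) sketch of the principle only - the function names are our own, and production systems of the kind Krischer describes use stronger codes such as Reed-Solomon that tolerate multiple simultaneous failures.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k):
    """Split data into k equal shards plus one XOR parity shard."""
    data = data.ljust(-(-len(data) // k) * k, b"\0")   # pad to a multiple of k
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return shards + [parity]

def rebuild(shards, lost_index):
    """Recover any single missing shard by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

shards = encode(b"archive object spread across cheap disks", k=4)
assert rebuild(shards, lost_index=2) == shards[2]      # lose one disk, rebuild its shard
```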

We asked the disk drive manufacturers - Hitachi GST, Seagate, Toshiba and Western Digital. None replied, but then they don't generally discuss product roadmaps four years out with the likes of us.

El Reg's take on this is that JLC is largely right: Big Data-processing servers will get post-NAND NVRAM storage memory alongside their main DRAM, and will hold the bulk of the data they need in a networked, massive, single-tier, scale-out disk drive array - likely enough one using object storage technology.

Who's right? Tell us what you think in our storage forum. ®
