DDN: The elephant in the big data storage room

New class of very large enterprise workloads plays to its strengths

Sponsored DDN has supplied storage systems for over two decades and counts more than 10,000 customers across a range of industries. Despite this, the California company is likely to be less familiar to executives in large organisations than other, more mainstream storage vendors.

The reason for this is that DDN has long specialised in building high performance storage systems for demanding applications that involve huge volumes of data, rather than the kind of systems serving enterprise applications that are typically found in a corporate data centre.

However, enterprise needs are changing, thanks to the massive growth in the volumes of data that organisations are now collecting and seeking to put to profitable use, along with the increasing use of techniques such as AI and advanced analytics to help make sense of it all.

‘We understand scale’

The emergence of data-intensive requirements for this new class of very large enterprise workloads plays to the strengths of DDN, which argues that its background in delivering large-scale storage infrastructure makes it a better fit than traditional enterprise storage systems.

“We understand scale,” says DDN’s senior vice president of products James Coomer. “We build these parallel file systems. And they aren't just storage systems. They have software components on the network and in the client compute systems, which talk directly to the applications. It’s that end-to-end approach which means we understand the workloads and how they interact with compute and the network, and with the storage,” he explains.

“When you build systems for the mass market, then you're building for mass market issues, which typically aren't at scale. The difference is that all sorts of unusual cases happen quite regularly when you have thousands of GPUs, large networks, and tens of thousands of SSD drives. It’s those unusual events that DDN is accustomed to. So it comes down to things like drive management: we've got mechanisms in the file system that allow us to handle failures in storage hardware without incurring long rebuild processes, get back into service much quicker, and protect performance through those sorts of hardware issues.”

All of this means that DDN is well positioned to work with customers who suddenly find their data exploding because they have started to pay attention to all their sources of data, and to help them extract value from it, typically through AI.
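Coomer's point about rebuilds can be illustrated with some back-of-the-envelope arithmetic. The sketch below is a generic illustration, not DDN's actual mechanism; the drive size, sustained rebuild speed, and 50-way spread are assumptions. It shows why spreading reconstruction work across many drives gets a system back into service far faster than a classic rebuild onto a single hot spare:

```python
# Illustrative only: compare a classic single-spare rebuild with a
# declustered rebuild whose work is spread across many drives.
# Assumed figures: 15 TB drive, 200 MB/s sustained per-drive rebuild rate.

DRIVE_TB = 15
SPEED_MBPS = 200  # MB/s one drive can sustain during a rebuild

def rebuild_hours(drive_tb: float, writers: int, speed_mbps: float) -> float:
    """Hours to reconstruct one failed drive's data when the work is
    spread across `writers` drives working in parallel."""
    total_mb = drive_tb * 1_000_000  # TB -> MB (decimal units)
    return total_mb / (writers * speed_mbps) / 3600

classic = rebuild_hours(DRIVE_TB, writers=1, speed_mbps=SPEED_MBPS)
declustered = rebuild_hours(DRIVE_TB, writers=50, speed_mbps=SPEED_MBPS)

print(f"single-spare rebuild: {classic:.1f} h")   # roughly a day
print(f"declustered rebuild:  {declustered:.1f} h")  # well under an hour
```

Under these assumed numbers the single-spare rebuild takes around 21 hours while the 50-way declustered rebuild finishes in about 25 minutes, which is the gap between a day of degraded performance and a brief blip.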

You got cloud

Aware that cloud services now form an important part of enterprise IT strategy, DDN has also developed EXAScaler Cloud. This packages the firm’s parallel file system in a cloud-native form that is easy to deploy and optimised to deliver the high throughput and low latency needed to manage large volumes of data, a capability whose absence has limited the wider realisation of in-the-cloud AI and other data-intensive workloads, according to the firm.

A key capability for AI is that EXAScaler, whether on-premises or in the cloud, now supports access via multiple protocols on top of its own native protocol, including NFS, SMB and Amazon’s S3 protocol. This provides the flexibility to use the protocol that suits a specific workload. For example, many cloud or Internet of Things (IoT) applications use the S3 protocol to push data to a storage back-end, but file access is far superior when it comes to analysing the data or using it to train an AI model.
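The object-versus-file trade-off can be sketched with local stand-ins (the fixed record size and toy dataset below are assumptions for illustration, not anything from EXAScaler): an object-style GET pulls the whole payload into memory, while POSIX file access lets a training loop seek straight to the one sample it needs.

```python
# Hypothetical illustration: whole-object reads vs random file access.
import os
import tempfile

RECORD = 4096       # assumed fixed sample size in bytes
N_RECORDS = 1000    # toy dataset of 1,000 samples

# Write a toy binary dataset to a temporary file
path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
with open(path, "wb") as f:
    f.write(os.urandom(RECORD * N_RECORDS))

def get_object(p: str) -> bytes:
    """Object-style access: fetch the entire payload, then slice in memory."""
    with open(p, "rb") as f:
        return f.read()

def read_sample(p: str, i: int) -> bytes:
    """File-style access: seek directly to sample i and read only it."""
    with open(p, "rb") as f:
        f.seek(i * RECORD)
        return f.read(RECORD)

whole = get_object(path)        # transfers all 4,096,000 bytes
sample = read_sample(path, 42)  # transfers just 4,096 bytes
print(len(whole), len(sample))  # 4096000 4096
```

For ingest, pushing a whole object is natural; for training, where each epoch touches samples in random order, the seek-and-read pattern avoids repeatedly transferring data the model does not currently need.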


DDN is not just waiting for enterprise customers to move in its direction, and has taken multiple steps to gain enterprise market share. One of these was the acquisition of the Lustre development team from chipmaker Intel in 2018. Lustre is a parallel file system widely used in HPC thanks to its ability to manage huge volumes of data, as demonstrated in DDN’s own EXAScaler product line.

As well as gaining more control over its own destiny, DDN wanted to expand the file system’s suitability for applications beyond traditional HPC workloads to include AI and analytics frameworks and emphasise greater usability and a more enterprise-oriented set of features.

“We went on a mission to really expand out of our traditional space, which was originally HPC,” says Coomer. “A few years ago that expanded into AI, so now AI is a big part of the business, and we've got some large customers there now. But on the tail of the acquisition of Lustre, we had several opportunities to make more acquisitions that would help us expand into other exciting new spaces.” These opportunities include cloud analytics, the telecoms market - especially 5G - and enterprise storage.

DDN’s push into general-purpose enterprise storage kicked off with the acquisition of Tintri in 2018. Tintri’s Intelligent Infrastructure products manage storage at the virtual machine level, making life easier for admin staff in virtualized environments. DDN gained about 3,000 enterprise customers from this move. Nexenta, a software-defined storage vendor, followed, bringing another 2,000 customers; then came IntelliFlash, a provider of high-performance all-flash storage arrays, with another 3,000 enterprise customers.

The result is that DDN now comprises the At Scale division with its existing AI, analytics and HPC products, and an enterprise division known as Tintri, which combines the three recent acquisitions. “Nexenta added something different into our portfolio, and that was helping us address the expansion of 5G,” says Coomer. “There are many more cell base stations required to support 5G, low latencies and higher throughputs are needed. That’s quite a storage challenge.

“Nexenta is software defined, so it can be run in the cloud, on virtual machines, or in containers, delivering a robust enterprise storage system to service these 5G workflows.” Meanwhile, IntelliFlash is not software defined, but instead offers high-performance all-flash storage arrays with unified data services. This filled a gap in DDN’s enterprise portfolio, according to Coomer, by providing fast block storage for customers ranging from small and medium-sized businesses to the Fortune 500.

Since the formation of the Tintri group, DDN has worked to cross-fertilise the best features between the three acquisitions. For example, the Nexenta software had superior file serving capabilities which have now been merged into the IntelliFlash arrays, according to Coomer.

However, perhaps the most important technology that DDN is looking to spread across its portfolio is the predictive analytics capability that Tintri brought with it. This uses telemetry data gathered from across the infrastructure to glean insights on how to optimise performance. “Every day, data is uploaded from Tintri systems into the cloud, and various analytics are applied. And we've started the process of bringing data analytics to all products, including the At Scale ones, so At Scale customers will benefit from cloud analytics as well,” says Coomer.

This application of cloud analytics across the board is intended to help combat the growing complexity of storage and IT infrastructure in general, as accelerators such as GPUs, along with cloud compute and cloud storage, become an integral part of the enterprise infrastructure mix.

“All this brings in complexity, and often customers don't have enough people to manage it and really take advantage of the economic and productivity benefits that can be born out of all these new options,” Coomer said.

“So the analytics platforms we’re developing, their purpose is to allow you to handle the increasing complexity of physical and cloud environments when it comes to data and storage, and allow you to get the value out of that without increasing your cost base and without needing more personnel to handle that complexity,” he adds.

“We're able to cross pollinate all the algorithms across the different products. Some of these algorithms are in the context of virtual machines, but the algorithms themselves are also applicable to block systems, file systems, whatever. So if you're doing anomaly detection on latency for a VM, you can do anomaly detection on latency for an application in HPC.”
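As a rough sketch of the protocol-agnostic latency anomaly detection Coomer describes (a simple rolling z-score of our own devising, not Tintri's actual algorithm), the same function applies unchanged whether the samples come from a VM or an HPC application:

```python
# Illustrative latency anomaly detection via a rolling z-score.
# The window size and threshold are assumptions, not Tintri parameters.
from statistics import mean, stdev

def latency_anomalies(samples_ms, window=20, threshold=3.0):
    """Flag indices whose latency deviates more than `threshold`
    standard deviations from the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples_ms)):
        ref = samples_ms[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(samples_ms[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady ~2 ms latency with one 50 ms spike at index 30 --
# the trace could equally be VM storage latency or HPC job I/O latency.
trace = [2.0 + 0.1 * (i % 3) for i in range(60)]
trace[30] = 50.0
print(latency_anomalies(trace))  # flags index 30
```

Because the detector only sees a stream of numbers, nothing about it is specific to virtual machines, which is the portability Coomer is pointing at.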

In AI and HPC environments, being able to quickly pinpoint the root cause of an issue is vital, as idle infrastructure can easily cost upwards of $100,000 for each day that a failure takes to diagnose and fix.

“For the At Scale business, that predictive analytics capability will ultimately make that process much, much quicker even to the point where we can start to provide hints or tips about where to look,” said Coomer.

This also becomes important in the light of a recent survey of enterprises around the world by Hyperion Research, which found that the greatest operational challenges in HPC storage infrastructure were recruiting and training storage experts, followed by the storage installation time and costs, then the tuning and optimisation of that infrastructure.

Many of these challenges tended to be overlooked at the procurement stage, where the same survey showed that buyers largely focus on performance and purchase cost, without considering how difficult or costly it will be to run their storage infrastructure.


To summarise, DDN has watched carefully as AI and HPC workloads find their way into the enterprise. As organisations grapple with the volumes of data required for AI and advanced analytics, DDN believes it has the right storage solutions to deliver the performance and cost-effectiveness required.

Sponsored by DDN
