
PernixData chap: We are to storage as Alfred Nobel was to dynamite

Snazzy memory tech demolishes DRAM volatility restrictions, says chief techie

Resolving issues

In fact, SAP recommends using local storage resources, such as flash, to provide sufficient performance and data protection for these operations. Virtualising such a platform becomes a challenge, as mobility is reduced due to the use of isolated server-side storage resources, which impede the operations and clustering services virtualised data centres have relied upon for almost a decade.

El Reg: You would have us understand that DFTM (Distributed Fault Tolerant Memory) technology resolves these issues?

Frank Denneman: Yes. DFTM allows every application in a virtualised data centre to benefit from the storage performance potential of RAM with no operational or management overhead.

In many ways, the introduction of DFTM solutions is comparable with the introduction of vSphere High Availability (HA). Before vSphere HA, the architect had to choose between application-level HA capabilities or clustering services such as Veritas Cluster Server or Microsoft Clustering Services, with each solution impacting IT operations in its own way.

vSphere HA empowers every virtual machine and every application by providing robust failover capabilities the moment you configure a simple vSphere cluster service.

El Reg: How does DFTM resolve the issues?

Frank Denneman: It provides fault-tolerant write acceleration, synchronously writing copies of data to acceleration resources on multiple hosts to protect against device or network failure.

The net effect is that you are able to get predictable and persistent microsecond storage performance. What’s more, with new developments popping up in the industry every day, it is not unrealistic to hope that we will hit nanosecond latencies for storage performance. When that happens, we can absolutely and fundamentally change what applications expect out of storage infrastructure.
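The replication scheme Denneman describes can be sketched in a few lines: a write is acknowledged only once copies have landed on the acceleration resources of multiple hosts, so the loss of a single device or host cannot lose data. This is a minimal illustrative sketch, not PernixData's implementation; the class and host names are invented for the example.

```python
class Host:
    """A host contributing RAM as an acceleration resource (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.cache = {}              # block address -> data held in RAM

    def store(self, addr, data):
        self.cache[addr] = data
        return True                  # ack once the copy is in memory


class FaultTolerantWriteCache:
    """Acknowledge a write only after `replicas` hosts hold a copy."""
    def __init__(self, hosts, replicas=2):
        if replicas > len(hosts):
            raise ValueError("not enough hosts for requested replica count")
        self.hosts = hosts
        self.replicas = replicas

    def write(self, addr, data):
        acks = 0
        for host in self.hosts:
            if host.store(addr, data):   # synchronous copy to each host
                acks += 1
            if acks == self.replicas:
                return True              # safe: survives a single host loss
        return False                     # could not reach the replica count


hosts = [Host("esx01"), Host("esx02"), Host("esx03")]
cache = FaultTolerantWriteCache(hosts, replicas=2)
cache.write(0x10, b"block-data")
```

The key design point is that the acknowledgement is synchronous with replication: the application never sees a write "complete" that exists in only one host's volatile RAM.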

El Reg: In what way?

Frank Denneman: Application developers used to expect storage platforms to only provide performance in the millisecond range. This limited them as they saw no need to improve their code beyond a certain point; the lack of storage performance was perceived as an insurmountable barrier. With nanosecond access latency, for the first time ever storage performance is not the bottleneck. And, with memory as a server-side acceleration resource, extremely fast storage is affordable.

Then the real question becomes, what if you can have a virtual data centre with millions of IOPS available at microsecond storage access latency levels? What would you do with that power? What new types of application would you develop, and what new use cases would they enable?

If we can change the core assumption around the storage subsystem and the way it performs, then we could spur a new revolution in application development and open a whole new world of possibilities.

