Opinion Among all the uncertainties in this world right now, there is at least one constant: applications still absolutely rely on data storage.
The performance of code – from enterprise resource planning (ERP) and online transactional processing (OLTP) software to machine-learning tools, high-performance computing workloads, and real-time analytics – depends on the speed at which data is served from memory.
And with this performance concern in mind, enterprises find themselves looking at their warehouses of disk-based storage systems, and wondering: is there anything faster, and at what cost?
Storage-class memory (SCM) is one fairly new option, and it sits somewhere between flash and DRAM, performance-wise. SCM has higher latency and greater storage density than DRAM; is persistent, unlike DRAM; and is generally less expensive than DRAM. Against flash, SCM is more expensive, though it reads and writes data more than ten times faster, with fine-grained access at the byte level, just like RAM, rather than in big blocks like NAND flash.
SCM can be packaged as solid-state drives and use standard SSD interfaces, or, for more speed, built as DIMMs and accessed across a host server's memory channels. In other words, and crucially, you can plug SCM into your machines as faster-than-flash, more-expensive-than-flash SSDs, and as larger-than-RAM, slower-than-RAM DIMMs.
Operating systems can provide applications generic access to SCM as mass storage, or present it as large amounts of RAM. An app doesn't have to be specifically aware it is using SCM: the code could think it's talking to ordinary DRAM, or some part of the file-system. In other words, you don't have to tailor software for SCM – let the operating system and underlying hardware take care of that transparently.
Alternatively, if an app is aware of specific SCM in a system, it can request, via the OS or similar low-level code, direct access to a large region of SCM to use. For instance, Intel's Optane SCM provides various APIs and access modes that software can potentially tap into.
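As a rough illustration of what that direct access looks like, the sketch below memory-maps a file and reads and writes it byte by byte – the same system-call path an SCM-aware app could use against a file on a DAX-mounted persistent-memory filesystem. An ordinary file stands in here so the sketch runs anywhere; the filename is made up for illustration, and production code would more likely use a dedicated library such as Intel's PMDK rather than raw mmap.

```python
import mmap
import os

# Sketch: map a region into the process's address space and touch it
# with byte-granularity loads and stores, no block I/O calls involved.
# On real SCM hardware, "scm_region.bin" would live on a DAX-mounted
# persistent-memory filesystem; here it is just a plain file.
path = "scm_region.bin"
size = 4096

with open(path, "wb") as f:
    f.write(b"\x00" * size)  # reserve the region up front

with open(path, "r+b") as f, mmap.mmap(f.fileno(), size) as region:
    region[0:5] = b"hello"   # byte-addressable store, like writing to RAM
    region.flush()           # ask the OS to push the bytes to the medium
    readback = bytes(region[0:5])

os.remove(path)
```

The point of the exercise: once the mapping exists, the application manipulates the storage with ordinary memory operations rather than read/write system calls per block.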
Bear in mind, if an application is RAM intensive, you don't want to configure the code or host system to use SCM DIMMs directly: you get loads of capacity, but the latency will hurt. Consider keeping regularly accessed data in normal high-speed DRAM DIMMs, and letting the SCM DIMMs hold the not-as-hot information. Some vendors pair DRAM, acting as a fast cache, with SCM DIMMs in a system to avoid slowdowns.
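To picture that DRAM-as-cache arrangement, here is a toy model of the policy only – in reality the tiering is done by the memory controller or operating system, not by application code, and the class and its names are invented for illustration. Hot items sit in a small fast tier standing in for DRAM; everything else lives in a large slow tier standing in for SCM, with promotion on access and least-recently-used eviction.

```python
from collections import OrderedDict

# Toy sketch of a DRAM cache in front of SCM: small fast tier, big slow
# tier, promotion on access, LRU eviction when the fast tier overflows.
class TieredStore:
    def __init__(self, dram_slots):
        self.dram = OrderedDict()  # small, fast tier (stand-in for DRAM)
        self.scm = {}              # large, slower tier (stand-in for SCM DIMMs)
        self.dram_slots = dram_slots

    def put(self, key, value):
        self.scm[key] = value      # everything lands in the big tier first

    def get(self, key):
        if key in self.dram:                   # hot path: serve from DRAM
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.scm[key]                  # slow path: fetch, then promote
        self.dram[key] = value
        if len(self.dram) > self.dram_slots:
            self.dram.popitem(last=False)      # evict least recently used
        return value

store = TieredStore(dram_slots=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
store.get("a"); store.get("b"); store.get("c")  # "a" falls out of the fast tier
```

The balance the article describes is exactly this trade: keep the hot working set where latency is lowest, and let capacity-hungry but cooler data sit on the cheaper, slower tier.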
In effect, you can place SCM where you like and access it however you want; just be careful to balance and tune it all so that you're not running into SCM's limitations, and are instead making the most of its benefits.
What does it look like inside?
There are several technology candidates for SCM. These include Magnetic RAM (MRAM) and Spin-Transfer Torque RAM (STT-RAM), which flip the magnetic orientation of material within the chips – the direction representing a binary 1 or 0 in each memory cell – to provide non-volatile, byte-addressable storage.
Then there’s Phase-Change Memory (PCM), which is made from a chalcogenide glass: this material's state can be switched between crystalline and amorphous by applying electrical current. The two states have different levels of electrical resistance, allowing each memory cell within a chip to store a binary 1 or 0. Finally, NRAM, aka Nanotube RAM, uses carbon nanotubes to provide non-volatile storage that's about as fast as DRAM and possesses impressive endurance.
Make no mistake, SCM is coming to the enterprise. According to IDC, strong growth in next-generation workloads, such as analytics, machine learning, and IoT sensor processing, will drive the development and deployment of storage technologies that can feed these applications. For the most part, you should expect operating system vendors to pick up the slack, and provide more and improved interfaces and tools for managing SCM in a painless and generic way. ®