Sponsored If you face a computing problem, one solution is to throw money at it. And if money is really no object, you can throw memory at it instead.
Today’s CPUs pack tens of cores, representing a vast amount of potential processing power. But getting the most out of that power depends on feeding data to the cores as quickly as possible.
Data from an HDD or flash drive has to be found and moved across the network or system bus before being loaded into DRAM, fed into the processor, spat out again, fed in again, written back to storage, pulled again… you get the picture. So, clearly, the more data you can hold in memory, close to the CPU, the better.
CPU power has grown broadly in line with Moore’s law over the years, and today’s multicore, scalable processors can support massive amounts of memory, with Second Generation Intel® Xeon® Scalable processors supporting up to 4TB of DDR4 memory per socket. Storage too has delivered steadily better price/performance. But somehow, DRAM, for all its speed, has not followed the same trajectory.
“We've seen that DRAM isn't scaling. You pay tremendously for that high capacity,” says Kristie Mann, senior director of product management for Intel's Optane™ persistent memory products for the data center. The result is that DRAM is often the single costliest element in a fully loaded enterprise server.
While this has left data centre architects fretting over the balance of components in their servers, it has also presented a challenge for software developers. Good coding has often meant being as economical as possible when placing data in memory, which can seem like missing the point.
Even where developers have found workarounds, such as memory caching for database acceleration, traditional memory presents one other problem – its volatility. If the system is powered down, data disappears from memory, kicking off the laborious process of reloading it.
Which all amounts to a big problem for companies looking to exploit the potential of in-memory processing for workloads such as real-time analytics.
Almost 60 per cent of CIOs plan to increase their investment in business intelligence and analytics next year, according to Gartner, putting it just behind cybersecurity as an investment target.
It is the ability to query and analyse vast datasets in memory that allows retailers or video streaming services to deliver instant recommendations to customers. Or for financial services organisations to use vast amounts of historical data to underpin real time fraud detection systems or make instant credit decisions.
So, enterprises are left with the dilemma of going all out on memory, or being economical and not realising the full potential of either their data or the rest of their infrastructure.
But, Mann says, “If you could have your storage, in memory, operating at memory speeds, without the block file system overhead, you could do things very differently.”
Memory-hungry applications
Which is why Intel Optane persistent memory has the potential not just to change the way enterprise servers and data centres are designed, but to change the way enterprise software in general, and real-time analytics in particular, is developed and deployed.
Intel’s Optane persistent memory takes the same byte addressable 3D XPoint media used in Optane SSDs, packages it as a DIMM that can sit on the same DDR bus as DRAM, and applies a protocol called DDR-T, which allows memory to be addressed asynchronously, at different speeds. The memory controller infrastructure is split between the processor and the DIMM.
The result, says Mann, is that “you have both hardware and software deciding where to place the data. And you have persistence.” This can have a dramatic effect on memory-hungry applications, as shown in benchmarks published by Intel. In a comparison of two virtualised Redis installations – one running 48 virtual machines with 192GB of DRAM and 1.5TB of Intel Optane persistent memory, the other running 72 virtual machines with 1.5TB of DRAM – both showed CPU utilisation of 76 per cent, while the persistent memory server delivered around 90 per cent of the throughput efficiency of the DRAM server.
In a comparison of two similar systems running Apache Spark, the persistent memory-based system delivered better response times on queries for the TPC-DS (decision support) benchmark.
In both cases, the Optane-based systems deliver comparable performance with a substantially reduced tier of traditional DRAM, and therefore at a lower cost.
Beyond the workload itself, ensuring better utilisation per core means fewer servers are needed to deliver the same amount of work - while requiring less data centre real estate, less power for cooling and ventilation, and potentially, fewer licences.
Intel has already seen take-up of Optane persistent memory powered systems by over 200 companies in the Fortune 500, according to Mann. “The reason these credit card companies are looking at us for their processing and their fraud detection is because, if you can stop the fraud while it's happening, it's worth billions of dollars to them,” she says. Meanwhile, “If you're guiding shoppers’ retail purchases, you're bringing in more revenue.”
But taking full advantage of Optane persistent memory is not simply a question of swapping out a few DIMMs. The first thing infrastructure architects and software developers need to understand is that Optane can be used in two different modes.
In Memory mode, the processor addresses only the Optane persistent memory, while the system’s DRAM acts as a cache. So, swapping most of a system’s DRAM for higher capacity Optane DIMMs will certainly give a much bigger memory pool at a lower cost per GB.
It is Optane’s App Direct mode that makes software aware of memory persistence and allows the processor to address all the DRAM and all the Optane persistent memory available. App Direct mode also requires a modified memory controller in the CPU, and this comes with Intel’s Second Generation Xeon Scalable Gold or Platinum Processors.
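The choice between the two modes is made when the platform is provisioned, not in application code. As a rough sketch of what that looks like on Linux, the commands below use Intel’s ipmctl tool and the ndctl utility; exact flags vary by tool version and platform, so treat this as illustrative rather than a recipe.

```shell
# Illustrative provisioning sketch (requires Optane PMem hardware;
# flags may vary by ipmctl/ndctl version -- check the tool documentation).

# Provision all persistent memory capacity as volatile Memory mode:
ipmctl create -goal MemoryMode=100

# ...or provision it for App Direct mode instead:
ipmctl create -goal PersistentMemoryType=AppDirect

# After a reboot, expose an App Direct region as a DAX-capable namespace,
# on which a filesystem can then be mounted with the -o dax option:
ndctl create-namespace --mode=fsdax
```

Both tools require a reboot for a new memory allocation goal to take effect, which is one reason the mode decision belongs at the infrastructure planning stage.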
That is why examining both your systems and your software requirements is the first step in getting the full benefit of Optane persistent memory.
“Being a cache-based architecture, depending on the characteristics of your data set size, or the core to memory ratio, your performance may vary,” says Mann. “It might be just as good as DRAM or it could be 20 per cent lower than DRAM and so if consistent, high performance is critical to you, then you might not choose to deploy persistent memory.”
Core to memory ratio
So, Mann explains, the first step is to “look at your core to memory ratio and find out if you are limited by IO accesses, or number of cores. And also try to size your memory needs based on the number of cores that you're supporting in that workload and make sure that you're not starved for memory, because memory capacity is the primary benefit in memory mode.”
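Mann’s sizing advice can be sketched as a back-of-the-envelope calculation. The thresholds and figures below are illustrative assumptions, not Intel guidance, but they show the shape of the check: compare the working set to installed memory, and the memory-to-core ratio to a floor you consider healthy for the workload.

```python
# Illustrative core-to-memory sizing heuristic (assumed numbers, not Intel guidance).

def memory_per_core(total_memory_gb: float, cores: int) -> float:
    """Return the memory-to-core ratio in GB per core."""
    return total_memory_gb / cores

def looks_memory_starved(total_memory_gb: float, cores: int,
                         working_set_gb: float,
                         min_gb_per_core: float = 4.0) -> bool:
    """Flag a workload as memory-starved if the working set exceeds
    installed memory, or the per-core ratio falls below a chosen floor."""
    return (working_set_gb > total_memory_gb or
            memory_per_core(total_memory_gb, cores) < min_gb_per_core)

# Example: a 48-core socket with 192 GB of DRAM and a 1.5 TB working set.
print(memory_per_core(192, 48))             # 4.0 GB per core
print(looks_memory_starved(192, 48, 1536))  # True: working set far exceeds DRAM
```

A workload flagged this way is a candidate for Memory mode, since extra capacity is the primary benefit there; one that is limited by core count rather than memory would see little gain.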
Some commercial software, for example SAP HANA, can already exploit App Direct mode, says Mann, and “you know you’re going to get the same performance as in DRAM.”
The more control you have of your software stack, the more opportunity there will be to optimise for persistent memory. “Look for workloads where you have high levels of data ingest,” says Mann. “Look for applications where you're doing real time analytics and need to make fast decisions…where it could be faster than storage.”
“If you have your own on prem technology, and you're willing to do a little bit of software optimization, it's completely viable today,” says Mann. Intel provides open source libraries and adheres to open source standards, says Mann, meaning “any programming you do for our products will work for NVDIMMs, or any other storage class memory that chooses to be byte addressable.”
If you don’t have on-prem infrastructure and software that can be optimised for Optane, you can turn to the cloud, where service providers have already begun using Optane persistent memory. Oracle’s Exadata X8M service uses Optane to effectively move storage into memory, and realise a significant boost in database performance, according to Mann.
Meanwhile, Microsoft Azure offers persistent memory instances, says Mann, while launches from other cloud providers are imminent. So, between its existing take-up by those Fortune 500 companies and its adoption by cloud service providers, it’s just possible that you’ve benefited from the Optane effect without even realising it. The big question, then, is: has your competition begun benefitting from it too?
Sponsored by Intel