Hyperconverged infrastructure (HCI) has been one of the industry success stories of the past several years, offering IT departments a convenient building-block approach to deploying and scaling infrastructure, with the promise of lower overall costs and easier management.
HCI started out as an easier way to support virtual machines. Instead of building the infrastructure out of discrete servers and SAN storage components, which can be complex and costly to configure, each node in an HCI cluster is a ready-made, appliance-like device with its own internal storage. Both storage and compute resources are virtualised and pooled via a software layer so they can be allocated as required. The software layer typically also provides a high level of automation to make management easier.
In the early days, HCI was deployed for niche use-cases such as virtual desktop infrastructure (VDI), where direct-attached internal storage was the most practical solution for the heavy disk I/O demands imposed by running multiple virtual desktops. Gradually, organisations started to see that HCI could support a broader range of workloads, and many enterprises are increasingly looking to deploy even mission-critical applications, such as databases, using HCI.
But as modern workloads continue to evolve, organisations need to ensure their data centre infrastructure will be able to handle the demands of ever-more data-hungry workloads. In-memory databases, complex analytics, and even machine-learning algorithms, are all starting to find their way into applications, and are placing a new stress on storage performance.
Flash storage has been touted as the answer to these issues, and enterprises are increasingly migrating from disk-based to flash-based storage as the price gap between the two narrows. But flash is still costly, and for various reasons may not have the optimum price/performance characteristics that enterprises are looking for.
Instead, newer technology with different performance characteristics, such as Intel® Optane™ non-volatile memory, could provide a better solution.
Where latency makes a difference
Optane™ is based on Intel® 3D XPoint™ technology, which offers both higher performance and lower latency than NAND flash. It is also byte-addressable, meaning it can be used either to build high-performance solid state drives (SSDs) or to populate memory modules that are accessed like RAM.
A typical flash chip may have a read latency of 25 microseconds, a write latency of 220 microseconds, and an erase latency somewhere in the range of 1,500 microseconds. By comparison, an Optane™-based SSD has an average read/write latency of 10 microseconds. The upshot is that an Intel® Optane™ SSD is far better suited than a NAND flash SSD to write-intensive workloads.
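To put those figures in perspective, here is a quick back-of-envelope calculation using the illustrative latencies quoted above. It assumes writes are issued one at a time, with no queueing or parallelism, so it shows the latency gap rather than real drive throughput.

```python
# Illustrative write-path comparison using the article's example figures
# (typical NAND flash vs an Optane-based SSD). These are not measurements
# of any specific drive, and real drives overlap many writes in flight.

NAND_WRITE_US = 220    # NAND flash write latency, microseconds
OPTANE_WRITE_US = 10   # Optane SSD average read/write latency, microseconds

writes = 100_000       # hypothetical burst of small random writes

nand_total_ms = writes * NAND_WRITE_US / 1000
optane_total_ms = writes * OPTANE_WRITE_US / 1000

print(f"NAND:   {nand_total_ms:,.0f} ms")                    # 22,000 ms
print(f"Optane: {optane_total_ms:,.0f} ms")                  # 1,000 ms
print(f"Speed-up: {nand_total_ms / optane_total_ms:.0f}x")   # 22x
```

Even with realistic parallelism, the per-operation gap is what matters for a write buffer that must acknowledge each write quickly.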
How does all this apply to HCI systems? Let’s take VMware’s platform as an example, as it accounts for a large share of HCI deployments, and in particular its vSAN software-defined storage layer. With vSAN, every node in a cluster has its local storage configured into one or more disk groups. Within each group, one drive is categorised as the cache tier, and the remaining drives as the capacity tier. Traditionally, the capacity tier was made up of rotating disks, and the cache tier used SSDs.
Today, all-flash storage is common. In this configuration, the cache tier buffers write operations, and according to VMware, it is preferable to use SSDs with low latency and very high endurance here. The capacity tier serves read requests for anything outside the cache, and can thus be fitted with SSDs that have a greater capacity but lower cost and endurance than the cache tier.
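On an individual ESXi host, a disk group of this shape can be assembled from the command line with `esxcli vsan storage add`: one cache-class device plus one or more capacity devices. The NAA device identifiers below are placeholders, not real drives, and in practice most administrators do this through vCenter instead.

```shell
# Sketch: build a vSAN disk group on an ESXi host.
# One low-latency, high-endurance device becomes the cache tier (-s);
# the remaining drives become the capacity tier (-d).
# Device identifiers are placeholders for this host's actual drives.
esxcli vsan storage add \
    -s naa.CACHE_DEVICE_ID \
    -d naa.CAPACITY_DEVICE_1 \
    -d naa.CAPACITY_DEVICE_2

# Inspect the resulting disk-group membership
esxcli vsan storage list
```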
It doesn’t take a genius to work out that, as Intel® Optane™ SSDs have lower latency and greater endurance than NAND flash, they are ideal candidates for the cache tier. The capacity tier, meanwhile, is less critical, and flash SSDs optimised for read-intensive workloads, such as the Intel® SSD DC P4500 Series, would be a good fit.
According to research by IT storage analyst firm Evaluator Group, fitting Intel® Optane™ DC SSDs in place of flash drives for the cache tier can deliver a significant performance improvement. It found that performance as measured by the IOmark-VM workload benchmark almost doubled with Intel® Optane™ DC SSDs, when tested with vSAN 6.6 running on Xeon® Scalable processors.
The results were more striking when compared with older HCI hardware, where the systems with Intel® Optane™ DC SSDs demonstrated about ten times the performance of previous generation Xeon® servers using flash for the cache tier and hard disks for the capacity tier.
That performance does come at a price, as Optane™ SSDs are more costly than their NAND flash counterparts. Intel® contends that the extra performance they deliver can actually lead to cost savings, especially in an HCI environment where virtual machines are operating. The reasoning goes something like this: customers often believe their systems are CPU-limited, but in many cases the processor cores are actually blocked, waiting for some storage I/O operation to complete. If that I/O completes faster, it frees up a swathe of CPU cycles that can be used to perform useful work.
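A toy model makes the arithmetic behind that argument visible. Assume each transaction needs a fixed slice of CPU time plus one synchronous storage write; all of the numbers below are illustrative assumptions, not Intel benchmark results.

```python
# Back-of-envelope model of the "cores blocked on I/O" argument.
# Per transaction, a core does CPU_US microseconds of useful work, then
# waits IO_US microseconds for a synchronous storage write to complete.
# All figures are illustrative assumptions.

CPU_US = 50  # assumed compute time per transaction, microseconds

def per_core_throughput(io_us: float) -> float:
    """Transactions per second for one core doing synchronous I/O."""
    return 1_000_000 / (CPU_US + io_us)

slow = per_core_throughput(220)  # NAND-class write latency
fast = per_core_throughput(10)   # Optane-class write latency

print(f"NAND-backed:   {slow:,.0f} tx/s per core")
print(f"Optane-backed: {fast:,.0f} tx/s per core")
print(f"Gain: {fast / slow:.1f}x from the same CPU")   # 4.5x
```

In this sketch, cutting the wait from 220µs to 10µs lifts per-core throughput 4.5x with no extra silicon, which is the shape of the argument, even though real workloads overlap I/O with computation and see smaller gains.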
With low-latency Intel® Optane™ SSDs serving as the write-buffering cache tier in a vSAN environment, the cache absorbs all of the random writes, according to Intel®, so simply by fitting faster storage a customer can scale up to more virtual machines per host.
Alternatively, customers can use the greater efficiency of the Optane™ SSD to make cost savings by reducing the size of the cache tier required. Intel® claims that caching SSDs previously had to be sized so they were at least 10 percent of the size of the capacity tier, but the higher performance and lower latency of Optane™ means that 2.5 to 4 percent is sufficient. This means that while a 16TB capacity tier used to require a 1.6TB NAND SSD for caching, customers can now meet that requirement with a 375GB Optane™ SSD.
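The sizing arithmetic from that claim is easy to check. The helper below applies the old 10-percent rule of thumb and Intel's claimed 2.5-to-4-percent range to the 16TB example; the function name is ours, introduced for illustration.

```python
# Cache-tier sizing arithmetic from the paragraph above: the traditional
# rule of thumb (cache >= 10% of capacity) versus Intel's claimed
# 2.5-4% range for Optane. Sizes are in GB (1TB = 1,000GB here).

def cache_size_gb(capacity_gb: float, ratio: float) -> float:
    """Minimum cache-tier size for a given capacity tier and sizing ratio."""
    return capacity_gb * ratio

capacity = 16_000  # 16TB capacity tier, as in the article's example

old_rule = cache_size_gb(capacity, 0.10)   # 1,600 GB NAND cache
new_low  = cache_size_gb(capacity, 0.025)  # 400 GB
new_high = cache_size_gb(capacity, 0.04)   # 640 GB

print(old_rule, new_low, new_high)  # 1600.0 400.0 640.0
```

Note that a 375GB Optane™ drive actually sits slightly under the 2.5 percent figure for 16TB (about 2.3 percent), so the article's example represents the aggressive end of Intel's claimed range.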
Optane™ technology is also now available in Intel® Optane™ DC Persistent Memory modules, which fit into standard DIMM sockets inside server systems based on Second Generation Intel® Xeon® Scalable processors.
In this configuration, the technology is accessed like DRAM and can be used to expand the overall memory capacity of the host system, as Intel® Optane™ DC Persistent Memory modules are available in higher capacities than standard DDR4 DIMMs.
VMware has already added support for this new technology into its platform, with vSphere 6.7 Express Patch 10 enabling users to take advantage of both App Direct Mode and Memory Mode.
No changes to application code are required when an Intel® Optane™ DC Persistent Memory module is in Memory Mode, because the module appears to the system to work just like DRAM. Behind the scenes, however, the memory controller uses the system DRAM to cache the Optane™ memory area. App Direct Mode is for applications and operating systems that are aware there are two types of memory in the system, and can place large data structures, or data that needs to be persistent, into the Optane™ memory area.
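In App Direct configurations, persistent memory is commonly exposed to applications as files on a DAX-mounted filesystem that they memory-map and access byte by byte. The sketch below shows that access pattern only; it uses an ordinary temporary file as a stand-in so it runs anywhere, whereas on real hardware the path would point at a DAX mount (and libraries such as PMDK would handle persistence ordering properly).

```python
# Minimal sketch of the App Direct access pattern: memory-map a file and
# update it with load/store-style operations. On real persistent memory
# the file would live on a DAX-mounted filesystem, giving the application
# direct byte-addressable access; here an ordinary temporary file stands
# in so the sketch runs anywhere.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_region")  # stand-in for a DAX file

# Create and size the region (real pmem files are pre-allocated similarly).
with open(path, "wb") as f:
    f.truncate(4096)

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"   # byte-addressable store into the mapping
    region.flush()           # on pmem, flushing makes the store durable
    region.close()

with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```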
Using Memory Mode, VMware has found that a server can be configured with 33 percent more memory than a server using just DRAM. In tests using the VMmark virtual machine benchmark suite, the virtualization giant was able to use the enlarged memory space to achieve 25 percent higher virtual machine density and 18 percent higher throughput than a comparable cluster not fitted with Optane™.
Meanwhile, technology moves on, and Intel® has already announced a forthcoming second generation of Optane™ products based on improved Intel® 3D XPoint™ technology. This will arrive in the form of Intel® Optane™ DC SSDs currently codenamed Alder Stream, and future Intel® Optane™ DC Persistent Memory modules currently codenamed Barlow Pass.
It is anticipated that the next generation of Intel® Optane™ DC SSDs will exhibit about 50 percent higher performance than the current generation, while the next generation of Intel® Optane™ DC Persistent Memory modules is expected to double the available capacity to 256GB, 512GB, or 1TB per DIMM. If this proves to be the case, it will enable a further boost in performance.
Intel® Optane™ technology isn’t the only way to optimise performance in an HCI deployment. However, fitting Intel® Optane™ DC SSDs as the cache tier for vSAN is a simple and effective step that any VMware shop can take, while other changes may not deliver the kind of improvements needed to keep pace with the demands of emerging data-intensive applications.
Likewise, Intel® Optane™ DC Persistent memory can boost applications that are memory-constrained, without having to fill up the host system with costly DRAM. Organisations should, of course, evaluate whether their specific workload would benefit from this before committing.
Sponsored by Intel.