Memory constrained? Amazon and AMD now offer 1.5TB VMs
Or about two and a half Chrome tabs
Amazon Web Services is sticking with AMD for its next generation of memory-optimized instances in EC2.
The Epyc Milan-powered R6a instances, announced today, more than double the memory capacity and network bandwidth, while delivering 35 percent higher performance than the previous-generation R5a instances, the cloud giant claims.
The R6a family of instances is available in 11 varieties ranging from the lowly R6a.large, which pairs two vCPUs with 16GB of memory and 12.5 Gbit/s of network bandwidth, to the R6a.48xlarge and R6a.metal on the high end, which offer 192 vCPUs, 1.5TB of RAM and 50 Gbit/s networking.
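The sizes scale linearly: every step in the family keeps the same 8GB-per-vCPU ratio, from 16GB across two vCPUs at the bottom to 1,536GB across 192 vCPUs at the top. A quick sanity check (a sketch using only the figures quoted above):

```python
# R6a endpoints quoted above: instance name -> (vCPUs, memory in GB)
r6a = {
    "r6a.large": (2, 16),
    "r6a.48xlarge": (192, 1536),  # 1.5TB
}

for name, (vcpus, mem_gb) in r6a.items():
    # Both ends of the range work out to the same memory-per-vCPU ratio
    print(f"{name}: {mem_gb / vcpus:.0f} GB per vCPU")
```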
Like the R5a instances they replace, the VMs are designed for highly memory-intensive applications. Examples include relational and NoSQL databases; distributed web-scale in-memory caches, such as Memcached and Redis; in-memory databases; and big-data analytics clusters, such as Hadoop and Spark. The instances are also certified for use with SAP workloads out of the gate.
All of the instances are built on Amazon's Nitro SmartNICs, which accelerate input/output-intensive workloads, common in networking, storage, and security applications, by offloading them to dedicated domain-specific silicon. This frees CPU resources to run tenant workloads.
The instances also take advantage of Amazon's Elastic Fabric Adapter, which provides up to 40 Gbit/s of connectivity to other nodes and block-storage resources.
Much of the performance improvement claimed by AWS for this generation can be attributed to instructions-per-clock (IPC) and frequency gains in AMD's third-gen Epyc processors, which gave the house of Zen roughly 19 percent higher performance than its second-gen Rome chips.
In this case, AWS appears to be using AMD's Epyc 7643, a 48-core/96-thread part with a 2.3GHz base clock and, under ideal operating conditions, a 3.6GHz max boost clock.
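The core count lines up with a dual-socket box: two 48-core chips with simultaneous multithreading enabled yields the 192 vCPUs on the top-end instances. A back-of-the-envelope check, assuming AWS's usual one-vCPU-per-hardware-thread mapping:

```python
sockets = 2            # dual-socket server (an assumption based on the math)
cores_per_socket = 48  # Epyc 7643
threads_per_core = 2   # SMT enabled

vcpus = sockets * cores_per_socket * threads_per_core
print(vcpus)  # 192, matching r6a.48xlarge and r6a.metal
```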
Until recently, AMD held a memory-density edge, boasting support for eight memory channels, two more than Intel's second-gen Xeon Scalable, running at 3,200 mega-transfers per second. Intel closed this gap with the launch of its Ice Lake Xeon Scalable processors early last year.
However, unlike Intel's chips, AMD's Milan CPUs were drop-in compatible (with a BIOS update) with older second-gen Epyc server boards, which may have influenced AWS's decision to stick with AMD.
In addition to higher performance, AWS cited support for AMD's transparent single-key memory encryption, which encrypts data in physical memory, as a key feature of the new instances.
AWS's R6a instances are available now in the cloud giant's US East and West, Asia Pacific, and Europe regions. ®