Cloud's new performance leader: Arm beats x86
AWS's Arm Neoverse-based Graviton4 chips set a new bar, beating AMD and Intel on both performance and price-performance in Signal65 benchmarks
Partner Content
When it comes to datacenter computing, the performance gap has been turned on its head.
For years, x86 processors from AMD and Intel were seen as the safe bet in cloud computing. Arm, built for efficiency, disrupted the mobile and edge markets, but datacenter performance remained a more challenging proving ground.
Those days are over. New analysis from the consulting firm Signal65 shows that AWS's Arm Neoverse-powered Graviton4 chips are not only leading the competition on price-performance, but significantly outpacing alternative x86 offerings from AMD and Intel on overall performance across enterprise workloads. This represents a decisive shift in the cloud computing performance landscape.
Proof is in performance
Signal65 tested AWS Graviton4 instances against comparable AMD EPYC and Intel Xeon configurations across four key datacenter workloads: large language model (LLM) inference, machine learning (ML) training and inference, database operations, and web serving. The results paint a stark picture of x86's struggles to maintain competitive footing in the modern cloud.
In LLM testing using Meta's Llama 3.1 8B model, Graviton4 delivered 168 percent higher token throughput than AMD's EPYC processors and 162 percent better performance than Intel's Xeon chips. More critically for cost-conscious cloud operators, the Arm-based instances showed 220 percent better price-performance than AMD and 195 percent better than Intel.
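Price-performance in benchmarks of this kind is typically throughput normalized by instance cost. A minimal sketch of that arithmetic, using hypothetical throughput and hourly-price figures (chosen here only to reproduce a 220 percent gap; they are not the report's actual inputs):

```python
# Sketch of how a price-performance comparison might be computed.
# The throughput and hourly-price figures below are hypothetical
# placeholders, not numbers from the Signal65 report.

def price_performance(tokens_per_sec: float, price_per_hour: float) -> float:
    """Tokens processed per dollar of instance time."""
    return tokens_per_sec * 3600 / price_per_hour

def relative_advantage(a: float, b: float) -> float:
    """Percentage by which a exceeds b."""
    return (a / b - 1) * 100

# Hypothetical instances: (tokens/sec, $/hour)
graviton = price_performance(tokens_per_sec=100.0, price_per_hour=1.00)
x86 = price_performance(tokens_per_sec=40.0, price_per_hour=1.28)

print(f"Graviton advantage: {relative_advantage(graviton, x86):.0f}%")
# prints "Graviton advantage: 220%"
```

The point of the normalization is that a chip can win on price-performance even when instances cost the same per hour, purely by finishing more work in that hour.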
Signal65 stated in the report: "Arm's large performance advantage across all Llama inferencing tests is quite notable. This consistent performance advantage is indicative of notable architectural differentiation and showcases Arm as a practical choice for CPU-based AI deployments capable of meeting various model and use case requirements."
The performance gaps extended across other AI and general compute workloads. For XGBoost ML training, Graviton4 achieved 53 percent faster training times than AMD and 34 percent faster than Intel, while delivering 64 percent and 49 percent better price-performance respectively. Database performance with Redis showed Graviton4 handling 93 percent more operations per second than AMD and 41 percent more than Intel.
Even in networking workloads using Nginx – traditionally an area where x86 has maintained strength – Graviton4 processed 43 percent more requests per second than AMD and 53 percent more than Intel, with corresponding price-performance advantages of 81 percent and 73 percent.
Speaking to the testing, the Signal65 report stated: "The impressive performance and cost characteristics demonstrated by this testing highlight the changing cloud infrastructure landscape and establish Arm as a leading option for organizations deploying AWS EC2 instances."
Tipping point
The results arrive at a pivotal moment for the datacenter processor market. Nvidia's Grace Blackwell superchip, which combines its Blackwell GPU with the Arm Neoverse-based Grace CPU, powers some of the world's most powerful AI servers and supercomputers. Meanwhile, the world's leading hyperscale cloud providers are increasingly designing their own chips on the common foundation of the Arm architecture, with Google Cloud's Axion, Microsoft Azure's Cobalt, and AWS's Graviton families signaling a clear commitment to the platform.
Industry projections suggest this trend is accelerating. Arm estimates that half of compute shipped to top hyperscalers in 2025 will be Arm-based, marking a dramatic shift from the x86 dominance that has characterized datacenters for decades.
The implications extend beyond raw performance metrics. As AI workloads become increasingly central to cloud operations, the energy efficiency advantages demonstrated by Arm-based processors could prove decisive. With datacenter operators facing mounting pressure over power consumption and operational costs, the combination of superior performance and better efficiency presents a compelling business case.
For x86 vendors, the Signal65 findings represent another data point in an increasingly challenging narrative. More than 90 percent of enterprises are running in the cloud, and as AI pushes the traditional enterprise computing environment towards a cloud-like technology stack, enterprises are moving away from legacy, off-the-shelf CPUs to take advantage of hybrid and multi-cloud environments.
While x86 providers have invested heavily in adding AI features and delivering more power-efficient SKUs, Arm-based alternatives are gaining ground in the cloud deployments that represent the future of enterprise computing.
Everywhere, all at once
What's perhaps most striking about the Signal65 results is the consistency across diverse workloads. This isn't a case of Arm excelling in one narrow use case while struggling elsewhere; the performance advantages appear fundamental to the architecture's design philosophy.
As cloud providers continue building out AI infrastructure at unprecedented scale, with industry estimates pointing to trillions in investment, the processor choices made today will shape the competitive computing landscape for years to come. Based on current trajectories and third-party validation like Signal65's analysis, that future appears increasingly Arm-shaped.
As Signal65 concluded in the report: "By leveraging the architectural advantages of Arm Neoverse CPUs, organizations can achieve measurable gains in cost efficiency and performance, helping to meet the requirements of both traditional and newly emerging workloads."
The question for x86 vendors is no longer whether they can match Arm's performance in isolated benchmarks, but whether they can fundamentally restructure their approach to compete in an efficiency-first, AI-driven datacenter world. The Signal65 results suggest that challenge may be more daunting than previously imagined.
Contributed by Arm.