Monday: Intel teases 48-core Xeon. Tuesday: AMD whips covers off 64-core second-gen Epyc server processor
Chipzilla more like Tyrannosaurus Rekt
AMD said on Tuesday it's going to roll out a 7nm 64-core second-generation Epyc server processor, dubbed Rome, in 2019.
Samples of the processor are in the hands of selected organizations for testing and evaluation. This comes a day after Intel promised, on Monday, a 14nm 48-core Cascade Lake AP Xeon chip, also due in 2019.
AMD's Rome will sport PCIe 4.0 interfaces, and the usual PCIe-based Infinity Fabric that interconnects processor sockets and any GPU accelerators. With up to 64 physical Zen 2 CPU cores, the chips can run up to 128 hardware threads. That's twice the cores and hardware threads of 2017's 14nm first-generation Zen 1 Epyc, dubbed Naples.
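For the curious, here's a minimal sketch, assuming a Linux box and x86-style /proc/cpuinfo fields, of how you'd count logical hardware threads versus physical cores on whatever silicon you're sitting on. On a fully enabled, SMT-on 64-core Rome part this ought to report 128 logical CPUs and 64 physical cores; none of this is AMD's code, just a quick way to check the distinction the vendor slides are drawing.

```python
#!/usr/bin/env python3
# Sketch (Linux only): count logical CPUs (hardware threads) vs physical cores
# by parsing /proc/cpuinfo. Physical cores are identified by unique
# (physical id, core id) pairs; each SMT sibling shares that pair.

def count_cpus(path="/proc/cpuinfo"):
    logical = 0
    physical = set()  # (physical package id, core id) pairs
    current = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:  # a blank line ends one logical-CPU stanza
                if current:
                    logical += 1
                    physical.add((current.get("physical id"), current.get("core id")))
                current = {}
                continue
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:  # the file may not end with a blank line
        logical += 1
        physical.add((current.get("physical id"), current.get("core id")))
    return logical, len(physical)

if __name__ == "__main__":
    threads, cores = count_cpus()
    print(f"logical CPUs (hardware threads): {threads}")
    print(f"physical cores:                  {cores}")
```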
As such, AMD CEO Lisa Su told journos and industry analysts in San Francisco on Tuesday that Rome will have double the performance of Naples, thanks to that core-count leap and the switch to 7nm.
Competition
The physical x64 core count blows away Intel's Xeon compute family, which only just set itself a high-water mark of 48 cores with Cascade Lake AP. Also, that line is still on Intel's 14nm process, whereas AMD is nudging ahead with TSMC's 7nm. AMD also claimed a single-socket Rome can match or outperform a dual-socket Intel Xeon Scalable 8180M setup, when running the ray-tracing C-Ray benchmark, at least.
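Why does a core-count brag map so directly onto a ray-tracing benchmark? Because ray tracing is embarrassingly parallel: each tile of the image can be rendered independently, so wall-clock time tracks how many cores you can throw at it. The toy Python sketch below (not C-Ray itself, and nothing to do with AMD's test setup) farms a stand-in compute-bound kernel across one worker and then across every available core, just to show the scaling effect the benchmark leans on.

```python
# Toy illustration: an embarrassingly parallel, CPU-bound workload scales
# roughly with the number of cores available. The "tile" kernel here is a
# dummy arithmetic loop standing in for tracing one image tile.
import time
from multiprocessing import Pool, cpu_count

def render_tile(seed: int) -> float:
    """Stand-in for tracing one image tile: pure CPU-bound arithmetic."""
    acc = 0.0
    for i in range(200_000):
        acc += ((seed * 2654435761 + i) % 1000) * 1e-6
    return acc

if __name__ == "__main__":
    tiles = list(range(512))
    for workers in (1, cpu_count()):
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(render_tile, tiles)
        print(f"{workers:3d} worker(s): {time.perf_counter() - start:.2f}s")
```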
If you've already splashed out on Naples chips, note that Rome processors are socket-compatible with that previous generation, and forward-compatible with Milan: the third-generation Zen-based server part that will use TSMC's 7nm+ node and is due in 2020. As a whole, AMD hopes to get its 7nm+ Zen 3 chips, from Ryzen to Epyc, out that year. Zen 4 is in design, we're told.
Interestingly enough, the Rome processor package contains a central 14nm I/O die, which handles up to 2TB of external RAM per socket plus the chip's PCIe 4.0 channels, surrounded by multiple 7nm dies of CPU core clusters connected to that I/O block. This way, whenever a CPU core wants to talk to outside memory, it goes straight out through the I/O die, as opposed to the NUMA approach of Zen 1 Epyc chips, in which multiple internal dies each had their own memory controllers and memory domains.
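You can see the practical upshot of that design by inspecting the NUMA layout the operating system reports. Below is a minimal sketch, assuming a Linux system with the usual sysfs layout, that lists each NUMA node, the CPUs attached to it, and its local memory; a Naples socket typically shows up as several nodes per package, which is exactly the software-visible wrinkle the central I/O die is meant to smooth over.

```python
# Sketch (Linux only): list NUMA nodes, their CPUs, and local memory via sysfs.
from pathlib import Path

NODE_DIR = Path("/sys/devices/system/node")

def numa_topology():
    nodes = {}
    for node in sorted(NODE_DIR.glob("node[0-9]*")):
        cpulist = (node / "cpulist").read_text().strip()
        # First line of this node's meminfo: "Node N MemTotal: <kB> kB"
        mem_kb = int((node / "meminfo").read_text().split()[3])
        nodes[node.name] = {"cpus": cpulist, "mem_gib": mem_kb / (1024 ** 2)}
    return nodes

if __name__ == "__main__":
    for name, info in numa_topology().items():
        print(f"{name}: CPUs {info['cpus']}, ~{info['mem_gib']:.1f} GiB local RAM")
```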
Amazon Web Services, meanwhile, is offering first-generation AMD Epyc-powered virtual machines in its cloud, which work out 10 per cent cheaper than Intel-based instances.