Intel cranks up accelerators in Xeon 6 blitz to outgun AMD

But you're probably not cool enough for Chipzilla's 288-core monster

Facing stiff competition from its long-time rival AMD and the ever-present specter of custom Arm silicon in the cloud, Intel on Monday emitted another wave of Xeon 6 processors.

However, these new P-core Xeons aren't trying to match AMD on core count or raw computational grunt this time. We got those parts, which were primarily aimed at HPC and AI-centric workloads, last fall.

Instead, its 6700P, 6500P, and 6300P-series Granite Rapids Xeons launching today are aimed squarely at the workhorses of the datacenter – think virtualization boxes, storage arrays, and, of course, big multi-socket database servers.

Having successfully restored core count parity with its x86 rival, Intel is leaning heavily on what remains of its reputation among customers, its complement of compute accelerators, and more aggressive pricing in a bid to stem its share losses in the datacenter.

Scaling Xeon 6 down

Peeling back the integrated heat spreader, we find a now-familiar collection of silicon. With Xeon 6, Intel has fully embraced a heterogeneous chiplet architecture, splitting I/O functionality off from the compute and memory dies.

At the top of the stack are the 6900P-series parts we looked at last fall, which feature a trio of compute and memory dies, built on the Intel 3 process node, flanked by a pair of Intel 7-based I/O dies.

While all of Intel's Xeon 6 parts largely share the same I/O die, the new 6700P and 6500P Xeons feature different compute tiles, ranging from a pair of extreme-core-count dies on the 86-core part down to a single high-core-count or low-core-count die on the 48-core and 16-core chips.

Here we see Intel has fully embraced the heterogeneous chiplet architecture pioneered by long-time rival AMD

This gives Intel some flexibility in balancing core count and clock speeds for specific workloads, and, we imagine, helps quite a bit with yields. However, because the memory controller is part of the compute die rather than integrated into the I/O die, these chips are limited to eight memory channels, as opposed to the 12 on Intel's flagship parts this generation.

Intel's latest chips make up for this somewhat by supporting two-DIMM-per-channel configurations – something notably absent on its earlier Granite Rapids parts. In single-socket configurations, the chips also support up to 136 lanes of PCIe 5.0 connectivity, versus 88 on Intel's multi-socket optimized processors.

Less silicon also means this round of Xeon 6 processors runs a fair bit cooler and pulls less wattage than the 500 W parts we looked at last year, coming in between 150 W and 350 W depending in large part on core count. That means you can now have up to 22 more cores in the same power footprint as the last generation.

Intel remains committed to multi-socket systems

As of 2025, Intel remains the only supplier of x86 kit for large multi-socket systems, commonly employed in big, mission-critical database environments. For reference, AMD's Epyc processors have only ever been offered in single- or dual-socket configurations.

For those running these kinds of large, memory-hungry workloads, like SAP HANA, the release of Intel's 6700P series parts represents a major performance uplift over its aging Sapphire Rapids Xeons.

Yep, Intel is still the only x86 vendor offering configurations that scale beyond two sockets

However, with Compute Express Link (CXL) memory expansion devices eliminating the need for additional CPU sockets to achieve the multi-terabyte memory capacities demanded by these workloads, the question becomes whether these big multi-socket configurations are even necessary.

One of CXL's key features is the ability to add memory capacity to compatible systems via expansion modules that slot into a standard PCIe interface. The extra memory, which, by the way, doesn't have to be DDR5, then shows up to the operating system as another NUMA node.
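
To see what that looks like in practice, here's a minimal sketch – assuming a Linux box and the kernel's standard sysfs NUMA interface – that lists each node and flags the CPU-less ones, which is how CXL-attached memory expanders typically show up:

```python
# Minimal sketch: enumerate NUMA nodes via Linux sysfs and flag memory-only
# nodes, which is how CXL-attached memory expanders usually appear.
from pathlib import Path

def numa_nodes(root="/sys/devices/system/node"):
    for node in sorted(Path(root).glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        # First line of the per-node meminfo file reads "Node N MemTotal: X kB"
        mem_kb = int((node / "meminfo").read_text().split()[3])
        yield node.name, cpus, mem_kb

if __name__ == "__main__":
    for name, cpus, mem_kb in numa_nodes():
        kind = "memory-only (possibly CXL)" if not cpus else "CPU + memory"
        print(f"{name}: cpus=[{cpus or '-'}] mem={mem_kb // 1024} MiB ({kind})")
```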

Despite these advancements, Intel Senior Fellow Ronak Singhal doesn't see demand for four and eight-socket systems going away any time soon. "When you're looking at the memory expansion, there's a certain amount you can do with CXL, but for the people that are going to four-socket and eight-socket, or even beyond, they want to get those extra cores as well," he said.

Intel leans hard on its accelerator investments

Intel says its 6700P-series chips will deliver anywhere from 14 to 54 percent higher performance than its top-specced fifth-gen Xeons.

Here's how Intel says its latest Xeons stack up against last gen

However, against its competition, it's clear that Intel is leaning pretty heavily on the integrated accelerator engines baked into its chips to differentiate and give it a leg up.

While the rise of generative AI has shifted the definition of accelerators somewhat to mean GPUs or other dedicated AI accelerators, Intel has been building custom accelerators for things like cryptography, security, storage, analytics, networking, and, yes, AI into its chips for years now.
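
For a sense of how some of those engines surface to software, here's a minimal sketch – assuming a Linux host with Intel's idxd driver loaded, which exposes Data Streaming Accelerator (DSA) and In-Memory Analytics Accelerator (IAA) instances under sysfs – that simply lists whatever accelerator devices are present:

```python
# Minimal sketch: list DSA/IAA accelerator instances exposed by the Linux
# idxd driver under /sys/bus/dsa/devices. On boxes without these engines
# (or without the driver loaded) the directory simply won't exist.
from pathlib import Path

def accel_devices(root="/sys/bus/dsa/devices"):
    path = Path(root)
    if not path.exists():
        return []
    # Entries look like dsa0 or iax1 (devices) plus wq*/engine* sub-resources
    return sorted(p.name for p in path.iterdir())

if __name__ == "__main__":
    devices = accel_devices()
    print("\n".join(devices) if devices else "No DSA/IAA devices found")
```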

Leaning on its onboard accelerators, Intel says a single Xeon 6 server has the potential to replace up to ten second-gen Xeon boxes

Along with increased core counts and instructions per clock this generation, Intel says its cryptographic engines and advanced matrix extensions (AMX) now mean a single Xeon 6 server can take the place of up to ten Cascade Lake systems, at least for workloads like image classification and Nginx web serving with TLS.
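
If you want to know whether a given box can actually lean on those AMX engines, a quick sanity check is to look for the AMX feature flags the Linux kernel reports – a minimal sketch, assuming /proc/cpuinfo is available:

```python
# Minimal sketch: check /proc/cpuinfo for the AMX feature flags the Linux
# kernel reports on capable Xeons (amx_tile, amx_bf16, amx_int8).
def has_amx(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"amx_tile", "amx_bf16", "amx_int8"} <= flags
    return False

if __name__ == "__main__":
    print("AMX supported:", has_amx())
```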

Compared to AMD's latest and greatest fifth-gen Epycs, Intel is claiming a performance advantage of 62 percent for Nginx TLS, 17 percent for MongoDB, 52 percent in HPCG, 43 percent in OpenFOAM, and 2.17x in ResNet-50.

Here's how Intel says its latest Xeons stack up to Epyc. Obviously take these with a grain of salt.

As usual, we recommend taking these performance claims with a grain of salt, especially because they are for Intel's flagship 6900P processors and focus on workloads that benefit directly from either its integrated accelerator engines or early support for speedy MRDIMM memory tech.

With that said, if your workloads can take advantage of these accelerator engines, it may be worth taking a closer look at Intel's Xeon 6 lineup. Speaking of which, here's a full rundown of Intel's 6700P and 6500P series silicon launching today.

Intel Xeon 6 6700P/6500P Performance SKUs

SKU Cores Base clock Boost clock (All core) Boost clock (Max) L3 cache TDP Sockets Mem channels Mem speed (DDR5/MRDIMM) UPI links PCIe lanes Price
6787P 86 2 GHz 3.2 GHz 3.8 GHz 336 MB 350 W 2 8 6,400/8,000 MT/s 4 88 $10,400
6767P 64 2.4 GHz 3.6 GHz 3.9 GHz 336 MB 350 W 2 8 6,400/8,000 MT/s 4 88 $9,595
6747P 48 2.7 GHz 3.8 GHz 3.9 GHz 288 MB 350 W 2 8 6,400/8,000 MT/s 4 88 $6,497
6745P 32 3.1 GHz 4.1 GHz 4.3 GHz 336 MB 300 W 2 8 6,400/8,000 MT/s 4 88 $5,250
6737P 32 2.9 GHz 4 GHz 4 GHz 192 MB 330 W 2 8 6,400/8,000 MT/s 4 88 $4,995
6736P 36 2 GHz 3.4 GHz 4.1 GHz 144 MB 205 W 2 8 6,400/NA MT/s 4 88 $3,351
6730P 32 2.5 GHz 3.6 GHz 3.8 GHz 144 MB 250 W 2 8 6,400/NA MT/s 4 88 $3,726
6527P 24 3 GHz 4.2 GHz 4.2 GHz 144 MB 255 W 2 8 6,400/NA MT/s 4 88 $2,878
6517P 16 3.2 GHz 4 GHz 4.2 GHz 72 MB 190 W 2 8 6,400/NA MT/s 3 88 $1,195
6507P 8 3.5 GHz 4.3 GHz 4.3 GHz 48 MB 150 W 2 8 6,400/NA MT/s 3 88 $765

Intel Xeon 6 6700P/6500P Mainline SKUs

SKU Cores Base clock Boost clock (All core) Boost clock (Max) L3 cache TDP Sockets Mem channels Mem speed (DDR5) UPI links PCIe lanes Price
6788P 86 2 GHz 3.2 GHz 3.8 GHz 336 MB 350 W 8 8 6,400 MT/s 4 88 $19,000
6768P 64 2.4 GHz 3.6 GHz 3.9 GHz 336 MB 330 W 8 8 6,400 MT/s 4 88 $16,000
6748P 48 2.5 GHz 3.8 GHz 4.1 GHz 192 MB 300 W 8 8 6,400 MT/s 4 88 $12,702
6738P 32 2.9 GHz 4.1 GHz 4.2 GHz 144 MB 270 W 8 8 6,400 MT/s 4 88 $6,540
6728P 24 2.7 GHz 3.9 GHz 4.1 GHz 144 MB 210 W 8 8 6,400 MT/s 4 88 $2,478
6724P 16 3.6 GHz 4.2 GHz 4.3 GHz 72 MB 210 W 8 8 6,400 MT/s 3 88 $3,622
6714P 8 4 GHz 4.3 GHz 4.3 GHz 48 MB 165 W 8 8 6,400 MT/s 3 88 $2,816
6760P 64 2.2 GHz 3.4 GHz 3.8 GHz 288 MB 330 W 2 8 6,400 MT/s 4 88 $7,803
6740P 48 2.1 GHz 3.3 GHz 3.8 GHz 288 MB 270 W 2 8 6,400 MT/s 4 88 $4,650
6530P 32 2.3 GHz 3.7 GHz 4.1 GHz 144 MB 225 W 2 8 6,400 MT/s 4 88 $2,234
6520P 24 2.4 GHz 3.4 GHz 4 GHz 144 MB 210 W 2 8 6,400 MT/s 4 88 $1,295
6515P 16 2.3 GHz 3.8 GHz 3.8 GHz 72 MB 150 W 2 8 6,400 MT/s 3 88 $740
6505P 12 2.2 GHz 3.9 GHz 4.1 GHz 48 MB 150 W 2 8 6,400 MT/s 3 88 $563

Intel Xeon 6 6700P/6500P Single-socket SKUs

SKU Cores Base clock Boost clock (All core) Boost clock (Max) L3 cache TDP Sockets Mem channels Mem speed (DDR5/MRDIMM) UPI links PCIe lanes Price
6781P 80 2 GHz 3.2 GHz 3.8 GHz 336 MB 350 W 1 8 6,400/8,000 MT/s 0 136 $8,960
6761P 64 2.5 GHz 3.6 GHz 3.9 GHz 336 MB 350 W 1 8 6,400/8,000 MT/s 0 136 $6,570
6741P 48 2.5 GHz 3.7 GHz 3.8 GHz 288 MB 300 W 1 8 6,400/NA MT/s 0 136 $4,421
6731P 32 2.5 GHz 3.9 GHz 4.1 GHz 144 MB 245 W 1 8 6,400/NA MT/s 0 136 $2,700
6521P 24 2.6 GHz 4.1 GHz 4.1 GHz 144 MB 225 W 1 8 6,400/NA MT/s 0 136 $1,250
6511P 16 3.2 GHz 4.1 GHz 4.2 GHz 72 MB 150 W 1 8 6,400/NA MT/s 0 136 $815

Intel gets aggressive on pricing

Intel's Xeon 6 6700P and 6500P-series launch sees the x86 giant get considerably more aggressive with regard to pricing.

Core-for-core, Intel has traditionally charged a premium for its chips compared to AMD. Comparing Intel's fourth-gen Xeon launch – that's Sapphire Rapids if you'd forgotten – to AMD's fourth-gen Epyc processors, we saw launch price differences ranging from a few hundred dollars to several thousand depending on the segment.

With Intel's latest Xeon launch we don't see the same pricing dynamics at play. Looking at launch prices for AMD's fifth-gen Epycs from last fall, it's clear Intel has attempted to match, if not undercut, its smaller competitor on pricing at any given core count or target market.

The one exception is Intel's four- and eight-socket capable chips, which face no competition in the x86 space, so we still see Intel charging a premium here. Need a maxed-out, eight-socket database server? You can expect to pay more than $150,000 ($19,000 apiece) in CPUs alone.

Of course, these are tray prices we're talking about, and so they don't take into account volume discounts offered to customers by either chipmaker. Prices aren't fixed and it's not unusual for them to be revised to account for competitive pressure or market conditions.

We've already seen Intel slash prices for its flagship 6900P-series Xeons by an average of $4,181 since they launched in September. And it's easy to see why: AMD was charging nearly $5,000 less for the same number of cores with its fifth-gen Epyc Turin parts. Intel's more aggressive pricing now makes its chip more than $500 cheaper than AMD's.

And that's not the end of the story. Short of steep price cuts to the Epyc lineup, Intel's Xeon 6 processors could end up being substantially less expensive than AMD's if a 25-plus-percent tariff on semiconductor imports ends up being implemented by the Trump administration.

Intel's Xeon 6 processors are among the few current-generation products the company is still manufacturing in-house. Assuming it can keep up with US demand at its domestic fabs, Intel Xeons are positioned to sidestep these tariffs. AMD, meanwhile, is reliant on TSMC for manufacturing.

New entry-level Xeons and SoCs for the edge

Alongside its mainstream 6700P and 6500P processor families, Intel is also rolling out new embedded and entry-level Xeon processors.

At the bottom of the stack are Intel's 6300-series Xeons, which can be had in four, six, and eight-core flavors with clock speeds up to 5.7 GHz. However, with only two DDR5 memory channels, which max out at 128 GB of capacity and speeds of 4,800 MT/s, these chips are more closely aligned with AMD's baby Epycs we looked at last year than with your typical datacenter CPU.

For embedded, edge, and networking environments, Intel is also rolling out a new Xeon 6 SoC variant, which it's positioning as a successor to its fourth-gen Xeon with vRAN boost.

Alongside its datacenter-focused parts, Intel is rolling out a new edge-optimized SKU with an I/O die tuned for virtualized RAN networks

The chip, which is designed to be integrated directly into edge compute, networking, or security appliances, can be had with up to 42 cores and features a unique I/O die with 200 Gbps of aggregate Ethernet bandwidth, presumably broken out across eight 25GbE links.

Along with powering things like virtualized radio access network (vRAN) systems, the chips can also be equipped with accelerators for cryptography, AI, or media transcoding, the idea being that they could also be deployed in security appliances or used to preprocess data at the edge.

You're probably not cool enough for Intel's 288-core monster

If you're wondering whatever happened to that 288-core E-core Xeon former CEO Pat Gelsinger teased back at Intel Innovation in 2023, it's still lurking in the shadows, Singhal told press ahead of the launch.

But unlike Intel's earlier Sierra Forest E-core Xeons launched last year, it seems Intel is holding its highest core-count parts in reserve for cloud service providers.

"The 288-core is now in production. We actually have this deployed now with a large cloud customer," Singhal said. "We're really working on that 288-core processor closely with each of our customers to customize what we're building there for their needs."

This isn't surprising, as the part was always designed to serve the cloud and managed service provider market, providing loads of power-efficient, if not necessarily the most feature-packed or performant, cores for serving up microservices, web-scale apps, and other throughput-oriented programs.

We also know that demand for its E-core parts hasn't lived up to expectations.

"What we've seen is that's more of a niche market, and we haven't seen volume materialize there as fast as we expected," Intel co-CEO Michelle Johnston Holthaus said during the company's Q4 earnings call.

This ultimately resulted in the delay of Intel's 18A-based Clearwater Forest parts from 2025 to 2026. But as we noted, the timing of that launch was always awkwardly close to Sierra Forest and, in our opinion, left prospective customers in the difficult position of either being an early adopter or waiting a little longer for what's likely to be a far more refined and performant part.

According to Singhal, Clearwater Forest is already running in the lab, and an Intel customer has powered on its first systems using the chip. ®
