
What's in it for server buyers now that Intel's Xeon E5-2600 v2 is here?

More oomph per socket – and maybe just a tiny system price hike

Autumn is on its way and Intel has released the "Ivy Bridge-EP" Xeon E5-2600 v2 server processors just in time to get in line for a chunk of the remaining 2013 IT budget at the data centers of the world. So how are these new processors going to stack up to the existing "Sandy Bridge-EP" Xeon E5-2600 v1 chips, and what can customers expect from server makers who are trying to win business in a flat to down market?

Well, first and foremost, there'll be plenty of wheeling and dealing.

The Xeon E5-2600 v2 processors are supposed to be a straight "tick" – a shrink of the manufacturing process with very little change to the processor design. But with three different variants of the chip – with six, ten, or twelve cores in very different die layouts – this is really a process "tick" plus a partial "tock", given the addition of cores compared to the monolithic eight-core Sandy Bridge-EP.

The good news is that the three variants of the Xeon E5-2600 v2 chips plug into the same sockets as the prior v1 processors and have the same thermal envelopes, which means server makers do not have to do a lot of engineering to offer the new CPUs. And because the three new variants keep cache and main memory bandwidth in balance with performance as the cores scale up, server makers will be able to more precisely target specific chip SKUs at particular workloads.

The differences between the two chip families for two-socket boxes, however, make performance and price/performance comparisons difficult at the chip and system level. Intel does not provide relative performance metrics for its chips, and largely relies on the benchmark tests from its server partners to reckon the bang for the buck of a particular system.

El Reg provided a very rough performance and price/performance comparison between the v1 and v2 families of the Xeon E5-2600 chips, but this approach – adding up the aggregate clocks in each chip and dividing the cost of the chip by that figure – is not particularly scientific.
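That back-of-the-envelope metric is easy to reproduce. Here is a minimal sketch in Python – the core counts, clocks, and list prices below are illustrative stand-ins, not figures taken from this article:

```python
# Rough "dollars per aggregate gigahertz" metric: divide the chip's list
# price by its total clock capacity (cores x clock speed).
def dollars_per_ghz(cores, clock_ghz, price_usd):
    return price_usd / (cores * clock_ghz)

# Hypothetical eight-core v1 part versus ten-core v2 part at the same price
v1 = dollars_per_ghz(8, 2.7, 1723)
v2 = dollars_per_ghz(10, 2.8, 1723)
print(f"v1: ${v1:.2f}/GHz, v2: ${v2:.2f}/GHz")
```

By this crude yardstick the v2 part delivers its aggregate clocks more cheaply even at an identical list price – which is exactly why the metric flatters core-count bumps and should be taken with a grain of salt.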

Customers don't buy processors – with the exception of high-frequency traders and some supercomputer customers who always need top performance. They buy systems, and they want to know what the performance differences will be at the system level, and how the price may change as well.

HP is dropping the Xeon E5-2600 v2 processors into the bulk of its ProLiant Gen8 family from the get-go, including the ProLiant DL350, DL360, and DL380 rackers, the BL460 blades, and the SL230, SL250, and SL270 hyperscale machines. Jim Ganthier, vice president of marketing for the HP Servers division, says that in some cases the Ivy Bridge-EP SKUs cost a little more, but that HP has a number of other "levers it can pull in the system configuration" that can offset these costs.

One of those levers is a three-rank registered DIMM that it has designed with several memory makers, which packs 24GB of 1.35-volt memory onto a stick that costs the same as a 16GB RDIMM. The 24GB stick uses 35 per cent less power and offers about 25 per cent better performance than the 16GB part.

"I would call them roughly the same," Ganthier says of the prices on ProLiant machines using the v1 and v2 processors. "The price increase is minimal, but the performance increase will be pretty damned decent."

As for target customers, those using ProLiant boxes with Xeon 5500 or older processors are prime candidates: their machines are more than four years old at this point and looking very long in the tooth.

HP does have processor upgrade kits available for those ProLiant, SL6500, and SL2500 Gen8 customers, and the SmartSocket it created for the Gen8 machine to make putting in processors easier (without bending the pins) will certainly help those customers who want to do a socket swap. "The bulk of our systems will be new products," Ganthier says. "There are not a lot of people who want to crack open a server."

Over at supercomputer maker Cray, Barry Bolding, vice president of marketing, expects plenty of upgrades and also to ship the shiny new chips in machines that were booked for sale months ago but which are being built for delivery now.

Like HP, Cray is not expecting to change the price of a rack of its high-end XC30 machines, which use its "Aries" XC interconnect, or more traditional CS300 clusters, which hail from the Appro side of the house and which generally have InfiniBand interconnects between the nodes. But the company is expecting a pretty big performance jump based on the SKUs it puts into its systems.

"We are most happy that this is not just a clock speed or core upgrade," says Bolding, "but a balanced upgrade with cache and bandwidth scaling."

In the XC30 supers, Cray only supported the eight-core variants of the Xeon E5-2600 v1 processors, and with the v2 chips, only the ten-core and twelve-core variants will be supported. The CS300 line will support a wider variety of SKUs than the XC30s, but again given the parallel nature of the workloads, customers will tend to want to push the core count and not the clocks.

In general, Bolding says that if your workload is memory-constrained – such as a heavy fluid dynamics application – then the ten-core Ivy Bridge-EP is better for you. If you don't have memory constraints – such as running a molecular dynamics or other life-sciences simulation – then the twelve-core chip will offer better bang for the buck.

So how much extra floppage is there in the Cray boxes using top-bin parts when moving from the Xeon E5-2600 v1 chips to the v2s? A rack of the XC30 machines will deliver 99 teraflops with the v2 chips compared to 66 teraflops with the v1 chips; the less dense, air-cooled XC30-AC machines will deliver 33 teraflops with the new chips against 22 teraflops with the older processors. And a rack of the CS300s will weigh in at 41 teraflops compared to 28 teraflops with the earlier Xeon E5-2600s.
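Those per-rack figures work out to roughly a 50 per cent uplift on the XC30s and a bit less on the CS300s. A quick sketch using the numbers quoted above:

```python
# Per-rack uplift implied by the Cray teraflops figures (v1 -> v2)
racks = {
    "XC30 (liquid-cooled)": (66, 99),
    "XC30-AC (air-cooled)": (22, 33),
    "CS300": (28, 41),
}
for name, (v1_tf, v2_tf) in racks.items():
    uplift = 100 * (v2_tf - v1_tf) / v1_tf
    print(f"{name}: +{uplift:.0f} per cent per rack")
```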

The thing to remember as a rule of thumb is that the CPU represents about 20 to 30 per cent of the cost of a system, depending on the architecture of the box. It is a higher percentage in a plain vanilla box and a lower percentage in a supercomputer node that has something as sophisticated as the Aries interconnect. So, if a processor price goes up by 10 or 15 per cent (while adding maybe 40 to 50 per cent more oomph), the net effect on the system price is much smaller, maybe only 2 to 5 per cent of a price hike at the system level.
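That pass-through arithmetic is simple enough to sketch – the 20 to 30 per cent CPU share and the 10 to 15 per cent chip hike are the ranges given above; the multiplication is the rule of thumb itself, assuming the CPU is a fixed share of total system cost:

```python
# Rule-of-thumb pass-through of a CPU price hike to the system price,
# assuming the CPU is a fixed fraction of total system cost.
def system_hike(cpu_share, cpu_hike):
    """Fractional system price increase for a given CPU price increase."""
    return cpu_share * cpu_hike

# CPU at 20-30 per cent of system cost, CPU price up 10-15 per cent
low = system_hike(0.20, 0.10)    # cheapest case: 2 per cent at system level
high = system_hike(0.30, 0.15)   # priciest case: 4.5 per cent at system level
print(f"System price hike: {100 * low:.1f} to {100 * high:.1f} per cent")
```

That lands between 2 and 4.5 per cent, in line with the 2 to 5 per cent estimate quoted in the article.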

SGI is telling customers to expect something on the order of 40 per cent more aggregate performance per rack with its ICE-X clusters, and similarly that the prices at the system level for its ICE-X and Rackable machines, which were initially aimed mostly at hyperscale data center operators but have been tailored to run Hadoop big data munchers and NoSQL data stores, will go up a tiny bit. SGI is also plunking the new Xeon E5 chips into its Modular InfiniteStorage disk arrays.

Bill Mannel, vice president of product marketing at SGI, says its customers work on an 80-20 rule – the CPU accounting for roughly 20 per cent of the cost of the system – and therefore system costs are not expected to rise by all that much.

What SGI is no doubt looking ahead to is the delivery of the Ivy Bridge-EP variants of the Xeon E5-4600 v2 processors, which will give its "Ultraviolet" UV 2000 shared memory systems a significant performance boost. Intel has not yet said when to expect these variants of its Xeon line, which are designed for less-costly four-socket servers but which SGI lashes together in two-socket boxes, using the extra QuickPath Interconnect links on the chip to hook into its NUMAlink 6 system interconnect. ®
