Xeon E7 servers run with the big dogs

Gives chase to RISC and Itanium foxes

A bunch of benchmarks: Xeon E7 systems vs. rest of world

Machines using the Xeon E7 processors are just getting into the field, so benchmark results are still a little thin. No one has yet published TPC-C online transaction processing results for the Xeon E7, but a few tests have been run on the SAP Sales & Distribution (SD) benchmark, the SPECvirt_sc2010 server virtualization test, and the TPC-H data warehousing test. The TPC series of tests is important because it includes pricing as well as performance metrics.

Let's take a gander at the SAP SD test first, which is a two-tier (database and application tiers) test that simulates users logging into the sales module of the SAP ERP suite and running transactions.

IBM System x3850

On workhorse four-socket rack servers, IBM has run the SD test on its System x3850 with four of the ten-core Xeon E7-4870 processors (2.4GHz), running Microsoft's Windows Server 2008 R2 and its own DB2 9.7 database. The box was able to support 14,000 users with an average response time of 0.92 seconds at 99 per cent CPU utilization.

HP ProLiant BL680c G7

An HP ProLiant BL680c G7 blade server with the same processors and core count (40) was able to handle 13,550 users on the SD test, and an HP BL620c G7 blade with only two E7-2870 processors (also 2.4GHz and with a total of 20 cores) was able to handle 6,703 SD users with sub-second response at nearly full CPU utilization. (HP was running Windows and SQL Server 2008.) By comparison, a two-socket DL380 G7 using six-core Xeon X5680 processors running at 3.33GHz could handle 5,075 SD users running the Windows stack.

Clearly, the Xeon E7s have the performance advantage on two-socket boxes compared to their baby Westmere-EP brethren.

They also have the performance advantage over the current crop of Opteron processors. A ProLiant DL585 G7 with four of AMD's Opteron 6180 SE processors running at 2.5GHz (that's a total of 48 cores) can support 9,450 users – about a third fewer than the four-socket Xeon E7 machine tested.

IBM Power 730, 750 and 780

The Xeon E7s have an advantage over some of the RISC/Unix machines tested as well. For instance, a two-socket Power 730 from IBM with a dozen 3.7GHz Power7 cores in total, running SUSE Linux Enterprise Server 11 and DB2 9.7, was only able to handle 5,250 SD users. Another Power 730, with two eight-core Power7 processors running at 3.55GHz on AIX 7.1 and DB2 9.7, was able to handle 8,704 SD users.

That brackets the performance of the two-socket Xeon E7 blade server from HP above, which supported 6,703 SD users. A four-socket Power 750 system – a workhorse box in the IBM Unix lineup – equipped with four eight-core Power7 chips running at 3.55GHz is able to handle 15,600 users. That's only an 11.4 per cent advantage over IBM's own four-socket System x3850 box using the Xeon E7s.
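The percentage gaps quoted here fall straight out of the published SAP SD user counts; a quick sketch of the arithmetic:

```python
# Verifying the quoted gaps from the published SAP SD user counts.
xeon_e7_4s = 14_000   # IBM System x3850, four Xeon E7-4870
power_750  = 15_600   # IBM Power 750, four eight-core Power7
opteron_4s = 9_450    # HP DL585 G7, four Opteron 6180 SE

# Power 750's advantage over the four-socket Xeon E7 box
advantage = (power_750 - xeon_e7_4s) / xeon_e7_4s * 100
print(f"Power 750 advantage: {advantage:.1f} per cent")   # 11.4 per cent

# Opteron shortfall versus the same Xeon E7 machine
shortfall = (xeon_e7_4s - opteron_4s) / xeon_e7_4s * 100
print(f"Opteron shortfall: {shortfall:.1f} per cent")     # 32.5 per cent, about a third
```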

IBM can scale its Power 780 up to eight sockets and 64 cores running at 3.8GHz, supporting 37,000 SD users, which is significantly better than the eight-socket, 80-core HP ProLiant DL980 G7, which could only handle 25,160 SD users running the Windows stack. Windows and the Boxboro chipset are having trouble scaling linearly beyond four sockets. No Sparc or Itanium systems have been run through the SD gauntlet recently to allow comparisons.

The SPEC server virtualization test

The data in the SPEC server virtualization test is a little thin, but instructive just the same and illustrates the advantages that the Xeon E7s have over the Xeon 5600s for virtualized workloads.

IBM ran a BladeCenter HS22V server using Red Hat's KVM hypervisor through the SPECvirt_sc2010 test, which loads up machines with Java application serving, Web serving, and mail serving workloads bundled into a "tile" of virtual machines.

You scale up the workload by adding more tiles to the machine, and the aggregate throughput is measured as a normalized rating figure. A blade with two Intel Xeon X5690 processors running at 3.46GHz attained a rating of 1,367 on the SPECvirt_sc2010 test while supporting 84 virtual machines.

That blade had 288GB of main memory using 16GB sticks, so IBM was not holding back on memory. IBM then ran a two-socket Westmere-EX blade with two E7-4870 processors through the same Red Hat stack. (Yes, those are nominally four-socket parts, but IBM has its own eX5 chipset with MAX5 memory extenders.)

With two of these E7 chips, which run at 2.4GHz, and 640GB of main memory (16 slots on the blade and another 24 on the MAX5 extender, all using 16GB sticks), the BladeCenter HX5 blade from IBM was able to attain a rating of 2,144 across a total of 132 VMs on 20 cores. Those extra cores are balanced against extra memory, and hence the machine can support more VMs.

This is one of the things driving up average server selling prices in recent quarters, and the trend will continue. The reason is not just that you can put more VMs on a Xeon E7 or Opteron 6100 blade, but that you can, if need be, carve out a very fat VM for a heavy workload like a database or email server. (No RISC or Itanium systems have been run through the SPEC server virtualization tests as yet.)

The TPC-H data warehousing test

That leaves the TPC-H data warehousing benchmark. There are some comparisons that can be made between Sparc, Power, Itanium, and Xeon systems on the version of the test that bangs ad hoc queries against a simulated 1TB data warehouse.

Dell PowerEdge R910

Dell booted up Red Hat Enterprise Linux 6.0 and the VectorWise 1.6 database from Ingres onto a PowerEdge R910 server. This machine was equipped with four Xeon E7-8837 processors running at 2.67GHz, each with eight cores (not ten) fired up; the machine was equipped with 1TB of main memory and a mere 2.3TB of disk.

The box was able to process 436,789 queries per hour and cost $384,935 after discounts, yielding a bang for the buck of 88 cents per QPH. IBM put the Windows stack on its System x3850 X5 with eight Xeon E7-8870 processors running at 2.4GHz, 2TB of memory, and nearly 7TB of flash capacity, and was able to push 173,962 QPH at a cost of $1.37 per QPH after a 28.6 per cent discount.
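The bang-for-the-buck figure on TPC-H is simply the discounted system price divided by query throughput; Dell's number checks out:

```python
# TPC-H price/performance: discounted system price divided by queries per hour.
dell_price = 384_935      # US dollars, after discounts
dell_qph   = 436_789      # queries per hour at the 1TB scale factor

dollars_per_qph = dell_price / dell_qph
print(f"${dollars_per_qph:.2f} per QPH")   # $0.88 per QPH
```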

Oracle Sparc Enterprise M8000

Last week, Oracle was bragging that a Sparc Enterprise M8000 server with 16 of the quad-core Sparc64-VII+ processors running at 3GHz was able to do 209,534 QPH – less than half of the query throughput of the Dell box.

And after discounts, the Oracle machine cost $2.1m, yielding a cost of $10.13 per QPH. The Oracle box was configured with Oracle 11g R2 Enterprise Edition and Solaris 10 (Update 8/11), and it took a whopping 47 per cent discount to get to that price. To be fair, Oracle only put 512GB of main memory on the box and had 11TB of disk. Maybe it should have added memory and cut back on disk.

IBM Power 780

IBM's Power 780 with eight of its eight-core Power7 chips running at 4.1GHz was also tested on the TPC-H test at the 1TB level. With 512GB of memory and just under 4TB of solid state disk, this Power Systems machine could handle 164,747 QPH running Red Hat Enterprise Linux 6.0 and the Sybase IQ database; after discounts, the machine cost $1.12m and yielded a cost of $6.85 per QPH after a 33 per cent discount.

HP Integrity Superdome 2

An HP Integrity Superdome 2 server running HP-UX 11i v3 (the September 2010 update) and Oracle's 11g R2 Enterprise Edition was able to do 140,181 QPH at a cost of $12.15 per QPH after a 21.8 per cent discount. That Superdome 2 machine was configured with sixteen of Intel's four-core Itanium 9350 processors, which run at 1.73GHz and have 24MB of L3 cache. This test was run over a year ago, and HP didn't use flash to boost performance or cut down on disk drive costs. The Superdome 2 machine had 512GB of memory and 580 disks weighing in at 42TB.

If the TPC-H test reveals anything, it is that the choice of software can matter as much as hardware, and vendors have done everything in their power to avoid direct comparisons of different servers running the same database and operating system. ®
