
Power7 v Power6 - it's all about the cache

Double the thread count

IBM is launching the first of its Power7-based systems today, and the company thinks that the innovations inside the Power7 processor are going to give it a leg-up on the competition in terms of capacity, throughput, and energy-efficiency. But how do those Power7 processors stack up to the existing Power6 and Power6+ processors used in the Power Systems lineup?

IBM says that with the Power7 design, it has the right balance of cores, threads, and clock speeds - and the interplay between them - to tackle the kinds of workloads that like multithreading (such as Java applications and middleware) as well as those that like high clock speeds and lots of cache and need to move data through the system quickly (such as transaction processing and analytics).

In a way, the Power7 design is a radical departure from the Power6 and Power6+ chips that precede it. IBM has been shipping dual-core, 64-bit Power processors since the Power4 launched in October 2001, and it stuck with dual-core chips as it transitioned from the 180 nanometer processes used in the Power4 chips, through the 130 nanometer wafer baking for the Power4+ in November 2002 and the Power5 in May 2004, on down to the 90 nanometer tech used in the Power5+ and the 65 nanometer processes for the Power6 chip in July 2007.

Those same 65 nanometer processes were used to make the Power6+ chips that were put in the midrange Power Systems machines in October 2008 and in entry boxes and blade servers in April 2009. IBM did not, for whatever reason, do a process shrink with the Power6+ chips, but rather added a few instructions to the chip and decided to ride out 2009, waiting for the Power7 chips, with their radically different eight-core design, to come to market.

With Intel's quad-core "Tukwila" Itanium running late and Sun Microsystems' 16-core "Rock" UltraSparc-RK chip barely on life support (it was canceled in June 2009), IBM could stand pat with the Power6+ chip and just double up the sockets on system boards to add more oomph to the boxes.

The expectation, of course, was that the Power6+ chip itself would offer about twice the oomph of Power6. While IBM was pretty vague in its roadmaps, the chips topped out at 5 GHz, never coming close to their 6 GHz design point, and IBM did not add more cores to the Power6+ die, as it was expected to. Chip happens.

Every chip maker misses roadmaps and hurts its business. Lucky for IBM, the delays with Tukwila and Rock hurt Intel and Sun worse than the ridiculously small performance gains in the hop from Power5 to Power5+ in October 2005 hurt Big Blue, and doubling up the socket count (rather than the core count) with Power6+ was a good enough stop-loss maneuver. At least, that's what the market share IBM gained against its Unix peers suggests - which is presumably what the top brass at Big Blue get their bonuses from.

With each successive process shrink in the Power4 through Power7 generations, IBM has crammed more and more transistors onto the chips, pulling more and more features off the motherboard and into the chip. The Power4 chip weighed in at 174 million transistors and dissipated 125 watts running at its top-end 1.3 GHz. The chip had a 1.44 MB shared L2 cache for the cores, with L3 memory tags on the chip as well, but the 32 MB L3 cache itself sat off the die, baked into the same package.

With the Power4+ shrink to 130 nanometers, IBM boosted the L2 cache for the dual-core processor to 1.5 MB, pushing the transistor count to 184 million. The Power5 stayed on the same 130 nanometer processes, but implemented new cores with simultaneous multithreading (two threads per core), moved the DDR main memory controller onto the processor, boosted the shared L2 cache to 1.9 MB, and jacked up the off-chip L3 cache to 36 MB.

With the 90 nanometer shrink to the Power5+, the heat dissipation on the chips had fallen to around 70 watts in a chip with 276 million transistors running at a top speed of 2.2 GHz. Because of the relatively low thermals, IBM could put two Power5+ chips in the same package to offer something competitive with then-emerging dual-core, quad-socket x64 servers.

With the Power6 chips, IBM did a major reworking of the Power instruction pipeline so it could buck the industry trend and jack up clock speeds instead of adding cores. The idea was to get more performance per core, which would translate into lower software costs per unit of performance - at least for software that is priced per core rather than per system or per socket. It is debatable how successful the Power6 chip was on this front, but the Power6 design included other innovations that made it interesting for existing and new workloads.

With 790 million transistors to work with thanks to the 65 nanometer shrink, IBM could - and did - wrap lots of extra stuff around the two cores in the Power6 design. Each core was given its own private 4 MB L2 cache, but the L3 cache was busted back down to 32 MB and remained off chip. That was to make room for other features on each Power6 core, such as a decimal math unit (for doing money math) and an AltiVec vector processing unit, in addition to the two integer and two floating point units in each core.

The 45 nanometer shrink

With the Power7 chip, IBM is shrinking the chip down to 45 nanometer copper/SOI processes, allowing it to cram 1.2 billion transistors onto the die. The Power7 cores are not all that different from the Power6 and Power6+ cores. The Power7 core has 12 execution units: two fixed point units, two load store units, four double-precision floating point units, one vector unit, one decimal floating point unit, one branch unit, and one condition register unit.

The cores support out-of-order execution and are binary compatible with the prior Power chips. Each Power7 core has 32 KB of L1 instruction cache, 32 KB of L1 data cache, and 256 KB of L2 cache tightly coupled to it. The chip has 32 MB of L3 cache implemented in embedded DRAM (eDRAM, not static RAM, or SRAM), and this is carved up into eight 4 MB segments, each affiliated with one of the eight cores.

The eDRAM is slower than SRAM, but it is a lot closer to the cores. (This is important, and I will explain why in a second.) The Power7 chip has two dual-channel DDR3 memory controllers implemented on the chip that deliver 100 GB/sec of sustained bandwidth per chip.
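Tallying up the hierarchy just described - and assuming nothing beyond the figures quoted above - each core carries 320 KB of private L1 and L2 cache plus a 4 MB slice of the shared L3:

# A quick tally of the per-core and shared caches described above.
KB, MB = 1024, 1024 * 1024
cores = 8
l1i, l1d, l2 = 32 * KB, 32 * KB, 256 * KB   # private to each core
l3 = 32 * MB                                # shared eDRAM L3, one 4 MB slice per core
per_core = l1i + l1d + l2
total = cores * per_core + l3
print(f"Private L1 and L2 per core: {per_core // KB} KB")
print(f"L3 slice per core: {l3 // cores // MB} MB")
print(f"Total on-chip cache: {total / MB:.1f} MB")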

IBM's eight-core Power7 processor

There are a couple of big changes with the Power7 design, and all of them impact performance. First and foremost, the chip includes 32 MB of on-chip L3 cache memory implemented in embedded DRAM instead of the off-chip L3 cache that was used with all the prior dual-core Power chips. This, as it turns out, may be more important than boosting the threads and cores compared to the Power6 and Power6+ chips.

IBM says that implementing that 32 MB of on-chip L3 cache in eDRAM is what kept the transistor budget in check; building the same cache out of static RAM would have boosted the count to around 2 billion transistors. (Which is, by the way, about where the quad-core Tukwila will weigh in with its 30 MB of on-chip L3 cache.) According to Scott Handy, vice president of worldwide strategy and marketing for Power Systems, the eDRAM cache can store one bit of data using only one transistor and one capacitor, instead of the six transistors needed to store one bit in static RAM.
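Here's my own back-of-the-envelope arithmetic on that trade-off - cell counts only, ignoring tag arrays, decode logic, and redundancy, which is why it lands a little above IBM's round 2 billion figure:

# Rough sketch: transistor cost of 32 MB of cache in 6T SRAM cells versus
# 1T1C eDRAM cells. Cell arrays only; real caches add tags and periphery.
cache_bits = 32 * 1024 * 1024 * 8         # 32 MB of L3 cache, in bits
sram_cells = cache_bits * 6               # six transistors per bit
edram_cells = cache_bits * 1              # one transistor (plus one capacitor) per bit
power7_as_built = 1.2e9                   # transistor count of the actual chip
hypothetical_sram_chip = power7_as_built - edram_cells + sram_cells
print(f"6T SRAM cells: {sram_cells / 1e9:.2f} billion transistors")
print(f"1T1C eDRAM cells: {edram_cells / 1e9:.2f} billion transistors")
print(f"Power7 with an SRAM L3: roughly {hypothetical_sram_chip / 1e9:.1f} billion transistors")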

The effect of this eDRAM on the Power7 design, and its performance, is two-fold. First, by putting the L3 cache onto the chip, the latency between the cores and the L3 memory has been cut by a factor of six, according to Handy. (The exact memory latency feeds and speeds were not available at press time.) This means the Power7 cores spend a lot less time waiting for data than the previous Power cores did.

Also, by having that L3 cache take up a lot less space than it might otherwise, IBM could boost the core count by a factor of four, to eight cores on a die, and could double the thread count per core, to four. If it were not for the eDRAM, the Power7 chip might have looked a lot like Tukwila, with half of its transistor budget burned up by cache.

The Power7 chips being announced inside four different Power Systems servers today run at 3 GHz, 3.3 GHz, 3.5 GHz, 3.55 GHz, 3.8 GHz, and 4.1 GHz. (IBM is using Power7 chips with six or eight working cores in the four boxes announced today.) The latter two clock speeds are only available in the Power Systems 780 midrange server, and the top 4.1 GHz speed is only available in the so-called TurboCore mode, when the system microcode is told to shut down half the cores in the eight-core chip so the processor can speed up from the 3.8 GHz it is allowed to run at with all eight cores turned on.

In TurboCore mode, the four active cores get access to all of the 32 MB of eDRAM L3 cache and to both memory controllers, and on database workloads where clocks and cache matter, this can boost performance by around 20 per cent. Moreover, the chips are actually rated to push up to 4.5 GHz, so Power Systems shops can overclock them further if the thermal conditions inside the servers allow it, boosting performance even more. Without overclocking, the Power7 cores - not the chips, but the cores - in the Power Systems 780 have about twice the database performance of the cores in the Power 570 machines using the Power6 and Power6+ chips.
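For a sense of where that per-core boost comes from, here is some illustrative arithmetic using the figures quoted above (the 20 per cent gain is IBM's claim; this just shows how the per-core resources shift when half the cores go dark):

# Illustrative only: per-core resources with all eight cores running versus
# TurboCore mode, using the clock, cache, and bandwidth figures quoted above.
cores_full, cores_turbo = 8, 4
clock_full, clock_turbo = 3.8, 4.1          # GHz
l3_mb, mem_bw_gbs = 32, 100                 # shared L3 cache and sustained memory bandwidth
print(f"Clock gain per core: {clock_turbo / clock_full - 1:.1%}")
print(f"L3 cache per core: {l3_mb / cores_full:.0f} MB -> {l3_mb / cores_turbo:.0f} MB")
print(f"Memory bandwidth per core: {mem_bw_gbs / cores_full:.1f} GB/sec -> {mem_bw_gbs / cores_turbo:.1f} GB/sec")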

"The slowest speed bin Power7 core is faster than a 5 GHz Power6 core," brags Handy. It will be interesting to see that claim verified with some performance data.

Equally important for an IBM that is doing battle with Oracle and its 64-threaded Sparc T2 and T2+ chips, and with the quad-core, eight-threaded Tukwilas due from Intel today, the Power7 chip has 32 threads, eight times as many as the Power5 through Power6+ chips could bring to bear on workloads that like threads. One of those workloads is IBM's own WebSphere Application Server, and on early benchmark tests, shifting from a Power6 to a Power7 system with the same number of cores boosted the performance of WebSphere running on AIX by 85 per cent.
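Putting those thread counts side by side, using the core and thread figures mentioned in this article (the eight-core, eight-thread breakdown for the Sparc T2 is Sun's published configuration, not something stated above):

# Hardware thread counts implied by the chips mentioned above (cores x SMT threads per core).
chips = {
    "Power6/Power6+": (2, 2),   # dual-core, two threads per core
    "Power7": (8, 4),           # eight cores, four threads per core
    "Tukwila": (4, 2),          # quad-core, eight-threaded
    "Sparc T2/T2+": (8, 8),     # 64 threads per chip
}
for name, (cores, threads) in chips.items():
    print(f"{name}: {cores} cores x {threads} threads = {cores * threads} threads per chip")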

By the way, each Power7 chip has a feature called Intelligent Threads, which allows those virtual instruction streams to be turned on and off as conditions dictate. The Power7 processors and their systems also have something called Active Memory Expansion, a memory compression technology for main memory built into the chip, which IBM has not discussed before and has not provided much detail about as yet. From the brief mention it got in the official announcement today, it looks like AME offers 2:1 data compression on the DDR3 main memory. ®
