
Netlist goes virtual and dense with server memory

So much for that Cisco UCS memory advantage

Netlist, a publicly traded company based in Irvine, California, that was founded in 2000 and that you have probably never heard of, looks set to make a big splash at the SC09 supercomputing trade show next week. Netlist, which makes memory modules on an OEM basis for various companies, said Wednesday that in December it will roll out a virtualized, dense memory DDR3 module that will be able to trick servers into having more main memory than they are supposed to.

The ability to offer more memory capacity on an x64 server than is allowed in standard boxes is, of course, one of the key selling points of Cisco Systems' B250-M1 blade server in the "California" Unified Computing System and its rack-based sibling, the C250-M1. Both of these machines are equipped with a special memory ASIC that allows Cisco's two-socket Xeon 5500 servers to address up to 384GB of main memory, when a standard server tops out at 96GB in machines with 12 memory slots and at 144GB in machines with 18 slots using very pricey 8GB DIMMs. With the memory extender ASIC on the B250 and C250 servers, Cisco can use less capacious and cheaper 2GB and 4GB DIMMs to get a given capacity, and do so for a lot less money - between a quarter and a third of the price of other server platforms that have to resort to denser and more expensive memory.

Enter Netlist with its HyperCloud memory modules - which, by the way, will plug into any server and which will do essentially the same trick as Cisco is pulling, but do so inside the memory module rather than on the motherboard.

Netlist got its start in 2000 doing custom printed circuit board design, and a "netlist," according to Paul Duran, director of business development at the company, is akin to a bill of materials for all of the connectivity on a PCB. A few years back, when dense rack and blade servers started shipping in volume, Netlist became a specialist in making very low profile memory on an OEM basis for blade server makers. (The company does not disclose who its customers are, but they're probably the usual suspects.) The company also developed a memory packaging technology called Planar-X, which allows two PCBs loaded with memory chips to be packaged together relatively inexpensively to share a single memory slot. This technique is cheaper and more reliable, according to Duran, than some of the dual-die packaging techniques memory module makers use to make dense memory cards out of lower density, cheaper memory chips.

With the HyperCloud memory, Netlist has created its own memory virtualization ASIC and is plunking it onto planar or Planar-X DDR3 memory modules, depending on what server makers want to deploy.

Using the Planar-X double-board designs, Netlist can take 1Gb memory chips and make an 8GB memory module that costs only 20 to 30 per cent more than a standard 4GB module using 1Gb chips; using 2Gb chips, it can make a 16GB module, something no one else can do yet.
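A quick back-of-the-envelope check on those density and pricing figures. The 32-chip x8 layout and the 25 per cent premium (midpoint of the quoted 20 to 30) are assumptions for illustration; the rest comes straight from the figures above:

```python
# Sketch of the density and pricing arithmetic. The 32-chip x8 layout
# (ECC chips ignored) and the 25 per cent premium are assumptions;
# the 1Gb chip size and Planar-X board pairing are from the article.
GBIT_IN_GB = 1 / 8                            # one gigabit, in gigabytes

chips_per_board = 32                          # x8 organization, ECC ignored
board_gb = chips_per_board * 1 * GBIT_IN_GB   # 1Gb chips -> 4GB per board
module_gb = 2 * board_gb                      # Planar-X pairs two boards -> 8GB

premium = 1.25                                # 8GB module at 1.25x the 4GB price
per_gb_ratio = (premium / module_gb) / (1 / board_gb)  # vs the 4GB module

print(module_gb, per_gb_ratio)                # 8.0 0.625
```

In other words, on these assumptions the doubled-up module works out to roughly 60 per cent of the per-gigabyte price of the standard part it is built from.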

But this is not the neat bit.

In the current Nehalem Xeon designs, there is a limit to the number of ranks of memory that each memory channel can address. A standard DDR3 DIMM has four ranks of memory (two arrays of chips linked together on each side of the module), and each memory channel in the processor's integrated memory controller can only address eight ranks. With three channels per Nehalem processor, you top out at 96GB of main memory using four-rank 8GB DDR3 DIMMs, which aren't even available yet - and even when they are, they will be wicked expensive. And on the machines that have 18 memory slots, offering 144GB of memory capacity using 8GB DIMMs, you have to slow the memory down because there is a cap on the bandwidth the Xeon 5500s and their memory controllers will allow, which means the memory has to run at 800MHz instead of 1.33GHz. This obviously has a big effect on performance.
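To put numbers on that rank limit: with quad-rank DIMMs, eight ranks per channel means only two DIMMs per channel, which is where the 12-slot, 96GB ceiling comes from. A minimal sketch of the figures above:

```python
# Capacity ceiling for a two-socket Nehalem box, per the article's figures.
CHANNELS_PER_SOCKET = 3          # Xeon 5500 integrated memory controller
RANK_LIMIT_PER_CHANNEL = 8
RANKS_PER_DIMM = 4               # quad-rank DDR3 DIMM

dimms_per_channel = RANK_LIMIT_PER_CHANNEL // RANKS_PER_DIMM  # 2
slots = 2 * CHANNELS_PER_SOCKET * dimms_per_channel           # two sockets
capacity_gb = slots * 8          # with (pricey) 8GB quad-rank DIMMs

print(slots, capacity_gb)        # 12 96
```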

The HyperCloud 2 vRank DDR3 DIMMs have two special ASICs. The first, a register device, presents four physical ranks of memory as two virtual ones to the memory controller on an x64 processor (it works with either Xeon or Opteron processors, and any DDR3 memory controller, for that matter). This allows the doubling-up of memory modules per channel. And, thanks to the Planar-X packaging, Netlist can put twice as much physical memory in a slot. The other ASIC is an isolation device that makes four memory slots look like one as far as the memory controllers and memory bandwidth are concerned, allowing main memory to run at the full 1.33GHz speed, even on a system with 18 fully populated slots. The 384GB maximum per two-socket server cited above assumes 24 memory slots.
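As a rough illustration of what such a register device could be doing - this is a guess at the mechanism, not Netlist's published design - the module answers to two chip selects and internally fans each out to a pair of physical ranks using a borrowed address bit:

```python
# Hypothetical sketch of rank virtualization: two virtual chip selects
# fan out to four physical ranks via one borrowed address bit. The bit
# layout is illustrative only; Netlist has not disclosed its design.
def physical_rank(virtual_cs: int, extra_bit: int) -> int:
    """Map (virtual chip select, borrowed address bit) -> physical rank."""
    return (virtual_cs << 1) | extra_bit

# All four physical ranks are reachable through just two virtual ranks,
# so the memory controller only ever "sees" two ranks on the module.
reachable = {physical_rank(cs, b) for cs in (0, 1) for b in (0, 1)}
print(sorted(reachable))  # [0, 1, 2, 3]
```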

Duran says that Netlist will be making the first batches of HyperCloud memory using Hynix memory chips, and that the plan is to charge a slight premium (that's the 20 to 30 per cent mentioned above) for this memory compared to the prevailing spot prices for unvirtualized 4GB and 8GB DDR3 memory modules. The company can probably charge more for 16GB modules, since no one has these yet.

What's the net effect of using HyperCloud memory? Duran cooked up this comparison: take 60 Xeon-based servers, each hosting 48 virtual machines allocated 4GB apiece. By installing HyperCloud memory and doubling up memory capacity, the number of servers can be cut in half (assuming memory, not CPU capacity, is the main bottleneck for virtualization). Grant that assumption - and maybe it is rational and maybe not, but it is certainly Cisco's sales pitch too - and using HyperCloud memory on those Xeon rack servers halves the server count, cuts power consumption by 36 per cent, and trims hardware and software costs at the data center level by around 20 per cent. Memory is a big component of a server's price these days, and Netlist is charging a premium for its virtualized memory; hence, even when you cut the server count in half, the iron cost doesn't come down as much as you might expect.
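The server-count arithmetic in that pitch is simple enough to sketch, assuming, as Duran does, that memory rather than CPU is the binding constraint. Only the halving is computed here; the 36 and 20 per cent savings figures are Duran's claims, not derived:

```python
# Duran's consolidation example, restated as arithmetic.
def servers_needed(total_vms: int, vms_per_server: int) -> int:
    return -(-total_vms // vms_per_server)   # ceiling division

total_vms = 60 * 48                          # the fleet in the example
before = servers_needed(total_vms, 48)       # standard memory
after = servers_needed(total_vms, 48 * 2)    # HyperCloud doubles VM density

print(before, after)  # 60 30
```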

One saving that Duran did not fold into that calculation was power and cooling: the HyperCloud memory burns under 10 watts for a 16GB module, and in general, for a given capacity, a HyperCloud module will burn 2 to 3 watts less than a standard DDR3 module. And because HyperCloud memory can run at the full 1.33GHz speed regardless of the capacity in the box, there should be a sizeable performance boost on applications that are sensitive to memory bandwidth - maybe as high as 50 per cent, says Duran.

Netlist plans to start sampling HyperCloud memory modules to OEM customers beginning in December, and expects volume production to begin in the first quarter of 2010. You can bet that more than one server maker will be lining up to give the product a spin as they try to push Cisco back into its networking space. ®
