Dell has quietly acquired Portland, Oregon-based RNA Networks, one of a handful of innovative startups that have been launched in the past couple of years to glue multiple x64-based servers together and allow them to look like a single, monster server to specific workloads.
Dell has not made a formal announcement of the acquisition, but the RNA Networks site made this simple statement last Friday:
"RNA Networks, Inc. has been acquired by Dell Inc. Dell Inc. (NASDAQ: DELL) listens to customers and delivers innovative technology and services that give them the power to do more."
The announcement also included a sales pitch for you to look for a career at Dell. The rest of the RNA Networks site has gone the way of all flesh.
Dell didn't want to say much more about the acquisition at this point. "RNA Networks has a great memory virtualization technology that Dell will leverage in future offerings," Dell said in a statement to The Register.
A Dell spokesman confirmed that the deal has been completed and said that Dell is not disclosing the terms of the agreement or providing a timeline on when specific product offerings using RNA Networks intellectual property will be used – or where.
But clearly, there are a lot of places where Dell can use the RNA Networks software in its systems, storage, and networking products.
And it is about time that Dell decided sophisticated server clustering technology was strategic to its future and that it needed to be less reliant on partnerships to provide this basic functionality.
RNA Networks was founded in 2006 by Jason Gross and Ranjit Pandit, the latter of whom led the database clustering project at SilverStorm Technologies (which was eaten by QLogic) and who worked on the InfiniBand interconnect and the Pentium 4 chip while at Intel.
It received $7m in Series A venture funding in March 2008, and closed a $7m Series B round in February 2009. Menlo Ventures, Oregon Angel Fund, Divergent Ventures, and Reference Capital have all kicked in dough to RNA Networks.
The company included techies from supercomputer maker Cray, chip maker Intel, host bus adapter maker QLogic, and Web caching provider Akamai. The thing they all had in common was expertise in caching, interconnects, and remote direct memory access (RDMA) technology.
Shared global memory
RNA Networks came out of stealth mode in February 2009 with a server virtualization technology that doesn't carve up a single physical server into multiple virtual machines.
Instead it takes multiple servers and glues their processors and memory into a single virtual image for applications to run upon, just like they would on big and expensive SMP and NUMA servers. It is fairly easy to create a server virtualization hypervisor, but it is tricky to create the high-speed networks and virtualization software that lash many machines together into one.
The RNA software, which has gone by a number of different names in its short history, creates a pool of shared global memory from main memory in each server node that can be accessed like a cache by all of the other nodes in a cluster.
The server nodes can be linked with either Ethernet or InfiniBand networks, with or without RDMA turned on, but obviously InfiniBand with RDMA or Ethernet with RDMA over Converged Ethernet (RoCE) will significantly improve the performance of the virtualized memory pool.
RNA Networks then plunks a messaging engine, an API layer, and a pointer updating algorithm on top of the global shared memory infrastructure, with the net effect that all nodes in the cluster see the global shared memory as their own main memory.
The RNA code keeps the memory coherent across the servers, giving all the benefits of an SMP or NUMA server without actually lashing the CPUs on each machine together tightly so they can run one copy of the operating system.
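To make the idea concrete, here is a minimal sketch of invalidation-based coherence over a shared page pool: each node caches pages locally, and a write invalidates every other node's cached copy so all nodes see one consistent memory image. All class and method names here are hypothetical illustrations, not RNA's actual API.

```python
class GlobalMemoryPool:
    """The shared pool of pages contributed by all nodes in the cluster."""
    def __init__(self):
        self.pages = {}   # page_id -> bytes (the authoritative copy)
        self.nodes = []   # participating cluster nodes

    def register(self, node):
        self.nodes.append(node)

    def invalidate(self, page_id, writer):
        # Drop stale cached copies on every node except the writer.
        for node in self.nodes:
            if node is not writer:
                node.cache.pop(page_id, None)

class Node:
    """A server that caches pool pages as if they were local main memory."""
    def __init__(self, name, pool):
        self.name, self.pool, self.cache = name, pool, {}
        pool.register(self)

    def read(self, page_id):
        if page_id not in self.cache:              # miss: fetch from the pool
            self.cache[page_id] = self.pool.pages[page_id]
        return self.cache[page_id]

    def write(self, page_id, data):
        self.pool.pages[page_id] = data            # update the shared pool
        self.cache[page_id] = data
        self.pool.invalidate(page_id, self)        # keep other nodes coherent
```

In a real system the fetch in `read` would travel over InfiniBand or RDMA-capable Ethernet rather than a Python dictionary lookup, but the coherence discipline is the same: writes reach the pool, stale copies get invalidated.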
For financial and scientific applications that run messaging protocols, this approach allows for both horizontal application scalability and very fast data sharing across the nodes.
RNAmessenger, the company's first product, deployed on 32-bit or 64-bit hardware and ran on Sparc, Power (including the now defunct Cell chip), x86, x64, and Itanium processors. The software could boost throughput on workloads by a factor of 10 to 30 and scale to hundreds of nodes and multiple terabytes of main memory across those nodes.
Memory Virtualization Acceleration
Last July, RNA Networks rejiggered its product line, converging a bunch of separate features in RNAmessenger and RNAcache, a separate but related product for Web caching, into a single product called Memory Virtualization Acceleration, or MVX for short, and put out a 2.5 release.
This included a bunch of new features called Memory Cache, Memory Motion, and Memory Store. The cache feature turns a memory pool into a cache for network-attached storage (NAS) arrays, which RNA Networks contends is a lot cheaper than putting gobs of memory onto a NAS box.
Memory Store turns the memory pool into virtual block storage devices that look like virtual RAMdisks to servers. The Memory Motion feature is aimed at giving operating systems on virtual servers a swap device to get around waiting for the underlying iron, which might be disks or solid state drives, to get data.
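The Memory Store trick is straightforward in principle: present cluster RAM behind a sector-oriented interface, and disk-trained software never has to know the "disk" is memory. A minimal sketch of that block-device facade follows; the class and the 512-byte sector size are illustrative assumptions, not details of RNA's product.

```python
SECTOR = 512  # assumed sector size for this sketch

class MemoryBlockDevice:
    """RAM-backed storage that speaks the block-device idiom:
    fixed-size sectors addressed by logical block address (LBA)."""
    def __init__(self, num_sectors):
        self.store = bytearray(num_sectors * SECTOR)  # the RAM backing

    def read_block(self, lba):
        off = lba * SECTOR
        return bytes(self.store[off:off + SECTOR])

    def write_block(self, lba, data):
        # Block devices move whole sectors, never arbitrary byte ranges.
        assert len(data) == SECTOR
        off = lba * SECTOR
        self.store[off:off + SECTOR] = data
```

Because the interface is just sector reads and writes, a filesystem, swap subsystem, or database built for disks can sit on top unchanged, which is the compatibility point the MVX software was selling.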
As you might expect, the MVX block device was very fast, reading data 52 times faster and writing data 24 times faster than SATA disks; it absolutely blows away SSDs, too, which makes sense, with main memory being so much faster than flash memory.
The benefit of the MVX software is that hypervisors, operating systems, and applications that have been written for disk access don't have to be recoded - you just point them at the MVX software, which looks and smells like a file or block device, depending on what you need. ®