Hitachi's flexible servers sport homegrown LPAR virtualization

And maybe an OEM agreement with IBM, too

Sometimes Hitachi and IBM are bitter enemies, as they were in the peak days of the mainframe market, and sometimes they are partners, as they have been with Hitachi making some entry mainframes for Big Blue in recent years while also reselling IBM's Power-based AIX servers in Japan. It looks like the two are working together on their new converged systems, too.

In the wake of IBM's launch last week of its Flex System machines, which are at the heart of its PureFlex infrastructure and PureApplication platform cloudy boxes, Hitachi announced a new blade server called the Compute Blade 500. The name refers to the chassis design, not the nodes, which are called the CB520H blade servers by Hitachi. These CB520H nodes are two-socket server nodes based on Intel's new Xeon E5-2600 processors, which debuted in early March – just like IBM's Flex x240 server nodes. And when I say just like IBM's server nodes, I mean just like them.

Take a gander at the Hitachi Compute Blade 500 chassis with eight half-width, single-bay compute nodes slid into its enclosure:

Hitachi's Compute Blade 500 chassis

Now look at the frontal shot of IBM's Flex x240 server node:

IBM Flex x240 server node, front view

It sure does look like someone copied off someone else's homework, doesn't it? Neither IBM nor Hitachi were available for comment as El Reg went to press. But it is pretty clear that IBM and Hitachi are partnering in some way on the mechanicals of the chassis and server node design, if not partnering to manufacture them together.

The IBM Flex System, as El Reg detailed here, is a 10U chassis with 14 half-width, 2.5-inch high server bays. The Hitachi Compute Blade 500 is also a rack design with horizontal nodes, only it fits eight of those single-bay nodes into a 6U chassis. The CB520H compute nodes have 24 DDR3 memory slots and support 384GB using 16GB memory sticks, or 512GB using 32GB sticks across 16 of those slots. Hitachi does not appear to be supporting load-reduced (LR-DIMM) sticks at 32GB capacities, which would allow the CB520H node a maximum of 768GB of memory. The x240 from IBM and the CB520H from Hitachi both have a dual-port 10 Gigabit Ethernet NIC welded to the mobo, plus two mezzanine slots for connectivity to the chassis midplane and linking out to switches.
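For those keeping score at home, the memory math above works out as you'd expect. Here is a quick back-of-the-envelope check in Python, using only the slot counts and stick capacities quoted from Hitachi's spec sheet:

```python
# Sanity-check the CB520H memory configurations cited above.
# Slot counts and stick sizes come from the article; the 768GB
# figure is the hypothetical LR-DIMM ceiling Hitachi is not offering.
configs = {
    "16GB sticks in all 24 slots": 24 * 16,
    "32GB sticks in 16 slots": 16 * 32,
    "hypothetical 32GB LR-DIMMs in all 24 slots": 24 * 32,
}

for name, capacity_gb in configs.items():
    print(f"{name}: {capacity_gb}GB")
```

Run it and you get 384GB, 512GB, and 768GB respectively, matching the figures in the spec sheet.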

The two Xeon E5-2600 nodes are probably the same, and the Hitachi chassis has an integrated management module (in fact, you can have a pair for redundancy) like the chassis management modules (CMMs) that IBM puts into the Flex System chassis. These could be the same as well.

Each Hitachi node has a baseboard management controller (BMC) and a remote KVM switch for managing the nodes, and this is different from what IBM is doing. IBM is using its Flex System Manager appliance server, which runs on a special x86 node in one slot in one chassis and which can scale across 16 enclosures, to perform these functions in conjunction with the CMMs in the back of its Flex System chassis. The Hitachi chassis has six cooling fans and up to four power supplies, and has room for four switch modules (two come standard).

In a blog post, Hu Yoshida, vice president and CTO at the Hitachi Data Systems unit of the Japanese computer giant, said that the Compute Blade 500 chassis has an Ethernet switch that it OEMs from Brocade Communications, called the VDX 6746, which has 16 internal ports (two per server node) running at Gigabit Ethernet speeds, plus two 10 Gigabit Ethernet ports and four Gigabit Ethernet ports reaching outside of the chassis to the world. If you don't need 10GE uplinks, there is a plain-vanilla Gigabit Ethernet switch module that has 16 internal ports and four uplinks all running at the same speed. There is also a converged fabric switch module that runs at 10GE speeds and supports Fibre Channel over Ethernet (FCoE) for converging storage and server traffic on the same switch; this switch has 16 internal ports and eight external ports.

The CB520H blade has two hot-plug slots in the front of the node (directly ahead of the Xeon processors) for installing 2.5-inch SATA or SAS disks.

Hitachi is supporting Microsoft's Windows Server 2008 and Red Hat's Enterprise Linux 6.2 operating systems on the blade, as well as VMware's ESX 4.1 and 5.0 and Microsoft's Hyper-V server virtualization hypervisors.

The company is also supporting its homegrown logical partitioning (LPAR) technology, which was ported over from its Itanium-based BladeSymphony line as the Virtage hypervisor back in the fall of 2007. (The LPAR functionality could come from Hitachi mainframes or even IBM's PowerVM hypervisor for Power chips, but El Reg doubts it.)

In any event, this LPAR is leaner and meaner than ESXi or Hyper-V, according to the Hitachi data sheets (PDF). It also allows for up to four LPARs per physical blade in a base setup, with the ability to scale that up to 30 LPARs per blade. (A CB520H blade has 16 cores and 32 threads, so it looks like Hitachi's x86-based LPAR is tying a partition to a CPU thread at its most granular level, with a little left over for the hypervisor to do its job.) By the way, you can mix and match Hitachi LPARs, Microsoft Hyper-V, and VMware ESXi in a single chassis and not drive the management tools nuts.
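The thread-granularity inference above is easy enough to check. A minimal sketch, assuming the standard top-bin Xeon E5-2600 configuration of two eight-core sockets with HyperThreading (which is what gives the blade its 16 cores and 32 threads):

```python
# Rough check of the one-LPAR-per-thread inference: a two-socket
# Xeon E5-2600 blade tops out at 32 hardware threads, and Hitachi
# caps Virtage at 30 LPARs per blade.
sockets = 2
cores_per_socket = 8       # top-bin E5-2600 parts
threads_per_core = 2       # HyperThreading

hw_threads = sockets * cores_per_socket * threads_per_core
max_lpars = 30

print(f"hardware threads: {hw_threads}")
print(f"threads left over for the hypervisor: {hw_threads - max_lpars}")
```

That leaves two threads unaccounted for, which is the "little left over for the hypervisor to do its job" mentioned above.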

Hitachi did not divulge pricing, but did add that it would be creating new "converged data center solutions" based on the Compute Blade 500 chassis "in the coming months". ®
