The second day of the Hitachi Data Systems (HDS) Bloggers event is finished and I'm writing this piece on the airplane flying back home to Italy.
HDS people joked about their unified/converged stack architecture while presenting their stack proposition, repeatedly calling it "uni-verged" (unified and/or converged).
It may surprise many of you, but HDS has not one but two distinct blade server offerings, the BladeSymphony 320 and the BladeSymphony 2000, targeting different markets. The 320 is for small environments while the 2000 is aimed at large datacentre customers.
The BladeSymphony 2000 series can be summarized in a few very interesting points:
- Firmware-level virtualization (as close as you can get to hardware partitioning on x86)
- Intel 5600 and 7500 CPU support
- Up to four blades can be joined into a single SMP system with up to eight 8-core CPUs. Wow!
- Near-linear scalability for these expanded machines (another wow!)
- Very well balanced architecture with powerful I/O capabilities (it also has an external PCIe expander box to get more PCI slots)
The most noteworthy feature of this platform is its partitioning capability: you can partition the blades in hardware, with no need for specific drivers on the major operating systems; Windows and Linux (RHEL and SuSE) are supported. The capability has a mainframe-like name, "LPAR" (as you probably know, HDS is still proud of its mainframe roots). Alternatively, you can use hypervisors like VMware or Hyper-V.
But I didn't see anything related to converged Ethernet, I/O virtualization capabilities or management tools.
The BladeSymphony 320 is a compact, very dense (6 rack units) chassis with ten 2-way blade slots, without the virtualization features mentioned above; it's a cheaper, simpler product. As with many other vendors, you can choose among multiple blade options (e.g. a storage blade full of hard drives).
The system is well designed, with some cool features like hot-swap components and automated blade failover (obviously the blades are kept stateless, with the chassis controller keeping track of things like MAC addresses and WWNs).
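The stateless-failover idea can be sketched roughly as follows: identities live in the chassis controller rather than on the blades, so a spare slot can assume a failed blade's profile. This is a minimal, hypothetical Python sketch of the concept, not HDS firmware; the class and method names are my own invention.

```python
# Sketch of stateless-blade failover: network and storage identities
# (MAC addresses, WWNs) are owned by the chassis controller, not the
# blades, so a spare blade can take over a failed blade's identity.
# Hypothetical illustration only -- not the actual HDS implementation.

class ChassisController:
    def __init__(self):
        self.profiles = {}   # slot number -> identity profile (MAC, WWN)
        self.spares = []     # slot numbers available for failover

    def assign(self, slot, mac, wwn):
        """Bind an identity profile to a blade slot."""
        self.profiles[slot] = {"mac": mac, "wwn": wwn}

    def add_spare(self, slot):
        self.spares.append(slot)

    def fail_over(self, failed_slot):
        """Move the failed blade's identity onto a spare slot."""
        spare = self.spares.pop(0)
        self.profiles[spare] = self.profiles.pop(failed_slot)
        return spare
```

Because the SAN zoning and network configuration follow the WWN and MAC rather than the physical blade, the takeover is transparent to the fabric.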
The blade platform also has centralized management (though we didn't have the chance to see it live) and built-in switches for both SAN and networking. The networking side is nothing to get excited about: SAN ports and Layer 3 switches with uplink ports.
HDS has some preconfigured, pre-cabled, pre-installed and, most importantly, certified stacks with blades and midrange or high-end storage. I know nothing about services and support, but HDS has a good service department overall and I can imagine it will be prepared when these systems hit the EU and US markets.
I'm sure that HDS can be a good player in the datacentre space with these blades but they need to work hard to improve the networking side to become a serious competitor to Cisco or, in some cases, even HP!
Private Cloud made easy
From my point of view, the greatest thing I saw in these two HDS Blogger days was the HCP (Hitachi Content Platform) coupled with the HDI (Hitachi Data Ingestor). It's a killer product for building true, easily deployable private cloud storage.
The Ingestors are simple appliances to fit any pocket, ranging from a simple virtual machine to a full-featured clustered system with local storage, and they act as CIFS/NFS gateways to a central object repository: the HCP. The architectural design is so simple it's genius!
If you already know vendors like Nasuni you can easily understand what I mean. It's a phenomenal advantage for the private cloud because the whole architecture is object-based: the Ingestors manage files as objects, sync them to the central repository and act as a local cache, so capacity is virtually unlimited; you need to worry about the local cache size for performance reasons only.
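The gateway-plus-object-store pattern described above can be sketched roughly like this. It's a minimal, hypothetical Python sketch of the concept, not HDS code; the class names, the eviction policy and the method signatures are all my own invention.

```python
# Minimal sketch of a caching file gateway in front of an object store.
# Hypothetical illustration only -- not the actual HDI/HCP implementation.

class ObjectRepository:
    """Stands in for the central object store (the HCP's role)."""
    def __init__(self):
        self._objects = {}          # object id -> bytes

    def put(self, obj_id, data):
        self._objects[obj_id] = data

    def get(self, obj_id):
        return self._objects[obj_id]

class Ingestor:
    """Stands in for the CIFS/NFS gateway (the HDI's role).

    Writes land in a bounded local cache and are synced to the
    repository; reads are served from cache when possible and
    re-fetched after eviction, so the gateway's usable capacity is
    limited only by the central store, not by its local disk.
    """
    def __init__(self, repo, cache_limit):
        self.repo = repo
        self.cache = {}             # path -> bytes (insertion-ordered)
        self.cache_limit = cache_limit

    def write(self, path, data):
        self.cache[path] = data
        self.repo.put(path, data)   # sync the object to the repository
        self._evict()

    def read(self, path):
        if path not in self.cache:  # cache miss: pull from repository
            self.cache[path] = self.repo.get(path)
            self._evict()
        return self.cache[path]

    def _evict(self):
        while len(self.cache) > self.cache_limit:
            self.cache.pop(next(iter(self.cache)))  # drop oldest entry
```

The point of the sketch is that eviction never loses data: an object dropped from the gateway's cache is simply re-fetched from the repository on the next read, which is why only performance, not capacity, depends on the cache size.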
The HCP keeps a copy of all the objects and has features like dedupe (only at the object level for now), and it grants access to objects via HTTP and a REST API. Multi-tenancy and security are built into the foundation of the product architecture, and the replication options are very granular.
I keep telling myself that what I'm seeing is a beautiful product, but still some questions come to mind, the first being: "Why aren't they selling this product like hot cakes?" It's a killer application because it's a different kind of unified storage: not blocks+files but files+objects. Many vendors talk about cloud without a genuinely cloudy product in their portfolio; HDS, on the contrary, has a real one, and this is the first time I have heard about it.
I'm probably not close enough to HDS to know everything about their product line, but I'm surely not alone: many of the bloggers in the room today had never seen, or even heard of, HDI. The HDS cloud message is indeed still not clear, and the risk is that, as has happened in the past, they'll perform poorly in execution. HDS has good vision, product, engineering and architecture, but it isn't communicating or evangelizing them to customers in the right way.
The event wrapped up with a great speech from David Merrill on storage economics. I strongly suggest you follow his blog because he is a mind opener when the discussion moves from TCA (total cost of acquisition) to TCO (total cost of ownership). That's all I have to say. It's been a good event, a good networking opportunity and a really good way to get first-hand information straight from the horse's mouth.
*Disclaimer: HDS invited me to this event and paid for travel and accommodation, but I'm not under any obligation to write any material about the event. ®
Enrico Signoretti is the CEO of Cinetica, a small consultancy firm in Italy which offers services to medium/large companies in finance, manufacturing, and outsourcing. The company has partnerships with Oracle, Dell, VMware, Compellent and NetApp.