Comment Holy Moly, HGST is getting ambitious. It's building an active archive platform product in competition with some of its OEMs and it's aiming to rewrite server clustering with a flash fabric - oh, and to develop helium-filled disk drives - and shingled drives with its own slant - and it's thinking of Phase Change Memory chips with DIMM connectors.
At a UK press roadshow providing background to its recent slew of disk and flash announcements, Mike "Gus" Gustafson, HGST SVP and GM, said HGST wants to be more aggressive about using the parts of its portfolio. He split it into three layers:
- Storage devices from flash to active archive
- Device affinity - how to be smarter about accessing devices and data
- Advanced software for analysis, decision-making, new business opportunities
To analyse data and separate out the digital gold from the digital dross you have to store it on the one hand and access it fast on the other. Let's cover the ground here from the server side first.
Ulrich Hansen, HGST's veep for product marketing, talked about the PCM (Phase-Change Memory) SSD demo with much faster access than a flash SSD. Is HGST considering flash DIMMs as a way of dropping access latency below PCIe flash?
Yes, HGST is thinking about it, but it's not convinced flash DIMMs are the way to go: PCM DIMMs could be much more attractive, due to their lower latency, than PCIe PCM cards or flash DIMMs, for instance.
Server-side flash SAN
Virident Space can be used to build a server-side flash SAN, with, for example, Virident FlashMax NAND drives and sharing software. The Space product adds a flash volume manager with what HGST calls "storage affinity" and can provide a single 38.4TB volume or multiple sub-division volumes from that 38.4TB to up to 128 clustered server nodes. The cluster interconnect is, for now, InfiniBand or high-speed Ethernet. The host OS for now is Linux, with Windows likely to be added in the first quarter next year.
This flash fabric could be used for Oracle RAC, avoiding the need to use server pairs for scale-out, Red Hat Linux and KVM and MySQL applications. An open API is needed. It can dynamically scale-up in a node by adding PCIe flash cards and scale out by adding server nodes. A server can be a single or multi-socket machine. Gustafson called it "a breakthrough for the entire industry."
We wonder about Atlantis USX and PernixData hypervisor caching software using it. No doubt they are evaluating the idea.
The HE10 10TB shingled magnetic recording drive was presented to us, and Hansen said that HGST requires host (server) software to manage the groups of tracks, or zones, and to be aware that if part of a zone needs re-writing then the whole zone needs re-writing. Track zones are 256MB in size, so there are tens of thousands of zones on the drive.
This means HGST SMR drives are not plug-and-play. The host OS or the drive-accessing application has to be aware of the zoning and have a filesystem that can cope with it; software changes, in other words.
Hansen said: "We don't allow anything that's not a sequential write and so we get full write performance from the drive."
The implication is that Seagate's SMR implementation has the drive handle the track zone management and so causes a slowdown in write performance.
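The host-managed scheme Hansen describes can be sketched in a few lines. This is a toy model, not HGST's actual firmware or any real SMR API: each 256MB zone keeps a write pointer, the drive only accepts appends at that pointer, and changing data already written means the host must reset and sequentially rewrite the whole zone. The class and method names here are illustrative assumptions.

```python
# Toy model of a host-managed SMR drive: sequential-only writes per zone,
# and an in-place update forces a full-zone rewrite (as the article describes).

ZONE_SIZE = 256 * 1024 * 1024  # 256MB zones, per the article


class Zone:
    """One shingled track zone; data grows only at the write pointer."""
    def __init__(self):
        self.data = bytearray()

    @property
    def write_pointer(self):
        return len(self.data)


class HostManagedSMR:
    """Host-side view: the drive rejects anything but sequential appends."""
    def __init__(self, num_zones):
        self.zones = [Zone() for _ in range(num_zones)]

    def append(self, zone_no, payload):
        zone = self.zones[zone_no]
        if zone.write_pointer + len(payload) > ZONE_SIZE:
            raise ValueError("zone full")
        zone.data += payload  # only ever writes at the write pointer

    def update(self, zone_no, offset, payload):
        """Updating written data = read zone, reset it, rewrite it all."""
        zone = self.zones[zone_no]
        buf = bytearray(zone.data)                  # host reads whole zone
        buf[offset:offset + len(payload)] = payload  # patch in memory
        zone.data = bytearray()                     # reset write pointer
        zone.data += buf                            # sequential full rewrite
```

The `update` path is why unmodified filesystems can't use these drives: a 4KB change to an already-written block turns into a rewrite of up to 256MB, so the host software has to batch and sequence writes itself.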
The joint venture with Amplidata, using its Himalaya object storage software in an online, Active Archive system, will use HGST's SMR drives. The Himalaya software will be developed to be SMR track zone-aware. Furthermore, HGST itself is developing the hardware platform, an approximate 6-8U rack enclosure containing software and server and storage resources.
HGST Active Archive platform
What about competing with its own OEMs in this area? Gustafson said HGST had been open and transparent about its ideas in discussions with its OEMs and was looking for partnership opportunities in this, relatively, green field application area.
Hansen said Linux may get an SMR-aware file system, and confirmed that, if drive-using software is not changed, the SMR drives are unusable. He also said: "We don't believe SMR is a short-term technology. It will have long-term benefits and will be complemented with other recording technologies." In other words, future HAMR drives could use SMR to boost their capacity.
One other titbit came out: Avere, with its FXT filer-accelerating and cloud storage gateway technology, is involved. WD, HGST's parent, has invested in both Avere and Amplidata. These are strategic relationships.
Avere's initial role is to provide file-level access to the HGST Active Archive platform. But a subsequent role could be to provide remote access, to an Active Archive system in the cloud for example. Oh, and Avere could see more HGST componentry in its FXT systems.
What we are seeing here is a huge expansion in HGST's reach and ambitions out from its core disk drive business. It has products and plans in the server flash card and server-side flash SAN space, is demonstrating leadership in capacity-focussed disks with helium-filled drives and SMR helium drives, and is boldly going into the object storage archive array space.
HGST is moving down the component stack, heading straight towards close coupling of server CPU, memory and its non-volatile technology, building out its PCIe flash hardware and software product lines, with PCM in its sights, building out its disk drive products and, most startling, going into the storage array business in the archive niche, which involves servers.
Seagate, with its Xyratex ClusterStor arrays and Kinetic and SMR drives is also making strides outside its core disk drive business. What we didn't hear from HGST was anything about hybrid flash/disk drives or Ethernet-accessed drives. Maybe there's more to come. ®