Hitachi Data Systems has renamed and refreshed its Content Archive Platform as a cloud-focused platform with multi-tenancy features that scales out to a 96-node cluster.
The Hitachi Content Platform (HCP 3.0) is one way to present a storage personality based on single virtual pools of storage from USP-V and/or AMS storage arrays, alongside Hitachi's NAS platform for file services, and the Data Discovery Suite for collaboration and search. Third-party arrays can be virtualised in via HDS' V Series controller.
The Content Archive Platform was the archive personality and HCP 3.0 has extended this to be a cloud storage platform for enterprises. These are three heads layered above the storage arrays. HDS is at pains to say that its approach prevents cloud storage being just another storage silo and that it enables legacy devices to inherit cloud attributes - not least its own.
Like other storage array suppliers, HDS is heading up into public and private clouds. HCP is described as enabling cloud storage, archiving, business continuity, data consolidation and the creation of the colourfully-named content depot. A single HCP environment can be segregated for different customers and departments, each having its own logical HCP (or logical silo). The default limit is 100 logical HCPs, with capacity scaling to 40PB using 1TB drives.
The minimum configuration is four nodes in a cluster, using an Ethernet interconnect, with the limit being 96. The separation between head nodes and underlying storage means that performance and capacity can be scaled independently. Nodes are either ingest or retrieval nodes and a total of 40 billion objects can be stored in an HCP cluster.
An object can be a file, say a Word document or image, plus a retention requirement and metadata about the file. All three items are stored, secured, replicated etc together. Objects are accessed through a directory structure, versioned and checked for integrity during their retention. They can be assigned to different tiers of storage depending upon their access rate and value. Storage efficiency is increased through single-instancing and compression.
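The object model described above can be sketched roughly as a structure that keeps content, metadata and retention together and records a fingerprint at ingest for later integrity checks. This is a minimal illustrative sketch, not HDS' actual API; all class and field names here are assumptions.

```python
import hashlib
from dataclasses import dataclass, field

# Hypothetical sketch of an HCP-style object: the file content,
# its metadata and a retention requirement travel together and are
# stored, secured and replicated as one unit.
@dataclass
class ArchiveObject:
    path: str                       # position in the directory structure
    content: bytes                  # the file itself, e.g. a Word document
    metadata: dict = field(default_factory=dict)
    retention_days: int = 0         # how long the object must be kept
    version: int = 1                # objects are versioned during retention
    digest: str = ""                # fingerprint used for integrity checks

    def __post_init__(self):
        # Hash the content at ingest time; re-hashing later detects corruption.
        self.digest = hashlib.sha256(self.content).hexdigest()

    def verify_integrity(self) -> bool:
        # True only while the stored content still matches the ingest hash.
        return hashlib.sha256(self.content).hexdigest() == self.digest

obj = ArchiveObject(
    path="/finance/q3-report.doc",
    content=b"quarterly figures...",
    metadata={"author": "jsmith", "department": "finance"},
    retention_days=2555,            # roughly seven years, for compliance
)
print(obj.verify_integrity())       # True while the content is unchanged
```

Keeping the hash with the object is what lets a platform periodically re-verify archived data without consulting any external catalogue.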
Each logical HCP is a tenant and can have a different personality, meaning the way data in it is secured, stored in one or more file namespaces, indexed, encrypted or not, protected, subject to a retention policy, replicated to another HCP, stored immutably or not, retained for compliance purposes, and so forth.
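The per-tenant "personality" idea amounts to a bundle of policies attached to each logical HCP. A rough sketch, assuming a made-up configuration schema (these field names are not HCP's real settings), might look like:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative per-tenant personality for one logical HCP on a shared
# physical cluster. Field names are assumptions, not HDS' actual schema.
@dataclass
class TenantPersonality:
    name: str
    namespaces: list = field(default_factory=list)  # file namespaces
    encrypted: bool = False
    indexed: bool = False
    retention_days: int = 0          # 0 means no retention policy
    immutable: bool = False          # write-once, for compliance
    replicate_to: Optional[str] = None  # target HCP, if replicated

# Two tenants on the same cluster with very different policies.
legal = TenantPersonality(
    name="legal",
    namespaces=["contracts", "holds"],
    encrypted=True,
    indexed=True,
    retention_days=2555,
    immutable=True,
    replicate_to="hcp-dr-site",      # hypothetical replication target
)

engineering = TenantPersonality(
    name="engineering",
    namespaces=["builds"],
)
```

The point of the design is that both tenants share the same hardware while seeing entirely separate policy regimes, which is what stops the cloud store becoming one undifferentiated silo.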
HDS says the product can integrate with third-party applications for eDiscovery and legal hold, although none are identified. Unlike Mimecast's cloud archiving service or non-cloud archiving products such as NearPoint from Mimosa, there are no application-aware content capture features that can, for example, ingest, store and recover emails and folders or SharePoint data.
However, HDS has provided multi-tenancy, huge scalability and a good base for future development. The separation of the archiving storage function from the underlying array is logical and differentiates HDS from storage-array-centric suppliers, such as 3PAR, Compellent, NetApp and Pillar, which would rely on third parties for such storage applications.
Other storage application and system suppliers do provide their own archiving products. EMC has its Centera line, which may progress to using Symmetrix or Clariion storage rather than its own. HP has its IAP, or Integrated Archive Platform, which combines "server and grid storage technology and native content indexing, search and policy management software into a single, factory-assembled rack system." That seems pretty similar to HCP apart from the packaging and current lack of cloudiness. IBM, like HDS, has a cloud archive strategy unifying disk and tape.
Other suppliers include NEC with its gridded HYDRAstor and Permabit with its multi-tenant archival software. Enterprise storage archive supplier ducks are being lined up and cloud is becoming the new archive buzzword attribute. ®