HDS embiggens its object array by feeding it more spinning rust

Gets erasure coding as well

HDS has bulked up its HCP object storage offering with an erasure coding capacity tier, remote office and mobile access to objects, filer data stores and OpenStack support. Why? Preparations for the coming Internet of Things, we think.

Hitachi's Content Platform (HCP) is the central object store in a three-part offering. It's flanked by HDI, the Hitachi Data Ingestor, which sucks in data, and HCP Anywhere, the access provider and management facility. All three have been updated.

Back in June last year HDS added a public cloud backend to HCP as a storage tier and more front-end access. We wrote: "HCP Anywhere provides mobile device access to HCP storage and HDI inputs data into HCP, being a kind of cloud data on-ramp. The latest releases of HCP Anywhere and HDI provide always-on, secure access to data from any IP-enabled device, including mobile phones, tablets, and remote company locations."

Public cloud backends include Hitachi's Cloud Service for Content Archiving, S3, Azure, Verizon Cloud and Google Cloud Storage.

Just six months later HDS has added an on-premises capacity tier and more access options, neatly extending the territory HCP has moved into.


The erasure coding comes in the shape of a new storage tier using a new box, the HCP S10. Existing tiering software moves data onto and off the S10 from the other tiers as needed.

The S10 has 60 x 3.5-inch disk drives in a 4U enclosure, driven by a pair of 6-core CPUs. Each S10 is a node, and 80 nodes can be connected to the HCP controller for a total of 18PB. The nodes connect to the HCP using an S3 interface across Ethernet. They implement erasure coding for data protection and faster-than-RAID rebuilds.

The 80-node/18PB maximum capacity implies 225TB/node. That would, in turn, imply 3.75TB drives. HDS is presenting usable capacity, after erasure coding overhead, and confirms it's using 4TB drives. HDS says the capacity range starts at 112TB, which implies a half-populated enclosure.
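The capacity arithmetic above can be sketched out, using only the figures quoted in this article (80 nodes, 18PB usable, 60 drives per node, 4TB raw drives):

```python
# Sanity-check the HCP S10 capacity figures quoted above.

NODES = 80                # maximum S10 nodes per HCP controller
TOTAL_USABLE_PB = 18      # quoted maximum usable capacity
DRIVES_PER_NODE = 60      # 3.5-inch drives in the 4U enclosure
RAW_DRIVE_TB = 4          # drive size HDS confirms

usable_per_node_tb = TOTAL_USABLE_PB * 1000 / NODES        # 225 TB/node
implied_drive_tb = usable_per_node_tb / DRIVES_PER_NODE    # 3.75 TB usable per drive

# Erasure-coding overhead implied by 4TB raw drives yielding 3.75TB usable:
overhead = 1 - implied_drive_tb / RAW_DRIVE_TB             # ~6.25 per cent

# Entry configuration: 112TB usable implies roughly half the bays populated.
half_populated_raw_tb = (DRIVES_PER_NODE // 2) * RAW_DRIVE_TB      # 120 TB raw
half_populated_usable_tb = half_populated_raw_tb * (1 - overhead)  # ~112.5 TB

print(usable_per_node_tb, implied_drive_tb, overhead, half_populated_usable_tb)
```

The numbers hang together: 4TB raw drives with roughly 6 per cent erasure-coding overhead give 3.75TB usable each, and 30 populated bays land within half a terabyte of the quoted 112TB entry point.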

Why has HDS added an on-premises capacity tier? It's not as inexpensive to use as public cloud storage, but it provides faster, local access. Our best guess is that HDS is looking ahead to data flooding in from the coming Internet of Things (IoT) and wants to store it cost-effectively, protect it better than RAID can with erasure coding, and then feed it fast to on-premises analytics routines.

HDI has become an access gateway, a file-serving gateway for remote and branch offices, as well as being a data on-ramp to the cloud. It can be used to manage quotas at remote sites and for cloud storage from a single interface. HDS says HDI can be used instead of traditional file servers.

HCP Anywhere has added mobile user access to files in existing NAS systems as well as providing file sync 'n' share capability. Users can choose from multiple languages for their client systems. HDI can also be provisioned and managed remotely through HCP Anywhere.

HCP can be used instead of Swift in OpenStack projects, as it supports the Swift API, the Keystone API for authentication, the Horizon management interface, and Glance for VM images.
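Swift compatibility means a standard OpenStack client would authenticate against Keystone and then address objects through the usual Swift paths, with HCP standing in behind the same endpoints. A minimal sketch of the request shapes involved (the endpoint, tenant, and credential names here are hypothetical, not HCP-specific):

```python
import json

def keystone_v2_auth_body(user, password, tenant):
    """Build a Keystone v2.0 token request body (POST /v2.0/tokens)."""
    return json.dumps({
        "auth": {
            "passwordCredentials": {"username": user, "password": password},
            "tenantName": tenant,
        }
    })

def swift_object_url(endpoint, account, container, obj):
    """Standard Swift object path: /v1/<account>/<container>/<object>."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

# Hypothetical values for illustration only:
body = keystone_v2_auth_body("analyst", "s3cret", "research")
url = swift_object_url("https://hcp.example.com", "AUTH_research",
                       "sensor-data", "device42.json")
# A client would POST `body` to the Keystone endpoint, read the token out of
# the response, then PUT the object to `url` with an X-Auth-Token header.
```

Because the paths and auth flow are the standard OpenStack ones, existing Swift tooling should not need to know it is talking to HCP rather than Swift itself.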

To find out more, read two HDS blogs by Hu Yoshida – this one and this one. ®
