Red Hat's OpenShift Container Platform openly shifts storage into the hands of devs

Dynamically allocate without an admin over your shoulder


Enterprise Linux biz Red Hat has revised its OpenShift Container Platform to include support for dynamic storage provisioning for both local and remote applications.

The software is an on-premises platform-as-a-service product that allows organizations to run applications using Kubernetes orchestration and Docker containers.

The latest iteration of the software, OpenShift Container Platform 3.4, provides on-the-fly container-native storage through Red Hat Gluster Storage, a software-defined file storage system for on-premises and public cloud installations.

The software now allows developers to allocate storage as needed and to deploy it with minimal effort.

In a statement, Ashesh Badani, VP and general manager of OpenShift at Red Hat, said the update addresses "the growing storage needs of both stateful and stateless applications across the hybrid cloud, allowing for coexistence of modern and future-forward workloads on a single, enterprise-ready platform."

Joe Fernandes, senior product manager for OpenShift, in a phone interview with The Register, explained that OpenShift Container Platform has supported stateful applications and storage since the company transitioned its software to support Kubernetes and Docker a year and a half ago.

Previously, said Fernandes, adding storage required the involvement of an administrator. "Dynamic provisioning means being able to spin up the storage for each application when the developer configures it in real time," he said.
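In Kubernetes terms, that means the developer simply files a PersistentVolumeClaim against a StorageClass and the provisioner carves out a matching Gluster volume behind the scenes. The sketch below shows the idea using the official Kubernetes Python client; the storage class name "glusterfs-storage" and namespace "my-project" are placeholders, and exact field names have shifted across Kubernetes releases since the ones contemporary with OpenShift 3.4.

```python
# Minimal sketch of dynamic provisioning from the developer's side, using the
# official Kubernetes Python client. "glusterfs-storage" and "my-project" are
# assumed names; the real StorageClass is whatever the cluster admin has
# registered for Red Hat Gluster Storage.
from kubernetes import client, config

config.load_kube_config()  # reuse the developer's existing kubeconfig login
api = client.CoreV1Api()

# The claim only states what the application needs; the dynamic provisioner
# creates the backing volume on demand, with no administrator in the loop.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="glusterfs-storage",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)

api.create_namespaced_persistent_volume_claim(namespace="my-project", body=pvc)
```

A pod can then mount the claim by name, and the volume follows the application wherever the scheduler places it.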

The update also improves multi-tenancy capabilities, made possible through Kubernetes namespaces, which subdivide clusters. Development teams can now separately search for project details and manage project membership through a revised web console.

What's new, said Fernandes, is the multi-tenant management through OpenShift. "It eases the burden on the administrator who is configuring the system for an organization," said Fernandes. "For our largest customers, they may have hundreds of tenants on the platform. They don't want to set up different clusters for each group."
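Under the hood, those tenants are ordinary Kubernetes namespaces plus role bindings; OpenShift layers its projects and web console on top. The following is a rough sketch of the raw primitives, again with the Kubernetes Python client, using current Kubernetes RBAC objects rather than whatever the 3.4-era release shipped, and with "team-a" and "team-a-devs" as illustrative names.

```python
# Rough sketch of multi-tenancy with plain Kubernetes primitives; OpenShift's
# projects and console wrap these. "team-a" and "team-a-devs" are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

# Each tenant gets its own namespace, subdividing the shared cluster.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

# Project membership boils down to a RoleBinding: the team's group gets the
# built-in "edit" role, scoped to its own namespace and nothing else.
rbac.create_namespaced_role_binding(
    namespace="team-a",
    body={
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "team-a-edit"},
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "edit",
        },
        "subjects": [
            {"kind": "Group", "name": "team-a-devs",
             "apiGroup": "rbac.authorization.k8s.io"},
        ],
    },
)
```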

Customers will also have access to reference guides for running the software on different infrastructure, including Amazon Web Services, Google Compute Engine, Microsoft Azure, and OpenStack.

"What we want to do is provide more details on the best way to manage and install the software on different providers," said Fernandes.

Fernandes said Red Hat is working on Kubernetes federation support to simplify the management of multiple clusters, and he anticipates it will appear in a future update once the API is more mature.

OpenShift Container Platform 3.4 is expected to be available through the Red Hat Customer Portal. ®
