Red Hat Kubernetes shuffles towards the edge with 4.5 release of OpenShift

Why? It's all about DevOps, says CTO Chris Wright

Interview As KubeCon + CloudNativeCon Europe 2020 gets under way, delivered this time in the cloud, Red Hat has said its OpenShift Kubernetes distribution is now ready for edge computing.

Edge is an emerging market, Red Hat CTO Chris Wright told us, where to date bare-metal Red Hat Enterprise Linux (RHEL) or OpenStack deployments have been common, in the telecommunications industry for example. The trend towards containers and Kubernetes everywhere means there is now demand for running OpenShift at the edge.

Last month the IBM-owned open-source business introduced OpenShift 4.5, with support for small three-node clusters; the release is generally available from today.

"Many of our customers are looking at delivering containerized applications to edge deployments so we're focusing on reducing the footprint of the cluster size required," said Wright.

"Edge computing use cases are typically in space and/or power constrained environments – so reducing that footprint was a common customer request." A minimal OpenShift deployment used to require around seven nodes, which was a barrier to adoption.

While you can deploy OpenShift to VMs, with full-stack automated installation on VMware vSphere new in version 4.5, Wright said that edge deployments are typically to bare metal. "Deploying directly to bare metal eliminates some administrative overhead of having to manage two different orchestration tiers, one for virtualization and one for containerization," Wright told us.

If you have some applications that need to run on a VM, you can look at the problem the other way around and run a VM on OpenShift. This is another new capability in version 4.5, based on KubeVirt.
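
KubeVirt represents a virtual machine as a Kubernetes custom resource, so VMs can be created, scheduled and managed with the same tooling as containers. A minimal sketch of such a manifest, assuming the KubeVirt demo disk image and the API version of that era:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm              # hypothetical name
spec:
  running: true              # start the VM as soon as the resource is created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:     # the VM's disk, shipped as a container image
            image: quay.io/kubevirt/cirros-container-disk-demo
```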

Why bother with containers and OpenShift at all in these constrained environments? "Putting your application into a container is such an emerging best practice for the industry for how to do rapid development that our customers want that same toolchain and set of capabilities when pointing an application at an edge deployment," said Wright. "And using Kubernetes means you can have a common platform that you're targeting, whether it's scaled out broadly in the data centre or cloud, or scaled down to an edge deployment."

Containers sans Kubernetes

Could you use containers, but without the overhead of Kubernetes? "There will be environments where that is a very useful choice," said Wright, pointing to Podman as a suitable engine. This would be for the most space-constrained environments, with a single node. Once you need resiliency and a cluster, though, "you will still need to do some sort of orchestration to manage the cluster, and that's what Kubernetes provides," he said.
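
On a lone node, the container engine plus the host's init system can stand in for an orchestrator. A sketch of that pattern with Podman, using a hypothetical application image:

```sh
# Run a containerized app on a single edge node: no cluster, no control plane
podman run -d --name edge-agent -p 8080:8080 \
    registry.example.com/edge/edge-agent:1.0   # hypothetical image

# Hand restarts and boot-time startup to systemd instead of Kubernetes
podman generate systemd --new --name edge-agent \
    > /etc/systemd/system/edge-agent.service
podman stop edge-agent && podman rm edge-agent   # the unit recreates the container
systemctl enable --now edge-agent.service
```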

How do you manage your edge Kubernetes deployments? Red Hat's solution is called Advanced Cluster Management (ACM), and, according to Wright, its focus on automation makes it particularly suitable. "One trend in the Kubernetes community is using Kubernetes in a context where you have a larger number of smaller clusters rather than a small number of large clusters. So ACM's core focal point is managing multiple OpenShift clusters.

"You need to think about clusters programmatically, infrastructure as code to support the cluster, so that you don't have a small number of hand-crafted clusters, you have a large number of automated clusters. It also enables cross-cluster policy definitions, using labels in Kubernetes, to manage how you deploy applications, where there may be requirements for certain applications to land in certain clusters. That maps well to the edge use case."

AI is a common use case for edge deployments – automating fault detection in manufacturing, for example. The challenge, said Wright, is getting the workflow right: model training happens in the data centre or the cloud, while inference on incoming data takes place at the edge.

"We've been working on an end-to-end data science workflow in an open-source project that we did a couple of years ago, called OpenDataHub, and a core focus for the OpenDataHub community is building that data science workflow for bringing data in, training models and doing model deployment. The model deployment piece includes deploying a model to a different cluster which would look like an edge cluster in this context. That's the core challenge, getting that end to end workflow."

As Red Hat CTO, what is Wright's view on Microsoft's new open-source project, Open Service Mesh, which aims to be an alternative to Istio or Linkerd? "Too early to say," Wright told us, while also giving some positive signals. "The interesting thing about Open Service Mesh is that it's built around Envoy as the proxy, which is something that is important to us, and it's leveraging SMI (Service Mesh Interface), which is something we participated with Microsoft on. We will compare it with Istio on technical as well as community merits."
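
SMI defines a small set of vendor-neutral Kubernetes APIs that any conforming mesh can implement, which is why it matters for portability between meshes. The TrafficSplit resource is the canonical example; a sketch, with the service names as placeholders:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout     # hypothetical canary rollout
spec:
  service: checkout          # the root service clients address
  backends:
    - service: checkout-v1
      weight: 90             # keep most traffic on the stable version
    - service: checkout-v2
      weight: 10             # trickle traffic to the canary
```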

A snag may be that customers are already hooked on Istio. "The most important thing for us is ensuring that any technical change doesn't create a major challenge for our customer base," Wright said.

"We're looking at building a stable platform with a set of capabilities including a service mesh and we want to ensure continuity and community stability." ®
