In the enterprise, Kubernetes has to play by the same rules as other platforms

Shortcuts? What shortcuts!

Sponsored Without a doubt, Kubernetes is the most important thing to happen in enterprise computing in the past two decades, rivalling the transformation that swept over the datacenter with server virtualization: first in the early 2000s on RISC/Unix platforms, and then during the Great Recession, when commercial-grade server virtualization became available on x86 platforms at precisely the moment it was most needed.

All things being equal, the industry would probably have preferred to go straight to containers, which are lighter weight than server virtualization and which were designed explicitly for service-oriented architectures – now called microservices. The underlying idea is the same: chop code into smaller chunks so it can be maintained, extended, or replaced piecemeal.

This is precisely why Google spent so much time in the mid-2000s creating what are now seen as relatively rudimentary Linux containers and the Borg cluster and container controllers. Seven years ago, it was unclear what the future platform might look like: OpenStack, which came out of NASA and Rackspace Hosting, was a contender, and so was Mesos, which came out of Twitter. But Kubernetes, inspired by Borg and adopting a universal container format derived from Docker, has won.

And here is the funny bit. No matter how popular Kubernetes has become, and no matter how much fit and finish it has acquired since it was first open sourced by Google nearly seven years ago, this container controller had to evolve into a true platform. And now that it has, Kubernetes must be woven into – and submit to – the same security regimen as other software in the enterprise. It has to have at least the same level of resiliency as existing platforms, ranging from backup to high availability clustering within the data center to disaster recovery across data centers. All of the layers of control, collectively called data governance, that have been created for enterprise software must apply to Kubernetes and the applications and data it controls. Moreover, all of the data embodied in the Kubernetes stack has to be discoverable in the same fashion as it is on other platforms.

There are no shortcuts in the enterprise, and while there are many ways to build or buy or rent a Kubernetes stack, how any particular Kubernetes distribution interfaces with the existing data resiliency, security, governance, and discovery frameworks is what determines what can be deployed in production and what will remain a test and development platform at best and a science project at worst.

“None of these issues go away just because Kubernetes is new on the scene,” says Peter Brey, director of big data marketing at Red Hat, whose distribution of Kubernetes, called OpenShift, is increasingly being deployed by enterprises that are loath to roll their own Kubernetes and want something that can more easily snap into their existing infrastructure protection and governance frameworks. “We want to prevent that resume-generating event from happening, but more importantly we also don’t want any software exposed to downtime or hacking, and we want it to be properly governed. And we want to encourage people to adopt this new technology and to realize that they don’t have to redo everything. But they may have to rethink a few things.”

Infrastructure frameworks

Let’s start with data resiliency as an example of how to weave Kubernetes into existing infrastructure frameworks. Brey worked on various high-end storage products aimed at the cloud, oil and gas, media and entertainment, telecom, and healthcare and life sciences industries in the 2000s, and is now at Red Hat helping put together OpenShift with the company’s Red Hat Ceph Storage and Red Hat OpenShift Data Foundation products.

“The industry figured out data resiliency two or three decades ago, for instance, and we don’t want to reinvent the wheel here,” says Brey. “In the storage industry, backup and recovery is a kind of holy war. There is very little market share movement among the competitors in this space. Why is that? Backup and recovery is one of the most mission critical things in the enterprise, that’s why. If your backup, and more importantly your restore, doesn’t work, that is definitely a resume-generating event and nobody wants that.”

The situation is a bit more subtle when you consider the evolution from “storage” to “data services,” which are not precisely the same thing. That shift does require some invention, says Brey, because containers use storage in a different way than bare metal servers or even virtual machines.

“One of the fundamental principles of Kubernetes is that you automate the heck out of everything and take away layers and layers of work,” explains Brey. “In the early days of Kubernetes, people were talking a lot about persistent storage, and there is a parallel here called a persistent volume claim, which allows application developers to instantly provision storage on the fly without having to place an IT support ticket for that and wait a week or so for a storage admin to get it done. All the programmer does is write a line of code and they have the storage they need.”
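To make that concrete, here is a minimal sketch of what such a claim can look like when made through the official Kubernetes Python client (a few lines of YAML applied with kubectl or oc would do the same job). The claim name, namespace, and size are illustrative, and the cluster is assumed to have a default StorageClass, on OpenShift typically one provided by OpenShift Data Foundation, ready to satisfy it.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()

# A minimal PersistentVolumeClaim: the developer asks for 10Gi of ReadWriteOnce
# storage and lets the cluster's default StorageClass decide how to provision it.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},  # illustrative name and namespace
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
```

Once the claim is bound, the developer mounts it into a pod like any other volume; the storage admin’s job shifts from fulfilling tickets to defining the StorageClasses that back claims like this one.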

That storage is different with Kubernetes not just because the community wants it to be easier, but also because storage for containers can be more ephemeral than what enterprises are used to when they buy and configure storage for systems. Not only that, but Kubernetes is a fast-changing open source project, and someone has to make sure that interfaces to the various established data services in the enterprise are not broken, and do not have their performance hindered, by this rapid change. These data services – container storage, backup, recovery, disaster recovery, security (including encryption and key management), data governance (making sure people can only see the data they are supposed to), and data discovery (making massive sets of data searchable so people can make use of it) – have to work all the time, no exceptions, no excuses.

The good news is that all of that prior investment in data services is reusable when it comes to Kubernetes container platforms, and there are people who know how to make that happen – some of them at Red Hat and IBM, some of them at other companies with their own Kubernetes distributions or container services running on the big public clouds.

“There are approaches enterprises can take that will allow them to leverage all of these data services,” says Brey. “The distilled knowledge that your organization has built up with, say, backup and recovery can be pulled into Kubernetes. Red Hat, for instance, leverages open source technology heavily to deliver these capabilities, and we also bring a deep knowledge of the internals of Kubernetes to the party. The same holds for all of these other data services, and the same metrics apply to meeting service level agreements to lines of business – recovery point objectives, or how far back you can go to get a restore of data, and recovery time objectives, or how fast you can restore that data – just as they do for virtual machines.”

The point is, organizations that are moving from proofs of concept with containerized applications toward production need to think through all of these data services, and how they will be fulfilled, before they put containers on a Kubernetes controller into production. And companies like Red Hat and IBM – and indeed, their competitors in the clouds and within corporate data centers – are going to help IT shops work out the details. The question becomes who does this best, and eventually the market will decide. We know what IBM thinks, with its $34 billion bet on Red Hat, which it acquired last year.

Sponsored by Red Hat
