K8s celebrates KuberTENes: A decade of working together
Give yourselves a pat on the back - all 88,000 of you
In 2014, Google released Kubernetes, an open source cluster management system that takes its name from the Greek word for "helmsman" or "pilot."
Written mostly in the Go programming language, Kubernetes saw its first commit land on June 6, 2014 - at 16:40:48 Pacific Time, to be precise. A decade later, it has become the de facto way to run large applications in the cloud.
On June 6, 2024, The Linux Foundation, which supports the Kubernetes project via its Cloud Native Computing Foundation (CNCF) subsidiary, celebrated the software's tenth anniversary with a small gathering at Google's Bay View Campus.
"This is a momentous step in technology history," said Priyanka Sharma, general manager of the CNCF, on stage before a live and streamed audience. "Kubernetes spawned the cloud-native movement, and organizations around the world use the technology today."
Kelsey Hightower, a software engineer and developer advocate known for his contributions to the Kubernetes community, and emcee for the event, reminded attendees at the birthday bash that they should be fêting themselves.
"So while we're here celebrating Kubernetes' birthday, that's a piece of software that lives on GitHub," said Hightower. "But we're actually celebrating all the people sitting here, the people on the live stream, and all the people who have contributed even a piece of documentation, is why this project has been here for 10 years. You can't get this far without this many people."
Kubernetes, sometimes referred to as K8s, evolved from Borg, cluster management software Google developed around two decades ago, as well as Omega, a similar clustering system. It has become the second largest open source project, surpassed only by Linux.
Kubernetes is a container orchestration platform that handles infrastructure management tasks such as load balancing, rollouts and rollbacks, and scaling. It allows containerized applications to be managed as a collection of microservices, and offers an alternative to virtual machines, the other common way of packaging computing environments.
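For readers who want to see what that looks like in practice, here is a minimal, illustrative sketch - not anything from the project's own documentation - of handing Kubernetes a desired state using the client-go library for Go, the language most of Kubernetes is written in. The deployment name, container image, replica count, namespace, and default kubeconfig path are all assumptions for the example:

```go
// Illustrative sketch: declare "three replicas of this container" and let
// Kubernetes' controllers handle scheduling, scaling, and rollouts.
// Assumes the k8s.io/client-go library and a kubeconfig at ~/.kube/config;
// the "hello-web" name and nginx image are hypothetical.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load cluster credentials from the default kubeconfig location.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(3)
	labels := map[string]string{"app": "hello-web"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "hello-web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas, // desired state: three copies, placed wherever the scheduler sees fit
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx:1.25", // any containerized app
					}},
				},
			},
		},
	}

	// Submit the desired state; Kubernetes' control loops do the rest,
	// including rolling out changes and replacing failed pods.
	result, err := clientset.AppsV1().Deployments("default").Create(
		context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q\n", result.Name)
}
```

Nothing in that snippet says which machines will run the three copies: the control plane schedules them, spreads traffic across them, and replaces them if they die.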
Craig McLuckie, co-founder and CEO of Stacklok and one of the people who helped launch the Kubernetes project when he was at Google, recalled how K8s began with Googlers pondering how to compete with Amazon Web Services.
"Amazon had come along [with their as-a-service offerings and] effectively created this incredibly disruptive way to commercialize open source," he explained. "They were able to take the work of an open source community, operationalize and deliver it with really effective margin."
Thinking about how Google might do that in a way that used the company's asymmetric advantages, he said, "we started thinking about the Kubernetes project."
So in 2013, McLuckie, Brendan Burns, and Joe Beda took inspiration from Docker, which was designed to run containerized apps on a single machine, and extended the idea to multiple containers running across a fleet of systems.
McLuckie said he initially described the code Burns wrote as "a personal Borg cell," which didn't mean much to people not familiar with Google's infrastructure code. He tried other descriptions like "a promise-theory based scheduling system," then "a Docker orchestrator," and also "programmable ideal infrastructure."
"Eventually we got to a point where we could start to not just feel the thing that we were looking to build, we could actually describe it," he said.
You will be assimilated
The original code name for the project, bestowed by Burns, was "Seven of Nine," explained McLuckie, "because that was a sort of friendly, more accessible Borg."*
Eric Brewer, VP of infrastructure at Google, said the anniversary for him was more like 30 years than 10, since he had been involved in work similar to Kubernetes since his days as a professor at UC Berkeley in the mid-1990s. In his time as founder of Inktomi, a search engine that predated Google and was sold to Yahoo in 2002, he also worked on similar tech.
The idea of a computer cluster utility service dates back further still, Brewer said, to 1965 with Multics (Multiplexed Information and Computing Service). But Multics had no internet, and the cloud-like cluster services that emerged in the mid-1990s ran on real machines and were insufficiently elastic.
"Finally, Kubernetes, I would say, is delivering this vision for real," said Brewer. He credits this to running on top of VMs – something process-based Google avoided in favor of containers. And he also credited the Kubernetes community.
"It's in retrospect obvious, but if you want to serve all the things, the community has got to write all the things," he explained. "There's no other way."
Kubernetes is not the only container orchestration option. Red Hat OpenShift, Docker Swarm, Apache Mesos, and HashiCorp Nomad all compete for attention from developers and infrastructure ops folk. But Kubernetes dominates the market, thanks in part to effective evangelism and project governance, attentive development, and a committed community.
Chen Goldberg, general manager and VP of engineering at Google, attributed the project's longevity in part to a shift made in 2017 to push for what she referred to as "sustainable success."
That means "paying attention to developer velocity or tooling or test frameworks," Goldberg explained. "For example, how can we onboard new contributors? And even more important, how do we make sure that we can onboard and empower new SIG [Special Interest Group] leaders? Because of course, nobody wants to do it for the rest of their life, I imagine."
Keeping it together
It also means guarding against fragmentation - the risk that every cloud provider implements the software a bit differently, eroding the ability to shift workloads between environments and opening the door to lock-in. Kubernetes avoided that scenario through the establishment of the Certified Kubernetes Conformance Program, which ensures workload portability and so limits the scope for vendor lock-in.
Over 88,000 contributors have offered code to improve Kubernetes, or helped in other ways. As of last year, according to the Cloud Native Computing Foundation, K8s was the primary container orchestration software for 71 percent of Fortune 100 companies.
"Overall, we see a very big adoption of Kubernetes in our customer base," said Alois Reitbauer, chief technology strategist at observability biz Dynatrace, in an interview with The Register. "You can say pretty much everybody right now is finding their way, at least for the new applications, to move them into Kubernetes."
The reason, said Reitbauer, is obvious. At the beginning of the DevOps movement, people started to build more infrastructure as code, but everything was still bound to individual machines. So you had to know which machines you had.
"One nice abstraction of Kubernetes is that we don't have to care exactly where our code is running," said Reitbauer. "By using containers, we can also package up software components in a way that allows us to not really wonder what's inside because we have the standard interface of the container."
With so many customers looking to automate operations, he said, Kubernetes comes in handy because it works declaratively and provides a central registry in which software artifacts can be stored. "It makes interaction with software systems much easier," he said.
"Kubernetes has become almost like this operating system of applications, where companies build their platform engineering initiatives on top," said Reitbauer.
"It standardizes and abstracts away a lot of the complexities of building and running applications where you ideally only really have to worry about your application code and central teams take care of everything from provisioning, configuration, security configuration, observability configuration, and so forth."
While Kubernetes is very widely deployed, and has proved it can work at colossal scale - Google used it in 2023 to run what Goldberg said was "the world's largest distributed training job," training a single model across 50,000 TPU v5e processors - those contributing to the software infrastructure project recognize that further refinements to accommodate AI workloads will be required.
"We need to actually go back to some of the basic assumptions and design principles that we've put into Kubernetes and enhance them and extend them in areas like scale, optimizing for latency, and cost," said Goldberg. "But at the end of the day, AI is a modern workload. And many of the fundamentals of Kubernetes really fit perfectly with the AI era."
In the words of CNCF's Sharma, "Kubernetes made the past as it was, into the present as we know it today. And it will power the future as well." ®
* The Borg are a race from Star Trek: The Next Generation, and the main antagonists of that series, as they "assimilate" other races and plug them into a collective consciousness. In Star Trek: Voyager, the crew frees a human member of the Borg named "Seven of Nine," who becomes a trusted member of the vessel's complement.