Created to mimic Heroku: Cloud Foundry explained by its chief technology officer
The past, present and future of a confusing platform
Interview The development experience may be easy, but the open-source Cloud Foundry (CF) platform is confusing as hell for newcomers. Chip Childers, CTO of the Cloud Foundry Foundation since it was formed in January 2015, spoke to The Reg about its past, present and future at the recent Cloud Foundry Summit in The Hague.
"CF was originally a product initiative at VMware. It was designed to mimic Heroku," he said.
Heroku was among the earliest examples of PaaS (Platform as a Service). Launched in 2009, it provided an easy way to run Ruby applications in the cloud (other runtimes are now also supported). Heroku co-founder Adam Wiggins came up with the idea of 12-factor applications, a methodology for building cloud applications whose principles were also adopted by Cloud Foundry, before the more modern idea of microservices came into vogue.
The VMware project was handed over to Pivotal when it was formed by VMware and EMC in 2012. Other companies including IBM and SAP were interested in collaborating with it, and it became open source. IBM's Bluemix PaaS offering, launched in 2014, was an implementation of CF. Since a number of companies were now using CF, it was appropriate to form the non-profit Foundation to manage the open source project.
'A container image is an architectural detail'
CF was an early user of containers. "The Cloud Foundry platform has been using containers since well before they were popularised," said Childers, explaining that this meant using features of the Linux kernel to isolate applications. Docker was not launched until 2013.
"The thing that [Docker inventor] Solomon Hykes and Docker did was they provided a neat user interface on top of some kernel features, and the idea that you can snapshot those files and then move it around," said Childers. This has given rise to the idea of deploying applications as containers. The CF platform can now do that as well, but it is NOT what the platform is about.
"A container image is an architectural detail," said Childers. So what is the unit of deployment for CF? "Code. Generally it would be a microservice or a web application. If you have a Java project that would compile to create a microservice, that would be the unit.
"So the CF project is designed around handing the system your code. Not handing it something that's been pre-baked. Because if you hand it code, we can do a lot more with it."
This is the big idea which most characterises the CF platform. The developer writes application code and hands it to CF with the cf push command. A component called Diego receives the code, runs a task to build the application into a container, then deploys the container to a cell and runs the application.
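For a flavour of that developer experience, here is a minimal, hypothetical push; the API endpoint, org, space and application name are invented for illustration:

```shell
# Log in to a CF environment (endpoint, org and space are placeholders)
cf login -a https://api.cf.example.com -o demo-org -s dev

# Hand the platform your code: run from the application's source directory.
# Diego stages the source into a container, places it on a cell and
# starts the requested number of instances.
cf push demo-api -m 256M -i 2

# Check on the running application
cf app demo-api
```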
"The reason that it's called Diego is a pun," said Childers. "The architecture was that each node in a cluster was called a DEA, Droplet Execution Agent. Originally everything was written in Ruby. They switched to the Go programming language, so DEA is now written in GO. Diego."
The build process uses a component called a buildpack. Buildpacks exist for runtimes like Go, Java, .NET Core, PHP, Python, Ruby and Node.js. A detection process determines which buildpacks to use for an application.
"The Buildpacks build the containers. They suck in all the dependencies during the build process. They create a reproducible artefact because you now have the code that you can run against the Buildpack, you can now recreate it," said Childers.
What about managing the infrastructure, the VMs that actually run the CF containers?
"The CF architecture (pre-Kubernetes) has two key layers to it. One of them is the CF application runtime (CFAR). That's the PaaS system. The other is a tool called BOSH. In the process of deploying a CF environment, you need some sort of infrastructure that has an API that BOSH can communicate with. Public cloud, VMware vSphere, bare metal provisioning system," said Childers.
BOSH prepares the infrastructure to run the CF platform. "A lot of software has been packaged for BOSH to deploy. It interacts with that API, it thinks in terms of VMs and virtual networks, and it knows how to deploy and then guarantee uptime or at least recovery if it sees a VM go offline."
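A hedged sketch of what that looks like with the BOSH CLI; the environment alias, director address and manifest name below are placeholders:

```shell
# Point the BOSH CLI at a director that already has IaaS API credentials
bosh alias-env demo-env -e 10.0.0.6 --ca-cert ./director-ca.crt

# Deploy a packaged CF runtime from a deployment manifest; BOSH thinks
# in terms of VMs and virtual networks and creates them via the IaaS API
bosh -e demo-env -d cf deploy cf-deployment.yml

# BOSH keeps watching the VMs and recreates any that go offline
bosh -e demo-env -d cf instances
```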
Migrating Cloud Foundry to Kubernetes
Much of the Foundation's efforts in the last couple of years have been dedicated to migrating the platform to K8s. How does that work?
There is no single or simple answer. There are various ways to preserve the Cloud Foundry developer experience while running on a K8s cluster rather than a Diego cluster.
There are three key projects. The CF Container Runtime (formerly Project Kubo) "is a certified distribution of K8s, a BOSH packaging of K8s so you can roll it out," says Childers. BOSH is still there, but managing K8s clusters. This is the basis of Pivotal's PKS (Pivotal Container Service).
Second, there is Project Quarks. This is a packaging of CFAR as Docker images that you can deploy onto K8s. Once deployed, there is no need for BOSH.
Third, there is Project Eirini. This is an API that lets you deploy applications to K8s pods instead of Diego cells.
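The effect, sketched below, is that the developer-facing command stays the same while the scheduling target changes; the namespace shown is an assumption, since it depends on how the installation configured Eirini:

```shell
# Unchanged developer experience
cf push demo-api

# But the app instances now run as pods on the K8s cluster rather than
# on Diego cells (the "eirini" namespace here is illustrative)
kubectl get pods -n eirini
```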
Childers says of Quarks and Eirini: "Either one can be used in isolation. SUSE has been doing this for a long time. They've been using the work of the Quarks team, an earlier version of their approach, but they've been taking the entirety of the CF runtime, turning it into Docker images, and the whole thing including Diego is running as containers inside a K8s cluster. Which works. The improvement is to combine the two. You get a lightweight control plane, running in K8s, talking to K8s, to run apps."
SUSE, note, is also adopting Eirini but was ahead of other providers with K8s support.
Although managing K8s is more challenging than using BOSH and Diego, Childers observes that "most companies that sell enterprise IT software are offering a way to bootstrap and manage K8s. Whether it's Red Hat OpenShift or SUSE or PKS, that's the problem they're looking to solve." Similarly, public cloud K8s providers remove most of the management burden.
All Project Eirini implementations are in preview, but Childers expects GA "some time in six months, maybe a little longer." It is not his decision, since the open source project has no concept of GA; it is down to the commercial providers.
What about Windows, a supported platform for CF but less suited to K8s? Says Childers: "The CF community has never stranded its users. The Windows-based K8s environment today is feature-behind the Linux-based [environment]. I'm not worried about it being left behind because the same individuals that talk about moving the CF architecture forward care about moving Windows forward."
Diego will also continue to evolve. "There is a period of time where there is going to be some parallel effort. If there is a feature in K8s that we want to take advantage of it will have to be built into Diego. The inverse is also true," says Childers.
The end result of all this will be, for the CF developer, the same experience as before; the changes are all in the plumbing behind it.
Combining CF and K8s is a necessary step for the CF community. But CF remains something of a niche, despite being (along with Heroku) a pioneer of things that are now highly fashionable: containers, microservices, many aspects of serverless computing. What are the snags?
"I have found that Cloud Foundry is great until use cases pop up that are not easily supported fully within Cloud Foundry. Delivering these use cases can delay projects as you attempt to solve those problems," said a developer on StackOverflow, a few years back, but describing exactly the reason beautifully productive systems can become a burden. It depends, therefore, if a project is likely or not to hit one of those problematic use cases. In a microservices world, the move to K8s may help with this, making it easier to use CF for what it is good at, but mixing with other approaches where necessary. ®