We know that data centres run on a high proportion of abstraction technologies. This should come as no surprise – the cloud services they house are themselves physically “abstracted” (or, perhaps, even “conceptualised”) chunks of computing and storage power.
We also know that Docker, as an open-source application container project, hinges around its ability to abstract (and automate) the operating system level virtualisation on Linux.
As Linuxcontainers.org underlines: “[Containers] offer an environment as close as possible to the one you'd get from a virtual machine, but without the overhead that comes with running a separate kernel and simulating all the hardware.”
So how does Docker now form the next fold in the space-time continuum of data centre abstraction?
Much of the Docker focus now centres on configuration, orchestration and the deployment mechanics for distributed data centre applications.
In other words, networking, if you prefer.
For user customers and partner vendors alike, the “so-what” factor comes down to how we fit existing operating system and application parameters into the Docker world so that they are both secure and deliberately tuned to Docker’s radio frequency and beat.
Docker-specific operating systems have started to emerge. CoreOS, a fork of Chrome OS, comes pre-configured with popular tools for running Linux containers. But CoreOS is not alone.
Just this March we welcomed Red Hat Enterprise Linux 7 Atomic Host, an operating system optimised for running next-generation applications with Linux containers – umm, we think they kind of mean Docker.
Red Hat says: “As monolithic stacks give way to applications comprised of microservices, a container-based architecture can help enterprises to more fully realise the benefits of this more nimble, composable approach.”
In other words, data centre-networking maintenance is really important.
Docker’s gastro-intestinal mechanics
Docker itself hasn’t been quiet on its own internal gastro-intestinal status. Why would it be? The project has so far added 13 new official repositories to Docker Hub this year.
Without listing all 13, there’s Celery for starters. Celery is an asynchronous task/job queue based on distributed message passing, written in Python. It is focused on real-time operation, but supports scheduling as well.
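To get a feel for the model, here’s a minimal standard-library sketch of a Celery-style worker pulling jobs off a queue. The in-process queue stands in for the message broker a real Celery deployment would use, and the jobs themselves are invented for illustration:

```python
import queue
import threading

def worker(tasks, results):
    # Pull (function, args) jobs off the queue and record results,
    # loosely mimicking how a Celery worker consumes broker messages.
    while True:
        item = tasks.get()
        if item is None:  # sentinel: shut the worker down
            break
        func, args = item
        results.append(func(*args))

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

# Enqueue two jobs, much as a Celery client does with task.delay(...)
tasks.put((sum, ([1, 2, 3],)))
tasks.put((max, ([4, 7, 5],)))
tasks.put(None)
t.join()
# results now holds [6, 7]
```

In real Celery the queue lives in an external broker, so producers and workers can sit on different machines – which is exactly the distributed, data-centre-friendly property being sold here.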
RethinkDB is an open-source, distributed database built to store JSON documents and scale to multiple machines. It has a query language that supports table joins, groupings, aggregations and functions.
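RethinkDB speaks its own query language (ReQL), but the shape of a group-and-aggregate query over JSON documents can be sketched with the standard library. The documents below are invented, and in RethinkDB proper the grouping would run inside the database rather than in the client:

```python
import json
from collections import defaultdict

# A few JSON documents of the kind RethinkDB stores (hypothetical data).
docs = [json.loads(s) for s in (
    '{"host": "web1", "status": "up", "load": 0.5}',
    '{"host": "web2", "status": "down", "load": 0.0}',
    '{"host": "web3", "status": "up", "load": 0.75}',
)]

# Group by status, then aggregate - roughly what a ReQL
# group/aggregate query expresses server-side.
by_status = defaultdict(list)
for doc in docs:
    by_status[doc["status"]].append(doc["load"])

avg_load = {status: sum(loads) / len(loads)
            for status, loads in by_status.items()}
# avg_load == {"up": 0.625, "down": 0.0}
```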
There’s also Swarm, a Docker-native clustering system. Described as a simple tool that controls a cluster of Docker hosts and exposes that cluster as a single "virtual" host, Swarm uses the standard Docker API as its front-end. This means that any tool that speaks Docker can control Swarm transparently.
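The practical upshot of that API compatibility: a client built against the Docker Remote API needn’t change when a Swarm manager replaces a single daemon – only the endpoint it points at does. A minimal sketch (the hostnames are hypothetical; `GET /containers/json` is the Remote API’s container-listing endpoint):

```python
from urllib.request import Request

def list_containers_request(host):
    # Build (but don't send) a Docker Remote API request to list
    # containers. Swarm exposes the same endpoint as a lone daemon,
    # so the only thing that varies is the host we target.
    return Request(f"http://{host}/containers/json", method="GET")

single = list_containers_request("docker-host:2375")   # one daemon
swarm = list_containers_request("swarm-manager:2375")  # whole cluster
```

The same request against the Swarm manager returns containers drawn from the whole cluster, which is what lets existing Docker tooling drive Swarm “transparently”.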
In other words, real-time operational functions that work with Docker and help build the data centre networking control environment that we need are also really important.
Other Docker repositories include: Ghost, a platform dedicated to publishing content; Jetty, a Java web server and servlet container often used for machine-to-machine communications, usually within larger software frameworks; and then there’s Irssi, a terminal-based IRC (Internet Relay Chat) client for UNIX systems. So it’s all about finessing content, connecting content and collaborating on content.
Looking inside Docker in this way, we can see that the data centre is no static lump of dry servers. A whole micro-culture of network microservice activity is going on. It’s time to dig the new breed. ®