Containers everywhere! Getting started with Docker

Hello World


Docker is the name on the tip of many tongues at the moment. It is a containerisation engine which allows you to package up an application, along with all the settings and software required to run it, and deploy it to a server with a minimum of fuss.

So where did this idea come from?

Shipping containers! A shipping container has a defined size: no matter what it holds, the cranes that move containers around and the ships that carry them know how to stack it. A shipping container is a standardised unit. Imagine if software were the same. Rather than needing to set up a server to the exact specification a program demands, you are handed a container. The container engine knows how to move that container about and how to run it, no matter what is inside.

How is this different from virtual machines?

Well, virtual machines are also self-contained boxes with all the requirements for a piece of software inside them. However, they also emulate the hardware (the ship and the crane from our docks).

This can be useful if you want to present a different CPU to the machine or restrict the amount of memory available to it, but it comes with overheads. These overheads limit the number of virtual machines we can run on a single server.

Containers don't use hardware emulation; it's all about the software, so you can run more of them on a single machine. On top of that, Docker takes advantage of shared file systems. It builds its containers up in layers: the base layer is the operating system, and each layer on top holds the updates and file changes made since the layer below.

If two containers can share some of these layers then they do so. This reduces the space required for each container, as it only stores what is different about its own setup rather than repeating the same information over and over.
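Once you have pulled an image (we do that in step 2 below), you can see these layers for yourself; a quick sketch using a command that ships with Docker:

docker history ubuntu #list each layer of the ubuntu image, its size and the command that created it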

So how does it work?

Step 1: Download Docker for your operating system.

Docker relies on containerisation features in the Linux kernel (it was originally built on Linux Containers, LXC), so if you are running Windows or OS X you will need a wrapper to provide a Linux environment. Luckily, one is provided for you via 'docker-machine'. If you are already running Linux, you can use Docker natively.

Step 1.5: Start 'docker-machine' if required.

docker-machine create --driver virtualbox default

This will create a small Linux virtual machine using VirtualBox as its driver. There are other drivers for different virtualisation systems. Now we need to connect to it. List the environment settings for your virtual machine with:

docker-machine env default

Then we connect to it!

Mac: eval $(docker-machine env default)

Windows: docker-machine env --shell=powershell default | Invoke-Expression
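To check that your shell is now talking to the Docker daemon inside the VM, a quick sanity check (either command will do):

docker version #should report both a client and a server version

docker-machine ls #the 'default' machine should be Running, with a * in the ACTIVE column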

Step 2: Create your first container and say hello!

docker run ubuntu echo 'hello world'

That's it. It's that simple. Docker creates a container using ubuntu as its base layer and runs whatever command you give it. In this case we used the echo program to say 'hello world'.
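The same trick gives you an interactive shell inside a throwaway container, which is handy for exploring; a small sketch, not part of the original steps:

docker run -it ubuntu /bin/bash #-i keeps stdin open, -t allocates a terminal; type 'exit' to leave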

Pre-defined containers

We could build up containers by running each command individually with the docker binary, but that would take a while, and it doesn't fit the 'automate everything' ethos that has come from the sysadmin and devops world.

The solution to this problem is the Dockerfile: a script which tells Docker how to build a container and what it should run.

A Dockerfile could be defined as:

FROM centos:centos6
RUN yum update -y
RUN yum install -y java-1.7.0-openjdk
RUN yum install -y java-1.7.0-openjdk-devel
COPY hello.java /
RUN javac hello.java

This uses CentOS 6 as its base image, updates everything, installs the OpenJDK runtime and compiler, copies in a file and then compiles it. When we build and run the container, the Java program executes:

docker build -t containername . #build the container

docker run containername java hello #run the program!
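For completeness, the hello.java copied into the image could be as small as the following sketch (the article doesn't include it; the class just needs to be called hello so that 'java hello' can find it):

// hello.java - a hypothetical minimal program matching the commands above
public class hello {
    public static void main(String[] args) {
        System.out.println("hello world from inside a container!");
    }
}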

Dockerfiles are incredibly flexible and can handle copying in resources from other containers, the internet or your local system. Anything that you would run on the command line is prefixed with the RUN instruction. By defining containers in these files we can version control and test them, which leads to more reliable deployments.
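For example, the ADD instruction can fetch a file straight from the internet at build time, while COPY brings in files from your local build context; a hypothetical sketch (the URL and paths are made up for illustration):

ADD https://example.com/downloads/settings.conf /etc/myapp/settings.conf
COPY local-config.yml /etc/myapp/
RUN cat /etc/myapp/settings.conf #anything you would type at the command line goes after RUN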

Windows to another world

We can open windows into containers on our terms only, much like restricting which port numbers are open on a firewall. For a MySQL database we may want port 3306 open; we can declare this in the Dockerfile with:

EXPOSE 3306

It can be mapped to any port on the host system, allowing each service to believe it is running on its default port, but without any clashes (after all, how many services use port 8080? Tomcat, GlassFish, Jenkins and Puppet, to name a few from my last few weeks!)

# docker run -p hostport:containerport containername

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pie mysql

docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=pie mysql

docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=pie mysql

Three little containers, all believing that they are running on port 3306, with the port mapping handled by Docker on the host system.
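From the host's side they are simply three different ports; a sketch of connecting to the second one (assuming the mysql client is installed, and remembering that with docker-machine the 'host' is the VM, so use the address from 'docker-machine ip default' rather than 127.0.0.1):

mysql -h 127.0.0.1 -P 3307 -u root -p #enter the root password ('pie' above); this reaches the second container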

We can even let containers talk to each other, without leaving any ports open to the outside world, by linking them together.

docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=pie mysql

docker run --link mysql:db apachephp

The mysql container has no ports published to the outside world. The apachephp container has a link to the mysql container and addresses it locally as "db". The link stays private to the Docker host, so nobody port-scanning your server will be able to reach the MySQL database!
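If you are curious about what the link actually does inside the apachephp container, a quick sketch (use whatever container ID 'docker ps' reports for it):

docker ps #find the apachephp container's ID

docker exec <container-id> cat /etc/hosts #the linked mysql container appears under the hostname 'db'

docker exec <container-id> env | grep DB_ #the link also injects DB_* environment variables with connection details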

Composing containers

All looks good so far? Docker goes a step further in automating everything: with docker-compose we can define and start up many containers at once.

This means our entire infrastructure is defined in readable files which can be version controlled and tested.

mysqldatabase:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: rootpassword

php:
  image: phpapache
  ports:
    - "80:80"
  links:
    - mysqldatabase
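Save that as docker-compose.yml and, assuming the docker-compose tool is installed alongside Docker, the whole stack comes up with a single command:

docker-compose up -d #pull the images and start every container defined in the file

docker-compose ps #check the state of the composed containers

docker-compose stop #stop them all again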

Containers are a powerful and easy way to manage code deployment to servers. They are flexible enough to cope with whatever requirements your software has, and they let you automate all the things!

After all, wouldn't it be nice if setting up a minecraft server was as easy as:

docker run -d -p=25565:25565 itzg/minecraft-server


Kat McIvor is Principal Technologist for DevOps at QA, the UK's biggest provider of technical and business training.
