Photon taster: Flirting with VMware's CoreOS gambit

New frontiers in containers?

There's growing interest in VMware’s Photon. Essentially, Photon lets highly optimised Docker containers into VMware Land.

One of the great things about deploying Docker is that it can take mere seconds to spin up a new instance.

Photon can be thought of as a competitor to CoreOS and similar offerings. What is different about Photon is that it exposes VMware’s APIs, so developers can leverage some of the features coming with the Photon Controller platform, which is due out shortly. Photon is also free to use, and even to fork should you wish to do so.

Classic VMware features such as VMware HA and FT are on notice in a Photon-centric world. When developers build new web-scale applications, high availability should be handled in the application layer, using one of the many frameworks built for the job, with developers working to ensure correct functioning in the event of a failure. Things, they are changing!

I took a crack at building a Docker application on Photon using VMware Fusion on the Mac, but you can use other desktop virtualisation systems such as VirtualBox. The new Dockerised hosts to support this design are still in private beta at present.

To get the basic Docker on Photon setup installed, follow these instructions:

  1. Download the latest Photon micro OS image from GitHub (select the full ISO)
  2. Create a virtual machine with an 8 GB disk, 1.5 GB RAM and 1 vCPU
  3. Select “Other Linux 3.x 64-bit kernel” as the guest OS
  4. Attach the Photon image and boot the VM, selecting “full install” when prompted

The Photon installer asks very few questions and installs very quickly. It asks for the install type, a host name and a root password, and that’s it. It obtains a DHCP address by default. We are talking web scale here, not individual hosts; speed and automation are everything.

Inside the Photon installation there is a NAT service that can be configured so the Docker images can reach the network. At deployment time, an admin can specify the ports to use and Docker will create the required network mappings.

At this point it is probably prudent to address how Docker images work. Docker containers are designed to be stateless servers that are essentially just the application: no full-fat VMs here. Containers can (and should) be created and destroyed at will without data loss. If one gets damaged or broken, it can simply be destroyed and another deployed in its place.

All Docker images are built much like a sandwich with several different layers of separate file systems glued together. The reason for this is to make images easier to maintain so that when an image is rebuilt or changed, a minimal amount of work is required.
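As an illustrative sketch (not from the Photon docs), each instruction in a Dockerfile produces one of those layers, which is why rebuilds are cheap: unchanged instructions are served from cache, and only the layers after a change are rebuilt:

```dockerfile
# Each instruction below creates one filesystem layer.
FROM ubuntu:latest
# Cached on rebuild unless the base image changes.
RUN apt-get -y update
# Only this layer (and any after it) is rebuilt when the application files change.
COPY app/ /opt/app/
```

Ordering instructions from least to most frequently changed keeps rebuilds fast.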

Log in to the Docker host and bring it up to date with the latest patches by issuing the command:

yum -y update

Reboot the server by typing reboot and pressing enter.

Once it has restarted, we need to turn on the Docker daemon and set it to automatically start on bootup. Use the command:

systemctl enable docker

Follow this with a restart. At this point you can verify Docker is installed and working by using the command:

docker info

This will provide proof that Docker is working as well as some information that appeals to the geekier amongst us. Now that this is done it is time to build the Minecraft image that we can deploy at will.

The build information is held in a special file called a Dockerfile. In it we specify what is included, which commands to run and any files we need to add.

Start by creating a folder called mineserver and moving into it:

mkdir /opt/mineserver && cd /opt/mineserver

At this point we can create and edit the Dockerfile using the command:

vi Dockerfile
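If you would rather script this step than edit interactively (in the web-scale spirit of the article), the file can be written with a shell heredoc; this is just a sketch, and the /tmp path is an illustration rather than anything Photon mandates:

```shell
# Create a working folder and write a minimal Dockerfile into it.
# The quoted 'EOF' stops the shell expanding anything inside the heredoc.
mkdir -p /tmp/mineserver
cat > /tmp/mineserver/Dockerfile <<'EOF'
# This is a comment
FROM ubuntu:latest
EOF
# Confirm the file was written as expected.
grep -c '^FROM' /tmp/mineserver/Dockerfile   # prints 1
```

The same pattern slots straight into a provisioning script.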

Comments can be added by prefixing the comments with a hash. For example

# This is a comment

Building the setup is done by including instructions in the Dockerfile.

There are several instructions that can be used to build up a Dockerfile. All Docker images need a base OS from which to build the application platform, and Ubuntu is a very popular choice.

Populate the Dockerfile using nano or vi (whichever editor you are happy with) by copying the following into the file:

FROM ubuntu:latest

MAINTAINER Joe Public <>

USER root

RUN mkdir /opt/minecraft/

RUN apt-get -y update

RUN apt-get install -y openjdk-7-jre-headless wget

# Supply the URL of the Minecraft server JAR here
RUN wget -P /opt/minecraft/

RUN echo "eula=true" > /opt/minecraft/eula.txt

CMD java -d64 -Xmx1024M -jar /opt/minecraft/minecraft_server.1.7.4.jar nogui

It should be noted that there are several items in this example that are not designed for production utilisation, such as grabbing ubuntu:latest and the Minecraft server files from the web. In a real Dockerised world you would keep such files within your own network and use version control across the infrastructure.
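To sketch what a more production-minded version of the same file might look like, this variant pins the base image and copies the server JAR from the build context instead of fetching it over the web; the ubuntu:14.04 tag and the locally stored minecraft_server.1.7.4.jar are assumptions for illustration, not part of the original example:

```dockerfile
# Pin the base image to a known release rather than "latest".
FROM ubuntu:14.04

RUN mkdir /opt/minecraft/

RUN apt-get -y update && apt-get install -y openjdk-7-jre-headless

# Copy the server JAR from the build context (your network) instead of wget.
COPY minecraft_server.1.7.4.jar /opt/minecraft/

RUN echo "eula=true" > /opt/minecraft/eula.txt

CMD java -d64 -Xmx1024M -jar /opt/minecraft/minecraft_server.1.7.4.jar nogui
```

Pinning versions means a rebuild six months later produces the same image, which is the point of keeping infrastructure under version control.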

To build the Docker file, use the command:

docker build -t "stuart/minecraft" .

If the output shows any errors, check the Dockerfile against the error reported on the console. Assuming the image builds, you can run a Docker instance with the command below:

[Screenshot: running your Docker instance]

docker run -p 25563:25563 stuart/minecraft:latest &

The :latest tag in the above command ensures that each docker run uses the most recent build of the image. Obviously, each instance of Minecraft will need its own network port; configure this by changing the host port number (the 25563 before the colon).
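To sketch how that scales, this loop prints (rather than executes) one docker run line per instance, each with its own host port; the instance count and base port here are assumptions for illustration:

```shell
# Generate one `docker run` command per instance, incrementing the host port.
# The container-side port stays fixed; only the host-side mapping changes.
base_port=25563
for i in 0 1 2; do
  echo "docker run -p $((base_port + i)):25563 stuart/minecraft:latest &"
done
```

Piping the output to sh (once you are happy with it) would start all three servers.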

Some other commands that may be useful include:

docker images

[Screenshot: images available locally]

The screen grab above shows all the images that are built and available locally. Once built as above, it becomes a very simple process to create as many instances as you desire by repeating the command. Each container will be given a unique ID at runtime for the duration of its life.

To see which containers are running issue the command:

docker ps

To kill off an instance use the command:

docker kill <container ID>

This is only a very basic overview of what can be done, but it touches on the core items of creating and using Docker in a Photon environment. Docker is easy to get accustomed to with only a moderate amount of Linux experience, and after a very short time it becomes second nature.

As for what the new Photon Controller will bring, that will be an interesting question. ®


Biting the hand that feeds IT © 1998–2022