Building Docker Images with Distelli

Jan 13, 2016

We’re pretty excited about this: with Distelli, you can now set up your application so that it’s built as a Docker image, and uploaded to your Docker Hub repository or EC2 Container Registry.

Building Applications with Docker

And once you’ve built your application’s image, you can deploy and run your application as a Docker container on an EC2 Instance or any server of your choice — all without requiring SSH access.

Why Build Docker Images

We’ve been working on this because it’s a feature that many of our customers have requested. If you’re already familiar with Docker, feel free to skip the rest of this post and head straight to the documentation to get started. Send us your feedback.

For everyone else, let’s take a quick minute and walk through why you should build your application as a Docker container.

Some background: the underlying technology that powers Docker, control groups (cgroups) and namespaces, has been around for a while. Docker has done a really great job of making the difficult parts of cgroups and namespaces easier to use, easier to understand, and easier to configure, so you can focus on your core features.

To understand why Docker is a good idea, let’s take a look at how we would do things without Docker—on a bare OS.

Installing Applications on Linux

Installing applications on Linux is relatively easy. On Ubuntu (and other Debian-based Linux distributions), all you need is the apt-get command, and the package you specify is installed on the machine, complete with any libraries that it depends on. If we wanted to install Kate, a popular text editor, this is all we need:

An innocent command to install Kate, a KDE text editor
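For reference, the command in that screenshot is just the usual one-liner:

sudo apt-get install kate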

That's easy enough. However, this single command brings in 123 additional packages, with no indication of whether any other software already on the machine is affected too.

All of Kate's dependencies!

And so we see the difference between “easy” and “simple”. With a single command, we’ve made some complicated changes to our machine, without any indication of the impact to other software on the box.

Now imagine doing this on a server that runs your application: install a library that has its own dependencies, then another, and slowly you begin to lose track of the changes being made to your system, and with them the difference between critical files and cruft. And when your application eventually needs to run on a different server, you're left figuring out exactly what you need to install and configure to get it running right.

In this world of Agile and Continuous Integration and Moving Fast and all that, it is an extraordinary waste of time and human brainpower to perform the necessary archaeological digs to find out what works, and to make sure that every server is configured just right.

So instead, we use Docker. In a nutshell, Docker lets you package your application, all its dependencies, and all your environment settings into a single package, called an image. This image can be saved to an online repository, and then pulled to a server where you can run the image as a container. Any commands you run in this container don’t affect your server’s filesystem, or other software installed on the same machine, so you should no longer have to deal with the crazy snowballing of minor changes to your server.
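In concrete terms, that whole cycle is a handful of commands; myuser/my-app below is a hypothetical image name:

docker build -t myuser/my-app .   # package the app and its dependencies into an image
docker push myuser/my-app         # save the image to an online repository (Docker Hub here)
docker pull myuser/my-app         # on any other server: fetch the image
docker run myuser/my-app          # run the image as a container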

With judicious use of Docker containers, you can spend your time solving problems for your customers, not mucking about with server configuration.

How Docker Images Work

The delicious nut at the center of the Docker chocolate is the image: the template for your Docker container. Images contain your application and everything it needs to run consistently.

To create images, you start with a Dockerfile, which is the blueprint of your Docker image. All Docker images are built on a base image, which is where it all begins. Once you've selected a base image, you can use statements in the Dockerfile to add new information to the image: programs to install, environment variables to set, and so on.

When you start building your Docker image, you use the FROM statement to specify the base image you want to use.

FROM ubuntu:latest

Here, you’re telling Docker that you want to build your environment on the latest Ubuntu image.

The Docker base layer: Ubuntu

Now to start building. It's generally a good idea to make sure your package lists from Ubuntu's software repositories are up to date, so we'll refresh them by putting a RUN command in the Dockerfile:

RUN apt-get -y update

This command creates a new image: it takes the base image and adds a new building block, a layer containing the refreshed package lists.

The Ubuntu base layer, with a new layer for the package updates on top

Now, if you need to build another Docker image that starts with these commands, Docker can reuse this image, instead of building a new one.

Going further, let's install Python 3 and add an app called my-app:

RUN apt-get install -y python3
ADD ./my-app /opt/my-app

This creates even more layers:

More layers stacked on top of the Ubuntu base layer

And so on. When you're done, you can use this Dockerfile to build the image, and then run that image as a container. In that container, my-app always runs as if it were on an Ubuntu server with Python 3 installed, even if Docker itself is running on CentOS, Fedora, or any other flavor of Linux.
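Put together, the Dockerfile from this walkthrough looks like the following sketch; my-app and its main.py entry point are placeholders for your own application:

FROM ubuntu:latest
# Refresh the package lists from the Ubuntu repositories
RUN apt-get -y update
# Install the Python 3 runtime
RUN apt-get install -y python3
# Copy the application into the image
ADD ./my-app /opt/my-app
# The command a container runs when started from this image (main.py is a placeholder)
CMD ["python3", "/opt/my-app/main.py"]

With that file saved as Dockerfile, docker build -t my-app . builds the image, and docker run my-app starts the container.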

Docker Gives You Consistency

Once your app and its dependencies are packaged into a Docker image, the app should run on any Linux server that supports Docker. This now frees up the time you’d otherwise spend troubleshooting your environments.

Furthermore, if you want your Docker container to write any data to your host server, you have to explicitly mount a part of your server filesystem as a volume inside the container. Otherwise, any changes made in the container go away when the container is stopped. This lets you isolate your server from any problems that might occur in your Docker containers.
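For example, a container started with a volume mount like this (the paths are hypothetical) can write data that outlives it:

docker run -v /srv/app-data:/data my-app   # /srv/app-data on the host appears as /data inside the container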

We’ve taken advantage of Docker ourselves: when you start a build in Distelli, it gets built in a Docker container, which contains the basic tools needed to build your app and ensures that builds run in isolated environments on our servers.

So when you’re building a JavaScript app, it’s built in a Docker container that has NodeJS and NVM. If you’re building a PHP app, it’s built in a container that has PHP 5.6.5, and so on. And yes, if you’re building a Docker image, it’s built in a Docker container that has Docker installed. This gives us:

  • A cheaper alternative to virtualization: We save money by not having to use a virtual machine (VM) for every single build. Instead, we run Docker containers on our build machines. This also means that we don’t need to ‘reserve’ system resources for each container—if a build in one container doesn’t need memory or CPU cycles, those resources are available to other containers running on the same machine.

  • A consistent and repeatable build environment for each language: Once your app is built, the Docker container is stopped, and any temporary data it created goes away with it. When you start your next build, a new container is created, and the build begins as if on a brand new server. This means that today’s build environment is the same as yesterday’s is the same as the day before and will continue to be the same until we switch to using a different image.

  • Isolation between builds: Every build environment is a separate container, and is essentially a black box. By default, containers don’t talk to each other, so your builds don’t interfere with anyone else’s (and vice versa).

What Not to Expect

Containers solve a lot of problems, but also come with a few of their own. Each of these topics can be a blog post of its own, but we’ll choose brevity for now:

  • Networking: When you start the Docker container for your app, you’ll need to define its networking settings: to begin with, you need to set up port mapping, so that traffic coming to your server is directed to the right container. If you have multiple containers that need to talk to each other, you must set up linking between containers (see the sketch after this list). If your requirements are more complex than that, you’ll need a more advanced networking configuration.

  • Storage: By default, any data created in a container is removed when the container is stopped. So that your application’s data doesn’t disappear when you stop your container, you need to set up some combination of Docker storage volumes and data-only Docker containers that can maintain your application’s state.

  • Security: Depending on your choice of reading, views on Docker security range from ‘completely fine’ to ‘you shouldn’t trust this in production’. Like any other system, the default settings for Docker containers can pose some risks, but you can harden security with the right configuration.
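To make the networking bullet concrete, here's a minimal sketch using the port mapping and container linking Docker offered at the time; web-image and worker-image are hypothetical image names:

docker run -d -p 8080:80 --name web web-image   # traffic to host port 8080 reaches port 80 in the container
docker run -d --link web:web worker-image       # this container can now reach the first one as the host 'web'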

None of these, however, should stand in the way of seeing what Docker is capable of.

Start Experimenting with Docker

The only way to tell if using Docker can work for your application is to try it out. And the fastest way to find out what happens to your app in a Docker image is to build it using Distelli. To get started with creating your Docker-based application:

  • Create a Dockerfile
    • Create a Docker Hub account, and browse through the repositories to choose a base image. The base image could be a Linux distribution of your choice, or an image that someone else has created.
    • Corral your application’s dependencies—for instance, the prerequisites that you’ve put in the PreBuild section of your Distelli build steps.
    • To add commands to the Docker image, use the RUN statement in the Dockerfile. For example, if one of your build steps was sudo apt-get install -y nodejs, the statement you need to put in your Dockerfile is RUN apt-get install -y nodejs. A good practice to limit the number of layers you create is to chain your commands in the RUN statement: RUN apt-get -y update && apt-get install -y nodejs, and so on.
    • To add your release artifacts to the Docker image, look at the PkgInclude section of your Distelli build steps, where you specify which files and folders to include in your build. Then use the ADD statement to copy those files into the image. For example, if one of the paths in the PkgInclude section of your build is ./appfiles, the statement you need in your Dockerfile is ADD ./appfiles /appfiles, where the second argument is the destination path inside the image (the sketch after this list shows ADD in context).
  • Build!: In Distelli, build the application with Docker and upload it to your Docker Hub account. For steps, see Building your application with Docker.
  • Deploy: Deploy your new Docker image to a completely different Linux distribution than your dev/test environment.
  • Discover: Potentially, find out about dependencies and configuration settings you didn’t realize you needed.
  • Repeat: Because iteration is inevitable.
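As a sketch of where those steps lead, a minimal Dockerfile for a Node.js app built from the examples above might look like this (the /appfiles destination and server.js entry point are assumptions):

FROM ubuntu:latest
# Chain update and install into one RUN statement to limit layers
RUN apt-get -y update && apt-get install -y nodejs
# Copy the build artifacts from the PkgInclude paths into the image
ADD ./appfiles /appfiles
# Start the app when the container launches (server.js is a placeholder)
CMD ["nodejs", "/appfiles/server.js"]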

To learn more: