Mark Eschbach

Software Developer && System Analyst

Docker

Docker is a platform for executing containers, most notably Linux containers. When running under Windows, Docker may also execute Windows containers. Under Linux, Docker is built on the `libcontainer` system to create logically isolated environments. By default Docker attempts to create virtual machine-like isolation of IPC (network, pipes, etc.), memory, and CPU; when creating a container an operator can optionally expose host resources such as the network stack.
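
For example, a minimal sketch of the difference using the stock nginx image:

    # Default isolation: the container gets its own network stack,
    # so a port must be published explicitly to reach it from the host.
    docker run -d -p 8080:80 nginx

    # Opt out of network isolation and share the host's network stack.
    docker run -d --net=host nginx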

As a platform Docker provides a proprietary distribution format which has become the de-facto standard. Image distribution is internet-enabled by default, utilizing an implicit centralized registry, Docker Hub. Due to the free hosting on this centralized repository many open source projects have published images there, aiding in the rapid adoption of Docker. Although there have been attempts to standardize on other container image formats, such as the Open Container Initiative's image format, many still recommend building images via Docker since the alternative tool chains are significantly more complicated.

Notes - To be expanded on later

  • Docker can be used within a virtualized environment, such as a cloud system from Amazon or Rackspace. This provides an interesting case for deployment.
  • Much of the Docker documentation (both from Docker and on the internet) I've seen to date encourages all dependencies to be installed, including the tool chains to build the software. Security theory would state this provides another vector for attack, a possible problem.
  • Much of the documentation encourages people to use Docker Hub and subscribe to a monthly plan for hosting, however it is easy to set up a registry to serve images from a given host. Start the image with 'docker run -e SETTINGS_FLAVOR=local -p 5000:5000 registry:latest'. This will start the registry and bind a listener to port 5000. From here any host whose HTTP/TCP/IP traffic is allowed through will be able to pull images. To date I have yet to find a good solution for securing the registry, however the best recommendation I've come across is to use a reverse proxy which demands HTTP Basic authentication (see the sketch after this list of notes). I have yet to try it, but if you do, then please let me know.
  • Sometimes Docker will accept a container ID or a name. In the case of links, Docker will only accept a name. As a human I generally like using names better, however when automating Docker with Jenkins or scripts you will need to convert a container ID to a name (a scripted example follows this list of notes). To do so you may use the inspect subcommand with the format option, using a dot-notated separation of the property names. For example:
    docker inspect --format='{{.Name}}' $db
    
    The returned name has a leading slash, which appears to work. Inspect - Command Line - Docker Documentation
  • File Systems

    Docker was originally designed and built using AUFS, or Another Union File System, to support file system isolation along with cgroups. AUFS was never included within the mainline Linux kernel, which unfortunately means you need to run a modified kernel in order to use Docker's default file system. Depending on the distribution you use, this may be built into their default kernel.

    My experiences with different file systems

    Desiring some support and features of the newer kernels, I upgraded a machine in my workshop. After several hours of researching the changes, I decided to upgrade the Ubuntu 14.04 LTS box. Unknown to me, however, AUFS support was dropped from the 3.17 kernel I installed. This kicked off several experiments with different file systems.

    The default then became device-mapper (LVM) over a loopback file within my / file system; Docker's data is stored at /var/lib/docker. To capture a usable metric, I timed container construction plus a system-level test run for a system of mine via Jenkins. On average it consumed 5.5 minutes to build the container and run the tests. The baseline with AUFS was 2.5 minutes, so the loopback setup consumed over twice the time.

    btrfs -> 4.5 minutes, consuming 7 GB on disk with 52 GB logical (btrfs uses an interesting copy-on-write algorithm which is integrated with Docker, so the initial data is marked copy-on-write on disk). btrfs would require a lot of learning and time to figure out, which would not pay off in terms of time or improvements. (A sketch of selecting the storage driver follows this list of notes.)

    Ubuntu

    Prior to 14.10
    runs 3.13 of the Linux kernel and includes AUFS support.

    14.10
    runs 3.16 of the Linux kernel and does not include either AUFS or OverlayFS.

    15.04
    runs 3.18.1. OverlayFS was merged into the 3.18 kernel.

    Linux Kernel Highlights: http://kernelnewbies.org/LinuxChanges

    Linux Kernel Changelog: https://www.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.18.1

    Consolidated Announcements for 3.18.1: http://www.phoronix.com/scan.php?page=news_item&px=MTgyMzE
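
As promised above, here is a minimal sketch of fronting the registry with an nginx reverse proxy demanding HTTP Basic authentication. The host name, certificate paths, and user are my assumptions, and as noted I have yet to try this myself:

    # Bind the registry to localhost only, so it is reachable solely through the proxy.
    docker run -d -e SETTINGS_FLAVOR=local -p 127.0.0.1:5000:5000 registry:latest

    # Create a password file (htpasswd ships with apache2-utils/httpd-tools).
    htpasswd -c /etc/nginx/registry.htpasswd someuser

    # /etc/nginx/conf.d/registry.conf -- proxy demanding Basic auth before the registry
    server {
        listen 443 ssl;
        server_name registry.example.test;            # assumed host name
        ssl_certificate     /etc/nginx/registry.crt;  # assumed certificate pair
        ssl_certificate_key /etc/nginx/registry.key;

        location / {
            auth_basic           "Docker Registry";
            auth_basic_user_file /etc/nginx/registry.htpasswd;
            proxy_pass           http://127.0.0.1:5000;
            proxy_set_header     Host $host;
        }
    }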
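
The name-resolution trick from the notes above, as it might appear in a Jenkins shell step (the postgres and myapp images are placeholders):

    # Start a container and capture its ID.
    db=$(docker run -d postgres)

    # Resolve the ID to a name; Docker reports it with a leading slash, so strip it.
    name=$(docker inspect --format='{{.Name}}' "$db")
    name=${name#/}

    # Links only accept names, so link against the resolved name.
    docker run -d --name app --link "$name:db" myapp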
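
And for the file system notes, a quick sketch of checking which storage driver the daemon is using and starting it with a different one (daemon flags per Docker 1.x; the btrfs volume is my assumption):

    # Show the storage driver currently in use (aufs, devicemapper, btrfs, ...).
    docker info | grep 'Storage Driver'

    # Start the daemon with an explicit driver; assumes /var/lib/docker
    # already lives on a btrfs volume.
    docker -d --storage-driver=btrfs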

Docker CDN using nginx, git, and etcd

Containers involved: nginx → content container ← git pull container via etcd exec-watch ← etcd ← git repository.

Content Container

The linchpin of this setup is a data container we'll create to hold our HTTP service configuration and content. The content will live at /www. I'll be using nginx, however feel free to use any service container you would like. nginx will expect its configuration data at /etc/nginx.

Volumes will be exposed at both the configuration path and the content path. This is achieved by passing -v with the path given as an argument. To create the container I'm going to use the nginx image, as this allows the reuse of underlying layers. An example of the command to create this container is:

    docker create -v /etc/nginx -v /www --name www-content nginx /bin/true

Configuration Watcher
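
A minimal sketch of the watcher idea from the diagram above, assuming etcd 2.x's etcdctl and a hypothetical key /deploy/www; the watcher container would run with --volumes-from www-content and need git and etcdctl installed:

    # Re-pull the content whenever the key changes in etcd.
    etcdctl exec-watch /deploy/www -- sh -c 'cd /www && git pull'

A post-receive hook on the git repository would then write to /deploy/www to trigger the pull.
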
HTTP service container

A service container is responsible for actually serving the data over HTTP, or proxying it. nginx offers an out-of-the-box Docker container with the name nginx.

Our service container, nginx, looks in the path /etc/nginx for its configuration files. The configuration we'll provide will host the static content under /www. Since I'm planning on using the nginx container to serve the files, I'm going to cheat to avoid duplicating data.

    docker create -v /etc/nginx -v /www --name www-content nginx /bin/true

If you don't have the nginx image locally, this will pull it.

Next you'll need to run git commands against the specified directories. I used the io.js container, since I had it around.

    docker run --rm --volumes-from www-content -v ~/.ssh:/root/.ssh iojs:3.3.0 git clone https://hosting.test/some-repository.git /etc/nginx
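
Finally, to actually serve the content, start the nginx service container against the data container's volumes (the published port is my choice):

    docker run -d --volumes-from www-content -p 80:80 --name www nginx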
