Docker is a containerization solution that allows apps to run in a sandboxed environment, with all the dependencies they need, without the overhead of a virtual machine. This sounds great: we can containerize our applications and deploy them, with no more server provisioning and maintenance! Unfortunately, this is much harder than it sounds if you need a zero-downtime solution. If you are just deploying a small-scale app without a cluster of servers, you will have downtime while Docker stops the existing app container and starts the container with your new code.
How we use Docker
We are keeping an eye on Docker deployments and hope to use them for our apps soon. Services like Amazon ECS make it easier to deploy containers to a cluster of instances, but we don't need a cluster for the majority of our apps. In the meantime, we can use Docker to containerize our existing internal tools. Containerizing these tools makes it easier for developers to contribute because they don't have to worry about setting up a local instance of Jenkins, Puppet, Elasticsearch, etc.
Local Services
New developer setup is a pain; that's why tools like Boxen exist. Installing software on your local machine can still be troublesome, though. What if one app uses Elasticsearch 2 and another uses Elasticsearch 1.3? Not only do you have to dig through old download links, you then have to figure out how to run them side by side. It's just messy. Instead, you can use Docker and Docker Compose to run all of these local services. I personally use this short bash function to spin up services:
```bash
function compose {
  docker-compose -f ~/Workspace/compose/$1/docker-compose.yml up -d
}
```
In `~/Workspace/compose`, I have a collection of directories, one per service, each with a docker-compose.yml. For example:
```yaml
postgres:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: db
    POSTGRES_PASSWORD: mysecretpassword
  volumes_from:
    - data

data:
  image: busybox
  volumes:
    - /var/lib/postgresql
```
To get Postgres running, I just have to run `compose postgres`. We keep the `compose` directory in a Git repository, so any new developer can pull it down and run any service they need. The only requirements are Docker and Docker Compose!
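To make the earlier Elasticsearch example concrete, here is a hypothetical pair of compose files (the directory names, image tags, and host ports are mine, for illustration) that let both versions run side by side:

```yaml
# ~/Workspace/compose/elasticsearch/docker-compose.yml (hypothetical)
elasticsearch:
  image: elasticsearch:2
  ports:
    - "9200:9200"

# ~/Workspace/compose/elasticsearch13/docker-compose.yml (hypothetical)
# Mapped to a different host port so both versions can run at once.
elasticsearch13:
  image: elasticsearch:1.3
  ports:
    - "9201:9200"
```

With the bash function above, `compose elasticsearch` and `compose elasticsearch13` bring each one up independently.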
Continuous Integration
Our Jenkins agents only need Jenkins and Docker installed to build any Rails app. Instead of provisioning new Jenkins servers with Puppet or updating AMIs, we just have one AMI with Jenkins and Docker installed. When we need to add a new script, like building an app with PostGIS, we simply update the container and all of our agents can support it. We keep a repository of docker-compose.yml files that perform our CI tasks, like running `rspec`, `karma`, or `cap deploy`, then use `docker run` to execute them during the build process.
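As an illustration, one of those task files might look something like this minimal sketch; the image name, mount point, and database wiring are assumptions rather than our actual configuration:

```yaml
# Hypothetical CI task: run an app's test suite in a container.
# The example/rails-ci image and credentials are made up.
rspec:
  image: example/rails-ci      # assumed image with Ruby and build deps
  command: bundle exec rspec
  volumes:
    - .:/app                   # mount the checked-out workspace
  working_dir: /app
  links:
    - db
  environment:
    RAILS_ENV: test
    DATABASE_URL: postgres://db:mysecretpassword@db:5432/db
db:
  image: postgres
  environment:
    POSTGRES_USER: db
    POSTGRES_PASSWORD: mysecretpassword
```

The build step then only needs Docker itself; nothing Ruby-specific has to be installed on the agent.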
Log Servers
Our log server doesn't need to be re-deployed or updated very often. When it does, it's okay if it has a little bit of downtime, because logstash-forwarder will queue up the logs on each server during that period. This makes it a perfect candidate to deploy using Docker. Setting up an ELK (Elasticsearch, Logstash, Kibana) server with Docker essentially requires no work: we just use the willdurand/docker-elk Dockerfile and launch it on EC2.
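For consistency with our other services, the same setup could be expressed as a compose file. This is a minimal sketch, assuming the willdurand/docker-elk repo is cloned locally; the ports are the usual ELK defaults and the volume path is my own choice, so check the repo's README before relying on it:

```yaml
# Hypothetical compose file: build the image from a local clone of
# willdurand/docker-elk. Ports are typical ELK defaults, not verified
# against the repo.
elk:
  build: ./docker-elk
  ports:
    - "5601:5601"     # Kibana UI
    - "9200:9200"     # Elasticsearch HTTP API
    - "5043:5043"     # lumberjack input for logstash-forwarder
  volumes:
    - /data/elasticsearch:/var/lib/elasticsearch   # keep indices across re-deploys
```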
Puppet Testing
Testing out Puppet changes is extremely important: you don't want to provision all of your servers with a bad change. Our Puppet testing setup relies on Docker and ServerSpec so that we can test and build new changes on the Jenkins agents mentioned earlier. When you install ServerSpec, it generates a Rakefile so that tests can be run with `rake`. Our docker-compose.yml spins up two containers, master and agent, using different Dockerfiles. The `serverspec` container is run on our Jenkins agents whenever a change is made to Puppet.
```yaml
puppetmaster:
  build: .
  dockerfile: Dockerfile.master

agent: &agent
  build: .
  dockerfile: Dockerfile.agent
  links:
    - puppetmaster
  environment:
    FACTER_app_name: test
    FACTER_environment: test
    FACTER_role: webserver
  volumes:
    - ./spec/webserver/:/puppet/spec/
    - ./reports/:/puppet/reports/

serverspec:
  <<: *agent
  command: /bin/sh -c "sleep 5; puppet agent -t --server puppetmaster --verbose --debug --waitforcert 60; bundle exec rake"
```
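On the agent, kicking off the suite is then just a matter of building the images and running the `serverspec` service, with something like the following (the exact invocation is illustrative):

```bash
# Hypothetical Jenkins build step. `docker-compose run` starts the linked
# puppetmaster container automatically before running serverspec.
docker-compose build
docker-compose run serverspec
```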
How we want to use Docker in the future
We are still searching for a Docker deployment solution that is suitable for cluster-less apps. Switching to Docker would provide us with several benefits:
- We could potentially remove Capistrano, as each deployment would be a standalone container that is built from scratch. No more Unicorn hotswaps.
- Our CI servers could bake secrets directly into the container after pulling them from Vault (see the sketch after this list). This would make 12-factor compliance a lot easier and also remove the need for gems like Figaro.
- Using Amazon ECR, we could offload container permissions to IAM, instead of managing authorization with a custom solution.
- At the end of an agreement, we can deliver the code and all dependencies in one container, instead of delivering the code and server documentation separately.
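A rough sketch of the secret-baking idea, assuming a made-up Vault path and build argument (this is not something we run today):

```bash
# Hypothetical CI step: read a secret from Vault and bake it into the
# image at build time. Requires a matching ARG/ENV pair in the Dockerfile.
SECRET_KEY_BASE=$(vault read -field=secret_key_base secret/myapp/production)
docker build --build-arg SECRET_KEY_BASE="$SECRET_KEY_BASE" -t myapp .
```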