Deploying Applications with Docker and Saltstack


Inside Amplitude
March 3, 2016
Jay Bendon
Senior Software Engineer

Here at Amplitude we’ve chosen to build and deploy with docker and saltstack. Docker allows us to minimize configuration and customization required to deploy our services. Saltstack is a powerful systems and configuration management tool that is fast, scales well and is highly extendable for solving just about any infrastructure automation or orchestration problem.

How did we choose docker and saltstack?

I'm sure you can imagine that a lot of startups start off building and deploying software on a manually provisioned machine by cloning a git repo and running things by hand. Naturally, this process doesn't scale very far.

First we turned to docker. This was a no-brainer for us considering the speed and repeatability of builds we've achieved using docker containers for our services. Once we knew we were going to be building with docker, we went looking for a systems and configuration management tool to make our lives easier.

At first, a few service deployments were rolled out using a tool called Ansible. Ansible was great for getting off the ground quickly, but it didn't take long for us to notice runs slowing down and more complex logic becoming difficult to maintain as we scaled. We took some time to review the options available to us. Chef was a strong candidate, especially given its widespread adoption, strong testing capabilities, simplicity and depth of use, but ultimately we chose Saltstack.

Why? Well, first, we are mostly a java and python shop, so being able to write python for systems and configuration management was a huge plus for the team. Security was another concern for us: Saltstack uses public/private keys for all masters and minions, guaranteeing their authenticity and providing a secure channel for master/minion communication without sacrificing performance or scalability.

Saltstack’s commitment to docker was very promising.

At the time we started using salt, the docker-io module was the standard; after community feedback the maintainers realized there were enough improvements available that it was worth building an entirely new module to address them. Saltstack has an excellent community, and I have been able to both give and receive help readily. Bug reports, feature requests and pull requests are responded to very quickly, usually on the order of 24 hours, although admittedly some bigger bugs have taken more time to resolve.

Salt Docker

Getting started with docker and saltstack is pretty easy. We will assume that setting up the salt master and minions is already done. This information is written for use with the docker-io module, which is still the default docker module for salt, but has been deprecated in favor of the newer docker-ng module. The code examples are written to use the python renderer available for saltstack.

Set up the docker dependencies using saltstack. You will want to install python-pip, docker-py and lxc-docker, and ensure the docker service is running on any nodes you want to deploy docker containers to.
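As a rough sketch (the file name docker_deps.sls and the order values are illustrative; the package, pip and service names are those listed above), the dependencies can be handled with ordinary pkg, pip and service states using the same python renderer as the examples that follow:

docker_deps.sls

#!py
def run():
    config = {}
    # install pip so we can manage docker-py
    config['python-pip'] = {
        'pkg.installed': [
            {'order': 100}
        ]
    }
    # docker-py is required by salt's docker states
    config['docker-py'] = {
        'pip.installed': [
            {'require': [{'pkg': 'python-pip'}]},
            {'order': 110}
        ]
    }
    # the docker engine package
    config['lxc-docker'] = {
        'pkg.installed': [
            {'order': 120}
        ]
    }
    # make sure the docker daemon is running before containers are deployed
    config['docker'] = {
        'service.running': [
            {'require': [{'pkg': 'lxc-docker'}]},
            {'order': 130}
        ]
    }
    return config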

Pulling your first container

nginx.sls

#!py
def run():
    config = {}
    config['nginx_pulled'] = {
        'docker.pulled': [
            {'name': 'nginx'},        # docker repository
            {'tag': 'latest'},
            {'force': True},          # force pull image
            {'require': [
                {'pip': 'docker-py'}  # require that docker-py is installed by pip
            ]},
            {'order': 200}            # optional strict ordering of salt operations
        ]
    }
    return config

Installing the container

nginx.sls

config['ournginx_container'] = {
    'docker.installed': [
        {'name': 'ournginx'},
        {'image': 'nginx:latest'},
        {'mem_limit': None},  # workaround for https://github.com/saltstack/salt/issues/25492
        {'require': [
            {'docker': 'nginx_pulled'},  # require image is pulled
        ]},
        {'order': 210}
    ]
}

Running the container

nginx.sls

config['ournginx_running'] = {
    'docker.running': [
        {'name': 'ournginx'},
        {'image': 'nginx:latest'},
        {'require': [
            {'docker': 'ournginx_container'},
        ]},
        {'order': 211}
    ]
}

One caveat to note: the install and running states above should have identical options where applicable. If you set up any environment variables or volumes, be sure to configure them for both docker.installed and docker.running.
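For example (a rough sketch only; the environment and volumes options follow the docker-io state docs for the salt version we were running, and the values shown are purely illustrative), whatever you set on the install state gets repeated on the running state:

config['ournginx_container'] = {
    'docker.installed': [
        {'name': 'ournginx'},
        {'image': 'nginx:latest'},
        {'environment': [{'NGINX_WORKERS': '4'}]},  # illustrative env var
        {'volumes': ['/etc/nginx/conf.d']},         # illustrative volume
        # ... remaining options as shown above ...
    ]
}

config['ournginx_running'] = {
    'docker.running': [
        {'name': 'ournginx'},
        {'image': 'nginx:latest'},
        {'environment': [{'NGINX_WORKERS': '4'}]},  # must match docker.installed
        {'volumes': ['/etc/nginx/conf.d']},         # must match docker.installed
        # ... remaining options as shown above ...
    ]
}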

Now that handles the bare minimum required to pull and deploy a container. But what happens if we release a new "latest" nginx image? It will be pulled, but the installed and running containers will not be updated. Let's fix that.

nginx.sls

config['ournginx_absent'] = {
    'cmd.run': [
        {'name': '/usr/bin/docker stop -t 10 ournginx && /usr/bin/docker rm -f ournginx'},
        {'unless': '/usr/local/bin/docker_compare_running ournginx nginx:latest'},
        {'require': [{'file': 'docker_compare_running'}]},
        {'order': 201}
    ]
}

docker_compare_running

#!/bin/bash
# Script should return 0 if the container is not running, there are insufficient
# args, or the downloaded image and the running container's image are not different.
# Return 1 if the running container is different.
if [ $# -lt 2 ]; then
    echo "Please provide an image to search"
    # return 0 so salt doesn't think the images are different
    exit 0
fi
# return 0 if container not running
if [ $(/usr/bin/docker ps --filter name=$1 -q | wc -l) -eq 0 ]; then
    exit 0
fi
# return >0 if containers do not match.
/usr/bin/docker inspect --format "{{ .Image }}" $1 | grep $(/usr/bin/docker inspect --format "{{ .Id }}" $2)

Great. Now when we release a new version of the nginx image, the old container will be stopped and removed, allowing the updated container to be launched in its place.
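These states are applied like any other salt state: from the master, something along the lines of salt 'web*' state.sls nginx (the target pattern here is illustrative) runs the pull, compare, install and running steps in the order given above, and re-running it after a new image is pushed triggers the replacement.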

All of this is great so far, but if you're setting this up to deploy from a private docker registry or repository, you will need a way to authenticate. The docker modules support the docker v1 registry API. All that is necessary to enable pulling from a private repository or registry is configuring the docker-registries pillar data, which is just a map from the registry URL to the username, email and password with access.

Our default pillar data:

init.sls

#!py
def run():
    base_pillar = {}
    base_pillar['docker-registries'] = {
        'https://index.docker.io/v1/': {
            'email': 'devops@amplitude.com',
            'password': 'docker-registry-password',
            'username': 'accountname'
        }
    }
    return base_pillar

That's all that is required for basic setup, deployment and upgrade of docker containers with saltstack. We chose to write our salt states in python because it lets us express more complex logic while keeping the states as readable as possible compared to yaml state files. Most of our docker deployments grew out of these basic steps, and there is still a lot of room for improvement depending on your specific needs.

A few things that can be helpful or improved upon:

  • Using pillar values to explicitly gate the container or containers deployed (see the sketch after this list).
  • Deploying new containers alongside old ones for a seamless deployment.
  • Container rollbacks.
  • Upgrading to docker-ng.
  • A custom state or module to reduce the boilerplate required for deploying docker containers with saltstack.
  • Scaling the number or size of containers based on host specs.
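As a sketch of the first item (the nginx_tag pillar key is hypothetical), pillar data is available to the python renderer, so a state can decide what, or whether, to deploy:

nginx.sls

#!py
def run():
    config = {}
    # only deploy if a tag has been explicitly set in pillar for this minion
    tag = __pillar__.get('nginx_tag')
    if not tag:
        return config
    config['nginx_pulled'] = {
        'docker.pulled': [
            {'name': 'nginx'},
            {'tag': tag},      # deploy exactly the tag pillar says to
            {'force': True},
            {'order': 200}
        ]
    }
    return config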

Comments

Kevin Audleman: Does your method of stopping the nginx container then spinning up a new one result in downtime?

Jay Bendon: Yes. If you're only running a single instance of NGINX, or you don't have health-check based load balancing and you reload all the containers at once, there is potential for end users to experience unavailability. NGINX handles the shutdown signal by finishing in-flight requests before terminating the process, without accepting any new connections. Our load balancers detect that the NGINX instance is going out of rotation and remove it from the pool while the container is reloaded, and we never release all of our containers at the same time, so the transition goes undetected by the end user. You do want to make sure your shutdown timeout is long enough for your requests to finish processing, otherwise they may be terminated in-flight.

About the Author
Jay Bendon
Senior Software Engineer
Jay Bendon is a Senior DevOps Engineer at Amplitude. He focuses on ensuring Amplitude engineers have the tools, insights and guidance necessary to operate reliably while scaling for growth. He was previously part of Google, Amazon, and Ooyala engineering organizations.