Scripts to Rule Them All

February 1, 2016

Problem

Recently at work we have started to redo our Continuous Integration pipeline by relying more heavily on Docker, docker-compose, and Jenkins. Previously we had a mashup of custom scripts; while we still used Docker and Jenkins, there was no standardized process for setting up a new codebase for CI.

This worked OK when there was only a handful of codebases, but as we are making the move towards more and more microservices, each project requiring its own custom setup for CI simply does not scale.

Ideally, a developer should not have to worry too much about how their project integrates with the company's CI process once that process is set up and working correctly. Obviously someone (preferably a group of people) should have knowledge of how things actually work 'behind the scenes', but new or more junior developers should not have to put much thought into this when they are just getting spun up on a project. It would be great if new team members could simply 'get to work' and not be overwhelmed by learning the ins and outs of the build system.

I've mostly been speaking from the developer's perspective, but standardizing the way applications are set up for CI would also be a huge benefit for the devops members of the team. Imagine if integrating a new project into the existing CI process were as simple as copying and pasting a single script.

Solution

This all sounds great on paper, but how would you actually implement it? At work we recently landed on leveraging GitHub's scripts-to-rule-them-all repo and updated the scripts to fit our needs. GitHub has written a great blog post describing the idea behind these scripts, so I won't reproduce all of that here.

Basically, they have created example scripts to standardize the common tasks of working on a project: bootstrapping it, setting it up, updating it, running its tests, and starting its server.

The repository's README also provides a great overview of each example script and how it fits into the development and deployment process, so I encourage you to check it out.

Tweaking

GitHub's scripts were a great starting point for us; however, they did not solve all of our problems. The main differences between our needs and the example GitHub scripts were:

  1. We use Docker and docker-compose for developing and deploying our applications, while the GitHub scripts do not
  2. We have a polyglot architecture (currently Ruby, Java, and Go), whereas the GitHub scripts only cover a Ruby (Rails) project
  3. Because we use Docker, our CI process creates several Docker containers on each run, which we need to clean up to conserve resources

I’ll quickly go over how we solved each of these problems and how our scripts interact with each other.

Basic Flow

We first needed to decide how the scripts would be run by the CI server (Jenkins) and in what order.

After some tweaking we came up with the following set of scripts:

└── script
    ├── build
    ├── cibuild
    ├── cleanup
    ├── console
    ├── push
    ├── server
    ├── test
    └── verify

A Jenkins job executes the cibuild script, which then kicks off the other scripts in the following order: test, build, verify, and finally push.

Here is a sample cibuild script:

#!/usr/bin/env bash

# script/cibuild: Setup environment for CI to run tests. This is primarily
#                 designed to run on the continuous integration server.

set -e

# cd to project root
cd "$(dirname "$0")/.."

# run tests
script/test

# build candidate
script/build

# verify candidate image
script/verify

# push candidate image
script/push

If any of the scripts returns a non-zero exit code, Jenkins fails the job and execution stops.
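
For reference, the Jenkins job itself can stay tiny. Something like the following 'Execute shell' build step is all that's needed (a minimal sketch; the project name here is just a made-up example):

#!/usr/bin/env bash

# Jenkins 'Execute shell' build step (sketch)
export PROJECT_NAME=my-service   # hypothetical project name

./script/cibuild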

Integrating Docker

Integrating Docker and docker-compose was not that difficult since we already use these technologies for developing and deploying. Basically, the only difference between our scripts and the sample ones provided by GitHub is the inclusion of docker-compose and Docker commands for running tests and building images.

Here is an example of the test script that we use for running RSpec tests for a Rails app:

#!/usr/bin/env bash

# script/test: Run RSpec tests for a Rails app via docker-compose. This is
#              primarily designed to run on the continuous integration server.

set -e

[[ "${PROJECT_NAME:-}" ]] || (echo "PROJECT_NAME is required." && exit 1)

# cd to project root
cd "$(dirname "$0")/.."

# make sure containers are up and ready
docker-compose --project-name="$PROJECT_NAME" up -d

printf "\n===> Running tests ...\n"

# prepare the test database
docker-compose --project-name="$PROJECT_NAME" run --rm "$PROJECT_NAME" bin/rake db:create db:migrate

if [ -n "$1" ]; then
  # pass arguments to the test call. This is useful for running a single test.
  docker-compose --project-name="$PROJECT_NAME" run --rm "$PROJECT_NAME" bin/rspec "$1"
else
  docker-compose --project-name="$PROJECT_NAME" run --rm "$PROJECT_NAME" bin/rspec
fi
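
Because the script passes "$1" through to RSpec, running the whole suite or a single spec is the same command locally and on CI:

# run the full suite
script/test

# run a single spec file (hypothetical path)
script/test spec/models/user_spec.rb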

Ruby, Java, Go... oh my

As mentioned above, another big difference from the GitHub scripts is that we want to support multiple languages besides just Ruby; specifically, Java and Go. Each of these languages has its own set of build tools and preferred way of running tests, so we want to be as flexible as possible while still enforcing a standard.

The simplest solution that seems to work well is to split each language into its own folder with its own set of scripts. The result looks something like this:

├── java
│   └── script
│       ├── build
│       ├── cibuild
│       ├── cleanup
│       ├── push
│       ├── server
│       ├── test
│       └── verify
└── ruby
    └── script
        ├── build
        ├── cibuild
        ├── cleanup
        ├── console
        ├── push
        ├── server
        ├── test
        └── verify

We're still working on the set of Go scripts; however, here is an example of the Java test script:

#!/usr/bin/env bash

# script/test: Run the Maven test suite. This is primarily designed
#              to run on the continuous integration server.

set -e

# cd to project root
cd "$(dirname "$0")/.."

[[ -f pom.xml ]] || { echo "This project uses Maven. A $(pwd)/pom.xml file is required"; exit 1; }

printf "\n===> Running tests ...\n"

mvn clean test
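
For Go, the test script would follow the same pattern. Here's a rough sketch of what ours might end up looking like (an assumption on my part, since we haven't finalized the Go scripts yet):

#!/usr/bin/env bash

# script/test: Run tests for a Go project. A sketch only; not our
#              finalized script.

set -e

# cd to project root
cd "$(dirname "$0")/.."

printf "\n===> Running tests ...\n"

go test ./...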

When starting a new project or updating an existing one, all you have to do is copy the set of scripts applicable to your language into your project and you are good to go. We are also working on a set of 'common', language-independent scripts, such as the cleanup script that I'll cover next.
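
For example, bootstrapping a new Ruby service could be a single copy (assuming the scripts live in a shared ci-scripts repo, which is a hypothetical layout):

# copy the Ruby scripts into a new project
cp -r ci-scripts/ruby/script my-new-service/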

Cleaning Up Your Mess

Docker and docker-compose are awesome tools that help make developing, testing, and deploying applications much easier. However, if you are not careful and don't remember to clean up after yourself, your disks can quickly fill up with unused or 'dead' images.

This is usually not as big of a problem on your development machine because you aren't spinning up and tearing down containers all that often. On a CI server such as Jenkins, however, these containers along with their respective images can be created each time a build job runs. Multiply the number of builds by the number of projects you are building and you can quickly end up with hundreds of images taking up valuable disk space.

Also, once your candidate image passes CI and is pushed to a repository like Docker Hub, it isn't of much use to your CI server anymore, so there is really no need to keep it around.
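
You can get a quick sense of the buildup by counting the images on the CI box:

# number of images currently on this host
docker images -q | wc -l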

We solved the issue of these ‘dead’ images by creating a cleanup script which contains the following:

#!/usr/bin/env bash

# script/cleanup: Cleanup environment after CI. This is primarily
#                 designed to run on the continuous integration server.

set -e

[[ "${PROJECT_NAME:-}" ]] || (echo "PROJECT_NAME is required." && exit 1)

# cd to project root
cd "$(dirname "$0")/.."

# stop and remove this project's containers
docker-compose --project-name="$PROJECT_NAME" stop &> /dev/null || true
docker-compose --project-name="$PROJECT_NAME" rm --force &> /dev/null || true

# remove any exited containers along with their volumes
docker rm -v $(docker ps -a -q -f status=exited) &> /dev/null || true

# remove any dangling (untagged) images
docker rmi $(docker images --filter 'dangling=true' -q --no-trunc) &> /dev/null || true

This script ensures that Docker and docker-compose stop and remove all of your project's containers after they are run. It goes as far as to remove any 'dangling' images; these are untagged images left behind when a new build takes the repo:tag away from an existing image.

To run our cleanup script, we make use of a Jenkins plugin, PostBuildScript. This plugin makes sure that our script runs whether the build succeeds or fails.
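
The post-build step itself is just another shell invocation (a sketch; PROJECT_NAME is whatever the job exported earlier):

# PostBuildScript 'Execute shell' step: runs whether the build passed or failed
./script/cleanup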

Summary

So far the 'scripts-to-rule-them-all' have worked great for setting up new projects and integrating with existing ones. Not only do these scripts make it much easier to standardize our CI process, they also allow new members of the team to get started quickly without having to learn the specific test and run commands for each project or language.

Let me know if you or your company have tried something similar and how it worked for you. If you are still looking for a solution, I recommend giving GitHub's scripts-to-rule-them-all a try.
