Migrating From Travis to GitHub Actions

Wednesday, September 18, 2019

Over the weekend I decided to move my CI pipeline from TravisCI to GitHub Actions for my open source Go project, Flipt. I wanted to replace my existing CI, automate a manual release process, and try to do it all with the new GitHub Actions.

Full Disclosure: I work for GitHub but not on the Actions team. I wanted to set up Actions in my open source project without getting any help from the Actions team or anyone at GitHub. I was not asked to write this post by anyone at GitHub; my goal is simply to relay my experiences using the platform as an end user. My thoughts/opinions are my own.

Needless to say, after a couple of hours of tinkering, I was successful.

Pipeline Goalz

I’m not going to get into the details of a workflow vs a job vs a step, etc. GitHub has extensive documentation to describe the syntax and concepts behind Actions.

What I wanted was what I think is a pretty normal CI/CD pipeline:

  • Push a branch and run some unit tests, ideally with multiple versions of Go
  • On pull requests I also wanted to run some more extensive integration tests to exercise the public facing API and CLI
  • After pushing a tag, I wanted to trigger goreleaser to build and push a Docker image to Docker Hub, as well as a tarball to GitHub releases
  • Update my documentation site after each release in case any docs changed

I had the first two steps mostly working on Travis with this config, albeit with a few differences:

  1. I was only testing using a single version of Go (1.12.x). I knew I could test with multiple versions using their build matrix setup, I just never bothered to set it up.
  2. I was only running tests against a real Postgres DB on pull requests

What I was missing was the CD part of my pipeline to actually create the releases and update docs. This was still a manual process of me running a script on my local machine that depended on having a few environment variables setup for secrets. Not ideal.

Low Hanging Fruit

The first action I created was actually one to automate the publishing of documentation changes. This would later move to be the last step in my pipeline, but it was also the simplest to get working.

It consists mainly of two files, a Dockerfile to install the necessary dependencies, and a script to run the build and deploy steps.

I use mkdocs to build my documentation and publish it to GitHub Pages.
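A minimal sketch of what that Dockerfile might look like (the base image and package choices here are my assumptions, not necessarily what Flipt uses):

```dockerfile
FROM python:3-alpine

# git is needed for `mkdocs gh-deploy` to push to the gh-pages branch;
# mkdocs-material is an assumed theme choice
RUN apk add --no-cache git \
    && pip install mkdocs mkdocs-material

# the script that runs the build and deploy steps
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```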

I (eventually) hooked it up to run as the last step in my release workflow:
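Something like the following, where the local action path and step name are assumptions:

```yaml
      - name: Publish docs
        # a local action is referenced by its path within the repository
        uses: ./.github/actions/publish-docs
        env:
          # the automatically provided token, passed through so the
          # action can push to GitHub Pages
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```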

This informs Actions that I want to use a local action that exists in my repository, and to set the GITHUB_TOKEN environment variable that is required to allow it to push to GitHub Pages.

Those Pesky Tests

The next thing I did was try to get the unit test portion of my pipeline working. Since Flipt is a server application, I currently only target Linux environments, so I don’t test on Windows or macOS. Although it’s cool to know that with Actions I could 😉.

I did, however, want to be able to test with multiple versions of Go (1.12 and 1.13 at the time of this writing). Actions makes this super easy with their matrix strategy feature.

For my workflow, it ended up looking like this:
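A sketch of the relevant part of the workflow (job and key names are my assumptions; the matrix shape follows the Actions syntax):

```yaml
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # one job is spawned per value in this list
        go: ['1.12', '1.13']
```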

This sets up two jobs to run in parallel, each running all of the steps below it: one with ${{ matrix.go }} set to 1.12 and another set to 1.13.

Later on in the workflow file, I create a step that uses these values to install that version of Go, making it available on the virtual machine:
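Roughly like so, using the matrix value from above:

```yaml
    steps:
      - name: Setup Go
        uses: actions/setup-go@v1
        with:
          # resolves to 1.12 or 1.13 depending on the matrix job
          go-version: ${{ matrix.go }}
```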

This uses the actions/setup-go action to install the version of Go that we specify. Dope.

I actually started seeing the benefit of running tests using multiple Go versions almost immediately, as it turns out that Go 1.13 added some new functionality that broke some of my tests.

From the Release Notes:

Testing flags are now registered in the new Init function, which is invoked by the generated main function for the test. As a result, testing flags are now only registered when running a test binary, and packages that call flag.Parse during package initialization may cause tests to fail.

tl;dr I had previously been using an init function in one of my tests to turn on some debug logging if a flag was set. Turns out this doesn’t jibe well with Go 1.13.1 per an issue in the Go project.

I don’t think I would have found this until I actually tried to update Flipt to build with 1.13, so it’s cool that I was able to find this early through proper testing.

The Elephant in the Room

I mentioned previously that I also wanted to run my unit tests while using a real Postgres database. This is because Flipt supports both SQLite and Postgres and I want to exercise both code paths equally.

Luckily the VMs that run the Ubuntu builds on Actions seem to have the required libraries for SQLite installed, however they do not seem to have Postgres installed, unlike Travis. You can see a list of all the installed software/libraries for each VM in the documentation.

This meant that I had to find a way to get a Postgres server running in my build so that I could test against it.

I initially tried to do this using Docker by creating a step that runs Postgres in a container using docker run. However, I soon found that Actions has a built-in solution for this kind of thing: services!

It turns out that the services directive was exactly what I needed:
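A sketch of such a services block (the image tag, port mapping, and database name are assumptions):

```yaml
    services:
      postgres:
        image: postgres:11
        ports:
          # map the container's Postgres port to the host VM
          - 5432:5432
        env:
          # env var understood by the official postgres image
          POSTGRES_DB: flipt_test
```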

This does the same thing that I was trying to do with Docker by running Postgres in a container, but it’s managed by Actions.

Bats and REST

Further down the test pipeline live the integration tests. Here I want to be able to verify that Flipt does what it claims to do from a ‘public API’ perspective. I consider Flipt’s REST API as well as its CLI as public facing, and therefore they should be thoroughly tested and guarded against regressions.

Luckily, testing the CLI has been made pretty easy with tools like bats. I had some existing bats tests that I was running as part of my Travis builds, so I just needed to find a way to get them to run on Actions.

Again, it seems that Actions does not have bats installed on the native VMs, but the fine folks at GitHub seem to have already thought of this and built a bats action that you can just reference in your workflows. Which is exactly what I did:
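Something along these lines, where the exact action reference and the test path are assumptions on my part:

```yaml
      - name: Test CLI
        # the bats action published by GitHub; pin to a ref you trust
        uses: actions/bin/bats@master
        with:
          # path to the bats test files in the repository (assumed)
          args: ./test/cli/*.bats
```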

I have a step above this one that builds the binary in the Linux VM, which is then invoked by this bats action to test the CLI input/output.

The last piece of the integration testing puzzle was testing the REST API. I had previously found a cool bash library called shakedown that makes doing HTTP testing a breeze.

I initially tried to run these tests on the native VM since it seemed it already had the required dependencies installed, however I had some issues getting the tests to run/complete properly, so I decided to move to a ‘clean environment’ and just run the tests in a container.

After much fiddling around with different base Docker images and installing the necessary dependencies, I finally got the shakedown tests working by building my own action with just the right tools installed.
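The Dockerfile for that action could look something like this (the base image is an assumption; bash and curl are the key dependencies since shakedown is a bash library that drives HTTP requests):

```dockerfile
FROM alpine:latest

# shakedown needs bash to run and curl to make HTTP requests
RUN apk add --no-cache bash curl

# the script that starts Flipt and runs the shakedown tests
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```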

Sweet Release

Finally, the last part of the pipeline was the release itself, which needed to:

  • Create a tarball for *nix
  • Create a Docker image
  • Push the tarball to GitHub and create a new GitHub release
  • Tag and push the created Docker image to Docker Hub

Luckily, goreleaser already does 100% of that! All I needed to do was provide it with the required environment variables and call it with the right arguments as the last step in my pipeline.

I had previously been using a script that I would run locally to do this, which meant I had to set GITHUB_TOKEN, DOCKER_USERNAME and DOCKER_PASSWORD on my local machine before invoking the script.

In order to move this process to GitHub Actions, I needed a secure way of storing these values and injecting them into the workflow. Luckily, GitHub again had me covered with their secrets support:
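A sketch of that release step (the script path is an assumption; the secret names match the environment variables mentioned above):

```yaml
      - name: Release
        # wraps the goreleaser invocation; exact script path is assumed
        run: ./script/release
        env:
          # GITHUB_TOKEN is provided automatically by Actions;
          # the Docker Hub credentials are repository secrets
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
```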

This snippet shows how you can reference secrets and set them as environment variables for your scripts to use at runtime. This allows me to run goreleaser via Actions, without having to worry about my secrets being exposed in the logs or repo itself.

Wrap Up

Here’s some ProTips™ that may help you move your pipelines over if you decide to:

  1. Start with the low hanging fruit. Don’t try to replace your entire CI/CD solution in one sitting. Look to see if there is something non-mission-critical that you can move over first.
  2. Keep your existing CI system running. This should go without saying, but don’t go deleting your .travis.yml file until you are confident that everything works as it should with your new setup in Actions.
  3. Look for existing solutions first. There’s a ton of cool stuff in the community that already exists for Actions, as well as the github/actions project. Look there first before trying to create your own action for a specific task. You may find that it already exists.
  4. Read the Docs. Seriously. There’s a wealth of information in there that will probably help you figure out how to do what you are trying to do. This could save you many hours of head scratching.

As you can probably surmise, getting the perfect CI/CD pipeline setup with Actions took some work, which mostly entailed me actually reading the documentation. Anytime I ran into a snag, it ended up being me just not understanding how the system worked. I appreciate the extensibility and power that GitHub Actions provides, as you can literally do just about anything with it. This comes with overhead of learning a slightly different syntax and set of norms, but I think the benefits greatly outweigh the drawbacks.

All of my workflow files that I referenced are available here.