Cross-platform repeatable builds using Docker and Make
Background
During my stint at my last company I found myself in the unusual situation of having to migrate pipelines across 3 different CI systems. I had originally deployed Drone Enterprise while it was still in its infancy and, at the time, it fulfilled our needs for build and deploy automation.
We eventually outgrew Drone and after some evaluation I settled on CircleCI Enterprise as a replacement. While migrating some of our pipelines from Drone to CircleCI, though, we noticed that Github Actions was now included in our existing license for Github Enterprise, our primary source repo.
What happens if we decide to migrate to yet another CI/CD system? I’d have to migrate from one system’s plug-in structure to another. Could I simplify what builds and deploys look like from both a local and an automated perspective?
Github Actions was the platform we eventually stuck with (for the time being).
Analysis
Take this simple golang example in a generic pipeline syntax:

```yaml
some_vendor_plugin:
  vendor_specific_option: foo
  another_specific_option: bar
build:
  - GOARCH=amd64
  - GO111MODULE=on
  - go mod vendor
  - go build -o main
```
Disadvantages
Plugins
- Vendor lock-in: The main reason I started down this path is vendor lock-in. Whether it’s Drone, CircleCI, or Github Actions, folks will normally use said vendor’s plug-in structure. Note that this isn’t necessarily a bad thing and is normally the quickest way to get your pipeline up and running.
- Difficult to repeat in other environments: Using a vendor-specific plug-in in your pipeline makes it difficult to repeat pipeline steps in other environments; locally, for example.
- No fine-grained control: When using a vendor plug-in you normally don’t know what’s going on under the hood, unless you delve into the vendor’s code.
Build steps
The pipeline should list the steps to build, test, and deploy an application. It should not describe how to build, test, and deploy the application.
- Developers and DevOps relying on the pipeline for building applications: There have been numerous occasions where I’ve seen teams using the pipeline as documentation on how to build. Heck, I’ve done it myself!
- YAML cruft: The pipeline should be treated like any other piece of code. Just like messy code, a messy pipeline can be hard to troubleshoot and maintain.
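To make the what-versus-how distinction concrete, here’s a sketch of the same golang build from above expressed both ways in a generic pipeline syntax (the step names are illustrative):

```yaml
# Describes HOW to build: every detail lives in the pipeline config.
build_described:
  run: |
    GO111MODULE=on go mod vendor
    GOARCH=amd64 go build -o main

# Lists WHAT to do: the details live in the repo, next to the code.
build_listed:
  run: |
    make build
```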
Alternative approach using Docker and GNU Make
Why Docker?
By using Docker we can avoid having to install the needed tooling across multiple local and remote environments. This includes simple items like jq and black, which not all developers may have installed.
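As a minimal sketch of the idea (the image names and tags here are illustrative, not specific recommendations), running those tools from containers instead of local installs looks like this:

```sh
# Format Python code with black via a throwaway container;
# no local install of black required.
docker run --rm -v "$(pwd)":/src -w /src python:3.12-slim \
  sh -c "pip install --quiet black && black ."

# Pretty-print JSON with jq straight from a container image.
docker run --rm -i ghcr.io/jqlang/jq:1.7 '.' < data.json
```

The only thing any machine, local or CI, needs to have installed is docker itself.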
Why Make?
I originally didn’t want to go with make, as I’m sure we’ve all encountered a nightmare of a Makefile. After looking at various modern solutions I decided on make for the following reasons:
- It’s already there. We are trying to avoid installing extra tooling.
- With compiled languages like Rust and Go becoming more and more prevalent, it’s “making” a comeback (pun intended).
- Writing a clean and concise Makefile doesn’t require a degree in rocket science, as I previously thought.
To be honest, you can use anything in place of make in this method: rake, maven, or even shell scripts.
The important thing is to describe the build process with a common tool available in all environments, not with a vendor plug-in.
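For example, a shell-script stand-in (a hypothetical build.sh; the image and its commands are illustrative) could expose the same contract as a Makefile:

```sh
#!/bin/sh
# build.sh -- hypothetical shell-script alternative to a Makefile.
# Usage: ./build.sh init build
set -e
for target in "$@"; do
  case "$target" in
    # Each target wraps a docker run, just as a Makefile target would.
    init)  docker run --rm -v "$(pwd)":/src -w /src some/toolchain:1.0 setup ;;
    build) docker run --rm -v "$(pwd)":/src -w /src some/toolchain:1.0 build ;;
    *)     echo "unknown target: $target" >&2; exit 1 ;;
  esac
done
```

The tool matters less than the contract: make init build and ./build.sh init build list the same steps.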
Implementing
Provided that your platform allows straight docker commands, the Makefile examples should be applicable. Let’s use my blog as an example. Here’s a condensed snippet from the Makefile:
```makefile
DOCKER=/usr/bin/docker
SOURCE_PATH := $(shell pwd)
WORKING_PATH=/srv/jekyll
DOCKER_RUN=$(DOCKER) run -v $(SOURCE_PATH):$(WORKING_PATH) -w $(WORKING_PATH)
JEKYLL_CONTAINER=jekyll/jekyll:4.2.0

.PHONY: init
init:
	$(DOCKER_RUN) -e JEKYLL_ROOTLESS=1 $(JEKYLL_CONTAINER) bundle

.PHONY: build
build:
	$(DOCKER_RUN) -e JEKYLL_ROOTLESS=1 $(JEKYLL_CONTAINER) jekyll build
```
The above Makefile is clean and should be easy to read for most folks. If we run into an issue with a build, the description of how to build exists in our source repository, which allows for quicker troubleshooting. Compare this to digging through a vendor’s plug-in source repo.
We can now execute the same steps to build both locally and remotely. So for local:
```sh
make init build
```
Then to build in a pipeline you’d execute the same thing:
```yaml
run: |
  make init build
```
Note this lists the steps and doesn’t describe what the process is. If we weren’t using make it would look like this:
```yaml
run: |
  docker run -v $PWD:/srv/jekyll -w /srv/jekyll -e JEKYLL_ROOTLESS=1 jekyll/jekyll:4.2.0 bundle
  docker run -v $PWD:/srv/jekyll -w /srv/jekyll -e JEKYLL_ROOTLESS=1 jekyll/jekyll:4.2.0 jekyll build
```
Full Github Actions example:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: build
        run: |
          make init build
```
I’d like to note that a Github action already exists to build Jekyll using a single line, but it falls into the disadvantages discussed in the Analysis above: we would be using two separate methods to build locally and remotely, without complete control over the tooling.
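For contrast, that vendor-plugin route would look something like this in the workflow (the action name here is purely illustrative):

```yaml
steps:
  - name: Build Jekyll site
    uses: some-org/jekyll-build-action@v1  # hypothetical action; opaque, and not repeatable locally
```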
Final thoughts
This has been a condensed overview of the method I’ve been using both professionally and personally. In the coming weeks I’ll be giving an overview of the more detailed workflows I’m currently using in my projects. This will include building, testing, and deploying. In the meantime if you’d like to look at them directly:
- ecr-template: Github template for creating Docker images and pushing them to ECR. My most basic implementation of the above concepts.
- build-python: Python image built using ecr-template that I use in various projects.
- ses-send: Simple Python wrapper for AWS SES built using the above build-python image. Includes formatting using black, testing using pytest, and deploying to PyPI via twine.
- blog: This blog, which is built with Jekyll. Contains all the steps to get into production, including testing with robot, deploying to s3, and then invalidating the CloudFront deployment.