March 3, 2021 · CI/CD · GitLab · Docker · Wrangler · Cloudflare
As I went over yesterday, I am wasting valuable resources every time I publish changes to any of the static sites I maintain by downloading the Cloudflare wrangler tool on every run. Today I am going to walk through building a Docker image and publishing it to DockerHub. I have a few reasons for handling it this way, so if you're following along yourself, make sure this configuration suits your needs; you could accomplish this task in a myriad of other ways.
The biggest reason I am making my images publicly available on DockerHub, when I really only intend them to be used internally, is that I am trying to expose myself to as many pieces of the infrastructure at play as possible. If you don't want or need to put your own images on DockerHub, just keep them locally on your Docker machine. Publishing does have benefits, though: it gives me an easy mechanism for updating my images and removes any reliance on a single device in my environment.
The remaining reasons can all generally be lumped under that umbrella of learning as well. That's 90% of the reason services end up in my environment; the other 10% can be attributed to the ooh shiny effect.
With that said, let’s get into the meat and potatoes of building a docker image before this post resembles a recipe you found on Pinterest.
- DockerHub Credentials (optional)
We don't need much for this project, but that's mostly because I already have all my infrastructure in place; I'll spend more time on those pieces in later posts. The only major thing to note here is that you should run Docker on the same host OS that your Runner operates on. I default to running Docker on Linux.
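For reference, here's a minimal sketch of what the relevant part of a Runner's `config.toml` might look like when it's set up with the Docker executor. The name and image tag here are just illustrative values, not my actual configuration:

```toml
# Example GitLab Runner registration using the Docker executor
# (name and default image are illustrative placeholders)
[[runners]]
  name = "docker-runner"
  executor = "docker"
  [runners.docker]
    image = "alpine:3.12.4"
```

The `image` key is only the default; each CI job can override it with its own `image:` entry, which is exactly what my pipeline does.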
There are a few different ways to go about building a Docker image. My preferred method is to start a container from a base image and build it up interactively, mostly so I get immediate feedback on any errors. Once that's done, I convert the steps into layers in a Dockerfile so that I can build the image and publish it to DockerHub.
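As a sketch, that interactive workflow looks something like this (the image tag here is just an example):

```shell
# Start a disposable container from the candidate base image;
# --rm cleans it up on exit, -it attaches an interactive shell
docker run --rm -it alpine:3.12.4 /bin/sh

# Inside that shell, run each setup command by hand and note which ones work;
# each working command later becomes a RUN layer in the Dockerfile
```

This costs a little up-front time but saves a lot of failed `docker build` iterations later.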
Developing my Docker Container
This project ended up teaching me a lot when I expected to already know the shape of the tasks at hand. As I went through the process, building my container off of bitnami/minideb to keep the size trim, I realized that I had only chosen that base image because of my earlier work getting PowerShell and Pester running in a container. I really should remember to start with alpine when I'm creating a new container that needs to adapt well to scale.
Thankfully, the research to get wrangler running on alpine was pretty light. Cloudflare provides a tarball containing the wrangler binary that you can grab with wget and extract with tar, both of which Alpine already ships with. From there you just have to copy the wrangler binary into /usr/local/bin and it becomes executable. Tack on a layer to remove the downloaded files that are no longer needed, and we're ready to go.
Of course, in my haste I assumed it was going to be that simple, gleefully pushed the image to DockerHub, and began testing CI/CD with it. It turns out that wrangler still needs to run on top of Node if you install it this way. Luckily, adding Node to alpine turned out to be as easy as installing it with apk add. With that last hurdle out of the way I was able to push the image, remove all local copies, and begin testing in earnest.
```dockerfile
FROM alpine:3.12.4
RUN apk update
RUN apk add --update nodejs npm
RUN wget https://github.com/cloudflare/wrangler/releases/download/v1.13.0/wrangler-v1.13.0-x86_64-unknown-linux-musl.tar.gz
RUN tar -zxf wrangler-v1.13.0-x86_64-unknown-linux-musl.tar.gz
RUN cp /dist/wrangler /usr/local/bin/
RUN rm -fr /dist
CMD ["/bin/sh","-c","wrangler publish"]
```
It's a pretty short and sweet Dockerfile, really. The first three lines take care of getting the Node and npm dependencies installed on top of the alpine base. Then it's time to get wrangler taken care of: the tarball gets downloaded with wget and extracted with tar. From doing this by hand first, I know the tarball extracts into a dist folder it creates, so I can easily copy the wrangler binary into the PATH. Then I clean up after myself and set the CMD that will run when the container starts.
I'm still a little fuzzy on the CMD piece of this puzzle, but I just haven't given it the attention it needs yet. For now it's enough that I can hamfist my way into a working configuration. I do have an exam Friday morning, after all.
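For anyone in the same boat: CMD sets the default command that runs when a container starts from the image, not during the build. It comes in two forms, and the array used in the Dockerfile above is the exec form explicitly wrapping the command in a shell, which behaves the same as the shell form:

```dockerfile
# Shell form: Docker wraps the command in /bin/sh -c for you
CMD wrangler publish

# Exec form: runs the binary directly, with no shell in between
CMD ["wrangler", "publish"]
```

A CI runner can also override CMD entirely, which is why the GitLab job below can invoke `wrangler publish` from its own script section.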
Building the Image and Publishing to DockerHub
Currently I perform this step manually in VS Code, but I am working to create a CI/CD pipeline to handle this piece, since I will eventually need to update existing images or create new ones.
After this is done, I make sure I'm signed in with a quick docker login and follow it up with docker push durish/wrangler_lite:latest, all from the VS Code integrated terminal, because I like not having to switch windows when possible.
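Laid out end to end, the manual sequence looks roughly like this (the tag mirrors the one used above; your repository name will differ):

```shell
# Build the image from the Dockerfile in the current directory and tag it
docker build -t durish/wrangler_lite:latest .

# Authenticate against DockerHub, then push the tagged image
docker login
docker push durish/wrangler_lite:latest
```

These three commands are also the obvious candidates for the build-and-publish pipeline I mentioned wanting to create.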
Now that the image is publicly available, it's time to update my CI/CD configuration. As a refresher, here's what my .gitlab-ci.yml for this job looked like this morning, before I began this project.
```yaml
Deploy_to_Cloudflare:
  image: timbru31/ruby-node:2.3
  stage: Deploy_to_Cloudflare
  script:
    - wget https://github.com/cloudflare/wrangler/releases/download/v1.8.4/wrangler-v1.8.4-x86_64-unknown-linux-musl.tar.gz
    - tar xvzf wrangler-v1.8.4-x86_64-unknown-linux-musl.tar.gz
    - ./dist/wrangler publish
  artifacts:
    paths:
      - public
    expire_in: 1 day
  only:
    - master
```
If you read yesterday's post, you'll notice I have added the expire_in key under artifacts. This was another area I identified for immediate improvement, since I don't really need to keep those artifacts around very long after they are published.
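GitLab accepts human-readable durations for expire_in, so this is easy to tune if your retention needs change. A minimal fragment looks like:

```yaml
artifacts:
  paths:
    - public
  # Accepts durations like "30 minutes", "1 day", "2 weeks"
  expire_in: 1 day
```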
And now here is my new job configuration.
```yaml
Deploy_to_Cloudflare:
  image: durish/wrangler_lite
  stage: Deploy_to_Cloudflare
  script:
    - wrangler publish
  artifacts:
    paths:
      - public
    expire_in: 1 day
  only:
    - master
```
It sure doesn't look like much when I lay it out like that, but the results speak for themselves about the improvement these four changed lines represent.
This was a great project for me to undertake because it reminded me to keep an open mind when tackling problems. And if my gut says I could be doing something more efficiently, I should at least investigate alternatives.
If we look at my total pipeline time for posting the changes I made to test the new image, the whole pipeline now runs in around 50 seconds, with the deploy_to_cloudflare job taking 43 seconds in my last test. That's a time savings of around 35% per job run.
I know I jumped the gun on my timeline from yesterday by including the CI/CD updates and results in this piece. I had so much fun diving into the docker image build that I had to get it into my pipeline right away. But I came up with plenty of new ideas for new things to try with my pipelines, so I am going to keep working on my ideas in this vein for 2 weeks in a “blogging sprint”.
Tomorrow I am going to start development of a tool for backing up the /public folder from the build job, along with the repo's source code, to AWS S3 and S3 Glacier as a proof of concept for a simple Disaster Recovery CI/CD component.