Revamping my Hugo/Cloudflare CI/CD Pipeline: Part 1

March 2, 2021    CI/CD GitLab Docker Wrangler Hugo Cloudflare

Pipelines Photo by Belinda Fewings on Unsplash

After working through getting my own Docker image built for handling Pester testing of my PowerShell scripts in my CI/CD pipelines, I realized I should take another look at my other pipelines to see where I can introduce more efficiencies. I was able to reduce my Pester pipeline's run time from around four and a half minutes to under 20 seconds, all by creating my own Docker image for the job instead of building the environment each time from a base Linux image.
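For context, that Pester image amounts to little more than baking the one-time setup into image layers so jobs don't repeat it. A minimal sketch of that kind of image (the base tag and module version here are illustrative, not my exact Dockerfile):

```dockerfile
# Start from Microsoft's official PowerShell image instead of a bare Linux base
FROM mcr.microsoft.com/powershell:latest

# Run subsequent RUN instructions through pwsh
SHELL ["pwsh", "-Command"]

# Bake Pester 5 into the image so each pipeline run can skip the install step
RUN Install-Module -Name Pester -MinimumVersion 5.0 -Force -Scope AllUsers
```

With the modules pre-installed, the CI job shrinks to just invoking the tests.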

Pester Old Runtime The old way 😢

Pester New Runtime The new way 😁

What is slowly becoming my most heavily used workflow is my website management pipeline. Currently it takes about a minute and a half to complete, which is all well and good for my needs and my clients'. But this is my production environment, which means I should be carefully guarding all my resource usage, especially considering the potential need for scalability if I start managing more sites. Let's see what kind of blood we can squeeze out of this stone.

GitLab Pipeline Overview

Site Generation

This site and a few others that I maintain are generated using Hugo, a static site generator. This lets me write my content in Markdown in my editor of choice, and when I run hugo it builds my site into the proper HTML, CSS, and JavaScript in the /public folder under the project folder.

This means that the first step in my CI/CD pipeline is going to be building the site and saving the /public folder as an artifact to be used later. Currently I'm using a purpose-built Hugo container to handle building my site. I will probably eventually create my own, but only to ensure I know what is touching my data, as opposed to seeing any performance enhancement. It already runs in 7 seconds as it is.
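The build stage itself boils down to something like the following sketch. The job name and Hugo image here (klakegg/hugo is one popular purpose-built container) are representative examples, not necessarily the exact ones I use:

```yaml
# Sketch of the build stage: render the site and hand /public downstream.
build_site:
  image: klakegg/hugo:latest   # example purpose-built Hugo container
  stage: build
  script:
    - hugo                     # renders the site into ./public
  artifacts:
    paths:
      - public                 # saved for the deploy stage to consume
```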

GitLab Build Job Output

Deploying the Site

A fun perk of Static Sites like mine is that they can run on Cloudflare Workers, which is great for me because it’s a low cost, low maintenance method of hosting my site. There are tons of other perks that come from using Cloudflare and if you want to know more, I suggest you talk to my buddy Jon at Phasmasec. He’s the reason I’m a convert.
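For anyone unfamiliar with the Workers setup: pointing Wrangler at a static site is mostly a matter of a short wrangler.toml. A representative Wrangler 1.x example, with placeholder names and IDs:

```toml
# Representative wrangler.toml for serving a static Hugo site from Workers.
# name, account_id, zone_id, and route below are placeholders.
name = "my-site"
type = "webpack"
account_id = "<cloudflare-account-id>"
zone_id = "<cloudflare-zone-id>"
route = "example.com/*"

[site]
bucket = "./public"   # the folder Hugo emits at build time
```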

GitLab Deploy Job Output

We already know this is the job to examine for any performance improvements to be had. It took the other 00:01:23 that we have after building the site. Here's the stage in my .gitlab-ci.yml file.

  image: timbru31/ruby-node:2.3
  stage: Deploy_to_Cloudflare
  script:
    - wget https://github.com/cloudflare/wrangler/releases/download/v1.8.4/wrangler-v1.8.4-x86_64-unknown-linux-musl.tar.gz
    - tar xvzf wrangler-v1.8.4-x86_64-unknown-linux-musl.tar.gz
    - ./dist/wrangler publish
  artifacts:
    paths:
      - public
  only:
    - master

I love how readable YAML is. One item that will be resolved during this project is that I am entrusting my Cloudflare API credentials to a Docker image built by someone else. This is probably fine, but I would rather have it under my control.

After that, I am downloading Wrangler from GitHub every time this job runs, which is a huge waste of resources, especially as my use of this job increases! This is something I can improve as well. Truth be told, this is the same area where I was able to reduce my run times for Pester: every job was installing PowerShell and then making sure Pester 5 was on it.

Once Wrangler is downloaded, I extract the tarball and call the wrangler publish command. I'm not sure exactly how much time this adds to the job, but I can't imagine it's a ton. Still, maybe we can do something about it, like eliminating the need for it entirely.

The Plan

It looks like I am going to need to build my own Docker image to handle my site deployments. Check back tomorrow and I'll go through the whole process of building my custom image and publishing it to Docker Hub. Then on Thursday I'll go over changing my .gitlab-ci.yml file to handle the new workflow, and we can see what sort of improvement I manage to get.
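To give a flavor of where this is headed: the custom image mostly needs Wrangler pre-installed so the job can skip the download and extraction entirely. A rough sketch, where the base image and the npm install method are assumptions until the next post (@cloudflare/wrangler is the npm package for Wrangler 1.x):

```dockerfile
# Rough sketch of a deploy image with Wrangler baked in (details TBD in Part 2).
FROM node:14-slim

# Installing via npm avoids the wget/tar dance in every job; pinning the
# version keeps deploys reproducible.
RUN npm install -g @cloudflare/wrangler@1.8.4
```

The deploy job's script would then shrink to a single `wrangler publish`.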
