
Some Blog TLC

It's been a while, and at least part of the reason it's been a while is that itching feeling of a re-ramp-up I wasn't quite ready to face. Therefore, the publishing of this post is in itself... a victory.

A bit of a recap on the tech behind this blog: I produce the content ready to serve by sending markdown files through a Python-based static site generator called Pelican. I also have a custom theme for the blog, which Pelican can ingest from a local folder - and since I try to keep its development separate, it's currently a git submodule.
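For anyone unfamiliar with the setup, wiring a theme in as a submodule looks roughly like this - a sketch where the repository URL and folder names are placeholders, not my actual theme:

```shell
# Register the theme repository as a submodule (URL and path are placeholders)
git submodule add https://bitbucket.org/user/blog-theme.git themes/blog-theme

# After a fresh clone of the blog repo, fetch the submodule contents too
git submodule update --init --recursive
```

The nice part is that the blog repo pins an exact theme commit, so theme development can carry on independently without breaking the site.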

Shortly before my most recent post I was trying to transition most web development off an aging MacBook onto a Windows desktop. Python and its virtual environments made interoperability a non-issue for the blog generation; however, I'd decided not to store the markdown within the repository, and the MacBook was still my day-to-day machine, which meant I was often still writing and generating there. Fast forward a year: the MacBook is collecting dust, but probably contains the newest commits on the repository - and all the blog content.

So in my new environment I pull down the main branch of the repository from Bitbucket, and can immediately tell the source code is out of date compared to what is live on Netlify. So I load up the sluggish MacBook and find some commits on the local main branch not yet synced upstream. After some digging into the Git docs, I manage to move the new commits off the local main branch and onto a named feature branch, which I can safely push upstream.
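The rescue ended up being only a few commands - a sketch along these lines, where the branch name is illustrative and origin/main is assumed to be the last-pushed state:

```shell
git switch main
git branch rescue/latest-posts     # name a branch pointing at the unpushed commits
git reset --hard origin/main       # wind local main back to what's upstream
git push -u origin rescue/latest-posts
```

Because `git branch` is run before the reset, the unpushed commits stay reachable the whole time - nothing is ever dangling.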

Faced with a repository in my new environment that's now up to date with what's deployed, the next challenge is working out exactly what I should be running to regenerate the site. In lieu of a README - which can fall out of even the most critical repositories in industry - I need to do some detective work. The first tool I grab for is reverse search (Ctrl-R) on the shell; firing off 'pelican' turns up some clues from the past. I match what I find against a read-through of the Pelican documentation, make some settings more explicit (like the output directory), and we're back up and running.
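Those explicit settings live in Pelican's pelicanconf.py. As a sketch - the setting names are standard Pelican ones, but the values here are placeholders rather than my actual config:

```python
# pelicanconf.py (sketch) - standard Pelican setting names; values are placeholders
AUTHOR = "Author Name"
SITENAME = "Some Blog"
PATH = "content"             # where the markdown source lives
OUTPUT_PATH = "output/"      # the output directory, now stated explicitly
THEME = "themes/blog-theme"  # the theme submodule checkout
```

With that in place, regenerating is just `pelican` from the repository root, or `pelican content -o output -s pelicanconf.py` if you want everything spelled out on the command line.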

Next step: write that README - then, even better, let's get this building in the cloud. Having plenty of industry experience with the continuous integration workflows offered by GitHub and GitLab, I welcome the opportunity to design a workflow for Bitbucket, which I already have some experience with from my Foodtrucks project.

I started with the Pelican generation stage. Choosing a lightweight Python 3.8 image to base the workflow on made easy work of it - other than getting the submodule download set up for the custom theme I pull in. I save the output of the generation as an artifact, which I download to verify locally.
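As a sketch, the generation step of bitbucket-pipelines.yml looks something like this - the image tag, package list, and paths are illustrative rather than my exact file:

```yaml
image: python:3.8-slim

pipelines:
  default:
    - step:
        name: Generate site
        script:
          # Bitbucket doesn't fetch submodules by default, so pull the theme in
          - git submodule update --init --recursive
          - pip install pelican markdown
          - pelican content -o output -s pelicanconf.py
        artifacts:
          - output/**
```

The `artifacts` glob is what lets me download the generated site from the pipeline run and eyeball it locally before any deployment is wired up.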

For deploying the generated site to Netlify I needed the Netlify CLI available. At first I tried a trial-and-error approach, committing modifications to the workflow configuration directly, but I soon realised I was only at the foot of the deployment mountain - and eating into my pipeline minute quota. So perhaps a change of tack? In the case of the Pelican generation, the workflow was expected to be straightforward - it's a Python environment, one I've written workflows for countless times - and once I got started it proved, if not a complete walk in the park, a tangible solution space. I knew the Netlify workflow was much more of an unknown - and while it made sense to start with an optimistic, fail-fast approach, the solution space was just too wide to keep going.

Bitbucket Pipelines uses Docker images to run the scripts provided, so I chose to take the workflow debugging offline. This allowed me to play around with some different images, in the end deciding to go with a user-provided Netlify-specific image which made light work of the process. I tried out all the commands locally first, so I was confident they would work in the workflow - and if they didn't, I could compare any errors against my local environment.
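The deploy step then just hands the generated artifact to the Netlify CLI. Roughly, as a sketch - the image name stands in for the community CLI image, and the auth token and site ID are assumed to be set as secured repository variables:

```yaml
- step:
    name: Deploy to Netlify
    image: williamjackson/netlify-cli  # placeholder for a community Netlify CLI image
    script:
      - netlify deploy --dir=output --prod --auth "$NETLIFY_AUTH_TOKEN" --site "$NETLIFY_SITE_ID"
```

Passing `--auth` and `--site` explicitly keeps the step non-interactive, which is exactly what you want in CI - no login prompt to hang the pipeline.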

Perhaps driven by one eye on my limited minute quota, I chose not to debug wastefully. When you can't quite get a command right, slow cycles of committing, pushing, configuring, and tearing down just aren't economical - or even environmentally conscious. Workflows should also reflect just that - the workflow you already do - so any time spent getting these lined up is time well spent.

So after getting my two-step workflow of generating and deploying firing, I now have a much more maintainable blog - which I hope means more content in future. This may have just been for a bit-rotting personal blog, but it's been so nice to spend some time digging into the workflows and thinking about process improvement - it really is a part of tech that I enjoy.
