Recently, I’ve been working on a cool project called Radius Tracker, an in-house proof-of-concept tool we built for tracking design system adoption. Our CI/CD strategy for this project includes quality checks, packaging, and distribution to NPM.
In this post, we will look at how to set this up.
We are using Github as our git repository hosting platform and Github Actions for CI/CD processes. Github Actions is a toolkit that enables developers to automate CI/CD workflows. Workflows are defined in a YAML configuration file (we won't cover the basics and details here, so see the docs for more information). A workflow runs one or more jobs each time it is triggered, and each job contains a set of steps that execute on the same runner.
There are a couple of reasons why we chose Github Actions. Firstly, you have your code and automated workflows in a single place, rather than a separate provider or solution for automation; this reduces the stack's complexity and improves maintainability in the long run, since the toolkit integrates seamlessly with Github. Secondly, Github Actions already has a big community behind it, with a marketplace for custom actions that lets developers automate many common tasks. By automating the entire process, it makes deploying and releasing as simple as pushing code to Github.
Continuous Testing
As part of our agreed guidelines, the team chose to prevent direct changes to the main branch by requiring pull requests.
Pushing directly to main will now result in:
remote: error: GH006: Protected branch update failed for refs/heads/main.
remote: error: Changes must be made through a pull request.
By making PRs required and using Github Actions, we can create an automation that runs tests whenever a new pull request is created or an existing one is updated.
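A minimal trigger for such a workflow might look like the following sketch (restricting it to PRs targeting main is an assumption):
on:
  pull_request:
    branches: [main]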
steps:
  - name: Use Node.js 14.17.1 🚧
    uses: actions/setup-node@v2
    with:
      node-version: 14.17.1
      cache: 'yarn'
  - name: Run Tests and Build 🦺
    run: yarn build
The testing workflow is executed on each pull request event and runs all the project's tests on different Node versions. Why? The Radius Tracker package runs in a Node environment, and teams use different Node versions in their dev environments, so we want to guarantee compatibility and therefore automate the tests for each supported version. We run the tests on both the lowest supported Node version and the LTS versions.
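To cover several versions with one job definition, the setup step above can be parameterized with a build matrix; the exact versions listed here are only an illustration:
strategy:
  matrix:
    node-version: [14.17.1, 16.x]
steps:
  - name: Use Node.js ${{ matrix.node-version }}
    uses: actions/setup-node@v2
    with:
      node-version: ${{ matrix.node-version }}
      cache: 'yarn'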
Automate publishing to npm
Radius Tracker is shipped as an npm package. We started off by deploying the package manually as needed; however, this became tedious and unsustainable, and the demand for more frequent releases pushed us to adopt Continuous Delivery. As a result, we set up the pipeline to automatically release a new version whenever a PR is merged to the main branch.
We used the semantic-release package to automate the deployment process; it comes in handy when you need to continuously release new versions of your package in compliance with semantic versioning.
steps:
  - name: Build Package 🛠
    run: yarn build
  - name: Release 🚀
    run: npx semantic-release --debug
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
This library comes with some out-of-the-box plugins. It uses Angular Commit Message Conventions to determine the semantic version (major, minor, patch) of the next update. Since our application is still in the proof-of-concept stage, we decided to release a new version of the package as a patch for every change pushed to the main branch, so we configured the commit-analyzer plugin with that rule. We also pass a pkgRoot parameter to the npm plugin to publish from a specific folder.
// release.config.js
module.exports = {
    branches: ["main"],
    plugins: [
        [
            "@semantic-release/commit-analyzer",
            {
                releaseRules: [{ release: "patch" }],
            },
        ],
        "@semantic-release/release-notes-generator",
        [
            "@semantic-release/npm",
            {
                pkgRoot: "build",
            },
        ],
    ],
}
Check if the package is installable
Early in the development, we noticed that the published package was not installable: a postinstall script was using a dev dependency that wasn’t installed when the package was used in production. That’s why we want to test that our package can be installed properly before we merge any changes.
One way to test if the package is installable… is to try installing it! We chose verdaccio, a lightweight, zero-config local npm registry that can be booted in seconds, and used it as a stand-in for the public npm registry without actually publishing anything. This way, you can perform end-to-end tests on the package and gain confidence in its quality without slowing down the pipeline or exposing potentially flawed package versions.
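As a rough sketch of what such a check could look like as a workflow step (the port is verdaccio's default, and the package name and temp folder are placeholders, not the project's exact setup):
- name: Check the package can be installed 📦
  run: |
    # boot a throwaway local registry (verdaccio listens on 4873 by default)
    npx verdaccio &
    sleep 10
    # publish to the local registry instead of the public one;
    # depending on your verdaccio config you may need to set up auth first
    npm publish --registry http://localhost:4873
    # try installing the freshly published package into a temp folder
    npm install radius-tracker --registry http://localhost:4873 --prefix /tmp/install-check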
Demo page deployment
To show users the latest working demo, we built our pipeline to rebuild and publish the demo automatically whenever changes are merged to the main branch. We are using Github Pages for that because it keeps everything within Github, which makes deployment smooth and predictable by removing dependencies on third-party providers. The web page is hosted and published through Github itself.
This is covered by another workflow, triggered on push events to the main branch.
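Its trigger section could look like this (a sketch; limiting it to main reflects our merge flow):
on:
  push:
    branches: [main]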
Deploying to Github Pages is done using github-pages-deploy-action, which commits the built files to the gh-pages branch; this requires write access to the repository. Github Actions exposes a GITHUB_TOKEN to provide this access. The token is generated automatically for each job inside the workflow and expires right after the job is done.
- name: Deploy 🚀
  uses: JamesIves/github-pages-deploy-action@v4.2.5
  with:
    branch: gh-pages
    folder: src/demo/build
    token: ${{ secrets.GITHUB_TOKEN }}
There is a security risk to be aware of here. Even though, as mentioned earlier, the token is destroyed automatically after the job is done, the third-party action has access to it during execution. The token should therefore carry only the permissions it needs, which you can enforce by modifying the permissions key of the workflow.
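For example, specifying permissions at the workflow or job level limits what the GITHUB_TOKEN can do; once the key is set, any scope you don't list defaults to no access. The exact scope below is just an illustration of what a Pages deployment might need:
permissions:
  contents: write  # needed to commit to the gh-pages branch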
Infrastructure as code
In our project, we are using AWS as a provider for the infrastructure.
It includes:
- A couple of Lambda functions to hold logic and make our application more flexible.
- SNS and SQS services to create a message queue and make sure that we handle all user requests.
- S3 buckets to store data.
- API Gateway to handle and orchestrate requests.
You’ve probably heard of IaC, the idea of managing your infrastructure as code. In our case, we use Terraform to achieve that: it provides a declarative way of describing the resources you want. We need to make sure the infrastructure stays up-to-date and that we are using the latest API endpoints, so each time we change the codebase we check whether it needs to be deployed or updated. With Github Actions, we can achieve that as well by executing Terraform inside a job.
We can use the github.workspace variable exposed by Github, which refers to the root folder of the project. Assuming our infrastructure-related code lives in the terraform folder, we can execute Terraform inside the workflow file as:
terraform -chdir=${{ github.workspace }}/terraform apply -auto-approve
Here, -chdir is a flag provided by Terraform to switch the working directory.
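Put together as workflow steps, the Terraform part of the job could look roughly like the sketch below; the hashicorp/setup-terraform action and the AWS credential secrets are assumptions about the surrounding setup:
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
- name: Terraform Init
  # init may also need the same credentials if the state backend lives in AWS
  run: terraform -chdir=${{ github.workspace }}/terraform init
- name: Terraform Apply
  run: terraform -chdir=${{ github.workspace }}/terraform apply -auto-approve
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}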
We can also read Terraform outputs and use them as environment variables in our project. The next line reads the api_invoke_url value from the listener_outputs Terraform output, stores it in the REACT_APP_API_URL variable, and passes it as an environment variable to the build, ensuring the application is always built against the latest API endpoint.
REACT_APP_API_URL=$(terraform -chdir=${{ github.workspace }}/terraform output -json listener_outputs | jq -r '.api_invoke_url') yarn demo-build
Develop Github Actions locally
All that automation is awesome, but developing it can be painful if you can't run your workflows locally.
You can use a tool called act to run all your workflows on your local machine. This package simulates Github Actions runners inside Docker.
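For example (the workflow file name and job name below are hypothetical):
act pull_request                          # simulate a pull_request event
act push -W .github/workflows/deploy.yml  # run a single workflow file
act -j test                               # run only the job named "test"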
It has some pitfalls though:
- Github Actions workflows are executed inside pre-built images that come with a lot of pre-installed software. act provides slightly different, smaller runner images, because the original ones are simply too big, so you might need to add extra steps to install missing tools, or your workflow will fail.
We don’t need to perform those steps on Github, so we can detect whether they are required by checking a special environment variable exposed by act.
- name: Install Yarn 🚧
  if: ${{ env.ACT }}
  run: npm install -g yarn
In the example above, yarn will only be installed if you are using act to run your workflow.
- When workflows run remotely, a unique GITHUB_TOKEN is generated and is only available for the duration of the workflow in the Github Actions environment. You won’t have access to it locally on your machine, so how would you test or run such workflows locally? Generate a Personal Access Token instead and pass it in as the GITHUB_TOKEN from the terminal:
act -s GITHUB_TOKEN=<PERSONAL_ACCESS_TOKEN>
Conclusion
Github Actions is well-integrated with Github repositories, easy to configure, and reliable for automating CI/CD processes.
Using act to test workflows locally helps developers get faster feedback on the correctness of their code, and therefore increases the overall speed of development. The Github Actions marketplace of custom actions lets you find solutions for automating many common tasks instead of wasting time writing them yourself. You can combine jobs within one workflow to run them in parallel or execute them sequentially by defining dependencies, and you can create separate workflows that run only specific jobs based on the events you trigger.
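For instance, sequential execution is declared with the needs keyword; the job names in this sketch are hypothetical:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: yarn build
  release:
    needs: test  # release only runs after test succeeds
    runs-on: ubuntu-latest
    steps:
      - run: npx semantic-release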
If your project uses a Github repository, Github Actions is a useful and effective tool to get started with automating your software workflows.
P.S. This article is cross-posted on Rangle's blog.