Continuous Integration with Elixir and NixOS

August 25, 2022, by Charles Suggs

Combining Elixir with the declarative configuration of NixOS, we’ve developed a new way to deploy Elixir releases.

Nowadays, there are tons of options for deploying Elixir applications. After the emergence of mix release in Elixir v1.9, the community finally converged around a preferred method for building OTP releases. With that problem solved, Elixir developers were able to shift their bikeshedding concerns to the task of actually deploying releases. A number of deployment platforms have since gained support for mix releases, including Heroku, Gigalixir, and Fly.io.

Suddenly, more resources began to crop up, building momentum for devops best practices through guides, blog posts, Elixir Forum posts, and more. Saša Jurić saw the opportunity for Elixir to take on even more responsibilities in the stack. He announced a proof of concept library, sasa1977/ci, that provides a set of tools for developers to achieve the same type of composable actions they’d come to know from CI services like BitBucket Pipelines, CircleCI, and GitHub Actions.

In this post, we’re going to share yet another way to deploy Elixir releases with NixOS. At FullSteam Labs, we choose well-made tools that we can easily configure and customize to suit our needs. We optimize for lightweight, maintainable systems that we can tailor to deliver particular outcomes. Traditional CI systems can be overkill for smaller projects and teams, so building our own with this combination of tools seemed appropriate.

NixOS presents an attractive value proposition: reproducible, declarative, reliable Linux system configuration. We decided to run NixOS on our own cloud instances to deploy with confidence and accelerate our delivery times. Since we control the CI environment, both the system design and its configuration are at our fingertips. Not to mention the added benefit of transparency in security practices, more on that later.

For our proof of concept, we wanted to create a system that could build a release and run it as a systemd service on the same NixOS cloud instance: NixOS provides the base system, systemd supervises the running release, and a small Elixir CI service listens for repository webhooks.

On the CI server, we followed the official Vultr NixOS install guide, cloned our CI repo, and rebuilt using the configuration.nix file that came with it. Here’s a small portion of the configuration showing how we import and configure the system services that power the CI:

{ config, pkgs, ... }:

{
  imports = [
    ./elixir-release.nix
    ./fsl-ci.nix
    ./hardware-configuration.nix
  ];

  # …
  services = {
    elixirRelease = {
      enable = true;
      appName = "example_app";
      serviceUser = "phoenix";
    };
    fslCI = {
      enable = true;
      appName = "ci";
      serviceUser = "phoenix";
    };
  };
}

Why systemd? People seem to have a love/hate relationship with systemd. It ships with NixOS, and we find it useful. Much like Erlang and OTP, systemd’s design emphasizes process supervision as a path to stability. It operates at the OS level, providing an interface between system processes and the Linux kernel. OTP, on the other hand, deals with much lighter-weight BEAM processes orchestrated within the release’s service process.
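The elixirRelease options in the configuration above come from our own NixOS module. We aren’t showing the real elixir-release.nix here, but as a hedged sketch (the option descriptions, paths, and unit settings are assumptions), such a module could wire those options into a systemd unit like this:

```nix
{ config, lib, pkgs, ... }:

let
  cfg = config.services.elixirRelease;
in
{
  options.services.elixirRelease = {
    enable = lib.mkEnableOption "the Elixir release service";

    appName = lib.mkOption {
      type = lib.types.str;
      description = "Name of the OTP release to run.";
    };

    serviceUser = lib.mkOption {
      type = lib.types.str;
      description = "Unix user the release runs as.";
    };
  };

  config = lib.mkIf cfg.enable {
    systemd.services.elixirRelease = {
      description = "Elixir release ${cfg.appName}";
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" ];
      serviceConfig = {
        User = cfg.serviceUser;
        # A release built by `mix release` lands under _build/prod/rel.
        ExecStart = "/home/${cfg.serviceUser}/project/_build/prod/rel/${cfg.appName}/bin/${cfg.appName} start";
        Restart = "on-failure";
      };
    };
  };
}
```

Because the module declares its options with mkOption, a typo in configuration.nix fails at evaluation time rather than at runtime, which is part of the appeal of the declarative approach.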

The CI service is designed to listen for webhook activity from BitBucket triggered by commits to the repo we want to build. Upon receipt, the webhook endpoint employs a plug to validate the remote IP and payload signature against expected values, ensuring authenticity.

defmodule CI.GitWebhookController do
  require Logger

  import Plug.Conn

  def bitbucket(conn, %{
        "repository" => %{"name" => repository_name},
        "push" => %{
          "changes" => [
            %{
              "new" => %{
                "name" => branch_name,
                "target" => %{"hash" => commitish}
              }
            }
            | _subsequent_changes
          ]
        }
      }) do
    if validate_bitbucket_ip(conn.remote_ip) do
      Logger.info("Bitbucket webhook on #{branch_name} of #{repository_name}")
      Process.send({ReleaseRunner, node()}, {:run, {repository_name, branch_name, commitish}}, [])
      send_resp(conn, 200, "webhook received")
    else
      send_resp(conn, 403, "access denied")
    end
  end
end
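The payload-signature check mentioned above isn’t shown in the controller excerpt. Here is a minimal sketch of what it could look like, assuming the signature arrives as a lowercase hex HMAC-SHA256 digest of the raw request body under a shared secret (the module and function names are hypothetical, not our production code):

```elixir
defmodule CI.WebhookSignature do
  # Compute the expected header value for a raw request body. Assumes a
  # GitHub-style "sha256=<hex digest>" X-Hub-Signature format.
  def sign(raw_body, secret) do
    "sha256=" <> Base.encode16(:crypto.mac(:hmac, :sha256, secret, raw_body), case: :lower)
  end

  # Compare in constant time so the digest can't be guessed byte-by-byte.
  def valid?(raw_body, secret, header_value) do
    secure_compare(sign(raw_body, secret), header_value)
  end

  defp secure_compare(a, b) when byte_size(a) == byte_size(b) do
    # XOR the two binaries; they are equal iff every result byte is zero.
    :crypto.exor(a, b) |> :binary.bin_to_list() |> Enum.sum() == 0
  end

  defp secure_compare(_a, _b), do: false
end
```

In a plug, this check would run against the raw body captured by a custom body reader, since Phoenix normally consumes the body during JSON parsing.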

Next, the CI service checks out the specific commit that triggered the build, preventing race conditions with multiple committers. If there are changes that warrant a new release, it runs a bash script that performs the necessary steps to create an Elixir release. This is where we strongly considered using sasa1977/ci. Since the status of the project was “still far from being usable” according to Saša when we set out on this journey, we decided to take the LTS route by writing our own bash scripts for compiling and cutting releases. We’re still interested in incorporating Saša’s library here at some point.

An example release script that the CI runs to deploy a new Elixir release:

#!/usr/bin/env bash

set -e
# set -o allexport; source .env; set +o allexport

if [ -z "$1" ]
then
        branch=main
else
        branch=$1
fi

if [ -z "$2" ]
then
        commit=$branch
else
        commit=$2
fi

echo "$(date +%Y-%m-%d\ -\ %H:%M:%S)  Running new release on commit ${commit} of ${branch} branch"

if [ -z "$2" ]
then
        git checkout -f "${branch}"
        git pull origin "${branch}"
else
        git pull origin "${branch}"
        git checkout -f "${commit}"
fi

source /home/phoenix/project/.envrc
yarn --cwd assets install
yarn --cwd assets deploy
mix deps.get --only prod
mix compile
mix ecto.migrate
mix phx.digest
mix release --overwrite --force
sudo systemctl restart elixirRelease

exit 0
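The ReleaseRunner process that the controller messages isn’t shown above either. As a minimal sketch, assuming it is a GenServer that invokes the release script with the branch and commit as arguments (the option names, default script path, and last_result helper are assumptions for illustration):

```elixir
defmodule ReleaseRunner do
  use GenServer

  # Builds run one at a time because this process handles messages serially.
  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    # `:script` is injectable so the runner can be exercised without a real
    # release script; the default path here is an assumption.
    {:ok, %{script: Keyword.get(opts, :script, "/home/phoenix/project/release.sh"), last: nil}}
  end

  # The webhook controller sends {:run, {repo, branch, commit}} as a plain message.
  @impl true
  def handle_info({:run, {_repo, branch, commit}}, state) do
    {output, status} = System.cmd(state.script, [branch, commit], stderr_to_stdout: true)
    {:noreply, %{state | last: {status, output}}}
  end

  # Expose the last build's exit status and output, e.g. for a status page.
  def last_result, do: GenServer.call(__MODULE__, :last_result)

  @impl true
  def handle_call(:last_result, _from, state), do: {:reply, state.last, state}
end
```

Serializing builds through a single process is what makes the commit-pinned checkout safe: a second webhook arriving mid-build simply queues behind the first.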

Given careful, declarative configuration of environment variables, paths, and user permissions, the CI service user, a non-root user, is able to restart the systemd service that runs the Elixir release binary. At this point, the new release gets loaded from the _build path of the project directory.
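On NixOS, that sudo permission can itself be granted declaratively. A hedged sketch of such a rule (the exact rule is an assumption, not our production config), scoped to the single restart command:

```nix
security.sudo.extraRules = [
  {
    users = [ "phoenix" ];
    commands = [
      {
        # Allow exactly one command: restarting the release unit.
        command = "/run/current-system/sw/bin/systemctl restart elixirRelease";
        options = [ "NOPASSWD" ];
      }
    ];
  }
];
```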

Do you have a deployment story about Elixir or an opinionated method you prefer to use? Feel free to reach out to us with your own experience deploying Elixir applications.

Get in touch!

Although far from fully featured, our proof of concept has managed to provide us with painless automated deployment of Elixir releases. We have more trust in our uptime guarantees thanks to the killer combo of systemd and OTP. And as a knock-on benefit, we get excellent system service logging capabilities through journalctl.

Of course, there’s plenty of room to improve upon what we’ve done so far. That’s part of why we’re sharing this! Among the top improvements we identified were: Slack notifications, remote deployment, support for GitHub or GitLab, and a rollback mechanism. For future use and adoption by others, we’d like to improve the ergonomics around configuration and document the full setup procedure. We intentionally left these details out of the initial release, just to prove that it could be done, but would like to add these improvements in the near future.

Charles Suggs is a co-founder and full stack developer at FullSteam Labs. He enjoys building useful tools for environmental sustainability, from firmware and servers through to the user interface.

Jason Johnson, Full-stack Developer

Jason Johnson is a co-founder of FullSteam Labs working as a software developer. He enjoys problem solving at all levels of the tech stack, from the front-end down to embedded firmware development.