
How Using Self-Hosted GitHub Runners Can Save You a Fortune

by sauraj, March 7th, 2023

Too Long; Didn't Read

GitHub has made it possible to run GitHub Actions using your own self-hosted runners. Thanks to the Actions Runner Controller, it is surprisingly easy to run actions in your Kubernetes clusters. We will show you how to install it using `kubectl`, but using Helm is just as easy.

GitHub has made it possible to run GitHub Actions using your own self-hosted runners. Thanks to the Actions Runner Controller it is surprisingly easy to run actions in your Kubernetes clusters.


In this post, we will show how to install Actions Runner Controller into an existing Kubernetes cluster to run customized runners at a fraction of the cost.


Disclaimer: I’m a part of the Symbiosis team.

Why Use Self-Hosted Runners?

At Symbiosis, we run a lot of tests on each commit, so we've spent considerable time making sure they run quickly and can perform complex integration tests.


Using GitHub's own runners is therefore not ideal, as commits would cost us almost a dollar each, and sadly we make a lot of small changes.


We could either pay $40 for 5,000 minutes on a 2-CPU GitHub runner (GitHub-hosted Linux runners bill at $0.008 per minute at the time of writing), or pay about $2 to rent a 2-CPU, 8 GB Kubernetes node for those same 5,000 minutes and run our actions there instead.


And as we'll see below, self-hosted runners also give us more flexibility than the default GitHub runners, since we can heavily customize the runtime.

Prerequisites

To follow this tutorial you need:


  • A Kubernetes cluster
  • NGINX ingress (or any other ingress controller)
  • cert-manager (optional)

Installing Actions Runner Controller

The Actions Runner Controller (ARC) is the service responsible for monitoring your selected repositories and firing up new runners.


We will show you how to install it using kubectl, but installing it with Helm is just as easy.


kubectl create -f https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.25.2/actions-runner-controller.yaml
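
If you'd rather use Helm, a minimal equivalent looks like this (a sketch assuming the chart's official repository and the default actions-runner-system namespace, with chart values left at their defaults):

helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm repo update
helm upgrade --install actions-runner-controller \
    actions-runner-controller/actions-runner-controller \
    --namespace actions-runner-system --create-namespace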

Connecting Actions Runner Controller to GitHub

So we have the controller running, that's great. Now we need to authenticate so that commits, PRs, comments, or any other event can be picked up by the controller and trigger the start of a runner.


We have two options: we can either create a Personal Access Token (PAT) with admin access to our repos, or create a GitHub App and install it into the repos instead.


For simplicity, we will authenticate using a PAT.

Personal Access Token (PAT)

Create a token under Settings > Developer settings > Personal access tokens. Make sure you have admin access to the repos your runners will run on.


Select the repo (Full control) permission, and if your runners will run in an organization, you need to select the following permissions as well:


  • admin:org (Full control)
  • admin:public_key (read:public_key)
  • admin:repo_hook (read:repo_hook)
  • admin:org_hook (Full control)
  • notifications (Full control)
  • workflow (Full control)


Next, let's store the token we just created in a secret that our controller can use for authentication.


kubectl create secret generic controller-manager \
    --namespace=actions-runner-system \
    --from-literal=github_token=<YOUR PERSONAL ACCESS TOKEN>
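
If you opt for the GitHub App route instead, the controller reads the app credentials from the same secret. A sketch, assuming you have the app ID, the installation ID, and the app's downloaded private key file at hand:

kubectl create secret generic controller-manager \
    --namespace=actions-runner-system \
    --from-literal=github_app_id=<YOUR APP ID> \
    --from-literal=github_app_installation_id=<YOUR INSTALLATION ID> \
    --from-file=github_app_private_key=<PATH TO THE PRIVATE KEY FILE>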

Creating a workflow

Before we move on, now is a good time to create an actual workflow that will eventually trigger our self-hosted runner.


name: Run test on PRs
on:
  pull_request: {}
jobs:
  test:
    name: "Run tests"
    runs-on: [self-hosted]
    steps:
    - name: Checkout repo
      uses: actions/checkout@master
    - name: Run tests
      run: yarn test


This workflow triggers on commits to pull requests and runs yarn test. Let's put it into .github/workflows/test-workflow.yaml and push the changes to our repository.


Notice the runs-on: [self-hosted] option that will instruct GitHub to select any of your own self-hosted runners. Don't worry, you can be more specific about which type of runner to use. More on that later.

Webhook & Ingress

Runners can be triggered through either pull-based mechanics, such as polling, or push-based mechanics, such as webhooks. Most triggers come with certain drawbacks: some spawn too many runners, and some spawn too few, which may leave your actions in a slow-moving queue.


There is, however, the workflowJob trigger, which has none of these drawbacks but requires us to create an Ingress and configure a GitHub webhook. This step isn't strictly necessary, but we can assure you it's worth the effort.


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: actions-runner-controller-github-webhook-server
  namespace: actions-runner-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  tls:
  - hosts:
    - your.domain.com
    secretName: your-tls-secret-name
  rules:
  - http:
      paths:
      - path: /actions-runner-controller-github-webhook-server
        pathType: Prefix
        backend:
          service:
            name: actions-runner-controller-github-webhook-server
            port:
              number: 80


This Ingress is configured for the NGINX ingress controller, so make sure to edit it to match your own ingress controller. It also assumes that cert-manager is configured to automatically provision a TLS certificate into your-tls-secret-name.
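
If cert-manager is installed with an ACME issuer, one common approach is to request the certificate through an annotation on this Ingress. A sketch, assuming a ClusterIssuer named letsencrypt-prod already exists in your cluster:

metadata:
  annotations:
    # cert-manager will then create and renew the certificate in your-tls-secret-name
    cert-manager.io/cluster-issuer: letsencrypt-prod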


The next step is to define the webhook in GitHub. Go to Settings > Webhooks > Add webhook in your target repository.


First, let's set the payload URL to point to the ingress, for example using the details above: https://your.domain.com/actions-runner-controller-github-webhook-server. Set the content type to application/json and select the Workflow jobs event.


Once that's done, create the webhook and go to Recent Deliveries to verify that the ingress can be reached successfully.
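
You can also sanity-check DNS and TLS from your own machine. The exact status code a plain GET receives isn't important here; what matters is that the request reaches the webhook server instead of timing out:

curl -sv -o /dev/null https://your.domain.com/actions-runner-controller-github-webhook-server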

Listening for events

We have our controller running, it's authenticated and we have a workflow. The only thing left is to create the actual runners.


Now, we could just create a Runner resource and be done with it, but just like a Pod, it wouldn't have any replicas or any autoscaling.


Instead, we create a RunnerDeployment and a HorizontalRunnerAutoscaler. Any Kubernetes user will notice plenty of similarities to regular Deployments and HPAs.


apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: actions-runners
spec:
  minReplicas: 0
  maxReplicas: 5
  scaleTargetRef:
    kind: RunnerDeployment
    name: actions-runners
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}
    duration: "30m"


Applying the above manifest launches a deployment that will scale up to five concurrent runners: each workflowJob event scales the deployment up, and the added capacity expires after the configured 30-minute duration. Remember to change the manifest to track the repository of your choice (and make sure the access token has access to it).


Voilà! We're now able to create a pull request to verify that the runner is automatically triggered.
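
The ARC custom resources can be inspected like any other Kubernetes object, which is handy for watching the autoscaler react to the pull request:

# desired and available runner counts for the deployment
kubectl get runnerdeployments
# the individual runners the controller has registered
kubectl get runners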

Using labels to identify runners

In a repo with many workflows, such as a monorepo, it may be necessary to run many different runners at once.


In order to more carefully select which runner to use for a specific workflow we can define custom labels:


apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
      labels:
      - my-label


With this label, we're able to select this runner by setting both self-hosted and my-label in our workflow:


name: Run test on PRs
on:
  pull_request: {}
jobs:
  test:
    name: "Run tests"
    runs-on: [self-hosted, my-label]
    steps:
    - name: Checkout repo
      uses: actions/checkout@master
    - name: Run tests
      run: yarn test

Customizing runners with custom volumes

Runners can be configured to pass through volumes from the host system or to attach Persistent Volume Claims (PVCs).


At Symbiosis we use PVCs to expose KVM to our runners, in order to run integration tests with virtualization enabled. We also use PVCs to attach large images that are used to set up a multi-tenant cloud environment for integration testing.


Custom volumes can also be used for layer caching, to improve the speed of building OCI images.
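
For example, the default setup runs Docker in a sidecar next to each runner, so its layer cache lives in /var/lib/docker. Below is a minimal sketch of keeping that cache on the node itself, assuming a hypothetical host path /var/cache/arc-docker and at most one runner per node (two Docker daemons must never share the same /var/lib/docker):

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners-cached
spec:
  template:
    spec:
      repository: myorg/myrepo
      volumes:
      - name: docker-cache
        hostPath:
          path: /var/cache/arc-docker
          type: DirectoryOrCreate
      # dockerVolumeMounts attaches volumes to the Docker sidecar rather than the runner container
      dockerVolumeMounts:
      - mountPath: /var/lib/docker
        name: docker-cache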


Returning to plain PVCs: the runner below provisions a 10Gi volume for its work directory through an ephemeral volume claim. (For volumes that must outlive a single runner pod, ARC also offers the RunnerSet resource, which functions much like a StatefulSet in that it allocates each runner on a node where its volume can be properly mounted.)


apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
      volumeMounts:
      - mountPath: /runner/work
        name: pvc
      volumes:
      - name: pvc
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: [ "ReadWriteOnce" ]
              resources:
                requests:
                  storage: 10Gi


This PVC can be used to store any data we need between runs, without keeping an unnecessarily large amount of data in the GitHub Actions cache! At the time of writing, 100 GiB of storage with GitHub runners would cost $24/mo. With cloud providers like Linode, Symbiosis, or Scaleway, that cost would be closer to $8/mo.

To summarize

Running your own actions runners requires some upfront configuration but comes with a list of benefits such as:


  • Reduced costs
  • Attaching custom volumes (such as host path or PVCs) to your runners
  • Customizing images or adding sidecars
  • Integrating workflow runs into your existing Kubernetes observability stack


Therefore, we highly recommend running your own runners to save costs, simplify management, and increase flexibility by bringing the runners into the Kubernetes ecosystem.


Check out Symbiosis here.


Lead image generated with Stable Diffusion.
