
Managing Local Kubernetes Cluster with Lightweight Kubernetes and Traefik Proxy

by Emmanuel Sys, October 18th, 2020

Motivation

Kubernetes clusters are not exactly cheap and can be complex to set up and operate properly. For this reason, you may be tempted to reserve “true” online Kubernetes clusters for your production workloads and run clusters locally for development purposes.

In this post, we will explore different ways to easily set up a local Kubernetes cluster and the trade-offs that come with each.

Local Kubernetes Cluster Challengers

Different solutions exist to run a Kubernetes cluster on your laptop. Let’s review a few of these.

Minikube

Minikube is the solution the Kubernetes project documentation advises you to use. It deploys a VM with a single-node cluster, so you pay the price of virtualization, as seen in the minimum requirements for the host machine (2 CPUs, 2 GB RAM, 20 GB storage).

This is a simple yet effective way to learn kubectl commands. For a long time, the single-node limitation created some hurdles, but the Minikube team recently introduced multi-node clusters as an experimental feature to address this.
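
For example, a recent Minikube can start a small multi-node cluster with the --nodes flag; a minimal sketch, assuming Minikube is already installed (the profile name multinode-demo is arbitrary):

# Start a three-node cluster under a dedicated profile (requires a recent Minikube release)
minikube start --nodes 3 -p multinode-demo
# List the nodes managed by this profile
minikube node list -p multinode-demo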

Kind

Kind is another approach from a Kubernetes SIG to deploy a cluster locally. The trick here is to run the cluster nodes as Docker containers. Consequently, it’s easier to set up and faster to boot than Minikube. It supports all cluster topologies, from a single node up to multiple control-plane and worker nodes.

Kind was first and foremost created for conformance testing and for use in CI pipelines, which gives you some nice features like the ability to load Docker images directly into the cluster without pushing to an external registry.
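
As a rough sketch of that workflow (assuming kind is installed; the cluster name dev and the image my-app:dev are placeholders):

# Create a single-node cluster, then load a locally built image into it without any registry
kind create cluster --name dev
kind load docker-image my-app:dev --name dev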

K3S/K3D

K3s is a lightweight, fully conformant Kubernetes distribution. To achieve this minimalism, some trade-offs are made, including:

  • The default datastore for the kube-apiserver is SQLite instead of etcd
  • All the control plane components are packaged in a single binary (see the sketch after this list)
  • The number of external dependencies is kept to a minimum
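
To get a feel for the single-binary approach, the upstream install script from the k3s quick-start downloads that one binary and starts a complete server (shown here only for illustration; our local setup below uses k3d instead):

# Install and start a single-node k3s server (single binary, SQLite datastore by default)
curl -sfL https://get.k3s.io | sh -
# k3s embeds kubectl, so the cluster can be queried right away
sudo k3s kubectl get nodes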

K3d is a helper project allowing you to run k3s inside a Docker container as Kind would do.

Which One Should I Pick?

My personal requirements are:

  • Clusters should start and stop quickly
  • Clusters should have a realistic topology
  • Different clusters can run side by side
  • Clusters must use minimal system resources

The best fit for me is k3d because it’s easy to set up, it runs in Docker, consumes few resources, and is fully-featured out of the box.

Let’s now see how to set up a cluster using k3d.

Using K3D to Bootstrap a Cluster

Get K3D

Follow the installation instructions at https://github.com/rancher/k3d#get
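
As a hint, and assuming you are on macOS or Linux with Homebrew available (the README above also documents an install script and other methods):

# One convenient option; see the k3d README for alternatives
brew install k3d
k3d version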

Create a New K3D Cluster

First, let’s create a new cluster.

k3d cluster create devcluster \
--api-port 127.0.0.1:6443 \
-p 80:80@loadbalancer \
-p 443:443@loadbalancer \
--k3s-server-arg "--no-deploy=traefik"

A few things to note:

  • We map localhost ports 80 and 443 to the k3s virtual load balancer. This will allow us to reach Ingress resources directly from localhost on our machine
  • The cluster is deployed without the default Traefik Ingress Controller

Why disable Traefik? Simply because you may want to use another Ingress Controller, or because k3s comes bundled with Traefik 1 by default. We will install Traefik 2 (Traefik Proxy), a huge improvement over Traefik 1, ourselves later.
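
At this point, you can already check that the cluster is up without any kubeconfig, since k3d only needs Docker (the container name prefix below assumes the devcluster name used above):

# List k3d-managed clusters and the Docker containers backing them
k3d cluster list
docker ps --filter "name=k3d-devcluster"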

Get Your Credentials

Simply run this command to get your credentials, save them to a file, and export them to your environment:

mkdir -p $HOME/k3d   # make sure the target directory exists
k3d kubeconfig get devcluster > $HOME/k3d/kubeconfig
export KUBECONFIG=$HOME/k3d/kubeconfig

Test that you have access to the cluster by running a simple kubectl command:

kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
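
You can also list the nodes to confirm that the single k3d server (the default topology for k3d cluster create) is ready:

kubectl get nodes -o wide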

Install Traefik Proxy

Traefik Proxy is packaged as a Helm chart, so a basic setup is super easy:

helm repo add traefik https://containous.github.io/traefik-helm-chart
helm install traefik traefik/traefik
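
To confirm the release is deployed, you can check its status and the Traefik pods; the selector below assumes the default chart values:

helm status traefik
kubectl get pods --selector "app.kubernetes.io/name=traefik"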

Check that Traefik is working by reaching its Dashboard through a port forward.

kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name) 9000:9000

Then browse to http://localhost:9000/dashboard/ (be aware of the trailing slash!).

Deploy an Application

Let’s deploy a simple application to validate our Ingress Controller setup. We will use the whoami application:

kubectl create deploy whoami --image containous/whoami
deployment.apps/whoami created
kubectl expose deploy whoami --port 80
service/whoami exposed
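
Before wiring up the ingress, you can verify that the pod is running and the service targets it (the app=whoami label is added automatically by kubectl create deploy):

kubectl get pods -l app=whoami
kubectl get svc whoami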

Then we will make use of our new Traefik by defining an ingress rule. Traefik understands both its own IngressRoute CRD and traditional Ingress resources. We will use the latter.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80
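
To apply the manifest, save it to a file (whoami-ingress.yaml is just an example name) and run:

kubectl apply -f whoami-ingress.yaml
kubectl get ingress whoami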

In this example, we expose the whoami service on both the HTTP and HTTPS entrypoints. Every URL will be routed to the service. You can see the new router on the Traefik Dashboard.

To test it, use the following URL: https://localhost/. It should also work using only HTTP.
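
The same check can be done from the command line with curl; the -k flag skips verification of Traefik's default self-signed certificate:

curl -k https://localhost/
curl http://localhost/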

Closing Thoughts

Creating a development cluster has never been so easy. All the options discussed have many more features to discover, including automatic Helm chart deployment for k3s or a Golang API to manage clusters for Kind (to name a few).

The important thing here is that it’s now convenient to replace your good old Docker-compose files with a fully-featured Kubernetes cluster. Do not hesitate to give this a go!

Previously published at https://codeburst.io/creating-a-local-development-kubernetes-cluster-with-k3s-and-traefik-proxy-7a5033cb1c2d