kind (Kubernetes in Docker) is a tool that allows you to run Kubernetes clusters locally using Docker container "nodes". It's a great tool for developers who want to test their applications in a Kubernetes environment without the overhead of a full-scale cluster.

In this blog post, we'll walk through the steps to set up a kind cluster with a Docker registry, so that you can push images to the registry and pull them from the Kubernetes cluster. This mimics what we would do in cloud-provider-managed Kubernetes clusters, i.e. pull the images from OCI registries. We will also look at how to install nginx Ingress and create a LoadBalancer service.

Prerequisites

Ensure you have the following installed on your machine:

- Docker
- kind, there are many ways to install kind; please refer to the link and install as you see fit.
- kubectl, we will use this to interact with the cluster.

All the scripts discussed in this blog are uploaded to this github repository.

What are we going to create?

- A multi-node kubernetes cluster with control-plane and worker nodes, including examples of worker nodes with labels and taints.
- A local registry to push and pull locally built docker images.
- An Ingress deployment so that we can access the endpoints exposed by the services.
- Pods with dummy services to validate the setup.

This setup is pretty much what you need to validate your applications locally; it is extremely helpful and cost-efficient while developing services for a Kubernetes environment.
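Before creating anything, it's worth a quick check that the three prerequisites are actually installed and on your PATH. These are standard commands; the version numbers will of course differ on your machine:

```bash
# confirm the prerequisites are installed
docker --version
kind version
kubectl version --client
```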
Create k8s cluster with multiple nodes and configure cluster with containerd registry config dir

The following cluster-config.yaml defines a Cluster with control-plane and worker nodes. The api-server and the other control plane components will be on the node with role control-plane, and the nodes with role worker will run your pods. You will observe that the controller-manager, api-server, scheduler, etcd, coredns, kindnet, kube-proxy, and local-path-provisioner pods are deployed on the control-plane node by default, while the worker nodes only have kindnet and kube-proxy by default. You will also observe that we are configuring the cluster with a containerd registry config dir. Finally, we are exposing ports 80/443/5678 so that localhost on the host machine can hit those ports.

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: "platformwale"
# configure cluster with containerd registry config dir enabled
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
nodes:
# control plane node
# this comes with a taint so that the control-plane node will not accept any other pods by default
- role: control-plane
  image: "kindest/node:v1.27.3"
# worker nodes
# worker with no node labels, so pods with no nodeSelectors will schedule here
- role: worker
  image: "kindest/node:v1.27.3"
# worker with node label role=app
# pods with nodeSelector role=app will schedule here
- role: worker
  image: "kindest/node:v1.27.3"
  labels:
    role: app
# worker with node label role=ingress
# pods with nodeSelector role=ingress and a toleration for the taint role=ingress:NoSchedule will schedule here
- role: worker
  image: "kindest/node:v1.27.3"
  labels:
    role: ingress
  # extraPortMappings allow localhost to make requests to the Ingress controller over ports 80/443/5678
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  - containerPort: 5678
    hostPort: 5678
    protocol: TCP
  # add taint to the node such that only pods tolerating the taint will be scheduled on this node
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        register-with-taints: "role=ingress:NoSchedule"
```

Create the cluster by submitting the cluster-config.yaml file to kind as follows -

```bash
kind create cluster --config cluster-config.yaml
```

You will see something that looks like what's displayed below on the successful creation of a kind cluster -

```
$ kind create cluster --config cluster-config.yaml
Creating cluster "platformwale" ...
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-platformwale"
You can now use your cluster with:

kubectl cluster-info --context kind-platformwale

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```

```
$ kubectl cluster-info --context kind-platformwale
Kubernetes control plane is running at https://127.0.0.1:58931
CoreDNS is running at https://127.0.0.1:58931/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

Validate that you can see the kind cluster you created above -

```
$ kind get clusters
platformwale
```

Validate that the nodes are created successfully -

```
$ kubectl get nodes
NAME                         STATUS   ROLES           AGE     VERSION
platformwale-control-plane   Ready    control-plane   9m20s   v1.27.3
platformwale-worker          Ready    <none>          8m55s   v1.27.3
platformwale-worker2         Ready    <none>          9m1s    v1.27.3
platformwale-worker3         Ready    <none>          8m56s   v1.27.3
```

Please read this configuration documentation to learn about more options for configuring the kind cluster.
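Since two of the workers carry a role label and the ingress worker carries a taint, you can optionally confirm that both were applied before moving on. These are standard kubectl inspection commands; platformwale-worker3 is the tainted worker observed in the outputs later in this post, and the name may differ on your machine:

```bash
# show the custom "role" label as a column (blank where unset)
kubectl get nodes -L role

# the ingress worker should report the role=ingress:NoSchedule taint
kubectl describe node platformwale-worker3 | grep -A1 Taints
```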
Create the registry container and configure cluster nodes for the registry

Create the registry container and configure the cluster nodes for registry access as below. This command will pull the registry container image locally and start the container. This container will be used as the local docker registry.

```bash
# start the registry container
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi
```

Now add the registry config to the nodes as below. This is necessary because localhost resolves to loopback addresses that are network-namespace local. In other words, localhost in the container is not localhost on the host. We want a consistent name that works from both ends, so we tell containerd to alias localhost:${reg_port} to the registry container when pulling images.

```bash
kind_cluster_name="platformwale"
# reg_name must match the registry container name created above
reg_name='kind-registry'
reg_port='5001'
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes --name ${kind_cluster_name}); do
  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done
```

Now connect the registry to the cluster network; this allows kind to bootstrap the network and ensures the registry and the nodes are on the same network.

```bash
reg_name='kind-registry'
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi
```

Now document the local registry. The standard for defining the local registry is described in detail in this doc. This is a standard way for cluster configuration tools to record how developer tools should interact with the local registry, as well as a standard way for developer tools to read that information when pushing images to the cluster.

```bash
reg_port='5001'
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
```

At this point you have finished creating the cluster as well as enabling a local docker registry.

Connect to the Private Registry

We will pull a sample app from the remote docker registry, tag it, and push it to the local registry we created above. We will then start a pod using the image pulled from the local registry. Here's an example:

```bash
# pull a sample hello-app from remote registry
docker pull gcr.io/google-samples/hello-app:1.0

# tag the pulled docker image for local registry
docker tag gcr.io/google-samples/hello-app:1.0 localhost:5001/hello-app:1.0

# push the docker image to the local registry
docker push localhost:5001/hello-app:1.0
```

Submit the following yaml to create the hello-server deployment, which uses the hello-app docker image from the local registry.

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-server
  name: hello-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      nodeSelector:
        role: app
      containers:
      - image: localhost:5001/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: hello-app
EOF
```

Validate that the pod is running successfully as below, and that it runs on the platformwale-worker2 node because we used nodeSelector: role=app -

```
$ kubectl get po -n default -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE                   NOMINATED NODE   READINESS GATES
hello-server-bfc485c98-jzxbb   1/1     Running   0          14s   10.244.2.3   platformwale-worker2   <none>           <none>
```

This proves that we have a working kubernetes cluster with multiple nodes, and that we are able to push and pull images from the local docker registry.

The local registry can also be used for bootstrapping local development environments faster. For instance, if you have a big local vagrant or docker-compose setup which pulls huge images, you can set up this local registry on one of the hosted machines in your office network, and then all the engineers on your team can use this registry to pull the images to set up the local environment instead of pulling from a public network. This will speed up the environment setup as well as save network bandwidth.
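As a minimal sketch of that idea, a teammate's docker-compose setup could point at the shared registry instead of a public one. Everything here is a placeholder: registry.office.example:5001 is an assumed hostname for the shared registry on your office network, and hello-app is just the sample image pushed above.

```yaml
# hypothetical docker-compose.yaml on a teammate's machine;
# registry.office.example:5001 is an assumed shared-registry hostname
services:
  hello:
    # pulled over the office network instead of the public internet
    image: registry.office.example:5001/hello-app:1.0
    ports:
      - "8080:8080"   # hello-app serves on 8080
```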
Deploy nginx Ingress and validate LoadBalancer k8s service

In this section, we will deploy the nginx ingress service on the nodepool we created earlier. We will also deploy a LoadBalancer k8s service along with sample apps to validate the deployment.

We have modified the public nginx deploy.yaml to tolerate the taint role=ingress:NoSchedule and use nodeSelector role=ingress, so that the nginx pods are deployed on the platformwale-worker3 node, which we configured with the taint and label such that only the nginx deployments land there. Apply the modified nginx.yaml as below -

```bash
kubectl apply -f https://raw.githubusercontent.com/piyushjajoo/kind-with-local-registry-and-ingress/master/nginx.yaml
```

Validate all the nginx pods are running -

```
$ kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP           NODE                   NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-4kxmk        0/1     Completed   0          34m   10.244.1.2   platformwale-worker3   <none>           <none>
ingress-nginx-admission-patch-kx4zv         0/1     Completed   1          34m   10.244.1.3   platformwale-worker3   <none>           <none>
ingress-nginx-controller-57d7c6cb58-g2gdf   1/1     Running     0          34m   10.244.1.4   platformwale-worker3   <none>           <none>
```

Pull the following docker image, tag it, and push it to the local registry; we will use it for the Ingress setup validation below -

```bash
# pull docker image
docker pull hashicorp/http-echo:0.2.3

# tag the image for local registry
docker tag hashicorp/http-echo:0.2.3 localhost:5001/http-echo:0.2.3

# push the docker image
docker push localhost:5001/http-echo:0.2.3
```
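If you want a quick sanity check before wiring this image into the cluster, you can run it straight from the local registry on your host. This is an optional smoke test, assuming http-echo's default listen address of :5678; the host port 8080 and the container name echo-smoke are arbitrary choices:

```bash
# run the echo server from the local registry and hit it once
docker run -d --rm --name echo-smoke -p 8080:5678 localhost:5001/http-echo:0.2.3 -text=hello

# should print "hello"
curl localhost:8080

# clean up the smoke-test container
docker stop echo-smoke
```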
We will also install the MetalLB loadbalancer. NOTE: On macOS and Windows, docker does not expose the docker network to the host. Because of this limitation, containers (including kind nodes) are only reachable from the host via port-forwards; however, other containers/pods can reach other things running in docker, including loadbalancers. If you are on Mac or Windows, you can skip installing MetalLB.

```bash
# install metallb
echo "install metallb"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

# wait until MetalLB pods (controller and speaker) are ready
echo "wait until MetalLB pods (controller and speaker) are ready"
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s

# find the cidr range for the kind network
echo "find the cidr range for the kind network"
output=$(docker network inspect -f '{{.IPAM.Config}}' kind)
ipv4_cidr=$(echo "$output" | grep -oE '([0-9]+\.[0-9]+)\.[0-9]+\.[0-9]+/[0-9]+' | head -n 1)
ipv4_parts=$(echo "$ipv4_cidr" | cut -d '.' -f 1,2)
echo "IPv4 CIDR Range (First 2 Parts): $ipv4_parts"

# configure ip address pool
echo "configuring ip address pool"
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - $ipv4_parts.255.200-$ipv4_parts.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF
```

Validate the Ingress object using the ClusterIP services, as well as the LoadBalancer type services, using the script below. You can skip creating the LoadBalancer type services if you are on Mac or Windows.

The script below will deploy the pods and set up an Ingress object to divert the traffic to the pods based on the configured path.

```bash
kubectl apply -f - <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    name: foo-app
    app: http-echo
spec:
  containers:
  - name: foo-app
    image: localhost:5001/http-echo:0.2.3
    args:
    - "-text=foo"
---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    name: bar-app
    app: http-echo
spec:
  containers:
  - name: bar-app
    image: localhost:5001/http-echo:0.2.3
    args:
    - "-text=bar"
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    name: foo-app
  ports:
  # Default port used by the image
  - port: 5678
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    name: bar-app
  ports:
  # Default port used by the image
  - port: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # \$2 is escaped so the shell passes a literal $2 to the annotation
    nginx.ingress.kubernetes.io/rewrite-target: /\$2
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /foo(/|$)(.*)
        backend:
          service:
            name: foo-service
            port:
              number: 5678
      - pathType: Prefix
        path: /bar(/|$)(.*)
        backend:
          service:
            name: bar-service
            port:
              number: 5678
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service-lb
spec:
  type: LoadBalancer
  selector:
    name: foo-app
    app: http-echo
  ports:
  # Default port used by the image
  - port: 5678
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service-lb
spec:
  type: LoadBalancer
  selector:
    name: bar-app
    app: http-echo
  ports:
  # Default port used by the image
  - port: 5678
EOF
```

Validate the Ingress as below -

```bash
# validate cluster ips via Ingress object
# should output "foo"
echo "validating foo-app via Ingress object"
curl localhost/foo/hostname

# should output "bar"
echo "validating bar-app via Ingress object"
curl localhost/bar/hostname
```
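If those curls don't return what you expect, it can help to confirm that the nginx controller admitted the Ingress and resolved the backends. These are standard kubectl inspection commands:

```bash
# the Ingress should list both paths and their backing services
kubectl get ingress example-ingress -n default

# look for the resolved endpoints and any events from the nginx controller
kubectl describe ingress example-ingress -n default
```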
Validate the LoadBalancer services as below. If you are on Mac or Windows, as mentioned earlier, docker doesn't expose the docker network to the host, so you won't be able to use the LoadBalancer IP directly; instead you will need to port-forward the service to access it.

```bash
## on linux you can validate the LoadBalancer services as below

# validate load balancer services
echo "validating loadbalancers; note on macOS and Windows, docker does not expose the docker network to the host, so containers (including kind nodes) are only reachable from the host via port-forwards; however, other containers/pods can reach other things running in docker, including loadbalancers"
FOO_LB_IP=$(kubectl get svc/foo-service-lb -n default -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
BAR_LB_IP=$(kubectl get svc/bar-service-lb -n default -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')

# should output foo and bar on separate lines
for _ in {1..10}; do
  curl ${FOO_LB_IP}:5678
  curl ${BAR_LB_IP}:5678
done
```

```bash
## on Mac or Windows, port-forward and hit the service as below

# validate foo-service-lb
kubectl port-forward -n default svc/foo-service-lb 5678:5678

# in a browser hit localhost:5678 or curl as below; you will see foo as the output
$ curl localhost:5678
foo

# validate bar-service-lb
kubectl port-forward -n default svc/bar-service-lb 5678:5678

# in a browser hit localhost:5678 or curl as below; you will see bar as the output
$ curl localhost:5678
bar
```

To set up a LoadBalancer on Mac using MetalLB, refer to this documentation.

Cleanup

Destroy the kind cluster as well as the registry as below -

```bash
# delete kind cluster
echo "deleting kind cluster"
kind delete cluster --name "platformwale"

# delete registry; this removes any container whose listing matches "registry",
# which with the setup above is the kind-registry container
echo "deleting registry"
docker rm -f $(docker ps -a | grep registry | awk -F ' ' '{print $1}')
```

Conclusion

And that's it! You now have a local Kubernetes development environment with a local Docker registry. This setup allows you to build, push, and deploy your Docker images without needing to push them to a public registry, which is useful for setting up local development environments faster and saving network bandwidth. You also installed the nginx Ingress service and created a LoadBalancer k8s service, which is useful to mimic the behavior you would rely on in an actual cloud provider.

Resources

- kind documentation
- kind resources
- All the scripts and yamls are uploaded in this github repository
- MetalLB docs
- Kind and MetalLB on mac

Originally published at https://platformwale.blog on Aug 15, 2023.

Author Notes

Feel free to reach out with any concerns or questions you have. I will make every effort to address your inquiries and provide resolutions. Stay tuned for the upcoming blog in this series dedicated to Platformwale (engineers who work on Infrastructure Platform teams).