When running your application services on top of an orchestration tool like Kubernetes or Mesos with Marathon, there are some common needs you’ll have to satisfy. Your application will usually contain two types of services: those that should be visible only from inside the cluster, and those that you want to expose to the external world, outside your cluster and possibly to the internet (e.g. frontends).
This article focuses on how to approach this on Kubernetes. You can make use of the different service types that Kubernetes makes available when creating a new Service in order to achieve what you want.
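For instance, here is a minimal sketch of exposing a hypothetical frontend outside the cluster with a NodePort Service (all names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend                # illustrative name
spec:
  type: NodePort                # or LoadBalancer, on clouds that support it
  selector:
    app: frontend               # matches the pods to expose
  ports:
  - port: 80                    # port exposed inside the cluster
    targetPort: 8080            # port the pods listen on
    nodePort: 30080             # optional; opened on every node (default range 30000-32767)
```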
So, if your cloud does not support the LoadBalancer type (e.g. you run an on-premise private cloud), and you need something more sophisticated than exposing a port on every node of the cluster, then it used to be that you’d have to build your own custom solution. Fortunately this is no longer true.
Since Kubernetes v1.2.0 you can use Kubernetes Ingress, which includes support for TLS and L7 HTTP-based traffic routing.
You can also ask Kuwit “How can I expose services to the external world?” whenever you need to remember this ;-)
An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, etc. Users request Ingress by POSTing the Ingress resource to the API server.
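As a rough sketch (the hostname and service names here are made up), an Ingress resource looks like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress          # illustrative name
spec:
  rules:
  - host: frontend.example.com    # requests for this host...
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend   # ...are routed to this Service
          servicePort: 80
```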
In order for the Ingress resource to work, the cluster must have an Ingress controller running. The Ingress controller is responsible for fulfilling the Ingress dynamically by watching the API server’s /ingresses endpoint.
This is handy! Now you could go even further: isolate at the infrastructure level where your Ingress controller runs, and think of it as an “edge router” that enforces the firewall policy for your cluster. The picture for an HA Kubernetes cluster would look something like this:
We’ll show how to use Traefik for this purpose. Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends, among them Mesos/Marathon and Kubernetes, to manage its configuration automatically and dynamically.
We’ll deploy a Kubernetes cluster similar to the picture above and run Traefik as a DaemonSet.
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller-v1
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: containous/traefik
        name: traefik-ingress-lb
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 8080
          hostPort: 8080
        volumeMounts:
        - mountPath: /etc/traefik
          name: traefik-volume
          readOnly: false
        args:
        - --web
        - --kubernetes
        - --configFile=/etc/traefik/traefik.toml
        - --logLevel=DEBUG
      volumes:
      - hostPath:
          path: /etc/traefik
        name: traefik-volume
      nodeSelector:
        role: edge-router
```
The source code is here
You can also configure Traefik to set up TLS for your services automatically and on demand.
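Traefik’s own TLS behaviour is driven by the traefik.toml mounted into the container above. Independently of that, the Ingress API also lets you declare TLS termination on a resource (whether a given controller honours it depends on the controller). A minimal sketch, assuming a Secret named frontend-tls already holds the certificate and key:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress           # illustrative name
spec:
  tls:
  - secretName: frontend-tls       # Secret containing tls.crt and tls.key
  rules:
  - host: frontend.example.com
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
```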
Our edge-router will be just another Kubernetes node with some restrictions.
We don’t want any other pod to be scheduled to this node, so we run the kubelet with **--register-schedulable=false**, and we also give the node a convenient label: **--node-labels=role=edge-router**.
Kubernetes will run DaemonSet pods on every node of the cluster, even nodes that are otherwise non-schedulable. We only want this DaemonSet to run on the edge-router node, so we use a nodeSelector to match the label we previously added:
```yaml
nodeSelector:
  role: edge-router
```
Notice that with this approach, if you want to add a new edge router to the cluster, all you need to do is spin up a new node with that label and a new DaemonSet pod will be automatically scheduled onto that machine. Nice!
Here is a video demo of all this in action using two different clouds (DigitalOcean and AWS), deploying two Kubernetes clusters from scratch:
Recently I’ve seen a lot of users on the Kubernetes Slack with issues communicating with the ingress controller. This is often due to a known problem.
The ingress controller may use hostPort to expose itself:
```yaml
ports:
- containerPort: 80
  hostPort: 80
- containerPort: 443
  hostPort: 443
```
If you are using a CNI network plugin for your cluster networking, hostPort is not supported yet.
You can track the current status of this problem here:
https://github.com/kubernetes/kubernetes/issues/23920
https://github.com/kubernetes/kubernetes/issues/31307
https://github.com/containernetworking/cni/issues/46
For now, potential workarounds are to use hostNetwork, or to run a Service of the previously mentioned NodePort type that matches your ingress controller running as a DaemonSet.
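The first workaround amounts to setting hostNetwork: true in the DaemonSet’s pod spec. As a sketch of the second (the Service name and nodePort values are illustrative), a NodePort Service can select the Traefik pods via the k8s-app: traefik-ingress-lb label used in the DaemonSet above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service   # illustrative name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: traefik-ingress-lb   # label from the Traefik DaemonSet above
  ports:
  - name: http
    port: 80
    nodePort: 30080               # illustrative; must be in the node port range
  - name: https
    port: 443
    nodePort: 30443               # illustrative
```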
Hopefully this post gives you some insight into how to expose services on Kubernetes and the benefits of Ingress controllers.
This post has also been published on Capgemini Engineering Blog