A Kubernetes guide for Docker Swarm lovers

Written by poli | Published 2018/01/14


You’ve mastered the Swarm. Now it’s time to master the Helm


I’ve never looked at Kubernetes because Swarm gave me everything I needed in terms of container orchestration. Straightforward to use, it shone in a world where orchestrators like Mesos and Kubernetes were difficult to set up.

But now, in 2018, the story is quite different: all three major cloud providers (AWS, Google Cloud and Azure) are betting on Kubernetes with managed Kubernetes-as-a-Service offerings. This is big because it takes all the complexity of managing a cluster (the main pain point of K8S, in my opinion) and puts it in the cloud provider’s hands. Not to mention that the new versions of Docker Enterprise and Docker for Mac & Windows will come bundled with Kubernetes out of the box.

The size of the community is also a big point in this story. Every time I had a problem with Docker Swarm, it took me a while to find a solution. In contrast, even though Kubernetes has more features and configuration possibilities, simple Google searches and asking questions on Slack have solved every problem I’ve had with it so far. Don’t get me wrong: the Docker Swarm community is great, just not as great as the Kubernetes one.

This point isn’t Docker Swarm’s fault: the reality is that Kubernetes is under active development by companies like Google, Microsoft, Red Hat and IBM (and Docker, I suppose), as well as individual contributors. A look at both GitHub repositories reveals that Kubernetes is in fact a lot more active.

But hey! This was supposed to be a guide, so let’s start by comparing how to achieve similar scenarios in both Swarm and K8S.

Disclaimer: this guide is not meant to provide production-ready scenarios. I kept things simple to better illustrate the similarities between Swarm and K8S.


Starting a cluster (1 Master & 1 Worker)

To keep things simple, let’s build a cluster with 1 Master and 1 Worker.

Starting a Cluster — Docker Swarm

Starting a cluster in Docker Swarm is as simple as it gets. With Docker installed on the machine, simply do:

> docker swarm init

Swarm initialized: current node (x5hmcwovhbpxrmthesxd0n1zx) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-5agb6u8svusxsrfisbpiarl6pdzfgqdv1w0exj8c9niv45y0ya-9eaw26eb6i4yq1pyl0a2zdvjz 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Then, on another machine in the same network, paste the aforementioned command:

> docker swarm join --token SWMTKN-1-5agb6u8svusxsrfisbpiarl6pdzfgqdv1w0exj8c9niv45y0ya-9eaw26eb6i4yq1pyl0a2zdvjz 192.168.65.3:2377

The node joined the swarm as a worker
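
To verify that the cluster sees both machines, you can run docker node ls on the manager; the IDs and hostnames below are illustrative:

> docker node ls

ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
x5hmcwovhbpxrmthesxd0n1zx *   manager1   Ready    Active         Leader
b2qphb3wuwdprrksxdfsqkcmh     worker1    Ready    Active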

Starting a Cluster — Kubernetes (using kubeadm)

I mentioned a few times that setting up a Kubernetes cluster is complicated. While that remains true, there is a tool (still in beta) called kubeadm that simplifies the process. In fact, setting up a K8S cluster with kubeadm is very similar to doing it with Docker Swarm. Installing kubeadm is easy, as it’s available through common Linux package managers (apt, yum, etc.).

> kubeadm init

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

The command takes a while to complete because Kubernetes relies on external services like etcd to function, and kubeadm automates the setup of all of them.

As with Swarm, to join another node you simply run the command from the output on the other node:

> kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
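
Back on the master, kubectl get nodes should now list both machines. Expect them to report NotReady until a pod network is installed (the next step); the names and versions below are illustrative:

> kubectl get nodes

NAME      STATUS     ROLES     AGE   VERSION
master    NotReady   master    5m    v1.9.1
worker    NotReady   <none>    1m    v1.9.1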

So far, the cluster creation process is nearly identical in both solutions. But Kubernetes needs an extra step:

Installing a pod network

Docker Swarm comes bundled with a service network that provides networking inside the cluster. While this is convenient, Kubernetes offers more flexibility in this space, letting you install a network of your choice. The options covered in the official docs include Calico, Canal, Flannel, Kube-Router, Romana and Weave Net. Installing any of them is much the same process, but I’ll stick with Calico for this tutorial.

> kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
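
Once the Calico pods are up (you can watch them with kubectl get pods --namespace kube-system), the nodes should flip to Ready:

> kubectl get nodes

NAME      STATUS    ROLES     AGE   VERSION
master    Ready     master    10m   v1.9.1
worker    Ready     <none>    6m    v1.9.1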

For more information about using kubeadm, check the official kubeadm documentation.

Starting a Cluster — Kubernetes (using minikube)

If you want to experiment with Kubernetes on your local machine, there is a great tool called minikube that spins up a Kubernetes cluster inside a virtual machine. I won’t go into much detail here, but you can get minikube running on your system with:

> minikube start
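
minikube configures kubectl to point at the VM, so once it finishes the usual commands just work (output illustrative):

> kubectl get nodes

NAME       STATUS    ROLES     AGE   VERSION
minikube   Ready     <none>    1m    v1.9.0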

For more information, check the minikube documentation.


Running a service

Now that we have a cluster running, let’s spin up some services! While there are some differences under the hood, doing so is very similar in both orchestrators.

Running a Service — Docker Swarm (inline)

To run a service with an inline command, simply do:

> docker service create --publish 80:80 --name nginx nginx:latest
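
To check that the service is up, and on which node the task landed, you can use docker service ps (the ID below is illustrative):

> docker service ps nginx

ID             NAME      IMAGE          NODE      DESIRED STATE   CURRENT STATE
uyqvrw8yxur3   nginx.1   nginx:latest   worker1   Running         Running 30 seconds ago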

Running a Service — Kubernetes (inline)

As you may imagine, doing the same thing in Kubernetes is not that different:

> kubectl run nginx --image=nginx:latest
deployment "nginx" created

> kubectl expose deployment nginx --port 80 --type NodePort
service "nginx" exposed

As seen above, we needed two commands to replicate Swarm’s behavior. The main difference between the two orchestrators is that with Swarm we explicitly exposed port 80 on the host, while Kubernetes picks a random port from a pre-configured range. We can choose the port with a flag, but it needs to fall within that range. We can query the selected port using:

> kubectl get services

NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
nginx   NodePort   10.105.188.192   <none>        80:30149/TCP
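
With the node port in hand, the service is reachable on that port on any node of the cluster. Using the port from the example output above (substitute a real node IP):

> curl http://<node-ip>:30149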

Running a Service — Docker Swarm (YAML)

You can define services (as well as volumes, networks and configs) in a Stack File. A Stack File is a YAML file that uses the same notation as Docker Compose, with added functionality. Let’s spin up our nginx service using this technique:

> cat nginx.yml

version: '3'

services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    deploy:
      mode: replicated
      replicas: 1

> docker stack deploy --compose-file nginx.yml nginxstack

Creating network nginxstack_default
Creating service nginxstack_nginx
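
As with inline services, you can inspect the replica state of the stack’s services (the ID below is illustrative):

> docker stack services nginxstack

ID             NAME               MODE         REPLICAS   IMAGE          PORTS
p3xkamhvd0cd   nginxstack_nginx   replicated   1/1        nginx:latest   *:80->80/tcp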

As we didn’t specify any network, Docker Swarm created one for us. Keep in mind this means the nginx service cannot be accessed via its service name from a service in another stack. If we want that, we can either define all the services that need to communicate with each other in the same YAML (along with a network), or import a pre-existing overlay network in both stacks, as sketched below.
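
A minimal sketch of the second approach, assuming a pre-created attachable overlay network called shared-net (the name is a placeholder):

> docker network create --driver overlay --attachable shared-net

Then, in every stack that needs it:

version: '3'

services:
  nginx:
    image: nginx:latest
    networks:
      - shared-net

networks:
  shared-net:
    external: true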

Running a Service — Kubernetes (YAML)

Kubernetes lets you create resources via Kubernetes Manifest Files, which can be written in either YAML or JSON. YAML is the recommended choice, as it’s pretty much the standard.

> cat nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80

> kubectl apply -f nginx.yml

service "nginx" createddeployment "nginx" created

Because it’s built around a more modular architecture, Kubernetes requires two resources to achieve the functionality Swarm provides with one: a Deployment and a Service.

A Deployment pretty much defines the characteristics of a service: it is where containers, volumes, secrets and configurations are defined. Deployments also define the number of replicas, and the replication and placement strategies. You can see one as the equivalent of a single service definition in a Swarm stack file, minus the load balancing.

In fact, Deployments are a higher-level abstraction over lower-level Kubernetes resources such as Pods and ReplicaSets. Everything in the template section of the Deployment definition describes a Pod, which is the smallest unit of scheduling Kubernetes provides. A Pod is not the same as a container: it’s a set of resources meant to be scheduled together; for example, a container and a volume, or two containers. In most cases a Pod will contain only one container, but it’s important to understand the difference.
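
As an illustration only (you would rarely write bare Pods by hand, since Deployments create them for you), here is a minimal sketch of a two-container Pod sharing a volume; the sidecar and all names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared-content
  # hypothetical sidecar that refreshes the content served by nginx
  - name: content-writer
    image: busybox
    command: ["sh", "-c", "while true; do date > /content/index.html; sleep 5; done"]
    volumeMounts:
    - mountPath: /content
      name: shared-content
  volumes:
  - name: shared-content
    emptyDir: {}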

The second part of the file defines a Service resource, which can be seen as a way to refer to a set of Pods on the network and load balance between them. The type NodePort tells Kubernetes to assign an externally accessible port on every node of the cluster (the same port on all nodes), which is what Swarm did as well. You tell a Service what to load balance between using selectors, which is why labeling is so important in Kubernetes.

In this area Kubernetes is much more powerful: for example, you can define a service of type LoadBalancer, which (given prior configuration) will spawn a load balancer in your cloud provider, such as an ELB in AWS, pointing to your service. The default service type is ClusterIP, which defines a service reachable from anywhere inside the cluster on a given port, but not externally. Using ClusterIP is the equivalent of defining a Swarm service without an external port mapping.
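
For instance, a minimal sketch of a LoadBalancer service; on AWS, with the cloud provider configured, this would provision an ELB forwarding to the nginx Pods:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80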

Creating volumes

Volumes are needed to maintain state and provide configuration. Both orchestrators provide simple ways to define them, but Kubernetes takes the lead with far more capabilities.

Creating volumes — Docker Swarm

Let’s add a volume to our nginx service:

> cat nginx.yml

version: '3'

services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - nginx-volume:/srv/www
    deploy:
      mode: replicated
      replicas: 1

volumes:
  nginx-volume:

This is the simplest case, and obviously this kind of volume doesn’t provide any real benefit here, but it’s enough for a demonstration.
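
After deploying the stack, the named volume appears on whichever node runs the task, prefixed with the stack name (assuming the stack was deployed as nginxstack, as before):

> docker volume ls

DRIVER    VOLUME NAME
local     nginxstack_nginx-volume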

Creating volumes — Kubernetes

Doing the same in K8S is pretty easy:

> cat nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /srv/www
          name: nginx-volume
      volumes:
      - name: nginx-volume
        emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80

The emptyDir volume type is the simplest volume Kubernetes provides: it maps a folder inside the container to a folder on the node that disappears when the Pod is removed. Kubernetes ships with 26 volume types, which I think covers pretty much any use case. For example, you can define a volume backed by an EBS volume in AWS, as sketched below.
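
As a sketch, an EBS-backed volume would replace the emptyDir definition like this; the volume ID is a placeholder for an EBS volume created beforehand in the same availability zone as the node:

volumes:
- name: nginx-volume
  awsElasticBlockStore:
    volumeID: <volume-id>
    fsType: ext4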

That’s it

There are certainly more resources than services and volumes, but I’ll leave them out of this guide for now. One of my favorite resources in Kubernetes is the ConfigMap, which is similar to a Docker Config but provides better functionality. I’ll make an effort to write another guide comparing those two, but for now, let’s call it a day.


Conclusion

Using Kubernetes the same way as Swarm is easier than ever. It will take us a while to make the decision to migrate all our infrastructure to Kubernetes; at the time of this writing, Swarm gives us all we need. But it’s nice to know that the entry barrier to K8S keeps getting lower as time goes by.

I’m a Software Engineer based in Buenos Aires, Argentina. Currently working as a Platform Engineer at Etermax, the leading Mobile Gaming company in Latin America.

