Navigating with Kubernetes

Written by durch | Published 2017/03/29

A map for sailing with your very own Kubernetes cluster, plus Heapster. Also, Kubernetes is awesome!

Or as Google would put it:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes is most awesome if you are using it through a cloud provider, such as Google, Amazon, Microsoft or OpenShift. If, on the other hand, you are a bit more adventurous and want to set up a cluster on your very own servers, things get a bit more complicated and involved.

Not because Kubernetes is not awesome (it still is), but the documentation is not always the clearest or the most complete, some things are less than obvious, and, last but not least, the latest Kube version was released about a day ago, which means breakage.

We will not get derailed by the new features, which are also awesome (5k nodes, weeeee); we will just be getting everything up and running. We’ll also be happy while doing it, and we’ll stay happy (until the next version hits, that is :)).

We’ll be using a great, fresh-out-of-alpha utility, kubeadm, and following along a very nice quick start; we’ll skip the boring parts and spend some time on other things instead.

The quick start we are following is intended for Ubuntu 16.04+, CentOS 7 or HypriotOS v1.0.1+; I can tell you it also works on Debian Jessie. We’ll be taking the Debian branch package-wise, everything else is pretty much identical. So let’s get our feet wet (do everything as root, we are having fun after all, or use sudo), and run all of this in screen or tmux.

It all starts with Docker, as it usually does…
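A rough sketch of that step (assuming Debian Jessie; the docker.io package may need to come from jessie-backports, or you can install docker-engine from Docker’s own repository instead):

apt-get update
apt-get install -y docker.io       # may require jessie-backports on Debian Jessie
systemctl enable docker && systemctl start docker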

followed by the things we really care about (k8s):
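Roughly along the lines of the quick start of that era (the apt repository has since moved, so treat the URLs as historical):

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# the xenial channel also works on Jessie (an assumption; use your release's own channel if it has one)
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni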

All set, let’s init our cluster. We’ll be using [flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) as our network plugin of choice (don’t ask, or click the links :)), therefore you need the --pod-network-cidr option, otherwise hair pulling ensues.

kubeadm init --pod-network-cidr=10.244.0.0/16

You should see something like this:

And that’s it; you’ll now be seeing an endless screen of the line below.

This has been fixed in 1.6.1 (allegedly :)), so if your cluster simply finishes initializing, feel free to skip to the flannel configuration below…

[apiclient] Waiting for at least one node to register and become ready

and that’s just not very nice. It turns out there is a kink in the post-alpha kubeadm: a node is not considered ready until networking is ready, whereas in previous versions you would set up networking afterwards (or, you know, kubeadm would actually finish…).

While apiclient is still assaulting you, switch to another screen or tmux window. We only care about the config files at this point, and we have those already; the lines below are good lines,

we like them a lot. In the window you’ve switched to, run:

cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

This will allow you to interact with your cluster and set up this networking nobody is talking about; if you remember, we’ll be using flannel (you really don’t need to care, yet). A notable change from previous versions of Kube is that you need to set up roles and role bindings, and the flannel folks have our back :). CoreOS is also awesome, mind you :).

git clone https://github.com/coreos/flannel.git
kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
kubectl apply -f flannel/Documentation/kube-flannel.yml

After you’re done with that, you can return to the window where the cluster has been initializing, and you should see it initialized (beers for everybody…).

Take note of the line similar to:

kubeadm join --token <token> <master-ip>:<master-port>

Disregard everything else, you did that already (unless everything is different this time).

Do **export KUBECONFIG=$HOME/admin.conf** again for good luck, and let’s see how we are doing; **watch kubectl get pods --all-namespaces** should get you something like this:

Every 2.0s: kubectl get pods -n kube-system Wed Mar 29 21:13:57 2017

NAME                               READY   STATUS    RESTARTS   AGE
etcd-sd-85956                      1/1     Running   0          2m
kube-apiserver-sd-85956            1/1     Running   0          2m
kube-controller-manager-sd-85956   1/1     Running   0          2m
kube-dns-3913472980-3v7hb          3/3     Running   0          2m
kube-flannel-ds-2hptj              2/2     Running   0          2m
kube-proxy-jx2n7                   1/1     Running   0          2m
kube-proxy-tlplt                   1/1     Running   0          2m
kube-scheduler-sd-85956            1/1     Running   0          2m

You have your very own Kubernetes (almost) cluster; after all, we need more than one node to call it a cluster. We’ll call the new node node2, meaning the previous one becomes node1.

In case you want to stop at one node, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

That way, pods will actually schedule on a master node.

Now might be a good time to open up your firewall. As we are on Debian and I use ufw, run ufw allow from <node2-ip> to any on node1 and ufw allow from <node1-ip> to any on node2. This could get cumbersome with many nodes, but we don’t have many ATM.
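If you’d rather not allow everything, a tighter sketch might open only the ports the cluster actually uses; the port numbers below are the usual defaults (6443 API server, 10250 kubelet, 8472/udp flannel VXLAN), but the lazy allow-everything rules above work just as well:

# on node1, assuming default ports; mirror the same rules on node2 with <node1-ip>
ufw allow from <node2-ip> to any port 6443 proto tcp    # API server
ufw allow from <node2-ip> to any port 10250 proto tcp   # kubelet
ufw allow from <node2-ip> to any port 8472 proto udp    # flannel VXLAN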

That should leave you optimistic and unblocked, as well as ready to proceed, now on the slave machine:

kubeadm join --token <token> <master-ip>:<master-port>

The very same line that node1 instructed us to run, awesome. Provided nothing unexpected happens, you should be able to run kubectl get nodes on node1 and see two nodes (happiness and parties and cake).

Now that you’re feeling pretty good about yourself, let’s tie up a few loose ends; first of all, everything is sad without some nice GUI (enter trolls).

In order to get a nice GUI, we’ll set up the Dashboard; here the new version will try to bite us again, but we’ll evade it swiftly. Kube version 1.6 uses [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) as the default form of auth, meaning you can’t just go accessing stuff willy-nilly. We already mentioned this above, near the role/rolebinding passive-aggressive remark.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The above line sets us up with a dashboard that we can’t really access; we also need to configure a role and juggle some great balls of fire. In a text editor of your choice (vim), create admin-role.yml on node1:
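A minimal sketch of what that file could contain, binding the Dashboard’s service account to the cluster-admin role (the binding name is my own, and cluster-admin is a very permissive choice, fine for a toy cluster):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin   # name is an assumption, pick whatever you like
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin                # grants everything; good enough for playing around
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard         # the service account the dashboard manifest creates
  namespace: kube-system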

Then, with the conviction of a true helmsman, run **kubectl apply -f admin-role.yml** (bask in the ever-shining light of glorious victory).

Now our dashboard can actually access the values it needs in order to be super useful.

Now, in order for you to actually access it, you need kubectl locally.
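How you get kubectl depends on your local OS; one way that worked back then (treat the URL and version as assumptions, the official docs have the current method) is grabbing the binary directly:

# adjust OS/arch (e.g. darwin/amd64) and version for your machine
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/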

After you’ve done that, you’ll also want to grab admin.conf from /etc/kubernetes on node1, and for the coup de grâce, run **kubectl --kubeconfig ./admin.conf proxy** on your local machine.
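Something along these lines, assuming you can SSH into node1 as root:

scp root@<node1-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy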

Proxy should be listening on 127.0.0.1:8001. With all the confidence in the world, point your browser to [http://127.0.0.1:8001/ui](http://127.0.0.1:8001/ui).

I really hope you get a functioning Kubernetes Dashboard, otherwise I’ll understand if you hate me a little.

If you still have the energy, there is one final step left, and that is Heapster, i.e. monitoring for this awesome cluster of yours. Compared to what you have already done, this is easy peasy. On node1:

git clone https://github.com/kubernetes/heapster.git
kubectl create -f heapster/deploy/kube-config/influxdb/

In order to get it really, really working you’ll need the IP of the influxdb service, which you can grab from the Dashboard (image below).

The Services menu on the left side holds this very useful info
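If you’d rather not hunt through the GUI, kubectl on node1 should give you the same info; monitoring-influxdb is the service name the Heapster manifests use, if memory serves:

kubectl get svc monitoring-influxdb -n kube-system

Armed with the IP, run: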

curl -X POST -G "<influxdb-cluster-ip>:8086/query" --data-urlencode 'db=k8s' --data-urlencode 'q=CREATE RETENTION POLICY "default" ON "k8s" DURATION INF REPLICATION 1 DEFAULT'

This fixes some bugs that might have already been fixed upstream. You know you’re done when what you have looks something like this; it might take a few minutes for the pretty charts to show up (also, you have Grafana now, you should go find it after you’re done here).

Total victory

You now have a fully functional Kubernetes cluster. It is not doing anything much yet, but it IS fully functional. You can deploy some public images and have some fun. I did not manage to deploy anything using the dashboard (some JS errors, totally not my fault), but yml is more expressive anyway.
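As a quick smoke test that doesn’t even need any yml (nginx is just a convenient public image, the names are mine):

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc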

There you go, you are mega-empowered to make some noise. With some effort it is also possible to set up a private container registry for supersecretprivatestuff, but now is not the time :)

Congratulations on getting all the way through, feel free to hound me with questions and/or curses, looking forward to it.

Also, in order to access stuff in the cluster you’ll need to route to it; since this turned out rather long, I won’t be getting into that here. If anyone is interested, drop me a line and I’ll help out…

Oh, and I do apologize for any and all unclarity and confusion; my k8s-fu is a bit rusty, I mostly do Rust these days :)

