Deploying Kubernetes On-Premise with RKE and deploying OpenFaaS on it — Part 1

Written by kenfdev | Published 2017/12/09
Tech Story Tags: docker | rancher | kubernetes | openfaas | rke


I’m a big fan of Rancher and am very excited about how their RKE (Rancher Kubernetes Engine) is going to evolve and ease the way I deploy Kubernetes. Since I’m heavily investing my time in OpenFaaS (an open source serverless platform), I’d like an easy way to deploy it on top of a Kubernetes cluster created by RKE. In this post I’d like to show:

  • How to deploy a Kubernetes cluster with 2 nodes (1 master & 1 worker) using RKE
  • How to deploy OpenFaaS, via Helm, on the Kubernetes cluster you just deployed

The following diagram shows how the components relate to each other:

Prerequisites

  • 2 hosts that can run Docker (versions 1.12 to 17.03 are supported; I’ll be using two Ubuntu 16.04 hosts with Docker 17.03-ce)

  • Each of my hosts has 1 CPU core and 1 GB of RAM

Deploy a Kubernetes Cluster with RKE

If you haven’t read “Announcing RKE, a Lightweight Kubernetes Installer” already, take a look and try it out. In addition, if you have time, you should watch “Managing Kubernetes Clusters with Rancher 2.0 — November 2017 Online Meetup”, as it explains the newest features of Rancher 2.0 and covers RKE as well.

Download RKE

You can download RKE from here. It’s a simple CLI tool to deploy Kubernetes. If you’re using macOS you’ll be downloading rke_darwin-amd64. Rename it to rke and don’t forget to give it execution permissions via chmod +x rke. From here on, I’m assuming you’ve added rke to your PATH.
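On macOS, that setup boils down to something like the following (a minimal sketch, assuming the rke_darwin-amd64 binary is in your current directory; any directory on your PATH works in place of /usr/local/bin):

mv rke_darwin-amd64 rke
chmod +x rke
sudo mv rke /usr/local/bin/

Confirm that you can execute rke. You should see something similar to this: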

NAME:
   rke - Rancher Kubernetes Engine, Running kubernetes cluster in the cloud

USAGE:
   rke [global options] command [command options] [arguments...]

VERSION:
   v0.0.8-dev

AUTHOR(S):
   Rancher Labs, Inc.

COMMANDS:
     up       Bring the cluster up
     remove   Teardown the cluster and clean cluster nodes
     version  Show cluster Kubernetes version
     config   Setup cluster configuration
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d    Debug logging
   --help, -h     show help
   --version, -v  print the version

Create RKE Config

Now that we can use rke, let’s create a config in order to deploy Kubernetes to our hosts. Execute rke config and you’ll be prompted to answer some questions. The following diagram describes my hosts and their roles in Kubernetes: basically, Host1 will be the master node and Host2 will be the worker node.

Host Description

With this in mind, my rke config answers look like this:

Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
Number of Hosts [3]: 2
SSH Address of host (1) [none]: 203.104.214.176
SSH Private Key Path of host (203.104.214.176) [none]:
SSH Private Key of host (203.104.214.176) [none]:
SSH User of host (203.104.214.176) [ubuntu]: root
Is host (203.104.214.176) a control host (y/n)? [y]: y
Is host (203.104.214.176) a worker host (y/n)? [n]: n
Is host (203.104.214.176) an Etcd host (y/n)? [n]: y
Override Hostname of host (203.104.214.176) [none]:
Internal IP of host (203.104.214.176) [none]:
Docker socket path on host (203.104.214.176) [/var/run/docker.sock]:
SSH Address of host (2) [none]: 203.104.227.60
SSH Private Key Path of host (203.104.227.60) [none]:
SSH Private Key of host (203.104.227.60) [none]:
SSH User of host (203.104.227.60) [ubuntu]: root
Is host (203.104.227.60) a control host (y/n)? [y]: n
Is host (203.104.227.60) a worker host (y/n)? [n]: y
Is host (203.104.227.60) an Etcd host (y/n)? [n]: n
Override Hostname of host (203.104.227.60) [none]:
Internal IP of host (203.104.227.60) [none]:
Docker socket path on host (203.104.227.60) [/var/run/docker.sock]:
Network Plugin Type [flannel]: calico
Authentication Strategy [x509]:
Etcd Docker Image [quay.io/coreos/etcd:latest]:
Kubernetes Docker image [rancher/k8s:v1.8.3-rancher2]:
Cluster domain [cluster.local]:
Service Cluster IP Range [10.233.0.0/18]:
Cluster Network CIDR [10.233.64.0/18]:
Cluster DNS Service IP [10.233.0.3]:
Infra Container image [gcr.io/google_containers/pause-amd64:3.0]:

Quick Note: my hosts only had a root user, so I’m using root, but any user who can run docker can be set here. Additionally, I’m using calico for networking, but flannel and canal are supported as well (weave is probably coming in the next release, based on this PR from my co-worker).

This generates a cluster.yml file and the contents should look like this:

nodes:
- address: 203.104.214.176
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: root
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
- address: 203.104.227.60
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: root
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
services:
  etcd:
    image: quay.io/coreos/etcd:latest
    extra_args: {}
  kube-api:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
    service_cluster_ip_range: 10.233.0.0/18
  kube-controller:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
    cluster_cidr: 10.233.64.0/18
    service_cluster_ip_range: 10.233.0.0/18
  scheduler:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
  kubelet:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
    cluster_domain: cluster.local
    infra_container_image: gcr.io/google_containers/pause-amd64:3.0
    cluster_dns_server: 10.233.0.3
  kubeproxy:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
network:
  plugin: calico
  options: {}
auth:
  strategy: x509
  options: {}
addons: ""
system_images: {}
ssh_key_path: ~/.ssh/id_rsa

Install Docker on the Hosts

I’ll use Docker 17.03-ce for this post, but any version Kubernetes supports should work (at the moment, this post deploys Kubernetes 1.8.3). One of the easiest ways to install Docker is with the install script provided by Rancher Labs, linked from the following page:

Hosts in Rancher — Documentation for Rancher (rancher.com)

The following command should work for Docker 17.03-ce:

curl https://releases.rancher.com/install-docker/17.03.sh | sh

Confirm that your Docker version is correct with docker version:
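docker version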

Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64
 Experimental: false

Register authorized_keys

Be sure you can access your hosts via an SSH key. Suppose you’re going to access each host with the private key ~/.ssh/id_rsa. You’ll likely have the matching public key at ~/.ssh/id_rsa.pub, so cat it and copy its content. On each of your hosts, paste the content into ~/.ssh/authorized_keys. Confirm that you can access your hosts via ssh.
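From your workstation, ssh-copy-id does the copy-and-paste for you (a convenience sketch; the IPs are the two hosts used throughout this post):

ssh-copy-id root@203.104.214.176
ssh-copy-id root@203.104.227.60

ssh root@203.104.214.176 'echo ok'   # confirm key-based login works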

Turn Swap Off in your Hosts

If you’re using your own on-premise machine, it’s likely that swap is on. Kubelet will fail to start, saying something like:

error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained:

Either disable swap with:

sudo swapoff -a

or set fail-swap-on: false in the kubelet section of your cluster.yml, like this:

kubelet:
  image: rancher/k8s:v1.8.3-rancher2
  extra_args:
    fail-swap-on: false
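Note that swapoff -a only disables swap until the next reboot. If you go that route, you’ll probably also want to comment out the swap entries in /etc/fstab so swap stays off permanently; a minimal sketch:

sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # keeps a .bak backup of the original fstab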

Deploy Kubernetes!

You’re all set now! Confirm that you’re in the directory with cluster.yml and execute rke up. Yes, that’s it!
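rke up

You should see rke deploying the components Kubernetes needs onto the specified hosts: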

INFO[0000] Building Kubernetes cluster
INFO[0000] [ssh] Setup tunnel for host [203.104.214.176]
INFO[0000] [ssh] Setup tunnel for host [203.104.214.176]
INFO[0001] [ssh] Setup tunnel for host [203.104.227.60]
INFO[0002] [certificates] Generating kubernetes certificates
INFO[0002] [certificates] Generating CA kubernetes certificates
INFO[0002] [certificates] Generating Kubernetes API server certificates
INFO[0002] [certificates] Generating Kube Controller certificates
INFO[0002] [certificates] Generating Kube Scheduler certificates
INFO[0003] [certificates] Generating Kube Proxy certificates
INFO[0003] [certificates] Generating Node certificate
INFO[0004] [certificates] Generating admin certificates and kubeconfig
INFO[0004] [reconcile] Reconciling cluster state
INFO[0004] [reconcile] This is newly generated cluster
INFO[0004] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0023] Successfully Deployed local admin kubeconfig at [./.kube_config_cluster.yml]
INFO[0023] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0023] [etcd] Building up Etcd Plane..
INFO[0023] [etcd] Pulling Image on host [203.104.214.176]
INFO[0028] [etcd] Successfully pulled [etcd] image on host [203.104.214.176]
INFO[0028] [etcd] Successfully started [etcd] container on host [203.104.214.176]
INFO[0028] [etcd] Successfully started Etcd Plane..
INFO[0028] [controlplane] Building up Controller Plane..
INFO[0028] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0086] [controlplane] Successfully pulled [kube-api] image on host [203.104.214.176]
INFO[0087] [controlplane] Successfully started [kube-api] container on host [203.104.214.176]
INFO[0087] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0089] [controlplane] Successfully pulled [kube-controller] image on host [203.104.214.176]
INFO[0089] [controlplane] Successfully started [kube-controller] container on host [203.104.214.176]
INFO[0090] [controlplane] Pulling Image on host [203.104.214.176]
INFO[0092] [controlplane] Successfully pulled [scheduler] image on host [203.104.214.176]
INFO[0092] [controlplane] Successfully started [scheduler] container on host [203.104.214.176]
INFO[0092] [controlplane] Successfully started Controller Plane..
INFO[0092] [worker] Building up Worker Plane..
INFO[0092] [worker] Pulling Image on host [203.104.214.176]
INFO[0095] [worker] Successfully pulled [kubelet] image on host [203.104.214.176]
INFO[0095] [worker] Successfully started [kubelet] container on host [203.104.214.176]
INFO[0095] [worker] Pulling Image on host [203.104.214.176]
INFO[0097] [worker] Successfully pulled [kube-proxy] image on host [203.104.214.176]
INFO[0098] [worker] Successfully started [kube-proxy] container on host [203.104.214.176]
INFO[0098] [worker] Pulling Image on host [203.104.227.60]
INFO[0103] [worker] Successfully pulled [nginx-proxy] image on host [203.104.227.60]
INFO[0103] [worker] Successfully started [nginx-proxy] container on host [203.104.227.60]
INFO[0103] [worker] Pulling Image on host [203.104.227.60]
INFO[0156] [worker] Successfully pulled [kubelet] image on host [203.104.227.60]
INFO[0156] [worker] Successfully started [kubelet] container on host [203.104.227.60]
INFO[0156] [worker] Pulling Image on host [203.104.227.60]
INFO[0159] [worker] Successfully pulled [kube-proxy] image on host [203.104.227.60]
INFO[0159] [worker] Successfully started [kube-proxy] container on host [203.104.227.60]
INFO[0159] [worker] Successfully started Worker Plane..
INFO[0159] [certificates] Save kubernetes certificates as secrets
INFO[0177] [certificates] Successfuly saved certificates as kubernetes secret [k8s-certs]
INFO[0177] [state] Saving cluster state to Kubernetes
INFO[0177] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0177] [network] Setting up network plugin: calico
INFO[0177] [addons] Saving addon ConfigMap to Kubernetes
INFO[0177] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0177] [addons] Executing deploy job..
INFO[0183] [addons] Setting up KubeDNS
INFO[0183] [addons] Saving addon ConfigMap to Kubernetes
INFO[0183] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0183] [addons] Executing deploy job..
INFO[0188] [addons] KubeDNS deployed successfully..
INFO[0188] [addons] Setting up user addons..
INFO[0188] [addons] No user addons configured..
INFO[0188] Finished building Kubernetes cluster successfully

How long the initial deploy takes will depend on your network connection, since rke needs to pull the Docker images; apart from the pulls, it finishes in a couple of minutes. After you see Finished building Kubernetes cluster successfully, you should find a file called .kube_config_cluster.yml. You can use kubectl with this config. Confirm that your nodes are working with the following command:

kubectl --kubeconfig .kube_config_cluster.yml get all --all-namespaces

This should list every resource running in your Kubernetes cluster.
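If you don’t want to pass --kubeconfig on every call, you can point the standard KUBECONFIG environment variable at the generated file instead (a kubectl convention, nothing rke-specific):

export KUBECONFIG=$PWD/.kube_config_cluster.yml
kubectl get nodes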

Wrap Up

Pretty simple, wasn’t it? You can easily create a Kubernetes cluster even with an environment at home (which is exactly what I did). Part 1 focused on creating a Kubernetes cluster; in Part 2, I’ll deploy OpenFaaS on top of it.

