This tutorial is for anyone who wants to try out a Kubernetes installation on CentOS. In this article, I have simplified the installation into 15 steps for installing Kubernetes on the CentOS “bento/centos-7” Vagrant box. Before you begin with the installation, here are the prerequisites.

Prerequisites

Reading time is about 20 minutes
Vagrant 2.2.7 or later – for installation instructions click here
VM VirtualBox – for installation instructions click here

Step 1: Start your vagrant box

Use the following Vagrantfile to spin up your vagrant boxes. We are going with two VMs here:

Master Node – 2 CPUs, 2 GB memory (assigned IP – 100.0.0.1)
Worker Node – 1 CPU, 1 GB memory (assigned IP – 100.0.0.2)

Vagrant.configure("2") do |config|

  config.vm.define "master" do |master|
    master.vm.box_download_insecure = true
    master.vm.box = "bento/centos-7"
    master.vm.network "private_network", ip: "100.0.0.1"
    master.vm.hostname = "master"
    master.vm.provider "virtualbox" do |v|
      v.name = "master"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "worker" do |worker|
    worker.vm.box_download_insecure = true
    worker.vm.box = "bento/centos-7"
    worker.vm.network "private_network", ip: "100.0.0.2"
    worker.vm.hostname = "worker"
    worker.vm.provider "virtualbox" do |v|
      v.name = "worker"
      v.memory = 1024
      v.cpus = 1
    end
  end

end
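If the VMs are not running yet, the usual Vagrant workflow looks like this (a quick sketch; run it from the directory that contains the Vagrantfile above, and note that the bento/centos-7 box is downloaded on the first run, which can take a few minutes):

$ vagrant up          # creates and boots both the "master" and "worker" VMs
$ vagrant status      # both machines should be reported as "running"
$ vagrant ssh master  # opens a shell on the master node

Once both machines show up as running, continue with the next step.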
Step 2: Update /etc/hosts on both nodes (master, worker)

Master node – SSH into the master node

$ vagrant ssh master
vagrant@master:~$ sudo vi /etc/hosts

100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker

Worker node – SSH into the worker node

$ vagrant ssh worker
vagrant@worker:~$ sudo vi /etc/hosts

100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker

Test the worker node by sending a ping from the master

[vagrant@master ~]$ ping worker
PING worker.jhooq.com (100.0.0.2) 56(84) bytes of data.
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=1 ttl=64 time=0.462 ms
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=2 ttl=64 time=0.686 ms

Test the master node by sending a ping from the worker

[vagrant@worker ~]$ ping master
PING master.jhooq.com (100.0.0.1) 56(84) bytes of data.
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=2 ttl=64 time=0.510 ms

Step 3: Install Docker on both nodes (master, worker)

You need to install Docker on both nodes, so run the following installation command on each of them:

[vagrant@master ~]$ sudo yum install docker -y

Enable Docker on both the master and the worker node:

[vagrant@master ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start Docker on both the master and the worker node:

[vagrant@master ~]$ sudo systemctl start docker

Check the Docker service status:

[vagrant@master ~]$ sudo systemctl status docker

The Docker service should be up and running and you should get the following output on the terminal:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-23 18:00:12 UTC; 26s ago
     Docs: http://docs.docker.com
 Main PID: 11892 (dockerd-current)

Step 4: Disable SELinux on both nodes (master, worker)

Disable SELinux using the following commands:

[vagrant@master ~]$ sudo setenforce 0
[vagrant@master ~]$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Step 5: Disable the CentOS firewall on both nodes (master, worker)

Master node

[vagrant@master ~]$ sudo systemctl disable firewalld
[vagrant@master ~]$ sudo systemctl stop firewalld

Worker node

[vagrant@worker ~]$ sudo systemctl disable firewalld
[vagrant@worker ~]$ sudo systemctl stop firewalld

Step 6: Disable swap on both nodes (master, worker)

Kubernetes requires swap to be turned off, so disable swapping on the master as well as the worker node. Run the following command on both nodes:

[vagrant@master ~]$ sudo swapoff -a

Step 7: Enable the usage of iptables on both nodes (master, worker)

Enable iptables to see bridged traffic, which prevents routing errors, by setting the following runtime parameters:

[vagrant@worker ~]$ sudo bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
[vagrant@worker ~]$ sudo bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
[vagrant@worker ~]$ sudo sysctl --system

Step 8: Add the Kubernetes repo to yum.repos.d on both nodes (master, worker)

[vagrant@master ~]$ sudo vi /etc/yum.repos.d/kubernetes.repo

Add the following repo details:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Step 9: Install Kubernetes on both nodes (master, worker)

[vagrant@master ~]$ sudo yum install -y kubeadm kubelet kubectl

Step 10: Enable and start kubelet on both nodes (master, worker)

Run the following commands on both the master and the worker node.

Enable the kubelet:

[vagrant@worker ~]$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Start the kubelet:

[vagrant@master ~]$ sudo systemctl start kubelet

Step 11: Initialize the Kubernetes cluster (only on master node)

Initialize the Kubernetes cluster (--apiserver-advertise-address=100.0.0.1 is the IP address we assigned to the master in the Vagrantfile and /etc/hosts):

[vagrant@master ~]$ sudo kubeadm init --apiserver-advertise-address=100.0.0.1 --pod-network-cidr=10.244.0.0/16

Note down the kubeadm join command printed at the end of the output:

kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf \
    --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c
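If you lose this join command later, you do not have to re-run kubeadm init; as a small aside, kubeadm can regenerate a fresh token together with the full join command on the master (the token in your output will of course differ from the one shown above):

[vagrant@master ~]$ sudo kubeadm token create --print-join-command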
Step 12: Move the kube config file to the current user (only run on master)

To interact with the Kubernetes cluster and to use the kubectl command, we need to have the kube config file with us. Use the following commands to copy the kube config file into the current user's home directory:

[vagrant@master ~]$ mkdir -p $HOME/.kube
[vagrant@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[vagrant@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 13: Apply the CNI from kube-flannel.yml (only run on master)

After the master of the cluster is ready to handle jobs and the services are running, we need to set up the network for container communication so that containers can reach each other.

Get the CNI (container network interface) configuration from flannel:

[vagrant@master ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note – since we are working on VMs, we need to check our Ethernet interfaces first. Look for the interface, i.e. eth1, which has the IP address 100.0.0.1 (this is the IP address we used in the Vagrantfile):

[vagrant@master ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> ...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:bb:14:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15 ...
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:fb:48:77 brd ff:ff:ff:ff:ff:ff
    inet 100.0.0.1 ...
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> ...

Now we need to add the extra argument for eth1 in kube-flannel.yml:

[vagrant@master ~]$ vi kube-flannel.yml

Search for "flanneld" and add "- --iface=eth1" to the args section:

args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1

Apply the flannel configuration:

[vagrant@master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Step 14: Join the master node (run only on worker node)

In Step 11 we generated the token and the kubeadm join command. Now we need to run that join command from our worker node:

[vagrant@worker ~]$ sudo kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c
W0423 18:50:54.480382    8100 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
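Before checking the node status, it can help to confirm from the master that the flannel and kube-proxy DaemonSet pods have been scheduled onto the freshly joined worker (pod names and ages will differ in your cluster):

[vagrant@master ~]$ kubectl get pods -n kube-system -o wide

Look for a kube-flannel-ds-amd64-* and a kube-proxy-* pod whose NODE column shows worker; once they are Running, the worker should report Ready in the next step.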
Step 15: Check the node status (only run on master)

Check the node status on the master:

[vagrant@master ~]$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   26m   v1.18.2
worker   Ready    <none>   63s   v1.18.2

For more similar Kubernetes articles, please refer to 14 Steps to Install Kubernetes on Ubuntu.

Previously published at https://jhooq.com/15-steps-to-install-kubernetes-on-bento-centos7