Minikube is an ideal tool for setting up Kubernetes (k8s from now on) locally to test and experiment with your deployments.
In this guide I will try to help you get it up and running on your local machine, drop some tips on where and how particular things should be done, and also make it Helm capable (I assume that when you use k8s, at some point you will want to learn about and use Helm, etcd, Istio, etc.).
This is your local k8s environment scaffolding guide.
Minikube runs inside a virtual machine, and for this it can use various options depending on your preference and operating system. My preference in this case is Oracle’s VirtualBox.
You can use brew to install everything:
$ brew cask install virtualbox minikube
You may get an inconclusive installation error related to VirtualBox, especially on Mojave and probably every macOS version after it.
Whatever it says, it is most probably a new macOS security feature standing in your way.
Go to System Preferences > Security & Privacy and on the General tab you will see one (or a few) messages about software needing approval to install. Carefully review the list if there is more than one entry and allow installation of the software you need, in this case software by Oracle.
After that is done you can re-run the command above, and once it finishes you should be ready for the next steps.
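To confirm the installation went through, you can check that both tools respond (version numbers will of course differ on your machine):
$ minikube version
$ VBoxManage --version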
Starting it would be as easy as
$ minikube start
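If minikube does not pick VirtualBox on its own, or you want to give the VM more resources, you can be explicit about it (flag names can vary slightly between minikube versions, so treat this as a sketch):
$ minikube start --vm-driver=virtualbox --cpus=2 --memory=4096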
In order to optimally utilize your local machine’s resources I would suggest stopping it when you do not need it any more. With VirtualBox at the center of it, it will go through your laptop’s battery pretty quickly. Starting it again later will get you back where you left off:
$ minikube stop
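At any point you can check what state the cluster is in:
$ minikube status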
The Kubernetes dashboard is also available to you (while minikube is running):
$ minikube dashboard
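If you prefer to open it in your own browser instead of having one launched for you, you can just print the URL:
$ minikube dashboard --url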
I will assume you have kubectl installed locally and that you are already using it for some remote clusters, so you have multiple contexts. In that case, you need to list the contexts and switch to the minikube one (the following commands assume the default name, which is, of course, “minikube”):
$ kubectl config get-contexts
$ kubectl config use-context minikube
Now you are in the context of your local k8s cluster that runs on minikube and you can do all the k8s things in it.
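A quick sanity check that kubectl really talks to the minikube cluster:
$ kubectl get nodes
$ kubectl get pods --all-namespaces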
To run your deployments that have ingress (and I assume most of them will), you will need the ingress add-on:
$ minikube addons enable ingress
Make sure that you set up ingress based on your local hosts. It basically means that whatever you set as the host in your ingress rules needs to be set up in your /etc/hosts file:
[minikube ip] your.host
Where “[minikube ip]” should be replaced with the actual minikube IP. It also works with multiple, space-separated local hosts after the minikube IP.
Here is a shortcut to do it in bash:
$ echo "$(minikube ip) local.host" | sudo tee -a /etc/hosts
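For reference, here is a minimal ingress sketch that matches the local.host entry added above; the service name and port are placeholders for whatever your deployment exposes, and depending on your k8s version the apiVersion and backend format may differ (this is the networking.k8s.io/v1 form):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ingress
spec:
  rules:
    - host: local.host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80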
The reality of real container registry usage in a local environment is a rough one, so I will provide an easy, quick and dirty option that makes it simple to deploy your local work to your local k8s, but deprives you of the really important experience of using a proper container registry.
Point your local docker client at minikube’s docker daemon:
$ eval $(minikube docker-env)
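You can verify you are talking to minikube’s docker daemon by listing containers; you should see the k8s system containers instead of your usual local ones:
$ docker ps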
To revert:
$ eval $(docker-machine env -u)
When in the minikube context, start a local docker registry:
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
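To double-check that the registry is up, you can hit its catalog endpoint; since the container runs inside the minikube VM, I am assuming its published port is reachable via the minikube IP:
$ curl http://$(minikube ip):5000/v2/_catalog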
So, now you have a local registry to push stuff to (as long as your docker is in the context of minikube).
You can now do:
$ docker build . -t <your_tag>
$ docker tag <your_tag> localhost:5000/<your_tag>:<version>
$ docker push localhost:5000/<your_tag>:<version>
At this point you can use localhost:5000/<your_tag>:<version> as the image in your deployment and that is it.
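As a sketch, the relevant fragment of such a deployment’s pod spec could look something like this (my-app and the version are placeholder names):
containers:
  - name: my-app
    image: localhost:5000/my-app:0.1.0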
To use a remote container repo locally you need to provide a way to authenticate, which is done through k8s secrets.
For local secrets management for ECR, GCR and Docker Registry I recommend using the minikube add-on called registry-creds. I do not consider it safe enough to be used anywhere but in a local env.
$ minikube addons configure registry-creds
$ minikube addons enable registry-creds
A note on ECR setup: if you are setting it up for AWS ECR and you do not have a role ARN you want to use (you usually won’t have one, and it is optional), set it to something random like “changeme”. It requires a value; if you just press enter (since it is optional), deployment of the creds pod will fail and make your life miserable.
In the case of AWS ECR, that will let you pull from your repo directly by setting its URL as the container image and adding a pull secret named awsecr-cred:
imagePullSecrets:
  - name: awsecr-cred
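In context, a pod spec pulling from ECR might look roughly like this; the account ID, region and repo name are placeholders you would replace with your own:
spec:
  containers:
    - name: my-app
      image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-app:latest
  imagePullSecrets:
    - name: awsecr-cred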
I have to note here that running this locally worked quite chaotically for me, and every session was a new experience and a new hack to make it work… Not a happy path.
Helm is a package manager for k8s, and is often used for configuration management across deployments. With the tool’s high popularity and rising adoption, I want to end this guide with a note about adding Helm to your local k8s env.
It is quite easy at this point, just have minikube up and:
$ brew install kubernetes-helm
$ helm init
This information should be deprecated pretty soon, but for now Helm uses a backend called Tiller, and that is what gets installed/deployed during helm init execution.
You should check the Tiller deployment with:
$ kubectl describe deploy tiller-deploy --namespace=kube-system
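Once the Tiller pod is running, you can confirm that the helm client can reach it; with Helm 2 (which is what helm init gives you), helm version should report both a client and a server (Tiller) version, and helm ls should return without errors:
$ helm version
$ helm ls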
Valuable read: https://docs.helm.sh/using_helm/
Now you have a full local k8s environment able to accept all of your test deployments before you decide to put them in the cloud (or on a “raw iron” server somewhere).
HAPPY SCALING