This is the first article of a series of 3:

Kubernetes Adventures on Azure — Part 2 (Windows Cluster and trick for scaling Pods)
Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster)

In the last month I read 3 awesome books about Kubernetes:

Mastering Kubernetes by Gigi Sayfan, available on Amazon.
Kubernetes: Up and Running by Kelsey Hightower, Brendan Burns and Joe Beda, available on Amazon or on the great Safari Books Online.
Kubernetes in Action by Marko Lukša, available as a MEAP on the Manning web site.

Now it's time to start adventuring in the magical world of Kubernetes for real! And I will do it using Microsoft Azure.

Let's try Azure Container Service aka ACS, with its pros and cons (first try)

Microsoft Azure offers a ready-to-go Kubernetes solution: Azure Container Service (ACS). It seems the easiest way to test a Kubernetes cluster on Azure, if we don't consider the new Azure Container Instances, which hides Kubernetes behind the scenes, leaving you with simple deployments of containers charged by CPU, by memory and, moreover, by the second!

Let's try ACS! But first I want to highlight its current limits immediately, so that you are aware of them:

- No hybrid clusters with mixed Linux and Windows nodes.
- The versions used are not the latest (ACS Kubernetes 1.6.6 vs latest 1.7.4).
- I experienced some issues with the az acs CLI command, which seems (to me) not yet ready for prime time.

The easiest way to start our ACS journey is following the Microsoft article "Deploy Kubernetes cluster for Linux containers", which shows a beautiful "4 min to read" on top of the page. It will guide you in using Azure Cloud Shell to create a Kubernetes cluster with Linux-only nodes. Personally, I installed and used a local Azure CLI, following this article from Microsoft. Another article, "Deploy Kubernetes cluster for Windows containers", shows how to create a Kubernetes cluster with Windows-only nodes.
This missing hybrid deployment is a limitation for me, because I want to use a hybrid cluster with both Linux and Windows worker nodes. But I know for sure that this limitation can be overcome by using ACS Engine directly to manually deploy a Kubernetes cluster on Azure (another chapter in my adventure).

The main steps to install a Linux ACS Kubernetes cluster are:

1. Create a resource group:

az group create --name myAcsTest --location westeurope

2. Create a Kubernetes cluster:

az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest --name myK8sCluster \
  --generate-ssh-keys --agent-count 2

3. Connect to the cluster:

az acs kubernetes get-credentials --resource-group myAcsTest --name myK8sCluster

After a few minutes your cluster should be up and running with 1 master and 2 nodes, but I had no luck with it at first try.

Failure on step 2 (solved with a second try): on the first run of step 2 I received an error that disappeared on the second run of the command, probably because the newly created app credentials in AAD were not yet ready to be used. Here is the detailed error:

Deployment failed. {"error": {"code": "BadRequest", "message": "The credentials in ServicePrincipalProfile were invalid. Please see https://aka.ms/acs-sp-help for more details. (Details: AADSTS70001: Application with identifier …

Note on step 3 (solved by deleting and creating the cluster again in another way): this step failed with an "Authentication failed" error. Maybe due to the fact that there was already an id_rsa file under my user's .ssh folder?

The fast solution is deleting the cluster with the following command:

az group delete --name myAcsTest --yes --no-wait

and creating it again, but this time we will first create an SSH key pair on our own.

Let's try Azure Container Service again (second try)

From Linux/macOS you can follow this article to create an SSH key pair stored on your machine. This is really important and needed to connect to your Kubernetes cluster:
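If the AAD propagation error on step 2 keeps coming back, one possible workaround (a sketch of mine, not from the Microsoft article) is to pre-create the service principal yourself and hand its credentials to ACS explicitly. The service principal name myAcsTestSP is made up for the example, and you should verify the --service-principal / --client-secret arguments against your az CLI version:

```shell
# Pre-create the service principal; the output includes an appId and
# a password that the next command needs (myAcsTestSP is a made-up name).
az ad sp create-for-rbac --name myAcsTestSP

# Give AAD a moment to propagate, then create the cluster passing the
# credentials explicitly instead of letting az acs generate them:
az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest --name myK8sCluster \
  --generate-ssh-keys --agent-count 2 \
  --service-principal <appId from previous output> \
  --client-secret <password from previous output>
```

Because the credentials already exist before the deployment starts, the "ServicePrincipalProfile were invalid" race should be much less likely.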
How to create and use an SSH public and private key pair for Linux VMs in Azure

To create the SSH key pair, run the following command and be sure to specify a path to store your key (mine is ~/acs/sshkeys/acsivan):

ssh-keygen -t rsa -b 2048

Note: I changed the group and cluster names to avoid conflicts with the pending deletion of the previous group, which is performed asynchronously because of the --no-wait argument.

Let's try again to create our Kubernetes cluster with the following commands (replace the SSH key pair path with your own):

az group create --name myAcsTest2 --location westeurope

az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest2 --name myK8sCluster2 \
  --agent-count 2 --ssh-key-value ~/acs/sshkeys/acsivan.pub

az acs kubernetes get-credentials --resource-group myAcsTest2 --name myK8sCluster2 --ssh-key-file ~/acs/sshkeys/acsivan

If there are no errors in the console you are ready to connect to your first Kubernetes cluster on Azure!!! Hurray!

KUBERNETES CLUSTER UP AND RUNNING!

Let's run our first kubectl command to check the nodes of our cluster:

> kubectl get nodes
NAME                    STATUS                     AGE       VERSION
k8s-agent-96ca25a6-0    Ready                      12m       v1.6.6
k8s-agent-96ca25a6-1    Ready                      12m       v1.6.6
k8s-master-96ca25a6-0   Ready,SchedulingDisabled   13m       v1.6.6

WAIT! v1.6.6? The latest Kubernetes version on 24th August 2017 is 1.7.4. This is another limit of Azure ACS: it is not updated on the fly to the latest versions.

It's time to play with our new super mega awesome Kubernetes cluster

First of all we will deploy the Azure Vote app as described in the Microsoft article we are following, and then we will run some commands on our cluster to play with it a bit before moving to a Windows cluster.

Create a file named azure-vote.yaml as described in the "Run the Application" paragraph.
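The ssh-keygen step above prompts interactively for the path and passphrase; it can also be scripted. A minimal sketch (it writes to a temporary directory instead of ~/acs/sshkeys, so it runs anywhere without touching your real keys):

```shell
# Generate a 2048-bit RSA key pair non-interactively:
# -f sets the output path, -N "" sets an empty passphrase, -q is quiet.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/acsivan" -N "" -q

# acsivan is the private key (for az acs kubernetes get-credentials
# --ssh-key-file), acsivan.pub is what --ssh-key-value expects.
ls "$KEYDIR"
ssh-keygen -l -f "$KEYDIR/acsivan.pub"   # show the key's fingerprint
```

An empty passphrase is convenient for throwaway test clusters like this one; for anything long-lived you would want a real passphrase and an ssh-agent.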
It defines 2 deployments, each with a matching service:
- azure-vote-back, based on a Redis image
- azure-vote-front, the web application

Here is azure-vote.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

Deploy them using the following command:

kubectl create -f azure-vote.yaml

You will get the following output:

deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created

Test your app: run the following command, wait for the Azure Load Balancer to be created in front of your service, and get its IP from the console. Open a browser to that IP and voilà: the application is running!
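Before hitting the external IP, it can help to confirm that the deployments actually finished rolling out. A small sketch using standard kubectl commands (the app=azure-vote-front label comes from the manifest above):

```shell
# Block until each deployment's pods are up to date and available.
kubectl rollout status deployment/azure-vote-back
kubectl rollout status deployment/azure-vote-front

# List the front-end pods, selected by the app label from azure-vote.yaml.
kubectl get pods -l app=azure-vote-front
```

If a rollout hangs here, kubectl describe on the stuck pod usually reveals why (image pull errors, scheduling problems, and so on).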
kubectl get service azure-vote-front --watch

Azure Voting App running on an Azure ACS Kubernetes cluster

Now let's play with it a bit to test some kubectl commands.

Get the list of running services:

kubectl get services

NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
azure-vote-back    10.0.136.59   <none>        6379/TCP       39m
azure-vote-front   10.0.96.34    13.93.7.226   80:30163/TCP   39m
kubernetes         10.0.0.1      <none>        443/TCP        1h

Get the list of your deployments:

kubectl get deployments

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
azure-vote-back    1         1         1            1           40m
azure-vote-front   2         2         2            2           40m

Get a detailed description of your front-end deployment with:

kubectl describe deployment azure-vote-front

Scale your front-end deployment to 20 replicas (I love this one! Fast, easy, immediate):

kubectl scale deployments/azure-vote-front --replicas 20

and check the new values with:

kubectl get deployment azure-vote-front

Wait! Where is the Kubernetes Dashboard?

Again, a super easy command will lead you to a dashboard showing your cluster from your browser (I love Kubernetes!). The best way to reach it is:

kubectl proxy

which should give you an output like:

Starting to serve on 127.0.0.1:8001

Open a browser to http://127.0.0.1:8001/ui and you will see the dashboard running.

Kubernetes Dashboard running on Azure ACS

Now we can easily delete (and stop paying for) everything with this simple command:

az group delete --name myAcsTest2 --yes --no-wait

I tested Azure Container Service with a Windows cluster before moving to a full hybrid cluster. You can find the details in Part 2. I will then try to scatter the cluster across multiple cloud providers and on-premises locations (dreaming…)