Kubernetes Adventures on Azure — Part 1

Written by ivanfioravanti | Published 2017/08/24
Tech Story Tags: kubernetes | azure | docker | containers


This is the first article in a series of three.

In the last month I read three awesome books about Kubernetes.

Now it’s time to start adventuring in the magical world of Kubernetes for real! And I will do it using Microsoft Azure.

Let’s try Azure Container Service aka ACS with its pros and cons (first try)

Microsoft Azure offers a ready-to-go Kubernetes solution: Azure Container Service (ACS). It seems the easiest way to test a Kubernetes cluster on Azure, if we don’t consider the new Azure Container Instances service, which hides Kubernetes behind the scenes and leaves you with simple deployments of containers that are charged by CPU, by memory and, moreover, by the second!
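
As a quick aside, this is roughly what a single-container deployment with Azure Container Instances looks like (a minimal sketch, not part of this article’s main flow; the group and container names are made up, and I am assuming the az container commands from the current CLI preview):

# Run a single nginx container, billed per second for CPU and memory
az group create --name myAciTest --location westeurope
az container create --resource-group myAciTest --name mynginx --image nginx
az container show --resource-group myAciTest --name mynginx
# Clean up to stop paying
az group delete --name myAciTest --yes --no-wait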

Let’s try ACS! But first I want to highlight its current limits, so that you are aware of them:

  • No hybrid clusters with mixed Linux and Windows nodes.
  • The versions used are not the latest (Kubernetes 1.6.6 on ACS vs. the latest 1.7.4).
  • I experienced some issues with the az acs CLI commands, which seem (to me) not yet ready for prime time.

The easiest way to start our ACS journey is to follow “Deploy Kubernetes cluster for Linux containers”, which shows a beautiful “4 min to read” at the top of the page.

Note: it will guide you in using Azure Cloud Shell to create a Kubernetes cluster with Linux-only nodes. Personally, I installed and used a local Azure CLI following this article from Microsoft. Another article, “Deploy Kubernetes cluster for Windows containers”, shows how to create a Kubernetes cluster with Windows-only nodes. The missing hybrid deployment is a limitation for me, because I want a hybrid cluster with both Linux and Windows worker nodes. But I know for sure that this limitation can be overcome by using ACS Engine directly to manually deploy a Kubernetes cluster on Azure (another chapter in my adventure).

The main steps to install a Linux ACS Kubernetes cluster are:

  1. Create a resource group:

az group create --name myAcsTest --location westeurope

  2. Create a Kubernetes cluster:

az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest --name myK8sCluster \
  --generate-ssh-keys --agent-count 2

  3. Connect to the cluster:

az acs kubernetes get-credentials --resource-group myAcsTest --name myK8sCluster
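
Before moving on, you can sanity-check that the cluster resource was actually created (a hedged extra step, not in the Microsoft walkthrough; az acs show belongs to the same command group used above, and --output table is a global CLI argument):

az acs show --resource-group myAcsTest --name myK8sCluster --output table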

After a few minutes your cluster should be up and running with 1 master and 2 nodes, but I had no luck with it on the first try.

Failure on step 2 (solved with a second try): on the first run of step 2 I received an error, which disappeared on the second run of the command, probably because the newly created app credentials in AAD were not yet ready to be used. Here is the detailed error:

Deployment failed. {"error": {"code": "BadRequest", "message": "The credentials in ServicePrincipalProfile were invalid. Please see https://aka.ms/acs-sp-help for more details. (Details: AADSTS70001: Application with identifier …..

Note on step 3 (solved by deleting the cluster and creating it again in another way): this step failed with an “Authentication failed” error. Maybe because there was already an id_rsa file under my user’s .ssh folder?
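
If you want to check whether a default key pair is already sitting there before changing anything, plain shell is enough:

# Look for an existing id_rsa / id_rsa.pub in the default location
ls -la ~/.ssh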

The fast solution is to delete the cluster with the following command:

az group delete --name myAcsTest --yes --no-wait

and to create it again, but this time we will first create an SSH key pair on our own.

Let’s try Azure Container Service again (second try)

From Linux/macOS you can follow “How to create and use an SSH public and private key pair for Linux VMs in Azure” to create an SSH key pair stored on your machine. This is really important: you will need it to connect to your Kubernetes cluster.

To create the SSH key pair, run the following command and be sure to specify a path to store your key; mine is ~/acs/sshkeys/acsivan:

ssh-keygen -t rsa -b 2048 -f ~/acs/sshkeys/acsivan
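
Assuming you stored the key at the path above, you should end up with a private and a public key file:

# The private key stays on your machine; the .pub will be passed to az acs create
ls ~/acs/sshkeys
# acsivan  acsivan.pub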

Note: I changed the group and cluster names to avoid conflicts with the pending deletion of the previous group, which is performed asynchronously because of the --no-wait argument.

Let’s try again to create our Kubernetes cluster with the following commands (replace the SSH key pair path with your own):

az group create --name myAcsTest2 --location westeurope

az acs create --orchestrator-type kubernetes \
  --resource-group myAcsTest2 --name myK8sCluster2 \
  --agent-count 2 --ssh-key-value ~/acs/sshkeys/acsivan.pub

az acs kubernetes get-credentials --resource-group myAcsTest2 --name myK8sCluster2 --ssh-key-file ~/acs/sshkeys/acsivan

If there are no errors in the console, you are ready to connect to your first Kubernetes cluster on Azure!!! Hurray!
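
get-credentials merges the cluster credentials into your local kubeconfig, so a quick way to confirm that kubectl now points at the new cluster (the exact context name may differ on your machine):

# Show the context kubectl will use, and all contexts in ~/.kube/config
kubectl config current-context
kubectl config get-contexts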

KUBERNETES CLUSTER UP AND RUNNING!

Let’s run our first kubectl command to check nodes of our cluster:

> kubectl get nodes

NAME                    STATUS                     AGE   VERSION
k8s-agent-96ca25a6-0    Ready                      12m   v1.6.6
k8s-agent-96ca25a6-1    Ready                      12m   v1.6.6
k8s-master-96ca25a6-0   Ready,SchedulingDisabled   13m   v1.6.6

WAIT! 1.6.6?

Wait… v1.6.6? The latest Kubernetes version as of 24 August 2017 is 1.7.4. This is another limit of Azure ACS: it is not updated on the fly to the latest releases.
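
You can see the client/server version gap directly with kubectl (the output values below are illustrative; the client version depends on the kubectl you installed locally):

kubectl version --short
# Client Version: v1.7.4
# Server Version: v1.6.6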

It’s time to play with our new super mega awesome Kubernetes cluster

First of all we will deploy the Azure Vote app as described in the Microsoft article we are following, and then we will run some commands on our cluster to play with it a bit before moving to a Windows cluster.

  • Create a file azure-vote.yaml as described in the “Run the application” paragraph. It defines 2 deployments:
    - azure-vote-back, which is based on a Redis service
    - azure-vote-front, which is a web application

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

  • Deploy them using the following command:

kubectl create -f azure-vote.yaml

You will get the following output:

deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created

  • Test your app by running:

kubectl get service azure-vote-front --watch

Wait for the Azure Load Balancer to be created in front of your service and get its IP from the console. Open a browser to that IP and voilà: application running!

Azure Voting App running on an Azure ACS Kubernetes cluster
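
If you prefer a one-off query instead of --watch, kubectl can extract the external IP from the service object once the load balancer is provisioned (a sketch using kubectl’s jsonpath output format):

kubectl get service azure-vote-front -o jsonpath='{.status.loadBalancer.ingress[0].ip}'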

Now let’s play with it a bit and test some kubectl commands:

  • Get the list of running services:

kubectl get services

NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
azure-vote-back    10.0.136.59   <none>        6379/TCP       39m
azure-vote-front   10.0.96.34    13.93.7.226   80:30163/TCP   39m
kubernetes         10.0.0.1      <none>        443/TCP        1h

  • Get the list of your deployments:

kubectl get deployments

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
azure-vote-back    1         1         1            1           40m
azure-vote-front   2         2         2            2           40m

  • Get a detailed description of your front-end deployment with kubectl describe deployment azure-vote-front
  • Scale your front-end deployment to 20 replicas (I love this one! Fast, easy, immediate) with kubectl scale deployments/azure-vote-front --replicas 20 and check the new values with kubectl get deployment azure-vote-front (see the sketch right after this list)
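
To watch the scale-out actually complete, kubectl can report rollout progress until all 20 replicas are available (a small sketch reusing the deployment above):

kubectl scale deployments/azure-vote-front --replicas 20
# Blocks until the rollout finishes, printing progress along the way
kubectl rollout status deployment/azure-vote-front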

Wait! Where is Kubernetes Dashboard?

Again, a super easy command will lead you to a dashboard showing your cluster in your browser (I love Kubernetes!).

The best way to reach it is kubectl proxy, which should give you an output like: Starting to serve on 127.0.0.1:8001

Open a browser to http://127.0.0.1:8001/ui and you will see the dashboard running.

Kubernetes Dashboard running on Azure ACS
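
The ACS command group also offers a shortcut that sets up the proxy and opens the dashboard in one go (a hedged alternative; az acs kubernetes browse ships with the same CLI used throughout this article):

az acs kubernetes browse --resource-group myAcsTest2 --name myK8sCluster2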

Now we can easily delete everything (and stop paying) with this simple command:

az group delete --name myAcsTest2 --yes --no-wait
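
Since --no-wait returns immediately, you can poll whether the group is really gone (az group exists prints true or false):

az group exists --name myAcsTest2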

I tested Azure Container Service with a Windows cluster before moving to a full hybrid cluster. You can find the details in Part 2.

I will then try to scatter a cluster across multiple cloud providers and on-premises locations (dreaming…)

