Windows Kubernetes cluster installation

Part 3 available here: Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster)

In Part 1 of this series, we saw how to create a Linux Kubernetes cluster on Azure Container Service. Today I will try to create a Kubernetes cluster with Windows nodes instead of Linux ones (the master, obviously, is always Linux). This time I will follow the Microsoft walkthrough "Deploy Kubernetes cluster for Windows containers" step by step and then play with the newly created cluster.

Create Kubernetes Windows Cluster

Let's start with the usual creation of a resource group for this test, so that we can easily group all cloud artifacts in it and delete everything in one go at the end. ARM has been a great addition to Microsoft Azure.

Create the dedicated resource group:

```shell
az group create --name myAcsWinTest --location westeurope
```

Create the Kubernetes cluster (here I will use my SSH key pair created in Part 1):

```shell
az acs create --orchestrator-type=kubernetes \
  --resource-group myAcsWinTest \
  --name=myK8sCluster \
  --agent-count=2 \
  --ssh-key-value ~/acs/sshkeys/acsivan.pub \
  --windows --admin-username azureuser \
  --admin-password myTestPassword1
```

Connect to the Kubernetes cluster:

```shell
az acs kubernetes get-credentials --resource-group=myAcsWinTest --name=myK8sCluster --ssh-key-file ~/acs/sshkeys/acsivan
```

Check the connection by retrieving the node list:

```shell
kubectl get nodes
```

```
NAME                    STATUS    AGE       VERSION
fb7c1acs9000            Ready     19m       v1.6.6-9+8a67b481dfc2c6
fb7c1acs9001            Ready     19m       v1.6.6-9+8a67b481dfc2c6
k8s-master-fb7c1c12-0   Ready     19m       v1.6.6
```

Cluster up and running. Awesome!

Before proceeding, open the Kubernetes Dashboard in a browser to check what's going on in the cluster. Connect to the Kubernetes Dashboard by pointing a browser at http://127.0.0.1:8001/ui and leave it open, so that you can check there the changes applied later.

Play with Kubernetes Windows Cluster

Let's start with the easy IIS sample, using windowsservercore instead of Nano Server as described in the Microsoft article.
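If you prefer to script the node check, here is a small Python sketch (my addition, not part of the original walkthrough) that parses `kubectl get nodes` output and confirms that every node reports Ready. The sample text simply mirrors the output above; in practice you would feed in the real command output, e.g. via `subprocess`:

```python
# Minimal sketch: verify that every node in `kubectl get nodes` output is Ready.
# SAMPLE mirrors the output shown above, for illustration only.
SAMPLE = """\
NAME                    STATUS    AGE       VERSION
fb7c1acs9000            Ready     19m       v1.6.6-9+8a67b481dfc2c6
fb7c1acs9001            Ready     19m       v1.6.6-9+8a67b481dfc2c6
k8s-master-fb7c1c12-0   Ready     19m       v1.6.6
"""

def all_nodes_ready(output):
    """Return True when every node row has STATUS == Ready."""
    rows = output.strip().splitlines()[1:]   # skip the header line
    return all(row.split()[1] == "Ready" for row in rows)

print(all_nodes_ready(SAMPLE))  # True: two Windows agents plus the master are Ready
```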
I will follow the same steps to introduce concepts like pods and the way to expose one once manually deployed.

Note: I consider this approach bad practice, because you end up with a pod and a service but no controller between them, such as a ReplicaSet or a Deployment, managing the pod. Later we will see how to "fix" this issue without downtime.

Deploy an IIS windowsservercore container

Create iis.json with the following content:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "iis",
    "labels": {
      "app": "iis"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "iis",
        "image": "microsoft/iis",
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ],
    "nodeSelector": {
      "beta.kubernetes.io/os": "windows"
    }
  }
}
```

We can now apply the configuration to create a new pod named iis:

```shell
kubectl apply -f iis.json
```

Check the running pod:

```shell
kubectl get pods --watch
```

It will take around 10 minutes for the pod to be created, because the windowsservercore image is quite big (~5 GB). To save time you can expose this pod right now; there is no need to wait for pod creation in this test, and the LoadBalancer creation will start immediately.

Open the Kubernetes Dashboard at the Pods section and you will see something like:

Expose the pod with a service (in a future post I will try Ingress in a hybrid cluster; let's push Kubernetes to its limits!):

```shell
kubectl expose pods iis --port=80 --type=LoadBalancer
```

Wait for LoadBalancer activation with:

```shell
kubectl get svc --watch
```

or check it manually in the Kubernetes Dashboard. When you have the external IP you can browse to it and you should see:

If I had a pod to scale I would _________?

Now let's try to scale out our iis pod. Wait… we can't scale a pod directly. We need a ReplicaSet or a Deployment for this. How would you handle it? Here I use a trick I found during my research that lets you scale your pods without any downtime for final users. Probably there is a better way to do it; if so, please add it in the comments.

"If we knew what it was we were doing, it would not be called research, would it?" (Albert Einstein)
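To make clearer what `kubectl expose` just did for us, here is a rough Python sketch of the idea: the generated Service copies the pod's labels into its selector. This is a simplified model for illustration only, not kubectl's real implementation:

```python
# Illustrative model of `kubectl expose pods iis --port=80 --type=LoadBalancer`:
# the Service's selector is taken from the pod's labels, so any pod carrying
# the same labels will receive traffic. Simplified sketch, not real kubectl code.
def expose_pod(pod, port, service_type):
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": pod["metadata"]["name"]},
        "spec": {
            "type": service_type,
            "ports": [{"port": port}],
            # Selector mirrors the pod's labels.
            "selector": dict(pod["metadata"]["labels"]),
        },
    }

iis_pod = {"metadata": {"name": "iis", "labels": {"app": "iis"}}}
svc = expose_pod(iis_pod, port=80, service_type="LoadBalancer")
print(svc["spec"]["selector"])  # {'app': 'iis'}
```

This detail (the selector being `app=iis`) is exactly what the trick in the next section relies on.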
Brief explanation needed

Run kubectl describe service iis and you can see that the service has a selector equal to app=iis. This means that any pod with this label will be exposed through this LoadBalancer:

```
Name:                 iis
Namespace:            default
Labels:               app=iis
Annotations:          <none>
Selector:             app=iis
Type:                 LoadBalancer
IP:                   10.0.201.106
LoadBalancer Ingress: 40.118.109.119
...
```

The idea here is to create a new Deployment that will create two new replicas of our iis pod using the same label. Create a file called iisdeployment.yaml with the following content:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: iis
    spec:
      containers:
      - name: iis
        image: microsoft/iis
        ports:
        - containerPort: 80
          name: iis
      nodeSelector:
        beta.kubernetes.io/os: windows
```

Create the deployment with:

```shell
kubectl create -f iisdeployment.yaml
```

Run kubectl get pods now and you can see one running pod and two additional ones being created:

```
NAME                   READY     STATUS              RESTARTS   AGE
iis                    1/1       Running             0          32m
iis-1894189856-bxjf5   0/1       ContainerCreating   0          6s
iis-1894189856-zjw98   0/1       ContainerCreating   0          6s
```

Note: during these steps you can check your service with a browser and see that it is up and running without problems.

You can now delete the original pod, without any downtime for final users, by running:

```shell
kubectl delete pods iis
```

```
NAME                   READY     STATUS        RESTARTS   AGE
iis-1894189856-bxjf5   1/1       Running       1          17m
iis-1894189856-zjw98   1/1       Running       0          17m
iis                    1/1       Terminating   0          49m
```

Scale up to 4 replicas! This is easy now:

```shell
kubectl scale deployments/iis --replicas=4
```

```
NAME                   READY     STATUS              RESTARTS   AGE
iis-1894189856-bxjf5   1/1       Running             1          17m
iis-1894189856-zjw98   1/1       Running             0          17m
iis-1894189856-9fwsp   0/1       ContainerCreating   0          52s
iis-1894189856-t3m26   0/1       ContainerCreating   0          52s
```

What if we have a problem with pod iis-1894189856-9fwsp and we want to isolate it from the rest, while keeping it running for debugging purposes?
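Why does deleting the original pod cause no downtime? Because the service routes to every pod whose labels match its selector, regardless of which controller (if any) owns the pod. A tiny Python sketch of this label-matching idea, using the pod names from the output above (an illustration, not Kubernetes code):

```python
# Sketch of service-to-pod routing by label selector.
def matching_pods(pods, selector):
    """Return the names of pods whose labels satisfy every selector entry."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

selector = {"app": "iis"}
pods = {
    "iis":                  {"app": "iis"},  # the original bare pod
    "iis-1894189856-bxjf5": {"app": "iis", "pod-template-hash": "1894189856"},
    "iis-1894189856-zjw98": {"app": "iis", "pod-template-hash": "1894189856"},
}
print(len(matching_pods(pods, selector)))  # 3: all of them receive traffic

del pods["iis"]  # kubectl delete pods iis
print(len(matching_pods(pods, selector)))  # 2: the Deployment replicas keep serving
```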
I do this by running:

```shell
kubectl edit pods iis-1894189856-9fwsp
```

The file opens in your predefined editor. Change the labels section from:

```yaml
labels:
  app: iis
  pod-template-hash: "1894189856"
```

to the following content and save the file:

```yaml
labels:
  app: iisdebug
```

If you get the list of your pods now, you will see that Kubernetes is already creating a fourth replica for the iis deployment, and your relabeled pod is still there, up and running.

How do I connect to a running Windows Server Core container?

You can now debug this container, sure that it is not exposed externally, because our iis service uses app=iis as its selector. Connect to it using:

```shell
kubectl exec -ti iis-1894189856-9fwsp powershell
```

From your computer you can now easily run any PowerShell command in your running image in the cloud!

Remember to delete everything when you have finished playing with your cluster:

```shell
az group delete --name myAcsWinTest --yes --no-wait
```

In Part 3 I will try to create a hybrid cluster with Windows and Linux nodes!
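A quick sketch of why this isolation trick works (illustrative Python, not Kubernetes internals): after the relabel, the pod matches neither the service selector (`app=iis`) nor the ReplicaSet's selector, so the service stops routing traffic to it and the ReplicaSet creates a replacement to get back to the desired replica count:

```python
# Sketch of the isolation trick: relabeling pulls the pod out of both selectors.
def matches(labels, selector):
    """True when labels satisfy every key/value pair in the selector."""
    return all(labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "iis"}
pod_labels = {"app": "iis", "pod-template-hash": "1894189856"}
print(matches(pod_labels, service_selector))  # True: pod receives traffic

pod_labels = {"app": "iisdebug"}              # after `kubectl edit pods ...`
print(matches(pod_labels, service_selector))  # False: isolated from the service

# The ReplicaSet now owns one pod fewer than desired, so it creates a new one.
desired, owned = 4, 3
print(desired - owned)                        # 1 replacement pod gets created
```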