Going all in with Kubernetes (Part 1)

by Richard Sands, June 7th, 2018

I used to think the dream would be rolling out Docker containers on our AWS infrastructure at carsguide and taking baby steps toward containerization. Instead, we went all in with Kubernetes, head first. On the face of it we had two choices: take a risk and reap the rewards, or do nothing and remain in the "safe zone". I like to take risks but, being honest, I wasn't sure we were ready for Kubernetes. It turns out I was wrong, to some degree.

Traditionally, this is what happens in many software teams: engineers build things, package them up, and hand them over to ops, who manage a pipeline, deploy, and run them. (Our ops team built all of our current pipelines.) The flow looks a bit like this:

Most teams are split

I am a big believer in giving engineers the freedom to build how they want and deploy how they want. Engineers like ownership, and they become much stronger when they know how to deploy, run, and monitor what they are building.

In early 2018, a few of our tech team sat around a table and agreed to engage a DevOps consultancy firm, Vibrato. The initial agreement was to investigate possible CI/CD workflows and how to run our stack. Out of this engagement came a choice: go all in with Kubernetes, or stay safe and use Docker on ECS.

Long story short, we went all in with Kubernetes.

A few key reasons:

  • We are building a new microservice platform
  • Cost optimisation, a direct consequence of the above
  • Engineers can build it from start to finish alongside operations; this is the future of engineering

What we really want our teams to be

In a previous post I talked about why microservices. In a nutshell: we are building a new platform, and all of it is built around microservices running in Docker containers. Given we will have many more services than we currently do, we need a quick and easy way to deploy and scale each service without the overhead of configuring ALBs, SSL certificates, autoscaling groups, target groups, EC2 images, and so on. Kubernetes does all of this for us using code defined in each service: it orchestrates the services and the networking needed to make them work.
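To make that concrete, here is a minimal sketch of what a service's manifests might look like. The service name, image, and ports are illustrative, not our actual configuration:

```yaml
# deployment.yaml -- a hypothetical "listings" microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: listings
spec:
  replicas: 3
  selector:
    matchLabels:
      app: listings
  template:
    metadata:
      labels:
        app: listings
    spec:
      containers:
        - name: listings
          image: registry.example.com/listings:1.0.0
          ports:
            - containerPort: 8080
---
# service.yaml -- a stable endpoint in front of the pods,
# no hand-configured load balancer or target group required
apiVersion: v1
kind: Service
metadata:
  name: listings
spec:
  selector:
    app: listings
  ports:
    - port: 80
      targetPort: 8080
```

Commit files like these to the service's repository and the cluster takes care of scheduling, networking, and load balancing.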

Think about it this way: an application may only ever need 10% of the CPU or 10% of the memory of an EC2 instance. That is 90% wastage if you run a single application per EC2, like we do. To reduce costs we need to run many containers on a single EC2 (a node). Kubernetes solves this by scheduling containers for the best possible utilisation of the compute nodes.
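The scheduler can bin-pack like this because each container declares the resources it needs. A sketch, with illustrative numbers, of the snippet that would sit inside a container spec:

```yaml
# Illustrative per-container resource declarations. Requests are
# what the scheduler reserves when placing the pod on a node;
# limits are the hard ceiling the container may not exceed.
resources:
  requests:
    cpu: 200m        # one fifth of a core
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

On a node with 4 vCPUs, around 20 pods requesting 200m each can be packed CPU-wise, instead of one application idling away most of the instance.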

Our new Kubernetes staging cluster runs on 7 EC2s, with over 95 pods (applications) running 200+ containers, load balanced and highly available. Our current staging takes over 50 EC2s (126(!!) if you count all of our code bases). As a rough back-of-envelope figure, if each instance costs on the order of $100 a month, cutting 40-plus of them means Kubernetes should save us thousands of dollars per month.

Our current pipelines are complex; most engineers do not know how they work, or where to start if they wanted to create a new one. Kubernetes has let us define how a service should run from code kept in each service's repository.

If an engineer wants 5 pods for a service, they set an autoscaler definition to 5 and push to the repository; a few minutes later they have 5 pods. If they want cron jobs running, again, they define them in the code and push, and a few minutes later the cron jobs run. Kubernetes allows this to happen because everything is in code.
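As a hedged sketch of what that looks like (the names and schedule are made up; at the time of writing, CronJob still lives in batch/v1beta1):

```yaml
# hpa.yaml -- keep at least 5 pods, scale out under CPU pressure
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: listings
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: listings
  minReplicas: 5
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
---
# cronjob.yaml -- a scheduled task, versioned next to the service
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: listings-cleanup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: registry.example.com/listings:1.0.0
              command: ["sh", "-c", "echo run nightly cleanup"]
          restartPolicy: OnFailure
```

Push either file and, once the pipeline applies it, the cluster converges on the desired state.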

Engineers can create a pipeline, set up a service's requirements, how it scales, and which containers it needs, and deploy to staging in a matter of hours. It no longer takes weeks.

Services are defined in code that any engineer can follow without knowledge of the cloud provider. GCP, AWS, Azure: it doesn't matter, the same code does the same task no matter where it runs.

I am going to follow up this post with a few more on our journey, but the above gives you an insight into the main reasons we decided to try it. Some things I am aiming to cover:

  • Tooling we use for CI/CD
  • Problems we have faced so far (and there have been a few)
  • Logging and alerting
  • You build it, you own it

Have you started a journey with Kubernetes? I am always keen to hear other people's views on their experiences.