The Cloud Native path

Written by clm160 | Published 2018/09/09
Tech Story Tags: kubernetes | cloud-computing | microservices | cloud-native | azure


Orchestration in the cloud (Kubernetes on Azure)

It is probably not the first time you have heard the term Cloud Native. It is fashionable these days and it is promoted by many consultants and technology companies. Of course they all say you have to get aboard this train, that the benefits are huge and that you surely can't afford to miss it. But what exactly does it mean, and why wasn't cloud computing enough? To answer that, let me briefly explain what I understood by cloud computing a couple of years ago and what I learned after it.

Autoscaling

I deployed my first production app on AWS back in 2012. It wasn't designed for the cloud: we were spinning up a VM, connecting to it, copying the binaries and starting the web server. But version after version we learned, and we came to understand what the power of the cloud meant, in a word: autoscaling.

In order to support both scaling up and scaling down, your application had to be stateless: you can't save things to the local disk or keep session details in process, because that VM can always be terminated as the result of a scale-down event. The same applied to logs: before, writing logs to disk was enough since you could always retrieve them later, but now that luxury was gone. That turned out to be a good thing, though, because this is how many teams adopted tools like Kibana or Graylog. Another thing autoscaling brought to the table was that you didn't have to over-provision: you started with a baseline of processing power and then, based on metrics, increased or reduced it. And the good part of the cloud was that the bill covered exactly what we used: if a VM only ran for two hours, we paid for those two hours and nothing else.
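As a rough illustration (not the exact setup we had back then), here is a minimal sketch of such a metric-based scaling rule using boto3; the auto scaling group name and the 60% CPU target are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep the group's average CPU around 60%,
# so instances are added during traffic spikes and removed when idle.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```

With a policy like this in place, the hourly bill follows the actual traffic instead of a worst-case capacity estimate.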


The true gain wasn't the money. I worked with some companies that didn't move to the cloud simply because they had already made a big infrastructure investment, and when they calculated their monthly cloud bill, even with all the discounts they received, it was just too much.

The cloud meant no upfront investment, but in the longer run you actually paid more, so for companies that had already invested heavily in infrastructure it was far from feasible. But that was a very simplistic view. The main gain came from the flexibility of being able to handle the spikes coming from our clients and to reduce costs when there was less traffic. Even more important, we completely outsourced our infrastructure to pretty good engineers who provided us with a reliable API for it. We all know good developers are hard to find.

Automation

Soon we realized that what we were doing was pretty repetitive work: copy the binaries onto the existing VM, create a new image of the VM and create a new auto scaling group. The term immutable infrastructure was also getting more attention, so the next natural step for us was to start using a configuration management tool like Ansible or Chef, and later to add Terraform for the infrastructure. This allowed us to always create a new VM for each new version and simply replace the older VM with the new one behind the load balancer. That brought us really close to zero-downtime deployments; we just had to separate the database deployments from the code ones (or make the db support two different code versions).
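To make the idea concrete, here is a minimal sketch of that "bake an image, roll the group" flow with boto3. The instance, launch template and group names are hypothetical, and it assumes the auto scaling group already tracks the $Latest version of the launch template:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Bake a new image from a VM that already has the new binaries installed.
ami = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="webapp-v42")
ec2.get_waiter("image_available").wait(ImageIds=[ami["ImageId"]])

# Point the launch template at the new image...
ec2.create_launch_template_version(
    LaunchTemplateName="webapp",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": ami["ImageId"]},
)

# ...and gradually replace the old VMs behind the load balancer.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="webapp-asg",
    Preferences={"MinHealthyPercentage": 90},
)
```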


What has happened between the time I started automating application installation and today? Well, now I see everything as pipelines. Do you need a completely new environment to test your new features? A pipeline should do that, no matter if the output is a new Kubernetes cluster, a new set of VMs for your app and db, or just a new S3 bucket and a Route 53 entry for static site hosting. Need to run upgrade scripts for your database? Sure, there should be a pipeline to handle that (and totally separate from the code one). And the same goes for application deployment. Just remember to separate CI from CD.
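For the last case, for example, the provisioning step of such a pipeline could be as small as the following boto3 sketch; the bucket name, domain and hosted zone id are hypothetical placeholders, and the public-access bucket policy is left out:

```python
import boto3

BUCKET = "feature-x-preview"        # hypothetical bucket name
DOMAIN = "feature-x.example.com"    # hypothetical record name
ZONE_ID = "Z0000000000000"          # hypothetical Route 53 hosted zone id
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)
route53 = boto3.client("route53")

# A new bucket configured for static website hosting.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# A DNS record pointing the preview domain at the bucket's website endpoint.
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [
                {"Value": f"{BUCKET}.s3-website-{REGION}.amazonaws.com"}
            ],
        },
    }]},
)
```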

Move to microservices

Even with the first monolith I worked on, we had discussions about separating the product verticals. If you sell both hotel rooms and car rentals on your website, why should a deploy to one of the products affect the other? And back then it always did. All we could do was disable car rentals on the website, do the upgrade, and then enable it again. So our move to microservices was the natural path, even though when we started with it we were actually building a distributed monolith.


And once you are using microservices you will soon have to start with containers, because you quickly realize that running one microservice per VM is just too much, especially when you end up with a large number of them. So we started with autoscaling, added some automation, and that brought us onto the path of microservices and later containerisation. And there is still one last step to reach cloud native: using a container orchestration engine.

Orchestration

Running microservices in containers becomes pretty hard once you reach a large number of microservices. New hosts have to be added while others need maintenance, new versions of microservices are shipped constantly, and we need reliable ways to deploy those new versions. So the next natural step is to start using a container orchestration engine like ECS, Docker Swarm or Kubernetes. In practice you will probably first end up writing many orchestration scripts yourself, but you should soon realise that such a solution just doesn't scale. And in the last few years Kubernetes has become by far the default choice for orchestration, and somehow also the starting point when taking the cloud native way.
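Once an orchestrator is in place, shipping a new version of one microservice mostly means declaring the desired state and letting it do the rolling update. A minimal sketch with the official Kubernetes Python client, using a hypothetical car-rentals service in a hypothetical shop namespace:

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

# Point the Deployment at the new image; Kubernetes performs a rolling update,
# replacing pods gradually so the service stays available.
new_version = {"spec": {"template": {"spec": {"containers": [
    {"name": "car-rentals", "image": "registry.example.com/car-rentals:1.4.2"}
]}}}}
apps.patch_namespaced_deployment(
    name="car-rentals", namespace="shop", body=new_version
)
```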

Cloud native

So, at the beginning I said that for me cloud computing is equivalent to doing autoscaling, and I tried to explain how I came to the conclusion that autoscaling is what using the power of the cloud means, compared with classic hosting solutions.


And now, if you are taking the cloud native way, you are most likely running Kubernetes or some other container orchestration solution. Because that means you are running containers, you have a microservices architecture in place, automation and pipelines are in use for deployments, and your hosts and microservices autoscale based on the monitoring you do for your application. These are the pillars of cloud native computing.
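To close the loop with the autoscaling I started from, this is roughly how the same idea looks on Kubernetes: a HorizontalPodAutoscaler that scales a Deployment on CPU usage. Again just a sketch with the Python client, with hypothetical names, assuming a metrics source such as metrics-server is installed:

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Scale the car-rentals Deployment between 2 and 10 replicas,
# targeting roughly 60% average CPU utilisation.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="car-rentals"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="car-rentals"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="shop", body=hpa)
```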

I remember that going from a classic application to one that was able to autoscale was a pretty hard transformation that lasted more than a year (almost two for us). Going from cloud computing all the way to container orchestration, with all the pipelines in place, is also a journey, and I think it takes a little longer still, while some even say this road never ends. And of course it is totally worth it; you can't miss this train.

References:

AWS: What is Cloud Computing? https://aws.amazon.com/what-is-cloud-computing/

Pivotal.io: What are Cloud-Native Applications? https://pivotal.io/cloud-native

Container-Solutions: Six steps to Successful Cloud Native migration https://container-solutions.com/six-steps-to-successful-cloud-native-migration-part-1/

