A Tale of Cloud, Containers and Kubernetes

by Analytical Monk (@AkashTandon), August 6th, 2018
Even if you haven’t worked with Kubernetes, chances are you’ve at least heard or read about it. It is already one of the most popular open source projects ever, and it is still under active development. We need to understand where we’ve come from to appreciate where we are today. The same rings true for Kubernetes.

Photo by frank mckenna on Unsplash

I was recently told, ‘Boring infrastructure is interesting now’. ‘Boring’ is a word I take exception to in this context. But I agree that the developer community is discussing infrastructure and operations much more than it did a decade ago. By the end of this post, you’ll understand why that’s the case.

I’ll outline the developments that made this shift possible, why you should care about them, and where Kubernetes fits in.

Of monoliths and metal

I was lucky to have started software development in an era when cloud computing was already commonplace and AWS deployments were the norm. In fact, the convenience of cloud resources is one of the reasons that more people are picking up software development today.

In the 1990s and early 2000s, software was written primarily as large code-bases — monoliths — and deployed on-premise using proprietary hardware. Load balancers, taken for granted nowadays in their software form, were expensive hardware. As Josh Evans points out in his insightful talk about Netflix’s architecture, “trying to add a column to a table was a big cross-functional project”. Software components used to be tightly coupled, hardware replacement was tough, and horizontal scalability wasn’t even a thing. If you were a service company, updating a client’s software meant carrying a patch and doing updates on-site.

Efforts to ease these constraints, coupled with cheap hardware and fast Internet, have led us into an era of cloud computing and DevOps. DevOps, for those who don’t know, is a philosophy that aims at unifying software development (Dev) and software operation (Ops) processes. The concepts introduced in this blog, from containers to Kubernetes, put parts of this philosophy into practice.

Up in the cloud

Cloud computing is everywhere nowadays — the service on which this blog is hosted even uses it. If you’re still unclear about the term’s meaning, here’s a definition from Microsoft Azure to clear things up:

‘Simply put, cloud computing is the delivery of computing services — servers, storage, databases, networking, software, analytics and more — over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.’

It’s no wonder cloud computing has become the norm, with cloud native applications becoming pervasive. Although there are quite a few providers nowadays, including Azure and Google Cloud, it was Amazon’s AWS that kickstarted the cloud computing revolution in the mid-2000s. If you’re interested in getting a deeper perspective, check out the timeline of AWS or the famous “Google Platforms Rant”.

A glimpse at AWS’ offerings

‘Cloud’ has become a buzzword, but what does it actually mean? It is a methodology enabled by a set of technologies, rather than being a set of technologies itself. One of the core technologies that makes the cloud ecosystem possible is virtualization. Red Hat’s blog comparing cloud computing and virtualization is an excellent resource. Quoting from that blog,

‘Virtualization is technology that separates functions from hardware, while clouds rely on that split… Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system.’

You can create multiple virtual environments on top of a single piece of hardware, run separate services in each environment, and offer the services to customers over the internet. That’s a simplified version of how cloud computing works. Virtualization makes another key component of the new infrastructure wave possible — containers.

Going micro in a container

Even if you hadn’t heard of virtualization before landing here, chances are you’ve stumbled upon one of the two popular ways it’s implemented: virtual machines and containers. We could dedicate an entire post to the differences between the two. (In fact, that has already been done quite a bit.) The primary difference between the two is the level at which virtualization is done.

Lean Apps made a great chart explaining the difference between containers and virtual machines. Read their full article here.

Virtual machines (VMs) virtualize hardware to run multiple operating system (OS) instances. With containers, an OS is virtualized to run multiple workloads or processes. Containers do away with the overhead that comes with running a VM. Containerization has been around for some time in different forms, but it was Docker’s emergence in 2013 that made containers mainstream.
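To make this concrete, here is a minimal sketch of containerizing a service with Docker. The image name, file contents, and port are all hypothetical, and the build/run step is guarded so the snippet is harmless on machines without Docker installed:

```shell
# Sketch: containerize a hypothetical one-file Python service with Docker.
# Write a minimal Dockerfile describing the container image.
cat > Dockerfile <<'EOF'
FROM python:3-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF

# Build the image and run a container from it, mapping port 8000.
# (Guarded so the sketch is a no-op where Docker isn't available.)
if command -v docker >/dev/null 2>&1; then
  docker build -t my-service:latest .
  docker run -d -p 8000:8000 my-service:latest
fi
```

Once built, the same image runs identically on a laptop or a production host — which is where the dev/prod parity advantage discussed below comes from.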

As with adopting any new technology, overhead is involved with containerizing existing applications. You need to consider possible challenges including those related to management and security. For example, since multiple containers share the same kernel, an underlying bug can affect all of them.

Yet, containers afford numerous advantages, which have contributed to their increased adoption. The most cited advantage is being able to maintain parity across development and production environments, followed by ease of deployment.

It’s no accident that the rise in adoption of the microservices architecture coincided with that of containerization. There are defining characteristics that can help you identify microservices, as outlined in Martin Fowler’s widely read microservices article. Core characteristics such as componentization and designing for failure are well complemented by containerization. The degree of adoption of this architecture can vary across teams claiming to use it, but there’s no doubt that its ideas can be useful when designing software. And containers help when putting these ideas into practice.

Note: The 12-factor app was originally written by folks at Heroku as a framework for application development in the cloud era. It also works well with the microservices architecture and container deployments. Do give it a read.
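As a small illustration of one of the twelve factors (factor III: store config in the environment), the snippet below shows how the same build artifact can be reconfigured per environment without changing code. The variable names and values are hypothetical:

```shell
# 12-factor, factor III: configuration lives in the environment,
# not in the code or the image. (Values here are hypothetical.)
export DATABASE_URL="postgres://db.example.com:5432/app"
export LOG_LEVEL="debug"

# The same artifact behaves differently per environment via env vars alone.
echo "Connecting to ${DATABASE_URL} (log level: ${LOG_LEVEL})"
```

In a container deployment, these variables are typically injected at run time (e.g. `docker run -e LOG_LEVEL=info …`), so dev, staging, and production can share one image.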

Orchestrating the strings (or containers)

More often than not, your applications will run using multiple containers. As you scale, these containers can run into the dozens or even hundreds. They’ll need to interact with each other and be maintained, so you need a tool to manage or orchestrate them. Although there are a number of orchestration tools available, Kubernetes has recently emerged as the standard.
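As a sketch of what orchestration looks like in practice, the kubectl commands below declare a desired state and let Kubernetes reconcile the cluster toward it. The deployment and image names are hypothetical, and a configured cluster is assumed, so the commands are guarded:

```shell
# Day-to-day orchestration with kubectl (hypothetical names; assumes a
# configured cluster, so the commands are skipped where kubectl is absent).
if command -v kubectl >/dev/null 2>&1; then
  # Run 3 replicas of a containerized service as a Deployment.
  kubectl create deployment my-service --image=my-service:latest --replicas=3

  # Expose the replicas behind a single in-cluster address.
  kubectl expose deployment my-service --port=80 --target-port=8000

  # Scale up; Kubernetes reconciles actual state toward the desired state.
  kubectl scale deployment my-service --replicas=10
fi
```

The key idea is declarative: you state how many replicas you want, and the orchestrator handles scheduling, restarts, and load distribution across them.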

Kubernetes, or K8s, was originally designed by Google. It has enjoyed great credibility since it emerged out of Borg, the project the search giant uses to manage its fleet of containers. K8s is now maintained by the Cloud Native Computing Foundation and is being actively developed.

Hosting your own Kubernetes cluster poses a unique set of challenges, such as the networking set-up. However, hosted solutions such as Google Cloud Platform’s Kubernetes Engine have made K8s much more accessible.
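For instance, spinning up a hosted cluster on GKE reduces to a couple of gcloud commands. The cluster name, zone, and node count below are hypothetical placeholders, and the commands assume an authenticated gcloud setup, so they are guarded:

```shell
# Creating a hosted Kubernetes cluster on GKE (hypothetical name and zone;
# assumes an authenticated gcloud CLI, so skipped where it is absent).
if command -v gcloud >/dev/null 2>&1; then
  gcloud container clusters create demo-cluster \
      --zone us-central1-a --num-nodes 3

  # Fetch credentials so kubectl talks to the new cluster.
  gcloud container clusters get-credentials demo-cluster --zone us-central1-a
fi
```

The hosted provider handles the control plane, node provisioning, and upgrades — exactly the pieces that make self-hosting hard.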

Google Kubernetes Engine (GKE) dashboard

Kubernetes plays well with the 12-factor methodology and DevOps philosophy. It also allows you to embrace the idea of an immutable infrastructure. (However, how you may adopt these philosophies will vary depending on your organization’s resources and requirements.)

Our experience with K8s at SocialCops has been mostly positive so far. We will put out detailed posts soon on how we use it. However, I would encourage you to make up your own mind about K8s, as well as the other concepts mentioned in this post. Discussing these topics on forums such as Reddit or Hacker News may help.

What’s next?

There is a lot of material out on the internet around containers and Kubernetes. You can get started with reading the official docs for Docker or Kubernetes. Or if you prefer, you can read through an introductory article or try out an interactive tutorial.

If all or most of the above is new to you, chances are you’re suffering from a case of tech jargon overload. Don’t worry if that’s the case. The concepts and technologies in this post are the outcome of decades of work. You won’t get them on the first go, and you aren’t supposed to! If anything, think of the information in this post as a base from which to start thinking about DevOps philosophy and tools, including Kubernetes.

We stand on the shoulders of giants after all.

At SocialCops, we are building products at the intersection of data and technology to solve some of the biggest challenges the world is facing today. Sound interesting? Come build with us! We are hiring software engineers to help us as we build for the next billion. Learn more and apply here.

Originally published at blog.socialcops.com on August 3, 2018.