Kubernetes in 10 Minutes: A Complete Guide

Written by spec-india | Published 2018/08/31
Tech Story Tags: kubernetes | devops | software-development | software-engineering | kubernetes-guide


Kubernetes: What, Why, How, and When

Unleashing the power of Kubernetes to simplify workloads by deploying cloud applications anywhere and managing them from anywhere

As containers have gained popularity over the past few years, Kubernetes is redefining the way software is developed, deployed, and maintained. Most articles on the web suggest that Kubernetes is taking container orchestration by storm. Curious about its actual usage, we looked at industry surveys and found that Kubernetes is indeed the most widely used container orchestration tool.

If the statistics of the past three years are any indication, Kubernetes is the most widely used container management platform, and it has dominated the container space for the last couple of years.

This raises the questions: how? Why? What? When?

Stay calm! We will explain everything.

This article is not only for technical leaders; it is also for non-technical founders looking to build complex applications while improving efficiency and simplifying workloads.

So, let’s start.

Image Ref: https://www.cncf.io/blog/2017/06/28/survey-shows-kubernetes-leading-orchestration-platform/

The Scenario Before Kubernetes: How Did Kubernetes Come into Existence?

A Gist About Containers

A few years ago, containers became the preferred way to deploy applications, opening a new horizon for developing and maintaining software. With containers, software developers can package up an application together with its libraries and other dependencies and ship the package as a whole, without the overhead of a traditional virtual machine.

As computing became more distributed, more network-based, and more reliant on the cloud, monolithic apps were broken up into microservices. Microservices allow key functions to be scaled individually and can handle millions of customers. On top of that, tools like Docker, Mesos, and AWS ECS arrived in the enterprise, giving users a consistent, portable, and easy way to deploy microservices.

But once an application matures and grows complex, you need to run many containers across many machines: start the right containers at the right time, let them communicate with each other, handle large storage needs, and deal with failed containers. Doing all of this manually is a nightmare. Kubernetes entered the scene to solve the orchestration needs of containerized applications.

Kubernetes History: A Quick Overview

As Docker continued to thrive for packaging microservices and containers, a container management system became a paramount requirement. Google had already been running container-based infrastructure for many years with an in-house system called Borg, which was key to running services like Gmail and Google Search. Drawing on that experience, the company created Kubernetes — an open-source project that automates the process of deploying and managing multi-container applications at scale. Kubernetes appeared in mid-2014 and, in a short span of time, grew into an open-source community, with engineers from Google, Red Hat, and many other companies contributing to the project.

What is Kubernetes?

Kubernetes is an open-source container management system used by large enterprises across several industries for mission-critical workloads. It:

  • Manages clusters of containers
  • Provides tools for deploying applications
  • Scales applications as and when needed
  • Manages changes to existing containerized applications
  • Helps optimize the use of the underlying hardware beneath your containers
  • Enables application components to restart and move across the system as and when needed

Kubernetes provides much more than this basic framework, letting users choose their application frameworks, languages, monitoring and logging tools, and other tooling. Although it is not a Platform as a Service (PaaS), it can serve as the basis for a complete PaaS.
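
To make this concrete, here is a minimal sketch of how you describe an application to Kubernetes: a Deployment manifest that asks the cluster to keep three replicas of a container running and to replace any that fail. The names and the nginx image are illustrative, not specific to any project mentioned here.

```yaml
# Minimal Deployment sketch: Kubernetes keeps three replicas of this container
# running and replaces any that fail. Names and image tag are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this declares the desired state; the control plane then works continuously to keep the actual state matching it.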

In just a few years, it has become a hugely popular tool and one of the biggest open-source success stories.

Kubernetes Architecture: How Does It Work?

Kubernetes follows a master-worker architecture with the following components:

Kubernetes Master:

It is the primary control unit that manages workloads and communication across the system. Each of its components runs as a separate process, either on a single master node or spread across multiple master nodes. Its components are:

  • etcd Storage: etcd is an open-source, distributed key-value store developed by the CoreOS team. Kubernetes uses etcd to store the cluster's configuration data, which represents the overall state of the cluster at any point in time.
  • API Server: The API server is the central management entity that receives REST requests for modifications and serves as the front end to the cluster's control plane. It is the only component that communicates with the etcd cluster, making sure data is stored there.
  • Scheduler: It schedules pods onto nodes based on resource utilization and decides where each workload should run. The scheduler knows the total resources available on each node as well as what remains free, and it uses a pod's resource requests when choosing a placement (see the sketch after this list).
  • Controller Manager: It runs a number of distinct controller processes in the background that regulate the shared state of the cluster and perform routine tasks. When the desired state of a resource changes, the relevant controller spots the change and works towards the new desired state.
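
To make the scheduler's job concrete, below is a hedged sketch of a Pod spec with resource requests and limits; the scheduler compares the requests against each node's free capacity when deciding where to place the Pod. The image name and figures are assumptions for illustration.

```yaml
# Pod sketch with resource requests: the scheduler places this Pod on a node
# that still has at least 250m CPU and 256Mi of memory available.
apiVersion: v1
kind: Pod
metadata:
  name: api-worker
spec:
  containers:
  - name: api-worker
    image: example/api-worker:1.0   # hypothetical image
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```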

Worker Node:

Also known as a Kubernetes node (or, historically, a minion), the worker node runs the components needed to manage networking between containers (such as Docker containers), communicate with the master node, and assign resources to containers as scheduled.

  • Kubelet: The kubelet ensures that all containers on the node are running and healthy, monitoring the state of each pod against its desired state (see the probe sketch after this list). If a node fails, the replication controller observes the change and launches its pods on another healthy node.
  • Container: Containers are the lowest-level unit of a microservice. They run inside pods and need an external IP address to be reachable from outside the cluster.
  • Kube Proxy: It acts as a network proxy and load balancer, forwarding requests to the correct pods across the isolated networks in a cluster.
  • cAdvisor: An agent responsible for monitoring and collecting resource usage and performance metrics on each node.
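
As a small example of the kubelet's health checking, the sketch below adds a liveness probe to a container; if the HTTP check keeps failing, the kubelet restarts the container. The path, port, and image are assumptions for illustration.

```yaml
# Liveness probe sketch: the kubelet calls /healthz every 5 seconds and
# restarts the container if the endpoint stops responding successfully.
apiVersion: v1
kind: Pod
metadata:
  name: healthcheck-demo
spec:
  containers:
  - name: app
    image: example/app:1.0          # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz              # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
```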

Advantages of Kubernetes

Portable and Open-Source:

Kubernetes can run containers on one or more public cloud environments, on virtual machines, or on bare metal, which means it can be deployed on almost any infrastructure. It is also compatible across platforms, making a multi-cloud strategy highly flexible and practical.

Workload Scalability:

Kubernetes offers several useful features for scaling:

  • Horizontal Infrastructure Scaling: Scaling operations are performed at the individual server level; new servers can be added or removed easily.
  • Auto-Scaling: The number of running containers can be changed automatically based on CPU usage or other application metrics (see the autoscaler sketch after this list).
  • Manual Scaling: You can manually scale the number of running containers with a command or through the interface.
  • Replication Controller: The replication controller makes sure the cluster has the specified number of equivalent pods running. If there are too many pods, it removes the extras; if there are too few, it starts more.
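
A hedged sketch of auto-scaling: the HorizontalPodAutoscaler below keeps a Deployment named web (a hypothetical name) between 2 and 10 replicas based on average CPU utilization.

```yaml
# HorizontalPodAutoscaler sketch: scales the "web" Deployment between 2 and 10
# replicas, targeting roughly 70% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Manual scaling, by comparison, is a one-liner such as `kubectl scale deployment web --replicas=5`.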

High Availability:

Kubernetes handles the availability of both applications and infrastructure. It provides:

  • Health Checks: Kubernetes protects the application against failures by constantly checking the health of nodes and containers. It offers self-healing and automatic replacement if a pod crashes due to an error.
  • Traffic Routing and Load Balancing: The Kubernetes load balancer distributes traffic across multiple pods, letting you balance resources quickly during traffic spikes or batch processing (a minimal Service sketch follows this list).
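
For load balancing, a Service is the usual entry point. The sketch below (names illustrative) exposes every Pod labelled app: web behind a single load-balanced address.

```yaml
# Service sketch: traffic sent to this Service is spread across all Pods
# matching the selector; type LoadBalancer asks the cloud provider for an
# external load balancer where one is available.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```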

Designed for Deployment:

Containerization speeds up building, testing, and releasing software; useful features include:

  • Automated Rollouts and Rollbacks: Kubernetes rolls out new versions and updates of your app without downtime while monitoring its health; if something fails during the process, it can roll back (see the rolling-update sketch after this list).
  • Canary Deployments: Kubernetes lets you test a new deployment in production in parallel with the previous version, before scaling up the new deployment and simultaneously scaling down the old one.
  • Programming Language and Framework Support: Kubernetes supports most programming languages and frameworks, such as Java and .NET, and has strong backing from the developer community. If an application can run in a container, it can run in Kubernetes.
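
As a sketch of how a zero-downtime rollout is configured (names and numbers illustrative), the strategy section of a Deployment controls how many Pods may be added or taken away at once while a new image version rolls out.

```yaml
# Rolling update sketch: during an update, at most one extra Pod is created
# (maxSurge) and at most one Pod is unavailable (maxUnavailable) at any time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # updating this tag triggers a rolling update
```

If the new version misbehaves, `kubectl rollout undo deployment/web` returns to the previous revision.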

Some More Things to Look For

Kubernetes provides DNS management, resource monitoring, logging, and storage orchestration, and it treats security as a first-class concern. For instance, sensitive information such as passwords or SSH keys can be stored securely in Kubernetes Secrets. New features are released constantly and can be tracked on the Kubernetes GitHub repository.
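
A small sketch of a Secret (the name and value are made up): the value is supplied as plain text via stringData, Kubernetes stores it base64-encoded in etcd, and Pods can then mount it or expose it as environment variables.

```yaml
# Secret sketch: holds a database password that Pods can reference without
# the value being hard-coded into images or committed manifests.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # hypothetical name
type: Opaque
stringData:
  password: s3cr3t          # illustrative value only
```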

What Features Does Kubernetes Provide for Stateful Containers?

Kubernetes StatefulSets provide resources such as volumes, stable network IDs, and ordinal indexes from 0 through N-1 to deal with stateful containers. Volumes are a key feature that makes it possible to run stateful applications. The two main types of volume supported are:

Ephemeral Storage Volume: Ephemeral storage in Kubernetes differs from Docker's. In Kubernetes, the volume is scoped to the pod, so any container running within the pod can share its data. But if the pod is killed, the volume is automatically removed.

Persistent Storage: Here the data outlives the pod. When the pod dies or is moved to another node, the data remains until it is deleted by the user, because it is kept on remote storage.
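
A hedged StatefulSet sketch ties these ideas together: each replica gets a stable name (db-0, db-1, ...) and its own persistent volume via a volumeClaimTemplate, so the data survives pod restarts and rescheduling. The image, sizes, and names are assumptions.

```yaml
# StatefulSet sketch: stable identities plus one PersistentVolumeClaim per replica.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service providing stable network IDs
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:15   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```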

Kubernetes: Laying the Pillars to Develop Cloud Apps

Other container management and orchestration tools, such as Apache Mesos with Marathon, Docker Swarm, and AWS EC2 Container Service, to name a few, offer great features but carry less weight than Kubernetes.

Docker Swarm is bundled tightly with the Docker runtime, so it is easy to move from Docker to Swarm and vice versa. Mesos with Marathon can deploy any kind of application and is not limited to containers. AWS ECS is easily accessible to existing AWS users.

As these frameworks matured, they began to overlap with one another in features and functionality. Kubernetes, however, stands apart and is likely to remain popular thanks to its architecture, its pace of innovation, and its large open-source community.

Kubernetes paves the way for DevOps by enabling teams to keep pace with software delivery requirements. Without Kubernetes, a development team has to script its own deployments, scale manually, and maintain update workflows; in a large enterprise, a sizable team may handle this task alone. Kubernetes helps teams get maximum utility from containers and build cloud applications without being tied to cloud-specific requirements.

Beyond that, enterprises use Kubernetes because it can be deployed in the company's existing on-premise data center, in a public cloud environment, or even consumed as a managed service. Because Kubernetes abstracts away the underlying infrastructure layer, developers can focus on building applications and then deploy them to any of those environments. This drives adoption, since a company can run Kubernetes on-premise while continuing to build out its cloud strategy.

The Real-World Use Cases of Kubernetes

  • Pokémon Go - The online multiplayer game is one of the best-known showcases of Kubernetes' power. The game was expected to be popular, but after launch it drew roughly 50 times the expected traffic. Using Kubernetes, Pokémon Go was able to scale to keep pace with the unexpected demand.
  • Pearson - Pearson is a well-known global education company serving 75 million learners, with a goal of reaching 200 million by 2025. As it grew, it struggled to scale and adapt to its online audience, and needed a platform that could scale with that audience and deliver products faster. It chose Kubernetes container orchestration for its flexibility, and after adopting the platform saw substantial improvements in productivity and speed of delivery. Provisioning that once took nine months of work on physical assets in a data center now takes just a few minutes.
  • Pinterest - The popular social networking platform had grown to 1,000 microservices running on a varied set of tools and platforms. The company wanted the fastest path to production without making developers worry about infrastructure, so the team turned to a container orchestration platform like Kubernetes to simplify deployment and management of its complicated infrastructure. After deploying Kubernetes, the company reduced build times and improved efficiency.

Top Industries that use Kubernetes

Ref link: idatalabs.com/tech/products/kubernetes

Looking to use Kubernetes? Will your existing architecture need a change?

The startup process may take time: When you create a new deployment, you have to wait for the app to start before it is available to end users. This can be a hurdle if your development process frequently spins up new instances. While migrating to Kubernetes, you may need to change your codebase to make the startup process more efficient so that end users do not get a bad experience (a readiness-probe sketch follows below).
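
One common mitigation, sketched below with assumed paths and ports, is a readiness probe: the Pod receives no traffic from its Service until the app reports that it has finished starting, so slow startup stays invisible to end users.

```yaml
# Readiness probe sketch: the Pod is only added to Service endpoints once
# /ready answers successfully, so users never hit a half-started instance.
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo
spec:
  containers:
  - name: app
    image: example/app:1.0   # hypothetical image
    readinessProbe:
      httpGet:
        path: /ready         # assumed readiness endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
```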

Migrating to a stateless application takes effort: Kubernetes can scale pods up and down during deployment, but if your application is not clustered or stateless, this functionality is of little use, because extra pods will not be configured correctly and cannot be utilized. Getting there may not feel worth it, since you will need to rework the configuration within your applications.

Conclusion

In a short span of time, Kubernetes has grown into an economic powerhouse. Because of its many benefits, companies of all sizes use it to build products and services that meet ever-increasing demand. Its ability to run on both public and private clouds has made it a favorite tool for businesses working with hybrid clouds. If this trend continues, we will see even more companies investing in Kubernetes and container management systems.

So, are you looking for Kubernetes consulting to revamp your existing container management system or to build everything from scratch? Connect with us.

This post was originally published on our blog here

