
Kubernetes: The King Of The Cloud-Native Jungle

by Pavan Belagatti, July 27th, 2020

Too Long; Didn't Read

Google’s Kubernetes (K8s), an open-source container orchestration system, has become the de facto standard — and the key enabler — for cloud-native applications, and the way they are architected, composed, deployed, and managed.

Google’s Kubernetes (K8s), an open-source container orchestration system, has become the de facto standard — and the key enabler — for cloud-native applications, and the way they are architected, composed, deployed, and managed.

Enterprises are using Kubernetes to create modern architectures composed of microservices and serverless functions that scale massively and seamlessly.

Kubernetes sits at the peak of the Gartner hype cycle: everybody wants it, but few people truly understand it. Over the coming years, quite a few companies will have to realize that Kubernetes is not a silver bullet and figure out how to use it properly and efficiently.

Why Kubernetes is so damn popular

Kubernetes solves real problems in the enterprise. It provides the entire ecosystem of capabilities you need: networking, compute and memory allocation, load balancing, object storage, scheduling, and more.

Infrastructure as data:

The resources you need can be represented effortlessly in Kubernetes using YAML files. Having everything defined in YAML gives you the option to manage it under version control, such as Git, as shown in the sketch below. It also helps from a scalability perspective, since everything can be easily changed and updated in the YAML files.
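
For instance, here is a minimal sketch of what "infrastructure as data" looks like: a Deployment manifest that can live in Git next to the application code. The name hello-app and the nginx image are placeholders, not taken from any real setup.

```yaml
# A hypothetical Deployment, version-controlled like any other source file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # placeholder application name
spec:
  replicas: 3                # scaling is a one-line change, reviewed in Git
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
```

Applying it is a single `kubectl apply -f deployment.yaml`, and changing the replica count becomes an ordinary code review.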

Extensibility:

There is a big set of built-in resource types such as StatefulSet, ConfigMap, Secret, CronJob, and more, and users can add their own types via CustomResourceDefinitions (CRDs) according to their needs.
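
As a sketch of that extensibility, here is a hypothetical CRD that teaches the cluster a new Backup resource type; the group example.com and its fields are made up for illustration.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # hypothetical: <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string     # e.g., a cron expression
```

Once applied, `kubectl get backups` works just like it does for any built-in resource.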

Innovation:

Over the last few years, Kubernetes has shipped three or four major releases every year, each with a lot of new features and changes, and the pace shows no sign of slowing down.

Community:

Kubernetes is well known for its strong community and is backed by the CNCF. KubeCon, its flagship conference, is among the largest open-source events in the world and attracts open-source lovers in droves. GitHub's annual survey from 2019 shows that Kubernetes is one of the top ten open-source projects by contributors, and for the last two years Kubernetes has ranked among the platforms most loved by developers, alongside Linux and Docker, in Stack Overflow's annual survey.

Why Kubernetes?

The need for a running environment, always:

Developers, as well as production teams, need to create environments for their daily work at high velocity. This is something easily achieved with Kubernetes.

Branch integration environment with other products’ branches:

For example, an Artifactory developer wants to set up an environment to test their branch against other applications' versions. In the past, they had to set up a VM, install the software, configure the network, and so on, which took a lot of time. With Kubernetes, this is easy to take care of.

Better resource utilization for dev and production:

In Kubernetes, an application can start with a small amount of CPU and memory and grow up to a specified limit according to its consumption, as in the sketch below. This makes it easy for admins to manage resources and saves a lot of cost.
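
Here is how that looks on a single container; the numbers are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical pod
spec:
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      resources:
        requests:            # what the scheduler reserves for the pod
          cpu: "100m"        # 0.1 of a CPU core
          memory: "128Mi"
        limits:              # hard caps the container cannot exceed
          cpu: "500m"        # throttled beyond 0.5 core
          memory: "512Mi"    # OOM-killed if it allocates more
```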

Taking containerized products to production:

The development environment must be as production-like as possible. Having an application running in Docker is nice, but we need much more than that, and that is what a full platform like Kubernetes provides.

Vendor agnostic as much as possible — same API:

Since we must be able to deploy all of our products on the main cloud vendors, we are better off with one API for all of them, and Kubernetes provides exactly that: an abstraction on top of each vendor with a single API.

Auto-scaling and auto-healing of application:

The ability to scale out and heal automatically as conditions change is one of Kubernetes' great features; a sketch of an autoscaler follows.
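
For example, a HorizontalPodAutoscaler (shown here targeting the hypothetical hello-app Deployment from the earlier sketch) grows and shrinks the pod count around a CPU target:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above ~70% average CPU
```

Auto-healing, meanwhile, needs no extra configuration: the Deployment's ReplicaSet replaces any pod that dies.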

Kubernetes' Dark Secret

Running containers in production is no picnic. It requires a lot of effort and compute, and it requires you to solve problems such as fault tolerance, elastic scaling, rolling deployment, and service discovery. This is where the need for an orchestrator like Kubernetes comes in. There are other orchestration platforms, but it's Kubernetes that has gained enormous traction and the support of major cloud providers.

Kubernetes, containerization, and the microservices trend introduce new security challenges. Because Kubernetes pods can be spun up easily across all infrastructure classes, there is by default much more internal traffic between pods, which means a larger attack surface. The highly dynamic, ephemeral environment of Kubernetes also does not blend well with legacy security tools.

Capital One’s cloud guru Bernard Golden has stated, “While Kubernetes-based applications may be easy to run, Kubernetes itself is no picnic to operate.”

Guess what: Kubernetes in production is hard!

Image credits: Wikimedia Commons

Different companies face different challenges, but here is one classic example from JFrog. Let's see how JFrog uses Kubernetes.

Tsofia Tsuriel, a DevOps Engineer at JFrog, describes the company's Kubernetes journey.

In her own words, ‘In the past two years, we moved to deploying and managing JFrog SaaS applications in Kubernetes on the three big public clouds — AWS, GCP, and Azure. During this period, we gained a lot of useful and important lessons. Some — the hard way… In this session, I want to share with you some stories from our journey and the (sometimes hard) lessons learned.’

For easy management of application deployments on Kubernetes, JFrog makes use of Helm, a package manager for Kubernetes whose packages are called charts. Helm charts help in defining, installing, and upgrading Kubernetes applications. JFrog publishes and maintains official Helm charts for all its products in the Helm Hub for use by its customers, the community, and its own SaaS solution.
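
To illustrate (a sketch only; these keys are not from JFrog's actual charts), a chart install is typically customized with a values.yaml override:

```yaml
# Hypothetical values.yaml override, passed as `helm install -f values.yaml ...`
replicaCount: 2
image:
  tag: "7.0.0"          # made-up version
persistence:
  enabled: true
  size: 50Gi
```

Upgrading the application later is then a matter of `helm upgrade` with a new tag, rather than hand-editing manifests.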

So, how is JFrog using Kubernetes for its internal environments?

For development purposes, they have a CI/CD process that installs JFrog products on Kubernetes. When needed, a developer can initiate this process and specify a branch version of their application to install, with the other applications at their master or chosen branch versions. The deployment process uses JFrog's official Helm charts, and the result is an isolated Kubernetes namespace with all the applications installed in it.
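
A sketch of that isolation idea (not JFrog's actual pipeline): each branch gets its own namespace, optionally fenced with a quota so one test environment cannot starve the cluster.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-my-branch        # hypothetical per-branch namespace
  labels:
    purpose: branch-testing
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: branch-quota
  namespace: dev-my-branch
spec:
  hard:
    requests.cpu: "4"        # the whole namespace may request at most 4 cores
    requests.memory: 8Gi
    pods: "20"
```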

For staging and pre-production purposes, they have several managed clusters, at least one per cloud vendor, running full CI/CD processes that install JFrog products along with Kubernetes infrastructure and tooling in order to run tests and reveal bugs before upgrading the production environment.

Three years ago, JFrog investigated various options for self-managing Kubernetes clusters on AWS, GCP, and Azure. They found that, for them, the easiest and most immediate approach would be to use the managed Kubernetes solutions provided by the cloud vendors: EKS, GKE, and AKS.

They understood that managing the clusters themselves would require a lot of resources and skills they didn't have, and that they were better off focusing on the things they are actually good at. Since then, they have had production environments running on EKS, GKE, and AKS in various regions, all through the same API.

A new customer deployment at JFrog is nowadays a self-service, fully automated process with no need for DevOps engineer intervention. The environment is ready to use within several minutes in any supported cloud region.

Using Kubernetes in production, JFrog needs just one or two commands and everything is up and running in its place. With the application auto-healing and auto-scaling, you are free to sail into the sunset with a glass of red wine in your hand.

But JFrog also cautions that it is not that smooth in practice, however it looks in theory. Making the ship strong and resistant to big, stormy waves is a big and necessary effort when using Kubernetes in production.

Lessons learned by the JFrog team from using Kubernetes

1. Visibility

Because of Kubernetes' complexity, it is important to know what is going on in the system. No more SSHing into the server to "get me the logs": developers should not need kubectl access to debug their applications.

2. Dev = Staging = Production

When development and production environments differ, functionality and performance issues are discovered only in production. To minimize the differences between environments and reduce the risk of production outages, create environments that are as production-like as possible, and make sure to use the same Helm charts everywhere (see the sketch below).
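
One common way to achieve this (a sketch, not necessarily JFrog's setup) is a single chart for every environment, with only a small values override differing per environment:

```yaml
# Hypothetical values-staging.yaml: only sizing differs from production,
# everything else comes from the one shared chart.
replicaCount: 2            # production might run 6
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
```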

3. Know your limits

You need to learn how your applications behave in terms of resource usage: memory, CPU, databases, and anything else they need in order to perform and run efficiently with a minimal footprint.

4. Pod priority and pod quality of service

Kubernetes uses several pieces of data when scheduling and evicting workloads on the cluster. Failing to set these parameters properly can lead to performance problems, workload downtime, and significant overall cluster health issues. Resource requests and limits are the most obvious of these settings, and the lifecycle of the pod also has to be considered thoroughly. A sketch of both options follows the lists below.

Things to consider:

  • Is your app CPU- or memory-intensive?
  • How easy is it for your app to come up on another node?

Options:

1. Higher pod priority to important apps

2. Pod QoS (Quality of Service)

  • Guaranteed (requests=limits)
  • Burstable (some resources set)
  • Best effort (no resources set)
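
A sketch of both options together: a hypothetical PriorityClass for important apps, and a pod whose requests equal its limits, which places it in the Guaranteed QoS class (evicted last under node pressure).

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-apps        # hypothetical class name
value: 100000                # higher value = scheduled first, evicted last
globalDefault: false
description: "For business-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  priorityClassName: critical-apps
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      resources:             # requests == limits => Guaranteed QoS
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```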

5. Zero downtime upgrades

We should aim to minimize service downtime for any reason, and an application version upgrade is something we should be able to run whenever we want. Running the application in high-availability mode with several load-balanced pods eliminates the risk of downtime when upgrading the environment. With Kubernetes, you have the option of a rolling update: at any given time only one pod is taken down for the upgrade while the others keep serving, and each is upgraded in its turn, as sketched below.
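
Revisiting the hypothetical hello-app Deployment, a rolling-update strategy plus a readiness probe is what makes the upgrade invisible to users; the probe path is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most one pod down at any moment
      maxSurge: 1                # at most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:1.25      # bumping this tag triggers the rollout
          readinessProbe:        # traffic reaches a pod only after this passes
            httpGet:
              path: /            # placeholder health endpoint
              port: 80
            initialDelaySeconds: 5
```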

6. Security

JFrog believes in eating its own dog food and uses several of its own tools in daily tasks. They believe the best way to manage their Docker images and Helm charts is with Artifactory, so they run an internal Artifactory server as both a Docker repository and a Helm repository. During the deployment process, everything the deployment needs is fetched from Artifactory, giving them full control over, and visibility into, what's running in their development and production Kubernetes clusters. On top of that, Xray scans all the third-party Docker images stored in that Artifactory, so only scanned and approved images make their way to the Kubernetes clusters. A sketch of the idea follows.
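
Here is the last mile of that setup as a sketch; the registry hostname and secret name are made up, not JFrog's real values. The cluster pulls only from the internal registry, authenticating through a pull secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: internal-app
spec:
  imagePullSecrets:
    - name: artifactory-docker-creds   # hypothetical Secret holding registry credentials
  containers:
    - name: app
      # hypothetical internal registry path; only scanned, approved images live here
      image: artifactory.example.internal/docker-local/app:1.0
```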

7. Continuous learning

Learning is a big, never-ending process. Kubernetes is still a relatively new technology for almost everyone, so developers and Ops teams must learn how it works and how to use it, and keep up with the best practices and recommendations available all over the internet.

You should continually tune your infrastructure, your applications, and their behavior, and keep checking how your applications behave over time.

Here is a whitepaper on Kubernetes best practices for taking your containers all the way to production. It gives you a deeper understanding of the practices to take into account when running Kubernetes in production.

Conclusion

There is so much hype around Kubernetes, but it is evident by now that it is the future of many things in IT. Still, companies should consider their own scenarios carefully before fully committing to it. Kubernetes is certainly not a silver bullet, so don't dream that adopting it will solve all your problems; after a careful analysis of the objectives you want to achieve, if it makes sense, go for it. Adopting it only because of the hype, to look cool, or because your competition is using it serves no purpose and can backfire in many ways. Kubernetes is the king of the jungle for now, but who knows, tomorrow some other cloud-native tool might appear and dethrone it. Until then, let's keep deploying fast and hail the king of the cloud-native jungle: Kubernetes.

Title Image credits: ship's wheel in front of stormy sea