On Building A Development Pipeline With Kubernetes

by Spruha Pandya, April 27th, 2021

A few years ago, digital transformation led enterprises to move away from traditional monolithic architectures toward microservices.

When Netflix adopted microservices in 2009 for enhanced availability, scale, and speed, and openly documented its journey along the way, it kick-started a trend that led to a complete digital transformation and made monolithic architecture look obsolete. But as things go in technology, with new solutions comes a set of unprecedented challenges. Using VMs to manage and run multiple microservices together negates the benefits of microservices.

Thus came containers. Though the concept of virtualized containers dates back to 1979, their popularity has been soaring only since 2012. Containers offer an ideal solution for running microservices in isolated environments, and as they became mainstream, app development methods evolved with them. Containers changed the way applications are built, designed, developed, packaged, delivered, and managed, paving the way for speedy, agile innovation and improved customer experience.

To keep up with the sheer volume of containers, the IT capabilities and tools used to deliver these applications needed to evolve as well, and they are still evolving to this day. Kubernetes became part of the solution by offering efficient orchestration of containers in production. Kubernetes enables container orchestration, adds resilience, reliability, and portability to applications, and helps enterprises build and maintain production-scale deployments.
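
To make that concrete, here is a minimal sketch of what that orchestration looks like in practice: a Deployment manifest asking Kubernetes to keep three replicas of a containerized service running, rescheduling them when pods or nodes fail. The names and image used here are placeholders, not a real service.

```yaml
# Minimal Deployment sketch: Kubernetes keeps three replicas of this
# container running, rescheduling them if a pod or node fails.
# "example-api" and the image reference are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```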

Kubernetes is the de facto standard for container orchestration, but before jumping into Kubernetes-based deployments, organizations need to realize that Kubernetes is just one piece of the puzzle in an agile DevOps system. Real-world applications require a set of supporting systems to achieve an organization's cloud-native goals.

Thus, before jumping head-first into Kubernetes adoption, there are a few things you need to consider.

Key considerations for building your deployment pipeline for Kubernetes

A high-performance container deployment system

On-premise container deployment on bare-metal servers, or public cloud deployments on virtual machines? This is a conundrum that has been troubling every enterprise IT team, and the question is too subjective to have a definitive answer.

General practice dictates the use of bare-metal servers for data-intensive workloads, to retain better control over data security. But that is not written in stone. Some IT teams try to reduce the total cost of ownership by deploying data-focused containers, such as those for AI and analytics, on bare-metal servers; that, however, requires the organization to manage its servers on-premise, which increases management costs.
There are also limitations around hiring server-management talent and handling IT dependencies. These limitations drive many organizations to opt for managed cloud services for ease of management and cluster deployment. At times, organizations choose a hybrid infrastructure, combining bare-metal and cloud servers to get the best of both worlds.
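
As a rough sketch of how such a hybrid setup can be expressed, nodes can be labeled by infrastructure type and data-intensive workloads pinned to bare-metal machines with a nodeSelector. The node-type label below is an assumed convention for illustration, not a built-in Kubernetes label.

```yaml
# Hypothetical hybrid-cluster convention: bare-metal nodes are labeled
#   kubectl label node <node-name> node-type=bare-metal
# and data-intensive pods are pinned to them via nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: analytics-worker
spec:
  nodeSelector:
    node-type: bare-metal   # assumed custom label, not a Kubernetes default
  containers:
    - name: worker
      image: registry.example.com/analytics-worker:latest
```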

Before you bring Kubernetes into the picture, you have to decide and plan where your containers will be deployed: the public cloud, on-premise servers, or a combination of both. Clarity on this will help you plan, execute, and scale your deployment strategies efficiently. If you need to make major changes to the infrastructure after your complete deployment system is set up, you will have to revamp a lot of dependencies, and your deployment system will require remodeling to match the changes.

The initial stage of your deployment system will be complex and confusing, so start by ironing out the nitty-gritty of your infrastructure requirements. In the beginning, it is okay to test and change what does not work for you.

Container security

Considering the multitude of security breaches all around us, let's face it: you will have to take security measures into account at some point while scaling your enterprise application. Ensuring that the data in containers, and the containers themselves, are not compromised comes as part of the job of running business-critical applications on containers.

Additionally, Kubernetes is an open-source orchestration tool, and with all the benefits it brings to the table, it adds a few security concerns as well. Clusters and pods on Kubernetes are not secure by default. So, on top of securing your containers, you have to take measures to ensure that the clusters and nodes that deploy those containers are secure as well. Correct security policies need to be in place so that a compromised or misconfigured container does not lead to unauthorized access to your workloads or resources. Along with this, you will need a system for performing security checks at various stages of the deployment cycle.
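
One concrete starting point for such policies, sketched here as an illustration rather than a complete security posture, is a default-deny NetworkPolicy: pods in a namespace accept no traffic until an explicit allow rule is added.

```yaml
# Default-deny ingress for every pod in the "production" namespace.
# A compromised container elsewhere cannot reach these workloads
# until an explicit allow policy is added for each legitimate path.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress is denied
```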

When you get started with container deployments on Kubernetes, your security policies will likely not be up to the mark, and errors may slip through the cracks and make it into production. With time, this should reduce as your security system evolves and becomes iron-clad.

The IT team, as well as the developers, share the responsibility of ensuring that the containers, the Kubernetes clusters they run on, and the code that runs within the containers are free of security bugs. It helps to have a set of security practices and checks to follow before every deployment. Controls on what goes into the containers, such as a trusted image registry, image signing, careful packaging, and CVE scans, also add a further layer of security.
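
As an illustration of such a pre-deployment check, here is a hedged sketch of a CVE-scan gate in CI. It assumes GitHub Actions and the open-source Trivy scanner, which are choices made for the example, not tools prescribed by this article.

```yaml
# Hedged sketch of a CVE-scan gate, assuming GitHub Actions and the
# open-source Trivy scanner (one of several such tools). The build
# fails if HIGH or CRITICAL vulnerabilities are found in the image.
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t example-api:${{ github.sha }} .
      - name: Scan image for CVEs
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: example-api:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: "1"   # non-zero exit fails the pipeline on findings
```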

Cluster visibility and management

Containers are lightweight and easy to create, but their sheer volume can be overwhelming. Container deployments on Kubernetes often scale up to hundreds or thousands of pods across multiple clusters. Managing such a multitude of containers and pods in production is challenging if you do not plan ahead. Without visibility across all of your deployments, you may be unable to diagnose severe failures, resulting in service interruptions that directly impact customer satisfaction and business continuity.

Having a system for monitoring your Kubernetes infrastructure provides detailed metrics, which in turn give you complete visibility across all your clusters and deployments. Access to these metrics also helps you make the best use of the resources at hand, improving efficiency and reducing costs.
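
As one common way to wire this up, assuming the Prometheus Operator is installed in the cluster (an assumption made for this sketch, not a requirement of the article), a ServiceMonitor tells Prometheus which services to scrape for metrics:

```yaml
# Hedged sketch, assuming the Prometheus Operator CRDs are installed.
# Prometheus discovers and scrapes /metrics on every Service labeled
# app=example-api, giving per-pod visibility without manual config.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-api-metrics
spec:
  selector:
    matchLabels:
      app: example-api
  endpoints:
    - port: http       # named port on the Service, assumed here
      path: /metrics
      interval: 30s
```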

It is advisable to track and log all usage and performance metrics in one place. This gives you a holistic view across all cloud providers, private data centers, servers, networks, and every individual VM or container. You can either have an in-house team manage your clusters or look for managed Kubernetes service providers.

Disaster recovery

The bigger your infrastructure, the larger the risk of it crumbling down. For large infrastructures, an extensive disaster recovery system is essential to ensure business continuity.

Once Kubernetes runs these applications in production, they are accessed by large numbers of users. A huge amount of business-critical data is consumed and produced at the same time, and there are bound to be bugs and crashes that cause downtime. Kubernetes offers some resilience by restarting a failed pod afresh, but there is nothing Kubernetes can do if an entire data center collapses.
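
Part of that resilience is configurable rather than free: a liveness probe, sketched below with placeholder paths and timings, lets the kubelet detect and restart a container that is still running but no longer healthy.

```yaml
# Sketch of the self-healing described above: if /healthz stops
# responding, the kubelet restarts the container automatically.
# Path, port, and timings are placeholders to adapt to your service.
apiVersion: v1
kind: Pod
metadata:
  name: example-api
spec:
  containers:
    - name: api
      image: registry.example.com/example-api:1.0.0
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
        failureThreshold: 3   # restart after three consecutive failures
```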

Thus, for mission-critical applications, the IT team must make provisions to ensure that data is highly available and quickly recoverable if the underlying infrastructure fails. The way Uber uses uReplicator with Kafka is a brilliant example of a disaster recovery system. The goal is to have a system that reduces downtime as much as possible so that business continuity is not compromised.
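
As a hedged sketch of what such a provision can look like on Kubernetes, the open-source Velero project (one option among several, and an assumption of this example rather than something the article mandates) can back up critical namespaces to off-cluster object storage on a schedule:

```yaml
# Assuming Velero is installed with an off-site object-storage
# location: back up the "production" namespace nightly and keep
# each backup for 30 days, so a lost cluster can be restored.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-production-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"    # 02:00 every day, cron syntax
  template:
    includedNamespaces:
      - production
    ttl: 720h              # retain each backup for 30 days
```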

A CI/CD pipeline for Kubernetes deployments

It goes without saying that you will need a CI/CD pipeline to accelerate releases across all your Kubernetes deployments. A DevOps CI/CD pipeline is critical for maintaining the quality and stability of applications in production.

The continuous integration pipeline covers building, integrating, and testing the app, along with a trigger that initiates the continuous deployment pipeline. CI pipelines are usually built using GitOps methodologies, with a 'git push' triggering the CD pipelines that build, test, and run the deployment processes (a minimal sketch follows below). You can choose an automated deployment strategy, such as blue/green or canary, depending on your needs. Note that it is important that your DevOps CI/CD pipelines connect easily with Kubernetes to ensure seamless deployments.
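
Here is the minimal sketch referred to above: a push-triggered pipeline that builds, tests, and rolls out an image with plain kubectl. All names, registries, and secrets are placeholders, and in a pull-based GitOps setup a controller such as Argo CD or Flux would replace the final step.

```yaml
# Hedged sketch of a push-triggered CI/CD pipeline: build and test
# on every push to main, then roll the new image out to Kubernetes.
# Registry, credentials, and resource names are placeholders, and
# the runner is assumed to have kubeconfig access to the cluster.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Build and push image
        run: |
          docker build -t registry.example.com/example-api:${{ github.sha }} .
          docker push registry.example.com/example-api:${{ github.sha }}
      - name: Deploy to cluster
        run: |
          kubectl set image deployment/example-api \
            api=registry.example.com/example-api:${{ github.sha }}
```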

You can either:
a. Choose to build your CI/CD pipeline from scratch in-house
b. Choose a set of tools that fulfill the required functionalities and assemble them into a streamlined software delivery workflow
c. Look for a pre-built platform that implements and automates your CI/CD pipeline

You can choose any of these alternatives depending on how large your deployments are and the resources at your disposal. If you opt for option 'a', know that implementing a DIY Kubernetes solution requires a team responsible for upgrading and maintaining the whole system, including version updates for containers, Kubernetes, and all the relevant tools. The operations team would also have to set up a separate upgrade and test cycle for the CI/CD solution itself, which can become a bottleneck for your app deployments if not managed well. The upside of all this effort is that you will have complete control over your system.

If you choose option 'b' or 'c', you will have to go through rigorous testing and verification while setting up the pipelines, but once they are in place, they will not require much maintenance on your part to keep running smoothly. Relying on a software delivery automation platform makes you dependent on it for your deployments, but it also reduces the total cost of ownership and maintenance of your applications.

Conclusion

Once you have all the above-mentioned prerequisites in place, you are ready to deploy your applications to Kubernetes at scale. What other prerequisites would your software delivery pipeline to Kubernetes require? Tell us in the comments.