How to Maximize Cloud ROI with Containerization - Part 2

by Mustufa Batterywala July 12th, 2022



In part one of this blog, we discussed containerization and what’s driving its adoption across enterprises. We also discussed some major challenges of containerization in terms of its complexity, tech stack monitoring, implementation, security, and optimization. Now we will discuss how organizations can overcome these challenges and make the most of their investments in the cloud.

Increasing Containerization Efficiency with Automation

As we discussed earlier, enterprises face several challenges in application modernization and container adoption. Automation plays a crucial role in expediting containerization initiatives.


It usually takes 2-3 weeks to set up a production-grade Kubernetes cluster, as the process involves multiple time-consuming steps: taking stock of all installation prerequisites and dependencies, hardening the security infrastructure, configuring monitoring and observability solutions, and more. For reference, here are some of the typical steps in setting up a Kubernetes (K8s) cluster:


  • Infrastructure provisioning and access (VPC, EC2)

  • Prepare servers for K8s

  • Configure ports and connectivity (Set up NACLs, Security groups)

  • Install Kubernetes binaries

  • Configure SSL/TLS

  • Initialize master node

  • Set up metadata store

  • Set up additional masters for HA

  • Add worker nodes to a cluster

  • Configure monitoring (Prometheus, Grafana)

  • Configure log monitoring (ELK/EFK)

  • Install additional tools (Helm)

  • Perform cluster sanity checks and run the CIS benchmark


While organizations may consider managed Kubernetes services for such processes, it usually takes a lot of research to set up and manage clusters efficiently. Automating the installation of the K8s cluster on Cloud (IaaS) can save significant time and effort for organizations.
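As a sketch of what such automation might look like, the checklist above can be driven by a small orchestration script that runs each step in order and stops at the first failure. The step names and commands below are illustrative placeholders, not a complete production installer:

```python
import subprocess

# Ordered steps condensed from the checklist above; the commands are
# placeholders for illustration, not a full kubeadm-based install.
CLUSTER_STEPS = [
    ("provision-infra", "terraform apply -auto-approve"),
    ("install-binaries", "apt-get install -y kubelet kubeadm kubectl"),
    ("init-master", "kubeadm init --pod-network-cidr=10.244.0.0/16"),
    ("join-workers", "kubeadm join <master-ip>:6443 --token <token>"),
    ("install-monitoring",
     "helm install prometheus prometheus-community/kube-prometheus-stack"),
]

def run_steps(steps, dry_run=True):
    """Execute steps in order, stopping at the first failure.

    With dry_run=True the script only reports the execution order,
    which is useful for reviewing the plan before touching real infra.
    """
    completed = []
    for name, cmd in steps:
        if not dry_run:
            result = subprocess.run(cmd, shell=True)
            if result.returncode != 0:
                raise RuntimeError(f"step {name!r} failed")
        completed.append(name)
    return completed
```

In practice, tools like Terraform, Ansible, or managed-service APIs would replace the raw shell commands, but the value is the same: a repeatable, ordered, fail-fast install instead of weeks of manual work.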


It is important to note here that the containerization lifecycle is similar to any other software development lifecycle as it includes three distinct phases:


  • Planning and Designing: deciding on the right infrastructure, networking, storage, container runtime and patterns, and image layers.

  • Build and Test: creating images, setting up an image repository, continuous testing, security, tagging, and integration.

  • Deploy and Manage: deployment patterns, monitoring/tracing, debugging, auto-scaling, optimization, and fault tolerance.


Though one can automate only some of the steps in the initial discovery and planning stages, there’s significant scope for automation of builds, deployment, and monitoring. For instance, image scanning is a critical step that should be automated in the continuous integration pipelines for effective security.

Continuous Integration for Containers

Continuous Integration (CI) as a DevOps process involves automating builds and image creation, and pushing the resulting images to a repository for deployment. Implementing CI for containers can be complex, and organizations need to carefully choose the right approach.


For instance, there are enterprises that have heavily invested in on-premises tools (Jenkins, SonarQube, Ansible, etc.) and need to move only some of their applications to containers on the cloud. In such a case, organizations can continue to use Jenkins for builds, quality checks, unit tests, and image creation, but can add a step for automated scanning of Docker images. This helps detect vulnerabilities early and allows organizations to meet their security compliance requirements. The approach doesn't involve major changes to the existing toolchain and lets organizations adopt any cloud as their requirements evolve.
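Such a scan step can gate the pipeline on the scanner's exit code. The sketch below uses Trivy as one example of an image scanner; the `runner` parameter and the image name are illustrative additions to make the gate easy to wire into any CI tool:

```python
import subprocess

def scan_image(image, severities="HIGH,CRITICAL", runner=subprocess.run):
    """Run a Trivy scan as a CI gate; a non-zero exit code means findings."""
    cmd = [
        "trivy", "image",
        "--exit-code", "1",        # make Trivy exit non-zero on findings
        "--severity", severities,  # only gate on the severities that matter
        image,
    ]
    return runner(cmd).returncode == 0  # True means the image is safe to push
```

In a Jenkins pipeline this would run as a shell step between `docker build` and `docker push`, failing the build whenever HIGH or CRITICAL vulnerabilities are found.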


On the other hand, there are enterprises that run most of their workloads on Kubernetes and already manage large clusters. In such cases, teams can leverage cloud-native tooling and run their CI tools inside the container platform itself, for example, Jenkins inside Kubernetes. With dynamically provisioned build agents, a large queue of builds can run in parallel. This approach offers significant benefits in terms of on-demand scalability for builds, higher speed, and infrastructure cost savings.
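The scalability benefit comes from fanning build jobs out across a bounded pool of agents rather than processing the queue serially. A minimal stdlib sketch of that pattern (with a placeholder build function standing in for checkout, compile, test, and image build):

```python
from concurrent.futures import ThreadPoolExecutor

def run_build(job):
    # Placeholder for a real build: checkout, compile, unit tests, image build.
    return f"{job}: ok"

def drain_queue(jobs, max_agents=4):
    """Run queued build jobs in parallel, as elastic CI agents would.

    max_agents plays the role of the agent pool size that a
    Kubernetes-hosted Jenkins would scale up and down on demand.
    """
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        # map() preserves job order in the results, even though
        # the builds themselves execute concurrently.
        return list(pool.map(run_build, jobs))
```

On Kubernetes, the equivalent is each agent running as a short-lived pod, so the pool grows with the queue and disappears when the queue is empty, which is where the cost savings come from.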

Improving Observability in Containerized Applications

While Kubernetes has several advantages, it is inherently complex, as it includes many components that need to be monitored continuously. Moreover, because containers are ephemeral, traditional tools and approaches for monitoring resources on virtual or bare-metal servers aren't as effective. Organizations require an end-to-end observability solution that can help them quickly monitor all container images and stay on top of the performance of their microservices-based applications. The observability solution should offer actionable insights into container health and performance data, simplify the analysis of resource usage across clusters and namespaces, and help plan resource sharing across different environments. Organizations should also be able to monitor and set resource limits (budgets) to optimize their costs.
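The budget check at the end of that list reduces to comparing per-namespace usage against per-namespace limits. A small sketch, with hypothetical namespace names and units (CPU in millicores, memory in MiB):

```python
def check_budgets(usage, budgets):
    """Flag namespaces whose CPU or memory usage exceeds their budget.

    usage / budgets map namespace -> {"cpu_m": millicores, "mem_mi": MiB}.
    Returns only the breaching namespaces and the resources they exceed.
    """
    over = {}
    for ns, used in usage.items():
        budget = budgets.get(ns)
        if budget is None:
            continue  # no budget set for this namespace
        breaches = {
            res: used[res]
            for res in ("cpu_m", "mem_mi")
            if used.get(res, 0) > budget.get(res, float("inf"))
        }
        if breaches:
            over[ns] = breaches
    return over
```

In a real setup the usage numbers would come from the metrics pipeline (e.g., Prometheus queries), and breaches would feed alerts or showback reports rather than a return value.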


Using open-source frameworks like OpenTelemetry can help organizations collect and monitor telemetry data to improve observability. OpenTelemetry offers several APIs, SDKs, and vendor-agnostic tools to generate high-quality telemetry data. It can also help organizations define and implement custom metrics and traces for their containerized applications.
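To make the idea of a custom metric concrete, the stdlib-only sketch below models the kind of labeled counter that OpenTelemetry's meter API provides; it is a simplified stand-in for illustration, not the OpenTelemetry API itself:

```python
from collections import defaultdict

class LabeledCounter:
    """Toy stand-in for an OpenTelemetry-style counter.

    It keeps a running total per label set; in a real deployment an
    exporter would periodically ship these totals to a backend such
    as Prometheus instead of holding them in memory.
    """
    def __init__(self, name):
        self.name = name
        self._totals = defaultdict(int)

    def add(self, value, labels=()):
        # Sort labels so ("pod", "code") and ("code", "pod") orderings
        # resolve to the same time series.
        self._totals[tuple(sorted(labels))] += value

    def total(self, labels=()):
        return self._totals[tuple(sorted(labels))]

# Example: counting HTTP responses per pod and status code.
requests = LabeledCounter("http_requests_total")
requests.add(1, labels=(("pod", "checkout-7d9f"), ("code", "200")))
requests.add(1, labels=(("pod", "checkout-7d9f"), ("code", "200")))
```

With the real OpenTelemetry SDK, the same instrument would be created from a meter and wired to an exporter, so the data lands in whichever vendor-agnostic backend the organization already uses.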


Consolidating data and insights is another critical aspect of observability. Organizations often struggle to extract actionable intelligence in time because their data is spread across multiple tools and dashboards. An integrated dashboard offering contextual navigation across tools can improve observability to a great extent. Further, the implementation of distributed tracing is critical for debugging cloud-native applications.
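The mechanical core of distributed tracing is context propagation: each service forwards the trace id while minting a new span id for its own work. The sketch below builds headers in the W3C Trace Context `traceparent` format (`version-traceid-spanid-flags`), which OpenTelemetry propagators also use:

```python
import secrets

def new_traceparent():
    """Start a trace: 32-hex-char trace id, 16-hex-char span id,
    version 00, flags 01 (sampled)."""
    trace_id = secrets.token_hex(16)   # 32 hex characters
    span_id = secrets.token_hex(8)     # 16 hex characters
    return f"00-{trace_id}-{span_id}-01"

def child_header(parent):
    """Propagate to the next hop: keep the trace id, mint a new span id."""
    version, trace_id, _old_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"
```

Because every hop shares one trace id, the observability backend can stitch the spans from all services back into a single end-to-end request timeline, which is what makes cross-service debugging possible.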


Last but not least, the observability solution should be able to meet the requirements of people in different roles. For example, a business decision-maker would need a high-level view of the environment for strategic decisions, while DevOps teams would want to drill down quickly to trace a vulnerability to its original line of code or troubleshoot issues.


Stay tuned for part three of this blog series to learn about container security and optimization.