
8 Things Developers Forget in Application Migration To Kubernetes

by Tanya Chayka, October 23rd, 2023

Too Long; Didn't Read

This article offers key advice for developers transitioning their applications to Kubernetes. It emphasizes the importance of stateless architecture, health checks, resource predictability, and the usage of Kubernetes entities like ConfigMaps and Secrets. Additionally, it highlights the need for graceful container shutdown, state independence, and the proper use of reverse proxies and SSL certificates via cert-manager. These best practices will help ensure a smooth application migration to Kubernetes.


Hi! My previous article, Containerization In 2023: Strive for Maximum Modularity via the Cattle Model, was warmly received on HackerNoon. Today, I continue to share my knowledge with you!


Kubernetes is fast becoming a development standard, yet its barrier to entry remains quite high. I have collected a list of recommendations for application developers who are migrating their applications to the orchestrator. Knowing these points will help you avoid potential problems and keep you from turning Kubernetes advantages into limitations.


Content Overview

  • Who is this article for?
  • Kubernetes basics
  • Choose stateless architecture
  • Ensure you have endpoints to check the status of your applications
  • Try to make application consumption more predictable and uniform
  • ConfigMaps, secrets, environment variables - use these Kubernetes entities
  • Ensure graceful container shutdown with SIGTERM
  • The application should not depend on the requesting pod
  • Reverse proxy and HTTPS
  • Leave SSL certificate management to Kubernetes
  • Conclusion


Who is this article for?



This article is intended for developers whose teams lack DevOps expertise and have no dedicated DevOps specialists. It will take you on a fascinating journey into Kubernetes, because microservices are the future of development. Kubernetes is an excellent solution for orchestrating containers and automating development processes, thus accelerating code delivery to testing and production environments.


Be that as it may, certain nuances are important to be aware of during the architecture and development planning phase. If you consider them before the project moves into the hands of DevOps experts, you will avoid problems in the future. Your app will run flawlessly in the cluster. My recommendations cover these subtleties.


So, this Kubernetes article is useful both for those writing code from scratch with the intention of running it in Kubernetes and for those with an existing application that needs to be migrated to Kubernetes.



Kubernetes basics


Before you start your journey into the world of Kubernetes, make sure you are well acquainted with the gold standard: The Twelve-Factor App. This publicly available guide lays out the key principles of modern web application architecture. You are probably already applying some of these principles, but they will become even more important as you move on to working with Kubernetes.


Now, let's move on to 8 things developers forget during application migration to Kubernetes.


Choose stateless architecture


When striving for fault tolerance in the context of Kubernetes, favoring stateless applications has many advantages: far less effort and expertise is needed to ensure their reliable operation.


Kubernetes, in its normal operation, can shut down and restart nodes. This happens in the case of autohealing, where a node stops responding and is then recreated. It also happens in the case of autoscaling to reduce the number of nodes (e.g., when some nodes are not loaded and are excluded to save resources).


In an environment where Kubernetes nodes and pods can be dynamically deleted and recreated, your application must be prepared for such changes. It should not store any data that requires persistence in the container in which it runs.

What should you do?

To ensure reliability and scalability, your application should be designed so that data is written to databases, files are saved to S3 storage, and the cache lives in Redis or Memcached. This allows the application to store data "outside" the container, makes it easier to scale the cluster as the load increases, and provides data replication.
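As an illustration, here is a minimal sketch of a Deployment fragment for such a stateless service, with all state kept outside the container. The image, hostnames, and variable names (DATABASE_URL, S3_BUCKET, REDIS_HOST) are hypothetical placeholders for your own external services:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3                   # any replica can be deleted and recreated safely
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          env:
            # All persistent state lives outside the container:
            - name: DATABASE_URL      # relational data goes to a database
              value: postgres://db.example.com:5432/app
            - name: S3_BUCKET         # files go to object storage
              value: my-app-uploads
            - name: REDIS_HOST        # the cache lives in Redis
              value: redis.example.com
```

Because no replica holds unique data, Kubernetes is free to kill, move, or multiply any of them without losing anything.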


In the case of stateful applications where data is stored in connected volumes, things get more complicated. When scaling these applications, you must manage the "volumes", ensuring they are properly connected and created in the right zone. And what to do with the data when the replica has to be deleted?


Of course, there are business applications for which a stateful approach is the right approach. However, in this case, additional steps must be taken to manage them in Kubernetes. This may include using Operators and other internal tools that provide the necessary functionality. An example of this approach is the PostgreSQL operator (postgres-operator). However, it should be noted that this is a much more complex and time-consuming way than simply packing code into a container, setting up a few replicas, and watching everything work without additional complexity.


Ensure you have endpoints to check the status of your applications


We have already emphasized that Kubernetes automatically monitors the health of your application, including restarting it when failures are detected, disconnecting it from the load, migrating it to less loaded nodes, and limiting resources. To allow the cluster to effectively monitor the health of your application, it is critical to provide access points for health checks, called "liveness probes" and "readiness probes". These mechanisms in Kubernetes allow the system to monitor the application and make appropriate decisions based on its current state.

What should you do?

Liveness probes provide the ability to determine when a container should be restarted to ensure its continuous operation and avoid application hangs.


Readiness probes, on the other hand, determine when the container is ready to accept network traffic. If the checks fail, the container is not restarted; it is simply taken out of load balancing so that new requests are no longer directed to it. This can be used to let the application "digest" the incoming stream of requests without performance loss. Once several consecutive readiness checks succeed, the container is brought back into load balancing and begins serving requests.


These checks are a powerful tool in Kubernetes, but it is important to configure them correctly to avoid unwanted consequences. For example, misconfigured liveness and readiness probes can break application updates during deployment or degrade performance. In some cases, misconfiguration can cause cascading restarts of pods, which can lead to unpredictable consequences.


Suppose you have an HTTP endpoint that can serve as a comprehensive indicator of application state. In that case, it is recommended that you configure both the liveness and the readiness probe to use that endpoint. By sharing the same endpoint, you ensure that the container is restarted if it fails to return a correct response to this request.
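Here is a minimal sketch of what this looks like in a container spec, assuming the application exposes a hypothetical /healthz endpoint on port 8080:

```yaml
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0   # placeholder image
    livenessProbe:                # a failing probe restarts the container
      httpGet:
        path: /healthz            # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10     # give the app time to start
      periodSeconds: 10
      failureThreshold: 3         # 3 consecutive failures trigger a restart
    readinessProbe:               # a failing probe removes the pod from load balancing
      httpGet:
        path: /healthz            # the same endpoint, as recommended above
        port: 8080
      periodSeconds: 5
      successThreshold: 2         # 2 consecutive successes restore traffic
```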


Try to make application consumption more predictable and uniform


Containers running inside Kubernetes pods usually have limits on resource consumption, in terms of both memory and CPU time. Exceeding these limits can lead to undesirable consequences. For example, errors in configuring CPU time limits can cause throttling, restricting the CPU time available to the container, while exceeding a memory limit gets the container killed. And if you promise an application more memory than you actually have on a node, Kubernetes will evict the lowest-priority pods to free up resources under increasing load.


Of course, these limits can be customized, but the best practice is to make your application's resource consumption more predictable and uniform. The more evenly your application consumes resources, the more efficiently the load can be managed.

What should you do?

First, evaluate your application and determine how many requests it processes and how much memory it requires. Then, consider how many pods you need to run to evenly distribute the load among them. Maintain a balance in resource consumption by avoiding situations where one pod consumes much more than the others. Such an imbalance can lead to constant restarting of Kubernetes pods and jeopardize the reliable operation of the application.
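Once you have these numbers, declare them via requests and limits in the container spec. The figures below are placeholders to replace with your own measurements:

```yaml
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0   # placeholder image
    resources:
      requests:            # what the scheduler reserves for this pod
        cpu: 250m          # 0.25 of a CPU core
        memory: 256Mi
      limits:              # hard ceiling: exceeding the memory limit kills
        cpu: 500m          #   the container; hitting the CPU limit throttles it
        memory: 512Mi
```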


In parallel with setting resource limits, it is important to implement performance monitoring in your pods. This may involve using tools such as kube-prometheus-stack, VictoriaMetrics, or even Metrics Server. Monitoring helps you identify and fix problem areas and revise your resource allocation strategy. Robust resource configuration and continuous monitoring will help ensure that your application runs consistently and predictably.

Specifics of CPU time management in Kubernetes

When developing applications for Kubernetes, it is essential to consider the specifics of CPU time management. This will avoid deployment issues and code reworking to meet SRE experts’ requirements.


Suppose you have a container with a limit of 500 milli-CPUs, which translates to about 50 milliseconds of CPU time per 100-millisecond scheduling period. If your application consumes CPU time in multiple continuous threads (say, 4 threads), it "gobbles up" all 50 available milliseconds in about 12.5 milliseconds of real time, and the threads are then "frozen" by the system for the remaining 87.5 milliseconds until the next quota period begins.


An example of this behavior is staging databases running in Kubernetes with limited resources. When the query load increases, processing can suddenly slow down, causing a temporary performance hit.


If your performance graphs show that the application load gradually increases and then latency increases dramatically, you have probably encountered this specific issue. In this case, it is important to perform adequate resource management. You can increase the resource limits for replicas or increase the number of replicas to reduce the load on each replica.


You should also consider that careful performance monitoring and regular resource tuning will help you avoid problems and ensure that your Kubernetes application is stable.


I recommend reading Assign CPU Resources to Containers and Pods to dive deeper into this topic.


ConfigMaps, secrets, environment variables - use these Kubernetes entities

Several objects in Kubernetes make developers' lives much easier. Knowing about them will save you from rebuilding container images whenever you need to make a configuration change, such as changing a password. These tools let you manage your applications and their settings across different environments more flexibly.

ConfigMap


ConfigMap is a Kubernetes object designed to store non-confidential data as key-value pairs. Pods can consume ConfigMaps as environment variables or as configuration files by mounting them as volumes.


Imagine you are developing an application that must run both locally (for development) and in the cloud. You create an environment variable like DATABASE_HOST to tell the application how to connect to a database. On the local machine, you point this variable at localhost. When running in the cloud, however, you must specify a different value, such as the host of the external database.


Environment variables allow you to use the same Docker image for both local development and cloud deployment. You do not need to rebuild the image for each configuration. You can dynamically change the value of this environment variable in different environments.
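A minimal sketch of this pattern, using a hypothetical ConfigMap named my-app-config and the DATABASE_HOST variable from the example above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config              # hypothetical name
data:
  DATABASE_HOST: db.example.com    # this value differs per environment
---
# In the pod spec, the value is injected as an environment variable:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # the same image everywhere
      env:
        - name: DATABASE_HOST
          valueFrom:
            configMapKeyRef:
              name: my-app-config
              key: DATABASE_HOST
```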

Volume


Volume allows you to "mount" files and directories into Kubernetes containers. This is especially useful when you have configuration files that need to differ between environments (e.g., dev, test, prod). Instead of building separate Docker images for each environment, you can "mount" different configuration files when the pod starts.


If the config files are small, you can use them as environment variables. It all depends on the requirements of your application.
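Here is a minimal sketch of mounting the hypothetical my-app-config from above as files, with /etc/my-app as a placeholder config directory:

```yaml
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      volumeMounts:
        - name: config              # mount the ConfigMap contents as files
          mountPath: /etc/my-app    # hypothetical config directory
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-app-config         # each key becomes a file under /etc/my-app
```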

Secrets


Secrets are analogous to ConfigMap but for storing sensitive data such as passwords, keys, and tokens. Using Secrets avoids including sensitive data in the application code.


Secrets can be used as files “mounted” in containers as Volumes or as environment variables. They provide secure and convenient storage of sensitive data, reducing the risk of information leakage.
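A minimal sketch, assuming a hypothetical Secret named my-app-secrets holding a database password:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets           # hypothetical name
type: Opaque
stringData:                      # stored base64-encoded under .data
  DATABASE_PASSWORD: change-me   # placeholder value
---
# Consumed in a container spec as an environment variable:
# env:
#   - name: DATABASE_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: my-app-secrets
#         key: DATABASE_PASSWORD
```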


Ensure graceful container shutdown with SIGTERM


In Kubernetes, a container can be stopped before the application inside it has released its resources. This scenario is undesirable: it is better if the application has time to complete active transactions, save data to the database, and shut down correctly before the container is terminated.

What should you do?



  • Handling the SIGTERM signal. To ensure correct application termination in Kubernetes, it is essential to handle the SIGTERM signal. When Kubernetes terminates a pod, it sends SIGTERM to the application, giving it time to finish (30 seconds by default). If the application does not exit on its own within that period, Kubernetes sends SIGKILL to force termination.
  • Check that SIGTERM is handled. Many frameworks, such as Django, already handle SIGTERM out of the box. However, make sure your application handles this signal properly. This allows it to complete current tasks, save data, and terminate correctly.


Handling the SIGTERM signal in the application is an important practice to ensure reliable container termination in Kubernetes. This avoids unexpected problems and data loss during application termination.
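On the Kubernetes side, the interval between SIGTERM and SIGKILL is configurable per pod. A sketch of extending it for an application that needs more time to finish its work (the values here are illustrative):

```yaml
spec:
  terminationGracePeriodSeconds: 60   # default is 30; SIGKILL is sent when it expires
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      lifecycle:
        preStop:                      # optional hook executed before SIGTERM is sent
          exec:
            command: ["sh", "-c", "sleep 5"]   # e.g., let the load balancer drain traffic
```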


The application should not depend on the requesting pod


When moving to Kubernetes, you probably expect to use autoscaling to manage resources efficiently based on the current workload. However, for your application to run smoothly, it is important to follow these practices:


  • Stateless design. Make sure your application is designed as a stateless service. This means it does not depend on the state or data stored on a particular server or pod. All necessary data should be stored in external sources such as databases, object storage (S3), caching (Redis, Memcached), etc.
  • Synchronize state. If your application contains static files or data, ensure the state is synchronized between all replicas. This may require the use of network file systems, data synchronization, or distributed caching.
  • Sessions and client state. If your application uses sessions or client state, ensure they are stored externally (e.g., in a database) and can be accessed from any replica. This guarantees an uninterrupted user experience, regardless of which pod receives the request.
  • Horizontal scaling. Check that your backend can scale horizontally. This means that it can handle requests from multiple replicas without corrupting data or creating conflicts. Use horizontal scaling techniques such as splitting queries and using horizontally scalable databases.


Ensuring guaranteed performance under autoscaling in Kubernetes requires stateless design and attention to horizontal scalability. With these practices in place, your application will run reliably and remain fault-tolerant, even as replicas are dynamically added and removed.


Reverse proxy and HTTPS


Ingress in Kubernetes provides a reverse proxy for applications and can automatically switch HTTP links to HTTPS. However, to avoid potential conflicts and errors, it is important to keep the following points in mind:


  • Links with HTTPS. The application running behind Ingress must generate and return links using the HTTPS protocol. If the application returns HTTP links, every request that follows them will be redirected back to HTTPS by Ingress. This matters because it prevents redirect loops and errors.
  • Marking an application as running behind a reverse proxy. Most modern web frameworks and libraries have options or flags to indicate that an application is running behind a reverse proxy. Setting this option will tell the application that it is behind a proxy server and should generate links with HTTPS. This will help prevent Ingress from unintentionally rewriting links.
  • SSL/TLS Certificates. Ensure that Ingress is properly configured to handle HTTPS traffic and uses valid SSL/TLS certificates. This is important to ensure secure data transfer between clients and your application.
  • Ingress controller configuration. Check your Ingress controller settings to ensure it properly handles HTTPS traffic and performs redirects from HTTP to HTTPS if necessary.


Following these guidelines will help avoid redirection issues and ensure that Ingress can be used safely in Kubernetes.
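For reference, here is a sketch of an Ingress that terminates TLS and redirects HTTP to HTTPS, assuming the widely used ingress-nginx controller; the hostname, Service, and Secret names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # ingress-nginx annotation: redirect plain HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - app.example.com            # placeholder domain
      secretName: my-app-tls         # certificate Secret (see the next section)
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # placeholder Service name
                port:
                  number: 8080
```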


Leave SSL certificate management to Kubernetes


Adding SSL certificates to Kubernetes using cert-manager is a convenient and secure way to ensure secure data transfer and provide traffic encryption. Cert-manager allows you to automate obtaining, managing, and updating SSL certificates in your Kubernetes cluster. Here's how it works:


  • Install cert-manager. First, you need to install cert-manager in your Kubernetes cluster. This can be done by applying the official cert-manager manifests or installing its Helm chart.
  • Configure ClusterIssuer. To obtain SSL certificates automatically, you must configure a ClusterIssuer or an Issuer, depending on your needs. ClusterIssuer is a Kubernetes resource that links cert-manager to a specific service that issues SSL certificates, such as Let's Encrypt.
  • Create Certificate. You can now create a Certificate resource in Kubernetes, specifying the domains for which you want an SSL certificate (see the sketch after this list). Cert-manager will automatically contact the selected service (e.g., Let's Encrypt) and request SSL certificates for the specified domains.
  • Configure Ingress. After successfully obtaining SSL certificates, you can configure Ingress in Kubernetes to use these certificates to terminate TLS traffic. When configuring Ingress, specify the use of the SSL certificate that was created with cert-manager.
  • Automatic renewal. Cert-manager ensures that SSL certificates are automatically renewed to remain current and secure.
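A minimal sketch of these resources, assuming Let's Encrypt with an HTTP-01 solver and ingress-nginx; the email, domain, and names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-key      # Secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # assumes the ingress-nginx controller
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-tls
spec:
  secretName: my-app-tls              # the Secret referenced by the Ingress above
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - app.example.com                 # placeholder domain
```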


Using cert-manager greatly simplifies managing SSL certificates in Kubernetes and helps ensure your applications run securely.

Conclusion

The process of migrating applications to Kubernetes is undoubtedly complex, and there are several critical considerations that developers often overlook. These oversights can lead to significant challenges and setbacks during the migration process. However, by keeping these 8 often-forgotten factors in mind, you can streamline your application migration to Kubernetes.


And what recommendations do you have for developers preparing an application for Kubernetes? Share your opinion in the comments!