
What Should Automated Testing Look Like for Kubernetes Apps?

by Nate Lee, April 12th, 2022

Too Long; Didn't Read

Realistic autogenerated tests and service stubs are the foundation of a scalable, repeatable quality automation framework. Hand-scripting tests will never be sustainable. Once a solution is found, it needs to be embedded as part of the organization's GitOps workflow.


Software as a service (SaaS) applications today scale exponentially as startups race to match customer demand with service delivery. To keep pace with large and complex codebases, testing setups need to scale with them, especially with the rise of containerized apps deployed on increasingly large Kubernetes clusters. Testing these applications requires automated processes grounded in real-world requirements, and those processes should fold into existing development and delivery workflows.


In other words, application code must be tested from the very beginning of the production cycle, extending through CI/CD, in a testing environment that recreates (or mocks) the working production environment on Kubernetes. DevOps teams should make this a priority: skipping CI testing almost invariably means more bugs released to production and more time spent fixing them later.


Automated Test (r)evolution


Much time, effort and thought have been devoted to figuring out how to test SaaS applications effectively. However, the production-oriented priorities of most engineering departments mean that engineering teams often scale much faster than quality assurance (QA) roles or testing teams can. A 2020 survey by JetBrains showed that software engineering teams overwhelmingly had a QA-engineer-to-developer ratio of one to three or less.


This growth is not just a matter of size. As the codebase grows, its complexity increases and its transparency to the developer decreases. Infrastructure changes, such as a shift to Docker or Kubernetes environments, allow the application's infrastructure to scale more easily to serve customer needs. However, with this shift to containerized environments come testing challenges, because the behavior of the solution as a whole becomes more opaque. This is where the industry's reliance on unit testing meets its limits.


Though unit testing is a helpful and necessary component of any test strategy, previous research has shown that incidents that take down production systems are often caused by interactions between components, which unit tests are not designed to catch. Instead of relying solely on code-level unit tests, enterprises should build out a thorough automated test suite with carefully written integration tests. Verifying that the components and modules of an application work together properly provides a level of stability that pure unit testing cannot match. Automated testing is already standard practice in the industry: most tech organizations incorporate some form of automated integration testing into their QA plans.
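
As a concrete illustration, an integration test exercises two components together rather than in isolation. The minimal sketch below assumes a hypothetical orders service with an inventory dependency and an invented in-cluster test URL; none of these names come from a specific product.

```python
# Minimal integration-test sketch (pytest + requests), assuming a hypothetical
# orders service that depends on a separate inventory service.
import requests

BASE_URL = "http://orders.test.svc.cluster.local:8080"  # hypothetical in-cluster test endpoint

def test_order_creation_updates_inventory():
    # Creating an order should also reserve stock in the inventory service,
    # so the interaction between the two components is what gets verified.
    order = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 2}, timeout=5)
    assert order.status_code == 201

    stock = requests.get(f"{BASE_URL}/inventory/ABC-123", timeout=5)
    assert stock.status_code == 200
    assert stock.json()["reserved"] >= 2  # the interaction, not a single unit, is checked
```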


However, configuring automation to incorporate data points like historical traffic can greatly boost the benefits of the test suite and decrease the workload for QA teams. As codebases grow more complex to meet new feature requirements, QA engineers and developers should focus on improving the automation itself, rather than spending most of their time mocking data sources and manually creating test beds.


Realistic testing is critical

In addition to testing for the correctness of API and function responses, automated test suites need to examine performance and security, especially for containerized applications, which may be more exposed or more resource constrained. This is where historical usage data can help developers and QA engineers write test cases. Structuring performance tests around regular usage patterns can provide critical data on how the application will behave during periods of consistently high load. Similarly, security tests that take common exploits into account can ensure a codebase is not riddled with issues such as SQL-injection vulnerabilities. Mirroring regular usage patterns allows developers to understand how their changes will affect both the accuracy of responses and the performance of coupled systems.
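
One way to put this into practice is to drive the same harness from recorded usage data and layer a few malicious variants on top. The sketch below assumes a hypothetical capture file with one JSON object per recorded request and a hypothetical login endpoint; the format and URLs are illustrative, not a prescribed standard.

```python
# Sketch of driving tests from recorded usage data plus a simple security probe.
import json
import requests

TARGET = "http://api.test.svc.cluster.local:8080"  # hypothetical service under test

def replay_recorded_traffic(capture_file="recorded_traffic.jsonl"):
    # Assumed format, one object per line:
    # {"method": ..., "path": ..., "body": ..., "expected_status": ...}
    with open(capture_file) as f:
        recorded = [json.loads(line) for line in f if line.strip()]
    for req in recorded:
        resp = requests.request(req["method"], TARGET + req["path"],
                                json=req.get("body"), timeout=5)
        # Responses should match what production actually saw, not just "200 OK".
        assert resp.status_code == req["expected_status"]

def test_sql_injection_probe():
    # A basic security check layered onto the same harness: a classic injection
    # payload should be rejected, never executed or echoed back.
    payload = {"username": "admin' OR '1'='1", "password": "x"}
    resp = requests.post(f"{TARGET}/login", json=payload, timeout=5)
    assert resp.status_code in (400, 401)
```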


This is especially important for applications deployed on Kubernetes clusters as they may have dependencies on APIs or services in other clusters, and may themselves be dependencies for other services. For this type of testing, considering prior traffic and usage history can provide insights that conventional load testing would not.


Relying on pure load testing can too easily devolve into brute-forcing large numbers of requests or massive volumes of data onto services ill-equipped to handle them. This request profile obviously does not match what the system sees under day-to-day load and may even be misleading: brute-force request testing often says more about the infrastructure's ability to listen and respond to requests than about the service's ability to serve data.


Load profiles also change based on the type and purpose of the application, something QA teams should account for when setting up test suites. For instance, an authentication API that sees consistently high load will have a different performance profile than a metrics application that serves large amounts of data but only does so intermittently. Including this traffic data in an automated test gives developers considerable insight not just into the code changes they make, but also into necessary changes in infrastructure provisioning, such as scaling the Kubernetes clusters. Omitting these profiles from a test suite can let developers push changes that break performance or availability and lead to failed service-level agreements (SLAs).
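
To make the contrast concrete, here is a small Locust sketch modeling the two profiles described above: steady authentication traffic versus intermittent, heavy metrics pulls. The endpoints and timings are illustrative assumptions rather than measured production values.

```python
# Two contrasting load profiles expressed as Locust user classes.
from locust import HttpUser, task, between

class AuthApiUser(HttpUser):
    # Steady, high-frequency traffic: short think time between requests.
    wait_time = between(0.5, 1.5)

    @task
    def login(self):
        self.client.post("/auth/login", json={"user": "demo", "password": "demo"})

class MetricsConsumer(HttpUser):
    # Intermittent but heavy: long pauses, then a request for a large payload.
    wait_time = between(30, 120)

    @task
    def pull_report(self):
        self.client.get("/metrics/report?range=24h")
```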



CI/CD for automation and scale


Another critical feature of modern automated testing is thorough integration with continuous integration and continuous delivery (CI/CD) infrastructure. SaaS applications almost always have continuously evolving feature sets, meaning no system is ever fully deployed or fully tested. The deployment and testing solutions must always accommodate the next iteration of the application.


Emerging requirements and ongoing engineering initiatives mean that automated testing needs to fit into the software-development pipeline without significant manual effort. A tool like Speedscale, for example, can run a sidecar alongside a service and record its traffic for later use. That data can then be used by the Traffic Replay service to generate test suites based on likely real-world scenarios the system under test will face. The benefit is a reduced workload for QA engineers, who no longer have to re-architect or re-script entire test suites as conditions change after deployments. It also prevents anti-patterns from emerging in both QA and development, where engineers otherwise reinvent the wheel by building in-house test harnesses whose maintenance workload becomes unmanageable as the application code scales.
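
For readers curious what "recording traffic" means mechanically, the toy proxy below forwards requests to a hypothetical upstream service and appends each exchange to a capture file in the same line-per-request format assumed earlier. It is a deliberately simplified illustration of the idea, not how Speedscale itself is implemented.

```python
# Toy recording proxy: forward GET requests and log request/response pairs.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
import requests

UPSTREAM = "http://localhost:8080"        # hypothetical real service
CAPTURE_FILE = "recorded_traffic.jsonl"   # consumable by a replay harness later

class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the real service...
        upstream = requests.get(UPSTREAM + self.path, timeout=5)
        # ...and append the exchange to the capture file.
        with open(CAPTURE_FILE, "a") as f:
            f.write(json.dumps({"method": "GET", "path": self.path,
                                "expected_status": upstream.status_code}) + "\n")
        self.send_response(upstream.status_code)
        self.end_headers()
        self.wfile.write(upstream.content)

if __name__ == "__main__":
    # Point clients at :9000 instead of the real service to record their traffic.
    HTTPServer(("", 9000), RecordingProxy).serve_forever()
```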


A well-integrated, automated test suite can contribute to the preventative health of an application ecosystem by testing for instability during and after deployment, and by capturing metrics such as latency, throughput and error rate. Monitoring these metrics empowers another critical function of QA departments: providing feedback to development teams on where errors are occurring. Catching bugs during testing is great, but preventing them through iterative testing and efficient communication between teams is even better.
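
Turning raw test runs into those signals can be as simple as a small summary step. The sketch below assumes a list of per-request results in a made-up format and computes latency percentiles, throughput and error rate; the thresholds in the usage example are placeholders, not real SLA numbers.

```python
# Summarize per-request results into latency, throughput and error-rate metrics.
import statistics

def summarize(results, duration_seconds):
    """results: list of dicts like {"latency_ms": 42.0, "ok": True} (assumed format)."""
    latencies = [r["latency_ms"] for r in results]
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": statistics.quantiles(latencies, n=20)[18],  # 95th percentile
        "throughput_rps": len(results) / duration_seconds,
        "error_rate": sum(1 for r in results if not r["ok"]) / len(results),
    }

if __name__ == "__main__":
    # Stubbed results stand in for whatever the replay or load tool produced.
    fake = [{"latency_ms": 40 + (i % 25), "ok": i % 200 != 0} for i in range(1000)]
    summary = summarize(fake, duration_seconds=60)
    # Placeholder thresholds; real numbers come from the team's SLAs.
    assert summary["p95_ms"] < 250 and summary["error_rate"] < 0.01
    print(summary)
```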


Automated setups can also make it easier to run chaos tests by simulating the shutdown of upstream dependencies. Often, these dependencies are third-party software outside the control of the organization. Sometimes the dependency in question is a fundamental one, such as a large Amazon Web Services (AWS) outage or a regional or global DNS failure. While resiliency during such large and unforeseen events is probably not a common requirement, such scenarios should still be covered in test suites to ensure they do not have unintended side effects, such as loss of customer data or duplication of operations like billing.
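
A lightweight way to approximate this without full chaos tooling is to stub the dependency so it fails on demand and then assert that the service degrades gracefully. The sketch below patches a hypothetical billing client in a hypothetical orders_service module; both names are illustrative assumptions.

```python
# Simulate an upstream billing outage and check for graceful degradation.
from unittest import mock
import requests

import orders_service  # hypothetical module under test

def test_checkout_survives_billing_outage():
    # Every billing call times out while the patch is active.
    with mock.patch.object(orders_service, "charge_customer",
                           side_effect=requests.exceptions.ConnectTimeout) as charge:
        result = orders_service.checkout(cart_id="cart-42")

    # The order should be queued for retry, not dropped, and not charged twice.
    assert result.status == "pending_payment"
    assert charge.call_count == 1
```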


All of these factors contribute to the robustness of an enterprise solution, and ultimately provide value for customers and peace of mind for developers. SaaS applications are scaling to unprecedented heights, particularly as they introduce new technologies like machine learning or blockchain. Amid these evolving requirements, the ability of testing teams to keep up will be critical to the success of applications.