In March, Abstracta's Senior Leader Matías Fornara and I had the opportunity to speak at the Triangle Software Quality Association (TSQA) 2022 Conference.
TSQA is a non-profit, volunteer-led organization dedicated to promoting software quality practices through networking, training, and professional development opportunities. The TSQA conference takes place once a year, and leaders from all over the world get together to share cutting-edge knowledge, emerging technologies, and trends in the testing and quality assurance industry.
For this year's online conference, we led a talk about how to create and improve test automation strategies and code, sharing our experiences managing diverse teams and projects. In this article, we'll share some of the most interesting questions and comments from the audience and, of course, the main highlights of the session for all those who couldn't attend or want to review them again. Let's dive in!
The test pyramid is one of the most well-known concepts in automation. The pyramid represents the different testing layers and serves as a guide for allocating your testing efforts effectively and building a solid test suite. However, it's often overlooked that each of these layers requires different types of tests, and that duplicating tests across levels has a hugely detrimental effect on the ROI of your automation efforts.
While testers might sometimes feel safer having a higher quantity of tests, the truth is that writing, running, and maintaining those tests consumes a great deal of resources. Keeping the code clean and simple is key to saving time and optimizing testing, so don't hesitate to eliminate redundant tests that don't contribute to your automation strategy.
Although some automation test pyramids have UI tests as their top layer and others have end-to-end tests, these two concepts aren't interchangeable at all; they are in fact different types of tests. UI tests exercise the user interface, but they don't necessarily need to run end to end: they can, for instance, rely on mocks instead of real back-end services. End-to-end testing, on the other hand, entails testing every layer of the system, including the user interface.
While it’s true that there is a certain overlap between end-to-end and UI testing, these two approaches ultimately have different purposes and lead to different results.
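To make the distinction concrete, here is a minimal sketch of a UI test that never touches a real back end, using Playwright for Python to mock the API call the interface makes (the URL, route, and selector are hypothetical):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # Intercept the API call the UI makes and return canned data,
    # so only the interface itself is under test.
    page.route("**/api/orders", lambda route: route.fulfill(
        status=200,
        content_type="application/json",
        body='[{"id": 1, "status": "shipped"}]',
    ))
    page.goto("https://example.com/orders")  # hypothetical page
    # The assertion targets the UI alone; no real back end was exercised.
    assert page.locator(".order-row").count() == 1
    browser.close()
```

An end-to-end version of the same test would drop the mocked route and let the page hit the real services, exercising every layer at once.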
Open-source tools are usually free to obtain, in the sense that it's not necessary to pay for a license or subscription. However, that doesn't mean they come without a cost. As Katya Aronov mentioned in this episode of the Quality Sense podcast, "open source tools are not free if you consider all the costs that are implied." Unlike commercial tools, open-source ones don't offer maintenance, test infrastructure, or tech support, which leaves integrations in testers' hands. This means steeper learning curves that call for experienced testers, and far more resources devoted to building frameworks and features that commercial tools already provide.
This is not to say that open-source tools aren't a good option, but to emphasize that understanding your project's needs, goals, and resources is key to finding an appropriate tool for your test strategy.
Testing is an essential aspect of the software development life cycle, and as such, it's imperative that test code be treated as seriously as production code. Even though test code isn't pushed to production, it's still a crucial documentation resource, and it needs to be as maintainable and readable as possible. As Angie Jones mentioned in this guide to conducting test automation code reviews, "test code is what's used to determine the quality of your feature code, so it's really important that it's developed with care."
There are many practices that help testers achieve good quality test code, but one crucial practice cannot be left aside: peer review. It helps catch errors early and improves code quality, but it's also a fundamental part of the team's learning process. Peer review allows senior testers to teach juniors how to improve their code, and it encourages a culture of collaboration and communication.
SonarQube is an open-source platform that performs automatic reviews to assess code quality and security. Using this or any other code quality tool helps to optimize the quality of the test code by detecting vulnerabilities, errors, and code duplication, among many other things. It also helps testers improve their coding skills by providing constant feedback, but most importantly, it keeps technical debt visible and easy to follow.
In the last couple of years, low-code testing tools have become increasingly popular and an important aspect of test automation. However, even though these tools are extremely useful for repetitive and time-consuming testing tasks, they are not a replacement for traditional testing practices but rather a perfect complement that can help teams optimize their test strategy.
As Matías and some of the attendees commented during the talk, the most important thing to keep in mind when using low-code tools is that, in order to use them effectively, it's vital to apply the same good practices we'd use with any other tool. Low-code doesn't mean we have to put less effort into our tests; it's just a different, and sometimes less complex, way to analyze our codebase.
There are many strategies you can use to improve collaboration, but none of them will work if your team doesn't master its internal communication. Achieving fluid communication between product owners, testers, and developers allows teams to have a clear understanding of the tasks the automation team performs, and avoids false expectations that can lead to misunderstandings and frustration among team members. Furthermore, when teams communicate effectively, they can make better decisions about what to automate, which tools are best for each project, and how to allocate test resources.
Keeping collaboration at the forefront will also ensure good practices are applied throughout the testing process. For instance, maintaining organized and consistent documentation encourages communication, teamwork, and a common understanding of how testing activities are performed.
Behavior-driven development (BDD) is a highly collaborative development strategy that allows both business and technical members of a team to create test cases in a natural-language format such as Gherkin. However, this common language won't be effective by itself unless it's combined with another major element of the methodology: the three amigos principle. This principle focuses on combining the expertise of product owners, testers, and developers in order to define a clear project scope, eliminate assumptions and false expectations, and create clear guidelines on what to build from a customer and business perspective.
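As an illustration, here is a minimal sketch of how a Gherkin scenario maps to executable steps, using the pytest-bdd library in Python (the feature file, step wording, and data are all hypothetical):

```python
# login.feature (hypothetical) would contain:
#   Feature: Login
#     Scenario: A registered user signs in
#       Given a registered user
#       When the user logs in with the correct password
#       Then the user sees the dashboard

from pytest_bdd import scenarios, given, when, then

scenarios("login.feature")  # binds the feature file above to these steps

@given("a registered user", target_fixture="user")
def user():
    return {"name": "alice", "password": "secret", "logged_in": False}

@when("the user logs in with the correct password")
def log_in(user):
    user["logged_in"] = user["password"] == "secret"

@then("the user sees the dashboard")
def sees_dashboard(user):
    assert user["logged_in"]
```

The Gherkin text is what the three amigos write and review together; the step definitions are how the technical side wires it to the system under test.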
Ultimately, the key to a successful three amigos meeting is that all parties work together as a team, contributing their individual expertise to the project. If one of the parties doesn’t cooperate, then bridging the gaps between business, development, and testing becomes a difficult task that affects the overall productivity of the team and the quality of the software.
Promoting diverse teams has always been a priority here at Abstracta. One of the many ways in which we foster a variety of backgrounds and experiences in our teams is by hiring people with varying levels of experience. Creating an inclusive environment where experienced testers can mentor and train junior professionals is not only beneficial for our own company, but also for the future of the industry.
Junior testers bring fresh perspectives and creative thinking, while senior testers add deep knowledge and expertise to the mix. Together, they create the innovative solutions the software industry needs to keep moving forward.
When it comes to writing high-quality test code, balancing simplicity and complexity is key. There are many ways in which you can structure your code to find that perfect balance, but the success of your strategy will depend completely on the context and the methods you implement.
Programming principles such as DRY (Don't Repeat Yourself), DAMP (Descriptive And Meaningful Phrases), and WET (Write Everything Twice) are examples of coding practices that can be applied differently depending on the case.
Although these three concepts might seem contradictory at first, they actually balance each other when used correctly, and represent different aspects of maintainability. Making use of them at the right time will allow you to continuously optimize your code while keeping it maintainable.
DRY avoids redundancy by ensuring that code isn't duplicated needlessly. Eliminating repetition ensures that every piece of domain knowledge in a system has a single representation in the code, and that future changes to one element won't ripple into unrelated or duplicated elements.
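For example, here is a minimal sketch of DRY applied to test code (the function under test and the data builder are hypothetical): the knowledge of what a valid order looks like lives in exactly one place.

```python
def is_closed(order):
    """Hypothetical function under test."""
    return order["status"] == "shipped"

def make_order(status="pending", items=1):
    """Single representation of a valid order; tests override only what matters."""
    return {"id": 1, "status": status, "items": items}

def test_shipped_order_is_closed():
    assert is_closed(make_order(status="shipped"))

def test_pending_order_is_open():
    assert not is_closed(make_order())
```

If the shape of an order ever changes, only make_order needs updating, instead of every test that builds one by hand.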
DAMP promotes the readability of the code by reducing the time needed to understand it. This principle advocates removing unnecessary abstraction, even if doing so duplicates some code. Under this practice, there's no need to cut back on comments, descriptive names, or explanatory variables: it's preferable to thoroughly explain the why of your code and make it easy to follow and understand.
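Here is a minimal sketch of DAMP in a test (the function under test and the figures are hypothetical): a few extra variables are accepted so the test reads as a self-contained story.

```python
import pytest

def apply_discount(total, rate):
    """Hypothetical function under test."""
    return total * (1 - rate)

def test_discount_is_applied_to_orders_over_one_hundred():
    # Descriptive names and intermediate values spell out the why,
    # even though a terser version could inline all of this.
    order_total = 120.0
    discount_rate = 0.10        # 10% off for orders over 100
    expected_total = 108.0      # 120 minus the 10% discount
    assert apply_discount(order_total, discount_rate) == pytest.approx(expected_total)
```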
Lastly, there is WET. This abbreviation covers all the cases where the DRY principle is not applied and the code is instead full of unnecessary duplication and redundancy. Most of the time, WET code increases errors, reduces readability, and makes future rework a difficult and time-consuming task.
We received a lot of comments about these principles during the talk; some people were familiar with them and some weren't. As mentioned above, it's important to understand all three principles in order to know when and how to apply them to optimize your code.
Automation is an essential aspect of software testing; nonetheless, not everything can or should be automated. The first step in any project is to evaluate which types of tests are suitable for automation. Here are some examples of commonly used criteria you can use to decide which tests to automate:
Certain types of tests need to be repeated often during development cycles, but due to their complexity, running them manually can be costly and time-consuming. For instance, manually testing forms with numerous fields, such as checkboxes and alphanumeric inputs, is a tedious task that invites mistakes. Automating these tests allows testers to submit countless combinations of inputs quickly and reduce the margin of error, as the sketch below shows.
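Here is a pytest sketch that generates every combination of a few hypothetical form fields (the validation rule and field values are made up for illustration):

```python
import itertools
import pytest

def form_is_valid(newsletter, country, age):
    """Hypothetical validation rule under test: age must be a number >= 18."""
    return age.isdigit() and int(age) >= 18

newsletter_options = [True, False]
country_codes = ["US", "UY", "DE"]
ages = ["17", "18", "99", "abc"]

# itertools.product expands to all 24 combinations; adding a value to any
# list grows coverage automatically, with no extra test code.
@pytest.mark.parametrize(
    "newsletter,country,age",
    itertools.product(newsletter_options, country_codes, ages),
)
def test_form_validation(newsletter, country, age):
    expected = age.isdigit() and int(age) >= 18
    assert form_is_valid(newsletter, country, age) == expected
```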
Some tests are easy to write and perform, but they need to be executed so many times that automation becomes mandatory. Login tests are a perfect example: automating them saves time and helps ensure your software behaves reliably. Automated login test cases let you try many different types of accounts and user information, and verify that the login process works correctly even under a high load.
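For instance, a data-driven login test might look like the following minimal sketch (the accounts, expected outcomes, and login function are hypothetical stand-ins for a call against the real application):

```python
import pytest

def login(email, password):
    """Hypothetical login call; a real test would drive the app's API or UI."""
    return password == "correct-pass" and not email.startswith("locked")

ACCOUNTS = [
    ("admin@example.com",  "correct-pass", True),
    ("viewer@example.com", "correct-pass", True),
    ("locked@example.com", "correct-pass", False),  # locked-out account
    ("admin@example.com",  "wrong-pass",   False),  # bad credentials
]

@pytest.mark.parametrize("email,password,should_succeed", ACCOUNTS)
def test_login(email, password, should_succeed):
    assert login(email, password) == should_succeed
```

Adding a new account type is a one-line change to the data table rather than a new hand-written test.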
Although these rules can be applied to most projects, each piece of software is unique and has its own complexities that might need a different approach to automation. Having a good understanding of your project's objectives and needs is key to making the right decisions.
Assuming that "test prevention" refers to ways of preventing test failures, as we mentioned earlier in this article, one of the best ways to prevent defects and low-quality code is to peer review the codebase. Here's a checklist with some of the most important questions to ask when performing a peer review, to help you assess the quality of your code and catch possible errors early in the process:
Setting clear quality gates is important for the success of a project and essential when it comes to managing technical debt. However, each team will work towards those quality gates in different ways, and it's important to give them the necessary freedom to do so.
Having a "progressive" approach to quality gates is one of the best ways to accommodate changes and adjustments. It's not necessary to set extremely high expectations or milestones from the beginning, especially if a team is not yet at a mature stage. It's always possible to revise the current quality gates and improve them down the line.
There are many tools that can provide suitable data for our tests. Google Analytics, for example, offers usage information that reveals which pages need more test coverage, while Test Data Manager provides data masking, subsetting, and synthetic data creation.
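When no dedicated tool is available, synthetic data can also be generated directly in code. Here is a minimal sketch using the Faker library (the record shape is a hypothetical example):

```python
from faker import Faker

fake = Faker()

def make_user_record():
    """Produce a realistic-looking but entirely synthetic user."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "signup_date": fake.date_this_year().isoformat(),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(make_user_record())
```

Because the records are generated, tests never depend on production data and carry no privacy risk.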
It's true that automated testing cannot be separated from manual testing; the two practices, although quite different, need each other to function properly. However, in order to create a tailor-made test strategy that solves the unique needs of a project, it's absolutely necessary to understand what skills and expertise we need on the testing team. Building an efficient team is the best way to deliver greater value to our clients.
One of the best ways to assess the coverage of your test automation, and of your testing strategy in general, is to document your testing ideas and coverage with mind maps. Unlike traditional text documents, these maps have a flexible and easy-to-understand structure that allows for quick changes. They are a great way to brainstorm and keep track of testing activities.
Thanks to their diagram-like shape, identifying what areas have been explored and what type of testing has been performed becomes an effortless task. Testers can easily update the map to report which areas they have covered and which ones they haven't. Mind maps are also a great tool for other team members, such as product owners and stakeholders, because they let them understand what work is being done from a development perspective and share their feedback from a product standpoint.
Follow us on LinkedIn, Twitter, Instagram, and Facebook to be part of our community!