“Change is the only constant in life.”

This quote also holds true for software application development. Nowadays, software applications tend to evolve frequently to accommodate recent trends, technological updates, bug fixes, and improvements. Such applications require a testing [r]evolution to maintain the quality of the services they provide.

Capture-Replay Tools

In the web application domain, testers perform end-to-end testing of their applications by creating test scripts with Capture-Replay tools such as Selenium, TestComplete, QTP, and Watir. These test automation tools allow testers to record various test scenarios as a sequence of user actions (such as mouse clicks, keyboard entries, and navigation commands) performed on the web application and to later replay (re-execute) them with the same or different data to test the web application.

One of the key benefits of test automation is the reusability of test scripts across product versions. Reusing existing test scripts on an updated version of the application helps in identifying and fixing regression bugs. It shortens the feedback loop by providing the confidence to introduce new changes without affecting existing functionality.

Like other software systems, web applications evolve over time. This evolution can introduce several different types of changes to web elements, for example, changes in visual representation due to new style sheets being applied, changes in the visual labels of various fields, or the redistribution of fields from one page across multiple pages. The test scripts of Capture-Replay tools are strongly coupled to the elements of web pages and are very sensitive to any change in those elements. Even simple changes, such as slight modifications to a web page layout, may break existing test scripts, because the web elements that the scripts reference may no longer be valid in the new version. This overhead is considered a major obstacle for organizations moving towards automated web application testing.

Maintaining an automated test suite is important to avoid additional risks and difficulties for the testing team involved in the project. Unless the web application, or the underlying software systems that run it, never changes, it is necessary to ensure that automated tests continue to work well as the application evolves. Changes to an evolving application, such as adding new features or fixing newly discovered bugs, will affect the existing regression test suite. The team may need to create new tests from scratch or fix the existing ones affected by the modifications.

The figure below shows the classification of a regression test suite into reusable, retestable, and obsolete test scripts.

Figure 1: Classification of Regression Test Suite

Reusable test scripts refer to those for which there is no corresponding change in the evolved version of the application; these test scripts can be used directly.

Retestable test scripts are those affected by the changes in the evolved version. Such test scripts can be fixed automatically by applying repair transformations, or manually by testers.

Obsolete test scripts correspond to deleted web elements or web pages and hence are no longer valid for the evolved version.
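To make this coupling concrete, below is a minimal sketch of the kind of script a Capture-Replay tool might record, written with Selenium WebDriver in Python. The URL, locators, and credentials are hypothetical placeholders; the point is that every step is tied to specific element attributes, so renaming a single id in a new release is enough to break the test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Recorded user actions: navigate, type, click.
    driver.get("https://example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "login-btn").click()

    # The replayed assertion is coupled to the page just like the locators.
    heading = driver.find_element(By.CSS_SELECTOR, "h1.welcome").text
    assert heading.startswith("Welcome"), f"Unexpected heading: {heading!r}"
finally:
    driver.quit()
```

If the evolved version renames login-btn or restyles the welcome heading, every step from that point on raises NoSuchElementException, even though the application itself may work perfectly.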
Commonly Used Test Case Terminologies:

Fragile test case: Fragile tests are those that are easy to break. The test case gets interrupted by changes in the application, such as date/time-sensitive assertions.

Failed test case: A failing test is one that is well written and has determined that the component being tested is broken. This usually happens when an assertion in the test case fails.

Smelly test case: Test smells are defined as bad programming practices in unit test code, for example, too many assertions in one test, dependencies between tests, etc.

Broken test case: A broken test usually can't be compiled or executed due to significant changes in the application. Such tests are affected by changed specifications that the program no longer supports; for example, during test execution, the locator used to find a web element is not found, or the browser crashes.

Bad test: Bad tests are tests that don’t do their job properly. Either they test more than they should, or they depend on data that is inconsistent or subject to change based on external conditions.

Flaky test case: A flaky test is non-deterministic in nature; it fails to produce the same result each time for the exact same application under the same test environment.

Common Reasons for Test Breakages:

Timing issues: Automated test scripts are composed of a set of actions in a specific order. If one of these actions is delayed, or if the sequence changes, the test is almost bound to fail. Breakages also occur when the wait time specified in the test script is not enough for the content of a web page to load and display properly (see the wait sketch after this list).

Strong dependence on identifiers/locators: The single greatest cause of test failures is a change to the value of an attribute of a web element (such as id or name) that was being used to locate that element on the original web page.

Fragile data/data changes: One of the issues with any automated test script is that it works with a known dataset, and any change to the data can impact test execution and lead to failure. Breakages occur when an input value used by the original test is no longer accepted on the evolved web page, e.g., a dropdown option that was previously selected by a test is not available in the modified page.

Test case dependency: This is the main culprit that causes tests to depend on other tests and share global state. Such a test case cannot be run in isolation and only passes when the test case it depends on passes. Dependency on other external resources such as files, networks, and databases also leads to test breakages.

Context-specific test cases: A large set of test case breakages occurs because of context. For example, when automated tests depend on certain dates or times, they might suddenly fail when the conditions (temporarily) don’t apply. This is often the case on leap days, but depending on your business logic, other dates may cause your tests to fail too.

Dynamicity in web applications: Responsive pages cause real problems for Selenium tests because page elements move dynamically. In a dynamic site, the HTML is generated on demand by scripts in the background and is then displayed on the screen; for instance, a page may display a random news headline or the latest blog entries. This sort of dynamism means that Selenium is often unable to find the correct selector, or, having found it once, later comparisons may break.

Browser support issues: It is a common problem in web test automation that test scripts execute successfully on one browser but fail on another due to compatibility issues. Differences in how browsers render HTML, CSS, and other scripts can lead to test breakages, and different browsers may also have different loading times that cause breakages.
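A common mitigation for the timing issues above is to replace fixed sleeps with explicit waits that poll for a condition instead of assuming a load time. Here is a minimal sketch using Selenium's WebDriverWait in Python; the URL, locator, and timeout are illustrative assumptions, not values from this article.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/dashboard")  # hypothetical URL

    # Instead of a fixed time.sleep(5), poll for up to 10 seconds until
    # the element is clickable; the test proceeds as soon as it appears.
    wait = WebDriverWait(driver, timeout=10)
    reports_link = wait.until(
        EC.element_to_be_clickable((By.LINK_TEXT, "Reports"))
    )
    reports_link.click()
finally:
    driver.quit()
```

Because the wait is tied to the application's observable state rather than a guessed duration, the same script tolerates slow page loads and the differing loading times of different browsers.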
How to Fix Test Breakages?

There are two types of approaches commonly used to fix or repair the broken test scripts of evolving web applications.

DOM-based approach: This approach utilizes the DOM information of two versions of a web application and suggests potential fixes for the broken test commands to the tester. The technique typically collects test case execution data for both the original and the modified version of the web application. It is based on differential testing and compares the behavior of the test case on two successive versions of the web application: the first version, in which the test script runs successfully, and the second version, in which the script results in an error or failure. By analyzing the difference between the two executions, it suggests a set of repairs for the broken test scripts.

Visual analysis based approach: This approach leverages visual information (such as screenshots) extracted through the execution of test cases, along with image processing and crawling techniques, to support automated repair of web test breakages.

Wrapping Up

Web applications are bound to evolve, and when that happens, test cases break. In this story, you have learned how to deal with test breakages to keep up with evolving web apps. It’s all about the survival of the fittest…