
Testing, Evolution, and Web Applications: It’s About Survival of the Fittest

by Javaria Imtiaz, February 6th, 2022




“Change is the only constant in life.”

This quote also holds for software application development. Nowadays, software applications tend to evolve frequently to accommodate recent trends, technological updates, bug fixes, and improvements. Such applications require a testing [r]evolution to maintain the quality of the services they provide.


Capture-Replay tools

In the web application domain, testers perform end-to-end testing of their applications by creating test scripts with Capture-Replay tools such as Selenium, TestComplete, QTP, and Watir. These test automation tools let testers record various test scenarios as a sequence of user actions (such as mouse clicks, keyboard entries, and navigation commands) performed on the web application, and later replay (re-execute) them with the same or different data.
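
To make this concrete, here is a minimal sketch of the kind of script such a tool records, written with Selenium's Python bindings; the URL, element ids, and credentials are hypothetical:

```python
# A minimal sketch of a recorded-and-replayed test scenario.
# The URL and element ids (username, password, submit) are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Replay the recorded user actions: navigation, keyboard entries, a click.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()
    # Verify the expected outcome of the scenario.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```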


One of the key benefits of test automation is the reusability of test scripts across product versions.


The reuse of existing test scripts for the updated version of the application helps in identifying and fixing regression bugs. This shortens the feedback loop by giving the team confidence to introduce new changes without affecting existing functionality.


Similar to other systems, web applications evolve over time. This evolution can introduce several different types of changes to web elements, for example changes in visual representation due to new style sheets, changes in visual labels for various fields, or the distribution of fields from one page across multiple pages.


The test scripts of Capture-Replay tools are strongly coupled with the elements of web pages and are very sensitive to any change in the web elements of the application. Even simple changes, such as slight modifications to a web page's layout, may break existing test scripts, because the web elements the scripts reference may no longer be valid in the new version. This overhead is considered a major obstacle for organizations moving towards automated web application testing.
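
For illustration, consider two ways of locating the same (hypothetical) button; the absolute XPath breaks the moment an ancestor element in the layout changes, while the id-based locator survives the redesign:

```python
# Hypothetical illustration of locator brittleness. If the page wraps the
# button in a new <div>, the absolute XPath below no longer matches, while
# the id-based locator keeps working as long as the id attribute is kept.
from selenium.webdriver.common.by import By

# Breaks when any ancestor in the layout changes:
brittle = (By.XPATH, "/html/body/div[2]/form/div[3]/button")

# Survives layout changes:
robust = (By.ID, "checkout-button")

# driver.find_element(*brittle)  # raises NoSuchElementException after a redesign
# driver.find_element(*robust)   # still finds the element
```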


Maintaining an automated test suite is important to avoid additional risks and difficulties for the testing team. Unless the web application and the underlying software systems that run it never change, the team must ensure that automated tests continue to work as the application evolves.


As an application evolves, changes such as new features or fixes for newly discovered bugs will affect the existing regression test suite. The team may need to create new tests from scratch or fix existing ones affected by the modifications.


The figure below shows the classification of the regression test suite into Reusable, Retestable, and Obsolete test scripts; a small triage sketch follows the list.

Figure 1: Classification of Regression Test Suite


  • Reusable test scripts are those with no corresponding change in the evolved version of the application; they can be used directly.

  • Retestable test scripts are those affected by changes in the evolved version. Such test scripts can be fixed automatically by applying repair transformations or manually by testers.

  • Obsolete test scripts correspond to deleted web elements or web pages and hence are no longer valid for the evolved version.
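
As a rough illustration of this triage (not a real tool), one could check which of a script's referenced locators survive in the new version. Note that a surviving locator does not guarantee unchanged behavior, so this is only a first-pass classification:

```python
# A hypothetical triage sketch: classify each test script by checking
# whether the elements it references survive in the evolved version.
# The locator sets stand in for whatever locator extraction you use.
from dataclasses import dataclass

@dataclass
class TestScript:
    name: str
    locators: frozenset  # element locators the script references

def classify(script: TestScript, new_dom_locators: set) -> str:
    surviving = script.locators & new_dom_locators
    if surviving == script.locators:
        return "reusable"    # nothing the script touches has changed
    if surviving:
        return "retestable"  # partially affected; repair may restore it
    return "obsolete"        # every referenced element was removed
```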



Commonly used Test Case Terminologies:

  • Fragile test case: Fragile tests are those that are easy to break; they fail after minor changes in the application, such as date/time-sensitive assertions.
  • Failed test case: A failing test is a well-written test that has determined the component being tested is broken. This usually happens when an assertion in the test case fails.
  • Smelly test case: Test smells are bad programming practices in test code, for example too many assertions in one test or dependencies between tests (see the sketch after this list).
  • Broken test case: A broken test usually can't be compiled or executed due to significant changes in the application; it depends on specifications the program no longer supports. For example, during test execution the locator used to find a web element no longer matches, or the browser crashes.
  • Bad test: Bad tests don't do their job properly: either they test more than they should, or they depend on data that is inconsistent or subject to change based on external conditions.
  • Flaky test case: A flaky test is non-deterministic: it fails to produce the same result on every run of the exact same application in the same test environment.
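
As a small illustration of a smelly test, here is a hypothetical unit test that mixes several unrelated concerns, so the first failing assertion masks the rest:

```python
# Hypothetical illustration of a test smell: one test asserting too many
# unrelated things, which hides failures and obscures the test's intent.
def test_order_total_smelly():
    order = {"items": [10, 20], "currency": "USD", "status": "open"}
    assert sum(order["items"]) == 30    # pricing concern
    assert order["currency"] == "USD"   # localization concern
    assert order["status"] == "open"    # workflow concern
    # Better: three focused tests, one concern each.
```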

Common Reasons for Test Breakages:

  • Timing Issues: Automated test scripts are composed of a set of actions in a specific order. If one of these actions is delayed, or if the sequence changes, the test is almost bound to fail. Breakages occur when the wait time specified in the script is not enough for the web page's content to properly load and display.
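
A common mitigation is an explicit wait instead of a fixed sleep. Here is a hedged sketch with Selenium's Python bindings; the page URL, locator, and timeout are examples:

```python
# Replace a fixed sleep with an explicit wait so the script tolerates
# variable page-load times. URL and element id are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/report")

# Instead of time.sleep(3), which breaks the moment loading takes 3.1
# seconds, wait up to 10 seconds for the element to actually appear:
table = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "results-table"))
)
driver.quit()
```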


  • Strongly dependent on identifiers/locators: The single greatest cause of test failures is a change, in the new version of a web page, to the value of an attribute (such as id or name) that the original test used to locate a web element.
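
One defensive pattern is to try several locators in priority order, so a renamed id does not immediately break the script. A hypothetical sketch; the attribute values are examples:

```python
# Try locators in priority order; fall back when the original no longer matches.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_fallbacks(driver, *locators):
    """Return the first element matched by any locator, in priority order."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no locator matched: {locators}")

# Usage (given a live `driver`):
# submit = find_with_fallbacks(
#     driver,
#     (By.ID, "submit"),                         # original locator
#     (By.NAME, "submit"),                       # fallback: name attribute
#     (By.CSS_SELECTOR, "button[type=submit]"),  # fallback: structure
# )
```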


  • Fragile data / data changes: Any automated test script works with a known dataset, and any change to that data can impact test execution and lead to failure. Breakages occur when an input value used by the original test is no longer accepted on the evolved web page, e.g., a dropdown option that the test previously selected is no longer available.
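
A defensive check can fail fast with a clear message when test data drifts. A hypothetical sketch, assuming a live `driver` and a country dropdown:

```python
# Verify the expected dropdown option still exists before selecting it,
# so data drift fails loudly instead of as an obscure mid-script error.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

country = Select(driver.find_element(By.ID, "country"))  # hypothetical field
wanted = "Canada"
available = [option.text for option in country.options]
if wanted not in available:
    raise AssertionError(f"test data drifted: {wanted!r} not in {available}")
country.select_by_visible_text(wanted)
```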


  • Test case dependency: Shared global state is the main culprit that makes tests depend on other tests. A dependent test case cannot run in isolation; it passes only when the test it depends on passes. Dependencies on external resources such as files, networks, and databases also lead to test breakages.
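
One way to avoid order dependency is to give each test its own fresh state, for example with a pytest fixture. A minimal sketch, where the list stands in for any shared resource:

```python
# Each test gets fresh, isolated state via a fixture instead of a global.
import pytest

@pytest.fixture
def cart():
    return []  # fresh per test (think: a new DB row or browser session)

def test_add_item(cart):
    cart.append("book")
    assert cart == ["book"]

def test_empty_cart_checkout(cart):
    # Passes regardless of whether test_add_item ran first.
    assert cart == []
```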


  • Context-specific test cases: A large share of test breakages occurs because of context. For example, when automated tests depend on certain dates or times, they may suddenly fail when those conditions (temporarily) don't apply. This often happens on leap days, but depending on your business logic, other dates may break your tests too.
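
A common remedy is to pin the clock in date-sensitive tests. A hedged sketch using the freezegun library; the renewal rule is a hypothetical stand-in for your own business logic:

```python
# Freeze the clock so date-sensitive logic can be tested deterministically.
from datetime import date, timedelta
from freezegun import freeze_time

def next_renewal(today: date) -> date:
    return today + timedelta(days=30)  # hypothetical 30-day renewal rule

@freeze_time("2024-02-29")  # a leap day that would otherwise be hard to hit
def test_renewal_on_leap_day():
    assert next_renewal(date.today()) == date(2024, 3, 30)
```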


  • Dynamicity in web applications: Responsive and dynamic pages cause real problems for Selenium tests because page elements move dynamically. In a dynamic site, the HTML is generated on demand by scripts in the background and then displayed on screen; a page may show a random news headline or the latest blog entries, for instance. This dynamism means Selenium often cannot find the correct selector, or, having found it once, later comparisons break.
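
One mitigation is to select by a stable test hook, such as a data-testid attribute agreed on with developers, and wait for the dynamic content to render. A hypothetical sketch, assuming a live `driver`:

```python
# Locate dynamic content via a stable, developer-provided test hook
# instead of matching generated markup that changes between renders.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

headline = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "[data-testid='news-headline']")
    )
)
```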


  • Browser support issues: A common problem in web test automation is that a test script executes successfully against one browser but fails on another due to compatibility issues. Browsers differ in how they render HTML, CSS, and other scripts, and they may also have different loading times, either of which can lead to test breakages.
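
Running the same test against several browsers catches these breakages early. A hedged sketch using a parametrized pytest fixture; it assumes Chrome and Firefox drivers are available locally:

```python
# Run each test once per browser to surface compatibility breakages.
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")
    assert "Example" in driver.title
```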

How to fix test breakage?

There are two types of approaches commonly used to fix or repair the broken test scripts of evolving web applications.


DOM-based Approach: This approach utilizes the DOM information of two versions of a web application and suggests potential fixes for broken test commands to the tester. It typically collects test case execution data for both the original and the modified version of the web application.


It is based on differential testing: it compares the behavior of the test case on two successive versions of the web application, the first in which the test script runs successfully and the second in which the script results in an error or failure. By analyzing the difference between the two executions, it suggests a set of repairs for the broken test script.
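
As a toy illustration of the DOM-based idea (not any particular tool's algorithm), one could propose the new-DOM element most similar to the one a broken locator used to match:

```python
# Suggest a repair: find the new-DOM element most similar to the element
# the broken locator used to match. Crude attribute-string similarity only.
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    return SequenceMatcher(None, str(sorted(a.items())),
                           str(sorted(b.items()))).ratio()

def suggest_repair(broken: dict, new_dom: list) -> dict:
    return max(new_dom, key=lambda el: similarity(broken, el))

old = {"tag": "button", "id": "submit", "text": "Sign in"}
new_dom = [
    {"tag": "button", "id": "login-btn", "text": "Sign in"},
    {"tag": "a", "id": "help", "text": "Help"},
]
print(suggest_repair(old, new_dom))  # -> the renamed sign-in button
```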


Visual Analysis based Approach: This approach leverages the visual information (such as screenshots) extracted through the execution of test cases, along with image processing and crawling techniques, to support automated repair of web test breakages.
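
As a toy illustration of the visual idea (again, not any particular tool's algorithm), one could diff screenshots captured at the same step in the two versions to localize where the page diverged; the file names are hypothetical:

```python
# Diff before/after screenshots of the same test step to localize a
# visual breakage. Assumes both screenshots have the same dimensions.
from PIL import Image, ImageChops

before = Image.open("step3_v1.png").convert("RGB")
after = Image.open("step3_v2.png").convert("RGB")

diff = ImageChops.difference(before, after)
bbox = diff.getbbox()  # bounding box of the changed region, or None
if bbox:
    print(f"visual change detected in region {bbox}")
else:
    print("screenshots match: no visual breakage at this step")
```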

Wrapping Up

Web applications are bound to evolve, and when they do, test cases break. In this story, you have learned how to deal with test breakages and keep up with evolving web apps.


It’s all about the survival of the fittest…

