Automation Testing is the use of specialized tools to execute tests automatically. Its benefits go well beyond reduced human error, increased test coverage, and a faster testing process.
By applying test automation properly, you can facilitate modern development practices such as Continuous Integration and Continuous Delivery (CI/CD) and increase the ROI of the whole project.
But there's a basic mistake that QA teams often make when adopting Automation Testing: choosing the wrong test cases to automate. The truth is, not everything is suited for automation. The World Quality Report 2019 revealed that 24% of the teams surveyed have difficulty deciding which test cases to automate.
This article gives you some tips and pointers for making that decision.
You should take the time to evaluate which test cases are worth automating. Before automating a given type of test case, consider the following factors:
The test case's complexity: a quick rule of thumb is to check whether the test case must be executed across many configurations (different devices, operating systems, platforms, and browsers).
Also check whether the test case requires different data for every test run. A test case that involves many such arrangements is a prime candidate for automation.
The test case's execution time and testing frequency: test cases that take too long to run manually, or that the team must execute time and again, should be automated.
You can also use these two criteria to decide whether a whole test suite should be automated; the additional consideration is how many test cases the suite contains.
The ROI of Test Automation: after evaluating the characteristics above, your team should sit down together and estimate the ROI of automating each test case. Your best choices are the test cases with the highest potential ROI once automated.
Whether the benefits of automation outweigh its costs: the benefits are simply the time, cost, and effort you save by using automation testing. The costs include both implementation costs and maintenance costs.
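To make this cost-benefit comparison concrete, here is a minimal sketch of such an ROI estimate. The formula and every figure in it are illustrative assumptions, not data from this article.

```python
# Illustrative sketch: weighing manual-run savings against automation costs
# for a single test case. All numbers are hypothetical.

def automation_roi(manual_cost_per_run, runs_per_year,
                   implementation_cost, maintenance_cost_per_year,
                   years=1):
    """Return the ROI of automating a test case over a time horizon."""
    savings = manual_cost_per_run * runs_per_year * years
    costs = implementation_cost + maintenance_cost_per_year * years
    return (savings - costs) / costs

# A regression test run 200 times a year at $15 per manual run,
# costing $1,200 to automate and $300/year to maintain:
roi = automation_roi(15, 200, 1200, 300, years=1)
print(f"First-year ROI: {roi:.0%}")  # → First-year ROI: 100%
```

A test case run only a handful of times a year would come out negative under the same formula, which is exactly the signal to leave it manual.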
Based on the considerations above, the following test cases are ideal for Automation Testing:
Regression Tests (smoke tests, sanity tests, etc.): performing regression tests every time a new feature or software fix is deployed would consume a lot of time and resources. This is why, when teams discuss which test cases to automate, regression tests are usually at the top of their minds.
Performance Tests (load test, stress test, etc.): Performance testing takes a lot of time to achieve the required test coverage, and it’s also repetitive in nature. There are many other reasons to go for automated performance testing.
First, you can shift performance testing left, running it in parallel with development and giving developers continuous feedback. Second, you can make performance regression checks an integral part of your Continuous Integration and Continuous Delivery (CI/CD) pipeline.
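As a rough illustration of such a pipeline gate, here is a minimal sketch of a latency-budget check. The timing approach, the budget, and the function under test are all assumptions for illustration; a real suite would use a dedicated tool such as JMeter, Locust, or k6.

```python
# Minimal sketch of a performance regression gate for a CI pipeline.
# The budget and the operation being timed are illustrative assumptions.
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def check_latency_budget(fn, budget_seconds, *args):
    """Fail the build (raise) if fn exceeds its latency budget."""
    _, elapsed = timed(fn, *args)
    if elapsed > budget_seconds:
        raise SystemExit(
            f"Performance regression: {elapsed:.3f}s > {budget_seconds}s budget")
    return elapsed

# Example: gate a stand-in operation at a 0.5 s budget.
elapsed = check_latency_budget(sorted, 0.5, range(100_000))
print(f"Within budget: {elapsed:.3f}s")
```

Because the check exits non-zero on failure, the CI pipeline fails the build the moment a change blows the budget, which is what gives developers the continuous feedback described above.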
Security Scanning: automating test cases that probe for potential security problems reduces human error and missed defects. Automated scanning is also well suited to checking for known weakness classes, such as weak encryption ciphers and SQL injection flaws.
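As one small example of what such an automated probe can look like, here is a sketch that feeds a classic SQL injection payload into a hypothetical lookup function and asserts it returns nothing. The schema, function, and payload are all illustrative; real scanning would use a dedicated tool such as OWASP ZAP.

```python
# Sketch of an automated SQL injection probe against a hypothetical
# lookup function, using only the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # Parameterized query: the (safe) implementation under test.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# Automated checks: normal input works, injection payload finds nothing.
payload = "alice' OR '1'='1"
assert find_user("alice") == [("alice",)]
assert find_user(payload) == []
print("SQL injection probe passed")
```

A vulnerable implementation that concatenated the input into the query string would match every row for that payload, so this check fails loudly the moment someone regresses to string building.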
Data-driven tests and tests of the AUT's crucial features: because these two types of test cases are highly prone to human error, test automation should be applied.
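A data-driven test runs the same logic against many input/expected-output pairs, which is tedious and error-prone by hand but trivial for a framework. Here is a minimal sketch using pytest's parametrize decorator; the function under test, apply_discount, is a hypothetical stand-in for a feature of the AUT.

```python
# Sketch of a data-driven test with pytest.mark.parametrize.
# apply_discount is a hypothetical function standing in for the AUT.
import pytest

def apply_discount(price, percent):
    """Hypothetical AUT function: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 10, 90.0),    # common case
    (100.0, 0, 100.0),    # no discount
    (80.0, 25, 60.0),     # quarter off
    (100.0, 100, 0.0),    # full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```

Adding a new scenario is just one more tuple in the table, so coverage grows without any new test code, which is exactly where manual execution tends to break down.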
Integration tests, API tests, unit tests, and cross-browser tests are some other test cases you should consider automating.
It’s true that automation testing is a foundation of Quality-at-Speed. However, you don’t want to automate every type of test case. These are poor candidates for automation testing:
Exploratory Testing: in exploratory testing, testers learn about the software as they go and use that understanding to decide which test cases to design and execute. It requires human testers to take time investigating the product, so by definition it cannot be automated.
User Experience Tests (or Usability Testing): though most automation tools today offer sophisticated simulations, they cannot perfectly portray or predict all of the actions and emotions of users as they use the software. You therefore want to leave usability testing to manual testers.
Intermittent (flaky) tests and redundant, low-risk tests: automating these kinds of test cases tends to produce unreliable results.
Anti-automation tests: these require real human interaction, such as CAPTCHA challenges, website forms, and web-based SMS verification. Automating anti-automation test cases means bypassing those protections, leaving your software vulnerable to malicious attacks.
To know if an automated test case is helping or hurting your team’s effort, you should always track these parameters.
End-user feedback: after all, meeting end-user requirements is one of the most important goals of software testing. After automating test cases, collect users’ feedback to evaluate your automation efforts.
The severity of defects: This is straightforward. Test Automation should reduce defects, not the other way around. Reporting is important in Automation Testing for this reason. You need to continuously report on bugs and other related issues when automating a specific test case.
The frequency of downtime and system outages: another way to know whether you are automating the right test cases is to track how often your product shuts down or experiences downtime. This data helps you identify areas where automation is not working so you can improve it.
Automation ROI: ROI is the one thing every stakeholder looks for when adopting test automation. The ROI of Test Automation can be measured in terms of how efficient the process becomes, how much cost you save, and how much better the test outcomes are after you adopt Automation Testing.
You want to compare the ROI you expected with the results you actually obtained. From there you will have a clear idea of how well automation is working for each test case.