AI in Software Testing: A Silver Bullet or a Threat to the Profession?

by Sandra Parker, April 13th, 2023

Too Long; Didn't Read

QA and software testing are among the most popular entry-level specialties for those striving to join the IT sector. With the speed at which AI is taking over so many applied fields, many speculate that software testing is next. Will AI solutions eventually replace human expertise? Is there a way to strike a balance between ChatGPT-like tools and human knowledge?



QA and software testing are among the most popular entry-level specialties for those striving to join the IT sector. But are they still worth it? With the speed at which AI is taking over so many applied fields, many speculate that software testing is next.


The rise of AI has led to questions about the future of quality assurance specialists. Will AI solutions eventually replace human expertise? Is there a way to strike a balance between ChatGPT-like tools and human knowledge? In this article, we'll delve into these questions and explore the potential implications for the QA industry.


What Does ChatGPT Think About AI in Software Testing?

The first rule of journalism: get it right, and go to the source. In my case, the source is ChatGPT, which I decided to interview before sharing my thoughts and findings on artificial intelligence in software testing.


Yes, I humanize AI, hoping one day it remembers I am "one of the good ones."


The dialogue went as follows:


Question 1 - ChatGPT, is AI increasingly helpful in software testing?


Answer - AI-powered tools can automate repetitive and time-consuming testing tasks. This can help to reduce the time and resources required for testing and improve the overall quality of the software. However, AI is not a replacement for human testers, but rather a tool that can complement and augment their efforts.


It seems like the last sentence was written so as not to scare mere mortals, so my next question was more direct:


Question 2 - ChatGPT, is it true that AI can replace human specialists in software testing?


Answer - No, AI is unlikely to fully replace human specialists in software testing in the near future. Although AI can automate certain tasks and assist in testing, it still has limitations and cannot replace human judgment and creativity. Human testers bring unique perspectives and problem-solving skills that AI cannot replicate, and they also play a critical role in verifying the accuracy of test results, interpreting complex scenarios, and making decisions.


Well, this sounds more reassuring. Now, I have the impetus and AI-confirmed background to discuss the pros, cons, risks, and opportunities of artificial intelligence in software testing.


How AI Can Be Used to Improve Software Testing

There is a predictable consensus that AI helps to make processes faster and more efficient. Let’s pinpoint key QA and testing processes and see how AI tools can enhance them.


  1. Test Case Generation. AI algorithms can automatically generate test cases, saving time and reducing the likelihood of human error. They can also create many more tests, increasing the overall coverage of the testing process.



[Image: 12-second test case example by ChatGPT]
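For a rough sketch of what this looks like in code, here is a minimal Python example that asks an OpenAI-style chat model to draft test cases. The model name, prompt, and feature under test are illustrative assumptions, not part of any particular tool:

```python
# Minimal sketch: asking an LLM to draft test cases for a feature.
# Assumes the official `openai` Python package and an OPENAI_API_KEY
# in the environment; the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate five test cases for a login form with email and password "
    "fields. Cover both happy paths and edge cases. For each test case, "
    "list: title, preconditions, steps, and expected result."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```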


  2. Test Optimization. AI can prioritize tests based on risk, complexity, and frequency of use, ensuring that the most critical tests are run first. This offers extra benefits for industries such as fintech, with their develop-fast, release-faster approach.


Example. AI can analyze software code and usage patterns to determine which tests are most likely to uncover bugs and other issues and prioritize those tests for execution.
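To make the prioritization idea concrete, here is a deliberately simplified Python sketch. A real AI tool would learn the scores and weights from code-change history and production analytics; everything below is hard-coded for illustration:

```python
# Toy sketch of risk-based test prioritization. In a real AI tool the
# risk, complexity, and usage scores would be learned from code-change
# history and production analytics; here they are hard-coded examples.

tests = [
    {"name": "test_login",            "risk": 0.9, "complexity": 0.4, "usage": 0.95},
    {"name": "test_password_reset",   "risk": 0.7, "complexity": 0.6, "usage": 0.40},
    {"name": "test_profile_settings", "risk": 0.3, "complexity": 0.5, "usage": 0.20},
]

def priority(test: dict) -> float:
    # Weighted sum; the weights are illustrative assumptions.
    return 0.5 * test["risk"] + 0.2 * test["complexity"] + 0.3 * test["usage"]

# Run the most critical tests first.
for test in sorted(tests, key=priority, reverse=True):
    print(f"{test['name']}: priority {priority(test):.2f}")
```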


  3. Test Data Management. AI tools can manage test data by generating data sets for test cases and maintaining data consistency across tests. It is one more area where human errors are costly.
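As a small illustration of what such generated data looks like, here is a sketch using the Faker library. Faker itself is a conventional data-generation tool rather than AI, but it shows the kind of consistent, reproducible data sets an AI-driven data manager would produce:

```python
# Sketch: generating a reproducible synthetic test data set with Faker.
from faker import Faker

fake = Faker()
Faker.seed(42)  # a fixed seed keeps the data consistent across test runs

users = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "signup_date": fake.date_this_decade().isoformat(),
    }
    for _ in range(5)
]

for user in users:
    print(user["name"], "|", user["email"])
```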


  4. Defect Prediction. AI algorithms are already used in disaster prediction solutions. Here, they can predict the likelihood of a defect occurring based on past data, emphasizing the most high-risk areas.


Example. Consider a large software application with many thousands of lines of code and complex requirements. AI can analyze the code, the requirements, and past test results to identify potential defects early in the development process.
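Here is a minimal sketch of the underlying idea, using scikit-learn and fabricated code metrics (lines changed, cyclomatic complexity, past defect count) as features. A production model would be trained on real repository history:

```python
# Toy defect-prediction sketch: train a classifier on historical code
# metrics and flag modules likely to contain defects. The numbers are
# fabricated for illustration; real features come from repo history.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per module: [lines_changed, cyclomatic_complexity, past_defects]
X_train = np.array([
    [500, 25, 4], [30, 3, 0], [220, 14, 2],
    [15, 2, 0],   [410, 30, 5], [60, 5, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = module had a defect

model = LogisticRegression().fit(X_train, y_train)

# Score new modules and surface the riskiest ones for extra testing.
X_new = np.array([[350, 22, 3], [20, 2, 0]])
for metrics, p in zip(X_new, model.predict_proba(X_new)[:, 1]):
    print(f"module {metrics}: defect probability {p:.2f}")
```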


  5. Test Automation. AI can be used to automate the testing process, freeing up testers to focus on other tasks such as test design and analysis. If I got a dollar for every time a QA specialist complained about a lack of time for analysis and more creative test design, I could afford early retirement.
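For context, the kind of repetitive regression check being automated looks something like this pytest sketch; the `calculate_discount` function is a hypothetical stand-in for real application code:

```python
# Sketch of a repetitive regression check, the kind of test AI-driven
# automation is well suited to generate and maintain. The function
# under test is a hypothetical stand-in for real application code.
import pytest

def calculate_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 10, 90.0),    # happy path
    (100.0, 0, 100.0),    # boundary: no discount
    (100.0, 100, 0.0),    # boundary: full discount
])
def test_calculate_discount(price, percent, expected):
    assert calculate_discount(price, percent) == expected

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```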


  6. Continuous Integration and Continuous Deployment (CI/CD). AI algorithms can be integrated into CI/CD pipelines to help ensure that builds and deployments are done with maximum efficiency and accuracy.
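As a sketch of how an AI signal might plug into a pipeline, consider a hypothetical quality gate: a CI step that fails the build when a model's predicted regression risk crosses a threshold. The risk function, threshold, and file list below are all illustrative stubs:

```python
# Hypothetical CI/CD quality gate: fail the pipeline if an AI model's
# predicted regression risk for the change set exceeds a threshold.
# `predicted_risk` would come from a trained model; here it is stubbed.
import sys

RISK_THRESHOLD = 0.8  # illustrative assumption, tuned per project

def predicted_risk(changed_files: list[str]) -> float:
    # Stub standing in for a real model inference call.
    return 0.35 if all(f.endswith(".md") for f in changed_files) else 0.65

changed = ["src/payments/checkout.py", "docs/README.md"]
risk = predicted_risk(changed)
print(f"Predicted regression risk: {risk:.2f}")

if risk > RISK_THRESHOLD:
    print("Risk above threshold; blocking deployment.")
    sys.exit(1)  # a non-zero exit code fails the CI job
print("Risk acceptable; proceeding with deployment.")
```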


  7. Root Cause Analysis. AI algorithms can be used to analyze test results and pinpoint the root cause of failures, enabling testers to fix defects more quickly and effectively.
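One concrete flavor of root cause analysis is grouping failure messages so that a shared underlying cause is fixed once rather than triaged many times. Here is a hedged sketch using scikit-learn text clustering; the failure messages are invented:

```python
# Sketch: cluster test failure messages so failures that share a root
# cause are triaged together. The messages below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

failures = [
    "TimeoutError: connection to payments-api timed out after 30s",
    "TimeoutError: connection to payments-api timed out after 30s",
    "AssertionError: expected total 99.99 but got 100.00",
    "ConnectionError: payments-api refused connection",
    "AssertionError: expected total 49.99 but got 50.00",
]

vectors = TfidfVectorizer().fit_transform(failures)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, message in sorted(zip(labels, failures)):
    print(f"cluster {label}: {message}")
```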


  8. Test Result Analysis. AI can analyze test results and identify trends, allowing testers to make data-driven decisions about the state of the software being tested and change their testing tactics or the overall QA strategy.
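For instance, a simple trend view of pass rates per build is the kind of signal such analysis surfaces, sketched here with pandas on fabricated results:

```python
# Sketch: trend analysis over test results with pandas. The results
# are fabricated; a real pipeline would pull them from the CI system.
import pandas as pd

results = pd.DataFrame({
    "build":  ["b1", "b1", "b2", "b2", "b3", "b3"],
    "suite":  ["ui", "api", "ui", "api", "ui", "api"],
    "passed": [True, True, True, False, False, False],
})

# Pass rate per build: a falling trend is a cue to change tactics.
pass_rate = results.groupby("build")["passed"].mean()
print(pass_rate)
```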

What Are the Disadvantages of AI in Software Testing?

If you feel like the previous part confirms that you may be out of work soon, don't sell yourself short, at least for now. Here are the limitations AI has now and will keep for a considerable amount of time.


  1. Lacks creativity. AI algorithms struggle to generate test cases that cover edge cases or unexpected scenarios, and they handle inconsistencies and corner situations poorly.


  2. Depends on training data. Don't forget: AI is nothing but an algorithm, a mathematical model fed data to operate. It is not a force of nature or a subject of natural development. Thus, the quality of test cases generated by AI depends on the quality of the data used to train the algorithms, which can be limited or biased.


  3. Needs “perfect conditions.” I bet you've been there: the project documentation is next to none, the use cases are vague and unrealistic, and you have to squeeze information out of your client. AI can't do that. The quality of its work will be exactly as good or as bad as the quality of the input and context turned into quantifiable data. Do you receive lots of that at the beginning of your QA projects?


  4. Has limited understanding of the software. We tend to bestow superpowers on AI and its understanding of the world. In fact, that understanding is still very limited: AI may not deeply understand the software being tested, which can result in missed scenarios or defects.


  5. Requires skilled professionals to operate. For example, integrating a testing strategy with AI-powered CI/CD pipelines can be complex to set up, maintain, and troubleshoot, as it requires advanced technical skills and knowledge of AI algorithms and technologies. The tried-and-true methods we use now may stay much cheaper and easier to maintain for years to come.

What Are the Threats Associated with AI-Powered Testing?

There is a difference between what AI can't do well and what can go wrong even when it does its job perfectly. Let's dig into the threats that come with the testing tasks artificial intelligence can take over.


  • Bias in prioritization and lack of transparency. It is increasingly difficult to comprehend how AI algorithms make prioritization decisions, which makes it hard to ensure that tests are being prioritized in an ethical and fair manner. Biases in the data used to train the algorithms can skew test prioritization.


Example. Suppose the training data contains a bias, such as a disproportionate number of test cases from a particular demographic group. In that case, the algorithm may prioritize tests in a way that unfairly favors or disadvantages certain groups. Say the training data contains more test cases from men than from women; the AI algorithm may then assume that men are the primary users of the software and women are secondary users. This could result in unfair or discriminatory prioritization of tests, which could degrade the quality of the software for underrepresented groups.


  • Overreliance on AI. Lack of human decision-making reduces creativity in testing approaches, pushes edge cases aside, and may ultimately cause more harm than good. Too little human oversight can result in incorrect test results and missed bugs, while heavier oversight adds maintenance overhead.


Example. If a team relies solely on AI-powered test automation tools, it may miss important defects that could have significant impacts on the software's functionality and user experience. A human tester catches inconsistencies by drawing on their entire background with similar solutions, while AI relies only on limited data and mathematical models. The more advanced AI gets, the more difficult it becomes to check the validity of its results, and the riskier overreliance becomes. It can create a false sense of security and result in software releases with unanticipated defects and issues.


  • Data security-related risks. Test data often contains sensitive personal, confidential, and proprietary information. Using AI for test data management may increase the risk of data breaches or privacy violations.


Example. Amazon changed the rules its coders and testers must follow when using AI tools after an alleged data security breach. It is speculated that ChatGPT has responded in ways suggesting it had access to internal Amazon data and shared it with users worldwide upon request.


So, What Will Happen to AI in Testing?

If you are a manual QA beginner, you had better hurry and invest in your skills. The less expert your tasks are and the easier they are to automate, the faster AI algorithms will come for your job.

AI automation is a necessary and long-awaited development for the QA industry. It will inevitably take over tasks involving big loads of data and thousands of lines of code. It will still require human supervision and maintenance, and companies will yet again face the necessity of balancing the costs.


I believe that security requirements will intensify, and AI usage by testers and developers will become more transparent to clients. In the end, companies are responsible for the data safety, confidentiality, and performance of their solutions. Stronger security requirements will result in new roles and checkpoints: a human overseeing AI is a must, and the more advanced the AI gets, the more professional that human must be.

The question of ethical and unbiased data input will be one of the most pressing for international companies, especially considering localization aspects. I suppose that QA experts with additional applied-ethics and DEI-related education will be in high demand.


In the end, here is what ChatGPT thinks of it:

[Image: ChatGPT's reply on AI risks]


In our company, we started applying AI-based tools for test automation back in 2022, and so far I can confirm that they speed up the process, though the results depend entirely on how diligently you “teach” them. We will keep testing AI to make our work for clients faster, better, and more cost-efficient, because it may be scary now, but AI is the future.