Over the past two years, advances in AI, and in large language models (LLMs) in particular, have made it possible to solve long-standing problems more efficiently, and one area where LLMs can have a real impact is in helping automate software testing. For many software teams, test automation has never been a first-class citizen of the SDLC, and the test cases they struggle to automate pile up into "test automation debt". Because of this gap, quality teams spend a large share of their time manually writing and verifying test cases, which slows down the engineering team's shipping velocity.
The only way out of this grind is to automate test cases at a much higher velocity, yet teams are often unable to do so.
Each of the underlying reasons deserves its own article on how software teams should tackle it; in this section, we will focus on how QA engineers can leverage ChatGPT or other LLMs as a co-pilot for BDD (behavior-driven development) testing.
What is BDD?
Behavior-driven development (BDD) is an Agile software development methodology in which an application is documented and designed around the behavior a user expects to experience when interacting with it. For the purposes of this discussion, let's apply BDD-based testing to the Y Combinator-run Hacker News site, using its login page as a starting point.
For the above-mentioned page, the expected behavior is:
"When a user enters valid credentials on https://news.ycombinator.com/login?goto=news and presses login, they should be redirected to the Hacker News front page."
As per BDD guidelines, the above behavior can be written as Gherkin steps, which form a possible test case; using Cucumber as the framework, the same Gherkin can be automated so that verifying this behavior no longer needs human intervention. However, because of the problems discussed in the section above, QA teams often struggle to automate it. This is where LLMs come in: with a few well-crafted prompts, we can create a workflow that lets QA teams churn out Gherkin steps and their Cucumber automation at a much faster velocity.
Workflow for Generating Gherkin Syntax
In the image below, for instance, I have shown how you can prime the GPT model with a prompt so that it generates Gherkin syntax for a behavior described in natural language.
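A prompt along the following lines does the job (the wording here is illustrative and can be adapted to your context):

"Act as a QA engineer practising behavior-driven development. I will describe the expected behavior of a web page in plain English. Convert it into Gherkin syntax with a Feature, a Scenario, and Given/When/Then steps, and reply with the Gherkin only.

Behavior: when a user enters valid credentials on https://news.ycombinator.com/login?goto=news and presses login, they should be redirected to the Hacker News front page."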
Workflow for Generating Cucumber-Compatible Code from Gherkin Syntax
Step 1: Engineering the prompt
Below is the prompt you can enter in the ChatGPT console to prime it to produce the appropriate automation from the HTML and Gherkin steps you provide. As the final output, we ask it to generate the code in the form of step definitions, as Cucumber requires, so that QAs can copy and paste it as-is.
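A prompt of the following shape works well (again, the wording is illustrative and can be adapted):

"Act as a test automation engineer who works with the Cucumber framework. I will give you the HTML of a web page and then Gherkin steps describing a behavior on that page. Using the element names and attributes from the HTML, generate the step definition code for those Gherkin steps so that it can be pasted into a Cucumber step definition file as-is. Ask me for the HTML first, then for the Gherkin steps."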
Step 2: Injecting the relevant HTML to be used as the base for automation
As depicted below, once primed, ChatGPT asks for the relevant HTML and Gherkin steps, which it uses to generate the step definitions for Cucumber.
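For the Hacker News login page, the snippet you paste in looks roughly like this (simplified; the acct and pw field names come from the page's HTML at the time of writing and should be verified against the live DOM):

<form action="login" method="post">
  <input type="hidden" name="goto" value="news">
  username: <input type="text" name="acct" size="20">
  password: <input type="password" name="pw" size="20">
  <input type="submit" value="login">
</form>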
Step 3: Generating the step definitions
From the HTML provided, GPT works out the DOM structure and then asks for the relevant Gherkin syntax. For our case, we have already generated the Gherkin, which we can pass in as-is:
Feature: User Login Redirect

  Scenario: User logs in with valid credentials
    Given the user is on the login page of "https://news.ycombinator.com/login?goto=news"
    When the user enters valid credentials and presses the login button
    Then the user should be redirected to the "hacker news" website
Final output
Finally, following the workflow and prompt above, ChatGPT generates the step definition file for the Gherkin scenario shown earlier, which QAs can copy into their Cucumber framework as-is after a minor quality check, speeding up the automation of the behavior they set out to test.
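The exact output varies with the model and the prompt, but for the scenario above the generated file typically looks like the sketch below. It assumes cucumber-js (@cucumber/cucumber) with Playwright driving the browser; the acct and pw selectors mirror the HTML injected earlier, and HN_USERNAME/HN_PASSWORD are placeholder environment variables for the test credentials.

// features/step_definitions/login.steps.ts
import { Given, When, Then, Before, After, setDefaultTimeout } from '@cucumber/cucumber';
import { chromium, Browser, Page } from 'playwright';
import { strict as assert } from 'assert';

setDefaultTimeout(30 * 1000); // browser steps can be slow

let browser: Browser;
let page: Page;

Before(async () => {
  browser = await chromium.launch();
  page = await browser.newPage();
});

After(async () => {
  await browser.close();
});

Given('the user is on the login page of {string}', async (url: string) => {
  await page.goto(url);
});

When('the user enters valid credentials and presses the login button', async () => {
  // Real credentials should come from environment variables or a secrets store.
  await page.fill('input[name="acct"]', process.env.HN_USERNAME ?? 'test-user');
  await page.fill('input[name="pw"]', process.env.HN_PASSWORD ?? 'test-password');
  await page.click('input[type="submit"][value="login"]');
});

Then('the user should be redirected to the {string} website', async (_site: string) => {
  // A successful login redirects to the Hacker News front page (goto=news).
  await page.waitForURL('**/news');
  assert.ok(page.url().startsWith('https://news.ycombinator.com/news'));
});

You would run this with npx cucumber-js after installing the two packages and registering ts-node (or compiling the TypeScript first); the only manual step left for the QA engineer is the quality check mentioned above.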
With the advent of AI and the emergence of copilots, humans will see a boost in their productivity. They will be able to push the boundaries of innovation further by handing off rudimentary, routine tasks to AI. However, I still think we should treat these tools as assistants: the intelligence in using them well still resides with humans. Just as tech, marketing, sales, and customer success teams are leveraging the power of LLMs in their workflows, QA engineers can do the same to lift their productivity further and make their work of ensuring software quality and stability exciting!