
Automation Testing 2.0: Easing the Way With Google’s Bard

by Iana Luntc, May 15th, 2023

Too Long; Didn't Read

In this article, the author explains how to create automation tests with Google Bard AI for the popular website LinkedIn. It includes prompts for asking the AI to create test cases, page objects, and tests, along with additional prompts that can help during testing: generating test data, identifying potential bugs, recommending new tests to automate, generating reports on test execution results, recommending fixes for unstable tests, and generating documentation for automated tests. The article emphasizes how AI automation testing can speed up processes and help QA focus on more important tasks.



Why AI?

Artificial intelligence is rapidly changing the world around us. Every day we see news about a new AI created to take over the world... that is, to help us. As we all know, testing can be quite time-consuming, and it can be difficult to keep up with software changes.


While AI automation testing is not a complete replacement for a regular AQA engineer's work, we can use it to speed up some processes and help QA focus on more important tasks.


In this article, we will explore the process of creating automation tests using an innovative AI-powered approach. Additionally, we will provide useful prompts to help you design and implement highly effective tests.


Disclaimer: The code snippets in this article are AI-generated.

Using Google Bard

We are going to use Bard, a new large language model from Google. It is trained on more recent data than ChatGPT and has access to real-time information. It is still under development, but it already gives good responses.

Writing an automation test

We will use a popular website that just so happens to be one of our favorites, LinkedIn, for our experiments 🙃


Creating a test case

Ask Google Bard these prompts to get the most accurate answers.


“Write a login test case for LinkedIn”


“Write a negative login test case for LinkedIn”


“Write an API login test case for LinkedIn”



Result of the first prompt:


Test Case ID: TC-001
Test Case Name: Login to LinkedIn
Description: Verify that a user can log in to LinkedIn with valid credentials.

Steps:
1. Go to the LinkedIn login page.
2. Enter a valid username and password.
3. Click the "Log In" button.
4. Verify that the user is logged in and is taken to the LinkedIn homepage.

Expected Results:
The user should be able to log in to LinkedIn with valid credentials.
The user should be taken to the LinkedIn homepage.

Pass/Fail: Pass

Comments: None.

Here are some additional test cases that you may want to consider:

Verify that a user cannot log in with an invalid username or password.
Verify that a user is not able to log in if they have forgotten their password.
Verify that a user is able to reset their password if they have forgotten it.
Verify that a user is able to log in with a social media account (e.g., Facebook, Google).
Verify that a user is able to log in from a mobile device.


We got the test case title, description, steps, and expected results. Additionally, the AI provided us with some suggestions for future test cases.


Creating a Page Object file

Ask Google Bard this prompt in the same conversation.


“Using the test case you’ve created before, write a Page Object model”



import { Locator } from "playwright";

export class LinkedInLoginPage {
  constructor(page: Page) {
    this.page = page;
    this.usernameInputLocator = Locator.input("username");
    this.passwordInputLocator = Locator.input("password");
    this.submitButtonLocator = Locator.button("submit");
  }

  async login(username: string, password: string) {
    await this.page.goto("https://www.linkedin.com/login");
    await this.page.click(this.usernameInputLocator);
    await this.page.type(this.usernameInputLocator, username);
    await this.page.click(this.passwordInputLocator);
    await this.page.type(this.passwordInputLocator, password);
    await this.page.click(this.submitButtonLocator);
  }

  async isLoggedIn() {
    const isLoggedIn = await this.page.evaluate(() => {
      return document.querySelector(".me-avatar").textContent !== "";
    });
    return isLoggedIn;
  }
}


We got a new file, LinkedInLoginPage, with the locators and methods needed for the test. It’s not perfect, but now, instead of writing a whole new page object, all we need to do is fix the small details.
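
For reference, here is one way those details might be fixed. This is a hand-corrected sketch, not Bard’s output: Locator.input() and Locator.button() are not real Playwright APIs, the Page type was never imported, and the class fields were never declared. The selectors (#username, #password, .global-nav__me) are assumptions about LinkedIn’s current markup and should be verified against the live page.


import { Page, Locator } from "playwright";

export class LinkedInLoginPage {
  readonly page: Page;
  readonly usernameInput: Locator;
  readonly passwordInput: Locator;
  readonly submitButton: Locator;

  constructor(page: Page) {
    this.page = page;
    // Assumed selectors for LinkedIn's login form; verify before use.
    this.usernameInput = page.locator("#username");
    this.passwordInput = page.locator("#password");
    this.submitButton = page.locator('button[type="submit"]');
  }

  async login(username: string, password: string): Promise<void> {
    await this.page.goto("https://www.linkedin.com/login");
    await this.usernameInput.fill(username);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }

  async isLoggedIn(): Promise<boolean> {
    // The "Me" navigation item only renders for authenticated users.
    return this.page.locator(".global-nav__me").isVisible();
  }
}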


Creating the test

After fixing those details, we can ask the AI to create a test based on the Page Object it created earlier.


“Create a test based on the test case and Page Object which you provided earlier.”



const { LinkedInLoginPage } = require("./linkedin-login-page");

describe("LinkedIn Login", () => {
  let page: Page;

  beforeEach(async () => {
    page = await playwright.chromium.launch();
  });

  afterEach(async () => {
    await page.close();
  });

  it("should allow users to login with a valid username and password", async () => {
    const loginPage = new LinkedInLoginPage(page);
    await loginPage.login("username", "password");
    expect(await loginPage.isLoggedIn()).toBe(true);
  });
});


Now we’ve got a clean test structure, with setup and teardown steps around the test, which lets us expand this file with more test cases in the future.
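
That said, this snippet also needs light fixes before it runs: chromium.launch() returns a Browser rather than a Page, the playwright object is never imported, and require is mixed into TypeScript. A runnable sketch on top of the @playwright/test runner (the environment variable names below are placeholders of our choosing) could look like this:


import { test, expect } from "@playwright/test";
import { LinkedInLoginPage } from "./linkedin-login-page";

test.describe("LinkedIn Login", () => {
  // @playwright/test creates and disposes a fresh page per test,
  // replacing the manual beforeEach/afterEach hooks.
  test("should allow users to log in with a valid username and password", async ({ page }) => {
    const loginPage = new LinkedInLoginPage(page);
    // Placeholder credentials; load real ones from the environment, never from the repo.
    await loginPage.login(process.env.LINKEDIN_USER ?? "", process.env.LINKEDIN_PASS ?? "");
    expect(await loginPage.isLoggedIn()).toBe(true);
  });
});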


Some additional prompts that might be helpful during testing


1. Generate a list of possible test cases for an <application>

2. Identify potential bugs in an <application>

3. Analyse the results of test executions

4. Generate test data in <this format> in JSON (an example of what this might produce follows this list)

5. Identify the most important tests to automate <list of test cases or application>

6. Recommend new tests to automate <application>

7. Generate reports on test execution results - <results>

8. Recommend fixes for an unstable test - <code for this test>

9. Generate documentation for automated tests - <tests>
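
To illustrate the fourth prompt, here is the kind of test data Bard might produce, exported as a TypeScript fixture so it can drive the login test above. The shape and field names are hypothetical, not actual Bard output:


// Hypothetical test data for the LinkedIn login flow; field names are
// illustrative, chosen to pair with the login test shown earlier.
export const loginTestData = [
  { username: "valid.user@example.com", password: "Correct-Pass-123", shouldLogIn: true },
  { username: "valid.user@example.com", password: "wrong-password", shouldLogIn: false },
  { username: "", password: "", shouldLogIn: false },
];


Each row can then drive one parameterized run of a positive or negative login test.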