Automation of QA with jest-puppeteer and Headless Chrome

by Aleksandr Zakharov, October 18th, 2020
Too Long; Didn't Read

Automating QA with jest-puppeteer and Headless Chrome helps ease the life of a QA team by turning regular manual checks into automated ones. By the end of this article we will have a fully standalone test project with a database (populated with fixtures), API mocking and screenshot testing.

As our application started to grow, we felt a desperate need to ease the life of our QA team and automate all the regular manual checks.

Out of all the available tools we decided to go with jest-puppeteer, which drives Headless Chrome as a webdriver. So... today this is the protagonist of our narrative.

At the end of it we will have a fully standalone test project with a database (populated with fixtures), API mocking and screenshot testing.

Basic setup

First of all let's do some basic setup.

I've forked flask-base to be our playground today. All my modifications, as well as the tests themselves, are available on GitHub:

https://github.com/Xezed/automated-tests

Let's begin by creating a new automated_tests directory and initializing an npm package in it.
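
If you're starting from scratch, that can be done with:

mkdir automated_tests
cd automated_tests
npm init -y   # accept the defaults; we'll tweak package.json afterwards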

Then you will need to run:

npm install jest-cli jest-puppeteer puppeteer-screenshot-tester jest-environment-node knex knex-cleaner pg puppeteer sql-fixtures --save-dev

These are all the packages we need to install for this whole project.

This is how my package.json file looks afterwards:

{
  "name": "automated_tests",
  "version": "1.0.0",
  "description": "Demo for the hackernoon article",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Aleksandr Zakharov",
  "license": "ISC",
  "devDependencies": {
    "jest-cli": "^26.5.0",
    "jest-environment-node": "^26.5.0",
    "jest-puppeteer": "^4.4.0",
    "knex": "^0.21.6",
    "knex-cleaner": "^1.3.1",
    "pg": "^8.4.0",
    "puppeteer": "^5.3.1",
    "puppeteer-screenshot-tester": "^1.3.1",
    "sql-fixtures": "^1.0.4"
  }
}

Now let's create and configure the jest.config.js file:

module.exports = {
  preset: "jest-puppeteer",
  testEnvironment: 'jest-environment-puppeteer',
};

This is enough to write and run our first test case. So let's do it! :)

Writing the first test case

describe('Flask-Base', () => {
  beforeAll(async () => {
    // open index page
    await page.goto('http://127.0.0.1:5000/');
  });

  test('should be titled "Flask-Base"', async () => {
    await expect(page.title()).resolves.toMatch('Flask-Base');
  });

});

describe is used to bundle together the test cases for a particular feature which we need to test. All the test clauses will be run sequentially, so a latter test can rely on the application (page) state produced by the former test.

All the code inside beforeAll will run once before any other tests in this describe clause.

The aforementioned describe, test and beforeAll, as well as page, are injected by default, and we have no need to import them.
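
To make the sequential behavior concrete, here is a minimal sketch (the /account/login URL is an assumption about flask-base; the #email selector appears later in this article). The second test relies on the page the first one left behind:

describe('sequential example', () => {
  test('navigates to the login page', async () => {
    // after this test finishes, the browser is still on the login page
    await page.goto('http://127.0.0.1:5000/account/login');
  });

  test('can type into the login form', async () => {
    // relies on the previous test having navigated to the login page
    await page.type('#email', 'user@example.com');
  });
});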

Now we need to add "jest" to the npm test script in the package.json file:

"scripts": {
    "test": "jest"
  },

And run:

npm test

It works!

Connecting the database

How can we bind a DB to this setup? This is possible with knex and sql-fixtures.

Let's set these packages up.

In order to do it, we need to add the globalSetup and globalTeardown options to our jest.config.js:

module.exports = {
  // A path to a module which exports an async function that is triggered once before all test suites
  globalSetup: "./globalSetup.js",

  // A path to a module which exports an async function that is triggered once after all test suites
  globalTeardown: "./globalTeardown.js",

  preset: "jest-puppeteer",
  testEnvironment: 'jest-environment-puppeteer',
};

and create the appropriate files...

// globalSetup.js
const { setup: setupPuppeteer } = require('jest-environment-puppeteer');
// we will create this module in the next step
const utils = require('./utils');

const knexCleaner = require('knex-cleaner');
const sqlFixtures = require('sql-fixtures');

const fixtureCreator = new sqlFixtures(utils.knex);

module.exports = async function globalSetup(globalConfig) {
  await setupPuppeteer(globalConfig);
  // clean DB before each run
  await knexCleaner.clean(utils.knex);
  // these are fixtures with which we are populating the database
    // Order is important: earlier entries are created first.
  await fixtureCreator.create({
    roles: [
      {
        id: 1,
        name: 'Admin',
        index: 'admin',
        default: false,
        // admin permission in flask_base
        permissions: 255
      }
    ],
    users: [
      {
        first_name: 'Aleks',
        last_name: 'Zakharov',
        confirmed: true,
        email: '[email protected]',
        // the colons are doubled in order to escape them
        password_hash: 'pbkdf2::sha256::150000$cUczhMmV$2a20becb3dd09d6c2464fcc44f2f9154df4f71b05beb9201716f591f4a5f9393',
        role_id: 1
      }
    ],
  });
};
// globalTeardown.js
const utils = require('./utils');

module.exports = async function globalTeardown(globalConfig) {
  await utils.knex.destroy();
};

and the utils.js file just contains the knex setup:

const dbConfig = {
  client: 'pg',
  connection: {
    host: process.env.DB_HOST || 'localhost',
    user: process.env.DB_USER || 'flask_base',
    password: process.env.DB_PASSWORD || '123456',
    database: process.env.DB_NAME || 'flask_base',
    port: process.env.DB_PORT || 5432,
  },
};

exports.knex = require('knex')(dbConfig);
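
A side benefit: since utils.js exports the configured knex instance, a test can query the database directly to assert on state. A minimal sketch (the users table and its columns come from the fixtures above):

// main.test.js
const utils = require('./utils');

test('fixture user was created in the database', async () => {
  // query the users table that globalSetup.js populated
  const user = await utils.knex('users')
    .where({ first_name: 'Aleks' })
    .first();
  expect(user.confirmed).toBe(true);
});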

With such a setup we have a database which will be cleaned and populated with fixtures on each run of jest. Let's test it out with our second test case:

// code above is omitted for brevity
test('should be titled "Flask-Base"', async () => {
    await expect(page.title()).resolves.toMatch('Flask-Base');
  });

  test('login works', async () => {
    // getting array of elements with .sign class
    const sign = await page.$$('.sign');
    // we need the second one
    // clicking on it redirects us to the login page
    await sign[1].click();

    // waiting for login field be loaded
    // it means that login page has been loaded
    await page.waitForSelector('#email');

    // entering email which we've specified in fixtures
    await page.type('#email', '[email protected]');
    // entering password which we've specified in fixtures
    await page.type('#password', 'password');
    // submitting
    await page.click('#submit');

    // if we don't know which element to wait for then we can specify a timer
    await page.waitForTimeout(2000);

    // check that we are actually logged in as the fixture user
    const [heading] = await page.$x('//h1[contains(text(), "Aleks Zakharov")]');
    expect(heading).toBeTruthy();

  });

});

Let's run it:

It passed but didn't exit correctly. I couldn't find a way to fix that in a clean manner; it seems like some bug somewhere deep inside the source code. But we can enforce the exit with the --forceExit flag and get rid of the warning with --detectOpenHandles:

  // package.json
  "scripts": {
    "test": "jest --detectOpenHandles --forceExit"
  },

Run it again:

Now it's all clean.

API mocking

What about API calls? How can we isolate them?

It's pretty easy to do as well. Just out of the box:

// main.test.js
describe('Flask-Base', () => {
  beforeAll(async () => {
    // open index page
    await page.goto('http://127.0.0.1:5000/');

    // allow request interception to mock api calls
    await page.setRequestInterception(true);

    // mocking api call
    page.on('request', request => {
      if (request.url().includes('jsonplaceholder')) {
        request.respond(
          {
            contentType: 'application/json',
            headers: { "Access-Control-Allow-Origin": "*" },
            body: JSON.stringify({
              "info": "intercepted by tests"
            })
          }
        );
        // this is a must
        return;
      }
      // if request was not matched continue as is
      request.continue();
    });
  });
// our tests go next

1. We activate request interception.

2. We intercept requests whose url contains "jsonplaceholder". But you can also intercept based on the method, headers or content (see the sketch after this list).

3. We need to set "Access-Control-Allow-Origin": "*" in the headers so that the browser's CORS check doesn't reject the mocked response.

4. We need to explicitly return after request.respond().

5. We need to add request.continue(), which lets the request proceed as usual (fetching from the internet) when no pattern was matched. In this particular case, that is every API call whose url does not contain "jsonplaceholder".
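
For instance, a minimal sketch of matching on the request method instead of the url (the /api/orders endpoint is hypothetical):

// mock only POST requests to a hypothetical endpoint
page.on('request', request => {
  if (request.method() === 'POST' && request.url().includes('/api/orders')) {
    request.respond({
      contentType: 'application/json',
      headers: { 'Access-Control-Allow-Origin': '*' },
      body: JSON.stringify({ status: 'created by mock' })
    });
    return;
  }
  request.continue();
});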

I've attached a simple function to the onclick event in the index.html file inside flask_base:

// api-call.js
function callApi() {
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(response => response.json())
    .then(json => console.log(json))
    .catch(error => console.log(error))
}

But in order to see the console.log while the tests are running we need to open the browser. To do that we need to add this file to our automated_tests package:

// jest-puppeteer.config.js
module.exports = {
  // https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions
  launch: {
    // here we can specify whether we want to launch UI
    headless: false,
    defaultViewport: { width: 1600, height: 900 }
  },
  browserContext: 'incognito',
};

The headless option allows us to choose whether we want to see the UI or not.
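
Alternatively, you can forward the page's console messages straight to the jest output with Puppeteer's console event; a minimal sketch:

// main.test.js, e.g. inside beforeAll
page.on('console', message => {
  // prefix browser logs to distinguish them from jest's own output
  console.log(`[browser] ${message.text()}`);
});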

Now if we run npm test we see the UI clicking, but it's too fast to open the console. Let's add some timers; I'm constantly using them for development and debugging. Add these lines to the top of main.test.js:

// this is the timeout after which jest produces an error
jest.setTimeout(100000);
// this timeout should be lower than jest's, otherwise an error occurs
page.setDefaultTimeout(50000);

And now we can increase the waitForTimeout in our code:

await page.waitForTimeout(200000);

Now if we click on the paragraph which contains the word project, we trigger the API call (from the callApi function) and it gets intercepted.
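
For completeness, a minimal sketch of how a test could trigger that click itself (the paragraph markup is an assumption):

// find the paragraph by its text and click it
const [paragraph] = await page.$x('//p[contains(text(), "project")]');
await paragraph.click();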

Screenshot testing

We will also implement a solution which allows us to compare screenshots (UI testing).

Let's add another test case:

const ScreenshotTester = require('puppeteer-screenshot-tester');

// code omitted for brevity

  test('screenshot testing', async () => {
    const tester = await ScreenshotTester(
      0.1, // threshold of difference to raise error
      false, // anti-aliasing
      false, // ignore colors
      {
        ignoreRectangles: [[650, 300, 700, 200]], // areas to ignore
        includeRectangles: [[300, 200, 1100, 1100]]  // areas to include
      },
      {
        transparency: 0.5
      }
    );

    const result = await tester(page, 'test-screenshot', {
      fullPage: true,  // takes the screenshot of the full scrollable page
    });

    // make the assertion; result is always a boolean
    expect(result).toBe(true);
  });

And run our tests now...

As you can see, the output says that there was no screenshot, so there is nothing to compare with. Let's run it once again...

Now it failed and we can see why.

There are a couple of screenshots generated in our test folder: one with the initial image and another one with the actual difference. So let's take a look at the difference:

The gray areas are the ones to be ignored (via includeRectangles and ignoreRectangles). The purple text highlights the actual difference, and it's pretty substantial, because new text is generated for the index page each and every time. If the images are the same, the tests pass just fine.

I also found it very useful to utilize the saveNewImageOnError option, which still raises an error on a difference BUT saves the new image, so the tests fail only once for every UI change. This can be useful in a devOps environment, to avoid having to manually remove images from the servers.
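
A minimal sketch, assuming (as in the library's README) that the option is passed alongside fullPage in the screenshot options:

const result = await tester(page, 'test-screenshot', {
  fullPage: true,
  // still fails the comparison on a difference, but overwrites the
  // stored reference image so only one failure happens per UI change
  saveNewImageOnError: true,
});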

Conclusion

I've just scratched the surface, but by now you should have a decent enough understanding of jest-puppeteer to feel comfortable working with it.

Code for this article: https://github.com/Xezed/automated-tests