As our application grew, we felt a pressing need to ease the life of our QA team and automate all of the regular manual checks. Out of the available tools we decided to go with jest-puppeteer, which drives Headless Chrome as a webdriver. So today it is the protagonist of our narrative. By the end of the article we will have a fully standalone test project with a database (populated with fixtures), API mocking and screenshot testing.

## Basic setup

First of all, let's do some basic setup. I've forked flask-base to be our playground today. All my modifications, as well as the tests themselves, are available on GitHub: https://github.com/Xezed/automated-tests

Let's begin by creating a new `automated_tests` directory and initializing an npm package in it. Then you will need to run:

```
npm install jest-cli jest-puppeteer puppeteer-screenshot-tester jest-environment-node knex knex-cleaner pg puppeteer sql-fixtures --save-dev
```

These are all the packages we need for this whole project. This is how my `package.json` file looks afterwards:

```json
{
  "name": "automated_tests",
  "version": "1.0.0",
  "description": "Demo for the hackernoon article",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Aleksandr Zakharov",
  "license": "ISC",
  "devDependencies": {
    "jest-cli": "^26.5.0",
    "jest-environment-node": "^26.5.0",
    "jest-puppeteer": "^4.4.0",
    "knex": "^0.21.6",
    "knex-cleaner": "^1.3.1",
    "pg": "^8.4.0",
    "puppeteer": "^5.3.1",
    "puppeteer-screenshot-tester": "^1.3.1",
    "sql-fixtures": "^1.0.4"
  }
}
```

Now let's create and configure the `jest.config.js` file:

```js
// jest.config.js
module.exports = {
  preset: "jest-puppeteer",
  testEnvironment: "jest-environment-puppeteer",
};
```

This is enough to write and run our first test case. So let's do it! :)

## Writing the first test case

```js
// main.test.js
describe('Flask-Base', () => {
  beforeAll(async () => {
    // open index page
    await page.goto('http://127.0.0.1:5000/');
  });

  test('should be titled "Flask-Base"', async () => {
    await expect(page.title()).resolves.toMatch('Flask-Base');
  });
});
```

`describe` is used to bundle together the cases for a particular feature which we need to test. All the `test` clauses are run sequentially, so a later `test` can rely on the application (page) state produced by the previous `test`.

All the code inside `beforeAll` is run once, before any other tests in this `describe` clause.

The aforementioned `describe`, `test` and `beforeAll`, as well as `page`, are injected by default, so we don't need to import them.

Now we need to set `npm test` to run `jest` in the `package.json` file:

```
// package.json
"scripts": {
  "test": "jest"
},
```

And run `npm test`. It works!

## Connecting the database

How can we bind a DB to this build? This is possible with `knex` and `sql-fixtures`. Let's set these packages up. In order to do that, we need to add the `globalSetup` and `globalTeardown` options to our `jest.config.js`:

```js
// jest.config.js
module.exports = {
  // A path to a module which exports an async function that is triggered once before all test suites
  globalSetup: "./globalSetup.js",
  // A path to a module which exports an async function that is triggered once after all test suites
  globalTeardown: "./globalTeardown.js",
  preset: "jest-puppeteer",
  testEnvironment: "jest-environment-puppeteer",
};
```

...and create the appropriate files.
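One caveat before we create them: `sql-fixtures` only inserts rows, it does not create tables, and `knex-cleaner` only clears them out, so the `roles` and `users` tables must already exist in the database that knex points to (in flask-base they are created by the Flask application's own setup). If you would rather run against a completely throwaway schema, a knex migration along the following lines could create the two tables the fixtures below rely on. This is my own sketch, not part of the article's repo, and the column list is an assumption inferred from the fixture data:

```js
// migrations/create_test_tables.js -- hypothetical, only needed if you don't reuse the flask_base schema
exports.up = async function (knex) {
  await knex.schema.createTable('roles', (table) => {
    table.integer('id').primary();
    table.string('name');
    table.string('index');
    table.boolean('default');
    table.integer('permissions');
  });
  await knex.schema.createTable('users', (table) => {
    table.increments('id'); // auto-incrementing primary key
    table.string('first_name');
    table.string('last_name');
    table.boolean('confirmed');
    table.string('email');
    table.string('password_hash');
    table.integer('role_id').references('id').inTable('roles'); // mirrors flask-base's User.role_id
  });
};

exports.down = async function (knex) {
  await knex.schema.dropTableIfExists('users');
  await knex.schema.dropTableIfExists('roles');
};
```

You would run it with `npx knex migrate:latest` after pointing a knexfile at the test database; reusing the existing `flask_base` database, as the `utils.js` below does, is the simpler path.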
```js
// globalSetup.js
const { setup: setupPuppeteer } = require('jest-environment-puppeteer');
// we will create this module in the next step
const utils = require('./utils');
const knexCleaner = require('knex-cleaner');
const sqlFixtures = require('sql-fixtures');

const fixtureCreator = new sqlFixtures(utils.knex);

module.exports = async function globalSetup(globalConfig) {
  await setupPuppeteer(globalConfig);
  // clean the DB before each run
  await knexCleaner.clean(utils.knex);
  // these are the fixtures with which we populate the database
  // Order is important: the earlier an entry appears, the earlier it is created.
  await fixtureCreator.create({
    roles: [
      {
        id: 1,
        name: 'Admin',
        index: 'admin',
        default: false,
        permissions: 255 // admin permission in flask_base
      }
    ],
    users: [
      {
        first_name: 'Aleks',
        last_name: 'Zakharov',
        confirmed: true,
        email: 'aleks@space-xxx.com',
        // double the colons in order to escape them
        password_hash: 'pbkdf2::sha256::150000$cUczhMmV$2a20becb3dd09d6c2464fcc44f2f9154df4f71b05beb9201716f591f4a5f9393',
        role_id: 1
      }
    ],
  });
};
```

```js
// globalTeardown.js
const utils = require('./utils');

module.exports = async function globalTeardown(globalConfig) {
  await utils.knex.destroy();
};
```

And the `utils.js` file just contains the `knex` setup:

```js
// utils.js
const dbConfig = {
  client: 'pg',
  connection: {
    host: process.env.DB_HOST || 'localhost',
    user: process.env.DB_USER || 'flask_base',
    password: process.env.DB_PASSWORD || '123456',
    database: process.env.DB_NAME || 'flask_base',
    port: process.env.DB_PORT || 5432,
  },
};

exports.knex = require('knex')(dbConfig);
```

With such a setup we have a database which is cleaned and populated with fixtures on every jest run. Let's test it out with our second test case:

```js
// code above is omitted for brevity
  test('should be titled "Flask-Base"', async () => {
    await expect(page.title()).resolves.toMatch('Flask-Base');
  });

  test('login works', async () => {
    // getting the array of elements with the .sign class
    const sign = await page.$$('.sign');
    // we need the second element (index 1)
    // clicking on it redirects us to the login page
    await sign[1].click();
    // waiting for the login field to be loaded
    // it means that the login page has been loaded
    await page.waitForSelector('#email');
    // entering the email which we've specified in fixtures
    await page.type('#email', 'aleks@space-xxx.com');
    // entering the password which we've specified in fixtures
    await page.type('#password', 'password');
    // submitting
    await page.click('#submit');
    // if we don't know which element to wait for, we can specify a timer
    await page.waitForTimeout(2000);
    // check that we are actually logged in as the user with the unconfirmed account
    const heading = await page.$x('//h1[contains(text(), "Aleks Zakharov")]');
    expect(heading.length).toBeGreaterThan(0);
  });
});
```

Let's run it. It passed, but it didn't exit correctly. I couldn't find a way to fix this cleanly; it seems like a bug somewhere deep inside the source code. But we can enforce the exit with the `--forceExit` flag and get rid of the warning with `--detectOpenHandles`:

```
// package.json
"scripts": {
  "test": "jest --detectOpenHandles --forceExit"
},
```

Run it again. Now it's all clean.

## API mocking

What about API calls? How can we isolate them? That is pretty easy to do as well.
Just out of the box:

```js
// main.test.js
describe('Flask-Base', () => {
  beforeAll(async () => {
    // open index page
    await page.goto('http://127.0.0.1:5000/');
    // allow request interception to mock api calls
    await page.setRequestInterception(true);
    // mocking the api call
    page.on('request', request => {
      if (request.url().includes('jsonplaceholder')) {
        request.respond({
          contentType: 'application/json',
          headers: { "Access-Control-Allow-Origin": "*" },
          body: JSON.stringify({ "info": "intercepted by tests" })
        });
        // this is a must
        return;
      }
      // if the request was not matched, continue as is
      request.continue();
    });
  });

  // our tests go next
});
```

1. We activate request interception.
2. We intercept requests whose url contains "jsonplaceholder". But you can intercept based on method, headers or content as well.
3. We need to set `"Access-Control-Allow-Origin": "*"` in the headers, otherwise the browser will reject the mocked cross-origin response.
4. We need to explicitly `return` after `request.respond()`.
5. We need to add `request.continue()`, which lets every unmatched request proceed as usual (going out to the internet). In this particular case that is every API call whose url does not contain "jsonplaceholder".

I've attached a simple function to an onclick event in the `index.html` file inside flask_base:

```js
// api-call.js
function callApi() {
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(response => response.json())
    .then(json => console.log(json))
    .catch(error => console.log(error))
}
```

But in order to see the `console.log` while the tests are running, we need to open the browser. To do that we need to add this file to our `automated_tests` package:

```js
// jest-puppeteer.config.js
module.exports = {
  // https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions
  launch: {
    // here we can specify whether we want to launch the UI
    headless: false,
    defaultViewport: { width: 1600, height: 900 }
  },
  browserContext: 'incognito',
};
```

The `headless` option allows us to choose whether we want to see the UI or not.

Now if we run `npm test` we see the UI clicking through the pages, but it's too fast to open the console. Let's add some timers; I constantly use them for development and debugging. Add these lines to the top of `main.test.js`:

```js
// these are the timeouts after which an error is produced
jest.setTimeout(100000);
// this timeout should be lower than jest's, otherwise an error happens
page.setDefaultTimeout(50000);
```

And now we can increase `waitForTimeout` in our code:

```js
await page.waitForTimeout(200000);
```

Now if we click on the paragraph which contains the word "project", we trigger the API call (from the `callApi` function) and it gets intercepted:

## Screenshot testing

We will also implement a solution which allows us to compare screenshots (UI testing). Let's add another test case:

```js
const ScreenshotTester = require('puppeteer-screenshot-tester');

// code omitted for brevity
  test('screenshot testing', async () => {
    const tester = await ScreenshotTester(
      0.1,   // threshold of difference to raise an error
      false, // anti-aliasing
      false, // ignore colors
      {
        ignoreRectangles: [[650, 300, 700, 200]],   // areas to ignore
        includeRectangles: [[300, 200, 1100, 1100]] // areas to include
      },
      { transparency: 0.5 }
    );
    const result = await tester(page, 'test-screenshot', {
      fullPage: true, // takes the screenshot of the full scrollable page
    });
    // make the assertion; result is always a boolean
    expect(result).toBe(true);
  });
```

And run our tests now... As you can see, the output says that there was no screenshot.
So there is nothing to compare against yet. Let's run it once again... Now it failed, and we can see why. There are a couple of screenshots generated in our `test` folder: one of them is the initial image and the other one shows the actual difference. So let's take a look at the difference:

The gray areas are the ones to be ignored (via `includeRectangles` and `ignoreRectangles`). The purple text highlights the actual difference, and it's pretty substantial, because new text is generated for the index page on every run. If the images are the same, the tests pass just fine.

I also found it very useful to use the `saveNewImageOnError` option, which still raises an error on a difference BUT saves the new image, so the tests fail only once for every UI change. This can be useful in a devops environment, to avoid removing images from the servers manually (a sketch of this option is included as a postscript below).

## Conclusion

I've just scratched the surface, but by now you should have a decent enough understanding of jest-puppeteer to feel comfortable working with it.

Code for this article: https://github.com/Xezed/automated-tests
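P.S. Here is a sketch of the `saveNewImageOnError` idea from the screenshot section. It is my own example rather than code from the article's repo, and the exact placement of the option (next to `transparency` in the fifth argument) is an assumption worth checking against puppeteer-screenshot-tester's README:

```js
const ScreenshotTester = require('puppeteer-screenshot-tester');

test('screenshot testing with auto-updated reference image', async () => {
  const tester = await ScreenshotTester(
    0.1,   // threshold of difference to raise an error
    false, // anti-aliasing
    false, // ignore colors
    {
      ignoreRectangles: [[650, 300, 700, 200]],
      includeRectangles: [[300, 200, 1100, 1100]],
    },
    {
      transparency: 0.5,
      // assumption: this flag lives in the same options object as transparency;
      // on a mismatch the test still fails, but the stored image is replaced,
      // so the next run compares against the new UI and passes
      saveNewImageOnError: true,
    }
  );
  const result = await tester(page, 'test-screenshot', { fullPage: true });
  expect(result).toBe(true);
});
```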