There is often a need to check whether your production system is working as expected. This is where sanity checks come in handy.
We will implement the project with a serverless architecture on the AWS cloud platform.
A year ago, I wrote an article about testing with jest-puppeteer, and at the time it was the best tool for the job, in my opinion. But today I'm glad to announce that its successor has completed the maturation process and is ready to jump into your production codebases.
Let's welcome Playwright!
The essential difference is that Puppeteer was conceived as an automation framework, while Playwright was conceived as a testing framework. The other differences stem from this fact.
Moreover, Playwright is built by the same core developers who created Puppeteer. I'm convinced they have a ton of relevant experience and have avoided the pitfalls they faced with Puppeteer.
1. Playwright covers all three modern browser engines, so you write your tests once and they run in every browser (see the config sketch after this list).
2. Playwright now has its own test runner, so you don't need Jest or anything of the kind.
3. Built-in screenshot capturing and video recording.
4. Overall it's far more stable and usable.
5. The Playwright community is great. I've reported several bugs, and they were all fixed really quickly.
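For example, a minimal playwright.config.js along these lines (just a sketch; adjust the project list and options to your needs) makes the same test files run against all three engines:
// playwright.config.js — run every test against Chromium, Firefox and WebKit
const { devices } = require('@playwright/test');

module.exports = {
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
};
You can then run a single engine with npx playwright test --project=firefox, or all of them by default.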
First, we need to create our package.json file.
{
  "name": "playwright_tests",
  "version": "1.0.0"
}
Then we need to install Playwright itself and the supported browser engines.
npm i @playwright/test --save
# install supported browsers
npx playwright install
As a side note, you can install only the browser engines you need, or even install custom ones.
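For example, if you only run your tests in Chromium, you can save disk space and install just that engine:
npx playwright install chromium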
Now let's create a tests folder and put first.test.js there. The file should have .test.js in its name to be discovered by the test runner.
tests/first.test.js
const { test, expect } = require('@playwright/test');

test('top story navigation', async ({ page }) => {
  await page.goto('https://hackernoon.com/');
  // search for "Top Stories" text then filter down to visible
  await page.click('text=Top Stories >> visible=true');
  await expect(page).toHaveURL(
    'https://hackernoon.com/tagged/hackernoon-top-story',
  );
  await expect(page).toHaveTitle(
    "#hackernoon-top-story stories | Hacker Noon",
  );
});
The code is simple and mostly self-descriptive.
Let's run our test case:
aleksandr@aleksandr-desktop:~/work/pw_test$ npx playwright test
Running 1 test using 1 worker
✓ tests/first.test.js:3:1 › top story navigation (17s)
Slow test: tests/first.test.js (17s)
1 passed (17s)
I highly recommend opening the official docs and playing with the CLI flags.
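A few flags I find handy (double-check the docs for your Playwright version):
npx playwright test --headed         # run with a visible browser window
npx playwright test --workers=4      # run test files in parallel
npx playwright test --reporter=list  # switch the console reporter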
Then we need to wrap our previous efforts into a container.
./Dockerfile
FROM mcr.microsoft.com/playwright:v1.14.1-focal
WORKDIR /sanity-checks
# Copy the dependency manifests first so the npm install layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install
# Copy the rest of the project (tests, configs)
COPY . .
CMD ["npx", "playwright", "test"]
./.dockerignore
node_modules
infra
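Before wiring anything up in AWS, it's worth checking that the image builds and the tests pass inside the container locally (the image tag here is arbitrary):
docker build -t playwright_tests .
docker run --rm playwright_tests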
I want to emphasize that Playwright publishes a bunch of its own Docker images, just waiting for us to use them.
Now we need to run the container in AWS. The easiest way to do so is Fargate: a serverless solution with all the benefits of the paradigm.
I also suggest using the nifty Terraform. This tool gives us the opportunity to define our infrastructure as code, so we don't need to dig through dozens of service consoles to understand which resources we use.
Let's install Terraform on your OS.
Mac:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
brew install awscli
brew update
Linux:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl awscli
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
Windows:
choco install awscli
choco install terraform
Verification of installation:
aws --version
terraform -help
Then run the following and provide your AWS credentials.
aws configure
Then you need to manually create an S3 bucket. In my case it's terraform-state-playwright.
Terraform will store information about our AWS infrastructure in this bucket, so we can work with it from different machines simultaneously and don't need to worry about data loss.
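If you prefer the CLI to the console, creating the bucket looks roughly like this (bucket names are globally unique, so use your own); enabling versioning is optional but gives you a history of state files:
aws s3 mb s3://terraform-state-playwright --region us-west-2
aws s3api put-bucket-versioning --bucket terraform-state-playwright --versioning-configuration Status=Enabled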
Next, create infra/init.tf and point the S3 backend at the bucket you just created.
infra/init.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
  backend "s3" {
    region  = "us-west-2"
    encrypt = true
    bucket  = "YOUR BUCKET NAME"
    key     = "playwright.tfstate"
  }
}
provider "aws" {
  region = "us-west-2"
}
Then run terraform init from the infra directory to download the AWS provider and initialize the S3 backend:
terraform init
Now let's begin creating the actual AWS configs.
First, you need to add networking. We'll use your default VPC and the default subnets in which our Fargate container will run.
infra/vpc.tf
resource "aws_default_vpc" "default" {}
resource "aws_default_subnet" "default_az1" {
availability_zone = "us-west-2a"
}
resource "aws_default_subnet" "default_az2" {
availability_zone = "us-west-2b"
}
Next, we need to create the ECS configuration itself.
infra/ecs.tf
resource "aws_ecr_repository" "playwright_tests" {
name = "playwright_tests"
}
resource "aws_ecs_cluster" "playwright_tests" {
name = "playwright_tests"
tags = {
Project = "playwright_tests"
}
}
resource "aws_cloudwatch_log_group" "playwright_tests" {
name = "/ecs/playwright_tests"
}
resource "aws_ecs_task_definition" "playwright_tests" {
family = "playwright_tests"
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
container_definitions = jsonencode([
{
logConfiguration: {
logDriver: "awslogs",
options: {
awslogs-group: aws_cloudwatch_log_group.playwright_tests.name,
awslogs-region: "us-west-2",
awslogs-stream-prefix: "ecs"
}
},
image: "${aws_ecr_repository.playwright_tests.repository_url}:latest",
name: "playwright_tests",
}
])
network_mode = "awsvpc"
requires_compatibilities = [
"FARGATE"
]
cpu = "2048"
memory = "4096"
tags = {
Project = "playwright_tests"
}
}
And the IAM roles for our config:
infra/ecs_iam.tf
data "aws_iam_policy_document" "assume_role_policy" {
statement {
sid = "STSassumeRole"
effect = "Allow"
actions = [
"sts:AssumeRole"
]
principals {
type = "AWS"
identifiers = [
"*"
]
}
}
}
data "aws_iam_policy" "ecs_task_execution_role_policy" {
arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_iam_role" "ecs_task_execution_role" {
name = "ecs-task-execution-role"
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}
resource "aws_iam_role_policy_attachment" "ecs_execution_role_attachment" {
role = aws_iam_role.ecs_task_execution_role.name
policy_arn = data.aws_iam_policy.ecs_task_execution_role_policy.arn
}
Next, we will create an EventBridge rule which will trigger our Fargate container every 5 minutes.
infra/event_bridge.tf
resource "aws_cloudwatch_event_rule" "trigger_playwright_tests_cron_event" {
name = "trigger_playwright_tests_cron_event"
schedule_expression = "rate(5 minutes)"
tags = {
Project = "playwright_tests"
}
}
resource "aws_cloudwatch_event_target" "state_machine_target" {
rule = aws_cloudwatch_event_rule.trigger_playwright_tests_cron_event.name
arn = aws_ecs_cluster.playwright_tests.arn
role_arn = aws_iam_role.role_for_event_bridge.arn
ecs_target {
task_count = 1
task_definition_arn = aws_ecs_task_definition.playwright_tests.arn
launch_type = "FARGATE"
network_configuration {
subnets = [
aws_default_subnet.default_az1.id,
aws_default_subnet.default_az2.id
]
assign_public_ip = true
}
}
}
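As a side note, schedule_expression also accepts cron syntax, so you could, for instance, run the checks every 5 minutes only during weekday working hours:
schedule_expression = "cron(0/5 8-18 ? * MON-FRI *)"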
and the corresponding IAM role
infra/event_bridge_iam.tf
resource "aws_iam_policy" "policy_for_event_bridge" {
name = "policy_for_event_bridge"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect: "Allow",
Action: [
"ecs:RunTask",
"ecs:StopTask",
"ecs:DescribeTasks"
],
Resource: "*"
},
{
Effect: "Allow",
Action: [
"events:PutTargets",
"events:PutRule",
"events:DescribeRule"
],
Resource: "*"
},
{
Effect: "Allow",
Action: [
"iam:PassRole",
],
Resource: "*"
},
]
})
}
resource "aws_iam_role_policy_attachment" "event_bridge_attach" {
role = aws_iam_role.role_for_event_bridge.name
policy_arn = aws_iam_policy.policy_for_event_bridge.arn
}
resource "aws_iam_role" "role_for_event_bridge" {
name = "role_for_event_bridge"
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
tags = {
Project = "playwright_tests"
}
}
Now we are ready to apply everything. Just run:
terraform plan
to see the resources that will be created, and then:
terraform apply
and type yes when prompted.
The last step is to build and push your docker container.
Go to https://us-west-2.console.aws.amazon.com/ecr/repositories?region=us-west-2, choose your repo, and click
View push commands
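They boil down to authenticating Docker against ECR, then building, tagging, and pushing the image (with your own AWS account ID substituted):
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com
docker build -t playwright_tests .
docker tag playwright_tests:latest <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com/playwright_tests:latest
docker push <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com/playwright_tests:latest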
It's a really simple step. Now we have our image pushed to ECR. You can automate this process with something like CircleCI.
There we go! Now we have our sanity checks up and running.
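The test output lands in the CloudWatch log group we created, so with AWS CLI v2 you can tail it and watch the scheduled runs come in:
aws logs tail /ecs/playwright_tests --follow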
If you want to destroy all the resources you've created, just run:
terraform destroy
I don't want to bloat this article, so I suggest you implement notifications for test failures yourself. In my company, I used Step Functions for this purpose.
Also, I highly recommend the official Playwright documentation as a starting point for writing your own tests.
Github repo for this article: https://github.com/Xezed/playwright_tests
Thank you for reading.
Cheers! :)