
Top 6 Factors to Consider When Designing Automated Test Architecture

by Przemysław Paczoski, March 7th, 2021

Too Long; Didn't Read

POM, the Page Object Model, is a design pattern commonly used in test automation to avoid code duplication and improve test maintenance. A well-designed logging strategy shows the user, step by step, what happened in the test, and a proper logging system makes your work easier and speeds up troubleshooting. Parallel executions save time by distributing isolated tests across the available resources. Fast, reliable tests are the key to a stable product.


The beginning of automated tests in a project is easy and difficult at the same time. You can start smoothly - base architecture, simple tests, etc. - and it's going to work!

But... as the architecture grows, the decisions you've made can become very hard to build on. That's why you should consider a few things before you start. I want to show you the problems and their solutions.

Page Object Model

POM - the Page Object Model - is a design pattern commonly used in test automation to avoid code duplication and improve test maintenance.

This model contains classes that represent web pages. You break each page down into a corresponding page class, which holds locators (selectors) for the page's elements and methods that perform actions on them.

This covers the most common actions you'll be doing in test automation: find the element, fill in the value, click the button. For small scripts you can store selectors in the tests, but with growing suites you'll face many duplications - that's bad, you don't want them. If one of the selectors changes, you'll need to update every place you've used it.

And POM is the solution. You create a separate class that all of the tests use. Then, if any of the selectors change, you only have to update one place. It's faster for the person who updates it.
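
As a rough illustration, here's a minimal page object for a login page, assuming puppeteer (which the logging example below also uses); the class name, selectors, and methods are invented for this sketch:

import { Page } from 'puppeteer';

// A hypothetical page object: selectors and actions live in one place,
// so tests never touch raw selectors directly.
class LoginPage {
	private readonly emailInput = '#email';
	private readonly submitButton = '#submit';

	constructor(private page: Page) {}

	async fillEmail(email: string) {
		// Type the given value into the email field
		await this.page.type(this.emailInput, email);
	}

	async submit() {
		await this.page.click(this.submitButton);
	}
}

A test then reads as intent - loginPage.fillEmail(...), loginPage.submit() - and a changed selector is a one-line fix in this class.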

But there is another model for structuring automation tests, called App Actions. I've never tested it, so I can't recommend it, but there's a nice article about it - click!

Logging

You should use a logging system that gives the user (a developer or tester) enough information about what happened in the tests during execution, in a relatively short time. You'll run your tests on a remote machine, and if they fail, you need to know quickly what happened. Troubleshooting or reproducing failed tests with only a stack trace and the latest assertion message is very hard.

That's why you should design a logging system.

A well-designed strategy shows the user, step by step, what happened in the test. In the project that I worked on, we created a three-step strategy:

  • begin
  • info
  • end

To do this quickly, we've used decorators (TypeScript only).

For example (code based on puppeteer; note that TypeScript decorators attach to class methods):

class EmailPage {
	@logger
	getText(selector: string) {
		return page.$eval(selector, (el) => el.textContent);
	}
}

Result:

[BEGIN] - getText from: #email
[INFO] - getText result: user@example.com
[END] - getText from: #email

The assumption was simple: we want to know when the method starts, what the result is, and when it ends. We used the method name and its parameters as the identifier. That gives us well-structured logs, which help us know exactly what happened during test execution.
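
A minimal sketch of such a logger decorator, assuming experimentalDecorators is enabled in tsconfig.json (the exact wording of the log lines is illustrative):

// Wraps a class method so every call emits BEGIN/INFO/END log lines,
// using the method name and its arguments as the identifier.
function logger(_target: unknown, propertyKey: string, descriptor: PropertyDescriptor) {
	const original = descriptor.value;
	descriptor.value = async function (...args: unknown[]) {
		console.log(`[BEGIN] - ${propertyKey} from: ${args.join(', ')}`);
		const result = await original.apply(this, args);
		console.log(`[INFO] - ${propertyKey} result: ${result}`);
		console.log(`[END] - ${propertyKey} from: ${args.join(', ')}`);
		return result;
	};
	return descriptor;
}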

Also, you can feed these logs into your reporting system, which is described below. Another good idea worth mentioning is to write custom, detailed assertion messages. For example:

Expect: 5, Received: 8

That doesn't say much. Without context and knowledge of what the test is checking, it's hard to guess what happened - you'd need to check it manually. But:

We expect 5 elements on the listing, got 8.

This is easy to read and clearly says what happened. From the message alone, you can easily tell where the problem is. There are two more areas where detailed messages can be helpful:

  • timeout exceptions - e.g. internet connection problems,
  • architecture exceptions - e.g. DB connection or auth connection problems.
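
As a sketch, a custom message like that can be produced with Node's built-in assert module (the .listing-item selector is hypothetical, and the snippet lives inside an async test body):

import assert from 'assert';

// page.$$ (puppeteer) returns all elements matching the selector
const items = await page.$$('.listing-item');

// The third argument is the custom message shown when the assertion fails
assert.strictEqual(
	items.length,
	5,
	`We expect 5 elements on the listing, got ${items.length}.`,
);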

I believe that having a proper logging system makes your work easier and speeds up the troubleshooting process.

Test Data

The most common action in your tests is going to be filling values in with data, which you either need to have on hand or generate. It's one of the most time-consuming actions, but a necessary one.

Let's split the data into two types:

  • static data - the one you store in database/repository/somewhere,
  • dynamic data - the one you generate using tools like a faker, etc.

Using class patterns, you can design a system that handles the data for you: connections to a database, or dynamically generated sensitive data (e.g. from faker.js).
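
For instance, a minimal sketch of such a class, assuming faker.js for the dynamic part (the fields and values are invented for illustration):

import faker from 'faker';

// One place that owns test data: fixtures for static data,
// generators for dynamic data.
class TestData {
	// static data - kept in the repository (or fetched from a database)
	static readonly adminUser = {
		email: 'admin@example.com',
		password: 'top-secret',
	};

	// dynamic data - unique values generated for each test run
	static newUser() {
		return {
			email: faker.internet.email(),
			firstName: faker.name.firstName(),
		};
	}
}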

Parallel Executions

Parallelization is an automated process that launches your tests simultaneously. The goal is to save execution time by distributing tests across the available resources.

Consider a situation: you have a big test suite whose execution takes 50 minutes to complete - 50 tests, each taking roughly 1 minute. The tests are isolated (if not, you need to work on that first). Using 5 parallel executions, you can reduce the total testing time to 10 minutes.
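
How you enable this depends on your test runner; as one hypothetical example, if the suite runs on Jest, the worker count is a single config option:

// jest.config.ts - Jest spreads test files across up to 5 worker processes
export default {
	maxWorkers: 5,
};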

Fast, reliable tests are the key to a stable product.

One interesting out-of-the-box solution is Knapsack Pro. It helps you run your tests in parallel efficiently.

Deployment Pipeline

A deployment pipeline is a process that takes the code from version control and makes it available to users in an automated way. A team working on a project needs an efficient way to build, test, and deploy their work to the production environment. In the old-fashioned way this was a manual process with a few deployments per week; now it's a few per day or even per hour, with a set of automated tools doing it for us.

Imagine a problem: you have 50 tests and execution takes 50 minutes (without parallel executions). To test a fresh version of the code, the developers need to build the application locally and run the tests against it. The tests find an issue, so the developer fixes the code and does it all again. The hours and the nervousness grow. I'm pretty sure that after a few rounds they won't use the tests anymore. You don't want this.

The solution is to use an automated deployment pipeline. It's designed especially for cases like this.

Combining it with parallel executions, you'll create an efficient, fast strategy that defends your production with tests. Even better: if failed tests block the deployment, you'll never push broken code to real users. 🥳

Reports

Reporting is about documenting the process of test execution. A report should combine a summary for management and stakeholders with enough detail to give the developers feedback - especially when something fails.

Consider a failed execution. The tests run on a remote machine, and somehow you need to know what exactly happened: why they failed and what's broken. Digging through plain logs for that one failed test can be painful. A well-crafted report should cover that.

Each report could be different, but in my opinion, there are a few must-have metrics:

  • machine/environment name,
  • total number of running tests,
  • duration,
  • list of all test cases with logs (steps from the logging system) in a toggle,
  • test result (for every test case).
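
As a sketch, those metrics could be modelled like this (the shape and names are illustrative, not taken from any specific reporting tool):

// The data a single report covers: a run summary plus per-test details.
interface TestReport {
	machineName: string; // machine/environment name
	totalTests: number; // total number of tests run
	durationMs: number; // total duration of the run
	testCases: Array<{
		name: string;
		result: 'passed' | 'failed' | 'skipped';
		logs: string[]; // steps captured by the logging system
	}>;
}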

Have I mentioned failed test executions, and that it's crucial to know what happened? 🥳 Constantly looking at a deployment pipeline dashboard or searching through logs isn't efficient. A well-written report, combined with a Slack notification, can speed this process up a lot.

Conclusion

I've described six topics that are really worth considering in every mature test automation project. They solve problems you'll encounter on a daily basis. With some coding and research, you'll ease your work and make the architecture expandable. Using a few of these tricks, you'll speed up your troubleshooting process, and with automatically generated reports you'll be as transparent as possible with your tests.

I believe that this knowledge will help you at the beginning of your journey.

What's also good to consider:

  • base architecture - pattern to re-use base methods, create browsers,
  • video recordings - record the whole test session.

Have a nice day! 🥳

Previously published here.