Adding end-to-end (E2E) tests to your application from scratch is a lot of work. Building a complete test suite will be a long project, so it makes sense to prioritize wisely which use cases get covered first. In this article, I will demonstrate the testing approach I’ve been using successfully in a project I’ve been working on for the past 8 years.
Return on investment (ROI) is a measurement used to compare the value of different investments. Usually, it’s defined as:
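ROI = (gain from investment - cost of investment) / cost of investment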
In the case of E2E tests, we can understand the parts of the equation as follows: the gain is the value of the bugs and regressions a test catches before they reach users, and the cost is the effort needed to write and maintain the test.
Higher ROI means a better investment. To maximize the positive impact of test automation, it makes sense to start with the cases that have the highest ROI, meaning the biggest gain for the lowest cost of implementation and maintenance.
Especially when your team is new to testing, it’s good to start with the least challenging cases. This allows for small victories as your team learns how to test the application. Having some simple tests will also help you figure out how to integrate the tests into continuous integration and into the developers’ workflow.
A perfect example of a simple E2E test is a smoke test: a test that visits a page and checks that it renders without falling apart. In those tests, I usually only visit the given route and check that the title or some key button of the page is there.
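As an illustration, here is a minimal sketch of such a smoke test. It assumes Playwright as the test runner; the `/orders` route and the element names are hypothetical stand-ins for whatever your page actually shows.

```typescript
// Minimal smoke test: visit a route and check that a key element is visible.
// Assumes Playwright; the route, heading, and button names are made up.
import { test, expect } from '@playwright/test';

test('orders page renders', async ({ page }) => {
  await page.goto('/orders');

  // The page is considered "not falling apart" if its key elements show up.
  await expect(page.getByRole('heading', { name: 'Orders' })).toBeVisible();
  await expect(page.getByRole('button', { name: 'New order' })).toBeVisible();
});
```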
Even such a simple test can achieve quite a few things for you:
It checks whether the page works: it will fail on CI if you forget to commit some of the necessary changes or dependencies.
It forces you to lay the groundwork for more complicated tests: setting up a test user and providing basic data for tests (sketched below).
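Here is a minimal sketch of that groundwork, assuming Playwright fixtures; the `/api/test-users` endpoint and its payload are hypothetical placeholders for your own setup code.

```typescript
// Sketch of a fixture that provides a dedicated test user for E2E tests.
// Assumes Playwright; the /api/test-users endpoint is a hypothetical helper.
import { test as base } from '@playwright/test';

type TestUser = { email: string; password: string };

export const test = base.extend<{ testUser: TestUser }>({
  testUser: async ({ request }, use) => {
    // Create (or reuse) a test user through a backend helper endpoint.
    const response = await request.post('/api/test-users', {
      data: { role: 'customer' },
    });
    await use((await response.json()) as TestUser);
  },
});
```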
In my project, I keep a policy of requiring at least a smoke test for every route we create. This helps to keep the other developers aware of E2E as well.
If you get the configuration of your E2E suite right, then as a side effect of visiting all the routes, you get a collection of screenshots of your application. On a few occasions, I’ve used those to check how a given screen looked before I messed up the UI.
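A sketch of how the per-route policy and the screenshot side effect can be combined, again assuming Playwright; the route list and expected titles are hypothetical.

```typescript
// One smoke test per route, with a screenshot kept as a side effect.
// Assumes Playwright; the routes and expected titles are made-up examples.
import { test, expect } from '@playwright/test';

const routes = [
  { path: '/', title: 'Dashboard' },
  { path: '/orders', title: 'Orders' },
  { path: '/inventory', title: 'Inventory' },
];

for (const { path, title } of routes) {
  test(`smoke: ${path}`, async ({ page }) => {
    await page.goto(path);
    await expect(page.getByRole('heading', { name: title })).toBeVisible();

    // Side effect: a screenshot of every route, handy for later comparisons.
    await page.screenshot({ path: `screenshots/${title}.png`, fullPage: true });
  });
}
```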
The next important topic to cover is the critical workflows of the application: the flows users go through to get the core value out of it, such as signing in, checking out, and paying in an e-commerce application.
All of these are things that would seriously impact the business if a regression in them were deployed to production. Inside those workflows, we can prioritize as follows: first the happy path, then the most common error cases, and finally the bugs that have already slipped through to production.
A user is on the happy path when everything goes as planned: there is enough stock for the product, the payment works perfectly, and so on. The happy path covers the cases when the user is happy and the company makes money, and we definitely want our software to work perfectly in those cases.
Some edge cases are so common that they affect the journey of many users. Based on my experience as a consumer, good examples are mistakes in credit card numbers or rejected transactions. These kinds of critical error cases are good candidates for covering with E2E tests.
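As an illustration, here is a sketch of such an error-case test, assuming Playwright; the `/checkout` route, the field labels, and the always-declined test card number are hypothetical stand-ins for whatever your (sandbox) payment provider offers.

```typescript
// Error-case sketch: a declined card should surface a clear message
// instead of breaking the checkout. Assumes Playwright; the selectors and
// the test card number are hypothetical.
import { test, expect } from '@playwright/test';

test('checkout shows an error when the card is declined', async ({ page }) => {
  await page.goto('/checkout');

  // A card number the sandbox payment provider always declines.
  await page.getByLabel('Card number').fill('4000 0000 0000 0002');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // The user should see an actionable error, not a broken page.
  await expect(page.getByText('Your card was declined')).toBeVisible();
});
```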
If a bug makes it all the way to production, it’s proof that the team’s quality assurance process is not sufficient in that area. Thus, bugs that have already appeared in production are perfect candidates to cover with an additional automated test, no matter how complicated or uncommon the situation is.
I’ve been trying to follow a policy of covering all regressions from production with E2E tests; below is a sketch of what such a regression test can look like.
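In this sketch, the issue ID, route, seeded state, and assertions are hypothetical; the point is that the test name links back to the original bug report, so its context is easy to find.

```typescript
// Regression-test sketch: the test name references the original bug report.
// Assumes Playwright; BUG-1234, the route, and the seeded state are made up.
import { test, expect } from '@playwright/test';

test('BUG-1234: discount is kept after changing quantities in the cart', async ({ page }) => {
  await page.goto('/cart?fixture=discounted-order'); // hypothetical seeded state
  await page.getByLabel('Quantity').fill('3');

  // The total should still reflect the discount that was lost in the bug.
  await expect(page.getByTestId('order-total')).toContainText('-10%');
});
```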
Another important thing to keep in mind is features that are used only occasionally. In the application I work on, we have a set of features related to inventory in shops: something that is used intensively, but only during a very short period every year. If your application has a similar important but sporadically used part, it’s worth the extra effort to make sure both the users and your team avoid unpleasant surprises when the season comes.
Developing and maintaining a suite of E2E tests is about keeping a balance and adapting to changing circumstances. You certainly need the most important parts of your application covered, but the relative importance of features can change over time, and your test priorities should reflect that as well.
Congratulations on learning about E2E—it’s a great investment for the long-term health of your project and a good way of catching issues before they affect users (and stress your team).