Imagine you're a developer, and a tester brings you a bug that was found during regression. You want to fix this bug, so you ask for a ticket to be created. You’re already imagining how you’ll pick it up, link pull requests to it, and add estimates so that the product manager won’t have any questions.
Some time passes, and a new ticket appears, but inside, there are only a couple of lines and a screenshot. With a sigh, you try to reproduce the bug using this information, but there’s no error. You try several times, but in the end, you write to the tester that the bug cannot be reproduced, and a new round of clarifications begins.
You’re spending time that could have been used to work on new tasks, fix other bugs, or even ~~watch anime~~ refactor the code.
My name is Evgeny Domnin; I’m a QA engineer, and I’ll try to share my view on what makes a good bug report. Sorry for the long introduction; let’s get started.
Try to answer three questions in the ticket title: where did the bug occur, what went wrong, and under what conditions?
An experienced developer will only need to glance at the title to understand the issue. For example:
Login Page: The field is not highlighted when entering an incorrect password
I’ve often seen testers forget to specify in the ticket which environment the issue occurred in. This is especially relevant in UI-related tickets, where the website address or network request isn’t visible. Always specify this. If there’s a separate field in the ticket, great, put it there. If not, mention it in the reproduction steps, for example:
Log in to the staging environment with an admin account.
Speaking of steps...
One of the most important sections is the bug reproduction instructions. I’ll highlight two parts to focus on: the formatting of the steps (visual) and the content (data inside).
Maintain structure
There are different variations of bug reports, but classically, they should contain the following sections: a title, the environment, reproduction steps, the expected result, the actual result, and attachments (screenshots, requests, logs).
Use this structure, and stick to it always. This is one of those cases where uniformity will help organize your thoughts when describing the issue.
Use a numbered list
Break down the steps using a numbered list. Sometimes, testers write detailed descriptions, but as a continuous block of text. Don’t do this. It will be much easier for everyone to read if the steps are separated.
Whenever possible, write without grammatical errors.
Now, let's move on to the content of these steps.
You don’t need to break down every single action into a separate step if it’s not crucial for reproducing the bug—this is hard to read and use in practice. Don’t be afraid to include multiple actions in one step. What do I mean?
Bad:

1. Go to test.com/login
2. Click on the login field
3. Enter the login
4. Click on the password field
5. Enter the password

Good:

1. Go to test.com/login and log in with any account
We’re not breaking down the steps into things the developer will naturally do while following the standard flow. When I was starting out, I used to think that every single action needed its own step, but it’s not necessary.
Avoid ambiguity
In each step, specify exactly which request to check, and name the exact button to press (especially when several buttons share the same name).
Include test data
Provide login data if the error is related to your account, and don’t hesitate to include test payloads that help reproduce the bug.
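For API-related bugs, the most useful test data is the exact payload that triggers the error. A minimal sketch of preparing one for a ticket (the endpoint is implied and every field name and value here is a made-up example, not a real API contract):

```python
import json

# Hypothetical payload that reproduces the bug; all field names and
# values are illustrative placeholders, not a real account's data.
payload = {
    "login": "qa_test_user",   # dedicated test account, never a real user's
    "remember_me": True,
    "locale": "en-US",
}

# Paste the exact JSON into the ticket so the developer can replay it
# verbatim instead of guessing which fields matter.
ticket_snippet = json.dumps(payload, indent=2, sort_keys=True)
print(ticket_snippet)
```

Attaching the serialized payload (rather than a prose description of it) removes a whole round of clarifying questions.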
Review your steps again
Sometimes, you write the steps immediately after encountering the bug, only to discover later that a step needed for full understanding is missing, or that the bug can’t be reproduced from them. In that case, go back and refine the steps until they reproduce the issue reliably.
A separate section is the expected result, where we describe (unsurprisingly) the result that is expected when the steps are followed. There aren’t many special recommendations here other than that this section must exist—the developer needs to understand what behavior the functionality should lead to. Phrases like “everything should work fine” don’t cut it—write the specific behavior.
Here, we write what actually happened when we followed the steps. Specificity is important here too. Don’t just write “everything broke” (even though that’s probably what happened). Describe the indicators that show everything broke. For example:
A 500 error is returned on the `GET /accounts` request, and the UI is blocked. The user cannot exit the page or click on elements. Refreshing the page triggers the request again and leads to the same error.
In other words, describe the actual effect and how it impacts the user’s flow.
This is a separate section worth mentioning. It’s where you attach additional information that describes the bug. I know some developers who aren’t fans of reading reproduction steps and go straight to the actual result and additional materials that explain it.
It’s better to see it once than hear about it a hundred times. This is a great way to visually show what’s happening and where. Always try to attach a screenshot.
If there’s a request where the error occurred, it should always be included in the ticket. However, requests contain many different parameters. At a minimum, include the URL and the HTTP method: `GET`, `POST`, `TRACE`, `OPTIONS`, etc. There are many methods, and requests to the same URL can differ only by method, so don’t forget to specify it in the ticket.
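The same URL really can behave completely differently depending on the method, which is why a ticket that names only the path is ambiguous. A self-contained sketch with a toy local server (the `/accounts` path and its behavior are invented for illustration):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy server: one URL, two methods, two different outcomes --
# the "bug" here only reproduces on POST, never on GET.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"GET /accounts -> list")

    def do_POST(self):
        self.send_response(500)  # hypothetical server-side failure
        self.end_headers()

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/accounts"

ok = urllib.request.urlopen(url).read()  # GET succeeds
try:
    urllib.request.urlopen(url, data=b"{}")  # passing data makes it a POST
    post_status = 200
except urllib.error.HTTPError as e:
    post_status = e.code  # POST fails with 500

print(ok, post_status)
server.shutdown()
```

A ticket that says only “/accounts returns 500” would send the developer chasing the healthy GET; “POST /accounts returns 500” points straight at the broken path.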
Sometimes, errors are found in the console, and these can be added to the ticket. Maybe you’ve already been doing this, but I’ll just note that a large block of text can always be saved as a `.log` file and attached to the ticket. This improves the readability of both the logs and the ticket itself.
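Saving a console dump to a file takes a couple of lines. A sketch with stand-in log content (the error text and file name are placeholders):

```python
from pathlib import Path

# Stand-in for a long console dump; pasting 100 lines like this inline
# would bury the rest of the ticket.
console_errors = "\n".join(
    f"[12:00:{i % 60:02d}] TypeError: cannot read properties of undefined"
    for i in range(100)
)

# Write it to a .log file and attach the file to the ticket instead.
log_path = Path("console-errors.log")
log_path.write_text(console_errors, encoding="utf-8")
print(f"Saved {len(console_errors.splitlines())} lines to {log_path}")
```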
This is all well and good, but where will we find the time to make everything look this nice? Deadlines are looming, the manager is getting angry, there’s a blocker in production, and I’m being asked to sit and write everything out? I’ll just message the developer directly, and that’s it.
This is a logical argument that may arise. I don’t harbor any illusions about the perfect world of a tester, where ample time is allocated for testing, everything goes by process, and thorough, high-quality documentation is maintained. I understand: often it’s a time crunch, burning deadlines (and burning eyes), and a race to get everything done in time.
Small errors tend to pile up, take more time due to context switching, and lead to poor practices. If we start gradually implementing improvements and monitoring how they work, we’ll be able to create a process that’s more stable, standardized, and predictable for all participants.
The project manager will understand what’s happening with the product without needing to pull everyone for updates, the developer won’t have to ask the tester for clarification on reproduction conditions and won’t pull them away from testing, and stakeholders will, in turn, have a clear view of the product’s progress.
This article is more aimed at beginners who are starting or have already begun their path in testing. I believe that small steps lead to big results, and the recommendations in this article will lead to higher-quality bug reports.
If you have any questions, suggestions, disagreements, or complaints, feel free to leave them in the comments—I’m interested in hearing your opinion!