Working in QA, I often hear a voice in my head, “Are you sure you've checked everything?” Sometimes it’s a helpful nudge, but if left unchecked, it starts to become a problem. Below, I’ll talk about this pesky Inner Bug and how it manifests.
In this article, I want to share the thoughts and insights I’ve gained from studying this phenomenon. I hope you find them useful, and I’d love to hear your perspective in the comments. After all, feedback is one of the best ways to see myself from the outside and improve.
One time, after switching between tasks and already completing testing, I decided to double-check everything, just in case. That’s when I noticed a small but crucial detail. The task didn’t include the formula for a calculation that the new function was supposed to perform. Curious, I reread both the task description and the epic and, to my surprise, the calculation formula wasn’t specified anywhere. So, how had I been calculating it?
It's embarrassing to admit, but I'd been using and verifying the calculations with a formula from a different task. While the two tasks were related, the formulas were supposed to function independently. Realizing this oversight, I quickly requested the correct calculation rules, re-tested the task, and discovered that the developer had made the same mistake. They, too, had used the formula from the other task.
Once I've gone through the test plan, this little troublemaker starts to throw out ideas like, “What if the client uses a larger font size or an outdated OS?”
This is incredibly helpful when testing is done, the feature is live, and suddenly an error pops up. After it's identified and fixed, I check my test records to determine whether I missed the issue during testing or if it was introduced later in production. Sometimes, I spot the bug hiding in plain sight in my screenshots or recordings. When this happens, I dig into why I overlooked it and why there wasn’t a test case to catch it.
This sometimes happens after a screw-up, but there are also times when the Little Bug's voice pipes up unprovoked. On a few occasions, it wouldn't leave me alone even after I went to bed, and I'd end up making notes to myself about what else I should check.
This is a direct consequence of the first point: anxiety breeds the strangest and most nightmarish scenarios in my head. In the moment, they seem critical, but looking back, they often turn out to be something like “What if a thrush whistles in a pine tree under the moon?”
Sometimes, even after moving a task to the next status and starting something new, thoughts about those test cases haunt me and prevent me from concentrating on the new task. In such cases, it can be difficult to switch off the anxious Checker-Bug.
One thought that popped into my head when I was writing about the downsides – and something I repeat like a mantra – is this: Exhaustive testing is a myth. There will always be bugs.
No matter how thorough you are, it’s impossible to predict every combination of actions and scenarios, which means you can’t catch every single bug before your users do.
Especially in a constantly changing world.
This is something you just have to accept and move on.
What helped me come to terms with this was analyzing the root causes of production bugs – what some people call postmortems. It’s when you talk with everyone involved to figure out how the bug happened and what could be done to prevent it from happening again.
And I learned that more often than not, serious defects were caused by simple oversights: sometimes test cases with empty values weren’t checked, leading to certain products not displaying in the store; other times, localization was missed, resulting in a blank screen title.
Yet the sky didn't fall. Work continued, and we all paid closer attention to the areas where we'd slipped up before.
One way I used to quiet the pesky Checker-Bug was by applying test design techniques: decision tables and state transition diagrams.
These help you visualize the application logic and get a clearer picture of the possible test cases, which means you can be more confident that you won’t overlook them.
If you need a refresher: in a decision table, the conditions go into rows and the rules (combinations of condition values) into columns. Once we've specified the value of every condition under each rule, we fill in the expected outcome for that rule.
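To make that a bit more concrete, here's a minimal sketch of how a decision table can be turned into an automated check. Everything in it is made up purely for illustration: the discount rules, the condition names, and the calculate_discount function. The point is the shape of the technique, not the domain.

```python
import pytest

# Hypothetical function under test: returns the discount percentage
# for an order, based on two conditions.
def calculate_discount(is_loyal_customer: bool, order_total: float) -> int:
    if is_loyal_customer and order_total >= 100:
        return 15
    if is_loyal_customer:
        return 5
    if order_total >= 100:
        return 10
    return 0

# Decision table: each entry is one rule, i.e. a combination of
# condition values plus the expected outcome for that combination.
DECISION_TABLE = [
    # is_loyal_customer, order_total, expected_discount
    (True,  150.0, 15),  # Rule 1: loyal customer, large order
    (True,   40.0,  5),  # Rule 2: loyal customer, small order
    (False, 150.0, 10),  # Rule 3: new customer, large order
    (False,  40.0,  0),  # Rule 4: new customer, small order
]

@pytest.mark.parametrize("is_loyal, total, expected", DECISION_TABLE)
def test_discount_follows_decision_table(is_loyal, total, expected):
    assert calculate_discount(is_loyal, total) == expected
```

The nice side effect is that a missing combination is easy to spot just by reading the table, which is exactly what calms the Checker-Bug down.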
A state transition diagram is used when we have an object with different states, and the object changes its state based on certain conditions. It's not always the right fit, but it was super helpful when I worked on developing an accounting service; the objects in those diagrams were things like reports, applications, or digital signatures.
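And here's a similar sketch for state transitions: a tiny, invented report workflow where the allowed transitions live in one explicit map, so tests can verify both that valid chains work and that invalid jumps are rejected. The states, the events, and the Report class are hypothetical; they're not taken from the accounting service I mentioned.

```python
import pytest

# Hypothetical report workflow: draft -> submitted -> approved / rejected.
# The keys are (current_state, event) pairs, the values are the next state.
VALID_TRANSITIONS = {
    ("draft", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "rejected",
    ("rejected", "submit"): "submitted",  # a rejected report can be resubmitted
}

class Report:
    def __init__(self):
        self.state = "draft"

    def apply(self, event: str) -> None:
        key = (self.state, event)
        if key not in VALID_TRANSITIONS:
            raise ValueError(f"cannot '{event}' from state '{self.state}'")
        self.state = VALID_TRANSITIONS[key]

def test_valid_transition_chain():
    report = Report()
    for event, expected_state in [("submit", "submitted"), ("approve", "approved")]:
        report.apply(event)
        assert report.state == expected_state

def test_invalid_transition_is_rejected():
    report = Report()  # still in "draft"
    with pytest.raises(ValueError):
        report.apply("approve")  # approving a draft should not be allowed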
Another remedy for that pesky Inner Bug actually found me all on its own. It turned out to be peer review of test cases and communication after screw-ups.
Simple, maybe even obvious, but it works like a charm.
Another way to calm my brain down was by assessing effectiveness and risks. When the Inner Bug started whispering in my ear, “Check a few more test cases,” I would remember my team lead and ask three questions:
Is the check really necessary? Yes, sometimes it makes sense to test on several OS versions, with different language settings, dark and light themes, increased font size, and so on, but more often than not, these checks are unnecessary.
Imagine you find a bug while performing such tests: What priority would it have? Due to the specific steps to reproduce, even a crash could be assigned a minor priority.
How long will these checks take? Five to ten minutes – not a big deal, but even that time isn't always available. In that time, you could read the description of an average-sized task.
Like any tool, that nagging Inner Bug of yours can be a blessing or a curse. It often takes experience and time to learn how to use something effectively. And I hope this article helps you tame your inner critic faster, save your nerves, and boost your self-confidence.
I just want to give you some support, and maybe instead of battling it out until you’re exhausted, you’ll find a way to work with this little beastie and make it your ally.