Not everything that can be automated should be. As QA engineers and leads, we often face the same question: should we automate this, or handle it manually? And the truth is – there’s no single answer. The right choice depends on **context, stability, and value**.

Automation is powerful when it accelerates learning and confidence, not just when it replaces human effort. But manual testing is equally valuable when it provides context, empathy, and deep insights that automation can’t reach. After leading hybrid teams of manual and automation testers, I built a simple framework to guide these decisions.

🧩 My “Should We Automate It?” Decision Matrix

When we’re unsure whether to automate a scenario, my team runs through this quick self-check. If most answers are **“Yes”**, we automate. If **“No”** dominates, it stays manual until it stabilizes or proves recurring value.

| 💡 Question to Ask | YES → Lean Towards Automation | NO → Keep Manual for Now |
| --- | --- | --- |
| Is this scenario **repeated often** (e.g., regression, smoke tests)? | ✅ It’s worth automating – it’ll save time long-term. | 🚫 It’s a one-off or rare flow; manual testing is fine. |
| Is the **feature stable** (logic and UI rarely change)? | ✅ Good candidate – stable behavior means low maintenance. | 🚫 Frequent changes = automation debt. |
| Do we have **clear acceptance criteria** or expected results? | ✅ Perfect – automation thrives on predictability. | 🚫 Too ambiguous? Manual exploration will find more insights. |
| Would automation provide **faster feedback** than manual testing? | ✅ Yes – prioritize it for quick CI/CD validation. | 🚫 No – setup might take longer than manual checks. |
| Can this be **reliably automated** with existing tools? | ✅ Great – low technical complexity. | 🚫 Hard to automate (CAPTCHA, payments, animations) – stay manual. |
| Will it **increase confidence** before releases? | ✅ Valuable regression safety net. | 🚫 Not critical to overall quality confidence. |
| Do we have **ownership and the capacity** to maintain it? | ✅ Go for it – sustainable automation. | 🚫 If not maintained, automation adds risk, not value. |

🧩 Pro tip: If you answer “Yes” to 4 or more, it’s a good automation candidate. If fewer than 4 – keep it manual, monitor its stability, and revisit later.

🧱 The Testing Pyramid: Finding the Right Balance

Another concept that influences automation decisions is the **Testing Pyramid**. It reminds us that not all automated tests are equal – some are fast and reliable, while others are slow and costly.

At the base, we have **unit tests** and **static analysis** – fast, cheap, and easy to automate. As we move up to **integration** and **end-to-end (E2E)** tests, the cost and maintenance effort increase significantly.

E2E automation is valuable, but it’s also:

🐢 **Slower** – involves UI, databases, and APIs.
🧩 **Fragile** – small changes can break multiple tests.
💸 **Expensive** – requires setup, infrastructure, and constant debugging.
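As an aside, the decision matrix’s pro tip boils down to a simple scoring rule: count the “Yes” answers and automate at four or more. A minimal sketch in Python – the `ScenarioProfile` fields and `should_automate` helper are purely illustrative names, not part of any real tool:

```python
# Hypothetical sketch of the "Should We Automate It?" matrix as code.
# The seven questions and the >= 4 threshold come from the matrix above;
# the class and function names are made up for illustration.
from dataclasses import dataclass

@dataclass
class ScenarioProfile:
    repeated_often: bool        # regression/smoke-level repetition?
    feature_stable: bool        # logic and UI rarely change?
    clear_criteria: bool        # acceptance criteria are unambiguous?
    faster_feedback: bool       # quicker than a manual check in CI/CD?
    reliably_automatable: bool  # no CAPTCHAs, payments, animations?
    increases_confidence: bool  # meaningful regression safety net?
    maintainable: bool          # someone owns and can maintain it?

def should_automate(profile: ScenarioProfile, threshold: int = 4) -> bool:
    """Return True when at least `threshold` answers are 'Yes'."""
    yes_count = sum(vars(profile).values())  # True counts as 1
    return yes_count >= threshold

# A stable smoke-test flow scores 6/7 -> automate it.
smoke_flow = ScenarioProfile(True, True, True, True, True, True, False)
print(should_automate(smoke_flow))  # True

# A volatile, ambiguous one-off scores 1/7 -> keep it manual for now.
one_off = ScenarioProfile(False, False, False, False, False, True, False)
print(should_automate(one_off))  # False
```

In practice the threshold is a team decision, not a law – as the billing example below shows, business risk can override the score in either direction.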
That’s why teams need a **balance**: automate stable, repetitive flows and keep **manual or exploratory testing** for learning, usability, and high-risk areas. Sometimes, **one well-structured manual session** provides more insight than ten brittle automated tests.

🧠 Example from Real QA Practice

One of the most controversial areas we deal with is **Billing and Plans**. It’s a feature that’s both **business-critical** and **highly volatile** – prices, currencies, and plan configurations change often, sometimes even daily.

From a risk perspective, this makes it one of the hardest modules to test:

Frequent updates cause constant UI and API changes.
Logic for upgrades, downgrades, price changes, and trials has multiple dependencies.
And yet, every release depends on it being correct – even a small bug can immediately affect real users and revenue.

So, according to our decision matrix, **Billing** would normally look like a “manual-first” area: unstable, complex, and expensive to maintain in automation. But in reality, we made the **opposite choice**. We prioritized it for automation **despite instability** because the **business risk** outweighed the **maintenance cost**.

We focused on:

Automating **core payment paths** (subscription start, renewal, downgrade).
Using **AI-assisted regression prompts** to detect pricing inconsistencies early.
Keeping **edge cases** manual for flexibility and context.

This hybrid approach allows us to:

React fast to business logic changes.
Maintain confidence in daily releases.
Still learn manually where automation can’t keep up.

So even when the matrix says “manual,” the context may say “critical – automate anyway.” That’s why frameworks should guide us, but **not replace human judgment**.

⚖️ Visual Summary

Testing Pyramid Reminder
Base → Fast & Stable (Unit, Integration) → Automate.
Top → Slow & Costly (E2E, Exploratory) → Balance with Manual.

**Automate when:** stable, repeatable, measurable.
**Stay manual when** learning, exploring, or unstable.
**Ask first:** “Will automation bring clarity – or complexity?”

✅ Key Takeaways

Don’t automate everything – **automate what brings clarity and speed**.
**Manual testing isn’t “less valuable”** – it’s how we explore and learn.
Use the decision matrix to evaluate value vs. effort.
**E2E tests are powerful but costly** – balance them with faster layers.
Revisit automation decisions often; context changes over time.

💬 How do you decide when to automate?
I’d love to hear how your teams draw the line between automation and manual testing – do you use a framework or rely on instinct?