Delivering quality isn’t easy.
We all make mistakes in the software development world. And with the pressure on to deliver high-quality software at speed, and developers and testers alike contending with lightning-quick SDLCs, the ‘need for speed’ can mean that quality suffers.
But here’s the issue: when mistakes are made, it’s our customers who feel the impact.
And, in an oversaturated tech world where competition is sky-high, a buggy app simply won’t cut it. That’s the long and short of it.
When faced with a critical bug, as we all know, customers will have no qualms about deleting an app. In fact, according to HelpShift, around 80% of apps are deleted after just one use.
You heard that right: just one.
This means that your app quality can’t just be ‘ok’. It needs to blow away customers and keep them coming back for more. The way to do so is by implementing a world-class QA strategy.
In my book, Leading Quality, I spoke to top tech teams from across the globe to gather their thoughts on what makes great QA. Engineering leaders, QA experts and CTOs from the likes of Etsy, GitHub, eBay and more shared what made them so successful, and how they delivered quality time and time again.
From these conversations, I built up a clear picture of what goes wrong, and, perhaps more importantly, what goes right in the world of software development and QA. So, what are the common mistakes app-first companies make, and how do you prevent them?
Let’s take a look at an SDLC we might consider typical.
Testing, here, is treated as a necessary step late in the process: the ‘afterthought’ once the groundwork of requirements gathering, analysis, design and development is done.
But treating QA as important only after the development stage is a common and costly mistake.
A study by Capers Jones found that the majority of bugs are introduced in the early stages of the SDLC: in the design and build phases. Moreover, the later bugs are discovered in the timeline, the more they cost to fix.
That means that by the time you reach the ‘testing’ stage, you will be discovering a lot of expensive bugs.
Treating QA as an afterthought, therefore, is an expensive and time-consuming mistake.
Every software developer knows that fixing bugs after deployment takes a lot of hard work and energy, and to top it off, your bugs are out in the wild at this stage, putting you at risk of customer churn and poor app store reviews.
The best way to fix this? Continuous testing.
By incorporating testing early into the SDLC, even before a line of code is written, you can ensure that you build testability into everything you do. Then, continue testing at every stage of your SDLC. Test during the design phase, test during development and continue testing even after launch. Making QA a priority will ensure you catch critical bugs that could be impacting your users, and you will see product quality improve as a result.
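What does ‘testing before a line of code is written’ look like in practice? Here is a minimal, hypothetical sketch in Python using pytest, written test-first: the `DiscountCalculator` class and its behaviour are invented for illustration, and the test exists (and fails) before the feature does.

```python
# test_discounts.py -- a hypothetical test written before the feature exists (TDD-style).
# Writing and running it first forces testability into the design from day one.
import pytest

from shop.pricing import DiscountCalculator  # doesn't exist yet; creating it is the next step


def test_loyal_customer_gets_ten_percent_off():
    calc = DiscountCalculator()
    assert calc.apply(price=100.0, loyalty_years=3) == pytest.approx(90.0)


def test_new_customer_pays_full_price():
    calc = DiscountCalculator()
    assert calc.apply(price=100.0, loyalty_years=0) == pytest.approx(100.0)
```

The same tests can then run on every commit and at every later stage of the SDLC, so regressions surface immediately rather than after launch.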
The second big mistake teams often make is not striking the right balance between automated and manual testing.
Why is this so important?
Lean too far in either direction and quality suffers: over-automating won’t give you full test coverage, while over-relying on manual testing burns your testers’ time on checks a machine could run.
Imagine you are running the same test for what feels like the hundredth time, confirming that the ‘yes’ button works on a specific function. This takes up precious hours that could be spent on testing strategy or on improving your product quality.
So: automate it!
If a test is repetitive, unlikely to change, or easier to automate than to conduct manually, automation is the answer. What’s more, once written, automated test scripts are there for you to use time and time again. There’s no need to pay manual testers to run repetitive checks, or to spend unnecessary time testing something that could be automated. Lightning-fast test case execution and the ability to run hundreds of tests simultaneously make automation a fantastic way to speed up your SDLC and ensure your vital testing resources are used to their full potential. Failing to consider automation, and to work it into your QA strategy where possible, is a big mistake.
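To make the ‘yes’ button example concrete, here is a rough sketch of how such a repetitive check might be automated with Selenium and pytest. The URL, element ID and expected text are placeholders, not a real app:

```python
# A hypothetical automated version of the repetitive "does the 'yes' button work?" check.
# Requires: pip install selenium pytest, plus a local Chrome install.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_yes_button_confirms_dialog(browser):
    browser.get("https://example.com/confirm")          # placeholder URL
    browser.find_element(By.ID, "yes-button").click()   # placeholder element ID
    assert "Confirmed" in browser.page_source           # placeholder expected text
```

Once a script like this exists, it can run on every build with no extra effort, freeing your team for the work only humans can do.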
But, equally, trying to automate every single element of your testing is a mistake too.
That’s because you cannot automate human creativity. For subjective tests, such as UX, UI and exploratory testing, real human testers are needed to experience the app as your customers would and report back on the issues they find.
Trying to force 100% automation, therefore, reduces your testing scope, siloing your QA. This will jeopardize your product quality because your app won’t be thoroughly investigated by real human beings. Automation may be fantastic for verifying tasks like a ‘yes’ or a ‘no’, but manual testing provides crucial feedback that can influence how you improve your product for the better.
When Mercedes-Benz first launched in China, it did so under the name ‘Bensi’. Sounds cool, right? Not when, in Chinese, Bensi translates to “rush to die”. That’s not the most trustworthy name for a fast car…
Localization ‘fails’ happen more than you might expect. And they definitely happen in the world of software development too.
That’s because an app that was created, tested and launched successfully in the UK isn’t guaranteed to work as effectively in new global markets. When you launch across the globe, you are contending with new networks, devices and OS combinations. That means that, if your app isn’t tested thoroughly, global users simply aren’t going to have the same fantastic user experience that your original market had.
So, avoid this common mistake and work localization testing into your QA strategy. By harnessing the power of a crowd of testers across the world, you can gain local insights into your app, discovering country-specific bugs and identifying cultural nuances.
That way, you won’t have any awkward translations or glitchy products.
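Crowdtesting supplies the human judgment, but some localization checks can be scripted as well. As a rough, hypothetical sketch, a parametrized test can at least confirm that every supported locale has a translation for key UI strings; the `load_translations` helper, the locale list and the string keys are all assumptions for the example:

```python
# A hypothetical parametrized check that key UI strings exist in every supported locale.
# It catches missing or empty translations early; cultural nuance still needs human testers.
import pytest

from myapp.i18n import load_translations  # assumed helper returning {string_key: translated_text}

SUPPORTED_LOCALES = ["en-GB", "de-DE", "zh-CN", "pt-BR", "ja-JP"]
KEY_STRINGS = ["checkout.confirm", "signup.title", "error.network"]


@pytest.mark.parametrize("locale", SUPPORTED_LOCALES)
@pytest.mark.parametrize("key", KEY_STRINGS)
def test_key_string_is_translated(locale, key):
    strings = load_translations(locale)
    assert key in strings and strings[key].strip(), (
        f"Missing or empty translation for {key!r} in {locale}"
    )
```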
Let’s say you’re measuring the results of your current QA process, and you decide to gauge success by how many test cases you write and execute. In theory, this should be a good indicator of how efficiently your team is working and how fast things are getting done.
Months later, you’re confused with the results.
Your app store rating is low, customers are reporting bugs, and the app just won’t stop glitching. But the test metrics confirm that your team is writing more test cases than ever.
What went wrong?
You were using the wrong test metrics.
Consider tracking these metrics instead:
Number of System Outages and Length of Downtime Due to Product Errors: This is relatively self-explanatory, but the number of all-out system outages and the length of downtime are key indicators of whether your app is glitchy or not. Hours of downtime signal that something needs to change, and fast.
Number of Customer Complaints Directly from Product Errors (via support channels): If hundreds of your customers are complaining directly to you about your app quality, that’s evidence that critical bugs slipped through the cracks. The customer is key, so this is an important indicator of whether or not they are happy with your product.
Mean Time to Detect (MTTD): This is all about team efficiency. How quickly are you discovering bugs? The faster bugs are discovered, the quicker they can be fixed.
Mean Time to Resolve (MTTR): Linked to MTTD, a fast detection time means nothing if bugs aren’t being resolved in a similarly timely fashion. If your MTTR is extremely long, you might want to work on prioritization and team productivity.
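For teams starting to track MTTD and MTTR, the arithmetic is simple: average the gap between when each bug was introduced (or first reported) and when it was detected, and between detection and resolution. Here is a rough Python sketch, assuming you can export those timestamps from your issue tracker; the `Incident` fields are placeholders to map onto whatever your tool provides.

```python
# A rough sketch for computing MTTD and MTTR from issue-tracker timestamps.
# The Incident fields are assumptions; map them to whatever your tracker exports.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean


@dataclass
class Incident:
    introduced_at: datetime   # when the defect entered the codebase (or was first reported)
    detected_at: datetime     # when the team found it
    resolved_at: datetime     # when the fix shipped


def mttd(incidents: list[Incident]) -> timedelta:
    """Mean Time to Detect: average gap from introduction to detection."""
    return timedelta(seconds=mean(
        (i.detected_at - i.introduced_at).total_seconds() for i in incidents
    ))


def mttr(incidents: list[Incident]) -> timedelta:
    """Mean Time to Resolve: average gap from detection to resolution."""
    return timedelta(seconds=mean(
        (i.resolved_at - i.detected_at).total_seconds() for i in incidents
    ))
```

Tracked week over week, a falling MTTD and MTTR tells you far more about QA health than a rising count of test cases ever will.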
Don’t get me wrong, I believe it’s truly important that someone ‘owns’ QA in your company.
Having someone in charge of QA means you have dedicated manpower to work on strategizing, prioritizing and analyzing test results. Without this, it’s going to be hard to make QA a priority. You need high-level decision-making.
But, just because you have a dedicated QA person, doesn’t mean sole responsibility falls on their shoulders.
In reality, everyone in your company owns quality.
Building up a culture of quality is all about ensuring every single person in your team is aligned on what quality means, and how to achieve it. If everyone in your company understands the importance of quality, they can incorporate it into their existing routines and workflows.
QA should be considered from early design right through to post-launch. It should be front of mind for developers, CTOs and C-suite executives alike.
If everyone is responsible for quality and aligned on its importance, you will see product improvement and growth as a result.
Author Bio: After selling his first startup, Ronald Cummings-John is now scaling up Global App Testing, a VC-backed, crowdsourced testing platform with over 40,000 professional testers globally. Global App Testing was selected as one of the fastest-growing tech companies in the UK. His passion for quality assurance has taken him around the world, and he has worked with QA and product teams from top companies including Etsy, Microsoft, King, Spotify, and eBay. Ronald is co-author of the book "Leading Quality: How Great Leaders Deliver High-Quality Software and Accelerate Growth."