Does your development team feel stuck when it comes to knowing what to focus on to improve your software testing and quality management? Do you need to figure out how to fill the gaps and boost efficiency and results?
We’ve found it immensely valuable to periodically conduct a software testing maturity assessment when working with clients. We also often do it during the early stages of a new engagement to better understand where our client’s team stands in terms of their quality goals and to put together a strategy to take them to the next level.
In this post, I’ll share the process behind our software testing maturity assessment so that you can bring some of these ideas to your own test strategy, working towards continuous improvement.
In the end, we believe that mature teams are those who have mastered the practice of continuous testing, which is what we’ve designated as our highest level of testing maturity. A continuous improvement approach to your test strategy, along with the adoption of CI/CD, is what can help your team succeed.
The assessment looks at how efficiently teams match testing and quality control tasks to the various processes throughout the software development lifecycle, and at how they establish the appropriate feedback cycles for continuous improvement.
Firstly, we conduct the analysis considering the three main pillars of software engineering: people, technology, and processes.
To keep things simple, we’ve defined three different levels of software testing maturity:
When performing the assessment, we follow a three-step process:
One of the first things to analyze, after understanding the objectives and the context, is the maturity of the team in terms of skills, communication, and other aspects that also influence the quality of the final product.
In the same way, we analyze everything related to process and methodology (whether the team works in an agile, waterfall, or hybrid environment).
Putting it all together, we see something like this:
Next, we delve into other points that are largely related to the technological and process aspects of software development, but highly focused on everything that affects quality.
For each area we analyze, we define a table with three levels of maturity; to sit at a given level, a team must first meet certain preconditions.
This leads us directly to the action plan because, in order to advance to a higher level, it’s clear what else must be tackled first. Of course, everything is validated in context, prioritizing accordingly and weighing the benefits, costs, and risks of each activity.
In the chart above, you can see a base model for the different areas we analyze, from how teams manage the source code to usability testing.
Of all the ISO 25010 quality factors, we included only those relevant to most companies, but levels can be defined in a similar way for each quality factor.
For each area, we identify key activities for each level. As you can see, we define precedence between activities. For example, a team can’t claim to have continuous integration if it doesn’t first have a centralized code repository that manages artifact versions.
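The precedence idea above can be sketched as data plus a small check. This is only an illustrative model, not our actual assessment tool; the area name, activity names, and level groupings below are hypothetical:

```python
# Hypothetical maturity model for one area: each level maps to the set of
# activities a team must have in place. A team sits at the highest level
# whose activities, and those of every lower level, are all complete.
AREAS = {
    "source_code_management": {
        1: {"centralized code repository", "artifact versioning"},
        2: {"automated builds", "continuous integration"},
        3: {"continuous delivery"},
    },
}

def maturity_level(area: str, done: set) -> int:
    """Return the highest level whose activities (and all lower levels') are met."""
    level = 0
    for lvl in sorted(AREAS[area]):
        if AREAS[area][lvl] <= done:  # subset check: every activity is done
            level = lvl
        else:
            break  # precedence: a level can't be claimed while a lower one has gaps
    return level

# A team doing CI without a centralized repository stays at level 0:
print(maturity_level("source_code_management", {"continuous integration"}))  # 0
# Meeting level 1's preconditions first is what unlocks progress:
print(maturity_level("source_code_management",
                     {"centralized code repository", "artifact versioning"}))  # 1
```

The subset check is what encodes precedence: continuous integration only counts once the level-1 activities it depends on are in place, which mirrors how the action plan falls out of the gap analysis.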
Associated with this analysis, the following chart shows some of the most common pain points that each level of maturity manages to eliminate or solve:
We hope this software testing maturity model can serve as a useful reference when analyzing how to improve your testing strategy.
We’ve found it to be a helpful tool to clarify which are the most important areas to prioritize, what gaps exist in a test strategy, and how to make a plan to reduce risks and optimize quality under controlled costs.
Curious as to which level your team lands for the different areas of quality? Take our free, online software testing maturity assessment!