The Atlassian playbook states that the Sprint Retrospective’s goal is to identify how to improve teamwork by reflecting on what worked, what didn’t, and why. The meeting usually consists of brainstorming what the team did well and what it needs to do better.
During my journey, I’ve been in many of these meetings, sometimes participating, sometimes facilitating. I’ve noticed that writing post-its is painful for some people. While everybody else is speaking, engineers get stuck and can’t think of anything to write during the meeting.
Then, right after the meeting is over, many situations and problems emerge that they would like to discuss. However, they have to wait a couple of weeks for the next meeting, and by then they rarely remember the issue. I don’t think the meeting is as effective as it could be.
To increase effectiveness, I think managers should promote a culture in which people raise their hands when problems occur and discuss issues while they are still fresh in their minds.
If the issue needs a more extended discussion, the team can add it to Evernote, Notion, a GitHub Gist, or any other shareable document. A couple of sentences describing the problem with a little context will do the job.
During the meeting, the Scrum Master or facilitator can read the document aloud, and more post-its may still pop up along the way.
When the meeting starts, the team already has a list of the essential items to discuss, which saves time and makes the session more productive.
So, how can teams make Sprint Retrospective meetings even more effective?
I’ve been talking about pull request lead time and throughput for a while. If you’re familiar with the term Cycle Time, I recommend reading my article on why I’d rather use Lead Time than Cycle Time.
To be explicit: in this article, pull request lead time measures how many days a pull request takes from being opened to being merged, and throughput is the number of merged pull requests.
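For concreteness, here’s a minimal sketch of how both metrics could be computed from a list of merged pull requests. The record shape, the sample dates, and the lead_time_days helper are assumptions of mine for illustration, not the exact method behind the charts in this article:

```python
from datetime import datetime

# Hypothetical merged pull request records: (opened_at, merged_at).
# Both the shape and the dates are made up for illustration.
merged_prs = [
    ("2019-12-02", "2019-12-04"),
    ("2019-12-03", "2019-12-10"),
    ("2019-12-09", "2019-12-11"),
]

def lead_time_days(opened_at: str, merged_at: str) -> int:
    """Days from the pull request being opened until it was merged."""
    opened = datetime.fromisoformat(opened_at)
    merged = datetime.fromisoformat(merged_at)
    return (merged - opened).days

lead_times = [lead_time_days(opened, merged) for opened, merged in merged_prs]
throughput = len(merged_prs)  # throughput: number of merged pull requests
average_lead_time = sum(lead_times) / throughput

print(f"throughput: {throughput} merged PRs")
print(f"average lead time: {average_lead_time:.1f} days")
```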
These two metrics can truly help teams understand how they are performing. Teams benefit even more when they look at how the metrics vary from week to week: the data brings valuable insights and allows teams to question their decisions.
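Continuing the sketch above (and reusing its merged_prs records and lead_time_days helper), grouping the same data by the week each pull request was merged is one simple way to watch that variability:

```python
from collections import defaultdict
from datetime import datetime

# Group lead times by the ISO week of the merge date so both metrics
# can be compared week over week.
weekly = defaultdict(list)
for opened_at, merged_at in merged_prs:
    week = datetime.fromisoformat(merged_at).isocalendar()[1]
    weekly[week].append(lead_time_days(opened_at, merged_at))

for week, times in sorted(weekly.items()):
    print(f"week {week}: throughput={len(times)}, "
          f"avg lead time={sum(times) / len(times):.1f} days")
```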
Let me explain through an example.
I took this chart as an example. Let’s say we’re back in January 2020. We’ve started the Sprint Retrospective by creating post-its for the team’s issues and appraisals during the last weeks of December 2019.
We’ve discussed a bit, and now someone shares the chart above to analyze the pull request lead time and throughput.
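If you’d like to build a similar chart for your own team, here’s a minimal sketch using matplotlib; the weekly numbers below are invented placeholders, not the actual data behind the chart discussed here:

```python
import matplotlib.pyplot as plt

# Placeholder weekly numbers, standing in for real metrics.
weeks = ["Dec W1", "Dec W2", "Dec W3", "Dec W4"]
average_lead_times = [2.5, 4.0, 7.5, 3.0]  # days
throughputs = [12, 9, 4, 10]               # merged PRs per week

fig, ax_lead = plt.subplots()
ax_lead.plot(weeks, average_lead_times, marker="o", color="tab:blue")
ax_lead.set_ylabel("average lead time (days)")

# A second y-axis lets throughput share the same weekly x-axis.
ax_throughput = ax_lead.twinx()
ax_throughput.bar(weeks, throughputs, alpha=0.3, color="tab:orange")
ax_throughput.set_ylabel("merged pull requests")

ax_lead.set_title("Pull request lead time and throughput per week")
plt.show()
```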
If the sprint is two weeks long, we could use the metrics to ask ourselves questions like these:
Asking this kind of question is fundamental. However, nobody should point fingers or blame other people; that’s neither healthy nor effective.
I would expect answers like this:
The numbers themselves don’t matter as much as the story behind them. Now it’s time to ask what the learnings from the last iteration are. Here are some alternatives for addressing those issues:
As you can see, the data allows teams to surface relevant topics for the Retrospective. The discussion is a tremendous source of knowledge and should generate insights for improving the development flow.
Sprint Retrospectives can be more effective with metrics. The team can extract valuable knowledge by questioning the pull request lead time and throughput. Other metrics are useful as well.
The crucial point is to use the metrics to distill tacit knowledge and formalize it. What went well and what went wrong should promote changes in the process, feed guidelines, and motivate new practices.
That’s what an effective Sprint Retrospective is about, in my opinion.
Sources: