Recently, we decided to stop gathering feedback at our biweekly sprint reviews and to concentrate instead on making the development process transparent for stakeholders.
We’ve always thought that feedback is the main goal of a sprint review. So why have we discontinued this practice?
At recent sprint reviews at Dodo Pizza, we gathered feedback diligently. We invited guests, managers, and partners, and our teams reported on the feedback they'd received. But I wouldn't call it a success.
At a sprint review, teams show what they've done in brief sessions of 20 minutes each. When demonstrating a client-facing app (a website, mobile site, or mobile app), we don't have time to question more than two or three clients, and that's not enough. When we demonstrate a back-office application, we ask our managers and partners to test new features, but 20 minutes is not enough time to really dive in or even understand what's going on there, let alone give proper feedback.
But that's not even the point. The point is that the feedback gathered by our subject-matter experts working with the teams during a sprint is more comprehensive, more valuable, and of far higher quality.
For instance, Sergey, our mobile app product owner, meets with clients regularly, questions our users, and gathers feedback in a cozier setting. He often sits in a pizzeria in Lyubertsy, where he lives, and talks quietly with local guests. Over the course of a sprint, he meets several dozen clients, and by the end of it he has a wealth of information about the product's strengths and the things we need to change. So by the time a sprint review begins, the feedback has already been gathered, considered, and acted on.
You’d be surprised, but even in the Scrum Guide, there is not a word about gathering feedback at a sprint review. “During the Sprint Review, the Scrum Team and stakeholders collaborate about what was done in the Sprint” — that’s how it’s put there. The Scrum Team and stakeholders! Not users or customers. And they collaborate, not gather feedback! It’s not about that at all.
Not all stakeholders have the chance to follow the details of the development process closely every day, especially with eight teams working on a project, as is the case here. But nearly everyone can spare two hours every other week.
A sprint review is a very expensive affair: all the teams (58 people in total) and the key stakeholders take part, and even the company's CEO sets aside these two hours every other week. So it's important to run this meeting with maximum efficiency and waste no time on things that don't benefit the project.
Not all stakeholders take an active part in the development process, but all of them want to be kept informed about what's going on. That's why we need sprint reviews. And it's not enough to see only what's been done during a sprint: stakeholders can't tell whether the sprint goal was a stretch for the team or was achieved in a breeze. They don't even know how many people are on our teams.
How do you know which team was working like crazy and which preferred to set an easy sprint goal and didn't strain itself? How do you judge why a team hasn't reached its sprint goal: is it low motivation, or is there some deeper reason? How do teams react to emerging problems: do they simply trudge on, or do they look for a systemic cause and try to eliminate it (perhaps at the expense of the sprint goal)?
There are sprint review tips in the Scrum Guide, and the third one, "The Development Team discusses what went well during the Sprint, what problems it ran into, and how those problems were solved," is exactly what I'm talking about here. That's what we decided to concentrate on at our last sprint review.
In other words, we decided to use our sprint review to maximize the transparency of the development process. We showed what we'd done, as usual, but beyond that, our teams talked about how new features and functions came to be.
What was our goal? Was it achieved? What was going on during the sprint? What distracted or impeded us? What measures did we take to reach our goal? What did we do to eliminate obstacles? What tests did we run, and what were the results? What decisions did the team make?
For example, several teams could not achieve their sprint goals because of the Stop the Line principle. Stakeholders were interested in what those teams had been doing during the Stop the Line periods and how it would benefit the whole project. The Traction for Abstraction team told us a real detective story, with hopes of a quick solution, the disappointing discovery that there would be no free ride, plot twists, and a happy ending :-) In the end, the team managed to more than double the website's traffic capacity, which is a success! But a success is perceived differently when you know its cost.
The participants liked this new sprint review format, as the mini-retrospective at the end of the meeting showed. Teams could talk comfortably about sprint goals they hadn't reached, and many of their stories were dramatic, exciting, and impressive. The stakeholders came to understand why masking a client's email address with asterisks on a receipt is not a trivial task at all, and not just "half an hour of programming," as they had previously thought. There was a lot of deep discussion about how the business and the developers could collaborate more closely to speed up launches in other countries, and, conversely, how the developers could engage with clients more, thinking in terms of the clients' problems rather than getting carried away by the solution they had come up with. Overall, it was good.
As always, however, there is room for improvement. So we continue our experiments.