Four software engineers lined up for the crime of missed deadlines and a poorly functioning product. Who is to blame? It’s all of them, but not for the reason you think (and they had little choice in the matter…)
Johnny is a terrific developer. He cares deeply about his craft and believes it is his ethical duty to write the best code he can. He knows his bread and butter languages inside and out, always tries to research and follow best practices, and spends his evenings reading technical books to expand his knowledge.
But Johnny has a problem. He has been working for the last week on a small component of a larger software system for Drivl, the tech startup he works for. He had thought his solution was very clever indeed when he’d come up with it at the beginning of the sprint, but he has only now realised that he might have been mistaken. A small pang of anxiety runs down his spine, and he feels his palms begin to sweat slightly. Johnny has noticed a flaw in his technical approach that won’t cause immediate problems if included in next week’s release, but he knows that if the code continues to run, it will result in performance degradation and maintainability problems for his team over the long term. He pauses typing, furrows his brow, and begins to ponder to himself.
“The sprint will be over tomorrow evening”, he thinks. “But I know I’ll need at least a couple of extra days to refactor the code into something more workable.”
He has a choice. His first option is to confess that he was wrong; however, this will delay the release and inevitably harm both his reputation and that of his team within the company. It will have to be explained to the higher-ups, who are unlikely to understand, as the problem is really quite technical. The team have already missed a few deadlines, and the last thing they need is another stain on their reputation.
The other choice is to stay silent, let the code deploy, and allow the can to be kicked down the road.
“I guess I have no choice”, he thinks to himself. “I’ll try to make sure I have time to fix this in the next sprint.” Unfortunately, for whatever reason, this never happens…
In the story, what Johnny doesn’t realise is that he is the ‘prisoner’ in an iteration of the classic game theory problem known as the ‘prisoner’s dilemma’.
The prisoner’s dilemma is a paradox in decision analysis in which two individuals acting in their own self-interest pursue a course of action that does not result in the ideal outcome. The typical prisoner’s dilemma is set up in such a way that both parties choose to protect themselves at the expense of the other participant. As a result of following a purely logical thought process, both participants find themselves in a worse state than if they had cooperated with each other in the decision-making process.
The prisoner’s dilemma tends to show up a lot in day-to-day life, on various macro and micro scales. It is why unilateral nuclear disarmament is difficult, why vampire bats engage in reciprocal food exchange, and why, when it snows for a day and a half in the UK, shops completely sell out of bread for no particularly good reason.
The ‘pure’ example of the prisoner’s dilemma is as follows:
Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The offer is:
If A and B each betray the other, each of them serves 2 years in prison
If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge)
Because betrayal offers a greater reward than cooperation and cooperation carries the risk of betrayal from the other, two purely rational actors will always betray each other.
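To make the dominance concrete, here is a minimal sketch in Python (my own illustration, not part of the classic formulation) that encodes the sentences above as years in prison and checks prisoner A’s best response to each of B’s choices:

```python
# The classic prisoner's dilemma, expressed as years in prison
# (lower is better). Keys are (A's choice, B's choice); values are
# (years for A, years for B).
years = {
    ("silent", "silent"): (1, 1),   # both cooperate on the lesser charge
    ("silent", "betray"): (3, 0),   # A stays silent, B betrays
    ("betray", "silent"): (0, 3),   # A betrays, B stays silent
    ("betray", "betray"): (2, 2),   # both betray
}

def best_response_for_a(b_choice):
    """A's best choice (fewest years in prison) given what B does."""
    return min(("silent", "betray"), key=lambda a: years[(a, b_choice)][0])

# Whatever B does, A is better off betraying -- betrayal is the dominant strategy.
for b_choice in ("silent", "betray"):
    print(f"If B plays {b_choice!r}, A's best response is {best_response_for_a(b_choice)!r}")
```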
The most important fact to realise about the dilemma is that people are forced into defection by the logic of the setup, not because of any bad intention. If they want to minimise their losses, they must defect, because a misjudged attempt at cooperation will cost them more.
The same dilemma can arise amongst groups of more than two people, as is the case for Johnny in my story above. This is known as an ‘n-person’ prisoner’s dilemma.
In Johnny’s story, cooperation takes the form of transparency around technical debt: calling it out to colleagues early so that it is visible and can be fixed either immediately or strategically. Defection involves an individual hiding technical debt and carrying on as if it doesn’t exist, because of pressure to deliver on time and the misguided tendency to equate the need for iterated problem solving with incompetence.
Unfortunately, if tech debt is introduced and left unfixed over many iterations, issues with the codebase will inevitably pile up to unmanageable levels, even though this is not desirable for anyone involved.
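As a rough illustration of how this compounds, here is a toy simulation (the numbers are entirely made up) in which hidden debt accumulates sprint after sprint and makes each subsequent sprint a little more expensive:

```python
import random

def simulate(sprints, engineers, confess_probability):
    """Return the hidden tech debt remaining after a number of sprints."""
    hidden_debt = 0.0
    for _ in range(sprints):
        for _ in range(engineers):
            debt = random.uniform(0, 1)           # debt introduced this sprint
            if random.random() >= confess_probability:
                hidden_debt += debt               # hidden debt quietly piles up
            # confessed debt is assumed to be scheduled and paid down
        hidden_debt *= 1.05                       # old debt makes new work harder
    return hidden_debt

random.seed(42)
for p in (0.9, 0.5, 0.1):
    print(f"confess {p:.0%} of the time -> hidden debt after 20 sprints: "
          f"{simulate(20, 4, p):.1f}")
```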
Johnny is subject to a particular set of rules, and he has to ‘strategise’ his way through them in the way that is least damaging to his own circumstances. In a setup that incurs reputational damage for confessing technical debt, it is in his interests to defect. In fact, for Johnny, defection is a ‘dominant strategy’.
In game theory, a ‘dominant strategy’ is one that carries equal or better outcomes for an individual player no matter what strategies the other players use. A prominent feature of the prisoner’s dilemma is that the dominant strategy for the individual, when played by everyone, results in a bad outcome for everyone.
To illustrate how this applies in Johnny’s case, the following decision matrix represents the payoffs and resulting codebase quality for the individual versus the group, depending on the strategies taken. The punishments are intended to be representative of a working environment in which an individual suffers reputational damage for committing technical debt to the codebase (this could take a variety of forms, e.g. accusations of incompetence, or shaming for causing the team to miss a deadline).
In the diagram, defection (not owning up to technical debt) receives little punishment for the individual in all circumstances, whereas confessing tech debt sometimes carries a very high penalty and never yields a better payoff than staying silent. Rational actors will always play their dominant strategies where they exist, and in this case the dominant strategy for the individual is the bottom row. The resulting equilibrium is the bottom-right square: a poor quality codebase.
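The same shape can be sketched with illustrative numbers. These particular payoffs are my own assumptions, chosen only to reproduce the structure described above, not values taken from the diagram:

```python
# Illustrative payoffs for Johnny's dilemma. Keys are
# (what the individual does, what most of the team does); values are
# (payoff to the individual, resulting codebase quality for the team).
# Higher payoff is better for the individual.
payoffs = {
    ("confess", "confess"): (-1, "good"),  # small, shared cost of fixing debt early
    ("confess", "hide"):    (-5, "poor"),  # the lone confessor takes the reputational hit
    ("hide",    "confess"): ( 0, "good"),  # free-rides on everyone else's honesty
    ("hide",    "hide"):    (-2, "bad"),   # nobody confesses; debt piles up for everyone
}

# Whatever the rest of the team does, 'hide' pays the individual at least as well
# as 'confess' -- so hiding is the dominant strategy, and the equilibrium is the
# bottom-right square, where the codebase quality is "bad".
for team in ("confess", "hide"):
    best = max(("confess", "hide"), key=lambda me: payoffs[(me, team)][0])
    print(f"If the team mostly plays {team!r}, the individual's best move is {best!r}")
```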
Technically, things aren’t too bad if the team manages to stay in a situation something like the bottom-left box (most people aren’t owning up to the tech debt they introduce, but maybe a couple of them still do, sometimes). However, this position is unstable: once enough people notice the occasional defections, they will start to wonder why they should confess and take reputational damage while others clearly don’t, and the situation quickly devolves into something close to the bottom-right box.
You may find some ‘heroes’ who are willing to put their own interests aside and confess, in all circumstances, the tech debt they think needs to be fixed. However, this is stressful for them and unsustainable over the long term. If the ‘heroes’ come to realise that their efforts are in vain, and that no cultural change is in sight despite the reputational damage they are taking, they are liable to leave for other jobs.
The skill of the team is also irrelevant, because there are plenty of perfectly forgivable reasons why even a skilled team member might introduce technical debt, reasons that are indistinguishable from mistakes to a non-technical stakeholder. To name a few:
The culture and process of your company can be thought of as the rules of a game that will affect the strategies taken by the individual players (the employees).
The optimal outcome for both the business and the engineering team consists of either solving technical debt as soon as it arises, or making it transparent so that it can be managed judiciously and kept at an agreeable level. To enable this, the company will have to structure the rules of the game to ensure that the dominant strategy for individual players is to confess to inferior technical decisions.
There are two logical options for achieving this: either punish defection (hiding technical debt) so reliably that confession becomes the safer strategy, or remove the reputational damage attached to confession.
Solution 1 is unfortunately not very plausible. Punishing defection only works if the punishment is swift, obvious, and accurate. This is not possible in a software engineering culture, because technical debt is easy to hide, at least initially. Bad technical decisions are often not known about, even by those who made them, until a significant time after they have been made, and they are difficult to pin down to an individual choice.
Solution 2 is the most likely candidate for success, although it is still not easy. It is likely that most successful software engineering cultures have some level of safeguards in place to prevent reputational damage.
There is no one-size-fits-all way to implement this, but here are some suggestions:
My argument here is not inconsistent with the findings of Google re:Work’s two-year study of 180 teams, which found that the most important cultural property of successful teams is “psychological safety”.
Google define a team as being psychologically safe when “team members feel safe to take risks and be vulnerable in front of each other”. In their words:
Turns out, we’re all reluctant to engage in behaviors that could negatively influence how others perceive our competence, awareness, and positivity. Although this kind of self-protection is a natural strategy in the workplace, it is detrimental to effective teamwork. On the flip side, the safer team members feel with one another, the more likely they are to admit mistakes, to partner, and to take on new roles. And it affects pretty much every important dimension we look at for employees.
Julia Rozovsky, Analyst, Google People Operations
In reality, humans display a systemic bias towards cooperative behaviour in this and similar games, despite what is predicted by simple models of “rational” self-interested action.
The prisoner’s dilemma arises when rational actors play their best strategies. However, people are not rational actors. Sometimes, we may be driven by human nature to benevolently act against our own self-interest when faced with a prisoner’s dilemma. Conceivably, this means there is hope for turning around even the most toxic of workplace prisoner’s dilemmas. People don’t really want to be self-interested cowards, and will try not to be if given the opportunity. It’s just that sometimes they are faced with payoffs so bad that they feel they can’t.
The key takeaway here is that you should watch out for the unintended game-theoretic side effects of your process, because perfectly well-meaning people might cause you serious headaches if you make them follow a set of rules where their best strategy for self-preservation facilitates a bad outcome for everyone. Normally, no one wants this bad outcome, so change the rules of the game and it doesn’t have to happen that way.