Software engineering manager, author and computer scientist.
Much of the life of software engineers, and those in other professions, is spent problem-solving. Some may say it is the bedrock of an engineer’s professional life.
Over the past year, I’ve spent a significant amount of time learning how different people and professions solve different problems. At its heart, there is a fundamental conflation in what problem-solving means to different people.
I have found this leads to a misunderstanding of how we expect others to address problems and can often lead individuals into careers where they are solving problems in a way that does not align with their own intrinsic motivations.
Fundamentally, this originates from individuals conflating different expectations of how problems are solved. These conflations are the three fallacies of problem-solving I will present in this article.
The first fallacy conflates solving with resolving. When you solve a problem, you find a solution to it; when you resolve a problem, you put an end to it. Resolving a problem may well mean that the root-cause problem is never solved. These are two different skills, and they are often conflated.
An investigative journalist may solve a cover-up by making public what has gone on. However, as the resulting media attention will reflect, they will not have resolved it. By contrast, a PR officer may well resolve the story by seeking to stop the media storm, but they have not solved the underlying problem.
A software engineer may well solve a problem by creating a new product, but the problem isn’t resolved (as the ongoing software maintenance will show). An engineering manager may well “resolve” a bug report, citing that it’s not worthwhile fixing, but the underlying problem remains unsolved.
A police officer may solve a crime, but a prosecutor resolves the problem through the prosecution process. An anesthetist may well resolve the pain, but a surgeon solves the underlying issue.
HR may well resolve a workplace dispute between two individuals by separating them but, as the ongoing hostility will show, they have not solved the underlying disagreement. A conciliator, by contrast, may well solve it by addressing the underlying human challenges.
That isn’t to say that professions can’t do both: an engineering manager needs to be able to both solve and resolve. A clinical psychologist needs to be able to resolve psychological challenges whilst also helping the patient solve the underlying issues.
However, conflating these two approaches risks either never solving the underlying challenges, or never bringing trivial issues to an end when there is no purpose in addressing them further.
Loss aversion, a psychological factor where humans feel the pain of a loss twice as much as the pleasure of a gain, can lead us down the path of continuously resolving and never solving. This can lead to problems snowballing from minor issues into catastrophes.
By being mindful of whether we “solve” or “resolve”, we can both prevent issues from snowballing and avoid spending too much time on trivial issues.
The second fallacy conflates preservation with protection. For the purposes of this fallacy, preservation refers to maintaining or improving the internal order of a system, whilst protection refers to shielding the system from external disorder.
This distinction isn’t clear-cut across professions: a cybersecurity engineer needs to preserve an internal culture in which problems are flagged, doctors sometimes protect against infections (rather than just preserving against the intrinsic threat of old age), and fighter pilots must preserve an effective team relationship to succeed in their protective function.
However, adopting an approach where preservation and protection are used interchangeably can be catastrophic. An extreme example is attacking whistleblowers to preserve a system, instead of protecting from the harm they are flagging. Conversely, ‘protecting’ society through authoritarian security policies may well not be as effective as preserving social well-being through policies aimed at addressing underlying challenges.
This can be explained by psychological biases like loss aversion (as mentioned in the last section), the sunk cost fallacy (reluctance to abandon approaches we’ve invested in), and normalcy bias (minimizing or disbelieving threat warnings).
However, there appear to be two ways of mitigating this:
The third fallacy conflates imitation with innovation. Success in imitating does not necessarily correlate with success in innovating. I don’t mean imitation in a disparaging way; it is hugely important: day-to-day society would not operate unless there were people capable of learning a body of existing knowledge and applying it effectively.
However, those who have innovated have often not been those who had mastered a given discipline: Thomas Edison was a telegraph operator, the Wright brothers worked in a bicycle shop, and Jeff Bezos had no retail experience.
Whilst expertise in a single area is an easier brand for people to consume, it seems having a broader knowledge range enhances, not diminishes, problem-solving skills.
Often, we do ourselves no favors by criticizing those who have expertise across disciplines or requiring a homogeneous path to success. After all, how many great products have come from competitive programmers?
To summarize the various distinctions explored in problem-solving:

- Solving vs resolving: finding a solution to the root cause vs putting an end to the problem.
- Preserving vs protecting: maintaining or improving a system’s internal order vs shielding the system from the outside.
- Imitating vs innovating: applying an existing body of knowledge effectively vs creating something new.
Whilst many of these distinctions are spectrums rather than binary switches, you can use this model to identify the problems you’d most like to solve. In my case, forced to pick with varying degrees of conviction, I’d go with: solving, protecting, and innovating.
In other words, I like creating new solutions that address the underlying problem, in areas where the work is about shielding the problem space from external disorder.
This model allows me to understand the problems I like to solve, as well as my potential blind spots when working through new problems, ensuring I don’t bias myself towards the approaches I am most naturally drawn to.