To solve a problem, you first need to understand the problem.
Problems exist within systems.
So, to understand the problem, you need to understand the system.
By systems I mean complex adaptive systems. They are the most interesting kind, and the most difficult to deal with.
Complex, because the parts are interdependent: remove one and you destroy the system. Like removing the heart from a human body.
Adaptive, because each part reacts to what the other parts do. Like the heart pumping faster when artery blockages develop.
Treating a complex system like a black box, where you figure out the problem by trying solutions, is a terrible idea. Imagine going to a doctor who, without listening to you, gives you aspirin. Do you feel better? No? Then paracetamol. Still no? An antacid. Still no? … Sigh, okay, tell me what's wrong?
But that’s how we solve most surface problems. And in simple systems, it usually works.
The key is: can you distinguish between simple and complex systems?
If you don’t know what to call it, you can’t distinguish it. [1]
An alternative is white-box thinking. Here, we dissect the system, get into its internals and figure out why things weren’t working, or how the system works.
Simulation is either computational or thought-based: one is running a computer simulation, the other is running a thought experiment. In both cases, what you get are insights into how the system works. The two go hand in hand.
Both need a basic understanding of how things work: without an existing model, you can't simulate. These methods are also used to refine your model. For example, you start with a barebones model built from experience. Then you run it. Then you improve the model based on the difference between your model's results and the world's.
A pillar example of simulation is The Limits to Growth, done in 1972. It tracks the demise of humanity on the current trajectory of consumption and environmental degradation.
Criticized in the beginning, LTG has come to be accepted as a model that predicts reality, now that we have 50 years of living the model behind us.
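The overshoot-and-collapse dynamic that LTG describes can be sketched in a few lines. This is a toy model, nothing like the actual World3 simulation, and every parameter value below is made up for illustration: a population grows while drawing down a nonrenewable resource, then declines once the resource can no longer meet demand.

```python
def simulate(steps=100, pop=1.0, resource=100.0,
             growth=0.05, use_per_capita=0.5):
    """Population grows while demand is met, shrinks once it isn't."""
    history = []
    for _ in range(steps):
        demand = pop * use_per_capita
        consumption = min(resource, demand)
        resource -= consumption
        satisfied = consumption / demand          # fraction of demand met
        # Full satisfaction -> +5% growth; zero satisfaction -> -5% decline.
        pop = max(0.0, pop * (1 + growth * (2 * satisfied - 1)))
        history.append(pop)
    return history

h = simulate()
# Population climbs, overshoots, then collapses as the resource runs out.
```

Even this crude sketch reproduces the signature shape: steady growth that looks healthy right up until the resource base gives out.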
Take a hypothesis, theory or principle. Think through what would happen if it were true. That's a thought experiment.
Consider an example: Schrödinger's cat (1935). It presents a cat that is indeterminately alive or dead, depending on a random quantum event. Finding out whether it's alive depends on looking in the box. Until that point, it's both alive and dead.
What makes this thought experiment so great is the way it explains quantum theory. Say, instead, you had a coin inside the box. You can toss this coin by pressing a button outside the box. If it's heads, the cat dies. Tails, and it lives. With every button press, the probability of survival decreases, but at no point is the cat both alive and dead. Is your mind blown to pieces yet?
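The contrast is easy to make concrete. A rough sketch of the coin version: survival probability simply halves with each press, yet the cat is always in exactly one definite state.

```python
from fractions import Fraction

def survival_probability(presses):
    """Chance the cat is alive after `presses` fair tosses,
    where heads on any toss kills it: (1/2) ** presses."""
    return Fraction(1, 2) ** presses

probs = [survival_probability(n) for n in range(4)]
# probs == [1, 1/2, 1/4, 1/8]: strictly decreasing, but at every point
# the cat is definitely alive or definitely dead, never a superposition.
```

The quantum version has no such clean probability ledger until you open the box; that is the whole point of the thought experiment.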
You begin with a set of hypotheses about what would happen. You run through them in your mind. Two things might happen:
- You arrive at an outcome different from what you expected.
- You arrive at your version of reality.
Both cases are a cause for concern, so you confirm that's what would happen in the real world, via experience. If your model is close enough to the real world, profit; else you refine your hypotheses and restart.
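This refine-until-close-enough loop can be sketched in code. Everything here is hypothetical: `real_system` stands in for experience of the world, and the "model" is a single slope we keep nudging until its predictions are close enough.

```python
def real_system(x):
    # Stand-in for "experience": the world we are trying to model.
    return 3 * x + 2

def refine(slope, inputs, lr=0.01, tolerance=0.5, max_rounds=1000):
    """Run the model, compare with the world, nudge the model, repeat."""
    for _ in range(max_rounds):
        error = sum(real_system(x) - slope * x for x in inputs)
        if abs(error) < tolerance:
            return slope                      # close enough: profit
        slope += lr * error / len(inputs)     # refine the hypothesis
    return slope

slope = refine(0.0, inputs=[1, 2, 3])
# The model family (a bare slope, no intercept) can't express the "+ 2",
# so refinement settles near 4 rather than the true 3 over these inputs.
```

The ending is instructive: refinement only gets you as close as your model's structure allows, which is exactly why arriving at "your version of reality" is still a cause for concern.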
Here, we enter the territory of real-world experiments with real-world consequences, ones that might change the state of the system.
You’re perturbing the system to try and figure out what will happen. Thus, you want to experiment with small changes first. [2]
Want universal basic income? Try experimenting in a few cities first. It's A/B testing at whatever scale works.
The only caveat: things break when you scale up.
Experimentation with the system is needed when your imagination can't keep up with what you're seeing. This usually happens in complex systems: there are too many variables. You don't know half of them, and you can't keep the other half in your brain. [3]
The main aim of the above approaches is to build an accurate-enough model: one that behaves the same way the system behaves, given the same input.
Input → System → Output
Input → Model → Output
Fair warning, though: knowing this model isn't the same as controlling the system. That's the Law of Requisite Variety: to control a system, you need a controller at least as complex as the system.
Behaviour-over-time diagrams (BOTDs) are an excellent tool for building this model. Sometimes sketching things out and looking at it all together helps you see the relationships. That's what BOTDs are for.
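Here's a minimal, hypothetical BOTD rendered in plain text: a bathtub stock with constant inflow and a drain whose outflow grows with the level (all numbers made up). The bars make the shape of the behaviour visible without any plotting library.

```python
def botd(values, width=40):
    """One bar per time step: a plain-text behaviour-over-time diagram."""
    peak = max(values)
    return "\n".join("t=%2d |%s" % (t, "#" * round(v / peak * width))
                     for t, v in enumerate(values))

stock, inflow, drain_rate = 10.0, 3.0, 0.2
levels = []
for _ in range(15):
    stock += inflow - drain_rate * stock   # outflow grows with the stock
    levels.append(stock)
print(botd(levels))
# The bars lengthen quickly, then flatten: the stock settles where inflow
# equals outflow (3 / 0.2 = 15), a classic goal-seeking shape.
```

Even this crude diagram answers a question a table of numbers hides: is the stock still rising, levelling off, or oscillating?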
In both approaches, you're trying to fill in some blanks. Here are some important questions to answer; they exist in every system.
If you’re unsure what some of them mean, head over to seeing systems.
- Find the driving factors: what would happen if I do X?
- Figure out the loop structure: if I do more of a driving factor, d, what will happen?
- Figure out delays in the system: how long before the results of changing d occur?
- Identify the current dominant loop: a second from now, towards which driving factor is the system heading?
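The "which loop dominates right now?" question can be made concrete with logistic growth, where a reinforcing births loop and a balancing crowding loop compete over one stock (numbers are illustrative):

```python
def logistic_run(pop=1.0, r=0.1, capacity=100.0, steps=200):
    """One stock, two loops: reinforcing (r * pop, growth breeds growth)
    and balancing (-r * pop * pop / capacity, crowding)."""
    pops, deltas = [], []
    for _ in range(steps):
        delta = r * pop * (1 - pop / capacity)   # net effect of both loops
        pops.append(pop)
        deltas.append(delta)
        pop += delta
    return pops, deltas

pops, deltas = logistic_run()
shift = deltas.index(max(deltas))   # the step where dominance flips
# Before `shift`, growth accelerates (reinforcing loop dominates);
# after it, growth slows (balancing loop dominates). The flip happens
# near half of capacity, which is why S-curves bend where they do.
```

Asking "towards which driving factor is the system heading?" at any step is just asking which side of `shift` you're on.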
Some very common models come into play again and again. If you know these well, the problem switches from figuring out what is happening to figuring out which model fits. This is powerful and dangerous.
Dangerous, because you develop a tendency to beat the current situation into one of these models.
Powerful, because you don't have to start from scratch every time.
This is how Mental Models work. They are archetypes for life.
Wikipedia has an excellent introduction to system archetypes.
To dive into available archetypes, The Systems Thinker is an excellent resource.
I haven’t explored archetypes enough, yet.
What I do know is that the above archetype list is incomplete.
As Charlie Munger found out, sometimes it's easier to look in an adjacent field than to dig deeper in yours.
That’s what I’ll do.
Into Chemistry: equilibria. Systems can be thought of as equilibria.
That means we can use results from chemical equilibrium to explain system behaviour. Presenting Le Chatelier's Principle:
When a settled system is disturbed, it will adjust to diminish the change that has been made to it.
This is precisely what Jay Forrester says happens in social systems.
“At any time, a near-equilibrium exists affecting population mobility between different areas of a country. To the extent that there is disequilibrium, it means that some area is slightly more attractive than others and population begins to move in the direction of the more attractive area. Movement continues until rising population drives the more attractive area down in attractiveness to again be in equilibrium with its surroundings. Other things being equal, an increase in population of a city crowds housing, overloads job opportunities, causes congestion, increases pollution, encourages crime, and reduces every component of the quality of life.”
— Jay Forrester, Counterintuitive Behavior of Social Systems
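Forrester's near-equilibrium can be sketched as two areas with made-up attractiveness numbers: people flow toward the nicer area, crowding erodes its attractiveness, and the gap self-corrects, exactly as Le Chatelier's Principle predicts.

```python
def migrate(pop_a=100.0, pop_b=100.0, base_a=10.0, base_b=8.0,
            crowding=0.05, mobility=0.2, steps=500):
    """Area A starts more attractive; migration toward it raises its
    crowding until the two areas are equally attractive again."""
    for _ in range(steps):
        attract_a = base_a - crowding * pop_a
        attract_b = base_b - crowding * pop_b
        flow = mobility * (attract_a - attract_b)  # toward the nicer area
        pop_a += flow
        pop_b -= flow
    return pop_a, pop_b

a, b = migrate()
gap = (10.0 - 0.05 * a) - (8.0 - 0.05 * b)
# `gap` ends up near zero: the disturbance (A being nicer) has been
# absorbed by the system adjusting against it.
```

Note that the adjustment isn't free: area A ends up more crowded, which is Forrester's point about every component of the quality of life being dragged down.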
[1] Case in point: The Piraha tribe
[2] This doesn’t mean the large scale change would mimic the small scale change. Things can go unexpectedly wrong when you scale up.
[3] Computer simulations shine on this half, but still produce crap results, because they include just half the variables, and not necessarily the most critical ones.