Urizen shows his book of laws. Image credit: William Blake Archive
When human endeavors go wrong, as they so often do, we blame people for making bad decisions, whether from a moral, political, or scientific point of view. That only raises our emotional temperature; it neither fixes problems nor heads off future catastrophes. Recent theorizing, however, suggests that many failures stem from general dynamical patterns. Let’s look at two of these patterns. They might someday be part of a science of how to get things done, a kind of moral philosophy of the practical.
First, let’s consider what we’re up against. As in, what are the given background conditions for any human project, the constraints that we simply have to accommodate?
Murphy. The first is commonly called Murphy’s Law: anything that can go wrong will go wrong. This is, at bottom, the second law of thermodynamics, which says there are many ways for disorder to increase and very few for it to decrease. We ourselves are temporary thrusts against the gradient of disorder. Whatever we do to give ourselves a safe environment is therefore also temporary.
This plays out at a low level of abstraction, as when the things we make wear out, our food rots, our bodies degrade, or the weather tears up our habitations. But it also applies at higher levels of abstraction. Software is a case in point: it is hard, verging on impossible, to write a useful program that has no ways to fail. Likewise, our social and cultural arrangements are subject to decay.
Unintention. Economists speak of a “law” of unintended consequences. We do something to solve a problem, and as a result some other problem arises, one that we could not or did not predict. Examples are innumerable, but for now, remember that every drug has unwanted (so-called) “side” effects. At bottom, unintended consequences are kin to chaos theory’s butterfly effect: small interventions propagate in ways we cannot trace in advance.
Human Nature. The third thing we are stuck with is human nature, a conglomeration that includes some real downers. We have sociopaths. We have tribalism. Selfishness and free-riding cause problems directly, and indirectly by eroding trust. And we find it hard to agree on basic facts because we are ruled by unconscious cognitive biases.
One bias (that I have covered elsewhere) happens when we try to understand others’ motivations, a process called Theory of Mind. We over-generalize this, thus attributing agency to parts of nature that don’t have it. We then manufacture belief in supernatural forces whose whims we need to appease. The resulting behavior may be benign (nature worship), or harmless (folk tales), or horribly wrong (crusades, crucifixions, cleansings).
Another major “human nature” issue is the scope of a person’s concern: how many other people they care about. This sets up conflicts between those who care only about family or tribe and those who also care about larger groups, or even humanity as a whole.
So we exist against a background of threats: entropy, our inability to foresee the consequences of our actions, and everyone’s often unconscious cognitive biases. On top of these are higher-level dynamics that lead good intentions astray.
Moloch, by John Singer Sargent (Boston Public Library)
A blogger writing under the pseudonym Scott Alexander gave us a catchy name for the most destructive dynamic in the world. Moloch was an ancient Canaanite god associated with child sacrifice. In Allen Ginsberg’s famous poem, Howl, Moloch is identified with all the myriad horrors and miseries of civilization. Alexander, impressed by the analogy, used Moloch as the nickname for what he termed multipolar traps.
These are situations that can arise whenever multiple parties pursue some valuable goal, G. Nearly always, either the goal itself or some resource needed to reach it is in short supply, so competition arises. The bad dynamic starts when someone decides to sacrifice some other value, V, in order to get ahead. The other parties will be out-competed unless they also sacrifice V. Soon everyone who survives the competition finds themselves without V, and possibly with less of the original goal G than before.
Alternatively, a monopoly or oligopoly can emerge when one player, or a few, sacrifices enough V to capture all of the G.
The more interesting cases are those with many players. Consider the story documented in Elizabeth Warren and Amelia Warren Tyagi’s 2003 book, The Two-Income Trap. Suppose the Goal is to live in a safe neighborhood with good schools. The players are families raising children. To buy a home in an expensive school district, both parents have to work. The Values sacrificed include time with children, cultural opportunity, and diversity, as well as the money that goes to child care, higher taxes (because of higher joint income), a larger mortgage, a second car, convenience foods, and tranquilizers.
Also sacrificed are the aspirations of people who can’t get good enough jobs to compete. These folks are not even in the original game; they are collateral damage. Meanwhile, the price of desirable houses goes up and up, taking the Goal out of reach for many. Without anyone making a decision to do so, other values are sacrificed as household stress, debt, divorce, and bankruptcy increase.
Of course, as we know, it went further than that. Even back then Warren mentioned subprime mortgages as a dangerous development. Libertarian, greed-is-good schemes are Moloch’s favorite food. Now, in the continuing aftermath of the last crash, it’s Jurassic Park for the middle class, bunkers for the rich, and fascism for the rest. Moloch, indeed.
So the first Failure (note capital F) dynamic is Moloch of the Multipolar Traps. ‘Multipolar’ because it is a competition of multiple parties. ‘Traps’ because once you are in the downward spiral, you are stuck until something gives, some limit is reached. In our two-income example, perhaps the limit is a pandemic with widespread unemployment, or… pick your own favorite show-stopper.
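For readers who like toy models, here is a minimal sketch of the trap in Python. Every number and rule in it (the five players, the prize G, the side value V, the effort-share payoff, the best-response updates) is my own invention for illustration; it is not taken from Alexander’s essay or Warren’s book.

```python
# Toy model of a multipolar trap. N players compete for a fixed prize G (the
# Goal). Each may burn some of a separate value V (0.0 to 1.0 of it) to gain
# competitive "effort"; a player's share of G is proportional to effort. The
# payoff numbers and the best-response rule are invented for illustration.

N = 5              # number of players
G = 100.0          # total prize to be divided (the Goal)
V = 10.0           # value each player holds before sacrificing anything
GRID = [i / 10 for i in range(11)]    # allowed sacrifice levels 0.0 .. 1.0

def payoff(my_s, others_s):
    """My share of G (by relative effort) plus whatever V I kept."""
    total_effort = my_s + sum(others_s)
    share = my_s / total_effort if total_effort > 0 else 1.0 / N
    return share * G + (1.0 - my_s) * V

sacrifice = [0.0] * N                  # everyone starts by sacrificing nothing
for _ in range(20):                    # repeated rounds of best responses
    for i in range(N):
        others = sacrifice[:i] + sacrifice[i + 1:]
        sacrifice[i] = max(GRID, key=lambda s: payoff(s, others))

total = sum(payoff(sacrifice[i], sacrifice[:i] + sacrifice[i + 1:]) for i in range(N))
print("equilibrium sacrifice levels:", sacrifice)
print("total well-being at equilibrium:", round(total, 1))
print("total well-being if nobody had competed:", G + N * V)
```

Run it and every player ends up burning all of their V just to keep an equal share of G, leaving each one worse off than if no one had competed. That is the trap’s signature: no single step was irrational, yet the collective outcome is.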
The story above, of course, has a lot more going on than a multipolar trap. It has the basic background conditions mentioned earlier, plus several doses of our second major type of Failure dynamic. That dynamic goes by several technical names in game theory, economics, and even artificial intelligence research. But the name I like is a newish one, Surrogation. It simply means substituting a surrogate goal for the actual goal. That sounds innocent enough, yet there are plenty of times when we get together, agree on totally admirable goals, and it still ends in tears.
Take a moment and remember the Phrygian king Midas. Dionysus offered Midas anything he wanted as payment for a favor. What Midas wanted was riches, but his surrogate for that (“let everything that I touch turn to gold”), granted literally by Dionysus, turned out to be a curse.
Surrogation is the basis of legal systems. A prescriptive law is said to have a spirit (its goal), which is to encourage some better state of affairs or to eliminate some problem. On the other hand, the letter of the law (its surrogate) specifies rules for bringing the spirit to fruition. Rules for shaping human behavior are often tricky, and so laws are imperfect. William Blake’s mythical being, Urizen (“your reason”), tried to constrain the universe by creating laws but made chaos and misery instead.
We engage in projects whose ideal outcome might be easily stated and agreed upon, but whose execution has to be guided by a substitute. The surrogate is whatever can be measured or tracked, or is simply feasible in the situation. Effort can be applied only to the surrogate, not to the actual goal. Wise observers tell us that an enterprise built on surrogation very often misses its mark.
The ill effect of surrogation is so reliable that one version of it is known as Goodhart’s Law. The economist Charles Goodhart’s observation is usually paraphrased as: when a measure becomes a target, it ceases to be a good measure. So we design a project (private or public, it doesn’t matter) and say (oh, so smugly) that we’ll know we’re getting there because we will measure X, Y, and Z. The measures are chosen because they make sense; they connect logically, somehow, with the project’s true goal. Often they measure the direct effects of particular policies that we think will lead us toward the real goal. Perhaps in the past they were even high at times when we seemed closer to achieving it.
So, we’re set, all happy because we have a good goal and good measures to lead us to it. We change our policies to set the project in motion and sit back to watch our success. What could possibly go wrong?
Let us count the ways, as explained by Scott Garrabrant, one of the people who try to imagine how we might get super-intelligent AIs to behave admirably.
(1) Goal Drift. People whose incentives are tied to moving the Surrogate measures start to think of the Surrogate as an end in itself. The Surrogate mentally becomes the new Goal. But optimizing for the Surrogate does not optimize for the Goal.
(2) Causal Disconnect. The Surrogate and the Goal have been correlated in the past because both are affected by other causes, but changing the Surrogate has no causal effect on the Goal at all. Or maybe you failed to realize that the Goal causes changes in the Surrogate, not vice versa: you can’t make yourself taller by practicing basketball skills.
Sometimes the project itself changes the causal effect. We decide to place a wind farm in an area where there is a lot of wind. We calculate how many windmills to use and what our power output (the Goal) should be. But the presence of many windmills in an area actually decreases the Surrogate measure, wind speed.
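A back-of-the-envelope sketch shows how this can bite. The wake-loss rate below is invented, and the toy model only borrows the standard rule of thumb that a turbine’s power scales with the cube of wind speed; it is an illustration, not real wind-farm engineering.

```python
# A sketch of the wind-farm example: the site was chosen on measured wind speed
# (the Surrogate), but packing in turbines creates wake losses that lower the
# effective wind speed itself. All numbers are illustrative stand-ins.

FREE_WIND = 10.0                 # m/s, wind speed measured before any turbines
WAKE_LOSS_PER_TURBINE = 0.002    # fractional speed loss each added turbine imposes (invented)

def effective_wind(n_turbines):
    """Average wind speed inside the farm after compounded wake losses (toy model)."""
    return FREE_WIND * (1.0 - WAKE_LOSS_PER_TURBINE) ** n_turbines

def farm_power(n_turbines):
    """Total output: per-turbine power goes as wind speed cubed, times turbine count."""
    return n_turbines * effective_wind(n_turbines) ** 3

for n in (10, 50, 100, 200):
    planned = n * FREE_WIND ** 3      # what the plan assumed, using the original Surrogate
    actual = farm_power(n)            # what you get once the project changes the Surrogate
    print(f"{n:4d} turbines: planned {planned:9.0f}, actual {actual:9.0f} "
          f"({actual / planned:.0%} of plan)")
```

In this toy world, going from 100 to 200 turbines adds only about ten percent more power, because the project has been eating its own Surrogate.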
(3) Statistical Regression. The correlation of the Surrogate with the Goal is causal, but imperfect, because other random, unknown causes are also involved. So when your actions increase the Surrogate measure, there might be no corresponding effect on the Goal.
Sometimes you look at multiple Surrogate measures and pick the one that correlates best with the Goal. But (says basic statistical theory) a correlation selected for being high is likely to be high partly because of random factors. So when you start to use that Surrogate, its correlation with the Goal drops back, i.e., it “regresses toward the mean.”
Height is correlated with basketball ability, but unless you are the coach in a tiny town, you don’t just pick players based on height.
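Here is a small simulation of that selection effect. In it, every candidate Surrogate is equally good by construction (the Goal plus the same amount of noise), so whichever one screens best owes its entire edge to luck; the parameters are arbitrary choices of mine.

```python
# A sketch of regressional surrogate trouble: screen several candidate measures,
# keep the one that correlates best with the Goal, then watch that correlation
# shrink on fresh data. All data are simulated; the numbers are illustrative.
import random

random.seed(0)

CANDIDATES = 8     # candidate Surrogate measures we screen
SAMPLES = 50       # observations per screening or follow-up dataset
NOISE = 1.0        # every candidate is just Goal + this much random noise
TRIALS = 500       # repeat the whole exercise to average out luck

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

def dataset():
    """One batch of Goal values plus CANDIDATES equally noisy measures of them."""
    goals = [random.gauss(0, 1) for _ in range(SAMPLES)]
    measures = [[g + random.gauss(0, NOISE) for g in goals]
                for _ in range(CANDIDATES)]
    return goals, measures

screened, followed_up = 0.0, 0.0
for _ in range(TRIALS):
    goals, measures = dataset()
    corrs = [correlation(m, goals) for m in measures]
    winner = max(range(CANDIDATES), key=lambda i: corrs[i])
    screened += corrs[winner]
    # Fresh data for the winner: its screening edge was luck, so it fades.
    new_goals, new_measures = dataset()
    followed_up += correlation(new_measures[winner], new_goals)

print("winner's average correlation when screened:", round(screened / TRIALS, 2))
print("winner's average correlation on fresh data:", round(followed_up / TRIALS, 2))
```

The winner’s screening correlation comes out noticeably higher than its correlation on fresh data, which settles back to the ordinary value: regression toward the mean in action.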
(4) Gaming. Perhaps the most powerful way that Surrogates fail to lead to Goals is also the one that is easiest to understand. People are getting rewarded for changing the value of the Surrogate. So, they find the easiest way to do that, which might have no effect on the Goal, or may even work against it.
Here are some classic real examples of surrogate gaming. Many more can be found in attempts to regulate the financial industry.
The Cobra Effect. Offer a bounty in India for dead cobras, and people start raising cobras to turn in for the bounty. When the government wises up and cancels the bounty, the now-worthless cobras get released, increasing the very population everyone wanted to decrease.
Central Planning Confounded. Soviet factories rewarded for the Surrogate (number of nails produced) turn out huge quantities of tiny, useless nails.
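In the spirit of the nail story, here is a tiny sketch of metric gaming. The steel budget, the candidate nail sizes, and the 5-gram usefulness threshold are all invented for illustration.

```python
# A sketch of gaming the Surrogate, nail-factory style. The planner rewards
# nail COUNT; the factory has a fixed amount of steel and picks the nail size
# that maximizes its reward. Every number here is invented.

STEEL_KG = 1000.0          # monthly steel allotment
MIN_USEFUL_GRAMS = 5.0     # nails lighter than this are useless to builders

def nails_produced(grams_per_nail):
    """How many nails the steel yields at a chosen size."""
    return (STEEL_KG * 1000.0) / grams_per_nail

def reward(grams_per_nail):
    """The Surrogate the planner pays for: sheer count."""
    return nails_produced(grams_per_nail)

def useful_output(grams_per_nail):
    """The Goal nobody is paid for: nails builders can actually use."""
    return nails_produced(grams_per_nail) if grams_per_nail >= MIN_USEFUL_GRAMS else 0.0

candidate_sizes = [0.5, 1.0, 2.0, 5.0, 10.0]      # grams per nail
chosen = max(candidate_sizes, key=reward)          # the factory games the metric
print("factory's chosen nail size:", chosen, "grams")
print("nails reported to the planner:", int(reward(chosen)))
print("useful nails actually produced:", int(useful_output(chosen)))
print("useful nails at an honest 5-gram size:", int(useful_output(5.0)))
```

Pay for count and you get two million slivers; the Goal, nails a builder can actually drive, drops to zero.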
I was tempted to write the TL;DR about college loans, which involves interlocking hierarchies of surrogation and multipolar traps at two levels. Instead, readers can try to puzzle that out for themselves. The top-level goals were American national competitiveness in a knowledge economy and rewarding careers for individuals. The surrogate policy was to have the government guarantee student loans, making them easy to obtain. The unintended results were rampant college cost inflation, the rise of fraudulent educational enterprises (even a disreputable real estate developer could start one), staggering student debt, adjunct professors living out of their cars, and higher unemployment for graduates.
There’s probably also a multipolar/surrogate story for the U.S.’s absurdly expensive healthcare, or for any combination of <unfavorable adjective> & <tragic societal circumstance>.
We need people to understand that there are dynamics, even nuances, to how things go wrong. Most people prefer simple slogans to complicated (“blah blah, blah blah blah”) explanations. Maybe better education would help.
Another barrier is the pervasive, eternal multipolar and surrogate trap of power politics, which is also subject to the corrosive influence of sociopaths and narcissists. In fact, power politics may act as a selective pressure for the elevation of bad characters.
What we need from political and business leaders is not competitive gamesmanship. We need them to help develop, and then use, a science of the possible; to rely on factual evidence obtained by transparent processes; to acknowledge traps and weigh trade-offs; to craft better incentives and set clear priorities.
We also need to realize that we are chewing up planetary resources that took enormous stretches of deep time to create. Thus we are stuck in the biggest multipolar trap of them all: our own tar pit of hubris, myopia, and fear. And half the United States’ electorate still denies all of this.