The amount of complexity in an engineering project was once limited by space more than anything else. How many gears can you fit in that clock? Pipes in that sewer system? Transistors on that chip? With near-infinite disk space, the limit is now our own minds. How many function names can you fit in that API? We’ve learned some things the hard way about this type of complexity. There’s a lower bound on how little you can get away with and an upper bound on how much your team can handle.
When informed of the need to walk between these bounds, to offer certain sacrifices of time and labor to the gods of complexity, some say: “Just quantify the monetary savings gained by doing so and we’ll hear you out.” Let’s say you do. Let’s say you present a research-backed way of quantifying complexity. Your strategy accounts for the time it will eat up now and save later; it addresses the impact of complexity on cognitive load and code quality; it lays out clearly how process X and new hire Y will lose the company $1 this quarter but return $1.10 the next quarter, $1.30 the quarter after that, and so on, compounding indefinitely. You present this strategy, and still no dice. The complexity NIMBYs strike again.
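For illustration, here’s a minimal sketch of the back-of-the-envelope projection such a strategy might include. The function name and the growth schedule (a payoff rising by $0.20 each quarter, matching the $1.10 and $1.30 figures above) are hypothetical stand-ins, not a claim about any real project’s returns:

```python
def cumulative_return(cost: float, first_return: float,
                      growth_per_quarter: float, quarters: int) -> float:
    """Net position after `quarters` quarters: pay `cost` up front,
    then collect a payoff that grows by `growth_per_quarter` each quarter."""
    net = -cost
    payoff = first_return
    for _ in range(quarters):
        net += payoff
        payoff += growth_per_quarter
    return net

# Illustrative numbers only: lose $1 this quarter, get back $1.10 next
# quarter, $1.30 the one after, and so on.
for q in (1, 2, 4, 8):
    print(f"after {q} more quarter(s): net ${cumulative_return(1.00, 1.10, 0.20, q):+.2f}")
```

Even under these toy numbers, the project is in the black after a single quarter of returns, which is the whole point of presenting the projection; and yet, as the next paragraph argues, it often still isn’t enough.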
A NIMBY is a person who tries to seal off their neighborhood from the world like a zoo exhibit. When people move to their city and vote for housing and transit projects, NIMBYs say “not in my backyard.” Complexity NIMBYs likewise deny reality, ignoring the lower and upper bounds of complexity in software projects and organizations. They make their immediate team’s work simpler by making the work of other teams (or worse, the users) more complex. They downsize, switch from in-house tools to shiny prepackaged ones, and shave requirements too far. They amass millions of users before beginning to think of a solid monetization plan. They “80-20” everything. Counterintuitively, this makes things more complex over time, especially for those in the trenches maintaining or refactoring things later.
They’ve been supercharged by AI code generation and low-code platforms. They can order smaller teams to make more things happen more quickly, all while making the complexity problem worse: these tools just hide increasing amounts of complexity behind increasingly opaque layers. The illusion of having transcended both bounds of complexity grows ever more vivid, both the minimum amount inherent in any software project and the maximum amount a team can carry before explaining things to a new developer becomes a headache. It’s like Wile E. Coyote running off two cliffs at once, not seeing the drop below him.
Why are people like this? As much as people in the software industry like to read about behavioral economics and rationality, systemic pressures leave them laser-focused on the next quarter and the next deliverable. They are just as prone to hyperbolic discounting as anyone: the further away a benefit sits, the less it feels worth, until a distant payoff registers as worthless. It doesn’t help that the work to wrangle complexity has no customer-facing, ribbon-cutting ceremony at the end.
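Hyperbolic discounting has a standard textbook form, V = A / (1 + kD): a payoff A delivered after delay D feels subjectively worth V, with k controlling impatience. A quick sketch shows how fast a future payoff’s perceived value collapses; the $10,000 refactoring payoff and the k value of 1.0 per quarter are made-up numbers for illustration:

```python
def perceived_value(payoff: float, delay: float, k: float = 1.0) -> float:
    """Hyperbolic discounting: V = A / (1 + k * D).
    `k` is an illustrative impatience constant, `delay` is in quarters."""
    return payoff / (1 + k * delay)

# A hypothetical $10,000 payoff from a refactor, judged at increasing delays.
for delay in (0, 1, 2, 4, 8):
    print(f"{delay} quarters out: feels worth ${perceived_value(10_000, delay):,.0f}")
```

Eight quarters out, the same $10,000 “feels” like roughly $1,111, which makes it trivially easy to deprioritize against anything due this sprint.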
But as every software engineer knows, the complexity gods are unforgiving. Appeasing them may be a hard, boring, internal task that cuts against all of the hottest trends in programming, but the longer it goes unaddressed in favor of generating more code, the more it becomes a bomb that never stops exploding, annihilating both profit and productivity.