What if there was a way to remove opinions and personal preferences from the equation and unambiguously determine what code is better given two competing solutions?
I came up with this idea about 5 years ago and it has withstood intense scrutiny from developers and architects at various companies since then. I think about it countless times throughout the day as I write pristine code that is extremely difficult to criticize.
And without further delay, here it is…
Barring any approach that contradicts the objective, the solution with the smallest file size post-compilation trumps any alternative.
There are two levers to play with in that statement, and they pull against each other: the objective and the post-compilation file size.
Determining what code is better “pre-compilation” requires a different discussion; it’s a judgment to be made after considering the “post-compiled” solution. That said, it’s hard to argue that following precedent isn’t a top consideration.
It goes without saying that dependencies inflate the size of a codebase post-compilation. As a general principle in life, who wants to argue that dependence is better than independence? Remember the left-pad debacle? In a perfect world, without deadlines, a codebase should stand on its own… minimalistic, elegant, and consistent.
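To make that concrete, here’s a minimal sketch of everything the left-pad package actually did, written in-house. This isn’t the package’s real source, and it assumes a single-character fill:

```typescript
// Everything the left-pad package did, in-house: no package.json entry,
// no transitive dependencies, and nothing added to the post-compiled
// bundle beyond these few lines. Assumes a single-character fill.
function leftPad(input: string, targetLength: number, fill: string = " "): string {
  let padded = input;
  while (padded.length < targetLength) {
    padded = fill + padded;
  }
  return padded;
}

console.log(leftPad("42", 5, "0")); // "00042"
```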
Of course, the objective is going to appear at some point and put time constraints on a project. That’s when it makes sense to reach for the shelf and pull in a dependency which inevitably carries more code than you’ll require.
Realistically, a developer/team has to stand on the shoulders of others. However, there’s leeway when it comes to placing the dividing line between a codebase and its infrastructure. If I build a project on AWS, that doesn’t mean Amazon’s code has to be considered when evaluating one post-compiled solution against another.
What about the software stack (e.g. React, Node, Java, Linux)? It can be a matter of debate what counts as infrastructure/platform versus a dependency. Generally speaking, however, I’d consider something like React or TypeScript a dependency: each involves a transpilation routine that feeds the final build file (post-compilation). Java and Node.js don’t show up within a build file (ignoring something like Docker), so I wouldn’t consider those platforms/languages to have any impact on debates that invoke the Coder’s axiom.
In many cases, when two competing solutions are evaluated, they’re both running on the same stack anyway, so the size of the platform/infrastructure becomes irrelevant.
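When the comparison does matter, the axiom can be applied quite mechanically. Here’s a sketch in TypeScript on Node, assuming each competing solution has already been compiled into its own bundle; the dist paths below are hypothetical:

```typescript
// Rank competing builds by post-compilation size; per the axiom,
// the smallest bundle wins. The dist paths are hypothetical.
import { statSync } from "node:fs";

const candidates = ["solutionA/dist/bundle.js", "solutionB/dist/bundle.js"];

const ranked = candidates
  .map((path) => ({ path, bytes: statSync(path).size }))
  .sort((a, b) => a.bytes - b.bytes);

for (const { path, bytes } of ranked) {
  console.log(`${bytes} bytes  ${path}`);
}
console.log(`Smaller post-compilation: ${ranked[0].path}`);
```

Gzipped size may be the fairer measure for anything shipped over the wire, but the ranking logic stays the same.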
A common argument I’ve heard is that one version of code might be larger, but because it’s more efficient it’s “better”. Well, only if the objective makes it so! Have you ever heard the mantra that premature optimization is the root of all evil? In other words, don’t spend time (or code) making things more efficient when a smaller/simpler alternative proves sufficient.
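Here’s a contrived sketch of that trade-off, with both versions hypothetical: if the objective only ever searches a few hundred records, the one-liner is sufficient, and the “efficient” version just adds bytes to the build.

```typescript
interface User { id: number; name: string; }

// Sufficient and small: a linear scan covers the objective for a few
// hundred records, and it's a single expression in the build.
function findUser(users: User[], id: number): User | undefined {
  return users.find((u) => u.id === id);
}

// "More efficient," but larger post-compilation: an index plus the code
// to build it. Only justified once the objective demands that speed.
class UserIndex {
  private byId = new Map<number, User>();
  constructor(users: User[]) {
    for (const u of users) this.byId.set(u.id, u);
  }
  find(id: number): User | undefined {
    return this.byId.get(id);
  }
}
```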
This is where the Coder’s axiom really earns its keep. Too often I see over-designed systems where structures are abstracted before the need arises. Clearly, that unnecessarily inflates the size post-compilation.
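A sketch of the kind of over-design I mean, using hypothetical names: an interface, a class, and a factory standing in for what the objective only needs as a single function.

```typescript
// Over-designed: an abstraction with exactly one implementation. The
// interface vanishes at compile time, but the class and the factory
// survive into the bundle.
interface GreetingStrategy {
  greet(name: string): string;
}
class EnglishGreeting implements GreetingStrategy {
  greet(name: string): string {
    return `Hello, ${name}`;
  }
}
function makeGreeter(): GreetingStrategy {
  return new EnglishGreeting();
}

// Sufficient: the same behavior at a fraction of the post-compiled size.
const greet = (name: string): string => `Hello, ${name}`;
```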
Not watching for signs of repetition and abstracting when the need does arise? That’s the converse problem: under-designing a system means duplicating patterns/routines, which inflates the file size post-compilation.
Unless you’re absolutely certain that you’ll need to use functionality multiple times in the very near future, opt for the monolith. Just don’t get lazy when it’s time to pull something apart as your system evolves. That way you’ll continuously have the best (I mean least-wrong) code for your momentary context.
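Here’s a sketch of that evolution with hypothetical handlers: the same normalization routine pasted twice (under-design), then pulled apart into one copy once the repetition is real.

```typescript
// Under-designed: the same trim/validate pattern pasted into every
// handler, adding a copy to the bundle each time.
function saveEmail(raw: string): void {
  const email = raw.trim().toLowerCase();
  if (email.length === 0) throw new Error("empty email");
  // ...persist email
}
function saveUsername(raw: string): void {
  const username = raw.trim().toLowerCase();
  if (username.length === 0) throw new Error("empty username");
  // ...persist username
}

// Once the repetition is real, pull it apart: one copy in the build.
function normalize(raw: string, field: string): string {
  const value = raw.trim().toLowerCase();
  if (value.length === 0) throw new Error(`empty ${field}`);
  return value;
}
```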
Amazon and Netflix both started as monoliths because that’s what you do when you’re starting out. The objective changes over time as traffic increases and features are added.
Here’s what Rob Brigham, Amazon AWS senior manager for product management, had to say at a 2015 conference.
Now, don’t get me wrong. It was architected in multiple tiers, and those tiers had many components in them. But they’re all very tightly coupled together, where they behaved like one big monolith. Now, a lot of startups, and even projects inside of big companies, start out this way. They take a monolith-first approach, because it’s very quick, to get moving quickly. But over time, as that project matures, as you add more developers on it, as it grows and the code base gets larger and the architecture gets more complex, that monolith is going to add overhead into your process, and that software development lifecycle is going to begin to slow down.
Feel like arguing against Occam's Razor?
If I have a working solution that meets the objective, who wants to be on the side of “adding more” just to satisfy some righteous principle or distant prediction? “Keep it simple, silly” and opt for elegance. After all, since nobody can do anything perfectly, the less of “it” we do, the better off we are.