In 1980, a brand new Xerox 9700 printer was installed in an office of MIT’s computer science department. It often jammed. Several people in that department probably could have fixed it if not for the fact that it ran on proprietary Xerox software.
One of those people, Richard Stallman, tried to at least program it to notify the rest of his office when it jammed, but wasn’t allowed to do that either. Whenever he and his colleagues needed a printout, they had to go check if it was jammed first. It was on a different floor.
Stallman begged a colleague with Xerox connections to share the code for the driver. The colleague, despite having access, was contractually obligated to say no.
This experience broke something in Stallman’s brain. It was a catalyst that solidified his resolve to give free software a permanent place in the world. If he couldn’t make all software free, he could at least lay the groundwork for free alternatives. He had the skills to help the world preserve the possibility of fixing your own printer, or asking a nerdy neighbor to do so, rather than remaining at Xerox’s mercy. He would not tolerate a society where people were actively discouraged from helping each other by the fact that Xerox’s profits might suffer a little.
He soon left MIT to work on GNU, which stands for “GNU’s Not Unix.” Unix was an operating system built and owned by AT&T. Stallman’s goal with GNU was to build a completely free alternative that could run on any computer.
Crucially, when he did this, he created a new type of copyright license to protect it, called the GNU General Public License. It’s also called a “copyleft” license because it reverses what a traditional copyright license does: the GPL grants everyone the right to modify and distribute the software, but requires that any modified versions be shared under the same terms. Corporations that tried to build on top of GNU, for example, had to be careful how they did so, lest their product become free software.
His vision was no pipe dream. In the early 1990s, a Finnish student named Linus Torvalds used what already existed of GNU to build the one central piece the operating system still lacked, known as the kernel. The GNU/Linux operating system was born. Stallman, Torvalds, and their globally distributed collaborators showed the world that wildly successful software could be developed both remotely and without corporate funding.
Many tools that developers take for granted today, such as the version control system Git, were developed for this project. Likewise, many huge companies owe much of their profit and success to it. GNU/Linux got so good that countless software giants began using it extensively, including Google, Apple, Facebook, Amazon, and even Microsoft, despite Steve Ballmer once calling the GPL a “cancer.” (They’ve been careful, of course, not to do so in a way that makes their products free.)
That such a project could be built this way feels obvious now, but it wasn’t obvious at all back then. In 1997 a hacker named Eric Raymond, inspired by his experiences working on GNU/Linux, published an essay called “The Cathedral and the Bazaar,” arguing that building software in this open, communal, bottom-up way, rather than the private and top-down way favored by most companies, yields inherently better results. Putting vastly more eyeballs on a project’s code, as it turns out, makes it vastly harder for bugs and security flaws to take root. It doesn’t hurt when a lot of the contributors work on it out of pure interest and fun, either, rather than economic need.
A year later, a browser company called Netscape did something crazy. Desperate to stop Microsoft’s Internet Explorer from cutting into their market share and inspired by this essay, they publicly shared the code for their latest web browser, inviting the world to build it with them. In the same initiative, with help from supportive hackers like Raymond himself, they also coined the term “open source.”
The nascent open source community ended up scrapping the browser and starting Mozilla.org instead, but the move heralded the future. Companies were now allowed, even encouraged, to start projects where large chunks of development, along with distribution and branding, still happened in-house, but where the whole world could contribute. (Projects like React continue to see great success under this model.) From their perspective, it makes perfect sense: if people are going to build free software anyway, why not benefit from that?
In giving companies a free pass to enter the “open source community,” however, certain hackers said “take what you want and give what you want” to a bunch of organizations built around maximizing the ratio of the former to the latter.
Open source is unbroken in the sense that people still write terrific software because they can and want to, and the world still benefits from it. There is also, of course, a darker side.
The recent log4j fiasco is merely the latest dancer in a long conga line of problems stemming from the unholy marriage of the cathedral and the bazaar. Companies adopt an open source library without contributing; the library falls into disrepair because its maintainers can’t work for free forever; security vulnerabilities arise that the companies, busy building their cathedrals, don’t notice or address until irreparable damage has been done.
These companies often have the nerve — as Microsoft did in 1998 in response to Netscape’s “stunt” — to claim that the open source maintainers are the ones being risky, and that the cure is pouring even more trust and resources into their company, which knows what it’s doing.
Again, though, they’re incentivized to be this way. Software consumers like you and me, instead of rallying around better free software, want and use their products. The collective demand this creates for good, cheap software falls directly on the maintainers of the free software underlying it.
Facing increasing pressure without proportionate rewards, maintainers can burn out. Even Guido van Rossum, “benevolent dictator for life” of Python, stepped down. Linus Torvalds took a break from Linux. Jacob Thornton, whose account of the history of open source I’ve been paraphrasing large chunks of, references “cute puppy dog” syndrome: you start a project because it’s fun, but then it grows, and it becomes a more thankless job. While it’s hard to pass your baby off to people who won’t care for it like you did, it’s often harder to take care of Clifford the Big Red Dog when everyone wants to play with him and no one wants to finance his kibble.
Filippo Valsorda, an engineer on the Go team at Google with extensive open source experience, recently suggested forcing companies to “professionalize” the maintainers of the open source projects they use: pay their invoices without hiring them. I suspect this suggestion will grow in popularity; it lets companies part with some extra cash rather than with their fundamental assumptions about how freely shareable software should be, or about who should have input in decisions about software that affects the entire world. (Some have carried the torch of questioning these assumptions, like the late Aaron Swartz in his unfinished work about the Programmable Web, though they’re few and far between.)
In a 2001 speech where he told the Xerox printer story, Stallman noted that in the 1970s, the heady days of free software being the norm, none of this was an issue. The survival of your project didn’t depend on funding or software licenses. It didn’t really even depend on what model of governance your project had. People contributed work they cared about. They debated it carefully. They delegated decisions about specifics to people who clearly knew and cared about those specifics, probably because they’d already worked on them. There was also no insane reactionary erasure of all governance whatsoever, as some have found in “flat” organizations; there was no leaving people to grapple with the Tyranny of Structurelessness — informal cliques of power — and no driving people to burn out and disengage because they cared more than the higher-ups knew how to handle. (Or because they kept running into buggy proprietary software they weren’t allowed to fix.)
It was all a “do”-ocracy. People weren’t in it for power and clout; they were in it for fun and community. “Cooperation was our way of life,” Stallman said.
For a long time after the Renaissance, European culture actually stayed pretty authoritarian, hierarchical, and bound by strict puritanical codes of morality.
People like Jean-Jacques Rousseau who finally started touting “Enlightenment” values — individual liberty, questioning authority, rational debate, all that good stuff — often did so with traceable influence, sometimes even explicitly-stated influence, from Native Americans, many of whose nuanced and well-reasoned critiques of the West (read up on Kondiaronk for a good example) found their way into widely-read books written by New World explorers.
In fact, it was natives of the American northeast and Great Lakes regions who were already living these values. They prized and prioritized personal autonomy, doing virtually nothing they morally disagreed with, to an extent that would make any modern libertarian jealous.
They went so far with those principles as to automatically ensure a social safety net. When you’re going hungry or otherwise in mortal danger, you don’t have much personal autonomy, as you’re just trying to survive. The community would help you out. One could say the reverse was true; in caring for each other, they protected personal autonomy. It doesn’t really matter. This was just how they lived. Cooperation was their way of life.
There was no blind, reflexive, dogmatic reverence for the authority of the humans you lived with beyond their ability to morally and logically convince you. Words like “lord” and “commandment” were hard for early missionaries to translate into the natives’ languages. Rigorous notions of private ownership were not really a thing; in practice, these amount to telling everyone else in your community “you cannot access this, even if you find it helpful; it’s mine.” Amassing power over others wasn’t really a thing either. The mechanisms that allowed it in European society, like money and social status, were simply not that important, even if they existed. The European reflex to sanctify and protect ideas like these was alien, especially when it came at the cost of actual human life.
This had the interesting effect of translating directly into the kind of “equality under the law” that the Founding Fathers began enshrining. (If anything, the Greco-Roman traditions that my high school history books instead pointed to when explaining the Founders’ talk of “equality” merely kept it limited to property-owning white males.) The natives, in living this way, flouted the European cultural assumption that individual liberty and social cohesiveness were at odds. It was literally revolutionary.
In the centuries since, the tendency of Western historians and anthropologists, lulled by stereotypes of the “noble savage” (or simply the “savage”), has been to discount the direct shoutouts of people like Rousseau, as though they couldn’t possibly be accurate. These thinkers must have been trying to seem “exotic,” or were just joshing us, right? Indigenous cultures were too “simple,” “innocent,” and “primitive” to know what they were talking about when it came to statecraft, right?
The time has come to show my hand again. Much of what you’ve just read is a direct paraphrase of the beginning of the book The Dawn of Everything by David Graeber and David Wengrow. They go on:
What if the sort of people we like to imagine as simple and innocent are free of rulers, governments, bureaucracies, ruling classes and the like, not because they are lacking in imagination, but because they're actually more imaginative than we are? We find it difficult to picture what a truly free society would be like; perhaps they have no similar trouble picturing what arbitrary power and domination would be like. Perhaps they can not only imagine it, but consciously arrange their society in such a way as to avoid it.
The historical record shows that even the Europeans preferred these ways. Benjamin Franklin himself wrote puzzled and disgruntled accounts of how people with years of experience living among both indigenous people and Europeans (via adoption, kidnapping, etc.) overwhelmingly ended up choosing to live amongst the Native Americans.
As Wengrow and Graeber write: “There is the security of knowing one has a statistically smaller chance of getting shot with an arrow. And then there’s the security of knowing that there are people in the world who will care deeply if one is.”
The type of world you live in comes from the way your society values and prioritizes different freedoms.
Everyone, by nature, is “free” to do anything, including murder; when we deny the freedom to murder, though, we protect our freedom to do much more. So while freedom is not always a zero-sum game, like when people write useful free software that enables greater autonomy for all, there can be tradeoffs.
The current trouble with open source comes from the fanatical worship and defense of one particular freedom to the exclusion of others. It is the freedom to own and hoard property — to keep software (or land, or whatever else) private and maximally profitable by all means necessary — even at the cost of our freedom to help each other, breathe clean air, enact a 4-day workweek that lets us write more free software, or not be wrongfully imprisoned by Chevron.
As those examples show, the freedom to hoard is tied up in a vicious cycle (well, virtuous for some) with the freedom to use money to influence laws. People from a certain class, like Billy McFarland of Fyre Festival infamy, get 6 years in prison for committing tens of millions of dollars in wire fraud, and that’s when they get caught and publicized; poor people routinely go to jail for life for stealing $50 of food.
As Stallman mentions in his 2001 speech, when something (like a computer, or a justice system) is broken, people stop caring as a defense mechanism:
You know if the computer is constantly frustrating to use, and people are using it, their lives are going to be frustrating, and if they're using it in their jobs, their jobs are going to be frustrating; they're going to hate their jobs. And you know, people protect themselves from frustration by deciding not to care. So you end up with people whose attitude is, "Well, I showed up for work today. That's all I have to do. If I can't make progress, that's not my problem; that's the boss's problem." And when this happens, it's bad for those people, and it's bad for society as a whole.
The status quo is that workers in many industries, not just open source software, are increasingly overburdened and undercompensated, largely due to the same dynamics that hobble the justice system. Companies take, and fail to give, as much as the law will allow, and “money as speech” means the law will allow a lot. The minimum wage, especially compared to inflation, remains a sad joke.
We therefore spend less time caring about and helping each other and more time grinding, hustling, and seeing each other as competition, which compounds the problem. Our financial rewards for doing so are artificially suppressed by both corporate lobbying and government unresponsiveness (but I repeat myself).
Charlie Warzel makes this connection in a recent though unfortunately paywalled article called Guardians of the Internet. There is a growing realization — not just among software developers, but among Kellogg’s workers and healthcare professionals and others — that we need to force necessary changes in how companies not only pay the workers, but also invest in the systems, that sustain them.
Our world obviously depends on systems: software systems, energy systems, healthcare systems, and more. The legwork of keeping these systems going falls to workers. We have to incentivize caring for those people more than the brand names that leech off of them, lest those people, and those systems, fall apart.
I urge you to ask yourself, not just once but often, whether it’s really worth prizing private property and its protection as we do (i.e. above the protection of all else), or whether, as the natives did, we should prize community and personal autonomy, perhaps even more — crazy, I know — than private property. Ask if you want your laws to enshrine the freedom to care only for yourself, or the freedom to cooperate.
As the early free software community shows, living in a complex technological world doesn’t mean the latter is blocked. It can be harder, of course, given all the corporate resistance one may face, but it’s possible.
Since I value autonomy, I’ll respect your choice; I just find it a pretty obvious one.