Trajectories for the future of software

by @enkiv2

Some decisions are sane. By this I mean: some decisions are things you would choose to do with full knowledge of all options and reasonable knowledge of potential ramifications, based on an analysis of how likely each possibility would be to increase the success of the general project it is a part of.

We don’t make sane decisions all the time, and we don’t need to. Sometimes, we make perverse decisions — we ignore sanity (or optimize for as little sanity as possible) in order to achieve some external goal that conflicts with the ostensible goal of the project. Perverse decisions are good and important, too.

When we choose the worst tool for the job with full knowledge of the toolbox, we make the job harder for ourselves. Sometimes, we do this in order to demonstrate our capability to ourselves or others — this is one of the historical meanings of “hack”, and it’s a big part of how the software development community self-organizes. Other times, we do it in order to force ourselves to think about a problem in a new way. Nevertheless, perverse decisions are (by definition) not sane — they take more effort for an equivalent result (or produce a worse result for equivalent effort).

When we make what we believe to be a perverse decision and then discover it to be sane — when we use the wrong tool for the job and then find the job easier than it would have been with the right tool — we have discovered a flaw in our understanding of our tools and their appropriate use. More often, however, we choose what we believe to be the appropriate tool and find the job much more difficult than we expected. Sometimes this is because we don’t know about the right tool; sometimes, the right tool doesn’t exist.

The biggest problem I see in software development today is threefold:

  1. We teach people that certain tools are much more generally applicable than they really are
  2. We prevent people from using appropriate tools and discourage people from knowing about them
  3. We accept developer ignorance as reasonable and treat perverse tool ecosystems as sane

The clearest example is web applications.

HTML is a bad way of representing static documents — strictly worse than TeX for text documents meant to be viewed on paper, strictly worse than postscript for all documents meant to be viewed on paper, strictly worse than plain text or markdown for text documents meant to be read on a screen, and capable of only awkwardly and crudely representing a small handful of non-paper-oriented concepts (jump links and anchors, but not transclusions or links between spans). There is no situation in which the most appropriate representation for a static document is HTML.

HTTP is a bad way of serving static documents — far more complicated than gopher, yet not actually benefiting from any of its complications; it has none of the guarantees provided by IPFS (so documents can change out from under you without warning, and if a host goes down, so does the document). On top of this, no major HTTP server or web browser properly uses the features of the standard that would mitigate the problems baked into the rest of HTTP; instead, a whole parasitic ecosystem has spawned out of abusing improperly-handled parts of the spec. (According to the spec, of course, the same URL should always point to the same page, 404 should only ever be used for URLs that have never pointed to anything, and distinct codes exist for pages that have moved (301 Moved Permanently) or been deliberately removed (410 Gone).) There is no situation in which HTTP is the most technically appropriate way to serve a file.
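
To make the status-code point concrete, here is a minimal sketch, using only the Python standard library, of a server that uses the codes the spec actually provides rather than collapsing everything into 404. The paths and documents are invented for illustration:

```python
# A minimal sketch of spec-respecting status codes, using only the
# Python standard library. The paths and documents are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

MOVED = {"/old-essay": "/essays/trajectories"}         # 301 Moved Permanently
REMOVED = {"/retracted-draft"}                         # 410 Gone: removed on purpose
DOCUMENTS = {"/essays/trajectories": b"<p>hello</p>"}  # 200 OK

class StatusAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in MOVED:
            # The resource still exists; tell the client where it went.
            self.send_response(301)
            self.send_header("Location", MOVED[self.path])
            self.end_headers()
        elif self.path in REMOVED:
            # The resource existed once and was deliberately removed.
            self.send_response(410)
            self.end_headers()
        elif self.path in DOCUMENTS:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(DOCUMENTS[self.path])
        else:
            # Only URLs that never pointed to anything fall through to 404.
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), StatusAwareHandler).serve_forever()
```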

For all the problems of using the web to host static HTML documents, things get far worse when it comes to interactivity. HTML was a mistake, but CGI is a crime, and using an embedded scripting language inside of a static document to force it to self-modify is a tragedy.

It’s the pinnacle of cleverness, of course. Had someone written a web app and presented it as a tongue-in-cheek demonstration of their own skill and perversity, I would applaud it.

After all, HTML is barely capable of representing minimally-formatted text in a sane way, and HTTP is a bloated stateless protocol for fetching files. Making a GUI work in a web browser is like porting DOOM to postscript and running it on your laser printer, one frame per page.

The problem is that the web invited in two generations of programmers who somehow believed that this perverse ecosystem was sane, and that wasting their own time and the resources of the computers their code ran on was natural. It’s as though the video game industry had decided that laser printers were the natural devices on which to write and play first-person shooters, and invested twenty years into making laser printers that printed faster, used thinner paper, and came with rumble pads.

Sometimes, the reason we don’t make sane decisions is structural. We know what the right decision is, but the person paying us is adamant that we make the wrong one. We may underestimate the cost of that perversity at the time — when a person who doesn’t understand how to make technical decisions makes them nevertheless, there is a tendency for demos, toys, and proof of concept projects to be reused as part of a theoretically-production-quality system. (The web is an example of this: it was a quickly-put-together simplified demo intended to teach suits at CERN basic hypertext concepts so that they could understand Tim Berners-Lee’s real hypertext project.)

Software engineers are in a unique position among white-collar professions — what other profession is in such high demand, with such inflated starting salaries, with so little formal education expected, getting paid for so little productive work? So, a software engineer is uniquely well-equipped to refuse non-sane solutions. Outside of places where the cost of living is inflated, a software engineer can (if they live frugally) afford to be fired for their unwillingness to actively make the world and their own lives worse.

There’s a related problem in pedagogy, and in UI.

There’s a sense of strict division between programmers and non-programmers today. There are campaigns to require students to “learn how to code” — mostly with the implication that this would guarantee them a high-paying job or leave them well-prepared to deal with an increasingly computerized ecosystem. Of course, any school that taught students to program in the way that most public schools teach students algebra or The Great Gatsby would not equip students to understand their computers any more than it equips them to understand Mochizuki’s proof of the ABC conjecture or Lacanian commentary on Finnegans Wake. The gap between novice programmer and minimally-competent programmer is far larger than the gap between someone who has never seen a computer before and a novice programmer, and it contains many more fundamental shifts in thinking.

The division between non-programmers and novice programmers is not a natural part of the learning curve; it is enforced by our tooling, which has steadily moved toward segregating users into “technical” and “non-technical”. The personal computer of the 80s, happily used by exceedingly non-technical people, expected users to type in a line or two of code from a manual in order to do much of anything; it has been replaced by the modern PC, where getting the tools necessary to write any code at all involves seeking out somewhat-dubious-looking third-party websites. Beginning with certain management decisions made on the Macintosh project in 1982, UI philosophy moved from “the simple should be easy and the difficult should be possible” to “anything not featured in the television advertisement should be impossible”. As a result, intermediate states between non-programmer and novice programmer (such as “power user”) are nearly extinct.

There is no fundamental technical barrier that keeps us from having “graphical user interfaces” that have the same kind of flexibility and composability as the unix command line. There is merely a social barrier: non-technical users are expected to obtain software produced for profit by corporations, each living in a walled garden, and they are expected to have no curiosity at all about how to change how these pieces of software work, while technical users are expected to run technical-user-oriented operating systems that are visually unpolished and to prefer text-based interfaces.
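
That composability is not magic: it falls out of independently-written tools agreeing on a shared interface (streams of lines of text). As a toy sketch in Python — where the “tools”, log lines, and names are invented for illustration, and `uniq_c` simplifies `uniq -c` by counting globally rather than by adjacent runs — nothing in this pattern depends on text; graphical components could agree on a shared interchange format in exactly the same way:

```python
# A toy model of pipeline composability: each "tool" consumes and produces
# an iterable of lines, so independently-written tools snap together like
# `grep GET access.log | sort | uniq -c`. All names and data are invented.
from collections import Counter

def grep(lines, needle):
    return (line for line in lines if needle in line)

def sort(lines):
    return iter(sorted(lines))

def uniq_c(lines):
    # Simplification of `uniq -c`: counts globally, not by adjacent runs.
    counts = Counter(lines)
    return (f"{n:7d} {line}" for line, n in counts.items())

log = ["GET /a", "GET /b", "GET /a", "POST /c"]
for out in uniq_c(sort(grep(log, "GET"))):
    print(out)
# Prints:
#       2 GET /a
#       1 GET /b
```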

I am not ragging on Linux here… or, at least, not too much. If I must be classified, then I certainly count as a Very Technical Boy — but the line between technical and crude should be a curtain, not a brick wall.

The Xerox Alto showed us how to make independently-developed graphical applications composable by non-technical or minimally-technical users back in the 70s — while those applications were running! It worked fast, too; but then again, they weren’t using Electron.js.

Who benefits from separating programmers and non-programmers into distinct ghettos? Products that advertise (mostly falsely) that they will take you from non-programmer to programmer quickly and easily benefit quite a lot from such a division. (If everybody knew that it was possible to learn to code by gradually exploring more of an interface in which the division between programming and using existing programs was fuzzy, there would be much less demand for programming bootcamps.) Graphical applications can reframe lack of flexibility as “user-friendly”, even if they take more effort to learn than an equivalent (but scary-looking) command-line application; likewise, poorly-designed command-line applications can advertise themselves as “powerful” even when they do less than graphical applications designed for the same task, and do it slower and worse, and technical users (particularly new converts) will buy into it. (For examples of the former, look at any Apple product since 1985; for examples of the latter, look at wannabe-hacker-oriented linux distros like Arch and Gentoo, which refuse to supply an installer and instead require users to retype the source code of an installer script from the manual.)

But outside of short-term status-seeking, users do not benefit from this division: it is no harder to remain ignorant of programming on a system where programming is possible; more flexible systems have a consistent internal logic that non-technical users can understand and fold into their intuition; and flexible systems make it easier not just for beginning developers to progress but for intermediate and advanced developers to create useful and interesting programs.

When Peter Thiel said that “every company wants to be a monopoly”, he was correct. From the perspective of someone owning a substantial stake in such a company, that’s desirable; from the perspective of literally everyone else, it’s horrible. Segregating programmers from non-programmers, and discouraging non-programmers from using or understanding flexible and composable systems, lets companies carve little fiefdoms out of users’ hard disks. In the popular imagination of both groups, the programmers, due to some imagined inborn talent, get to play freely in the green fields of unix; the non-programmers, who lack the divine calling, are constitutionally incapable of playing in those fields and should be grateful for their leaky hovel and stale piece of bread. Of course, like many exploitative divisions that elevate one group over another, it’s totally artificial: engineered and maintained by a third party that benefits by making the lives of both groups worse while making sure one group has things much worse than the other.

These two situations are, of course, related. They feed into each other.

You don’t need to be employed by a corporation to write code. You don’t need to be part of a large group to write open source code. You can write things that make sense to you, and give them away as you see fit. You can write code for yourself, without considering what somebody else wants. Nobody can fire you for making good decisions in your personal projects.

Technical and non-technical users aren’t different groups. They both want to live their lives comfortably. They want to make sane decisions, or to make perverse decisions for the sake of showing off. Technical users don’t have a monopoly on decision-making faculties, and they aren’t even necessarily more knowledgeable about what the most appropriate choice is, when it comes to a user’s particular situation.

Non-technical users are actually highly technical. They have elaborate intuitions about the behavior of the software they use. These intuitions may not be accurate, but they aren’t necessarily less accurate than the false beliefs many skilled programmers have about the tools they use.

Programming language design is part of user interface design. Not only that, but user interface design is part of programming language design. A user interface is a language with which a user explains their intent to the computer, and a user interface that makes decisions that would not be welcome in a programming language is broken, because a user interface is a programming language.
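
One way to make that concrete: a user’s session is a sequence of gestures, each of which transforms application state, and sequences of gestures compose exactly like expressions in any language. A toy sketch, with invented actions and state keys:

```python
# A toy sketch of "a UI is a programming language": each user gesture is an
# expression that transforms application state, and gestures compose like
# any other expressions. The actions and state keys are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    run: Callable[[dict], dict]  # state in, state out

def compose(*actions: Action) -> Action:
    """Sequencing gestures is just function composition."""
    def run(state: dict) -> dict:
        for action in actions:
            state = action.run(state)
        return state
    return Action("; ".join(a.name for a in actions), run)

open_doc = Action("open", lambda s: {**s, "doc": "draft.txt"})
embolden = Action("bold", lambda s: {**s, "style": "bold"})
save = Action("save", lambda s: {**s, "saved": True})

session = compose(open_doc, embolden, save)  # a session is a small program
print(session.name)     # open; bold; save
print(session.run({}))  # {'doc': 'draft.txt', 'style': 'bold', 'saved': True}
```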

We all just want to be able to solve our problems. We all would like tools that make solving our problems easier. When tool-makers cockblock each other in such a way as to make their tools less useful, nobody benefits. When tool-makers tell one group of people that they are too dumb to use their tools and refuse to teach them, nobody benefits. The difference between a programmer and a non-programmer is fundamentally that the non-programmer was told that some tools were off-limits to them, and they believed it.

It hasn’t always been like this, and it doesn’t need to be like this anymore. But, fixing the problem requires breaking compatibility with the old ways.

Using systems just because we are told to use them locks us into a spiral of progressively worse decisions. Using systems just because they are familiar locks us into a spiral of ignorance. Going out of our way to learn new techniques and to refuse to implement or use bad solutions is not only a good idea — it’s a moral imperative.

If corporations want to create little walled gardens on our computers, then we should starve them out and eliminate them. Reject any system that makes it harder to craft solutions that work for you. Reject any system that artificially limits your ability to craft anything at all. Reject any system that wants to own your solutions or prevent them from working for you.

We’re all going to need to learn new tools and new languages. It’s okay, because the tools will be ours.

(This document is also hosted on Gopher at gopher://fuckup.solutions/0enkiv2/trajectories.txt)