In this post, we’ll see if we can answer a big question: what is technology?
Then we’ll finish by teeing up some implications for the future and for how we think about institutional reform.
There are lots of ways to define technology. For now, though, I’m going to share one of my favorites, from a book called The Nature of Technology by the economist W. Brian Arthur.
The Nature of Technology is a gem of a book, precise and poetic and at times beautiful in the way it visualizes technological progress.
Arthur makes “one long argument” about what technology is. His goal is to define the essence of technology. To show us the nature of the thing. So what is the argument? I’m going to run through the key points as succinctly as I can before circling back to see why it’s relevant to this blog series. In other words — bear with me!
At the core of Arthur’s story is the idea that technology is “combinatorial.” What he means by this is that technology is built by combining pieces together — pieces that are themselves technologies. So a jet engine, for example, is built from other technologies, like a compressor and a turbine, and these, in turn, are made up of other technologies, like a compressor anti-stall system and a turbine blade-cooling system. And so technological progress happens mainly when we identify a new need and then combine existing technologies to meet that need.
This might all sound obvious, but it’s a useful correction to the way we tend to talk about technology. It rejects, for example, the idea that technology moves forward thanks to lone inventors who ‘discover’ ‘new’ technologies. The model Arthur is proposing is a deeply social one, in which progress builds up over time (although progress is still uneven and proceeds in fits and starts).
Arthur’s argument also has implications for the formal structure of technology as a body of knowledge or practice. He thinks technology has a recursive quality: technologies are built from technologies that are built from technologies. So the whole thing is fractal. In Arthur’s terms, technology builds itself out of itself.
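Arthur doesn’t formalize any of this, but the recursive, combinatorial structure is easy to sketch in code. The toy model below is entirely my own (the class and method names are illustrative, not Arthur’s): a technology is a named thing built from sub-technologies, and walking the tree recursively recovers the jet-engine decomposition above.

```python
# A toy sketch of Arthur's recursive view: a technology is either a primitive
# capture of a phenomenon or a combination of other technologies.
# All names here are mine, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Technology:
    name: str
    components: list = field(default_factory=list)  # sub-technologies

    def all_parts(self):
        """Recursively list this technology and everything it is built from."""
        parts = [self.name]
        for component in self.components:
            parts.extend(component.all_parts())
        return parts

# The jet engine, decomposed a couple of levels down, per Arthur's example.
blade_cooling = Technology("turbine blade-cooling system")
anti_stall = Technology("compressor anti-stall system")
turbine = Technology("turbine", [blade_cooling])
compressor = Technology("compressor", [anti_stall])
jet_engine = Technology("jet engine", [compressor, turbine])

print(jet_engine.all_parts())
# ['jet engine', 'compressor', 'compressor anti-stall system',
#  'turbine', 'turbine blade-cooling system']
```

The point of the structure is that nothing is special about the top level: a turbine is to its blade-cooling system exactly what a jet engine is to its turbine. That is the “builds itself out of itself” quality in miniature.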
The second core idea in Arthur’s definition of technology is the concept of phenomena. Roughly speaking, phenomena are things that happen reliably in nature. For example, there’s the phenomenon that “things move more easily when they are floating on water.” Phenomena are the ultimate “source from which all technology arises”: we capture them and put them to use to meet human needs.
An example is the phenomenon: “some objects (like quartz crystals) oscillate at a steady frequency.” When we unearthed this phenomenon, we put it to use to make watches so we could tell the time.
To visualize the way a society captures phenomena, Arthur introduces one of my two favorite images in the book. He points out that some phenomena feel superficial while others feel deep. The phenomenon “if you rub two things together, they get hot” feels superficial. It’s almost like it was lying around on the surface of nature just waiting to be discovered by our ancestors. By contrast, the phenomenon “high-frequency radio signals show a disturbance in the presence of metal objects” feels deeper. We couldn’t just stumble on it. We had to dig. And when we did dig and found it, we then put it to use in the technology of radar.
This leads Arthur to describe the process by which we unearth phenomena as a mining operation. Here is the description in his words:
I like to think of phenomena as hidden underground, not available until discovered and mined into. […] Effects nearer the surface, say that wood rubbed together creates heat and thereby fire, are stumbled upon by accident or casual exploration and are harnessed in the earliest times. Deeper underground are seams such as the chemical effects. […] The discovery of the most deeply hidden phenomena, such as the quantum effects of nuclear magnetic resonance or tunneling or stimulated emission, require a cumulation of knowledge and modern techniques to reveal themselves. They require modern methods of discovery and recovery — they require, in other words, modern science.
As we see here, Arthur also uses the mining metaphor to show how phenomena seem to cluster together. He calls a cluster of phenomena a domain, with examples being the domains of optics, chemistry, or electronics. When we unearth a phenomenon in a new domain, we tend to then unearth lots of other phenomena around it. It’s nice, therefore, to picture domains as rich seams in the mine. When we hit one, the miners rush to that part of the mine and dig away, and, if it’s a big seam, we enter a new technological era.
This takes us to another powerful concept: redomaining. As we’ve seen, a domain is a group of technologies that cluster together (like optics, electronics, hydraulics, or even canals or railways) in the sense that they put to use a set of similar phenomena. Domains have a kind of internal coherence, even aesthetically speaking. Arthur gives the example of what he calls “canalland,” the internally coherent domain of canal technologies — “a world we enter for what can be accomplished there.” In this case, it’s a world that puts to use the phenomenon that heavy things can float and move easily on water.
Redomaining, then, is a particularly important way in which technology gets better over time. As Arthur explains, when our society makes leaps of technological progress, it’s often not because we’ve improved technology within a certain domain but because we’ve learned how to meet the same need in another domain.
In the 1830s, for example, the need to move heavy things long distances began to be redomained from canalland to railwayland. And this was really disruptive. Why? Because when you move domains you don’t just get a better technology (a faster barge); you also escape the constraints of the original domain. Because trains rely on different phenomena from boats — i.e., they use wheels, not water, to overcome friction — they can move much faster and carry much heavier loads. So this particular redomaining enabled (and in some senses also forced, by requiring upgrades to surrounding technologies) significant social and economic change.
There are lots of implications here for how we see the digital revolution, some of which are pertinent to how we design our institutional response. For now, though, let’s just say that redomaining shows, with clarity so sharp it hurts the eyes, precisely why the digital revolution is so disruptive. It’s disruptive because (a) a very wide array of technologies can be redomained into digitalland, and (b) digitalland removes some really fundamental constraints, most obviously space and time. And this changes pretty much everything.
There is a lot more in the book that we don’t have time for here. Arthur goes on to describe, for example, how this all plays out in practice. What do engineers do? And how does engineering relate to science? To simplify a lot, we learn that engineering is a process of problem-solving in which fresh needs are identified, and technologies are combined to address them. Technologies are also “deepened” and become more complex. Science meanwhile complements technology, and the two move forward in lockstep, with each enabling the other.
Let’s finish this summary, though, with my second favorite image in the book. It describes the whole process of technological change, and it does this by describing an algorithm Arthur has written to model his theory so he can watch it play out.
I hope you’ll forgive me for quoting this passage at length because it’s such a lovely example of crisp and energetic writing. Here it is:
If we allow the algorithm to play out, at first progress is slow. Not only are technologies few, but because of this, opportunities are also few. Once in a long while a purpose will be satisfied by harnessing some simple phenomenon — in our own historical case the use of fire […]. The stock of technologies builds, and with it the stock of available building blocks. […] New combinations or melds of technologies begin to be possible. And as new building blocks form, the possibilities for still further combinations expand. The buildout now becomes busy. Combinations begin to be formed from combinations, so that what was once simple becomes complex. And combinations replace components in other combinations. Opportunity niches begin to multiply; bursts of accretion begin to ripple through the system as new combinations create further new combinations; and avalanches of destruction begin to cascade […] These avalanches vary in size and duration: a few are large, most are small. The overall collective of technology always increases. But the active set varies in size, showing, we would expect, a net increase over time. There is no reason that such evolution, once in motion, should end.
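Arthur is describing an algorithm here, and its core dynamic, combinations feeding back into the stock of building blocks, is easy to watch in a toy simulation. What follows is my own minimal, deterministic sketch, not Arthur’s actual model (his includes needs, opportunity niches, and avalanches of destruction): each generation, every pair of existing technologies is combined into a new one, so the stock grows combinatorially.

```python
# A toy, deterministic version of the buildout Arthur describes: each
# generation, every pair of existing technologies becomes a new combination,
# which is then itself available for future combining. The names and numbers
# here are mine, purely for illustration.
import itertools

stock = {"fire", "lever", "wheel"}  # a few primitive technologies
for generation in range(3):
    pairs = itertools.combinations(sorted(stock), 2)
    # each combination is itself a technology, added to the stock
    stock |= {f"({a}+{b})" for a, b in pairs}
    print(f"after generation {generation}: {len(stock)} technologies")
# after generation 0: 6 technologies
# after generation 1: 18 technologies
# after generation 2: 156 technologies
```

Even in this crude sketch you can see Arthur’s point: because new combinations become components for further combinations, the number of possibilities, and with it the pace of invention, compounds rather than growing linearly.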
What we’re left with is an image of technology that teems with life and yet that, in its form, is almost crystalline, exquisitely intricate, and forking. Arthur’s image of choice is a coral reef. My mind jumps to a network of blood vessels, pulsing and branching ever more finely. Either way, the whole thing “seethes with change.” And because progress happens by combination — i.e., it’s determined by the number of possible combinations from a lengthening stock of components — the pace of change at the frontier of technology is exponential.
In short, buy the book. In the meantime, though, here are some reflections on what this all means. What parts of this argument are most relevant to the exam question for this blog series: how will we govern digital capitalism?
Quite a lot is the short answer. But there are three particular threads of Arthur’s argument that I’d like to pull on over the coming weeks.
The first: if Arthur is right about technology, and this is its nature and form, what does that mean for our institutional settlement and for the way we should approach institutional reform at a time of rapid technological change?
Can we extend Arthur’s argument into a theory of political and institutional reform and evolution? Is there any reason to think that the mechanisms by which we govern technology conform to the same logic as the process by which technology itself evolves? If so, what would that mean for the art and science of statecraft? And if not, how is it that these two worlds — of technological and institutional progress — differ?
These are big questions, and although they’re not Arthur’s focus, at certain points in the book he does shine a torch down these paths. To simplify a lot, he seems to say that institutions are themselves technologies in a broad sense of the word, in that they are “constructions with a purpose.” By implication, we might think that institutions share something of technology’s logic, that they develop in a way that’s a bit like the way technology develops.
If that’s the claim, I’m not totally sure I buy it. Or at least I’m not sure it’s helpful to think in this way. At a minimum, it feels like the topic of institutional reform deserves some dedicated attention. So I’ll explore this in a future post. That is, I’ll take Arthur’s account of technology — which I buy in spades — and ask what it means for the way we govern a technologically mature and exponentially advancing society.
The second aspect of the argument I’d like to spend time unpacking is the role the economy plays in technological progress.
Arthur’s story here is one in which prices, set through a free exchange in a market, function as a mechanism to signal or weigh human needs. Prices guide the effort of scientists and engineers; they show them where to focus next in their work, combining and deepening technologies and mining and putting to use phenomena.
In one sense, I love all this imagery. There’s a particularly brilliant metaphor of a departure board, in which a crowd of engineers and scientists look up at an ever-changing list of human needs, along with the prices/rewards on offer for meeting them. We see how prices direct and coordinate our efforts.
Still, I can’t shake the feeling that this whole process feels a bit neutral. Or perhaps optimistic is a better way of putting it. Maybe it’s something about Arthur’s metaphors being drawn mostly from maths (fractals and recursion) or from biology (evolution). The effect is that the whole account feels almost unconscious, like a happy if frenzied shared endeavor in which the ants’ nest of humanity swarms toward a better life for all.
As I read these passages — particularly the ones about the role played by prices — the question I kept wanting to write in the margin was: to what end?
Sure, free exchange in a market sets prices that guide the efforts of engineers and scientists, but what are those prices calibrated to do? Not many sensible people these days would answer that prices are set in order to meet the most important human needs (and I doubt Arthur would say this either). More likely, the answer is some mix of prices being set ‘to turn a (short-term) profit’ or ‘to satisfy shareholders’ or ‘to keep the firm alive,’ which are, of course, different things.
All of which is to say: I’d love to explore this technology-market relationship more. What would we see if we took Arthur’s account of technology and combined it with a more skeptical view of the price-setting mechanism, and in particular, of the way markets seem to function in the dark-pattern-infected landscape of digital capitalism? (i.e., a view like the one I set out here).
TL;DR — my sense is that what would emerge from this inquiry would be a story with no less awe but with a bit less wonder and quite a lot more horror at the power of the thing we’ve created.
Finally, there’s another aspect of power I’d like to explore. Because the question above — ‘to what end?’ — wasn’t the only one I kept wanting to write in the margin as I read the book.
When I read Arthur’s account of how technology meets people’s needs, I also wanted to write: whose needs?
The process by which needs are identified and met by technology is not objective; power plays a role. We know, for example, that the frontier of technology is today — and has always been — dominated by a small and unrepresentative elite. Today it’s young, straight, white men who have most of the money and power at the technological frontier.
So what would we see if we introduced ideas like bias, discrimination, privilege, and oppression, and insights from political economy and sociology to Arthur’s formal account of the way technology behaves? And what could this mean for the way we think about the role of the state in governing a society defined by technological change?
Finally, before I wrap up, there’s a bit of special sauce we can sprinkle over all of this, something that makes the whole thing spicier and more interesting.
This is Arthur’s account of the role of chance and path dependency, which adds a frisson of luck and uncertainty to everything I’ve written above.
If we think back to Arthur’s view of technological change, it should be clear that chance is at play in this process.
It would be easy to imagine, for example, a different path of human history in which we unearthed important phenomena in a different order. And if this had happened, because technology evolves through combination, even a small difference would have become increasingly significant with each new generation of technology, creating a very different world.
And because technologies then get locked in — through human habits or sunk costs or network effects — these outcomes might well have proven irreversible, even if they’d been really obviously bad or less good than the alternative paths.
The point is: the society and body of technology we have today could have been entirely different. And those differences could have made our lives better or worse.
Or, put another way, the world we’re living in now might be (well, is) in large part an accident. And so better worlds might be (sorry, are) at least theoretically possible.
In one sense, this might all sound academic. Isn’t it just a fun “what if?” question with no real-world importance? After all, it’s not like we can access those alternate realities.
But where this gets interesting is if we bring in the points above — about the role of profit and power in steering the identification of human needs and the technologies we create. Because then we see that these path dependencies — the outcomes we end up stuck with, even if they might not be good for us — don’t arise from pure chance. They flow from the policies and power dynamics that happen to hold at the time those technologies emerge. You might say we make our own luck. Or that we choose our own paths.
And as we sit here today, watching our digital future take shape and solidify before us, emerging from the uncanny and unequal world of Silicon Valley, this all starts to feel anything but academic. If anything, it starts to feel kind of urgent.
Anyway, this is all getting a bit heavy. My point is really just that there’s lots here to keep us busy. And this idea of path dependency is one we’ll want to keep in mind as we work through these issues.
And I guess this meta-point is the one to end on: this is all really complicated and consequential stuff, so as our economy and society move into digitalland, we should spend more time getting to grips with it. It’s not enough anymore to read a few blogs about crypto on our lunch breaks, close the tab, and flick back to our day jobs. We need to get deeper into the question of how technology functions, understand how we now live and work at the digital frontier, and use this as the basis for a new institutional settlement.
If you’d like to read along as I fumble my way through this, you can follow me on Medium here. Or if you’re feeling lovely, want to support the project, and can spare the price of a coffee a month, you can subscribe on Substack here.
This article was first published here.
For the big and optimistic story behind all of this, buy my book, End State: 9 ways society is broken — and how we can fix it.