Cyber Poetics: Writing and Publishing in the Digital Age

by John West, November 20th, 2017
All photos from the European Space Agency.

The following is a talk that I gave at Oberlin College on November 7, 2017 after being invited by the staff of Wilder Voice, Oberlin’s magazine for long-form journalism and creative nonfiction. Some of the points I made here, I’ve made elsewhere on the web, most notably at Quartz. Especially here & here.

Hi everyone. Thank you Maxwell, Olivia, Lydia, and the whole Wilder Voice staff for the invitation to come out here, and thank you all for being here. My name is John West. I am a narrative technologist at the MIT Media Lab, which means that I use computers to find and tell stories in data. I am the co-founder and technical director of Et Cetera Gallery, a hub for online and IRL narrative experiments, and I am a graduate of both Oberlin College and Conservatory, where I studied philosophy and historical performance, edited Wilder Voice magazine, and co-founded Wilder Voice Press. I am also currently an MFA candidate in nonfiction writing at the Bennington Writing Seminars.

The title of this talk is “Cyber Poetics: Writing and Publishing in the Digital Age,” which is a great title that I can’t really take credit for. The proximate reason I can’t take credit for it is that I didn’t come up with it; the Wilder Voice staff did. Of course, on the Et Cetera Gallery website, we have a section called “Digital Poetics”, which Maxwell tells me inspired the staff’s title. Now, my Et Cetera Gallery cofounder Galen took a class called Digital Poetics at the School of the Art Institute of Chicago, where she got her MFA, but before that, it was the title of a book by Loss Pequeño Glazier, Digital Poetics: The Making of E-Poetries, which was published in 2001, when it was still fashionable to add the prefix e- to anything that happened online. Of course, before that, talking about any kind of poetics was liable to lead you off the path and into the thorny thicket of critical literary theory. Finally, at some point long before that, poetics was Aristotle’s word.

This is all to say that while I’m going to be talking about cyber poetics, I’m not really going to be talking about poetry. Instead, I’m going to be talking about what makes a piece of online writing tick: what allows a text to become, or be seen as, worthwhile, and what prevents it; or, if you’d rather, what allows it to become, or be seen as, literature. I’m going to focus on two facets of digital life that I believe have most changed the distribution and creation of writing: algorithms, which upended the distribution and publishing of texts, and hypermedia, which has reshaped writing’s possibilities.

I’ll start with algorithms, but before we get too far into the weeds, let me explain what I mean by algorithm. An algorithm, simply put, is a system of rules that gets you from input to output. Think of recipes: you have ingredients (input), rules about how to combine them and what to do with them, and then you end up with a delicious pie (output). Here’s a more computer-y example: Facebook tracks all your activity on their platform — your likes, your shares, the amount of time you spend with a given piece of content in view, and on and on. That’s the input. The output that they want is to keep you on their site or app in order to pour as many ad units into your brain as possible. To get from input to output, Facebook’s algorithm has a system of rules for ordering content in your News Feed to keep you happy, engaged, and scrolling. Last example: Google tracks all your activity on their various platforms — your searches, the text of your emails, the locations you enter into maps, which websites you go to in your Chrome browser. That’s the input. The output is, again, injecting ads straight into your brain. (Spoiler alert: this is almost always the output, and I’ll come back to that). To get from input to output, Google’s algorithm ranks search results to keep you happy, engaged, and clicking.
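To make the recipe analogy concrete, here is a toy ranking algorithm in Python: a fixed system of rules that turns input (engagement signals) into output (an ordered feed). The weights and fields here are invented for illustration; Facebook’s actual News Feed rules are vastly more complex and not public.

```python
# A toy feed-ranking algorithm. The scoring weights below are
# invented for illustration only; they are not any real platform's rules.

def rank_feed(posts):
    """Order posts by a simple, hand-tuned engagement score."""
    def score(post):
        return (2.0 * post["likes"]
                + 3.0 * post["shares"]
                + 0.5 * post["seconds_viewed"])
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    {"id": "essay",     "likes": 10,  "shares": 1,  "seconds_viewed": 300},
    {"id": "clickbait", "likes": 200, "shares": 50, "seconds_viewed": 5},
])
print([p["id"] for p in feed])  # → ['clickbait', 'essay']
```

Notice that the clickbait post outranks the essay despite holding the reader’s attention for a fraction of the time: whatever the rules happen to reward is what gets distributed.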

Now, I want to draw your attention to a couple of things about these algorithms. First, both of them are opaque. That is, no one except maybe the people who designed the algorithms knows what the rules are — and it’s likely that even these algorithms’ designers don’t fully understand how they operate. The second — and most important — thing to draw your attention to is that for these algorithms to work, they need to represent pieces of writing numerically. They need to count them.

Caleb Crain, in Harper’s, writes:

A new kind of disenchantment has come over literature. It has to do with what you might call the working myth of the life of literature — the half-conscious way that people decide which texts they consider literature, and how they carry those texts forward. The catalyst, I believe, is the recent revolutionary advance in counting. That may not sound like a startling technological breakthrough, but thanks to computers, we are now able to count with unprecedented speed and thoroughness.

When it comes to writing, counting is nothing particularly new. We’ve long counted circulation rates, page and word counts, and column inches. What’s new is the scale on which we count. When I was a kid, I learned to sing the first dozen-odd digits of pi from memory. Now, we count pi’s digits in the quadrillions. I’m not really sure what the virtue is in the second quadrillionth digit discovered over the first quadrillionth, but I think it’s fair to say that the difference between my dozen and our computers’ quadrillions represents something more than mere addition; it’s a kind of paradigmatic shift.

That shift has come to writing, too. Facebook and Google — two companies — basically decide how writing is distributed online. Together, they are responsible for over 80 percent of referral traffic to news sites — which are pretty loosely defined here, and include everything from short reportage to essays to creative nonfiction. I’ll say that again: four out of five trips to news sites are because of Facebook or Google. This is to say nothing of Amazon, whose book recommendation algorithm can launch a book’s success or cause its near-complete commercial failure. Since Amazon, Facebook, and Google use algorithms to decide what you see, and how what you see is weighted, it’s fair to say that algorithms, in some meaningful way, control the distribution of writing online — and increasingly offline as well.

Of course, the particulars of the Facebook News Feed algorithm and the Google search algorithm were not inevitable. They were contingent on a host of technical and design choices made when no one really anticipated what the internet would become. They were not designed with their consequences for writing in mind. Algorithms are almost blindly reshaping the world into which we read and write. The result is an environment arbitrarily conducive to some kinds of writing and unthinkingly hostile to others.

One easy example of the way algorithms reshaped writing is the curiosity gap headline. You’ve certainly seen — and you might have even clicked on — a headline that promises that “you’ll never guess what happens next.” The truth is, in general, what happens next is that you leave the web page almost immediately.

Now, Facebook has been tweaking their algorithm, trying to whack-a-mole its way free of clickbait — which its users say they hate. Unfortunately, whack-a-mole is a rigged game. Facebook executive Sheryl Sandberg, just last month, said that Facebook isn’t a media company. In other words, they don’t want to have an editorial approach. So, as recently as last year, Facebook was finding the locus of clickbait in the headline. This move is typical of a reductionist approach to the world: Break an article up into its component atoms, take the least complex and the most easily studied of its parts, and muck around with that one. Of course, a good article can lie beneath a clickbait-y title, and a cheap article can be lurking behind a sophisticated headline. But, we shouldn’t be surprised that Facebook would approach a problem like clickbait in such a reductionist manner. After all, this is how engineers think. It’s also how content farmers think. What I mean is that it’s the very same logic that brought us clickbait in the first place.

Algorithms also incentivize the breathlessly counterintuitive op-ed, the Wikipedia-sourced explainer, and the analysis-free John Oliver aggregation. I call these kinds of writing Content with a capital C, and here’s how it works: A publishing company starts to think of the value of a piece as the difference between how much money it cost to produce and how much monetizable attention — in the form of display advertising — it captures. So, an underpaid freelancer might take one hour to produce a zany listicle on confused cats and one hour to create a slideshow about the history of mac-and-cheese pizza. If each one captures readers’ attention long enough to shove the same number of advertisements down their eye-gullets, then — congratulations! — you have produced fungible, interchangeable pieces of Content.

Once you start seeing the web this way, it’s hard to stop. Suddenly, long, well-sourced, well-researched, or even well-argued pieces look like inefficient ways to capture a few minutes of a user’s attention. Reporting, researching, and deep thinking take time, which drives up costs. And the going theory is that people don’t even really want to read those kinds of articles, anyway.

So publishers don’t make well-sourced, well-researched pieces, at least not as much as they ought to. Instead they make Content. Many excellent writers and publications have at one time or another partaken in what amounts to a venial sin of the internet because it makes a warped kind of sense. When all you need is for 50% of an ad unit to be seen for one second so you can bill an advertiser, you incentivize page views over engagement and clicks over reads.

This isn’t the only kind of cheap writing that is incentivized by our algorithms. A particularly virulent form is the “hot take,” which Salon’s Simon Maloy defined as “deliberately provocative commentary that is based almost entirely on shallow moralizing.” Is Adele’s single, Hello, the musical equivalent of Trump’s racism? Is spooning inherently problematic? Are dogs bad? These are all questions posed by real articles online. It’s possible, I believe, to write about these topics with depth and seriousness, but these articles did not. The truth is that publishers can produce a hot take on the outrage of the day — which generally comes with zero reporting — just as easily as they can produce a piece on 21 times that Harry Potter made us hangry. Plus, if the take is hot enough, which is to say counterintuitive enough, shallow enough, moralizing enough, it can yield far greater dividends (read: shares) than your average listicle.

This is what I mean by the logic of content farmers. It is also the logic of capitalism, which demands greater efficiency no matter the human cost. It privileges what we can count and monetize. But just because publishers have landed on an advertising model that is satisfactorily monetizable doesn’t mean they’re producing work that is valued by humans. Maybe people want to eat their vegetables. We wouldn’t know, because publishers’ business models — and the algorithms those business models serve — subsidize high-fructose corn syrup. That is, when it comes to cheap writing online, many argue that this is simply what people want to read. After all, we are the ones who click! But saying the internet is awful because people are awful is a lot like saying guns don’t kill people; people kill people. It’s superficially true and completely misses the point. The fact is that Americans kill people — including themselves — at staggering rates because we’ve made it easy to get guns. Similarly, the internet can feel like an awful place not simply because we’re awful people, but also because we have designed the internet to be a garbage fire.

These algorithms, by necessity, must reduce a piece of writing down to its measurable, quantifiable parts in order to rank it. This lossy process destroys crucial information about the value of a piece of writing — namely, whether humans actually valued it.

In his novel If on a Winter’s Night a Traveller, Italo Calvino pokes fun at the idea that computers could be used to count writing:

A suitably programmed computer can read a novel in a few minutes and record the list of all the words contained in the text, in order of frequency. “That way I can have an already completed reading at hand,” Lotaria says, “with an incalculable saving of time. What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings?”

In Calvino’s book — published in 1979 — this was a punchline. It also serves as a warning, as censors in a faraway, imaginary country use this method to decide which books are allowed into the world. “We have machines capable of reading, analyzing, judging any written text,” a government censor tells the incredulous narrator. It only took a few decades for reality to catch up with Calvino — if you believe our tech giants.
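Lotaria’s “already completed reading” is, today, a few lines of code. Here is a minimal sketch of the frequency counting Calvino satirized, with an illustrative sample sentence of my own:

```python
# Reduce a text to a list of its words, in order of frequency:
# Lotaria's "already completed reading" in miniature.
from collections import Counter
import re

def frequency_reading(text, top=3):
    """Return the `top` most frequent words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

sample = ("What is the reading of a text, in fact, except the recording "
          "of certain thematic recurrences of the text?")
print(frequency_reading(sample))  # → [('the', 3), ('of', 3), ('text', 2)]
```

Everything the sentence means, its irony included, vanishes in the counting; that is exactly Calvino’s punchline.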

Most of my job revolves around understanding how people write and interact with writing online so I can tell stories about it. In other words, I spend a lot of time thinking about counting. I am tempted, as I know the writing industry is tempted, to believe that my measurements can tell us the story of a piece of writing — if not the whole story, then at least the important parts of it. After all, the progress narrative we tell ourselves is, at its core, a story of measuring things. Start with a foot — based on the human body; end with a laser-leveled, digital caliper. Start with the naked eye; end with an electron microscope. To move this metaphor into the world of writing: start with magazine circulation rates; end with attention-tracking. Certainly, there’s utility in these ever more precise ways of measuring. But the dumb, wooden ruler is just as effective as the most sophisticated AI algorithm in telling us what good a piece of writing does in the minds of its readers — which is to say, not very effective at all.

Marilynne Robinson in The Givenness of Things writes that we live in a culture of naturalism, which she defines as “exclusive attention to the reality that can be tested by scientists.” I propose that our writing is inhabiting a kind of shadow naturalism, one in which we pay exclusive attention to the reality that can be tested by software. To be sure, computers can now test more of the world around us than ever before, but the ontology algorithms can grasp pales next to the reality our writing inhabits. Wonder and mystery, messy though they may be, at least hold the possibility of joy and awe, which is more than I can say for the cold sword of reductionism our tech wizards now wield. We are like Hamlet’s Horatio, with nothing in heaven or earth except what our data scientists dream up: as they quest to algorithm away our problems, they seem determined to algorithm away good writing as well. Of course, naturalism serves us well when employed, say, in the sciences. But it ought to serve us; all too often, our endeavors have begun to serve it, which is probably what Christians mean when they warn against idolatry.

This is the hinge. It’s when I say that a better world exists within reach. One in which these amazing counting machines are not used to strip meaning, context, and value from writing, but are instead used to support our humanistic endeavors. The incomparable Ursula K. Le Guin, in her speech at the National Book Awards in 2014, said:

Books aren’t just commodities; the profit motive is often in conflict with the aims of art. We live in capitalism, its power seems inescapable — but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.

What Le Guin is suggesting, and what I hope to build on, is a path that re-channels the power of the internet, computers, and, yes, even algorithms away from the profit motive and reliance on the easily quantified.

The first step is to rid ourselves of the great unexamined tenet of digital life, what my colleague at the Media Lab, William Powers, calls digital maximalism. In his book, Hamlet’s Blackberry, he defines it as: “It’s good to be connected, and it’s bad to be disconnected.” The corollary of Powers’s axiom in our writing lives might look like this: It’s good to reach a wide audience, and it’s bad to reach a narrow one.

For Facebook, this axiom is written into its DNA. “Connect with friends and the world around you,” Facebook says right there on its sign-up page. It’s an imperative. More friends is better, which is why Facebook pesters you to give it everything you’ve got. And when the likes and shares roll in, each one is counted — more is more and more is better. That’s how Facebook works interpersonally. At the publisher level, it offers economies of scale and network effects — reach a wide audience, show them an ad, or maybe get a micro-payment; that’s how you know your writing is valuable.

That is the false prophet of digital maximalism speaking. Often, the most valuable kinds of writing are intimate and local. Consider the fate of the regional paper, which provided — before the internet demolished its business model — essential information about the area it represented. Richard Rodriguez, writing in Harper’s, says “In the nineteenth-century newspaper, the relationship between observer and observed was reciprocal: the newspaper described the city; the newspaper, in turn, was sustained by readers who were curious about the strangers that circumstance had placed proximate to them.” But, he writes, “We no longer imagine the newspaper as a city or the city as a newspaper.”

To break free of digital maximalism, we need new kinds of social interactions on the web — those that deepen relationships rather than broaden them, and tighten the reciprocal bonds between observer and observed. Of course, the trouble with a social network based on narrow but deep connections rather than broad but shallow ones is that the typical way we monetize online life — ads — won’t work. The answer to this problem is to begin to insulate this new, imagined social network from typical market forces. How about a non-profit, community-owned cooperative social network that charges service fees on a sliding scale? A passionate, user-governed co-op isn’t insured against the fickle winds of capitalism, but if it becomes an integral part of new online or pre-existing offline communities, it can weather them in unexpected ways.

The degree to which the network is insulated from capitalistic, profit-driven market forces is the degree to which this network changes distribution for the better — provided, of course, that it doesn’t fall back on the batch of metrics we use now. Metrics like positive comments or even responses to prompts about the article could take the place of simple one-click reactions. Natural language processing algorithms can be used on those responses to get a limited view of how they impact readers’ thinking. The network could track how — or if — a user’s language changes after reading an article and could supplement the more qualitative, prompted signals by weighting for high-impact articles. These are just examples — and not particularly innovative ones. The truism here is that if more people imagine entirely different kinds of metrics, we’ll start to see entirely different kinds of metrics.
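As one hypothetical sketch of such a metric, a network could measure how much of an article’s key vocabulary a reader adopts in their comments after reading it. The function names, the measure itself, and the sample data below are all invented for illustration; this is not a validated signal of impact.

```python
# A hypothetical "impact" signal: how much of an article's key vocabulary
# does a reader newly adopt after reading it? Illustrative sketch only.

def vocabulary(texts):
    """The set of lowercase words used across a list of texts."""
    return {word for text in texts for word in text.lower().split()}

def language_shift(before_comments, after_comments, article_words):
    """Fraction of the article's key vocabulary newly adopted after reading."""
    article_vocab = {w.lower() for w in article_words}
    adopted = (vocabulary(after_comments)
               - vocabulary(before_comments)) & article_vocab
    return len(adopted) / max(len(article_vocab), 1)

shift = language_shift(
    before_comments=["nice post"],
    after_comments=["the reciprocity between observer and observed matters"],
    article_words=["reciprocity", "observer", "observed"],
)
print(shift)  # → 1.0: all three key words were newly adopted
```

A measure like this is still reductive, of course; the point is only that entirely different signals are imaginable.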

But the problem that I’ve outlined isn’t just that we’ve chosen poor, reductive metrics; it’s also that we make metrics — good or bad — our guiding star. What I mean is that metrics are almost always used to decide how we pay for writing, and we have a troubling tendency to allow market value to stand in for human value. We end up, then, valuing a piece of writing based on how well it scores on a metric all because we need a way to decide how to pay for writing.

Of course, some will argue that metrics-driven writing is more equitable. After all, there’s good reason to be skeptical of human curation of writing, which has shown itself to be susceptible to the inequalities and injustices that run riot through the rest of human life. Pre-web — or, maybe, pre-social media, depending on how you define social media — the universe of writing was much smaller, and the article wasn’t its base unit; the publication was. This meant that human editors — often, white men who lived in New York City — had an outsized role in the distribution of writing. Now, we have disassembled the publication into individual articles, and we have algorithms — created, often, by white men who live in San Francisco — remix those individual articles back into a publication made just for you. There was a hope that by removing human gatekeepers from the equation, we would usher in a more democratic age. But we haven’t dethroned the gatekeepers; we’ve simply swapped human intelligence for machine intelligence. Progress of a kind, perhaps, but hardly the radical flattening that was promised.

But this state of affairs wasn’t always the case online, nor is it inevitable that it will stay the case. In fact, in the past, access to new modes of writing has enabled greater access to ideas and power. Marilynne Robinson says that,

The history of the Reformation is very largely a history of books and publication, a response to the huge stimulus given to intellectual life by the printing press.

She also tells us that in fourteenth-century England, long before the Reformation, John Wycliffe translated the Bible from Latin into Middle English. This set off a movement called Lollardy, whose adherents wandered through the countryside preaching and teaching from the vernacular Bible. Robinson writes:

It was a radically popular movement, critical or dismissive of many teachings of the church of that time, and claiming exclusive authority for Scripture over priesthood and Papacy.

Indeed, the powers that be of the time tried their best to stomp out Lollardy, including making the English Bible a forbidden document. In other words, Lollardy was a democratizing force, and while this isn’t always the case, as Robinson writes, “the Bible may be fairly said to have entered English as a subversive document.”

The internet, too, can be subversive. I propose that hypermedia, especially hypertext and the hyperlink — the ability to link one document to another, one idea to another — isn’t just what makes the web the web (it’s no mistake that the language used to make pages on the web is called Hypertext Markup Language, or HTML); hypertext is also what enables the Lollardy of today. As the Iranian blogger Hossein Derakhshan movingly wrote for Matter:

Stemming from the idea of the hypertext, the hyperlink provided a diversity and decentralisation that the real world lacked. The hyperlink represented the open, interconnected spirit of the world wide web — a vision that started with its inventor, Tim Berners-Lee. The hyperlink was a way to abandon centralization — all the links, lines and hierarchies — and replace them with something more distributed, a system of nodes and networks.

In the decentralized, democratized world that the hyperlink helps enable, groups of people who previously lacked the ability to be heard can demand that their experiences are taken seriously. And, yes, there are also cat gifs and bad lip reading videos and far too many Nazis, but at its core, the web is a library so vast it makes the one in Alexandria look like a toy.

I’m not calling the web a library simply because it enables access to information at unimaginable scale; I’m calling the web a library because it is so subversive. After all, if no one had ever heard of a library, and you proposed to them a place where you could, for free and with expert help, check out virtually any piece of media, you would be called a communist and laughed out of the room. But such a thing exists, and the internet makes information even more graspable than ever before.

I used to run a blog called Ich Bin Ein Oberliner which never made it past a few thousand monthly pageviews. I set up shop on Blogspot, and linked and commented my way to a loyal following of people whose identities and views were often very different than mine. In turn, I was a loyal follower of their work. And together, with only the hyperlink, we carved out an intimate space in the wild expanses of the internet.

At Et Cetera Gallery, our first project was to make interactive charts of Galen’s first kiss with everyone she ever kissed (many of which happened here at Oberlin). We charted the astrological signs of her kissers, how much she enjoyed it, how intoxicated she was, how they made her feel. Our next project was to turn that web app into a book with prose poems. As you might imagine, this was not a runaway commercial success. It was, however, a thing that broke even, and that — from qualitative feedback — impacted many readers in our immediate communities. Our other projects have been similarly nourishing, but not particularly lucrative. That’s the point: in producing these works, because we don’t have large scale, we have been deeply connected with the community of people who will eventually consume them. Fortunately, Et Cetera Gallery did not have to invent the model of community-informed creation, and people are already working on using technology to bring creators and communities closer together, and to boost and encourage voices in communities to create for themselves. Readers, it turns out, are the final arbiters of good writing and what good writing can do in the world.

Over the course of this talk, I’ve been trying to avoid using the word “user” in favor of the word “reader,” as they indicate two dramatically different paradigms. A Reader, like a Citizen, is a very different kind of person than a User or a Taxpayer. As Marilynne Robinson writes in an essay on the fate of public universities, “While the Citizen can entertain aspirations for the society as a whole and take pride in its achievements, the Taxpayer, as presently imagined, simply does not want to pay taxes.”

Similarly, while the Reader is an active participant in the social contract between writing and the people it describes, the User, as presently imagined, merely wants to watch watermelons explode. The User’s fickle tastes demand obsequiousness. Publishers loathe their stupidity and fear their power.

To come back to Ursula K. Le Guin. She said that “We live in capitalism, its power seems inescapable — but then, so did the divine right of kings.” She also said that “Resistance and change often begin in art. Very often in our art, the art of words.” I would add that the art of writing requires both a writer and a reader to operate. We forget the human readers at the end of our words at our own peril, and we must remember that the metrics we end up using are mirrors, not lenses. In their reflection, we see ourselves and our biases just as much as the world around us.