
This Tangled Web — Intelligence, Technology and Fiction

by Lochlan Bloom, January 8th, 2018

What’s the story?

Human culture is ruled by narrative. Everywhere from the boardroom to the playground, from casual conversation to complex political discourse, the story is key. And yet, it seems, we are increasingly at a loss to understand this web of fiction that surrounds us.

A few hundred years ago a good proportion of mankind relied on manual dexterity and strength to generate income. Now the ability to turn out a well-structured sentence is of much more economic value than the ability to plough a field, manipulate a loom or blow glass.

In the last twenty years this shift toward a knowledge-based society has accelerated further with the internet driving people to spend more and more of their day manipulating and refining the basic building blocks of story.

From the CEO who needs to create a compelling narrative to avoid bankruptcy to the Twitter user crafting a 280-character tweet, the creation of a good story is at the core of so many tasks today.

Yet with all these man-hours spent creating narratives do we truly comprehend the impact this has on our understanding of ourselves?

Is this merely a shift in market demand or does it mark some difference in our understanding of the world? Why might this growth in narrative be important, and what importance should we attach to the ability to create and interpret a story?

Storytelling as a guide to intelligence

One key reason that narrative dominates our culture is that it is a primary method by which we, as humans, gauge the intelligence of others and formulate our responses to complex situations.

In his book “Tell Me a Story: Narrative and Intelligence” (Schank, 1999), the leading artificial intelligence theorist Roger Schank discusses how human society has long considered two aspects of narrative as key indicators of intelligence. He proposes that it is not only our comprehension of a particular story that shapes our understanding but also the role that story plays in building conceptual models of those around us.

“We assess the intelligence of others on the basis of the stories they tell and on the basis of their receptivity to our own stories.” Roger Schank

As Wikipedia defines it, “A narrative or story is a report of connected events, real or imaginary, presented in a sequence of written or spoken words, or still or moving images, or both” — a pretty broad scope.

And it is not only human intelligence that we gauge based on storytelling. Going back to the Turing test, one of the earliest attempts to measure machine intelligence, we find that the idea of narrative and discussion is essential. In his seminal paper ‘Computing Machinery and Intelligence’ (Turing, 1950), Turing discusses not a quantitative measurement of intelligence but an imitation game.

The interrogation he proposes is designed to see whether a machine can fool a human into ascribing it attributes that they would otherwise attribute to a person. Turing makes no mention of measuring intelligence, no empirical certainty, only a dialogue between human and machine.

The human judge attempts to decide whether the respondent is human or machine based on the sum of its answers. The narrative the machine tells through its answers, combined with the judge’s understanding of that narrative, decides whether the machine is good enough at imitation. Only if it is can the human say that they believe the respondent is intelligent.
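For readers who think in code, the shape of this exchange can be sketched in a few lines. The snippet below is purely illustrative and is not Turing’s formulation: the two respondents and the judge are hypothetical stand-ins, and the only point it demonstrates is that the verdict rests entirely on the transcript of answers, never on inspection of the mechanism that produced them.

```python
import random

# A toy sketch of the imitation game as a dialogue loop (illustrative only).
# Both respondents are canned stand-ins; the judge never sees which is which,
# only the story told through the answers.

def human_answer(question: str) -> str:
    return f"Honestly, {question.rstrip('?').lower()}? It depends on the day."

def machine_answer(question: str) -> str:
    return f"The answer to '{question}' is: forty-two."

def naive_judge(transcript) -> str:
    # A deliberately crude judge: hedged, conversational answers read as human.
    hedges = sum("depends" in answer for _, answer in transcript)
    return "human" if hedges >= len(transcript) / 2 else "machine"

def imitation_game(questions, judge):
    # Randomly seat a human or a machine behind the curtain.
    identity = random.choice(["human", "machine"])
    respond = human_answer if identity == "human" else machine_answer
    transcript = [(q, respond(q)) for q in questions]
    return judge(transcript), identity  # (judge's verdict, actual identity)

if __name__ == "__main__":
    questions = ["Are you intelligent?", "What did you dream about last night?"]
    verdict, truth = imitation_game(questions, naive_judge)
    print(f"Judge's verdict: {verdict} (actually: {truth})")
```

Nothing in the judge’s decision depends on how the answers were generated; as in Turing’s account, imitation through dialogue is the whole test.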

“Narrative is not a single entity, or a single tightly related set of concepts… Narrative can mean many things.”

(Mateas, 2003)

As Turing states, ‘It might be urged that when playing the “imitation game” the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man.’

This imitation game, of course, is no different from the process (or processes) we use to judge the intelligence of other humans around us (Dautenhahn, 1998). We are constantly engaged in dialogue, constantly struggling to create a consistent narrative that gives us clues as to the intelligence of those around us.

However, even if we agree that storytelling is the sole way we measure intelligence there still remains the question of what exactly we are assessing when we say somebody, or something, possesses intelligence.

If we read and understand a text are we making an assessment of the intelligence of the author? Or of something else? More on this later.

Life in the realm of fiction

When presented with a new narrative does everybody approach it in essentially the same way? Is the process of understanding fixed — in the same way that vision is largely fixed by the mechanical processes of the eye? Is it something we can objectively measure?

Or is narrative merely a word that covers a disparate set of responses, a process that evolves and changes, such that narrative understanding depends only on context and can never be said to be intrinsic?

When we talk about vision there is a certain physical basis that defines what sight is; while people may have hallucinations or other aberrations in their vision, there is a neuro-physical basis that we generally agree underlies what we ‘see’.

The mechanical processes that control the cornea, the retina and the optic nerve are fairly well understood and provide a basis or standard framework when talking about sight. Are there analogous processes in the brain when we process narrative?

The complexity of human strategies in relation to storytelling suggests there is little hope of any such ‘narrative pathway’. If we accept that narrative, in its broadest sense, is any account that connects a series of events then the scope becomes impossibly vague.

Modern neuroscience of course does not have time to worry about such assertions and instead tells us that we can safely ignore most of the stories we spend time telling ourselves since the self — the central actor in most of our stories — does not exist.

Research increasingly suggests that the brain will go to fantastic lengths to create fictional explanations for our actions that have little to do with the real causes at a chemical level. From an empirical perspective the self simply does not exist.

It may seem that our sense of self is deeply ingrained but science is pretty unequivocal — the evidence shows that in many situations the self is a fiction that we are each continuously rewriting.

From a neuroscience perspective, stories, or narrative in its broadest sense, seem nothing more than a curious byproduct, something to be discounted from empirical consideration. There is no “I” or “we” except in the made-up stories we tell ourselves.

But this of course is not how anybody actually lives. While the neuroscientist’s perspective may be empirically consistent, it ultimately fails to capture the real experience of self and so can make it that much harder to understand what we talk about when we use the term intelligence in everyday speech.

If when we use the word ‘intelligence’ we are referring to a property of a fictitious entity then it is not certain we will ever find it on a brain scan.

It seems clear there are certain empirical criteria that are not applicable to fictional entities. Try measuring the average body mass of a unicorn, or determining the chemical composition of Aladdin’s lamp.

If the self is to be classed as a fiction — if “I” is simply a story that we retrospectively create — then it must follow that “we” exist in a world of fiction. If that is the way that the real and unreal are demarcated, so be it, but when we talk about the realm in which “we” exist we must accept that it is not wholly the mechanistic realm.

The role of technology in shaping narrative

For most of human history our understanding of narrative relied solely on body language and aural interpretation, but with the introduction of the first tools that allowed people to inscribe letters onto papyrus we have steadily shifted to a culture that is closely entwined with technology.

In the last two decades this shift has moved into overdrive, and we can now scarcely imagine consuming any narrative that is not in some way mediated by technology.

From the earliest uses technology has provided a crutch for our (human) understanding, and consequently expectations, of intelligence. Four hundred years ago knowledge could be equated with intelligence fairly simply. In the 1600s, the fact that someone had gained specific knowledge on a topic (say the operation of a steam engine) implied they had invested significant time in understanding the problem space.

Now any problem space can be accessed very easily via Google, such that knowledge on its own is generally insufficient to claim intelligence. A five-year-old can easily find and regurgitate a compelling explanation of Einstein’s relativity from the internet, but we would be right to hesitate in calling the child intelligent on this evidence alone until we had probed a little deeper.

We have seen circumstantial evidence that technology affects our attention span and that we understand stories in much more compressed forms. Nowadays we tend to furnish our understanding by searching on Google rather than spending time puzzling over meanings. A side effect of this reduced attention span is that we cross-reference our stimuli many times more frequently than previous generations did.

If we are reading an article, and a thought occurs to us, it is unlikely we will wait long before jumping to a new tab and searching for information on that point. If we are reading a novel length work it is highly unlikely that we will get through it in one sitting without referencing other material that will influence our ‘reading’ of the text. One can say that this interplay is simply a diversion that comes about because of the easy availability of resources, but it has a much more profound effect on the way we understand.

We assume that the way we understand narrative may be affected by our knowledge but that the way we consume it has relatively little impact. We may read The Odyssey on an iPad rather than a papyrus, or hear a human voice relayed through a cinema sound system rather than in person, but we assume that the story itself is essentially the same as it has always been. After all, is the difference in consumption important? Surely it is the message that matters in any given text?

If we read Homer’s Odyssey on an iPad — while sporadically checking Wikipedia, googling words and browsing beach holidays in Ithaca — then the process is radically different from anything that was possible even fifty years ago.

Certain key points may remain with us, certain universal messages, but it is by no means certain that we understand these in the same way as we always did.

We are used to an almost constant dialogue with search engines, inserting questions and queries into every narrative to better understand it and test how much we trust it.


In reality our approach to understanding story is, and always has been, evolving. The way we read today is drastically different to the way Oscar Wilde read and that in turn is drastically different to the way that Aristotle read.

When we are told a story in spoken language, most people directly realise the importance of the delivery. The way somebody tells a story is certainly very important to our understanding of both the narrative and the storyteller’s intelligence.

If Shakespeare’s ‘Hamlet’ is performed by idiots, not only do we judge the intelligence of the performers but the words themselves lose their power to capture our attention. A poorly delivered version of Hamlet will appear little better than a poorly performed play by a third-rate playwright.

Will we recognise other intelligence?

When we now turn to consider the case of artificial intelligence, or indeed the idea of an artificially intelligent singularity, the question of narrative becomes much muddier.

The idea that computers and machines will reach a level of complexity that exceeds human intelligence, and beyond this point will be able to design ever-superior versions of themselves, rapidly accelerating the growth of intelligence far beyond what we can imagine, raises a number of questions.

If we accept that narrative is a key indicator of intelligence then it is reasonable to ask what kind of dialogue we might have with such a machine that could convince us one way or the other. How might we notice that it has equalled or surpassed our own understanding? Is this something we, or any of our descendants, are equipped to recognise, even in principle?

“One thinks that one is tracing nature over and over again, and one is merely tracing round the frame through which we look at it…

When philosophers use a word — “knowledge”, “being”, “object”, “I”, “proposition/sentence”, “name” — and try to grasp the essence of the thing, one must always ask oneself: is the word ever actually used in this way in the language in which it is at home?”

Ludwig Wittgenstein

We face a tricky problem when it comes to recognising any potential singularity event in that our understanding of narrative is increasingly entwined with the technology we use to consume stories.

What are the implications for recognition of a super-intelligent machine when to interrogate it we rely on the same technology that we use to determine if its stories are trustworthy?

When the subject is a human this is a fairly straightforward task, and something that we do on a daily basis. However, we have seen that the neuro-physical description rarely gives us insight into this.

Whenever we meet a new acquaintance or colleague we sub-consciously create a representation of their intelligence based on the stories that they tell (and the stories others tell about them).

We may check their Facebook page, read a blog post they have written, listen to them recount the joys of a recent trip to Paris and all of this we seamlessly join together to form a mental representation of this person, a sense of their intelligence. We do not generally get useful information about their intelligence from a brain scan.

When it comes to a machine the situation is more complex. If we seek to understand the narrative that it creates, we can ask it questions, listen to the stories it spins, look at the code it is based on, but the modern machines we interact with are in one sense a single machine, conjoined by the internet.

If intelligence is a conversation whereby we create a mental picture of the other, then the process of asking if a machine is intelligent is inherently dependent on our knowledge, or lack thereof, of its goals or intentions.

If in turn our level of understanding is in part dependent on information generated by the machine, can we truly have an objective representation of its intelligence?

We can interrogate the programming language or try to understand its physical processes but, as with the human brain, a scan might explain mechanisms but it does not feed our sense of intelligence.

Imagine asking a human outright if they are intelligent — whatever their answer, it tells us nothing about the real situation. A person may reply ‘Yes, I am intelligent’ or ‘No, I am not intelligent’ but they may be joking or obfuscating or self-deprecating. We build our mental picture based on experience.

To put this another way, we cannot expect to see a super-AI emerge because any intelligent machines we build will be explicitly designed for, and intricately linked to, the systems we use to consume narratives.


“No man is an Iland, intire of itselfe; every man is a peece of the Continent, a part of the maine”

John Donne

They will only become more deeply embedded with us not more distinct. It may not be possible to recognise an intelligent machine because it is not something anybody would, or could, design to exist on its own.

To understand narrative in the future we will be increasingly reliant on machines to spread and parse the stories we share and as a result the very concept of intelligence will become less and less meaningful outside the context of this machine infrastructure.

Thank you for reading. If you enjoyed this, please consider sharing, or follow on Twitter. For reprint or licensing enquiries please get in touch via: lochlanbloom.com.