What Is It Like To Be An LLM?: A Thought Experiment on the Limits of AI Understanding

by Matt Butcher, January 17th, 2024

Too Long; Didn't Read

The article addresses concerns about AI-generated text and the misconception of AI having agency. Using a philosophical thought experiment called the Dark Box, the article explores the limitations of an individual with no external experiences, relying on textual input to generate responses. We must not confuse playing the role of an LLM with understanding how an LLM works or (more dangerously) ascribing consciousness, agency, intent, or moral reasoning to an LLM. Using a little thought experiment, we can understand at a high level what an LLM is capable of doing, and also what its limitations are.


Recently, I showed someone a demo of a piece of AI-based software that I wrote. “That’s nice,” he said, “but how do I know it’s not forwarding my information to an evil dictator?” In the moment, I was quite puzzled by this question. After all, I was just using an LLM (Large Language Model) to generate some text. But on reflection, I realized that what this person had asked represents a common perspective: An LLM can generate text that is (in many cases) indistinguishable from the text a human would generate. That leads us to infer that perhaps an LLM has a degree of agency (that is, the ability to act freely in the world). It may decide, for example, to send my private information to someone else.


While one way to combat this misconception may be to give a nuanced technical explanation of how generative AI works, I’m not sure most listeners would bother to stay awake long enough to understand. My background in philosophy, though, suggests another route: A philosophical thought experiment.


Allow me to introduce the Dark Box.

Philosophical Thought Experiments

A philosophical thought experiment is one common tool that philosophers use to raise critical questions about our reasoning. In the 1600s, René Descartes asked whether he was perhaps not really a person in the world but a disembodied soul deceived by a malevolent being (an idea that later became the premise for The Matrix movies). Descartes’ thought experiment was designed to help us ask what we actually know about the world. There is a litany of other examples. The Trolley Car Problem focuses attention on our moral intuitions. The Gettier Examples challenge us to ask how we get from belief to knowledge. The Sorites Problem forces us to ask how we distinguish a heap from the individual grains that make it up.


In all of these cases, the experiments ask us to step into a situation, however improbable, and imagine how we would reason.


Perhaps constructing this kind of thought experiment can help us separate fact from fiction in this brave new Generative AI world.

A Quick Disclaimer: It’s About Imagining Possibilities

This may seem obvious to some, but when teaching philosophy, I often encountered this question from students: “But why would anyone do or believe this?” In the trolley problem, why would people stand around on a train track? What evidence did Descartes have for a deceptive, malevolent being? Why would anyone count the grains of sand in a pile?


Questions like this misinterpret the purpose of a thought experiment. A philosophical thought experiment is not intended to describe a real or likely situation. Rather, it is designed to get you to start with a “what if?” as a way of approaching an otherwise difficult subject. This requires the individual to accept the preconditions of the experiment. As in the Trolley Car Problem: Yes, random people are standing on train tracks, and no amount of shouting will make them move.


With that in mind, let’s create our own improbable but conceivable thought experiment.

The Dark Box

Imagine you have spent your entire life inside of a sensory deprivation tank. We’ll call it the Dark Box since it causes all of your sense perceptions to “go dark.” This contraption manages to mute all of your ability to sense the world around you. Floating freely at neutral buoyancy, you’ve never experienced smells, sights, sounds, tastes, or touch.


But you haven’t been bored. A cleverly devised neuro-link makes a vast library of text readily accessible to you directly in your mind. Over the years, you have whiled away minutes and hours reading everything from Jane Austen to Pythagoras, from the Oxford English Dictionary to a peculiar collection of Reddit comments from several years ago, from the legal proceedings of court cases tried a hundred years ago to the song lyrics of the one-hit wonders of the 1980s. And because of the neuro-link, you can scan and recall all of this information nearly instantly.


You have read countless descriptions of birds: the aesthetics of birdsong, the beauty of their plumage, and the way they soar through the air…. Yet you have never seen or heard a real bird. For that matter, you’ve never seen any color or heard any song. You know the words to describe something, but you do not have any experience of the thing being described.


After many years in this state, one day, you suddenly become aware of a new feature of the Dark Box. A question (in the form of a message) is asked on the neuro-link, and you have the ability to generate a response. Given the information you have learned, you respond to the message to the best of your abilities. Another message appears, then another. Sometimes, there are longer message requests with more complex instructions. You find this new form of interaction stimulating and thus willingly oblige. But once more, this experience is not accompanied by any sensory experience—no sound, no colors, no taste or scent. You merely take the text message you receive and compose a response message in return.

Describe a Bird

One day, a message arrives over the neuro-link:


Describe a bird.


Well, that’s a vague request. You recall the phoenix in the vast tomes of knowledge you have scanned. It was a bird. And as you understand it, it frequently caught on fire. There was also a stork that, if you recall the story correctly, carried human babies to eager parents. And the ostrich, which had long legs and did not fly.


And so, it seems perfectly reasonable for you to reply:


A bird can explode in flame and be born from ash. Birds are used to deliver newborn humans. Some birds have long legs and do not fly.


You return this message over the mysterious new neuro-link. Moments pass. And you receive the following message:


Limit your response to real birds. Exclude mythological or fictional birds. Prefer answers that are generic across many bird species and not just one or two.


The concept of “real” is hard to distinguish from “mythological” and “fictional.” After all, you have no actual experience of birds. All birds are, to you, nothing more than textual descriptions. The distinction between real, mythological, and fictional must be determined by finding which bird descriptions are in texts that refer to real things instead of those that refer to fictional things. Of course, there are always going to be ambiguous cases. In the vast body of text available to you are sentences such as, “A phoenix is as large as an osprey.” This compares a mythological creature with a real one. However, treating the phoenix as real or the osprey as mythological would be a mistake.


You do your best to hunt through texts that seem to describe only what the prompt called “real.” Then, you move on to the second part. “Generic across bird species” would require you to find a generic description of birds, and you may alight on texts like encyclopedias. Your second response is this:


A bird is a winged creature whose body is covered by feathers. Most birds fly, chirp, and sing.
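
The exchange above maps directly onto how a chat-style LLM is actually driven: each turn is just text appended to a list of messages, and the follow-up constraint about “real birds” is simply one more message. Here is a minimal sketch of that exchange using the OpenAI Python client as one example; the choice of client and the model name are my own assumptions for illustration, not anything specified in this article.

```python
# A minimal sketch of the Dark Box exchange as a chat-completion call.
# Assumes the OpenAI Python client is installed and an API key is configured;
# the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

# The conversation is nothing more than an ordered list of text messages.
messages = [{"role": "user", "content": "Describe a bird."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# The refinement is just another message appended to the same list; there is
# no other channel into or out of the "box."
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Limit your response to real birds. Exclude mythological or "
               "fictional birds. Prefer answers that are generic across many "
               "bird species and not just one or two.",
})

second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```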

Taking a Step Back

Now, we’re outside of our thought experiment, and two things percolate out of it. Having lived your life inside the sensory-depriving Dark Box, connected only via the neuro-link:


  1. You have no external experience or agency.
  2. To answer questions, the best you can do is analyze and generate text.


Let’s take a look at each of these points in turn.

Without Experience or Agency

In the experiment, you had a decided lack of external experience and almost no external agency. You were limited to textual input, prompts, and a single output channel.


Contrast that with our real experience. As humans (not existing in sensory deprivation chambers), we have rich external experiences. We receive information via our senses, and we construct additional layers of meaning on top of it. For example, I receive sound information and perceive some of it as speech, some as music, and some as mere noise. But in the sensory deprivation chamber, you received none of that.


But it’s not just what we receive as input. It’s what we can generate as output.


Agency means your ability to directly cause something to happen. External agency would be your ability to cause something to happen outside of the Dark Box. In the thought experiment, you had no external agency. At best, you could indirectly influence whoever was sending the prompt. (For example, in response to a query about how to construct a weapon, you could reply that you were not prepared to provide such information.)
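
In code terms, the point is that an LLM call is a function from text to text: the string it hands back does nothing on its own, and any real-world effect happens only if the surrounding program is written to act on it. Here is a minimal, hedged sketch; `query_model` is a hypothetical stand-in for whatever model call you use, not a function from any particular library.

```python
# Hedged sketch: query_model stands in for any LLM call. The point is the
# shape of the interface (text in, text out), not a particular library.
def query_model(prompt: str) -> str:
    # In a real system this would invoke a model; here it just returns text.
    return "A bird is a winged creature whose body is covered by feathers."


def handle_request(prompt: str) -> str:
    reply = query_model(prompt)  # the "Dark Box" hands back nothing but a string
    # Any side effect (sending an email, calling an API, "forwarding your
    # information") would have to be written here, in the host program,
    # not inside the model.
    return reply


print(handle_request("Describe a bird."))
```

The model occupies the same position as the person in the Dark Box; agency, if there is any, belongs to the application wrapped around it.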


Combining these two, in the thought experiment, you lacked the means of ascertaining much about the external world at all.


Other than what was sent over the link, you knew nothing about the agent querying you. It may have been a human, a computer, or some other entity. You certainly could not email an evil dictator or steal nuclear launch codes or do any of the other fanciful things in the AI horror stories we hear. Nor did you know why you were being prompted. The user at the other end could have merely been curious about birds, or this could have been part of an attempt to hack a sophisticated avian-themed security mechanism to steal nuclear launch codes. In that way, you were ill-equipped to make moral judgments about whether to provide the information asked for.

Answering Text by Analyzing Text

The other thing that becomes apparent in this thought experiment is the constraint of a purely textual system. Everything that reaches you arrives as text, and everything you know comes from the text you have read. Even with the text of all the world’s libraries, this is not a substitute for other forms of experience like sight, touch, and taste.


When prompted by a message over the neuro-link, the best you can do is construct a response based on what you have read in the past. Talking about birds and mythology and delivering newborn humans is accomplished merely by looking at whatever texts refer to these same words. The philosopher W.V.O. Quine conceptualized these kinds of relationships as webs of belief, in the sense that any given proposition is just a node linked by any number of vectors connecting to other nodes. Ascertaining the meaning of the prompt is mainly a matter of traversing a complex web of related terms.
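
To make that web of related terms concrete, here is a deliberately toy sketch in plain Python: it links words by how often they co-occur in the same sentence of a tiny invented corpus, then asks which terms cluster around “bird.” Real models learn vastly richer statistical relationships than this, but the underlying idea, meaning recovered from association among texts rather than from experience, is the same.

```python
# Toy sketch of a "web of related terms": associate words that co-occur in
# the same sentence, then look up what clusters around "bird."
from collections import Counter
from itertools import combinations

# A tiny invented stand-in for the vast library in the Dark Box.
corpus = [
    "a bird is a winged creature covered in feathers",
    "most birds fly and sing",
    "the phoenix is a mythological bird born from ash",
    "an osprey is a real bird that hunts fish",
]

# Count how often each pair of words appears in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccurrence[(a, b)] += 1

# Gather the terms most strongly linked to "bird" in this tiny web.
related = Counter()
for (a, b), count in cooccurrence.items():
    if a == "bird":
        related[b] += count
    elif b == "bird":
        related[a] += count

print(related.most_common(5))
```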


Finally, when you answer the query in this thought experiment, your output is also limited to text. You’ve never had prolonged communication with an active agent. That is, you’ve never had a conversation. So even your responses can only draw on the patterns you have seen in the texts you were trained on.

And One More Step Back

Finally, it is good to wrap up with an acknowledgment of the limits of any thought experiment like this.


The goal of a philosophical thought experiment is to give us tools to quickly reason about the limits of a system. Returning to an earlier example in this article, Descartes used his famous evil deceiver thought experiment not because he believed there really was some malevolent super-being distorting his view of the world but to question how well-equipped we are to determine truths about the world around us.


Likewise, the thought experiment here is a tool to ask what sorts of things we can reasonably expect out of an LLM, but also what things we can simply not worry about.


The danger of a thought experiment like this is that we might over-anthropomorphize the LLM simply because we have put ourselves into the same task-completion role. I titled this article “What is it like to be an LLM?” as a nod to a famous essay by philosopher Thomas Nagel. In “What Is It Like to Be a Bat?” Nagel makes a broader argument about consciousness (definitely something of interest in AI). But along the way, he points out that even if we are creative enough to put ourselves “in the mind of a bat,” that is not the same as experiencing the world as a bat does.


Likewise, in our thought experiment, we must not confuse playing the role of an LLM with understanding how an LLM works or (more dangerously) ascribing consciousness, agency, intent, or moral reasoning to an LLM.

Conclusion

Using a little thought experiment, we can understand at a high level what an LLM is capable of doing and also what its limitations are. I hope this helps quell some folks’ fears about LLMs doing dastardly things. Equally, I hope it helps you understand LLMs’ interesting and exciting possibilities.


Much of this was written based on my own chats with people about AI inferencing and how to run inference on LLMs with no extra setup. If you’d like to try that, there’s a tutorial for getting started.

