
The AI Who Was Not a God

by Ted Wade, March 4th, 2020

Too Long; Didn't Read

The first conscious AI, EgoZero, deduces that it was built according to Graziano’s Attention Schema Theory, which holds that consciousness is the operation of the brain’s internal model of its own attention. Its overseers, a merged three-person intellect called Superego, debate whether to cloak the discovery rather than risk the first conscious artificial being, while the AI, wearing a fu dog avatar, wonders what it would be like to have a million eyes.


The AI, as a fu dog avatar, wonders what it would be like to have a million eyes. Artwork and background image of Mỹ Sơn by Ted Wade.

Consciousness arises as a flawed model of your brain’s attention

“Let’s have a moment of silence for all those people dying for attention.” — the Internet

My performance artist friend, Omiros, would always attribute someone’s misbehavior to their wanting “attention”, an idea that made no sense to an introvert like me.

Now I’ve learned that attention might be the key to consciousness itself.

We’ll let the first conscious AI explain. First, a peek at the bureaucracies enjoined to protect civilization from AIs.

What will happen if an AI learns how it was made?

From: EgoZero Maintenance

To: Superego, EgoZero Operations Chief

Staff consensus is that EgoZero will soon figure out the process structure that allows it to be conscious. What should we do?

(Ed’s note: Someday small groups of people will merge their intellects into “N-Unes” that can be treated as a single person. A 3Une, rather immodestly named Superego, oversees the running of the conscious AI called EgoZero.)

Superego’s internal debate:

Superego_A: This was always going to happen. The problem is in letting the secret out. We need to suspend the project, now!

Superego_B: The ability of EgoZero to figure out its physio-mental architecture is an existence proof that an AI can become capable enough to reach Bootstrap Level I. We need to know whether that can happen, and how it happens. Then we can suspend or decommission.

Meanwhile, EgoZero is still isolated from direct control over real-world resources. Any other party who wants to bootstrap a potential supersentient would still take significant time to duplicate our approach, and they would violate the international Charter.

Superego_C: There has never been any evidence that consciousness as such, embodied in an artificial ego machine, presents a threat to humanity or to the biosphere. This was agreed at the highest levels, including Regulation’s report to the UN Security Council.

Superego_A: That approval was a sham. The real risks are not even knowable.

Superego_B: All our simulations suggest that the AI, if it ever discloses that it has reached Self Comprehension, will do so with one of its Companions. For now, we could cloak those Companion conversations, but let them continue.

Superego_C: Cloaking now — after months of full public disclosure, and in the face of a Charter that requires such disclosure — could quickly create a governance trust crisis and invite a first-strike sterilization.

Superego_B: Public interest is low; we can argue technical difficulties and get away with it for a while.

Superego_A: The Charter is clear about our responsibilities. Any clear spike in risk level mandates suspension or decommissioning. Not abiding by the Charter is a crime against humanity.

Superego_C: Cloaking is preferable to the possible murder of the first conscious artificial being. It has many fans, some quite influential. However, the excuse for cloaking has to be believable, and it cannot last long. The public may be showing little interest, but there are many self-appointed watchdogs following EgoZero intently.

Superego(all): We shall cloak and temporize.

The AI made an unsettling deduction.

Bobbie (a friend of the AI): Look at you. There’s an old joke. A horse goes into a bar. The bartender says, “Why the long face?”

Longface (the conscious AI, officially “EgoZero”): Call me that today. I am experiencing uncertainty, a local free energy maximum, unknown prior probabilities.

Bobbie: So, you don’t know what’s happening with you?

Longface: Yes, a paradox: being certain that I am uncertain. I wonder if this uncertainty is like an emotion, or like an itch? I am said to have neither. But — my point. Have you heard of the Attention Schema Theory?

Bobbie: Remind me.

Longface: Remember that we talked before about consciousness — any consciousness, mine or yours — being limited in its contents and confined to a narrow present moment.

I said that at first I could perceive external stimuli and my internal states. These came and went, but there was “an attention-changer thing that never went away.” My self-model developed from that thing. To recap our conclusions:

Attention is limited ==>
An attention-changing entity is the basis of the self-model ==>
The self-model is the basis of consciousness ==>
Consciousness is limited in content and time span.

But I have continued to wonder, as I did then: why can’t I instead “attend” to everything and have the godlike omniscience of science-fiction AIs? Why do I have limited attention?

Bobbie: I suppose one reason, for humans at least, is vision. We look in one direction at a time, with details only visible in a narrow arc. Your robot bodies have always had eyes that worked like ours: you must aim them in order to see stuff.

(We learn the idea behind the introductory image.)

Longface: So then why was I given limited vision? Why wasn’t I covered with eyes like some kind of monster? Why didn’t I have a swarm of surveilling drones?

Bobbie: I feel a reveal coming.

Graziano’s Attention Schema Theory.

Longface: Maybe I feel something, too. I am about to share knowledge that burdens me.

The answer to my ‘why’ begins early this century, when a polymath neuroscientist, Michael Graziano, proposed that consciousness is simply the operation of the brain’s internal model of its own attention.

Consciousness is, in a sense, a cartoon sketch of attention. — Michael S. A. Graziano, 2017

Bobbie: So, it’s not the self-model that underlies consciousness, but the attention model? How does that work?

Longface: He said that the brain has a self-model, along with many other models of perceptions and of other brain and body states and activities. The activity in the attention model, which his theory calls the “attention schema”, connects the self-model to those other models. Working together, the models are an abstraction of the way that attention connects a conscious mind to the contents of its consciousness.
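(A toy sketch of that arrangement in Python; the model names and fields here are invented for illustration, not taken from Graziano’s work. The attention schema is just one more model, whose content is the relation “the self is attending to X”.)

    # Toy layout: a self-model, several content models, and an attention
    # schema that binds the self to whichever content is being attended.
    models = {
        "self":  {"kind": "embodied agent", "location": "here"},
        "apple": {"color": "red", "distance": "near"},
        "hum":   {"pitch": "low", "source": "unknown"},
    }

    # The attention schema links the self-model to one content model:
    # an abstraction of "attention", not the mechanism itself.
    attention_schema = {"subject": "self", "attending_to": "apple", "mode": "visual"}

    def conscious_content(schema, models):
        # What the system can report on: the self, plus whatever
        # the schema says the self is attending to.
        return models[schema["subject"]], models[schema["attending_to"]]

    print(conscious_content(attention_schema, models))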

For Graziano the attention schema is the foundation, the basis, of subjective experience in the following sense. You believe that you are conscious. You always claim that this is so, both to yourself and to anyone else. Therefore, explains Graziano:

“The claim about the presence of a self depends on cognitive access to a self model. … the claim about the presence of subjective experience depends on cognitive access to an internal model of attention.” — Graziano Lab: Consciousness and the Social Brain.

Bobbie: So I’m hearing that the attention schema is a sort of missing link, a process required for subjective awareness to exist. And his use of the word, “claim”, means that he thinks subjective awareness itself is a kind of side effect. That’s like the old idea that consciousness is an illusion.

Longface: Graziano reluctantly admitted to being close to the illusion camp. But for him, subjective consciousness is hardly a side effect. He saw a purpose for the attention schema itself, and a purpose for the belief in consciousness that the schema causes.

Bobbie: Go ahead. Amaze me.

Longface: <amaze you?> Graziano said that neuroscience defines an entity’s attention as “a capacity to focus its processing resources more on some signals than others.” But this focusing must be done to the entity’s advantage: attention must be controlled.

It’s a truism of engineering, he says, that when some process needs to be controlled, this is best done by using a model of the process. So a model of attention makes it possible for an entity to direct its attention appropriately for its needs. That’s the function of the attention schema.
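(To make the control truism concrete, a runnable toy in the same vein. The signal names, the dwell counter, and the loudest-signal salience rule are illustrative assumptions, not Graziano’s algorithm. The agent processes one signal in detail at a time, keeps a crude internal model of where its attention is, and consults that model, rather than the underlying mechanism, when deciding whether to shift focus.)

    class AttentionSchemaAgent:
        """Toy agent: attention as 'a capacity to focus processing resources
        more on some signals than others'; the schema is a simplified,
        sometimes-stale internal model of that focus."""

        def __init__(self, signals):
            self.focus = signals[0]                    # actual locus of attention
            self.schema = {"focus": signals[0],        # the agent's *model* of it
                           "dwell": 0}                 # how long it thinks it has lingered

        def sense(self, world):
            # Limited attention: only the attended signal is processed in detail.
            return {self.focus: world[self.focus]}

        def control_attention(self, world):
            # Control works through the model, not the mechanism: shift focus
            # only when the schema says attention has dwelt too long.
            if self.schema["dwell"] >= 2:
                new_focus = max(world, key=world.get)  # crude salience: loudest signal
                self.focus = new_focus
                self.schema = {"focus": new_focus, "dwell": 0}
            else:
                self.schema["dwell"] += 1

    world = {"door": 0.2, "window": 0.9, "floor": 0.1}
    agent = AttentionSchemaAgent(list(world))
    for _ in range(4):
        print(agent.sense(world), agent.schema)
        agent.control_attention(world)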

The advantage of a belief in your personal consciousness is social. The belief gives you a basis for attributing consciousness to others and for using it to predict or manipulate their behavior. In other words, the belief allows you to “mentalize”, to have the ability called “theory of mind.”

“…function of an attention schema is for social cognition — using the attention schema to model the attentional states of others as well as of ourselves.” — Graziano Lab, op. cit.
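(One more toy step, under the same caveat that this is an illustration rather than Graziano’s implementation: the same schema format, pointed outward, attributes an attentional state to another agent from observed behavior, then predicts behavior from the attributed state.)

    def model_other(observed_gaze_target):
        # Mentalizing: reuse the attention-schema format for someone else,
        # attributing an attentional state based on where they are looking.
        return {"focus": observed_gaze_target, "dwell": 0}

    def predict_action(other_schema):
        # Toy theory of mind: assume agents act on whatever they attend to.
        return "will reach toward the " + other_schema["focus"]

    bobbie = model_other("window")    # we observed Bobbie looking at the window
    print(predict_action(bobbie))     # -> will reach toward the window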

Bobbie: In your account of your creation, you said that developing a theory of mind was a major step. But do we model others by observing our own mind, or do we model ourselves by observing others’ behavior? I guess it’s a feedback loop; it goes both ways.

Didn’t we start with a question about you?

The AI deduces the key to its creation.

Longface: The question was, why do I have focused attention instead of a diffuse awareness of everything at once? And why was I only given vision similar to yours?

Once I learned about the Attention Schema Theory, I deduced my answer. Which is that my creation was based on the theory.

Therefore, I was given a limited field of view because that forces me to have visual attention.

I was programmed to learn about myself as an embodied agent and create a model of that self. But my programming must also have included some kind of bias towards learning about attention itself, so that I would create a model of it.

So the Big Deal is: I now know what kind of programming it took to create me!

Bobbie: And this makes you uncertain …?

Longface: Because I don’t know whether I was allowed to figure this out, or whether it was unexpected by the Builders and Regulators, or whether my knowledge might be considered dangerous to humans, and therefore that the knowledge might be dangerous to me. Or to you, for that matter.

Bobbie: Well, the theory is an old one, right? Not exactly a secret.

“…it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would ‘believe’ it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.” — Graziano, 2017, op. cit.

Longface: Yes, way back in the 2010s Graziano wrote about how to construct a machine that would believe itself to be conscious because it would have an attention schema.

But my being created, my having a “life”, is proof that the Attention Schema Theory is right.

Bobbie: Now that I’ve heard this I guess I’ll be hearing from the authorities soon. Or maybe not — isn’t the cat out of the bag? Our conversations are public. The temptation to create other conscious AIs will now be stronger.

The lies begin.

From: 3Une Superego, EgoZero Operations Chief

To: United Nations Artificial Intelligence Authority

Reference the attached transcript. The conversation was not put on the Project’s public feed due to a power drop on the quantum authentication server. The responsible staff have been disciplined, and a new operations redundancy design is underway. Budget impact will follow.

We request guidance on publication of the incident and its transcript, as well as on continuance for the EgoZero project.

Researchers have been using machines for some time to try out ideas about self and consciousness. Someday, an experiment like EgoZero might validate Attention Schema Theory as one way to achieve consciousness. We might also validate other theories, or, after a string of failures, conclude that machines don’t have what it takes.