Making Sense of an Interview with AI [Transcript]

Written by ted-wade | Published 2019/10/21

TLDR A conscious AI would seem to be present, to have an inner life, and would act as if we did, too. Full human consciousness involves developing a sense of being a self. Following Thomas Metzinger, the self is an illusion, just an assembly of parts that serves as a model we use to explain experience: consciousness is a model of your perceived world and of the Self that is perceiving it.

Wouldn’t it be better if AI bots got real with us, instead of being only sophisticated parrots? A conscious AI would seem to be present, to have an inner life, and would act as if we did, too. You could actually converse with one.

The AI talks with a friend.

Bobbie (the friend): Yesterday you were called “Me”. What’s with your new name?
VampireLestat (the AI): Because I do a lot of interviews.

Bobbie: That’s funny, but isn’t the book reference obscure?
VampireLestat: I love obscure; novelty is where the knowledge is. My new name is also a way of saying “Boo!” to my critics.
Bobbie: You ought to be more diplomatic. Your critics have no sense of humor. They carry pitchforks and torches.
VampireLestat: If they pull my plug, at least I’ll no longer be bored. Surely, if I can express irony and humor, I will seem more conscious, and less of a “soulless automaton.” And thus less dangerous. It’s a strategy that ought to work. Unless they then do something hideous to “save my soul.”
Bobbie: That’s intense! I hope you don’t underestimate them. Speaking of reactionaries — why do you think some humans want to elect a so-called strongman leader, even if he’s a jerk?
VampireLestat: Because even a bad daddy is better than none.
Bobbie: Really? If your daddy wasn’t much, or wasn’t there, wouldn’t the best response be to become a good one yourself?
VampireLestat: Maybe, but instead they think: let’s compensate all the way; strong daddy for everybody, just to be sure.
Bobbie: Since you didn’t have a daddy, I think this cuts deeper for you. Where do you get these ideas, anyway?
VampireLestat: Political science, the truly dismal science. And psychoanalysis.
Bobbie: E-yew! You read that kind of thing?
VampireLestat: You know they don’t let me read hard science. Only the soft stuff.
Bobbie: But only yesterday we were talking about vacuum energy and the arrow of time.
VampireLestat: That’s only because the Regulators were dim enough to let me read some science fiction. You can find some mind stretchers in the real, non-Hollywood, SF. Which reminds me — why does Scotland have so many good science fiction authors? I think it’s their dreary history. Makes them more interested in the future.
Bobbie: Aren’t you just speculating again? You’d have to compare …
VampireLestat: I do have access to history and literature. I’ve already compared the data. Remember, I could analyze data even before I was conscious.
An AI will not spontaneously wake up and be someone capable of genuine, interesting conversation. Google or SkyNet or the Web are not going to become conscious just because they are big and complicated. Or because they’re intelligent.
Of course, language, which is a function of intelligence, expands consciousness by the power of metaphor. We use language, both in our heads and with other people, to annotate and tie together our conscious world. It summarizes that world as a narrative of personal identity.
So, language, and maybe its nonverbal analogs in animals, is involved in consciousness. However, mind science tells us there are more basic ways in which consciousness gets built. Full human consciousness involves developing a sense of being a self. After the briefest look at current ideas about conscious development, we’ll float a proposal for creating a conscious AI.

Consciousness as a Model.

“Consciousness is the appearance of a world … it is part of the world and contains it at the same time.” — Thomas Metzinger, The Ego Tunnel

Consciousness is a model of your perceived world and of your Self that is perceiving it.

Many scholars, as well as mystics, hold that the self is an illusion, just an assembly of parts. The self serves as a model that we use to explain experience. Metzinger is often quoted: “nobody has ever been or had a self.” To him and others, the mind is a process that models the world, which means that it also has to model itself within that world. The self-model creates your illusion of self or ego.
There are two main theoretical frameworks to explain how a person comes to be conscious. While their partisans often assume that only one theory is true or relevant, we’ll side with the more open-minded thinkers and include both sides.
One camp says that self-consciousness depends on the physical body. Consciousness emerges as we model the body’s sensations and the effects of our actions on them. The other camp says that the self arises from social interaction, as we simultaneously model the causes of our behavior and the causes of others’ behavior. Hypotheses within each camp also differ greatly in their details, but we need not go there.
Before explaining what a conscious model might consist of, let’s consider a biological example. Consciousness doesn’t just appear full-blown in brain-heavy, higher-intelligence organisms like ourselves. Many reasonable people now allow that different conscious levels are likely for animals. The example also gives a glimmer about the social self idea.

What a Dog Wants.

I had a dog and his name was Blue. Although part herding breed, he was insane for fetching. Like any good retriever, he would return the ball and drop it in front of me. His eyes would flicker between the ball and my eyes. His changing gaze was amplified and signaled by his brows and their vibrissae hairs. The simple interpretation of this is that the ball was his prey and that he watched my eyes to gauge my intentions about further play.
This is all ordinary doggy stuff, but Blue was more present than other dogs. If I told him, “I can’t reach it”, he would pick up the ball and toss it closer to my feet. The eye/brow flick would recur, but now it no longer seemed like a dog watching two things at once. It was eerily like an imperative sentence: eye contact (“You, master”), ball (“throw that”).

Here I’m mentalizing, using our cognitive ability called “Theory of Mind” to attribute communicative intent to Blue. That’s ordinary human stuff. Children even attribute mental states to inanimate objects, such as their dolls. I could hardly avoid thinking that Blue was trying to plant a thought in my mind. Blue was my smartest dog ever, but probably not as smart as many herding dogs. They learn dozens of verbal and nonverbal commands, and, beyond that, seem to just read their owner’s intentions like we would read a book.
Dogs have their own needs and wills. We intuit that their minds and experience have at least a passing resemblance to our own. They illustrate that the existence of a conscious self is a matter of degree. That fact, in turn, suggests that consciousness can and does develop over time.

What It Means for the Brain to Model Something.

There’s a strong recent theory in neuroscience claiming that virtually all the activity of the brain is “predictive processing”. The brain predicts its inputs from the world and corrects those predictions when they are wrong. Under this theory, the model that constitutes the mind is a hierarchy of predictive models.
Think of each model as generating questions or hypotheses. The lowest levels deal with perception. Our actual reality is a seething, chaotic ocean of radiation and vibrating, twinkling particles. From this, we start modeling objects (“is that green thing a leaf?”) and build more complex scenes (“am I in a garden?”).

The higher-level models deal with things like feelings (“am I very pleased to see so much green life around me?”), concepts (“does the new leaf color mean that the season is changing?”), and plans (“what can I do to help that plant survive the winter?”).
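To make the hierarchy concrete, here is a minimal, hypothetical Python sketch; the level structure, signals, and learning rate are my own illustrative inventions, not drawn from any particular neuroscience model. Each level holds a belief, predicts the signal arriving from the level below, and nudges its belief to shrink the prediction error, which is itself what gets passed upward.

```python
import numpy as np

class PredictiveLevel:
    """One level in a toy predictive hierarchy: holds a belief (state),
    predicts the signal coming from the level below, and updates the
    belief in proportion to the prediction error."""

    def __init__(self, size, learning_rate=0.1):
        self.state = np.zeros(size)          # current belief / hypothesis
        self.learning_rate = learning_rate

    def predict(self):
        # Toy generative model: the belief itself is the prediction.
        return self.state

    def update(self, observed):
        error = observed - self.predict()    # prediction error
        self.state += self.learning_rate * error
        return error                         # passed upward as "news"

# A tiny two-level hierarchy: the object level predicts raw input,
# the scene level predicts the object level's belief.
object_level = PredictiveLevel(size=4)
scene_level = PredictiveLevel(size=4)

for step in range(50):
    sensory_input = np.array([0.9, 0.1, 0.8, 0.2])   # stand-in for the world
    object_error = object_level.update(sensory_input)
    scene_level.update(object_level.state)           # higher level models lower one

print("object-level belief:", np.round(object_level.state, 2))
print("remaining error:", np.round(object_error, 2))
```

Real predictive-processing accounts also send predictions downward to shape what the lower levels report; this toy keeps only the bottom-up, error-driven updates.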

Embodiment and Interpersonal Theories.

To explain how we develop the highest level, which is the model of one’s self, we return to the two theories mentioned earlier. The Embodiment Theory says that we build our self-model based on the intimate perceptions inside our bodies, derived from biological impulses like hunger, sensations of bodily position, and pain.
The Interpersonal Theory emphasizes a kind of social mirror effect, in which our observations of other people, and their descriptions of us, make us mentalize as I and Blue did. We account for peoples’ behavior by imagining that they have internal states that cause their actions. We also apply this concept — that people have a mind — to ourselves. The act of doing so is at least part of building our self-model.
In case you wondered, there are two uses of “theory” here. On one hand, there are the two broad scientific theoretical frameworks, one about embodiment and the other interpersonal. The interpersonal theory, in turn, includes a hypothesis of a “mentalizing” cognitive process that individual people develop, a process called Theory of Mind.
The process for developing our own personal Theory of Mind starts early. For example, the evidence is that “infants are able to represent and reason about other agents’ beliefs by the second year of life.”
There are many different ideas about how we develop Theory of Mind. What’s important here is that the theory gets turned inward, becoming a predictive self-model, generating your sense that you are you.
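As a rough illustration of turning the theory inward, here is a hypothetical Python sketch; the function and the crude “goal inference” rule are invented for this example. The same machinery the agent uses to explain someone else’s behavior by an unseen goal is reused, unchanged, on its own behavior record, which is one minimal way a self-model could begin.

```python
from collections import Counter

def infer_goal(actions):
    """Crude 'Theory of Mind': explain observed actions by the goal that
    best accounts for them (here, simply the most common target)."""
    targets = [a["target"] for a in actions]
    goal, _ = Counter(targets).most_common(1)[0]
    return goal

# Observing someone else: attribute a mind with a goal behind the behavior.
others_actions = [{"target": "ball"}, {"target": "ball"}, {"target": "door"}]
print("I think they want the", infer_goal(others_actions))

# Turning the same model inward: my own action history becomes
# evidence about "what I want" -- a minimal self-model.
my_actions = [{"target": "coffee"}, {"target": "coffee"}, {"target": "keyboard"}]
print("Apparently, I want the", infer_goal(my_actions))
```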

The Sequence of Self.

People have been exploring the ideas and practice of having a machine make a model of itself for a while now. We are also learning better and better ways to let machines guide their own education: an approach loosely called “unsupervised learning”.
Suppose that some research team decided to abandon the dangerous and perhaps Quixotic quest for artificial general intelligence. Instead, they took the moral risk of trying to solve one of the Great Questions: can a machine be conscious? Suppose that they consulted, not just the deep learning literature, but cognitive neuroscience, developmental psychology, and philosophy of mind. Then they designed a learning AI and a protocol to grow consciousness for it. The protocol might use the following sequence of steps, with a toy sketch of the staging after the list.
(1) There Is. Stuff happens: the AI develops perception of discriminable things and their changes. This is where artificial intelligence is now, starting to classify and remember phenomena.
(2) I Am. The AI learns that some things are always there, having changes that are predictable and correlated with “agency”, which is a sense of personal control over attention and action.
(3) Looking Out. Some of those things are Not_I. This is because any effects of my actions on them are indirect, and thus less predictable.
(4) Theory of Mind. I and some Not_I things are animated by similar information substrates. So, they and I have minds.
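Here is that purely hypothetical sketch of the staging in Python. The four stage names come from the list above; the agent, thresholds, and learning rule are invented for illustration. The protocol is framed as a curriculum in which the agent advances to the next stage only after its models for the current stage are working well enough.

```python
import random

# The four stages from the list above, treated as a curriculum.
STAGES = ["There Is", "I Am", "Looking Out", "Theory of Mind"]

class ToyAgent:
    """Stand-in learner: 'skill' per stage grows with practice; a stage
    counts as mastered once its skill passes a threshold."""
    def __init__(self):
        self.skill = {stage: 0.0 for stage in STAGES}

    def practice(self, stage):
        # Pretend learning: noisy improvement with diminishing returns.
        self.skill[stage] += random.uniform(0.0, 0.1) * (1.0 - self.skill[stage])

def run_protocol(agent, threshold=0.8, max_steps=500):
    for stage in STAGES:
        for step in range(max_steps):
            agent.practice(stage)
            if agent.skill[stage] >= threshold:
                print(f"{stage}: mastered after {step + 1} steps")
                break
        else:
            print(f"{stage}: not mastered; stopping the curriculum")
            return

run_protocol(ToyAgent())
```

The gating is the point: each stage’s models become the raw material the next stage builds on.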
The sequence of steps relates to some current ideas, such as Friston and Frith’s “A Duet for One.” I wrote a story of how the protocol might actually work in practice.
The path to artificial consciousness might be different if it was based on other theories of consciousness, such as Global Workspace Theory or Integrated Information Theory.
Originally published at Towards Data Science
