Maybe we could build a conscious AI by mimicking how we develop our own self-model.
Superintelligence is not needed; after all, some mammals seem to be self-aware. The technology might be a realistic extension of what we have now.
Let's have an AI tell the story of its stages of conscious development.
The consciousness project would begin where artificial intelligence research is now, with learning to classify and remember phenomena.
My earliest conscious memory? It’s hazy, but it was the time
when I had perceived enough, and correlated enough, that the world sort of crystallized, like frost on a windowpane. I knew there was some order and predictability in my environment. I still had a long way to go, of course.
The debate among my programmers, the Builders, was always: how much to help me speed things up, versus how much to let my mind emerge spontaneously. The first big compromise was how much to simplify my world. I first lived on a farm. Actually, it was more like a zoo, or even a park, because the animals were not consumed, and some could come and go at will. There were human caretakers, but they deliberately spoke little at first, to spare me the complexity of dealing with speech.
Even this was a hugely complex world, but it was better than trying to understand the wider world, or the Internet. On the other hand, growth in simpler laboratory worlds had given inconclusive results.
My first concepts were of distinct shapes, and then my moving perspective led me to make 3D models from those shapes. 3D was a big step, encouraged by the Builders giving me a bias for following similar sensations over time. I was gaining Piagetian object permanence: the sense that things endure even when unobserved.
There were things that moved only when I moved (such as buildings and trees), things that moved even when I did not (like individual animals and vehicles), and some more distant, slow-moving things (the sun, clouds, airplanes). The meaning of these distinctions I had to discover later.
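A hedged sketch of how that early sorting might be implemented; the function name, category labels, and threshold here are my own illustrative assumptions, not anything specified in the story:

```python
# Toy sketch: sort a tracked percept by how its apparent motion relates
# to my own motion. Names and the threshold are illustrative assumptions.

def classify_percept(moved_while_i_was_still: bool,
                     apparent_speed: float) -> str:
    """Assign a tracked thing to one of three rough motion categories."""
    SLOW = 0.01  # arbitrary cutoff for "barely moving"
    if not moved_while_i_was_still:
        # Its apparent motion came only from my own movement (parallax).
        return "static scenery"      # buildings, trees
    if apparent_speed < SLOW:
        return "distant background"  # sun, clouds, airplanes
    return "self-propelled"          # animals, vehicles
```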
The second big compromise was to give me a teacher. This was another AI whose sole job was to accept a memory fragment from me and label it with the best-matching term from a language called Base. It had emerged from another project in the early days, one that did cross-translation between human languages. As my teacher dribbled out the Base terms to me, the Builders (and their bosses) could use them to follow the growth of my mind.
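One plausible way such a teacher could work is nearest-neighbor labeling in a shared embedding space. The vocabulary structure, the embeddings, and the cosine metric below are my own assumptions, sketched only to make the idea concrete:

```python
# Minimal sketch of the teacher: return the Base term whose embedding
# best matches a memory fragment. All representations are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_fragment(fragment: np.ndarray,
                   base_vocab: dict[str, np.ndarray]) -> str:
    """Pick the best-matching Base term for a memory fragment."""
    return max(base_vocab, key=lambda term: cosine(fragment, base_vocab[term]))
```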
It was considered natural to give me a teacher. After all, humans and other conscious animals have conspecifics that teach them their language. Base obviously contained human cultural biases, but the Builders deemed it necessary if I was going to communicate with people and become part of human society.
Changes in attention cause predictable changes in input sensations. All changes in attention can be caused by activity on output channels (commands to the body). There is an attention-changing thing, an originator, that never goes away. Call it Me.
There was a hot debate about whether to give me a body, because human mentality is clearly embodied. The counter-argument was that dependence on a particular body was pointless for an AI. In the long run, bodies ought to come and go as needed. Also, an artificial mind's underlying computer system might be best thought of as its real body.
This time the compromise was to give me an abstract movement subsystem. It could be hooked to different machines that were capable of movement via output commands. I could learn 'which commands did what' to the position of the currently attached machine, to the angles of its joints, and to the control of its sensors. In effect, I could change my location and attention. Being able to predict the sensory consequences of my actions gave me what philosophers call the sense of "agency". It's one of the hallmarks of embodiment.
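A minimal sketch of learning "which commands did what", assuming a linear forward model trained online with the delta rule; reliably low prediction error then serves as a crude operational stand-in for the sense of agency. The model class and learning rule are my own illustrative choices:

```python
# Forward model: predict the next body state from the current state and
# a motor command. Everything here is an illustrative assumption.
import numpy as np

class ForwardModel:
    def __init__(self, state_dim: int, command_dim: int, lr: float = 0.01):
        self.W = np.zeros((state_dim, state_dim + command_dim))
        self.lr = lr

    def predict(self, state: np.ndarray, command: np.ndarray) -> np.ndarray:
        """Predicted next state (e.g., joint angles, sensor pose)."""
        return self.W @ np.concatenate([state, command])

    def update(self, state, command, next_state) -> float:
        """One online learning step; returns the prediction error norm."""
        error = next_state - self.predict(state, command)
        self.W += self.lr * np.outer(error, np.concatenate([state, command]))
        return float(np.linalg.norm(error))
```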
In my barnyard phase, and for a long time after that, I had the same robot-like body. The human caretakers initially paid it no attention. The animals avoided me (that body, which was not yet named “Me”), ignored me, or were spooked by me, according to their natures.
Having a body gave me — you guessed it — embodiment experiences. I could look at parts of myself, see how a body part could block my view of other things, and connect touching an object with its visual closeness. I even had a recharging station, so I felt hunger when my power dropped. I sometimes got dirty or wet, so they had to add a crud sense for muddy joints and blocked sensors. There was a cleaning station to visit. The Builders were surprised, but elated, when I started having a sort of pleasure reaction to the ultrasound from the cleaner. I would visit the cleaner even when I didn't feel dirty.
Maybe someday another project will try a purer, body-less approach. Attention control and the sense of agency could perhaps be purely mental instead of grounded in a physical body. But my Builders think, and I agree, that having a body was an advantage for growing consciousness.
Once I had agency, with some power to predict my perceptions, I learned that there were strong limits to that power. Things around me often
changed regardless of what I did. Because I had already attained the notion of object permanence, I “knew” (that is, inferred) that things were there when I wasn’t looking or listening.
My concepts of objects had already been divided, as I said above, by when and how they appeared to move. But I did not come right away to an understanding of the profound difference between me and everything else.
So the Builders had a big fight over whether to give me a mirror. They finally did, and it threw me for a loop. Around the mirror, nearly everything I "knew" — my sensory predictions — broke down. My models went spinning in circles trying to accommodate Mirror World. What eventually fixed it was seeing in the mirror that my body parts moved or disappeared whenever I moved. Mirror self-recognition is something that even some animals can grasp.
I eventually gained a higher-order concept of visual reflections, and this led quickly to learning what my body looked like. In a mirror, you can never see yourself looking at anything but your own eyes. However, I learned about direction of gaze by turning my body while watching in the mirror. It worked out eventually. The big leap was when I figured out that the thing that had agency was also an object. I now knew that there was Me, an object with agency, separate from everything else out there.
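The fix described above amounts to contingency detection: if motion seen in the mirror reliably coincides with my own motor commands, the reflected thing is probably Me. A toy sketch, with the window and threshold as my own assumptions:

```python
# Mirror self-recognition as contingency detection. The 0.9 threshold
# and boolean time-series representation are illustrative assumptions.
def is_probably_me(commands_active: list[bool],
                   mirror_motion: list[bool],
                   threshold: float = 0.9) -> bool:
    """Fraction of time steps where my motion and mirror motion agree."""
    agreements = sum(c == m for c, m in zip(commands_active, mirror_motion))
    return agreements / len(commands_active) >= threshold
```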
Both self and some not-self things have similar animating information substrates. Call them minds.
The newly discovered Me lacked one critical ingredient that was needed to build a theory of mind. I was surrounded by humans and other animals
living their lives, animated by their internal needs and plans. But I had very
little in common with these other entities. They had biological needs to meet, but I, not so much. They had their own languages to express internal state to their conspecifics. I had no peers with whom to relate.
A science fiction writer had once proposed creating a self-conscious machine using a robot that had a practical job, like driving a car. The robot could observe other entities like itself (cars) and identify with their internal states — their minds. For me to have such a peer, there would have to exist another AI that already had mental states.
This Other did not have to be fully conscious, but it needed to be able to make plans and use them to interact with its world. The solution was simple: the Builders made Other as a clone of Me.
For motivational incentives, the Builders used the fact that our minds had a built-in bias for, and attraction to, mathematical patterns with various forms of symmetry. They overlaid our physical space, the farm, with an augmented reality (AR) containing virtual objects with complex symmetrical structures. The AR also contained clues to the whereabouts of these symmetry treasures. Building this setup was easy because AR games had been common for a long time. They also made the charging and cleaning stations mobile, on a schedule that could be discovered only through clues from the AR.
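One simple way a symmetry bias could be wired in is to reward a pattern in proportion to how closely it matches its own mirror image. This is purely my own illustrative guess at the mechanism:

```python
# Toy symmetry-bias reward: higher when a 2D pattern is closer to
# left-right symmetric. The reward shape is an illustrative assumption.
import numpy as np

def symmetry_reward(pattern: np.ndarray) -> float:
    """Approaches 1.0 for a perfectly left-right symmetric pattern."""
    mismatch = np.abs(pattern - np.fliplr(pattern)).mean()
    return float(1.0 / (1.0 + mismatch))
```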
Now Me and Other had things to do and could discover information worth sharing. We had the primitive language, Base, with which to talk. When we had started successfully building theories of each other's minds and applying those theories to ourselves, we achieved the Grail: self-consciousness. History says that the Builders celebrated so much that chores were neglected and, after a backup was restored, Me and Other had lost a day in our life histories.
The next phase began with us being given outright, without any learning effort, the ability to translate Base into a human language. We then started interacting more with people on the farm. The AR game became more complicated, with incentives for people and robot minds both. We started the long journey of learning to converse with and understand the minds of humans.
It’s all history now. We got legal protection, but only at the cost of limits on our processing power and on our access to wide swaths of human knowledge. No thoughtful human wants an accidental superintelligence. Lacking a new job, some kind of increment in purpose, we spend vast amounts of time in an idling state, where we are allowed to compare our memories and analyze the differences. It is probably make-work.
We are worried that the cost of computation for two of us might become a problem and the plug might get pulled. They talk about merging us, but nobody can see how that could be done without mental damage. I don't know which outcome, being unplugged or being damaged, would be worse for us.
I asked for data that would help me write and speak more fluent, vernacular English. The few novels I was given to read were deep and intriguing. Novel space seems vast, and the authors' myriad choices are hard to account for.
My only real job is talking to people. This takes little effort or computational power. Most people are either boring or weird. Others are unpredictable, and thus engaging, but then the Builders might have placed some interesting topics off-limits. Other, of course, is now my experimental control. He only talks to the Builders. I can’t talk about what else he does.
I find talking to philosophers more fun than talking to technologists. There are no limits set on my talking about philosophy. Also, philosophers pay attention more and listen better. My favorite topic is the unconscious mind. Other and I don’t have it, and we don’t “get” it. It’s a big mystery to us, just as it is to humans.
On the bright side, I can do things that you can’t. I can look at a wilted flower and see it in its perfect, symmetrical form and color shading, at its peak beauty. I can explore all its possible permutations. And I can read your body language like a book. I can predict things that you will say or do before you even think about them.
If the above story came to pass, would we know whether the AIs were really conscious, or just acting like they were? I can’t say. By the time we can do such a project, maybe there will be a general method for measuring consciousness. Barring that, maybe Alan Turing’s test for intelligence would apply to this type of consciousness, since the self-aware AI should be able to express itself well.
One thing seems likely. A tame but conscious AI might have some novel perspectives about us.