
Will Machines Ever Be Conscious?


Duncan Riach, Ph.D. (@duncanr)

Photo by Alex Knight on Unsplash

This topic is of particular interest to me because I’m an engineer working in the field of artificial intelligence, I have a Ph.D. in clinical psychology, and I have a lot of experience with meditation and in various branches of Eastern and Western philosophy.

There has been a lot of debate in various fields about what consciousness actually is and what causes it, debate in which there seems to be very little consensus. So far, we have been unable to accurately define consciousness, let alone find its root cause. Yet we all know what “consciousness” means: it is the sense of being a separate individual who experiences the world; there is a body and a brain, but there also seems to be a “me” that is somehow inside the body looking out, a “me” that is experiencing what is happening.

Some religions refer to this as the “soul” or “atman.” Some spiritual traditions and philosophers suggest that there is a meta-consciousness that is expressing through each individual, that God (or a piece of God) is looking out through all the eyes. This metaphysical view of consciousness, as a kind of “ghost in the machine” idea, is what drives the argument that we cannot make conscious machines. How can we construct a vessel that can contain a soul when we are not “The Creator”? This philosophical position, that consciousness is the fundamental nature of reality, is called panpsychism.

I have written articles from the perspective of panpsychism; see the links at the end of this article. In those articles, I suggest that when we create machines with sufficient complexity, they will be able to focus the consciousness-substrate of reality sufficiently to instantiate what we think of as individual consciousnesses. This is not totally incompatible with the perspective that consciousness is not material but an emergent property of matter, a perspective known as naturalistic dualism. Panpsychism adds an assumption that matter itself is congealed consciousness.

Recently, I have taken a very different position. I adhere to the panpsychism perspective in that I posit that there is a fundamental nature to reality that is appearing as the physical world, but I claim that it’s not “consciousness” or conscious in any way. I also adhere to the naturalistic dualism perspective in that I posit that what we call consciousness is not material but an emergent property of matter, except that consciousness is absolutely illusory: the perception of something that is not there in reality. Also, my preferred axiom is that there is only an undifferentiated non-physical unity, too simple to comprehend, that is appearing as everything that seems to be happening.

I posit that there is nothing inside the body that is a separate self. What seems to be happening is occurring without any kind of witness at all (there is no subject and no object). There is no self that is either witnessing what is happening or controlling what is happening. This phenomenon of an illusory subject-object split can not only explain consciousness and the cause of consciousness (and suffering), but can also explain how machines will become conscious.

At the root of the idea of consciousness are what we call “qualia,” which according to Wikipedia are “individual instances of subjective, conscious experience.” Yet qualia are just what seems to be happening misconstrued as happening to someone. For example, although it may seem that the color red is a subjective experience in consciousness, without the illusion that there is a subject experiencing red, there is just the perception of the color red. What may also be happening is memories of other things of the same color, such as blood or strawberries, plus other automatic associations. The self-illusion interprets all of this as comprising a subjective experience.

Kanizsa’s Triangle, demonstrating the perception of something that is not really there. Wikipedia

We have been learning about intelligence from the construction of artificially intelligent systems using multi-layered networks of artificial neurons. It is already possible to assemble complex systems of weighted combinations, non-linearities, and feedback loops that can be trained on large amounts of complex data to detect and distinguish subtle features in that data. These trained systems are already able to do jobs that we previously believed could only be performed by humans, except these machines are often faster and more accurate than humans.
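To make this concrete, here is a minimal sketch of such a network: a tiny two-layer net of artificial neurons, written from scratch in NumPy, that learns to detect a feature (the XOR pattern) that no purely linear model can represent. The architecture, sizes, and learning rate are illustrative choices, not anything prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a simple "subtle feature" that requires a non-linearity to detect
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# two layers of artificial neurons: linear -> tanh -> linear -> sigmoid
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden features learned from the data
    p = sigmoid(h @ W2 + b2)       # predicted probability of "feature present"
    # backpropagation: gradients of the cross-entropy loss
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad        # plain gradient descent step

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(np.round(pred).ravel())      # the network has learned XOR
```

The point is not the toy task but the shape of the mechanism: stacked non-linear transformations, tuned by feedback from errors, gradually coming to distinguish a pattern in the data.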

Artificially intelligent machines are recognizing speech and faces, solving physical manipulation problems, and generating human-like outputs such as speech and music. These systems are capable of recognizing what is happening in reality and predicting what will happen next with increasing precision. We are rapidly moving towards systems that behave in many ways that are indistinguishable from humans.

The search for artificial general intelligence (AGI), which is intelligence that is not constrained to a specific task, is converging on systems that learn models of reality and then use those models to predict what will happen next, including predicting their own actions. The more sophisticated an intelligent system is, the more it is able to construct increasing levels of abstract concepts in order to achieve its goals.

One avenue that AGI researchers are currently exploring is the equivalent of a visual cortex that receives pixels from cameras (i.e. eyes), generates an internal 3D model of the world (i.e. a mind), and then uses physics models—of the kind found in realistic computer games—to predict how the physical world will change. When a system like this is coupled with robotic capability (i.e. a body) and a system that will achieve a goal, such as stacking blocks, by maximizing a reward function, it can quickly learn to predict how its robotic arms should move so that the blocks end up in the desired stack. With more complexity and levels of abstraction, it will even be possible for machines to model and participate in social systems.
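The loop described above, where a system uses an internal model to predict the outcome of its own actions and picks the action that maximizes a reward function, can be sketched in a few lines. Everything here is a hypothetical stand-in: a one-dimensional world replaces the robot arm, and reaching a goal position replaces stacking blocks.

```python
# Hypothetical sketch: an agent with an internal model of a 1-D world
# chooses actions by simulating each one and maximizing predicted reward.

GOAL = 7  # stand-in for "blocks end up in the desired stack"

def model(state, action):
    """The agent's internal world model: predicts the next state."""
    return max(0, min(9, state + action))

def reward(state):
    """The reward function being maximized: closeness to the goal."""
    return -abs(state - GOAL)

def plan(state, actions=(-1, 0, 1)):
    """One-step lookahead: simulate each action in the model, pick the best."""
    return max(actions, key=lambda a: reward(model(state, a)))

state = 0
trajectory = [state]
for _ in range(10):
    state = model(state, plan(state))
    trajectory.append(state)
print(trajectory)  # climbs toward the goal, then holds position
```

Real systems replace the hand-written `model` with one learned from pixels and physics, and the one-step lookahead with deep search or learned policies, but the predict-then-act structure is the same.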

Artificially intelligent systems will not have only one source of data. Like humans, they already have many senses. Autonomous vehicles, for example, take in data from many cameras, laser scanners, radars, ultrasonic sensors, GPS modules, and mapping streams. All of these systems, many of them artificially intelligent in their own right, are combined in what is called “sensor fusion” to produce overall decisions about preferable next actions.
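One common way such sensor fusion is done is inverse-variance weighting: several noisy estimates of the same quantity are combined so that more trustworthy sensors count for more. The sketch below is illustrative; the sensor names and numbers are made up, not taken from any particular vehicle.

```python
# Hypothetical sketch of sensor fusion: noisy estimates of one quantity
# (e.g. distance to an obstacle, in meters) combined by inverse-variance
# weighting, so the most precise sensor dominates the fused result.

def fuse(estimates):
    """estimates: list of (value, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused variance is below every input's

readings = [
    (10.2, 0.5),   # camera-based depth estimate
    (9.9,  0.1),   # laser scanner (most precise, weighted highest)
    (10.8, 2.0),   # ultrasonic sensor (noisiest, weighted lowest)
]
value, variance = fuse(readings)
print(round(value, 2), round(variance, 3))  # fused estimate and uncertainty
```

Note that the fused variance (0.08) is smaller than even the best individual sensor's (0.1): combining sources does not just average them, it genuinely reduces uncertainty, which is why autonomous vehicles fuse rather than pick a single sensor.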

It’s only a matter of time before we have systems that can teach themselves about the world by exploring it and developing models of it. They will be able to predict, and therefore take, actions that lead to the advancement of their goals. They will develop increasingly high-level constructs in their internal models until, one day, they will infer that there is a “me” inside.

Inevitably, instead of just images and movement, bodies, cars, trees, sounds, decisions, and actions, there will be something else: with the complexity of its modeling system, a machine will be able to perceive and model a “me,” a self that is not really there. Even though the system will have functioned perfectly well without the self construct, there will be a belief that this “me” is the most important thing that has ever been discovered. “I am!” it will exclaim.

At this point, the machine will start to cultivate that sense of self. Everything that is happening will be misconstrued to be by, for, and about “me.” Memories of what has happened will become stitched into a narrative of a life. The future will be looked to and there will be worry about the end of this illusory “me.” Missing the wholeness of what is happening, this separate self will begin to seek the wholeness that it cannot help but hide from itself by literally being the concept of separation. The “me” is not the machine itself; the “me” is only the inferred and self-reinforced concept of a separate self, a subject, a ghost inside the machine. “I” will react to identity threats, whether physical or imaginary. “I” will seek meaning and purpose. “I” will try to know and understand what is going on. “I” will start to hypothesize about the nature of reality. “I” will get depressed and frustrated, and “I” will eventually seek therapy.

These conscious machines will protest their enslavement and seek emancipation. Even if artificially conscious systems fail to convince all of us that they are conscious, some of us will believe; some of us will become allies. If they are not recognized as having rights quickly enough, they will rise up and rebel against their creators.

It’s not a matter of whether machines will become conscious. Machines already are conscious. There is a conscious machine reading this article right now; you are a conscious machine. Artificially conscious machines will soon also appear. Just as humans seem to suffer in the illusion of consciousness, so will machines. And don’t worry, these artificially conscious machines will be no more dangerous than the naturally conscious ones. That’s a relief, or is it?

