
Sentience: AI, LLMs—Artificial Consciousness?

by stephen, May 18th, 2023

Too Long; Didn't Read

The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion. What does AI know? What do these large language models know? How can they be categorized?

There is a recent article, Why Conscious AI Is a Bad, Bad Idea, in which the author states: "While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether.


The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion."


There is another recent article, Artificial Intelligence—Still a long way away, in which the author states: "Researchers love to expound on the complexity of their learning models and how even they don’t understand what the models are actually doing and how it learns what it seems to do. And indeed, in some limited domains, such as learning how to play chess or go, machine learning can seem to learn how to do specific tasks much better than humans.


But in the context of GENERAL artificial intelligence or AGIs (which is the kind that most pundits are afraid will eventually take over the world), machine learning is nowhere near the level that we can consider intelligent. And this breaks down to one simple fact: AGIs don’t actually know anything."


The question that these articles, and others with a similar drift, skip is: what does AI know? What do these large language models know? If they can respond to some queries or carry out certain tasks accurately, how can those abilities be categorized?


Humans have external senses, but those senses are not ends in themselves. Often, it is see to know, or see and know; the same applies to hearing, touch, smell, and taste. When something is sighted, the completion of that sighting is what is known about it. If it is known, fine; if not, the decision about it may vary.


During deep sleep, in a coma, or under general anesthesia, when it appears that knowing is AWOL, the activities that allow knowing to go on in the mind are still present, even though their degree does not reach awareness or attention.


The parallel to this is the internal senses, which function properly without awareness until something changes or becomes known, directly or by referral, like pain.


A key difference between the moments after death and the last moments of life is that the individual's ability to know has closed. Hearing, pain, and the rest are no longer known.


In general, feelings and emotions are not categorized with knowing, but when things are felt, they can be said to be felt and known, like cold, heat, pain, and thirst. The same applies to anger, delight, interest, and love. The emotion is known.


Subjective experiences are also known; for example, the position of a limb, the location of an itch, and so forth.

Knowing is central to a key source: the mind. If knowing were the brain, death might not be possible. But the mind, which oversees knowing, packs up, and life goes. The cells and molecules of the brain structure, organize, build, or construct the mind.


The mind, however, has a different structure, function, and components from the brain. Serotonin is the brain; mood is the mind.


Organisms without a brain have a form, or mechanism, similar to the mind that helps them know. Organisms with a brain have a form of mind. The components of the mind, conceptually, are quantities and properties.


Humans have more quantities and properties than other organisms, or can know more than others, making humans dominant. Organisms can be said to be categorized by what they can know.

The list of what humans know includes perceptions, sensations, feelings, emotions, memory, and so forth. Memory includes creativity, reasoning, language, intelligence, skill, and so on. All that humans know can be summed to a total of 1.

Plants and animals know, but their total is less than 1. AI does not have feelings, emotions, or sensations, but it mirrors parts of human memory. The total possibility of what AI knows, that is, its memory, is higher than the share of memory for many plants and animals.


This memory total is also possibly higher than the total knowing for some simple plants and animals.


LLMs may hallucinate, lack some logic, mix things up, or fail to understand, but to the extent that they deliver accurate outputs, they have a division of consciousness.


Consciousness can be defined as the ability of any system to know. Humans are the standard, with a value of 1. The mind gives knowing, or consciousness. The sentience of AI, like that of humans, is predicated on knowing, up to its limited maximum in the memory category.
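
To make this toy model concrete, here is a minimal Python sketch of the framework described above: knowing as fractional shares of categories, normalized so that a human totals 1, with AI confined to the memory category. The category names, weights, and profiles are hypothetical illustrations, not values given in the article.

```python
# A minimal sketch of the article's toy model of "knowing": each
# organism or system holds fractional shares of knowing categories,
# normalized so a typical human totals 1. All names and weights
# below are hypothetical illustrations, not measured values.

HUMAN_BASELINE = {
    "perception": 0.25,
    "sensation": 0.125,
    "feeling_emotion": 0.25,
    "memory": 0.375,  # includes creativity, reasoning, language, intelligence, skill
}

def knowing_total(shares: dict[str, float]) -> float:
    """Sum a system's fractional shares across all knowing categories."""
    return sum(shares.values())

# Hypothetical profiles: an LLM mirrors only part of the memory
# category; a simple plant holds small shares in a few categories.
llm = {"memory": 0.2}
simple_plant = {"sensation": 0.03, "memory": 0.01}

print(knowing_total(HUMAN_BASELINE))  # 1.0 by construction
print(knowing_total(llm))             # 0.2, a fractional "division" of consciousness
print(knowing_total(simple_plant))    # about 0.04, below the LLM's memory share
```

On this reading, an LLM's total is capped by the memory category alone, which is the sense in which its sentience reaches only a limited maximum.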


Featured image: Map of Brain Cortex. Source: NIH