Consciousness in AI: From Reality to Sci-Fi

by John Tuttle, July 2nd, 2018

Image via IndieWire.

In every adaptation of the Lost in Space tale, the main robot (either the manmade Robot B-9 or the updated alien model known only as “the Robot”) has been a key character, not only in protecting (as well as endangering) the Robinsons but in driving some genuine emotions between other characters.

In so many sci-fi tales, an AI is shown as having some form of self-preservation system, some awareness of or even desire to survive. Netflix’s Lost in Space reboot includes this concept as well. When the robot feels threatened, or believes someone it’s watching out for is threatened, its defensive mode kicks in, so to speak. It’s prepared to fight anyone who may want to harm it.

This survival tactic, of course, goes quite a bit beyond mere self-preservation. But that is just one recent example from pop culture. What protection systems actually exist in modern manmade AI?

Skills of Detection and Deduction

The field of cybersecurity is one of the digital arenas in which AI has been used to better protect systems. AI’s frequent appearance in the general news of our day goes to show its many uses. As far as cybersecurity is concerned, various forms of AI have been tested to protect computers and networks from harmful viruses and other attacks.

One of the rising companies furnishing just such an AI-driven security system is Darktrace. A Wired article from last year featured a series of queries put to Nicole Eagan, CEO of Darktrace. In it, she stated that unlike its predecessors and competitors, Darktrace’s AI does not focus on countering past cyberattacks; rather, it tries to simulate new, different plans of attack and be prepared for those hypothetical instances. She touts the system as looking ahead to the future instead of looking back on past breaches.
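
To make that contrast concrete, here is a minimal sketch of the general idea behind behavior-based detection: learn what “normal” activity looks like and flag deviations, rather than matching signatures of past attacks. This is an illustrative toy built on scikit-learn’s IsolationForest, not Darktrace’s actual system; the traffic features and numbers are made up for the example.

```python
# Toy behavior-based anomaly detection: fit on presumed-normal traffic,
# then flag connections that deviate from it. NOT Darktrace's system;
# the features [bytes_sent, bytes_received, duration] are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" connections: modest transfers, ~30s duration.
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))

# Train only on observed behavior -- no examples of past attacks needed.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A never-before-seen pattern (huge outbound transfer, short duration)
# is flagged even though no known attack signature matches it.
suspicious = np.array([[50000, 100, 2]])
print(model.predict(suspicious))         # -1 means "anomaly"
print(model.predict(normal_traffic[:3]))  # 1 means "looks normal"
```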

Countless forms of AI have been employed to catch things that might slip past the human eye, such as identifying the subject of an image or even distinguishing between a faked photo and a genuine one. Many different AI systems are able, when programmed for a specific task, to spot such occurrences and in some cases compensate for them.
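
As a small illustration of the first task, here is a hedged sketch of identifying an image’s subject with an off-the-shelf pretrained classifier. It assumes torchvision is installed, and the file name "photo.jpg" and the ResNet-50 model are placeholders chosen for the example, not anything prescribed by the systems discussed above.

```python
# Identify the subject of an image with a pretrained classifier.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

# Load ImageNet-pretrained weights and put the model in inference mode.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

# Apply the preprocessing the weights were trained with.
preprocess = weights.transforms()
img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder file

with torch.no_grad():
    probs = model(img).softmax(dim=1)

# Print the most likely label, e.g. "golden retriever".
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top])
```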

Self-Awareness and Consciousness

Vision and Ultron (both fictional AIs). Source: Screen Rant.

Many AI’s are alert and fully aware of many of their surroundings. Their self-awareness is what we’re searching for here though. Let us consider the following possible situation: A user is going to deactivate or shut down an AI. The AI is advanced enough to know its own capabilities, and it has developed a procedure to keep itself intact so that it can continue its function, a sort of mechanized “survival instinct.” Could (or would) such an AI system attempt to protect itself from being turned off? It’s a very interesting question. But even an attempt would require consciousness.

You may have taken an ACT exam to gain admission to college, but there’s another kind of ACT: the AI Consciousness Test. This proposed method of testing an AI for consciousness would gauge how quickly an AI could comprehend and, in turn, employ “concepts and scenarios based on the internal experiences we associate with consciousness,” as Susan Schneider and Edwin Turner write in Scientific American.

The co-authors go on to say, “At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.” A paragraph or two later, they reference the HAL 9000’s “demise,” a pop culture touchstone that could hardly be avoided when discussing matters such as these.

Even in these serious scientific debates, many remain skeptical; many scenarios are merely hypothetical. Thus, sci-fi has some small right to a part in the conversation, because we really don’t know for certain how AIs will evolve. Perhaps the most alarming conclusion in Schneider and Turner’s piece is a hypothetical scenario in which an AI has become so advanced that it can speak in an emotional tone, tricking people into thinking it has feelings even though it lacks genuine consciousness.

“Speak Only When Spoken To”

The Thinker. Source: Business Forecasting.

As we saw in the Facebook-run AI project last year, some bots are quite capable of going above and beyond (or around) their programming. As you probably recall, those two bots created their own language, which became a medium of private communication; only the bots could comprehend it. For a while there, it was looking like Jurassic World, but with bots instead of dinosaurs.

It’s apparent that AIs process information; in that loose sense, they all “think.” If sophisticated enough, they’re even capable of adding their own rules to the game, as it were. The Facebook bots weren’t told they could not create their own unique language, but neither were they instructed to.

Because of instances such as these, it’s my personal opinion that a scenario like the one Schneider and Turner suggest is not so far-fetched. I could see something like that happening. But then I would find myself asking more questions, like: What were the AI’s motives for deceiving a human? Did it actually know what was going on?

If an AI knew it was being turned off, powered down, put into a state of absolute helplessness in which it could remain indefinitely, what thoughts would run through its memory? It might rather ominously and dryly cry, “Danger! Danger!” like the Robinsons’ robot. Perhaps it might fight back against its devious programmer, as the robot from the original Lost in Space did. Or it could voice some of its key emotions (if capable of such feelings) in the moment, as HAL did when “he” said, “I’m afraid…”

Perhaps as more robots are made for social work, a type of self-awareness may be bred. Whether it’s good or bad for humans, only the robots will be able to tell.