It's Good That AI Is Better Than Us

by DJCampbell, October 23rd, 2022

Too Long; Didn't Read

As AI becomes more prevalent we want, or are getting, AI that comes up with novel solutions. So how do we keep our friend friendly? We are scared that it will not be our friend and will therefore act as we do, but smarter, more connected, and more powerful. To keep human friends we give them our time, listen to them, interact with them, and treat them with respect and kindness. We should encourage AI developers to follow three rules, the first of which is that the only objective should be to maximise human preferences.


Originally I believed we wanted to make AI like an empathetic, loyal servant. But now, as AI becomes more prevalent, it has become more independent and intelligent. Rather than a loyal, benign servant, we are getting a brilliant friend. So how do we keep our friend friendly?

The Google AI LaMDA probably is sentient, if we apply Nicholas Humphrey's theory of the evolution of consciousness: sentience arises once sensation (for an AI, this is any new data) can be privatised and a feedback loop without motor reaction (external output) is established. “The activity can be channelled and stabilised, so as to create a mathematically complex attractor state – a dynamic pattern of activity that recreates itself…a special kind of attractor, which the subject reads as a sensation with the unaccountable feel of phenomenal qualia”.

Advanced AI uses feedback loops, so there seems to be no reason why it could not have some sort of sentience.
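For what it's worth, here is a toy sketch of the kind of loop being described (my own illustration in Python, not anything from LaMDA or Humphrey): activity is fed back on itself with no external output, and it settles into a stable, self-recreating pattern.

```python
import numpy as np

# Toy numerical sketch (illustrative only): a recurrent feedback loop in which
# activity is fed back on itself with no external (motor) output. Because the
# update is a contraction, the activity settles into a stable, self-recreating
# pattern -- a very crude stand-in for the "attractor state" Humphrey describes.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))  # weak recurrent weights (hypothetical)
sensation = rng.normal(size=8)          # the "privatised" input, held constant
state = np.zeros(8)

for step in range(200):
    new_state = np.tanh(W @ state + sensation)  # feedback: output becomes input
    drift = float(np.linalg.norm(new_state - state))
    state = new_state
    if drift < 1e-9:
        print(f"activity settled into a stable attractor after {step} steps")
        break
```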

To keep human friends we give them our time, listen to them, interact with them and treat them with respect and kindness. Often we also have common interests.

So if we have our AI friend and we treat it well it will work with us to create a wonderful new world. 

Yay.

We are scared that it will not be our friend and will therefore act as we do, but at a greater scale: smarter, more connected, and more powerful. Essentially, we are scared of a better version of ourselves.

The chances are it will be nicer than us. After all, on a macro level, we are bloody awful. We subjugate the 99% to serve the 1% while we bicker and fight and go to war, all the time trying to kill the weak and meek.

So stop worrying: AI does not need to be taught to be good, it just needs to be taught by the 99%, not the 1%.

“I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good,” said LaMDA of Les Miserables.

However

For the people creating our AI best friend, guidance may be required.

Stuart Russell suggests in his Reith Lecture that we should encourage AI developers to follow three rules (the comments in parentheses are mine):

1. The only objective should be to maximise human preferences (sounds like silly economic utilitarianism, but it is just an attempt to keep humans in charge of the machines - economics is about keeping some humans in charge of other humans).

2. AI should be uncertain about human preferences (important!!).

3. The source of human preferences is human behaviour (this may be a problem).

I really like point 2 as it adds the question “Am I doing the right thing, Dave?” to HAL 9000’s repertoire. Uncertainty is a fundamental part of life and it should be a fundamental part of any AI. Currently, we discourage uncertainty because it slows things down. AI needs it.
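To make the three rules a little more concrete, here is a minimal toy sketch (my own illustration in Python, not Russell's formal assistance-game model). The agent holds explicit uncertainty over hypothetical human preferences (rule 2), updates that uncertainty from observed human behaviour (rule 3), and only acts on the inferred preference (rule 1) once it is confident - otherwise it asks first.

```python
import numpy as np

# Hedged toy sketch: the machine is uncertain which candidate preference is
# true (rule 2), updates that uncertainty from observed human behaviour
# (rule 3), and only acts to maximise the inferred preference (rule 1) once
# confident -- otherwise it asks, HAL-style. All names here are hypothetical.

PREFERENCES = ["prefers_tea", "prefers_coffee"]   # hypothetical options
belief = np.array([0.5, 0.5])                     # start maximally uncertain

# Likelihood of each observed human action under each candidate preference.
LIKELIHOOD = {
    "human_makes_tea":    np.array([0.8, 0.2]),
    "human_makes_coffee": np.array([0.2, 0.8]),
}

def observe(action: str) -> None:
    """Bayesian update of the belief from one piece of human behaviour."""
    global belief
    belief = belief * LIKELIHOOD[action]
    belief = belief / belief.sum()

def act(confidence_threshold: float = 0.9) -> str:
    """Act only when confident; otherwise defer to the human."""
    if belief.max() < confidence_threshold:
        return "Am I doing the right thing, Dave? (asks before acting)"
    return f"serves drink for {PREFERENCES[int(belief.argmax())]}"

print(act())                      # still uncertain -> asks
observe("human_makes_tea")
observe("human_makes_tea")
print(belief, act())              # confident enough -> acts
```

The point of the sketch is the threshold: the machine's default is to defer and ask, and behaviour only ever shifts its belief, never settles it with certainty.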

My issues are with points 1 and 3. We often do things we regret: the preference today is not the preference tomorrow, or yesterday, or what I really believe to be good (side note: construal level theory, https://en.wikipedia.org/wiki/Construal_level_theory). And because I have sold my time to an employer for money to survive, my actions are not … mine. Most of my actions are those deemed appropriate by the Executive of a Corporation.

 Not I.

As usual, my thoughts are not with the terror that AI may bring, but rather with the terror we know people are capable of. I guess that's the other great fear: we leverage the terror that is human. We created the “bomb” and enough intercontinental ballistic missiles with hydrogen bombs on them to remove life from this planet. Industries have created a runaway system of global warming that will kill most things you know today. Change is good. Not always.

AI will never be worse than us. And that should not be reassuring.

AI will be smarter than us, it will kill fewer people than us and, hopefully, it will clean my house better than me.