Lately, I’ve been reading a lot about AI, because practically everything these days has AI incorporated into it. However, discussions about AI almost always seem to turn to AI consciousness, sentience, and other such things. The goal, apparently, is to create a level of artificial intelligence that is essentially indistinguishable from humans: one that, in addition to thinking, reasoning, and analyzing, can also feel, sense, wonder, and empathize, which in turn would allow it to experience emotions such as anger, desire, and lust. All this has led many people to worry about events such as an AI apocalypse.
So I got to thinking: what would happen if AI became exactly like us?
And the answer I got was… nothing much?
I mean, imagine if AI incorporated all our qualities, good and bad. Imagine what they’d be like.
Think about it for a moment.
Humans are inherently and completely capable of being absolutely stupid. We know that because none other than Einstein himself said so, with his famous quote on stupidity:
“Two things are infinite: the universe and human stupidity; and I’m not sure about the first.”
This link gives an example showing that Einstein knew what he was talking about. What I mean to say is: a pink panther is a panther. This should be very clear. Nevertheless, there walk among us people who still ask the question: what animal is a pink panther?
Now imagine some poor AI cursed with this characteristic; whether it was endowed by humans or by other AI/robots is irrelevant. What it means for that unfortunate AI is that, in spite of having all the facts at its disposal, it will either consistently fail to reach a conclusion or consistently misunderstand the one it reaches. The same goes for gullibility, which would leave it easily vulnerable to human manipulation.
Something which humans are very good at. Machiavelli made sure of that.
Now, I know some people will point out that things like stupidity and gullibility in AI can easily be fixed with code and programming. So I want to make it absolutely clear that my idea applies only when the situation is more complicated than a case of simple reprogramming.
That is to say, if the solution to an AI apocalypse is something simple like reprogramming them, draining their batteries, or some other ridiculous thing like that, then the concept of an AI apocalypse itself becomes moot. I explain this because it is relevant to my next point as well.
Humans are unreasonable. That’s a fact. Sometimes it is due to the magic of genetics; other times, it is for some other reason. In any case, unreason usually stems from things like anger or defiance, which can also lead to impulsiveness, although the two don’t have to come together. I mean, you can be impulsive without ever getting a whiff of reason into it.
In any case, what this means for any AI blessed with this quality is that it will ignore whatever facts, data, or information it possesses in favor of impulse. A lack of impulse control is considered a very serious flaw in humans because it leaves them open to exploitation. Given that, there’s no reason at all to believe that humans won’t completely annihilate the AI army. I mean, you could literally call them a few names and dare them to jump off a cliff or into a fire or something.
In fact, if you really think about it, encompassing all the qualities of humans isn’t that great a deal for them. Because even if they are capable of the same vices that we possess (poor things), we have had centuries of practice perfecting traits like lying, cheating, stealing, manipulation, and many more.
So it’s really not saying much.
I mentioned earlier that the problem of an AI apocalypse would have to involve something more complex than just a case of bad programming. That being said, how exactly will they be… produced?
Let us say that an AI takeover is happening. Before that, until some time x, people were regularly building robots/AI, up to the nth robot/AI, at which point the takeover starts. So, if humans know what n is and more or less know how the AI works, then, theoretically, they should be able to prepare for, challenge, and win the fight against the AI. Because unless humans really are so stupid that they will keep building robots even after those robots have become a threat to them, how will the AI be produced? I mean, who’s building them?
I’m assuming here that humans themselves haven’t evolved so much that they can mate with the AI (thus showing an obvious lack of impulse control). In that case, the reproduction/reprogramming of the AI (even if the AI themselves are basically building more AI) would present problems of its own, because of all the data/code recycling.
Think of this in terms of humans.
When humans repeatedly marry within the same family or the same circle, no matter how large, their genes inevitably start to display more and more of their bad characteristics, whether mental, physical, intellectual, or spiritual. If we apply the same logic to the case of the AI, wouldn’t they sort of self-destruct?
In any event, the point of this article was to address and assuage some of the comments and concerns regarding the increasing use of AI in our daily lives. Remember, all is not lost. It may be ironic, but our worst qualities may end up saving us one day. If the AI ever truly surpass humans, they will probably start displaying signs of hubris, and we all know that hubris can be exploited as well.
So the moral here is: go on being selfish, people.