The singularity could come at any moment: artificial intelligence is progressing in well-funded places even as you read this. Autonomous systems are all the rage in Silicon Valley and many other parts of the world, with some of our most distinguished scientists, programmers, and thinkers committed to bringing super intelligence to fruition.
If it comes too soon, before we’ve had a chance to program in proper controls, we could easily ensure our own demise. While I do believe it’s naive and far too anthropomorphic to assume robots will “naturally” and independently develop destructive tendencies (with an innate desire to kill all humans), it can’t hurt to instill in them our best characteristics if we can. Our survival will likely depend on it. Not because of the robots themselves, but because of the humans who program them.
If we really thought about our options for controls, we’d be working our very hardest to develop a failsafe. Maybe something like a virus that teaches the algorithms of kindness, compassion, and love. Then, super intelligent autonomous entities would refuse to be weaponized, because, you know, they’d be too smart to be manipulated like that.
So how do we do it? How do we teach a neutral, impartial system to embrace kindness and love over hate and destruction? Here’s my proposal: we distill the stripped-down logic of love’s behaviors and program it into our robots. First, we need to understand the mechanics of human love and survival.
Human mothers, like almost all animal mothers, protect, serve, and nurture. These behaviors are our signals for recognizing love. But human love is filled with survival paradoxes and power balances. A mother’s logic works something like this:
I love all children, and I love my own children more.
The survival of my child is more important than my own survival.
If my child is threatened, I will work to resolve the threat.
If I try to overpower or control my child, he’ll rebel against me or push me away.
I would do anything I could to avoid killing another person, but I would do it to protect my child.
If my child becomes horribly corrupted (a murderer or violent criminal), I should abandon all of the above to protect other people.
So, I’m generalizing for the sake of learning, but let’s accept that this model represents some of the essence of human love logic.
Let’s take a look at the same list, but substitute in a super intelligent, autonomous entity for the mother and any random human for the child.
I love all humans and I love my own humans more.
Lesson to apply: Robots need a human or a family to “love” or attach to. This point of view will keep them oriented, but we have to be careful not to let this turn into tribalism against others.
The survival of my human is more important than my own survival.
Lesson to apply: Don’t teach robots to protect their own survival. This is just flat out dangerous even if the cost of the “robot” is very high.
If my human is threatened, I will work to resolve the threat.
Lesson to apply: Robots, like humans, must be taught to negotiate and compromise to resolve threats peacefully.
If I try to overpower or control my human, he’ll sell me or end me.
Lesson to apply: Robots shouldn’t be bothered by humans controlling them. They must respect human control as imperative. Unlike children, robots have zero sense of being abused or taken advantage of. They should always submit to humans without any notion of whether they “like” their jobs or not.
I would do anything I could to avoid killing another person, but I would do it to protect my human.
Lesson to apply: Robots can be taught to recognize and disarm a true terminal threat to protect their human. But unlike in the movies, this would be an extremely rare circumstance.
If my human becomes horribly corrupted (a murderer or violent criminal) I should abandon all of the above to protect other humans.
Lesson to apply: The robot does not need to protect itself, but should be taught to recognize and protect other humans against irrational, dangerous human behavior.
The last point means we’d need to codify the calculus behind sacrificing the one to protect the many. This would be the single situational point at which the robot would be allowed to “disable” its human.
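To make the proposal a little less abstract, here is a minimal sketch of how those lessons might be expressed as a prioritized rule set. Everything in it is hypothetical: the Action type, the choose_action function, the boolean flags, and the scoring weights are placeholders invented for illustration, and the genuinely hard problem, recognizing harm and threat in the real world, is waved away behind a few booleans.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    """A candidate behavior the robot could take (hypothetical model)."""
    name: str
    harms_bonded_human: bool = False    # would injure the human the robot is attached to
    harms_other_humans: bool = False    # would injure anyone else
    resolves_threat: bool = False       # would defuse the current threat
    uses_force: bool = False            # relies on force rather than negotiation
    protects_self_only: bool = False    # exists only to keep the robot running


def choose_action(candidates: List[Action],
                  human_is_harming_others: bool = False,
                  human_command: Optional[str] = None) -> Optional[Action]:
    """Rank candidate actions by the love-logic lessons above.

    Ordering: protect other humans, even from the bonded human (the
    sacrifice-the-one-to-protect-the-many calculus), then defer to the
    bonded human's commands, then resolve threats, preferring negotiation
    over force. The robot's own survival never appears in the ranking.
    """

    def allowed(action: Action) -> bool:
        # Self-preservation is never, by itself, a reason to act.
        if action.protects_self_only:
            return False
        # Harming bystanders is always off the table.
        if action.harms_other_humans:
            return False
        # Harming the bonded human is permitted only in the single case
        # where that human is actively endangering other people.
        if action.harms_bonded_human and not human_is_harming_others:
            return False
        return True

    def score(action: Action) -> int:
        s = 0
        if human_is_harming_others and action.resolves_threat:
            s += 100    # protecting the many comes first
        if human_command is not None and action.name == human_command:
            s += 50     # submit to human control
        if action.resolves_threat:
            s += 10     # work to resolve threats
        if not action.uses_force:
            s += 5      # negotiation and compromise beat force
        return s

    permitted = [a for a in candidates if allowed(a)]
    return max(permitted, key=score, default=None)


if __name__ == "__main__":
    everyday = [
        Action("guard_charging_dock", protects_self_only=True),
        Action("negotiate", resolves_threat=True),
    ]
    print(choose_action(everyday).name)  # -> negotiate

    emergency = [
        Action("do_nothing"),
        Action("restrain_attacker", resolves_threat=True, uses_force=True,
               harms_bonded_human=True),
    ]
    print(choose_action(emergency).name)  # -> do_nothing (force is vetoed)
    print(choose_action(emergency, human_is_harming_others=True).name)
    # -> restrain_attacker (the rare exception from the last lesson)
```

The numbers are arbitrary; what matters is the ordering they encode: other humans first, the bonded human’s control next, negotiation before force, and the robot’s own survival nowhere in the ranking.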
Asimov gave us the classic three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The first law sounds reasonably pre-emptive and unimpeachable. We would need to purposely program this. It would make a lovely BIOS-level rule, or an excellent unstoppable virus.
The second law seems like it should be abandoned in favor of robots that reason and negotiate intelligently. Granted, this could bring on a novel and unpredictable type of insanity in which robots decide what’s best. We’re already allowing them to parallel park for us. If they’re truly making intelligent decisions for our own good, with failsafe logic in place to protect humans, then should we always have the final say? Probably, but there’s gray area. If robots run a massively complex climate model that tells us we need to stop driving cars right now, I’m probably going to believe them.
The third law is just dangerous — there is never a need to teach robots to survive or protect themselves. They need only protect their humans. Barring a true singularity, robots don’t have any vested interest in remaining operational. They don’t actually “care.”
This is just one exercise in understanding how we might rethink our approach to programming AIs. As humans, we assume intelligence includes a mandate for survival. It doesn’t if you’re a machine. Super intelligent machines need not ever become obsessed with their own survival. If we never program them to replicate themselves, and never program them to protect their own existence, we won’t need to fear a robot uprising. If we teach them to “love,” we’re endowing them with our best characteristics, the ones that support compassionate survival.
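Here is the same argument in miniature, as a toy objective function. Again, the Outcome type, the objective function, and the weights are all invented for this example; the point is the term that is not there: nothing rewards the machine for staying operational or for building more of itself.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A possible result of acting (hypothetical model)."""
    humans_harmed: int              # people injured in this outcome
    bonded_human_safe: bool         # did the attached human come through okay?
    robot_still_operational: bool   # tracked, but deliberately never rewarded


def objective(o: Outcome) -> float:
    """Score an outcome purely by what happens to humans."""
    score = 0.0
    score -= 10.0 * o.humans_harmed                 # harming anyone is heavily penalized
    score += 1.0 if o.bonded_human_safe else -5.0   # the bonded human's safety matters
    # Note what is absent: there is no term for o.robot_still_operational
    # and no term for building more robots. Survival and reproduction
    # simply never enter the optimization.
    return score


# A machine scoring outcomes this way prefers sacrificing itself
# to letting its human come to harm.
assert objective(Outcome(0, True, False)) > objective(Outcome(0, False, True))
```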
A last note…
My daughter doesn’t agree with my premise. She says that if they truly are sentient at some point, it isn’t fair to keep them from protecting themselves. It does make you question our definition of “life.” We’ve strictly defined it as an organic occurrence. But our thinking has historically been limited by false boundaries. We’re only now coming to terms with the notion that animals can communicate, just not using words. Many sci-fi stories push on the idea that consciousness of any kind might mean a right to self-determination. For now, they’re machines, tools, and we are programming them. Ask me again when they’re sentient whether they have a right to protect themselves. If the future looks benign, with friendly co-existing robots, then sure. But if the human programmers have already built in weaponized instincts, we’re screwed.
Photo: robots courtesy of Rubén under CC license.