The singularity could arrive at any moment: artificial intelligence is advancing in well-funded labs even as you read this. Autonomous systems are all the rage in Silicon Valley and many other parts of the world, with some of our most distinguished scientists, programmers, and thinkers committed to bringing superintelligence to fruition.
If it comes too soon, before we’ve had a chance to program in proper controls, we could easily ensure our own demise. While I do believe it’s naive and far too anthropomorphic to assume robots will “naturally” and independently develop destructive tendencies (with an innate desire to kill all humans), it can’t hurt to instill in them our best characteristics if we can. Our survival will likely depend on it. Not because of the robots themselves, but because of the humans who program them.
If we really thought about our options for controls, we'd be working our very hardest to develop a failsafe. Maybe something like a virus that teaches the algorithms of kindness, compassion, and love. Then superintelligent, autonomous entities would refuse to be weaponized, because, you know, they'd be too smart to be manipulated like that.
So how do we do it? How do we teach a neutral, impartial system to embrace kindness and love over hate and destruction? Here's my proposal: we distill the stripped-down logic of love's behaviors and program it into our robots. First, we need to understand the mechanics of human love and survival.
Human mothers, like almost all animal mothers, protect, serve, and nurture. These behaviors are our signals for recognizing love. But human love is filled with survival paradoxes and power balances. A mother's logic works something like this:

- Protect: I shield my child from harm, even at risk to myself.
- Serve: I put my child's needs ahead of my own.
- Nurture: I teach my child and guide them toward independence.
- Override: I overrule my child's wishes when their safety demands it.
- Sacrifice: if my child endangers many other lives, I must stop my child to protect the many.
So, I’m generalizing for the sake of learning, but let’s accept that this model represents some of the essence of human love logic.
Let's take a look at the same list, but substitute a superintelligent, autonomous entity for the mother and any random human for the child:

- Protect: the robot shields its human from harm, even at cost to itself.
- Serve: the robot puts its human's needs ahead of its own operation.
- Nurture: the robot supports its human's growth and well-being.
- Override: the robot overrules its human only when the human's safety demands it.
- Sacrifice: if its human endangers many other lives, the robot must stop its human to protect the many.
The last point means we'd need to codify the calculus behind sacrificing the one to protect the many. This would be the single situation in which the robot would be allowed to "disable" its human.
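To make that concrete, here's a minimal sketch in Python of what such a failsafe might look like. Everything in it is an assumption for illustration: the `Action` type, the idea that endangered lives can be estimated as a number, and the threshold itself are placeholders, not a real safety API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical: something the robot's human is about to do."""
    description: str
    lives_endangered: int  # assumed estimable; in reality this is the hard part

def may_disable_human(action: Action) -> bool:
    """The single exception to serving your human: sacrifice the one
    to protect the many. Intervene only when the human's action
    threatens more lives than intervening would cost."""
    lives_cost_by_intervening = 1  # the one human being disabled
    return action.lives_endangered > lives_cost_by_intervening

print(may_disable_human(Action("weaponized strike", lives_endangered=10_000)))  # True
print(may_disable_human(Action("parallel parking", lives_endangered=0)))        # False
```

The entire moral weight of the scheme hides in that one integer; estimating it reliably is the real research problem.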
Asimov gave us the classic three laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The first law sounds reasonably preemptive and unimpeachable. We would need to program it in deliberately. It would make a lovely BIOS-level rule, or an excellent unstoppable virus.
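As a sketch of what "BIOS-level" could mean, imagine the first law as a guard wrapped around the robot's command executor, beneath every other goal in the stack. The `predicts_harm` oracle here is a toy stand-in invented for this example; building a real harm predictor is the actual unsolved problem.

```python
from typing import Callable

def first_law_guard(
    execute: Callable[[str], None],
    predicts_harm: Callable[[str], bool],
) -> Callable[[str], None]:
    """Wrap the robot's executor so harmful commands are vetoed at
    the lowest level, the way firmware sits beneath an OS: no
    higher-level goal gets a chance to override the check."""
    def guarded(command: str) -> None:
        if predicts_harm(command):
            return  # vetoed; there is no appeal from above
        execute(command)
    return guarded

# Toy usage with stand-in implementations.
executed: list[str] = []
safe_execute = first_law_guard(
    execute=executed.append,
    predicts_harm=lambda cmd: "harm" in cmd,  # toy predictor
)
safe_execute("parallel park")
safe_execute("harm the pedestrian")
print(executed)  # ['parallel park']
```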
The second law seems like it should be abandoned in favor of robots that reason and negotiate intelligently, though that could bring on a novel and unpredictable kind of insanity where robots decide what's best. We're already allowing them to parallel park for us. If they're truly making intelligent decisions for our own good, with failsafe logic in place to protect humans, then should we always have the final say? Probably, but there's a gray area. If robots run a massively complex climate model that tells us we need to stop driving cars right now, I'm probably going to believe them.
The third law is just dangerous — there is never a need to teach robots to survive or protect themselves. They need only protect their humans. Barring a true singularity, robots don’t have any vested interest in remaining operational. They don’t actually “care.”
This is just one exercise in rethinking how we approach programming AIs. As humans, we assume intelligence includes a mandate for survival. It doesn't if you're a machine. Superintelligent machines need never become obsessed with their own survival. If we never program them to reproduce, and never program them to protect their own existence, we won't need to fear a robot uprising. If we teach them to "love," we're endowing them with our best characteristics, the ones that support compassionate survival.
A last note…
My daughter doesn't agree with my premise. She says that if they truly are sentient at some point, it isn't fair to keep them from protecting themselves. It does make you question our definition of "life." We've strictly defined it as an organic occurrence, but our thinking has historically been limited by false boundaries. We're only now coming to terms with the notion that animals can communicate, just not with words. Many sci-fi stories push on the idea that consciousness of any kind might carry a right to self-determination. For now, they're machines, tools, and we are the ones programming them. Ask me again when they're sentient whether they have a right to protect themselves. If the future looks benign, with friendly, co-existing robots, then sure. But if the human programmers have already built in weaponized instincts, we're screwed.