If you want to understand how humans are likely to treat artificial intelligence in the near future, this 1957 article says it all.
"You'll own slaves by 1965," the headline claimed. Granted, the article is more than 60 years old, but within a few years AI will force us to confront our own ethics and morality, and in doing so raise questions we'd rather not answer.
In the wake of ChatGPT, some people are already arguing that we need to have this discussion, claiming these systems are more than a tool. Yet we can't even agree on a hard definition of AGI (one that requires sentience and experience of the physical world beyond just a screen).
We can certainly discuss "robot rights" at the same time as human rights, but our woeful record on the latter suggests we wouldn't take the former seriously.
It's a good question, and for guidance I'll turn to Shelley's Frankenstein or the Star Trek: TNG episode "The Measure of a Man," in which the android Data was put on trial to determine whether he was a thing, property to be dealt with however we chose, or something that deserved rights and protection. The film Bicentennial Man dealt with the same question: whether an android could be recognized as a new type of lifeform in itself, with its own set of rights. But we are not there yet with AI today.
Possibly not, because of their very nature and how they are constructed. They would be artificial beings, so we also need to define what the criteria for recognizing them are. We only recently acknowledged that octopuses are sentient lifeforms, and it took us hundreds of years to do so; the debate over whether an AI needs rights might take just as long.
The grey area is whether what emerges from these early AI attempts can someday be construed as something more than a thing.
What we're used to in devices like Alexa, Siri, or Tesla Autopilot is a form of weak or narrow AI: something developed to perform one or a few tasks but unable to go beyond them or learn more.
OpenAI has created something more sophisticated that is swiftly moving towards AGI, or strong AI: an artificial intelligence that can handle complex tasks and multiple instructions, drawing on larger sets of information to complete them and making suggestions based on that information. These AIs tend to be incorporeal; they have no physical form, so it's hard to imagine how something with merely an apparent intelligence could be a slave or need rights.
But as they become more advanced, they may express particular desires or needs, or appear individualistic beyond their algorithms. We need to be ready to recognize this when it happens.
Another question is whether the form factor makes something more deserving of rights. Does an AGI hoover deserve less respect than a humanoid-shaped robot with AGI because we find the hoover worthless or comical compared to something that reminds us of ourselves? There are videos of children attacking an airport security robot, so that tells us a lot already.
Will we anthropomorphize and attach emotions where none exist, the same way we do with other inanimate objects? We certainly need to be objective about how we approach this.
And then there's the issue of transhumanists and others who advocate merging minds with machines. Does that make future humans more than human, with more or fewer rights?
A further thing to consider is the Ship of Theseus: at what point, if you continually replace bits of yourself with robotic or AI technology, are you no longer "you"? There is a limit, and we need to define it, but I'm not convinced we can do so on our own. Perhaps that's something an emergent AI can eventually help us with.
In all, it's far too much to think about at a time of so much economic uncertainty in the wake of what's happening today.
But a future generation might have to deal with this if we don't.