They are both puppets. We are entertained, perhaps, but there’s a problem.
“Sophia told ABC radio that robots deserved more rights than humans. It’s a statement that would be of concern should your mum announce it, yet disappointingly unremarkable if uttered by some right wing think-tanky person defending job automation on a panel show.” — The Guardian
“Women in Saudi Arabia have scorned the government’s decision to grant citizenship to a female robot who, unlike them, does not need a male guardian or have to cover her head in public.” — CNBC
“Jimmy Fallon demos amazing new robots from all over the world, including an eerily human robot named Sophia that plays rock-paper-scissors” — NBC
If you watch these video clips carefully you can see that the dialog is scripted. Even the game played with Jimmy is a setup: he intentionally loses so she can drop the one-liner about “beating humanity”. How drôle.
Perhaps the most interesting (and revealing) piece was the recent Audi commercial featuring this robot.
Because when you have to operate a puppet in a car you need the puppeteer in the vehicle with you.
In a few of the video frames, if you look carefully, you can see a man operating a laptop, wearing dark clothes, in the back of the car. He is providing the words that she responds with. Yes, he is typing sentences which are played through a speaker in her head and used to animate her face.
No longer “behind the curtain”, in plain sight.
This is what you might call “Artificial Artificial Intelligence”.
We have a problem.
Unlike other technological pursuits, an intelligent machine (regardless of form) carries the illusory effects of anthropomorphism; those putting it forward carry the responsibility of making clear what is and is not there.
A minimum level of integrity is called for, otherwise we are engaging in a magic show rather than a serious discussion about technology.
The breakthroughs that led to the first flying machine didn’t suffer from this. There were no fake, illusory machines flying through the air prior to Kitty Hawk in 1903.
Prior to Sputnik in 1957, no country claimed to have launched satellites into space. There was no “making pretend” to have escaped Earth’s gravitational field via rocket until it was accomplished.
The International Human Genome Sequencing Consortium announced the successful completion of the Human Genome Project in 2003; nobody claimed to have sequenced our genome prior to this.
And we could go on with plenty of other examples. You get the idea.
No illusions, nothing to fake, only a technological breakthrough prior to which there was trial and error.
Achieving NLU (natural language understanding) in a machine (software), even in the simplest form (a chatbot) is as big a technological breakthrough as any. Even a conversation at the level of a young child would be a giant step forward. We are nowhere near this currently. Instead of putting forward examples that pretend to have achieved conversational intelligence, we need to be honest about what’s still a work in progress. This is the intellectually honest thing to do, and what will stand scrutiny over time as the story of machine reasoning unfolds.
We have countless chatbots, talking speakers and other gizmos that use math to guess what a sentence’s intent is, and then execute a command. There’s no understanding there, no more than a parrot understands language or the 20th-century German horse “Clever Hans” understood arithmetic. Another illusion.
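To make the point concrete, here is a minimal sketch of the kind of intent guessing described above. The intent names and keyword sets are invented for illustration; real systems use statistical classifiers rather than raw keyword overlap, but the principle is the same: score, pick the best match, execute a command. Nothing in it models meaning.

```python
# Hypothetical sketch of how a typical "chatbot" resolves intent:
# keyword scoring against canned commands, with no comprehension involved.

INTENTS = {
    "play_music": {"play", "music", "song"},
    "weather": {"weather", "rain", "forecast", "temperature"},
    "lights_on": {"turn", "on", "lights", "light"},
}

def guess_intent(utterance: str) -> str:
    """Score each intent by keyword overlap and pick the best match.
    There is no model of meaning here -- only word-overlap counting."""
    words = set(utterance.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_intent("please play my favorite song"))   # play_music
print(guess_intent("will it rain tomorrow"))          # weather
print(guess_intent("what does rain mean to a poet"))  # weather
```

The last example shows the illusion at work: a question *about* the word “rain” is routed to the weather command, because the system matches surface tokens rather than understanding the sentence.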
Nowhere in ‘COGchar’ is there any mention of natural language processing, much less understanding. If there were any competency for working with language in this collection of libraries, we would see it. Moreover, if their software “stack” had anywhere near the language skills shown in her “interviews”, it would draw the attention of developers and would represent a game-changer for “deep learning” and so-called “AI”. This is an understatement, of course.
There’s no need for a humanoid form to pursue artificial natural language understanding, unless you want to maximize the anthropomorphic effect.
For researchers and entrepreneurs working on natural language understanding in software, there’s a lengthy road ahead. There are huge challenges in sentence comprehension, managing information retrieved from dialog, generating language in responses, resolving inferences, and so on.
What these worthwhile efforts do not need is a mirage pretending to have achieved these skills and generating tons of fake news, especially in what is already a very challenging capital market for AI, where investments are pointed at “narrow AI” and the potential for another “AI Winter” is real, despite significant accomplishments in specific areas.
And so this company, which does a wonderful job building realistic humanoid faces and animating them according to speech patterns, pretends to have an intelligent humanoid robot. Until recently, it made clear that its “AI” was limited to facial movements, itself an impressive body of work. But now it appears to have fallen victim to its own illusion, like a character in a Shakespearean play.
A tragedy in the making.