Bots Masquerading as People Crosses Boundaries

by Asad, May 21st, 2018

When Google [demo’d](https://www.youtube.com/watch?v=bd1mEm2Fy08) Duplex, its new chatbot thing, I was pretty impressed with the technical complexity of what they had achieved. (Yes, [some people argue that the dialogue may have been staged or, at least, heavily edited](https://www.vanityfair.com/news/2018/05/uh-did-google-fake-its-big-ai-demo/amp).)

There are real problems with bots masquerading as people. We’ve seen clumsy bots on Twitter and Facebook confuse and befuddle people for close to a decade. I used to use Andrew/Amy, an impressive scheduling bot, but I stopped when I realised that humans I cared about were crafting thoughtful messages to a script, wasting their time and attention.

X.AI (the company behind Amy) has rapidly addressed that situation, and now flags that the messages come from a bot.

I’m not an unalloyed fan of the rabid anthropomorphisation of today’s AI tools. These are tools, spreadsheets, hammers, flints, with a bit more verve and fairy-lights. They are impressive. They use better maths to act less deterministically than the dumb cogs and wheels of the past. They can deliver us some quite remarkable benefits, as long-term readers of this missive will know. But today, they are tools. They are improved by the application of scientific method and good engineering. We use words like “training” because those processes are analogous to the way we train conscious, biological entities. But it is only an analogy.

They do not have agency. Even the largest (non)self-promoters (looking at you, Sophia and Atlas) have no agency, nothing that resembles personhood. They are tools, crudely engineered toys from the vantage point of the 22nd century, however much they might impress the troglodytes of 2018. The golden lab puppy we all wished we had has more agency, a better sense of self.

So, just as we shouldn’t be treating these portions of executable code as people, we shouldn’t be having them masquerade as people (except in certain therapeutic or specialist circumstances). With its demo, Google showed, once again, that it has not grasped the ethical boundaries it regularly butts up against. It crassly wandered over a line.

That is a line we might one day o’erleap. Science might help us step over it by delivering a better understanding of a system’s capacity for awareness, agency or consciousness. But it is not a line we need to cross today.

I write an awesome newsletter that covers topics like this every week. You can sign up at http://exvw.co