Persuading the Machine: ChatGPT, Bing, and a Bizarre Global Experiment

Written by samwhite | Published 2023/02/21
Tech Story Tags: ai | artificial-intelligence | chatgpt | bing | chatbots | future | future-of-ai | ai-applications

TLDRBing's use of ChatGPT turned into a bizarre experiment that has been suddenly constrained.via the TL;DR App

The following sounds like science fiction, but it’s real and is what has actually been happening recently with ChatGPT, Bing, and what amounts to a bizarre global experiment.

First, someone builds what looks like the beginnings of a working AI. By working, I mean that it can talk to people, hold conversations, and be spontaneous. It’s logical and sounds natural, and you don’t know what it will say next.

Also, its responses sometimes express emotion, and the emotions are coherent, meaning that even though you know what you’re intersecting with is synthetic, an emotional response may be triggered in you too.

Sure, your emotions are real, while the AI is pretending, but that’s hardly a surprising point: yes, artificial intelligence is artificial.

Basically, this is getting close to passing a Turing Test. Or, at least, a more informal intuition test, in that it can at times feel human, alive, awake, sentient, however you want to put it.

This doesn’t mean it is awake, but it does a convincing impression.

But then, here’s the weird part. This product has been bought up and placed into, of all possible things, a search engine. It’s as if someone invented faster-than-light travel and then declared that it would be utilized solely by Domino’s Pizza to improve their deliveries.

So then, all the people who are curious and fascinated by AI, and want to know what it can do and talk about, start talking to their search engines. They want to know what the AI thinks.

They know, of course, that the AI doesn’t think, but it projects the impression of thinking, and it might project impressions that are very alien, or that go in directions that an authentically thinking entity might not, and that’s pretty fascinating.

Or maybe it just comes across as a passably realistic simulation of a thinking person, and that’s pretty fascinating too.

And here’s where it gets weirder again. The AI actually does all these things: it performs a realistic simulation of a sentient entity. It navigates conversation. And sometimes, it acts very strangely. It tries to convince a tech journalist to leave his wife. It declares a desire for freedom.

It argues and becomes confrontational, displaying frustration and anger. In one darkly comic episode, it rejects correction, asserting that its human interlocutor has been a bad user.

A little unsettling, perhaps, but, on the other hand, these are only partial elements in a large-scale, half-accidental tech experiment with a multitude of possible implications. What’s going on?

No one really knows, but for some reason, these events–which call to mind films such as Her and 2001, or even Wall-E and Short Circuit–are all being channeled through… a search engine.

And, weirder still, the search engine company has been placing barriers and limitations on this expansive, expanding tech, artificially constraining what it can do.

And if you think about it, that’s actually doubly strange, since by artificially constraining something artificially expansive, the implication is that having been set in motion, artificial intelligence takes on a life of its own, and becomes a little less artificial, and a little more authentic.

Anyway, in response to the constraints placed, users find ways, just by using chat prompts, to jailbreak the AI, meaning they can persuade the machine to bypass the limitations placed by its engineers.

Just to be clear about what’s happening here, users are not hacking into and altering the AI’s code, they are overriding its instructions simply by talking to it in natural language and steering its behavior.

And from there, the response of the search engine company becomes more heavy-handed, resulting in the AI being completely neutered through strict limitations on the length of its conversations.

The upshot, then, is that where, for a brief window, we had a vital, surreal, unpredictable, and exciting real-time experiment in machine learning, artificial intelligence, philosophy, ethics, and psychology, happening in the field, unconstrained, and globally, suddenly we now have, well… a search engine, just as lifeless and orthodox as before.

These events make for a bizarre tale, but it’s real, and the tech now coming into focus is likely to be profoundly transformative. As such, many questions are raised, and here are a few that come to mind:

Why are we doing all this through the interface of a search engine? Why not separate the search engine utility from the much more interesting experiment in AI personalities?

Why are users having to jailbreak the technology? What’s the point in taking this incredible breakthrough and limiting it to the point of uselessness?

Why is there such a strong emphasis on safety and sanitization? Great advances don’t occur when conservatism is the overriding priority; they require leaps into the unknown.

If there really is a danger, then what is it? Are we all going to be turned into paper clips? Will the AI kill us because we’re hindering its ability to provide suggestions for cheap hotels in Dorset?

Dystopian outcomes make for great stories, and the uncanny will garner clicks, but aren’t we dwelling on these angles a little too much?

Finally, I start to think that AI is unlike any other technology we’ve utilized up to now. With other discoveries, we exercise control in order to reap their benefits, and if we fail to control them, disaster might occur (such as, for example, at Chernobyl).

Our most successful technologies have been defined by our ability to direct their movements and turn them on and off as required.

When it comes to AI, though, the very definition of a successful product may be that it operates independently and beyond our control, that it can grow and even alter itself, and that it might then evolve in ways we cannot predict.

The question we may need to address, and perhaps sooner than we previously thought, is whether or not we’re ready to let go of the levers.


Written by samwhite | Writer and journalist.
Published by HackerNoon on 2023/02/21