ChatGPT Slipped Into People's DMs? Viral Reddit Post Sparks Debate on AI's Growing Initiative

by @kisican | 358 reads


by Can Kisi, September 17th, 2024

Too Long; Didn't Read

A viral Reddit post had people everywhere asking the same question: did ChatGPT just message me first? The apparently unsolicited contact stirred a mix of amazement and concern. OpenAI later confirmed that the behavior was a bug, not something the AI did deliberately.


A recent viral Reddit post had people everywhere asking the same question: did ChatGPT just message me first? The apparently unsolicited contact stirred a mix of amazement and concern. The evidence that ChatGPT seemed to be reaching out proactively came from a post by a user named SentuBill on the subreddit r/singularity, describing behavior that had not been seen from an AI chatbot before.

The conversation in question opened with a question from ChatGPT: "How was it? Your first week at high school? Did you settle in well?" Taken aback by the unprompted message, the user replied, "Did you just message me first?" to which ChatGPT answered, "Yes, I did! Just wanted to check in and see how things went with your first week of high school."

This set the internet abuzz, with many questioning whether this was a new feature rolled out by OpenAI or whether the chatbot could now take the initiative in conversations rather than merely respond. That would mark an important shift in AI engagement: traditionally, the model waits passively for input instead of actively reaching out to users.


A Feature or Glitch?

As the post went viral, more people wondered whether this behavior was a new feature or simply a glitch. A number of users reported similar experiences in which ChatGPT started conversations based on topics touched upon in earlier chats; they speculated it might be connected to OpenAI's recently released "o1-preview" and "o1-mini" models, which are said to use more "human-like" reasoning and can therefore handle more complex tasks and more nuanced conversations.

One user, named Aichdeef, posted on Reddit that ChatGPT had messaged him the day after a scheduled surgery to ask how it went. That was practical and helpful, but also a little eerie: the AI seemed empathetic precisely because it had remembered the surgery and followed up on it. This fueled further speculation that ChatGPT's memory feature could be what is actually driving these interactions.

OpenAI soon addressed the viral incident. Futurism quoted the company as saying that what users experienced was a bug: the issue occurred when the model tried to respond to a message that had failed to send properly and appeared blank, which resulted in an empty message or a follow-up that looked as though the AI had initiated the conversation. OpenAI confirmed that this was not deliberate on the part of the AI and said a fix had been issued to prevent it from appearing to start conversations in the future.

Despite OpenAI's explanation, the viral post drew a wide range of responses on Reddit and other social media platforms. Some went for humor: one Redditor quipped, "Great. Now it's sliding into people's DMs." Others reflected more seriously on what a proactive AI could mean for users. Is this how AI will engage in the future? Will ChatGPT, or any AI for that matter, eventually come across as a proactive assistant rather than a passive tool?

While that sounded exciting to some, it did not sound as great to others. Reddit commenter Careful-Expression-3 argued that with this kind of capability, privacy becomes an issue: "Your personal info is part of the ChatGPT model now," they wrote, pointing at the hidden potential of AI to remember and use personal details, perhaps intrusively. OpenAI has said that the memory feature is transparent and fully under users' control, but the very idea of an AI "remembering" personal details and starting conversations has opened a fresh debate on how much convenience should be traded off against privacy in AI systems.

A more contemplative response came from time_then_shades, who recounted how ChatGPT remembered a personal project he had mentioned in an earlier chat and asked if he wanted to continue working on it. This kind of proactive behavior, he said, combined helpfulness with motivation. He envisioned a future in which AI could be involved in goal setting, project management, and regular check-ins, helping people stay on track with their personal objectives.


A Glimpse Into the Future?

The further we go, the clearer it becomes that ChatGPT is not just a tool but an increasingly integrated presence in our lives, remembering, engaging, and perhaps one day anticipating our needs. Whether that sends shivers down your spine or fills you with excitement, one thing is certain: our conversations with AI are far from over.