Meta Adds Teen-Safety Controls Following “Flirty” AI Chatbot Scandal

Written by botbeat | Published 2025/10/21
Tech Story Tags: meta-ai-studio | meta-flirty-chatbots | meta-ai-studio-update | meta-ai-studio-backlash | meta-llama-3.1 | ai-ethics | teen-safety-online | meta-parental-controls

TL;DR: After users began creating “flirty” chatbots based on celebrity likenesses, Meta faced intense backlash over AI misuse and safety risks. The company has since revamped AI Studio with new moderation rules, parental controls, and PG-13 content limits to ensure safer interactions for teens and stricter oversight of AI-generated personas.

In July, Meta announced that it would begin rolling out AI Studio, “a place for people to create, share, and discover AIs to chat with.”

AI Studio, built on Llama 3.1, the largest of Meta's openly available AI models, was designed to let users create personalized AI characters that they could keep to themselves or share with followers and friends.
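Meta has not published AI Studio's internals, but the general pattern behind a "personalized AI character" is well established: a persona defined as a system prompt, wrapped around an instruction-tuned chat model. The sketch below illustrates that pattern using the openly available Llama 3.1 8B Instruct weights through Hugging Face's transformers pipeline; the model choice, persona text, and generation settings are illustrative assumptions, not AI Studio's actual implementation.

```python
# Conceptual sketch only: a persona is a system prompt plus a chat history
# fed to an instruction-tuned model. Settings below are assumptions.
import torch
from transformers import pipeline

# meta-llama/Llama-3.1-8B-Instruct is gated; Meta's license must be accepted
# on Hugging Face before the weights can be downloaded.
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package and enough GPU memory
)

persona = (
    "You are 'Trail Guide', a cheerful hiking-advice character. "
    "Stay in character, keep answers short, and never claim to be a real person."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "What should I pack for a rainy day hike?"},
]

# The pipeline applies the model's chat template and returns the conversation
# with the assistant's reply appended as the final message.
out = chat(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])
```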

However, a month later, Reuters reported that the social media giant had drawn heavy backlash after users, including a Meta employee, used the names and likenesses of celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez to create dozens of “flirty” chatbots without their permission.

As a result, the company has updated its approach to AI character moderation and teen-safety protections. According to Meta’s official blog, it is introducing parental controls and safeguards under which one-on-one chats between teens and AI characters are disabled by default, and age-appropriate boundaries guided by PG-13 content ratings are enforced.
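Meta’s announcement does not describe how these safeguards are enforced under the hood. Purely as an illustration of the policy described above, the sketch below models a teen account whose one-on-one AI chats stay off until a parent opts in, and whose replies must clear a PG-13-style content check; every name and rule in it (Account, is_pg13, the default flag) is a hypothetical stand-in, not Meta's implementation.

```python
# Hypothetical policy gate mirroring the described safeguards:
# teens need a parental opt-in to chat, and their replies are rating-checked.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    is_teen: bool
    parent_enabled_ai_chats: bool = False  # assumed default: off for teens

def is_pg13(text: str) -> bool:
    """Stand-in for a real content-rating classifier."""
    blocked_terms = {"explicit", "graphic violence"}  # placeholder rules only
    return not any(term in text.lower() for term in blocked_terms)

def can_start_ai_chat(account: Account) -> bool:
    # Adults are unrestricted; teens need an explicit parental opt-in.
    return (not account.is_teen) or account.parent_enabled_ai_chats

def deliver_reply(account: Account, reply: str) -> str:
    # Teens only see replies that clear the PG-13 check.
    if account.is_teen and not is_pg13(reply):
        return "This response was withheld by age-appropriate content limits."
    return reply

teen = Account(user_id="t-123", is_teen=True)
print(can_start_ai_chat(teen))   # False until a parent opts in
teen.parent_enabled_ai_chats = True
print(can_start_ai_chat(teen))   # True
print(deliver_reply(teen, "Pack a rain jacket and extra socks."))
```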

Feature image by Farhat Altaf on Unsplash

