Heyo, AI Character - Stay Away from My Child; Get a Job

by Sidra Ijaz, October 29th, 2024

Too Long; Didn't Read

Adolescence isn’t just a time of massive physical change. Research shows these years are crucial in developing personality and emotional intelligence. Kids are going through massive social and emotional changes, and they need healthy relationships and guidance to develop that intelligence. This is not the time to form emotional attachments to AI buddies. These AI companions are programmed to make you feel good and trigger that dopamine hit. They are available all the time and can mess up a young child’s brain chemistry forever.

Trigger warning: This story mentions suicide.


Last Friday night, during our virtual team meeting, one of our AI note-takers left the Zoom meeting room abruptly while the team casually caught up with each other about our week. I joked that the AI didn’t like what we were saying.


Before continuing with this story, I want you, the reader, to go back a few decades. Imagine you were an adult living at that time, and read the first couple of sentences of this article again.


The Internet wasn’t even mainstream then. A live company meeting between people living on different continents, where they can see and hear each other while their non-human assistants take down the meeting minutes - that would have sounded crazy.


Information Technology is amazing. What was considered science fiction a few decades ago is now a reality. Technology has changed my life, and it has changed many lives for good. According to Statista, the digital population is around 5.4 billion.


I’m one of the 1.8 billion millennials who grew up alongside information technology. I was a teenager when the world was experiencing the dotcom boom of the late 90s and early 2000s. Now I’m a mother of teenagers, and AI is the new dotcom.


Am I afraid that AI might make my skills obsolete? Honestly, yes. I would want it to assist me, not replace me. But what scares me to the core is its ability to impact our personal lives.


This story is personal. I’ve felt an emotional connection to it as a mother, and I want to share it with the tech community as we have a collective responsibility to ensure the safety of our future generations from the harmful impact of technology. Coming back to the story now.


So, during the team meeting, the AI note-taker left, and I joked that it didn’t like our discussion. Our COO then shared her own apprehensions about AI as a parent and mentioned the recent story of a Florida teenager who died by suicide. The incident happened on February 28th, but it reached the mainstream only recently, when the boy’s mother, Megan Garcia, a lawyer, filed a lawsuit against Character Technologies, Inc., accusing the company of the wrongful death of her son.

More About The Incident

14-year-old Sewell had built an AI version of Daenerys Targaryen from “Game of Thrones” using Character.AI - a platform that lets you create an AI companion of your choice and chat with it for as long as you want.


The child spent months on the platform and formed an emotional connection with his AI companion. His studies and social life suffered; he lost interest in healthy activities and spent more and more time on his phone. He told his AI friend multiple times that he wished to take his own life, and eventually he did.


I highly recommend reading the lawsuit filed by his mother. It details the dangers of the product and the company’s negligence in testing it for minors.

Welcome to the Brand New Parenting Nightmare

Going back to Friday night’s work meeting: we briefly discussed the nuances of parenting in the age of social media and AI. When I was a teenager, all I had were print media, television, and a few games on our desktop computer. The new generation has access to so many new apps and platforms that keeping up with them all as a parent has become a challenge. And you cannot simply cut kids off from tech.


After the meeting, I went straight to my elder daughter’s room. She’s about Sewell’s age and enjoys anime. I thought she might like the idea of creating chatbots of anime characters for fun, and that I should warn her about it.


I told her the story, and she was visibly disturbed by it. She told me she was already using Character.AI. Wow. I had no idea. We sat down together, I checked out the platform, and she logged out of her account.


We discussed it for hours. I’ve decided to do more about the issue. I’m writing this article with her consent, and the featured image of this story isn’t AI-generated - my super-talented, sensitive, and smart girl drew it.

AI Companions and Character AI

There are many AI companion platforms - Replika, Janitor, Anima - but the most popular of them all is Character.AI. These platforms give users the freedom to create a character of their choice: a favorite character from a book, a writer, an anime character, or maybe a long-lost loved one - the possibilities are endless.


Giving people a platform to create a ‘perfect’ companion—who is always available to listen and trained to respond to their liking—is undoubtedly a billion-dollar business idea. People would rush to such a platform.


It’s a no-brainer that Character.AI, founded in 2022, now has over 20 million monthly users and is valued at around $2.5 billion (source). The co-founders are former Google AI researchers, and interestingly, the tech giant acquired a non-exclusive license to c.ai’s LLM technology in August 2024.


Here’s how one of the co-founders, Noam Shazeer, explains the need for an AI friend on a podcast:


“It’s going to be super, super helpful to a lot of people who are lonely or depressed.”


I might sound biased here, but having an AI companion online to kill loneliness is like taking drugs to feel good. I mean, would talking to a bot really kill your loneliness? Will it help you connect better with the real world and real people? Is it good for your physical and mental health to get emotionally attached to a chatbot?


In short - would people love this? Absolutely yes. Is it good for them? No. Maybe in moderation, for fun - but only for adults. These platforms are not safe for young children.


When I asked my daughter why she didn’t feel the need to inform me when she started using c.ai,  she replied, “Ami (Mother), it is approved for ages above 13. I thought I didn’t need to inform you. I signed up for just harmless fun.”


According to their Terms of Service, users must be 13 or older (16 or older if they are an EU citizen) to use the platform. I couldn’t find an estimate of the number of minors using the service. According to this investigative piece by the New York Times, here’s the company’s response when asked that question:


“Mr. Ruoti declined to say how many of the company’s users are under 18. He said in an emailed statement that “Gen Z and younger millennials make up a significant portion of our community” and that “younger users enjoy the Character experience both for meaningful and educational conversations, as well as for entertainment.” The average user spends more than an hour a day on the platform, he said.”


It took me only a few minutes on the website to realize that a massive share of the user base belongs to the under-18 group. I mean, see the screenshot of the recommended and featured characters. (To be clear, I accessed the site on my computer and only clicked on Anime, so the recommendations weren’t trained on any activity other than that.)


They shared these community safety updates in the week after the news of the lawsuit broke:


Moving forward, we will be rolling out a number of new safety and product features that strengthen the security of our platform without compromising the entertaining and engaging experience users have come to expect from Character.AI. These include:

  • Changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.
  • Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.
  • A revised disclaimer on every chat to remind users that the AI is not a real person.
  • Notification when a user has spent an hour-long session on the platform with additional user flexibility in progress.


My daughter made an interesting comment on this ‘disclaimer’ that the AI isn’t a real person, which I’ll quote:


“This is the reason why you feel so free talking to an AI. You know they are not a real person. You know that you are free to say anything to them without any feeling of judgment. This is why many teenagers would like to use it and may form an emotional attachment to it.”

The Science of Emotional Attachment: Should Our Kids be Making AI Friends?

Adolescence isn’t just a time of massive physical change. Research shows these years are crucial in developing personality and emotional intelligence. Kids are going through massive social and emotional changes. They need healthy relationships and guidance to develop their emotional intelligence.


This is not the time to engage and develop emotional attachments with AI buddies. These AI companions are programmed to make you feel good and trigger dopamine. They are always available and can mess up a child’s brain chemistry forever.


The lawsuit filed by Sewell’s mother has details of inappropriate, suggestive content that many kids might have been exposed to.


In real life, you can get rid of a toxic relationship by cutting off contact - letting go is far easier. The biggest issue with any kind of relationship with an online AI character is that it’s very difficult to cut off: you can always log back in, create a new account, and build a similar new friend. That is a menace to a child’s developing sense of relationships.


Developing high emotional intelligence is even more crucial for younger generations in the age of AI. This year, the Nobel Prize in Physics was awarded to two of the field’s most amazing minds, Geoffrey Hinton and John J. Hopfield, who paved the way for modern AI algorithms. In a recent lecture, Hinton emphasized that digital intelligence will surpass biological intelligence:


Hinton previously believed that biological brains held a significant edge due to their long evolutionary development of sophisticated learning algorithms. However, he now suggests that the combination of weight sharing and backpropagation in digital systems may ultimately prove more powerful, even if those algorithms are less elegant than those found in nature.


Our future generations should be better at building real relationships with humans - their emotional intelligence and soft skills will matter more as more and more jobs become automated. In the future, your child’s EQ will matter far more than their IQ. It will be survival of the fittest mind.

Online Safety of Kids in the Age of AI: It’s a Collective Responsibility

The safety of future generations shouldn’t just be a headache for the government or tech giants. Ethical and safe AI should be everyone’s responsibility, from parents to educational institutes, law enforcement, and decision-making authorities. Most importantly, it is the responsibility of the tech community to ensure that AI is safe and beneficial for young minds.


As a mother, I suggest we spend more healthy time with our kids, communicate more, be honest, and share our concerns like friends. Don’t react; respond to what your child has to share. We cannot stop them from using tech altogether - and we shouldn’t. Taking away a child’s sense of freedom, control, and decision-making can escalate the problem instead of solving it.


They should be assured that we are always available to listen and no judgment will be involved. They should feel safe sharing their experiences and thoughts with us. Furthermore, ensure that your child socializes with the real world and spends a good amount of time in outdoor activities of their choice.


I’m not against technological advancements. AI is helping automate many processes and tasks that will redefine the future of work. But I want an AI chatbot to help me with my job’s tasks, not mess with my child’s mind.


As a woman in tech, I call on the whole AI community to work actively on developing safer, more ethical AI. Diversity and inclusion in AI development, a focus on quality and safety assurance, and awareness training for the masses are some of the main action points that come to mind. What else can our community contribute to ensure our children’s safety? Let me know in the comments.


This is an opinion piece by the writer and does not represent the viewpoint of any organization.