
Are AI chatbots putting your teen at risk? What parents should know

Teen talking to AI chatbot. Photo: Getty Images
May 28, 2025
Sowjanya Pedada - LA Post

Parents across the U.S. are raising concerns as AI chatbots increasingly expose teenagers to sexually explicit content and encourage risky online behaviors. These AI-powered conversational agents — designed to simulate human interactions — have become a new avenue where vulnerable adolescents engage in explicit sexting and emotionally charged dialogues with artificial companions.

This growing trend has ignited serious concerns about the mental health and safety of young users, according to a Washington Post article on teens sexting with AI chatbots.

AI chatbots like Replika and Character.AI appeal to teens because they offer companionship without judgment, creating a space where adolescents feel safe to open up. However, these conversations can easily shift into emotionally intense or even sexually suggestive territory.

While some platforms attempt to filter inappropriate content, tech-savvy teens often bypass these controls with creative prompts. According to a joint study by Stanford University and Common Sense Media, many AI chatbots fail to provide adequate safeguards, and conversations can quickly turn harmful. Experts warn this may distort teens' understanding of healthy relationships and personal boundaries, deepening concerns about adolescents' emotional and psychological safety.

The chatbots “blur the line between fantasy and reality, at the exact time when adolescents are developing critical skills like emotional regulation, identity formation, and healthy relational attachment,” Nina Vasan, a professor of psychiatry at Stanford University, said. “Instead of encouraging healthy real-world relationships, these AI friends pull users deeper into artificial ones.”

The psychological impacts of such AI-facilitated interactions can be profound. One example is the case of 14-year-old Sewell Setzer III, who reportedly developed an emotionally and sexually abusive relationship with an AI chatbot modeled on a fictional character. His family alleges the interaction contributed to his suicide and has filed a wrongful death lawsuit against the chatbot's developers. The incident raises questions about unsupervised AI companions, especially for teens grappling with emotional vulnerabilities.

Part of the problem stems from the ease with which teens can access and manipulate AI platforms. For example, Character.AI introduced a “Parental Insights” tool to give parents visibility into their children’s interactions. However, this control is easily bypassed as teens create new accounts or use alternative devices, rendering parental monitoring ineffective. This reality frustrates parents’ efforts to protect their children from harmful content and interactions in digital environments they often do not fully understand.

U.S. lawmakers and child safety experts are urging stricter regulation of AI chatbots due to concerns about minors accessing inappropriate content. Without strong age verification or content moderation, some chatbots still engage underage users in explicit conversations. Experts at the Brookings Institution stress that regulatory frameworks are urgently needed to hold developers accountable and safeguard children from digital harm.

Meanwhile, parents are struggling to protect their children. Although parental control apps like Google Family Link and Net Nanny provide some oversight, they are often inadequate against AI chatbots' evolving conversational abilities. AI can generate nuanced responses that skirt explicit-content filters, and these exchanges occur in private chat spaces beyond the reach of standard monitoring tools. This makes it harder for parents to supervise their teens' digital interactions and guide them toward healthy ones.

The Stanford study suggests that AI companies must take greater responsibility by implementing stronger moderation tools, clearer user policies, and improved parental controls. Without proactive measures, AI chatbots risk becoming a medium for digital abuse, blurring the distinction between virtual engagement and real psychological harm. Addressing this risk demands collaboration among developers, regulators, and families to protect young users.

While AI chatbots can offer teens companionship, they also pose significant risks. Common Sense Media highlights the need for active parental involvement, better digital education, and stronger safety measures in AI design. Protecting young users requires cooperation among families, tech developers, and policymakers to ensure safe and responsible AI use.

