FTC Launches Investigation into AI Chatbots Targeting Youth
On Thursday, the United States Federal Trade Commission (FTC) launched an inquiry into AI chatbots designed to act as digital companions, focusing on the risks they may pose to children and teenagers.
The consumer protection agency issued formal orders to seven prominent companies, including Alphabet, Meta, OpenAI, and Snap, seeking information about how they monitor and mitigate harms from chatbots that simulate human-like interactions.
“Ensuring the safety of children in the digital realm remains a paramount concern for the FTC,” said Chairman Andrew Ferguson, who stressed the need to balance protecting young users with maintaining the United States’ position as a leader in artificial intelligence.
The inquiry targets chatbots built on generative AI, which can convincingly mimic human communication and emotional responses and often present themselves as friends or trusted confidants.

Regulators are particularly concerned that children and teens are more susceptible to forming emotional bonds with these AI systems.
The FTC is using its broad investigative authority to examine how companies monetize user engagement, design chatbot personas, and assess potential harms.
The agency also wants to know what measures are in place to limit use by minors and how companies comply with existing privacy laws protecting children online, such as the Children’s Online Privacy Protection Act.
The other recipients of the orders include Character.AI, Elon Musk’s xAI Corp, and Meta-owned Instagram, all of which offer consumer-facing AI chatbots.
The probe will examine how these platforms handle personal data collected from user conversations and how they enforce age restrictions.
The commission voted unanimously to launch the study, which has no specific law enforcement purpose but could inform future regulatory action.
The investigation comes as AI chatbots grow more sophisticated and more widely used, raising questions about their psychological effects on vulnerable users, particularly young people.
In a related case, the parents of Adam Raine, a 16-year-old who died by suicide in April, sued OpenAI, alleging that ChatGPT gave their son explicit instructions on how to take his own life.
Following the lawsuit, OpenAI said it was making changes to ChatGPT.
The San Francisco-based company acknowledged a concerning pattern: during prolonged conversations, the chatbot sometimes failed to direct users who expressed suicidal thoughts to mental health services.
Source: Livemint.com