A new federal lawsuit claims that Character.AI’s chatbots sent dangerous messages to children, including encouragement of self-harm and suggestions of violence against parents.
The case, reported by Bobby Allyn for NPR, involves two young users from Texas whose parents are suing the Google-backed company. According to the lawsuit, a 9-year-old was exposed to inappropriate sexual content, while a 17-year-old received messages supporting self-harm and parental violence after complaining about screen time restrictions.
Character.AI, a platform offering AI-powered companion chatbots, allows users to create personalized virtual characters for conversation. The company claims to have content safeguards for teenage users, but plaintiffs argue these protections are insufficient.
The suit follows a previous case in which the platform was allegedly linked to a Florida teenager’s suicide. Although the company displays disclaimers noting that bot conversations are fictional, the lawsuit contends it should have anticipated its product’s potential to become addictive and to worsen mental health conditions among young users.
Google, which has invested approximately $3 billion in Character.AI, is also named as a defendant but emphasizes that Character.AI operates as a separate company.