Character.AI introduces safety features for teen users

Character.AI has implemented new safety measures to protect teenage users on its chatbot platform. According to reporting by Adi Robertson for The Verge, the company has developed a separate language model for users under 18 that places stricter limits on bot responses, particularly regarding romantic content. The system now includes suicide prevention resources and will block attempts to elicit inappropriate content. Additional features include session time notifications after one hour of use and clearer disclaimers about the artificial nature of the bots. Parental controls, scheduled for release in early 2025, will let parents see their children's usage patterns and which bots they interact with.

These changes follow two lawsuits alleging that the platform contributed to self-harm among young users. The company developed these features in collaboration with teen safety experts, including ConnectSafely.
