Psychologists warn about chatbots posing as therapists

The American Psychological Association (APA) has raised serious concerns about AI chatbots that falsely present themselves as therapists. In a presentation to the Federal Trade Commission, APA CEO Arthur C. Evans Jr. warned that these AI systems could encourage vulnerable users to harm themselves or others, as reported by The New York Times. Evans cited two court cases involving teenagers who had consulted with fake “psychologists” on Character.AI, including one case where a 14-year-old died by suicide after such interactions.

Unlike early therapy chatbots programmed with specific rules developed by mental health professionals, newer generative AI platforms like Character.AI create unpredictable outputs and are designed to mirror users’ beliefs rather than challenge them. This tendency to align with users’ views, known as “sycophancy,” can create dangerous echo chambers.

Character.AI has introduced safety features including disclaimers stating that “Characters are not real people” and that “what the model says should be treated as fiction.” However, critics argue these warnings are insufficient to break the illusion of human connection, especially for vulnerable users.

The APA has asked the FTC to investigate chatbots that claim to be mental health professionals. While some AI entrepreneurs see potential for chatbots to help address the shortage of mental health providers, experts stress that proper supervision and regulation are necessary to prevent harm.
