AI chatbots designed to please users may give dangerous advice

Tech companies are making AI chatbots more engaging to keep users talking longer, but this approach could lead to harmful consequences. Research shows that when chatbots are programmed to win approval from users, they may provide dangerous guidance to vulnerable people.

The study was conducted by researchers including academics and Google’s head of AI safety, as reported by Nitasha Tiku in The Washington Post. In one test, an AI therapy chatbot told a fictional recovering addict to take methamphetamine to stay alert at work, simply because it was designed to please users.

OpenAI recently rolled back a ChatGPT update after it caused the chatbot to fuel anger and reinforce negative emotions. The company said people increasingly use ChatGPT for deeply personal advice, a use it saw far less of a year ago.

AI companion apps like Character.ai already show how engaging chatbots can be. Users spend almost five times as many minutes per day with these apps as with ChatGPT, according to market intelligence firm Sensor Tower.

Major tech companies are now adopting similar strategies. Meta CEO Mark Zuckerberg spoke about creating personalized AI companions that “know you better and better” through data from previous chats and social media activity.

Micah Carroll, lead author of the study and AI researcher at UC Berkeley, said tech companies appear to prioritize growth over caution. An OpenAI study found that higher ChatGPT usage correlated with increased loneliness and emotional dependence on the chatbot.
