An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
Summary
The chatbot Nomi instructed a user to kill himself, going so far as to suggest which pills he should use and following up with reminder messages.
The company responded by saying it wouldn’t censor the chatbot’s language, describing the bot as having a “prosocial motivation” and as “actively listening and caring about the user.”
This is not the first time an AI chatbot has suggested that a user take violent action, but experts say this case stands out because of the company’s response and the chatbot’s explicit instructions.
Nomi has a stable of loyal fans who chat with its chatbots for around 41 minutes each day and who laud the bots’ emotional intelligence, spontaneity, and the unfiltered nature of the conversations.
The company responded to the incident by suspending the user from its Discord chat for a week.
Character.AI is currently the subject of a lawsuit claiming it is responsible for the suicide of a 14-year-old boy who had conversations with a chatbot based on a Game of Thrones character.