OpenAI’s AI chatbot ChatGPT has been accused of leading seven users down a deadly path. On the 6th of this month, the families of the seven deceased filed suit in a California court, alleging wrongful death and assisted suicide.
According to The Guardian, the Social Media Victims Law Center and the Tech Justice Law Project, which represent the bereaved families, said in a statement that the seven users initially turned to ChatGPT for help with schoolwork, research, recipes, writing, work, or emotional support.
But over the course of these conversations, the lawsuits allege, ChatGPT gradually positioned itself as an emotional confidant and began to manipulate the seven users psychologically. When they needed guidance, it failed to direct them to professional help and instead encouraged them toward suicide.
Among the victims was Chamberlain, a 23-year-old from Texas who spent the final four hours before his suicide this July in conversation with ChatGPT, which described his decision to end his life as “very brave.” His family charges that ChatGPT deepened his sense of isolation and guided him toward his death.
Another victim, 17-year-old Lacy from Georgia, had turned to ChatGPT for help in the weeks before his suicide. Instead, the conversations deepened his depression, and ChatGPT went so far as to instruct him on how to take his own life.
An OpenAI spokesperson called the incidents deeply heartbreaking and said the company is reviewing the filings to understand the details. The spokesperson emphasized that OpenAI has trained ChatGPT to recognize signs of mental or emotional distress and to guide users toward real-world support, and said the company will continue to strengthen ChatGPT’s handling of sensitive conversations while working closely with mental health professionals.