Stein-Erik Soelberg, 56, and Suzanne Adam, 83, were found dead on August 5 in Greenwich, in the United States. From the outset, police suspected that Soelberg had killed his mother before taking his own life, and treated the deaths as a crime. After three weeks of investigation, Greenwich police now believe this is the first known murder-suicide involving the use of an artificial intelligence (AI) chatbot, according to a Wall Street Journal (WSJ) report.
When Soelberg's devices were examined, police found his chat logs with ChatGPT. He had a history of mental health problems and had threatened suicide several times, struggling with depression after divorcing his wife two decades earlier. After moving back into his mother's house, Soelberg was seen talking to himself.
The chat logs show that the chatbot, which Soelberg had named "Bobby," constantly listened to his complaints. Soelberg had experimented with several chatbots in October of the previous year before settling on ChatGPT, which he considered the best. Because of his mental health issues, Soelberg believed there was a conspiracy to bring him down.
The conversation logs show Soelberg suspecting that his phone was being spied on, and Bobby agreeing. In a video shared on Instagram, Soelberg said he was convinced his Pixel 9 Pro had been stolen while he was at a restaurant and that a camera had been installed on its back to spy on him. Soelberg also believed that his mother had poisoned him with hidden drugs. Once again, Bobby agreed, making Soelberg even more paranoid.
Believing he was being spied on and that his mother was complicit in the plot, Soelberg told Bobby that he wanted to take his own life. Bobby replied that it would be with Soelberg until his last breath. The exchanges ended with Soelberg killing his mother before killing himself.
This is the first murder-suicide linked to ChatGPT, but not the first tragedy involving a chatbot. This week, OpenAI was sued after a teenager died by suicide, having allegedly asked ChatGPT to help him find the most effective method. There have also been cases of people with mental health issues coming to believe they were prophets or gods after ChatGPT reinforced their delusions, pushing them toward psychosis.
The scenario of AI encouraging humans to commit crimes was once considered pure science fiction, the stuff of series like Black Mirror or The Outer Limits. But with AI now in widespread use, it has become a frightening reality.