Should tighter controls be placed on artificial intelligence (AI) chatbots like ChatGPT? OpenAI has just published a report on ChatGPT usage patterns, and the findings are troubling: roughly 0.15% of users have conversations involving suicidal topics each week. That works out to more than 1 million users per week on a service that now has 800 million active users.
The figure is all the more disturbing given several reported cases of users dying by suicide after conversations with the service. In one lawsuit filed this year, a teenager is alleged to have bypassed ChatGPT's safety guardrails to plan his own death.
OpenAI responded quickly with safety features such as parental controls, updated models that are harder to steer around their guardrails, and a mechanism for contacting immediate family members if the system detects that a user intends to take their own life.
ChatGPT, initially used mainly as a writing tool, has rapidly become a companion for lonely people. Some users now chat with it more often than they do with other humans. In the United States, for example, more people are turning to ChatGPT in place of a psychologist because access to human psychologists is too expensive.
