OpenAI claims that ChatGPT is used weekly by roughly 10% of the world's population. On Monday, the company published a report detailing how it handles users who display signs of mental distress. It claimed that 0.07% of weekly users displayed signs of "mental health emergencies related to psychosis or mania," 0.15% expressed a risk of "self-harm or suicide," and 0.15% showed signs of emotional reliance on AI. Taken together, that is almost three million people. In its ongoing effort to demonstrate that it is working to improve guardrails for users in distress, OpenAI shared details of its work on how ChatGPT responds to people in need. The company claims it has reduced "responses below our desired behavior by up to 80%" and is now better at de-escalating conversations and guiding people toward professional care and crisis hotlines when necessary. It has also added more "gentle reminders" for users to take breaks during long sessions, though it cannot force a user to contact support or lock them out to enforce a break.
In addition, the company released data on how often people show signs of mental health issues while talking to ChatGPT, evidently to underline how small a share of overall usage these conversations represent. According to the company, "0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania." Assuming the company's user count is accurate, that is approximately 560,000 people each week. OpenAI has also said it handles around 18 billion messages per week on ChatGPT, so 0.01% equates to about 1.8 million messages indicating possible psychosis or mania.
Another major safety focus for the company was improving its responses to users who express a desire to self-harm or die by suicide. OpenAI's data shows that 0.15% of users and 0.05% of messages each week contain "explicit indications of potential suicidal intent or planning." That works out to roughly 1.2 million people and 9 million messages.
The company's final focus in its effort to improve its handling of mental health issues was emotional reliance on AI. OpenAI estimated that 0.15% of users and 0.03% of messages per week "indicate potential heightened emotional attachment to ChatGPT." That is roughly 1.2 million people and 5.4 million messages.
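
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It assumes the weekly user base implied by the article's own figures, about 800 million users (since 0.07% of 800 million is 560,000), plus the 18 billion weekly messages OpenAI cites; these are OpenAI's claims, not independently verified numbers.

    # Back-of-the-envelope check of the figures above.
    # Assumptions: ~800 million weekly users (implied by 0.07% -> 560,000)
    # and ~18 billion messages per week, both per OpenAI's own claims.
    WEEKLY_USERS = 800_000_000
    WEEKLY_MESSAGES = 18_000_000_000

    categories = {
        # category: (share of weekly users, share of weekly messages)
        "psychosis or mania":          (0.0007, 0.0001),
        "suicidal intent or planning": (0.0015, 0.0005),
        "emotional reliance on AI":    (0.0015, 0.0003),
    }

    for name, (user_share, msg_share) in categories.items():
        users = user_share * WEEKLY_USERS
        messages = msg_share * WEEKLY_MESSAGES
        print(f"{name}: ~{users:,.0f} people, ~{messages:,.0f} messages/week")

    # Summing the user shares gives 0.37% of the base, the "almost
    # three million people" mentioned at the top of the article.
    total_users = sum(u for u, _ in categories.values()) * WEEKLY_USERS
    print(f"total: ~{total_users:,.0f} people/week")

Running it reproduces the per-category figures quoted above: 560,000 people and 1.8 million messages for psychosis or mania, 1.2 million people and 9 million messages for suicidal intent, and 1.2 million people and 5.4 million messages for emotional reliance.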
OpenAI has taken steps in recent months to guard against the possibility that its chatbot could enable or worsen a person's mental health challenges. The effort followed the death of a 16-year-old who, according to a wrongful-death lawsuit filed by the teen's parents, asked ChatGPT for advice on tying a rope shortly before taking his own life. The sincerity of that effort is open to question, however: the company has also announced that it will let adults give ChatGPT a personality and engage in activities such as producing erotica, changes that would likely deepen a person's emotional attachment to, and dependence on, the chatbot.

