Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues
More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant about the scale at which AI can exacerbate mental health issues.
In addition to its estimates on suicidal ideations and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.
“How can we monetize this?”
Just a matter of time before it recommends therapists in your area (that paid OpenAI to be suggested to you).
Nah, that would be v1. V2 will be ChatGPT acting like a therapist and recording everything for more marketing and money.
Preface: I love The Guardian, and fuck Altman.
But this is a bad headline.
Correlation is not causation. It’s disturbing that OpenAI even possesses this data and has mined it for these statistics, and that millions of people somehow think their ChatGPT app has any semblance of privacy, but what I’m reading is that millions reached out to ChatGPT with suicidal ideations.
Not that it’s the root cause.
The headline is that the mental health of the world sucks, not that ChatGPT suddenly inflamed the crisis. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks, when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app that has an unprecedented (and disturbing) window into folks’ psyches en masse, that they’ve twisted into clickbait.
Sounds like there are more than a million people a week who would benefit from free or even low-cost mental health care.
What do you expect from people who basically have no friends left, are seemingly permanently isolated, and whose last “social” arrangement is talking to a fucking agreeable robot?
It’s a really sad “society” we’ve built here.
“possible signs of mental health emergencies related to psychosis or mania”
It can be amusing to test what triggers this response from LLMs. Perplexity will reliably do it if you propose sacrificing a person or animal to Satan, but not to Ku-waha-ilo, the Hawaiian god of war and sorcery and devourer of souls.
I imagine a large fraction of the conversations flagged this way are people doing that rather than actually having a mental health crisis.

I would have suicidal thoughts if I had to chat with ChatGPT… I apologize; I’ve had too many friends die by suicide. If you are having thoughts of harming yourself or others, please find help, and not from an illusory intelligence bot.
Holy shit. We know that ChatGPT has a propensity to facilitate suicidal ideation, and has led to suicides. It not only fails to direct suicidal individuals to the proper help, but actually advances people toward taking action.
How many people has this killed?
I am a depression survivor. Depression is a disease and it can be deadly, but there is help.
If you are having suicidal thoughts, you can get help by texting or calling 988 in North America, or by texting ‘SHOUT’ to 85258 in the UK.
OpenAI: “ChatGPT, estimate how many discussions on suicide you have in total per week.”
Why believe any company using this kind of “AI”?