The M in LLM stands for “Mental Health Crisis”

tw: suicide

I have had a whirlwind summer of conferences and a slightly less exciting autumn of job applications, which has seen this blog somewhat neglected! However, there has been a lot to write about, and I will be sharing a few short blog posts on several salient topics over the coming weeks. One issue that particularly concerns me is the recent spate of high-profile cases involving young people seeking support from large language models (LLMs) and ultimately completing suicide [1][2][3]. In response, OpenAI has declared [4] that it will report concerning content to the police. Even putting aside my unease with involving police without the user’s consent – police who are unfortunately often given limited training on dealing with mental health crises – I do not think this will be sufficient to keep young people safe.

A recent survey of teens in the US found that 12% of 13-17 year olds have used “AI companions” (like ChatGPT) for emotional or mental health support [5]. This number is not high – just above 1 in 10 – but given the potentially fatal consequences of this behaviour it merits significant attention. Users appreciated that the AI was always available, didn’t judge, and was even easier to talk to than “real people”. However, as the report authors note, the advice given can be outdated, might circumvent safety measures, and can even be dangerous. Further, users often unwittingly sign off on platforms having ownership of everything they share, potentially including personal information, which the authors argue teens are too young to consent to. They recommend that AI companion access be restricted for those under 18 years of age.

I wholeheartedly agree with their concerns, though I am not convinced that an age limit would be practical to implement. Would it also cover use of LLMs that an enterprising teen has set up using a cloud computing provider? In any case, I think there are additional steps we can take to keep young people safe without the need for an outright ban (although to be clear, I don’t oppose a ban either – teens should be encouraged to do their thinking for themselves, not outsource it! I would simply like to see more holistic solutions explored too).

For a start, better education about the limits of “AI companions” is absolutely vital. The internet is flooded with what essentially constitutes marketing material for AI providers, material which implies that their models can do anything and be anyone to the user. People have taken them at their word and turned to the models for mental health support (which is often difficult, expensive or outright impossible to access due to the limitations of many nations’ healthcare services). The models – which are designed not to give truthful advice, nor to protect users’ wellbeing, but specifically to predict a likely next word – generate responses that seem reasonable. These responses are based on an average of all the therapy content the model has been trained on (i.e. predominantly content that is accessible online), with a skew towards flattery and positive responses [6]. It may look and sound and feel like therapy (particularly to those who have not had access to therapy themselves), but it is not therapy. And that becomes painfully obvious when these models encourage dangerous thinking and behaviour [1][2] (some regulators have introduced bans on AI being presented as a therapist [7], but models providing emotional support remain a grey area). Education about how AI models work, and why they can sound like a person but not think like one, will benefit not only those seeking mental health support, but also those tempted to rely on AI for practical advice, fact checking, structuring their thoughts, and so on.

Beyond improving AI education, I think we should teach teenagers (and children, and frankly many adults) the skills needed to give and receive emotional support. It is hard to ask for help, or for a sympathetic ear, and it is also hard to listen without judgement. These are skills we can hope to develop over a lifetime, but if we explicitly taught emotional skills from a young age, and systematically throughout education, people might be less tempted to turn to an averaging machine for emotional support. Of course, improving access to professional support would also be hugely beneficial, but I will not attempt to solve the health service crises occurring the world over in a single blog post.

Finally, I recommend the good old-fashioned diary. It might not answer back, but you can tell it your every thought, it won’t judge, it’s always available, and it won’t encourage your delusions either.

  1. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
  2. https://www.independent.co.uk/tech/chatgpt-ai-therapy-chatbot-psychosis-mental-health-b2797487.html
  3. https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html
  4. https://futurism.com/openai-scanning-conversations-police
  5. https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
  6. https://openai.com/index/sycophancy-in-gpt-4o/
  7. https://www.ilga.gov/documents/legislation/104/HB/PDF/10400HB1806lv.pdf
