The Role of ChatGPT in Mental Health: A Balanced Evaluation by Dr Adeel, Clinical Psychologist
March 29, 2023 · Reading time: 4 minutes
ChatGPT and similar large language model (LLM) systems have been adopted by millions of people as informal mental health support — for processing difficult emotions, understanding diagnoses, exploring therapy options, and even working through distress at 3am when no human support is available. This use is outpacing the research, and the honest picture involves both genuine utility and real risks that users should understand.
What People Are Actually Using ChatGPT For in Mental Health
A 2023 survey by the American Psychological Association found that 22% of adults had used an AI chatbot for mental health support. The most common uses: understanding a mental health diagnosis they had received; processing difficult emotional experiences; drafting communication about mental health concerns (to employers, family members, or clinicians); and accessing information about treatment options. These uses — essentially psychoeducation and emotional processing support — are relatively low-risk and represent genuine utility where access to professional support is limited.
What ChatGPT Does Well in This Context
Availability and accessibility. ChatGPT is available 24 hours a day, requires no appointment, and carries no stigma. For someone experiencing distress at an hour when professional support is unavailable, a well-constructed conversation can provide grounding, psychoeducation, and a pathway to appropriate help. This is not a substitute for professional care, but it is better than isolation.
Non-judgmental engagement. Some people find it easier to disclose difficult thoughts, experiences, or behaviours to an AI than to a human clinician — at least initially. This reduced disclosure barrier can facilitate articulation of experiences that the person then brings to a clinician more prepared and less ashamed.
Psychoeducation quality. When asked factual questions about mental health conditions, diagnostic processes, medication effects, or evidence-based treatments, current-generation LLMs provide generally accurate information when they draw on well-established clinical consensus. The quality is comparable to a well-researched health article. Factual errors do occur, however, particularly on cutting-edge research or nuanced clinical questions.
Serious Limitations and Risks
No memory, no continuity. Each ChatGPT conversation starts without history. The clinical utility of any therapeutic relationship depends on accumulated understanding of a person over time — context, pattern recognition, knowledge of triggers, understanding of what has been tried. ChatGPT cannot provide this continuity, which limits its value for ongoing support beyond individual conversations.
Hallucinations and confident inaccuracy. LLMs generate plausible-sounding text without internal verification of its accuracy. For mental health questions, this can produce confident-sounding misinformation about medication doses, diagnostic criteria, or treatment approaches that a user may act on. The risk is elevated in domains where the user lacks sufficient prior knowledge to identify errors.
Crisis situations. There is no reliable evidence that any current LLM provides appropriate crisis support for active suicidality or acute psychiatric emergency. The model cannot call emergency services, dispatch a welfare check, or provide the accountability that human crisis support offers. Using ChatGPT as a primary crisis resource is inappropriate and potentially dangerous. Crisis situations require crisis services: the 988 Suicide and Crisis Lifeline (US), Samaritans (UK), or equivalent in your country.
Privacy. Conversations with ChatGPT are processed on OpenAI's servers and are subject to its privacy policy. Mental health disclosures in these conversations may be stored and, depending on settings, used in model training. Users should review OpenAI's data practices and decide whether they are comfortable sharing sensitive disclosures under those terms.
A Framework for Responsible Use
ChatGPT and similar tools can usefully complement professional mental health care, not replace it. Reasonable uses include preparing for therapy appointments (articulating what you want to discuss); exploring information about a diagnosis or treatment before a clinical conversation; processing low-severity daily stressors when professional support is not warranted; and drafting difficult communications about mental health to employers or family members.
Uses that carry more risk and warrant caution include relying on it for diagnosis; making medication decisions without clinical involvement; using it as a primary support during a mental health crisis; and substituting it for professional treatment of significant mental health conditions. AI tools are likely to become increasingly capable, but the current evidence base does not support treating LLMs as a primary mental health intervention.
