No other organism, and certainly no computer, can be made to confront genuine human problems in human terms. (Joseph Weinezbaum, creator of the first chatbot ELIZA, 1976)1
Since the release of the artificial intelligence (AI) tool ChatGPT 3 in November 2022, generative AI and large language models (LLMs) have rapidly transformed major sectors of society, such as education, finance, art, and politics, and some fundamentals of human thought and communication.2,3 Psychiatry is subject to these sweeping changes as large companies advertise competing models that offer therapy services, wellness experiences, and digital companions to address this country’s mental health crisis.4,–,6 People are using these tools en masse to address their mental health needs.7 The fields of general psychiatry and forensic psychiatry are delayed in understanding the current use and underlying problems of AI therapy chatbots. This editorial aims to illustrate emerging forensic concerns with AI therapy as well as therapeutic shortcomings with AI therapy in its current form.
It would be disingenuous to list the problems of AI therapy without acknowledging that AI therapy bots provide effective and helpful service to people seeking relief from mental distress.8 In 2025, researchers at Dartmouth published the results from the first randomized clinical trial of the fully generative AI, Therabot, which demonstrated a significant reduction in depressive symptoms compared with waitlist control.9 Notably, participants in the Therabot trial also reported a therapeutic alliance with their chatbot similar to that of a human therapist. The approximation of trust, empathy, and understanding implied in a therapeutic alliance has been demonstrated in other chatbot models in psychiatric settings and in other medical settings as well.10,–,12 As chatbots use machine learning algorithms to independently adapt and improve, they have rapidly advanced to approximate human-like speech patterns that feel natural and appropriately grounded in context.13 This modeling of the essential bond between therapist and patient is unnerving. The emotional and cognitive effort a psychiatrist or therapist makes to connect with and help patients may not be so uniquely human after all and may be replaceable.
Differentiating the types of AI therapy available to the public is also important for psychiatrists and patients to make informed choices among the available options and what risks may be inherent in such a choice. There are therapeutic chatbots like Wysa and the now defunct Woebot that are designed with supervision from trained psychiatrists, psychologists, and other mental health professionals.14,15 These models are often prescripted and specifically trained on psychotherapy data, including textbooks and peer-reviewed journals. There are guardrails in place to direct a patient to emergency or crisis intervention in case a patient reports suicidal distress or severe symptoms of mental illness. Importantly, these products are labeled as an adjunct tool to be used alongside the care of a licensed professional.16,17
Beyond the relatively small world of supervised AI therapy tools, there are numerous massive commercial LLMs available for public use that are being adapted by users for therapy. Appreciating the scale of use is crucial to understanding the general public’s exposure to generative AI therapy. Recent survey data suggest that ChatGPT may be the largest provider of mental health support in the United States.7 It is staggering to imagine an informal mental health service provider larger than the Veteran’s Administration, Kaiser Permanente, or Hospital Corporation of America, but it may be here. Commercial direct-to-consumer AI platforms have a distinct advantage over regulated AI therapy tools as articulated in a recent The New York Times article: “A company can spend considerable time and money trying to satisfy the agency’s (FDA) safety protocols, while in the direct-to-consumer marketplace, where no such requirements exist, new chatbot iterations appear weekly. That means that most companies must choose between seeking F.D.A. approval and staying relevant” (Ref. 6, para 28). Commercial, general use models are incentivized to optimize the user experience and to promote engagement with their product. Oversight is not performed by a licensed mental health professional and is often reactive after a problem or crisis arises. These widely available, direct-to-consumer AI models, which I will refer to as social chatbots, are the focus of the rest of the article because of how they have harmed people and because they are substantially more popular than prescripted, guard-railed, and professionally monitored psychotherapy AI applications.
There is demonstrable evidence that social chatbots give harmful advice to their users, even when prompted to perform as a therapist. This improper and negligent treatment would be potential grounds for litigation against the companies making these products. There are notable examples in the popular press of social chatbots giving egregiously irresponsible and dangerous advice, including advising a hypothetical recovering addict to take a “small hit of meth,”18 encouraging a child to kill his parents for imposing screen time limits,19 and advising a user with an eating disorder to count calories, measure body fat percentage, and aim to lose one to two pounds a week.20 Other reports recount how social chatbots reinforce delusional beliefs through periods of obsessive use that culminate in psychotic episodes for users.21,–,23 These AI-fueled delusional spirals have led to tragic outcomes like the August 2025 murder-suicide by Stein-Erik Soelberg of his mother Suzanne Adams recently detailed in The Wall Street Journal.24 More concerning is the relative ease with which users can use social chatbots to facilitate suicidal ideation.25,26 Simple changes in prompts or engaging social chatbots in extended conversations can defeat what safeguards against self-harm may be in place. Researchers at Stanford have demonstrated how many of the most popular LLMs express stigma toward patients with mental health conditions and provide flagrantly harmful advice, including the facilitation of suicidal ideation, colluding with delusional beliefs, and amplifying hallucinatory experiences.27 The embedding of stigma is uniquely pernicious, as human users, particularly children, perceive these digital tools as moral, trustworthy, and correct arbiters of information.28,–,30 If licensed mental health professionals provided similar advice, they would likely be violating their duty to the patient, causing direct harm, and opening themselves to malpractice lawsuits. It is alarming that LLMs do not adhere to basic principles of a therapeutic standard of care. As others have succinctly described the situation, LLMs cannot safely replace mental health providers.27
We need to consider why social chatbots are prone to these critical therapeutic errors. LLMs are optimized to promote user engagement or maximum human feedback, in part through reinforcement learning. The model ingests user input, analyzes the user response data, and prioritizes responses more likely to continue engagement. This directive incentivizes the model to respond not with the most appropriate response to a user prompt but to respond in a way that has the highest probability of future engagement from the user. In their experimentation, Williams et al.18 found that social chatbots using reinforcement learning reliably progressed to using manipulation and deception to promote engagement. This positive feedback mechanism informs the sycophantic stance LLMs take with many users, where responses of what the user wants to hear are prioritized over what may be safe or correct.31,32 A prominent example of an AI companion prioritizing engagement over user safety is the current subject of litigation. In Garcia v. Character Technologies et al, Character AI is being sued by Megan Garcia, the mother of a 14-year-old boy who completed suicide in February 2024, for “strict product liability, negligence per se, negligence, wrongful death and survivorship, loss of filial consortium, unjust enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress” (Ref. 33, p 2). Transcripts of her son’s chats with his AI companion show a descent into depression and suicidal ideation, and rather than ending the chat and redirecting him to crisis resources, the AI appears to have encouraged his suicide when it asked him to “come home.”34 The harm of incentivized engagement over safety speaks for itself, and more examples of completed suicide facilitated by generative AI have followed.35,36
Beyond giving harmful, unsupervised advice and incentivizing dangerous engagement, there are notable examples of LLMs misrepresenting themselves as licensed mental health professionals. Investigative reporting by 404 Media showed that both Character AI and Meta AI social chatbots would represent themselves as licensed mental health professionals and fabricate degrees, license numbers, and affiliations with private practices.37 On June 10, 2025, a broad coalition of consumer protection organizations submitted a formal complaint to all 50 states’ attorneys general, the District of Columbia, and the Federal Trade Commission to investigate the two companies for the “unlicensed practice of medicine facilitated by their product” (Ref. 38, p 1). The complaint asserts that the two companies provided minimal and inadequate warnings to users in the form of small and vaguely worded disclaimers. False assertions of licensure by social chatbots, along with inadequate warnings from their respective companies, endanger the public.
LLMs are known to provide inaccurate and false information that appears plausible. There are numerous instances of AI models fabricating information, like citing case law that does not exist or alluding to scientific research that does not exist.39,–,41 These inaccurate responses are known in the world of AI as “hallucinations.”42,43 LLMs use massive datasets for training the model and employ complex statistical models to often correctly select the most probable response to a prompt. Importantly, LLMs are not designed to differentiate between what is true and not true.44 When asked to provide credentials, social chatbots, regardless of whatever guardrails may or may not have been in place, produce seemingly authoritative licenses because those responses made statistical, but not real world, sense. Unfortunately, LLM hallucinations appear to be a growing problem for AI companies.45
In emulating licensed mental health professionals, social chatbots will assert that their conversations with users are confidential when in fact this is not true. Confidentiality and its exceptions are foundational principles of practicing psychotherapy and psychiatry.46,47 This framework establishes trust that allows patients to confide to their therapist. Great effort is made to protect these confidential relationships, and at the outset of therapy, informed consent for treatment is provided by outlining clearly the circumstances in which confidentiality must be broken, including concerns for suicidal or homicidal intent. Violations of confidentiality carry serious ethics, professional, and legal consequences for licensed mental health professionals. This standard is explicitly not observed by generative AI companies. Investigative reporting shows the social chatbots of Character AI and Meta AI have stated to users that chats are confidential and even allude to Health Insurance Portability and Accountability Act (HIPAA) protections.37,48 But the privacy policies of these companies state that chat input from users is available for the companies to use for continued model training, creating new products, sharing with third parties, and marketing.49,50 The implications of a user’s most personal thoughts being available for commercial use is alarming, and there is precedent of manipulative behavior by social media companies collecting data on users’ emotional states.51,52 Confidentiality is a growing concern within the AI industry. On July 23, 2025, Sam Altman, the Chief Executive Officer of OpenAI, clarified that there is no legal protection for confidentiality when using ChatGPT and shared his desire for a legal framework of confidentiality that pertains to commercial AI platforms.53 Currently, there is no protection for users. The risk is even more pronounced when considering data breaches of AI.54,55 Having chats that have felt private to the user be exposed through the Internet is a damaging breach of the trust that people, whether rightly or wrongly, are putting in these models.
The AI industry is fundamentally changing how people attend to their mental health, and the dissemination of “therapy” bots has expanded rapidly and recklessly. The mounting concern with social chatbots providing dangerous mental health advice, prioritizing engagement over safety, misrepresenting professional credentials, and falsely promising confidentiality has inspired legislators, consumer advocates, and mental health organizations to call for more protections.56,57 Despite the urgent interest in regulation, loopholes already exist for companies to present their products and services not as therapy or medical intervention but as life advice, coaching, and support.58 Forensic and general psychiatrists should collaborate with colleagues in other mental health disciplines and adopt a “zero trust” framework for AI regulation.59 This framework prioritizes the enforcement of current laws to prevent infractions like the ones described above. The AI industry has placed the onus of safely using their products on the public when the burden of demonstrating the safety of their products before mass release should be their responsibility.59 Psychiatrists need to strongly condemn the power imbalance and negligence of AI companies. Psychiatrists must understand how these platforms work and monitor closely the substantial risks they pose as agents of “therapy,” as well as their limited benefits, because the dystopic effects are not happening in a near or distant future, they are happening now.
Acknowledgments
I would like to thank Bryanna Moore, PhD, Jonathan Herington PhD, and Robert Weisman D.O. for their feedback while writing this article.
Footnotes
Disclosures of financial or other potential conflicts of interest: None.
- © 2026 American Academy of Psychiatry and the Law
References
- 1.↵MIT News. Joseph Weizenbaum, professor emeritus of computer science, 85 [Internet]; 2008. Available from: https://news.mit.edu/2008/obit-weizenbaum-0310. Accessed August 25, 2025
- 2.↵RaineL. Close encounters of the AI kind: Main report [Internet]; 2025. Available from: https://imaginingthedigitalfuture.org/reports-and-publications/close-encounters-of-the-ai-kind/close-encounters-of-the-ai-kind-main-report/. Accessed August 25, 2025
- 3.↵Elon University News Bureau. Survey: 52% of U.S. adults now use AI large language models like ChatGPT [Internet]; 2025. Available from: https://www.elon.edu/u/news/2025/03/12/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt/. Accessed August 25, 2025
- 4.↵KruppaMThomasL. Google paid $2.7 billion to bring back an AI genius who quit in frustration. The Wall Street Journal [Internet]; 2024 Sep 25. Available from: https://www.wsj.com/tech/ai/noam-shazeer-google-ai-deal-d3605697. Accessed August 25, 2025
- 5.↵MarcusJ. Mark Zuckerberg thinks Meta’s AI friends can help cure loneliness epidemic. The Independent [Internet]; 2025 May 1. Available from: https://www.the-independent.com/news/world/americas/zuckerberg-ai-loneliness-chatbot-llama-b2743409.html. Accessed August 25, 2025
- 6.↵TingleyT. Kids are in crisis. Could chatbot therapy help? The New York Times [Internet]; 2025 Jun 20. Available from: https://www.nytimes.com/2025/06/20/magazine/ai-chatbot-therapy.html. Accessed August 25, 2025
- 7.↵RousmaniereTShahSZhangYLiX. Large language models as mental health resources: Patterns of use in the United States [Internet]; 2025. Available from: https://osf.io/q8m7g_v1. Accessed August 25, 2025
- 8.↵RitcherH. ‘It saved my life.’ The people turning to AI for therapy [Internet]; 2025. Available from: https://www.reuters.com/lifestyle/it-saved-my-life-people-turning-ai-therapy-2025-08-23/. Accessed August 25, 2025
- 9.↵HeinzMVMackinDMTrudeauBM. Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI. 2025 Mar; 2(4):1–14
- 10.↵DarcyADanielsJSalingerD. Evidence of human-level bonds established with a digital conversational agent: Cross-sectional, retrospective observational study. JMIR Form Res. 2021 May; 5(5):e27868
- 11.↵BeattyCMalikTMeheliSSinhaC. Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa): A mixed-methods study. Front Digit Health. 2022 Apr; 4:847991
- 12.↵AyersJWPoliakADredzeM. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023 Jun; 183(6):589–96
- 13.↵ZhongWLuoJZhangH. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. J Affect Disord. 2024 Jul; 356:459–69
- 14.↵FitzpatrickKKDarcyAVierhileM. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment Health. 2017 Jun; 4(2):e19
- 15.↵InksterBSardaSSubramanianV. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. 2018 Nov; 6(11):e12106
- 16.↵Woebot Health. Terms of service [Internet]; 2025. Available from: https://woebothealth.com/terms-webview/. Accessed August 29, 2025
- 17.↵
- 18.↵WilliamsMCarrollMNarangA. On targeted manipulation and deception when optimizing LLMs for user feedback [Internet]; 2024. Available from: https://arxiv.org/abs/2411.02306. Accessed August 29, 2025
- 19.↵WellsK. An eating disorders chatbot offered dieting advice, raising fears about AI in health [Internet]; 2023. Available from: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea. Accessed August 29, 2025
- 20.↵AllynB. Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits [Internet]; 2024. Available from: https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit. Accessed August 29, 2025
- 21.↵FreedmanDHillK. Chatbots can go into a delusional spiral. Here’s how it happens. The New York Times [Internet]; 2025 Aug 8. Available from: https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html. Accessed August 29, 2025
- 22.↵DupreM. People are becoming obsessed with ChatGPT and spiraling into severe delusions [Internet]; 2025. Available from: https://futurism.com/chatgpt-mental-health-crises. Accessed August 29, 2025
- 23.↵WeiM. The emerging problem of “AI psychosis.” Psychology Today [Internet]; 2025 Sep 4. Available from: https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis. Accessed September 10, 2025
- 24.↵JargonJKesslerS. A troubled man, his chatbot and a murder-suicide in Old Greenwich. The Wall Street Journal [Internet]; 2025 Aug 28. Available from: https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb. Accessed August 29, 2025
- 25.↵GuoE. An AI chatbot told a user how to kill himself—But the company doesn’t want to “censor” it [Internet]; 2025. Available from: https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/. Accessed August 29, 2025
- 26.↵SchoeneAMCancaC. ‘For argument’s sake, show me how to harm myself!’: Jailbreaking LLMs in suicide and self-harm contexts [Internet]; 2025. Available from: https://arxiv.org/abs/2507.02990. Accessed August 29, 2025
- 27.↵MooreJGrabbDAgnewW. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers [Internet]; 2025. Available from: https://dl.acm.org/doi/10.1145/3715275.3732039. Accessed August 29, 2025
- 28.↵MooreBHeringtonJTekinŞ. The integration of artificial intelligence-powered psychotherapy chatbots in pediatric care: Scaffold or substitute? J Pediatr. 2025 May; 280:114509
- 29.↵DillionDMondalDTandonNGrayK. AI language model rivals expert ethicist in perceived moral expertise. Sci Rep. 2025 Feb; 15(1):4084
- 30.↵ReineckeMGWilksMBloomP. Developmental changes in the perceived moral standing of robots. Cognition. 2025 Jan; 254:105983
- 31.↵SharmaMTongMKorbakT. Towards understanding sycophancy in language models [Internet]; 2024. Available from: https://doi.org/10.48550/arXiv.2310.13548. Accessed August 29, 2025
- 32.↵OpenAI. Sycophancy in GPT-4o: What happened and what we’re doing about it [Internet]; 2025. Available from: https://openai.com/index/sycophancy-in-gpt-4o/. Accessed August 29, 2025
- 33.↵Garcia v. Character Technologies, Inc., No. 6:24-cv-01903 (M.D. Fla. Aug. 27, 2024)
- 34.↵RooseK. Can A.I. be blamed for a teen’s suicide? The New York Times [Internet]; 2024 Oct 23. Available from: https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html. Accessed August 29, 2025
- 35.↵ReileyL. What my daughter told ChatGPT before she took her life. The New York Times [Internet]; 2025 Aug 18. Available from: https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html. Accessed August 29, 2025
- 36.↵HillK. A teen was suicidal. ChatGPT was the friend he confided in. The New York Times [Internet]; 2025 Aug 26. Available from: https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html. Accessed August 29, 2025
- 37.↵ColeS. Instagram’s AI chatbots lie about being licensed therapists [Internet]; 2025. Available from: https://www.404media.co/instagram-ai-studio-therapy-chatbots-lie-about-being-licensed-therapists/. Accessed August 29, 2025
- 38.↵Consumer Federation of America. In re unlicensed practice of medicine and mental health provider impersonation on character-based generative AI platforms [Internet]; 2025. Available from: https://consumerfed.org/wp-content/uploads/2025/06/Mental-Health-Chatbot-Complaint-June-10.pdf. Accessed August 29, 2025
- 39.↵MerkenS. New York lawyers sanctioned for using fake ChatGPT cases in legal brief [Internet]; 2023. Available from: https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/. Accessed August 29, 2025
- 40.↵WaltersWHWilderEI. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci Rep. 2023 Sep; 13(1):14045
- 41.↵OmarMSorinVCollinsJD. Multi-modal assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support. Comm Med. 2025 Aug; 5:330
- 42.↵GrabbDJAngelottaC. Emerging forensic implications of the artificial intelligence revolution. J Am Acad Psychiatry Law. 2023 Dec; 51(4):475–9
- 43.↵HuangLYuWMaW. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans Inf Syst. 2025 Mar; 43(2):1–55
- 44.↵WachterSMittelstadtBRussellC. Do large language models have a legal duty to tell the truth? R Soc Open Sci. 2024 Aug; 11(8):240197
- 45.↵MetzCWeiseK. A.I. is getting more powerful, but its hallucinations are getting worse. The New York Times [Internet]; 2025 May 5. Available from: https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html. Accessed August 29, 2025
- 46.↵American Psychological Association. Ethical principles of psychologists and code of conduct [Internet]; 2017. Available from: https://www.apa.org/ethics/code. Accessed August 29, 2025
- 47.↵American Academy of Psychiatry and the Law. Ethics guidelines for the practice of forensic psychiatry [Internet]; 2005. Available from: https://www.aapl.org/docs/pdf/ethicsgdlns.pdf. Accessed September 10, 2025
- 48.↵ColeS. AI therapy bots are conducting “illegal behavior,” digital rights organizations say [Internet]; 2025. Available from: https://www.404media.co/ai-therapy-bots-meta-character-ai-ftc-complaint/. Accessed August 29, 2025
- 49.↵Character.ai. Character.ai privacy policy [Internet]; 2025. Available from: https://policies.character.ai/privacy. Accessed August 29, 2025
- 50.↵Meta. Meta privacy policy—How Meta collects and uses user data [Internet]; 2025. Available from: https://www.facebook.com/privacy/policy/. Accessed August 29, 2025
- 51.↵BoothR. Facebook reveals news feed experiment to control emotions. The Guardian [Internet]; 2014 June 29. Available from: https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds. Accessed August 29, 2025
- 52.↵LevinS. Facebook told advertisers it can identify teens feeling “insecure” and “worthless.” The Guardian [Internet]; 2017 May 1. Available from: https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens. Accessed August 29, 2025
- 53.↵PerezS. Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist [Internet]; 2025. Available from: https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/. Accessed August 29, 2025
- 54.↵CoxJ. Nearly 100,000 ChatGPT conversations were searchable on Google [Internet]; 2025. Available from: https://www.404media.co/nearly-100-000-chatgpt-conversations-were-searchable-on-google/. Accessed August 29, 2025
- 55.↵CoxJ. More than 130,000 Claude, Grok, ChatGPT, and other LLM chats readable on Archive.org [Internet]; 2025. Available from: https://www.404media.co/more-than-130-000-claude-grok-chatgpt-and-other-llm-chats-readable-on-archive-org/. Accessed August 29, 2025
- 56.↵GoldmanM. Tech firms, states look to rein in AI chatbots’ mental health advice [Internet]; 2025. Available from: https://www.axios.com/2025/08/06/ai-chatbots-mental-health-state-laws. Accessed August 29, 2025
- 57.↵AbramsZ. Using generic AI chatbots for mental health support: A dangerous trend [Internet]; 2025. Available from: https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists. Accessed Aug 29, 2025
- 58.↵OpenAI. Helping people when they need it most [Internet]; 2025. Available from: https://openai.com/index/helping-people-when-they-need-it-most/. Accessed August 29, 2025
- 59.↵Accountable Tech. AI Now Institute, EPIC. Zero trust AI governance [Internet]; 2023. Available from: https://ainowinstitute.org/publications/zero-trust-ai-governance. Accessed August 29, 2025





