The Power and Peril of AI as Mental Health Coach: What Consumers and Clinicians Need to Know

How AI Is Changing Mental Health Care

Artificial intelligence is changing how we work, shop, and even date. Should it also guide our mental health? As a licensed clinical psychologist, a psychodynamic psychotherapist, and a student in the Certificate in Artificial Intelligence program at the University of Cincinnati’s Lindner College of Business, I have been seeking answers to this question.

AI technology is being rolled out at a dizzying pace. Fortunately for my field of mental health, many AI tools can help people manage stress, track goals, and ease mild symptoms. However, they cannot replace the human connection at the heart of psychotherapy. The most effective care still depends on a real relationship, not an algorithm. In this article, I’ll discuss how people can use AI for self-help, and how therapists and patients can incorporate discussion of AI into their work together.

What AI Mental Health Tools Do Well

AI-enabled mental health tools are strong at three things: skill delivery, personalization, and tracking progress. Many apps now offer CBT-based exercises, mood tracking, and adaptive prompts that fit a user’s daily patterns. Multiple peer-reviewed studies show short-term symptom reductions when people use structured digital interventions. For example, a randomized controlled trial of a generative AI chatbot reported larger improvements in depression and anxiety compared with waitlist controls over 4 to 8 weeks, alongside high engagement time and user-rated alliance comparable to human therapy settings (Heinz et al., 2025). A JAMA Network Open trial of a self-guided CBT app for young adults found substantial anxiety reductions across different engagement incentive models, suggesting well-designed apps can help people practice skills consistently (Bress et al., 2024). A meta-analysis in npj Digital Medicine concluded that AI-based conversational agents can reduce distress to a moderate or large degree, especially when tools are multimodal or integrated into everyday messaging platforms (Li et al., 2023). A 2025 PLOS ONE review similarly found generally positive effects across 38 studies, while noting variability and the need for longer follow-up (Shahsavar & Choudhury, 2025).

These findings support a practical claim: AI can make self-help more user-friendly. Dynamic feedback, reminders, and tailored exercises are broadly appealing, and often more accessible and discreet than carrying around a self-help book. As in other areas of our lives, AI is good at solving problems that are repetitive, predictable, and popular. In other words, AI tools work best when they can draw on the vast amounts of data already freely available on the internet.

Where AI Falls Short in Mental Health Treatment

What happens when your problem is not run-of-the-mill? AI cannot crowd-source the solution to complex questions of human experience, particularly those that are ineffable, the feelings we find hard to put into words. Psychotherapy is not just about building coping skills; the relationship between human therapist and patient communicates, contains, and co-creates those skills. Human therapy involves attunement to pauses, tone, and meaning over time, within an ethical frame that protects safety and accountability. Even the most advanced chatbots, which can simulate conversation, cannot participate as a subject in a shared relationship.

Psychoanalytic perspectives are at the forefront of the scientific study of the power of the healing relationship (Essig, 2025; Levy, n.d.). Leaders in psychoanalysis caution that simulated intersubjectivity can feel convincingly human. Some people even feel a stronger connection to their AI agent than to their therapist, friends, or partner. The allure of an ever-attentive, knowledgeable, validating companion is undeniable, and research on AI-assisted communication shows that AI can help clinicians craft more constructive messages (Longhurst et al., 2024). However, interactions with AI bots lack the mutual, human core that makes a true relationship possible. Therapy involves joint attention, thinking and reflecting together, for the purpose of cognitive and emotional change.

Another concern is the risk of going it alone. Some users may substitute algorithmic companionship for human contact, which can reduce loneliness in the moment yet pull them into more time alone, isolated from human-to-human contact. Psychoanalysts describe this dual nature as a remedy that can also act as a poison (Essig, 2025). When safety guardrails are weak and corporate interests prioritize engagement over well-being, it is far too easy for chatbot use to lead to tragic outcomes, including suicides. It is therefore essential for clinicians and the public to insist on boundaries, data privacy, and escalation pathways that lead back to human support when needed. Professional guidance further urges clinicians to participate in the AI conversation, shaping standards and regulation so that tools align with human values rather than pure engagement metrics (Essig, 2025).

When to Use AI—and When Human Therapy Is Essential

Consumers and therapists can consider AI tools for everyday stress management, sleep hygiene, or building healthy habits, and can integrate them into care plans as practice partners. They are not replacements for therapy. For trauma, active suicidal ideation, severe mood episodes, eating disorders, or complex relational problems, licensed professional care is essential. Consumers should favor AI products built specifically for mental health support, especially those that publish research and provide clear privacy policies. Clinicians should consider tools that include risk escalation, crisis links, and transparent data practices. The evidence base for AI products as self-help tools is promising, but results are uneven across products, and long-term outcomes are still being established (Ni & Jia, 2025). Both clinicians and consumers should therefore monitor usage and effects over time rather than assume any tool is a permanent solution.

AI-ASK: A Practical Framework for Clinicians

Therapists need a practical way to address AI use in treatment. I created the AI-ASK acronym as a structured approach for opening conversations about how patients are interacting with AI, reducing stigma around the topic, and cultivating a collaborative mindset.

  • A — Awareness of AI usage: Many people do not realize when they are interacting with AI-driven platforms such as YouTube, social media, or search engines.
  • I — Interest in AI tools: Ask whether the patient is curious about apps or chatbots for support.
  • A — Advantages for your care: Collaborate to identify potential benefits such as practicing CBT or DBT skills, tracking symptoms, or increasing accountability between sessions.
  • S — Security risks: Discuss the risks to privacy and data protection, as well as the potential for bias in using AI products.
  • K — Keeping in contact: Revisit the topic regularly. A patient’s AI use will evolve as needs change and technology advances.

Consumer Tips at a Glance

  • Choose apps with published research and transparent privacy policies.
  • Use AI for skills and daily routines. Do not treat it as therapy.
  • Do not talk to or about an AI agent as if it were human. Avoid anthropomorphizing a generative pretrained transformer (GPT).
  • Share your AI use with your therapist so it can be integrated safely into your care.
  • Review data settings, limit sharing, and pick tools that explain how your information is secured.
  • Notice whether the tool helps you connect more with people in your life. If it does not, reassess.

Clinician Guidance

  • Present AI tools as practice supports that reinforce therapy goals.
  • Screen for risk factors such as isolation, obsessive tracking, or blurred boundaries.
  • Favor tools with clear safety features and evidence.
  • Use AI-ASK in intake and follow-ups so AI use remains part of the therapeutic dialogue.

TL;DR (Too Long; Didn’t Read)

AI self-help tools can help people learn coping strategies and stay on track with their mental health goals, but they are not an adequate replacement for human therapists. With healthy boundaries, attention to privacy, and ongoing clinician involvement, AI technology can enhance care without replacing the relationships that heal. Therapists and consumers can use the AI-ASK framework to incorporate AI into their work together. And remember: if you like your AI agent more than your therapist, you need to tell your therapist!

References

Bress, J. N., Falk, A., Schier, M. M., et al. (2024). Efficacy of a mobile app-based intervention for young adults with anxiety disorders: A randomized clinical trial. JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2822451

Essig, T. (2025). Psychoanalytic AI activism. TAP Magazine, 59(1). https://tapmagazine.org/all-articles/psychoanalytic-ai-activism

Hayles, N. K. (2025). Bacteria to AI: Human futures with our nonhuman symbionts. University of Chicago Press.

Heinz, M. V., Mackin, D. M., Trudeau, B. M., et al. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI. https://gwern.net/doc/psychiatry/depression/2025-heinz.pdf

Li, H., Zhang, R., Lee, Y.-C., Kraut, R. E., & Mohr, D. C. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine. https://www.nature.com/articles/s41746-023-00979-5

Longhurst, C. A., et al. (2024). Generative artificial intelligence for drafting patient messages in electronic health records: A randomized clinical trial. JAMA Network Open. https://doi.org/10.1001/jamanetworkopen.2024.xxxxx

Ni, Y., & Jia, F. (2025). A scoping review of AI-driven digital interventions in mental health care. Healthcare. https://www.mdpi.com/2227-9032/13/10/1205

Stein, A. (2025). What AI can and can’t do, and how psychoanalysis can help. American Psychoanalytic Association. https://apsa.org/what-ai-can-and-cant-do/