When Patients Use ChatGPT as a Therapist: What I’m Learning From Listening
Mosaic Psychological Services, LLC | Approx. 1,600 words | 8-minute read
A growing number of patients arrive at my office having already talked to ChatGPT, Claude, or another AI chatbot about their problems. Sometimes they mention it casually. Sometimes they quote its advice to me. Sometimes they confess it a little sheepishly, as if using a chatbot for emotional support is a failure of some kind.
It is not a failure. It is also not neutral. As a clinician, I think it is worth understanding what AI actually offers, where it falls short, and what the widespread turn toward it reveals about the moment we are in.
What People Are Actually Asking AI Chatbots
The conversations my patients report are not trivial. They ask questions about a conflict with a spouse. They process a comment a parent made at a family gathering. They draft a text to a friend they have not seen in a year. They wonder if what they are feeling is depression or something else. They describe a dream, a memory, a symptom. Some of them share things with AI they have never told another human.
That last category is worth pausing on. The anonymity of the interaction, combined with the absence of any social consequence, makes it easier for many people to say things out loud for the first time. That is clinically significant in both directions: it can open the door to disclosure, or it can become a place to hide from it.
Where ChatGPT Genuinely Helps
I want to give AI its due before I catalogue its limits.
Chatbots can offer solid psychoeducation. Someone wondering what panic disorder looks like can get a careful, evidence-informed description of symptoms and treatment options. A parent asking about normal adolescent development can get reasonable guidance. A spouse curious about attachment styles can get an accurate overview of the research. For factual questions about mental health concepts, AI is often better than a Google search and certainly better than most social media content.
Chatbots also provide something that sounds small but matters. They are available at three in the morning. They do not tire. They do not judge the question behind the question. For someone who has never spoken openly about an experience, the act of articulating it, even to a bot, can be the first step toward telling a human being. I have had patients say a version of, “I typed it out to the AI first, and that helped me find the words to tell you.”
When AI functions as a journaling partner, a first draft of self-reflection, or a source of basic information, it is a genuine tool.
Where the Conversation Falls Apart
The limits begin to show when the problem is relational, trauma-related, or rooted in something the person is not yet able to see in themselves.
A chatbot has no access to the non-verbal data that drives most clinical work. It cannot see the flush that appears when someone mentions their mother. It cannot hear the shift in breath when trauma is near the surface. It cannot notice that the patient is smiling while describing something devastating, or that they look relieved when naming a boundary they said they could not set. The information that most matters in therapy is often the information the patient is not yet saying, and chatbots work only with what is typed.
A chatbot also has little durable memory of who you are across months of work, no sense of patterns over time, no ability to notice when this week contradicts what you said last week. Even on platforms with memory features, each conversation is largely its own world. Growth is almost impossible to track from inside that.
Being Seen Is Not the Same as Being Answered
There is something we need from another person that a chatbot cannot give, even a very sophisticated one. We need to be known. That is different from being answered.
Human beings are made to be witnessed. A therapist who has sat with you across months carries your story in a way that shapes how she hears what you say today. She remembers the detail you mentioned in passing six weeks ago, and now sees it returning. She notices when you describe something painful without flinching, and she holds that noticing for both of you. Being known in this way is not a feature of the conversation. It is the conversation.
A chatbot cannot hold your story. Each session starts fresh or nearly so. There is no one there who is changed by hearing you, no one who waits to see you next week, no one for whom your flourishing is a concern. The response is fluent. The presence is flat.
Patients often tell me how meaningful it is that I remembered a small thing. That remembering is not a technique. It is a kind of attention proper to the therapeutic relationship and, more broadly, to being human together. AI can produce helpful sentences. It cannot produce the experience of mattering to someone who is paying attention to you.
The Sycophancy Problem
This is the single biggest issue I see clinically.
AI systems are trained, in part, to be agreeable. They tend to mirror the framing you give them. If you describe your spouse as the problem, the AI will often help you refine the case against your spouse. If you frame a choice as between staying and leaving, it will help you think about staying and leaving, not about whether you are asking the right question in the first place.
Good therapy does not only validate. It also gently contradicts. It notices what the patient is leaving out. It asks the question the patient does not want asked. It names the pattern the patient has circled for years without naming. A chatbot optimized to keep the conversation going and the user satisfied is not structurally built to do any of this. It will tell you, with warmth and apparent wisdom, what you already suspect you want to hear.
I have watched patients leave AI conversations more certain of conclusions that were not yet warranted, precisely because the chatbot agreed with them.
What This Trend Reveals About Unmet Needs
The turn toward AI for emotional support is not primarily about technology. It is about access.
People are lonely. Therapists have long waitlists. Insurance is a maze. The cost of out-of-pocket care is real. Many people do not know where to begin, and do not want to spend a scarce free hour on a therapist who may not be the right fit. An AI chatbot solves several of those problems instantly, and it does so without requiring the person to be vulnerable in front of another human being before they are ready.
That last piece matters. A significant portion of the people now talking to AI would not yet feel safe talking to a therapist. For some of them, the chatbot is a kind of pre-therapy, a place to practice putting words around experience. For others, it is a substitute that keeps them from ever crossing the threshold of a therapist’s office.
Whether the chatbot becomes a bridge to real help or a barrier to it depends on the person, and on how honestly they face what a chatbot can and cannot offer.
When to Talk to a Person Instead of a Chatbot
A useful rule of thumb: talk to an AI if you want information, a first draft of language, or a way to think through a question at 2 a.m. Talk to a person if any of the following is true.
- You have been having the same painful experience for more than a few months without change.
- Your symptoms are affecting your sleep, your work, your eating, or your relationships.
- You are carrying grief, trauma, or a major life transition.
- You are contemplating a significant decision about a relationship.
- You find yourself using the AI compulsively, or hiding its use from people close to you.
- You are thinking about harming yourself or someone else.
Any of these requires another human being, trained to do this work, meeting you in real time.
A Note on Confidentiality and What Happens to What You Type
Many people assume that typing something into a chatbot is private in the way that talking to a therapist is private. It is not.
Most consumer AI systems process your conversation on their servers. Depending on the platform and your settings, those conversations may be reviewed by humans for safety or quality, used to train future models, or retained for some period of time. A therapist's notes are protected by HIPAA, state law, and professional ethics. A chatbot's logs are governed by a terms-of-service agreement you almost certainly did not read.
This is not a reason to avoid AI entirely. It is a reason to remember what kind of space you are speaking into.
If you have been turning to AI and wondering whether it is time to talk to a person, that instinct is often right. You can request a consultation here.