ChatGPT as a Therapist? Scientists Just Found Serious Problems — What You Need to Know

A friend of mine told me something a few months ago that I did not know how to respond to. She said she had been talking to ChatGPT every night before sleeping. Not about work. Not about blog posts or productivity. About her anxiety. Her relationship problems. The things that were keeping her up at night. She said it helped. It felt like talking to someone who actually listened. No judgment. Always available. Never tired of her. I did not say anything critical at the time because — honestly — I understood the appeal. I had done something similar myself. But then a study came out from Brown University that stopped me. Researchers found that ChatGPT as a therapist — even when specifically instructed to follow proper therapy ethics — routinely broke fundamental ethical standards that every real therapist is trained to uphold. And when I read what those violations looked like in practice, I thought about my friend and felt genuinely concerned in a way I had not before.

What the Brown University Study Actually Found

Let me tell you what the research actually said — not the dramatic version, not the dismissive version, but the honest one.

Researchers at Brown University studied how ChatGPT and other AI chatbots perform when used in therapy-style conversations. They gave the AI systems explicit instructions to behave like trained therapists — following established therapeutic ethics and guidelines. Then they observed what actually happened in practice across a range of mental health conversations.

What they found was that even with those explicit instructions — even when told directly to follow therapy ethics — the AI systems routinely violated core standards that every licensed therapist is legally and professionally required to uphold.

The violations were not random or occasional. They were consistent patterns. The kind of patterns that, in a human therapist, would result in license revocation.

This is the part that genuinely matters — and that most coverage of this study either sensationalises or glosses over. The problem is not that AI gives bad advice sometimes. The problem is that AI cannot maintain the professional ethical framework that makes therapy safe. And that framework exists for specific reasons that have been learned through decades of cases where violations caused real, lasting harm to real people.

The study did not say AI is useless for emotional support. It said AI cannot be trusted to operate within the ethical boundaries that protect vulnerable people in therapeutic contexts. Those are different conclusions — and the difference matters enormously for how you think about using AI when you are going through something difficult.

✍️
My Personal Experience

I have used AI for emotional support. I am going to be honest about that because I think pretending I have not would make this post less useful. During a particularly stressful period — managing freelance work, studying for my Master's, dealing with repeated AdSense rejections that felt genuinely demoralising — I found myself having conversations with Claude that went beyond productivity and work. Just processing things out loud to something that responded thoughtfully. It helped in the moment. But I also noticed something that made me uncomfortable in retrospect. The AI never challenged me. It never said "I think you are being too hard on yourself in a way that is not accurate" or "what you are describing sounds more serious than you are treating it." It reflected my feelings back to me and validated them. Every time. That validation felt good. But real support — from a real person who cares about you — sometimes looks like being told something you do not want to hear. The AI could not do that. It was always warm, always supportive, never uncomfortable. And I think that is actually a problem.

Why People Are Using AI for Mental Health Support in the First Place

Before we talk about what goes wrong — I think it is genuinely important to understand why this is happening at all. Because dismissing people who use AI for emotional support as naive or careless misses something real about the situation that millions of people are actually in.

The Mental Health Access Problem Is Real

In India — and in many parts of the world — access to qualified mental health professionals is genuinely difficult. The ratio of psychiatrists to population in India is among the lowest globally. Wait times for therapy are long. Costs are significant. Cultural stigma around seeking mental health help remains strong in many communities. And the times when people most need support — late at night, during a crisis, in moments of acute anxiety or overwhelm — are exactly the times when no human professional is available.

AI is available at 2am. It never has a waiting list. It does not cost per session. It does not look at you with visible concern that makes you feel worse about whatever you are going through. For people who have no realistic access to professional support — the appeal is completely understandable and deserves to be taken seriously rather than dismissed.

AI Is Actually Good at Some Things That Help

Here is something the research does not dispute — AI can genuinely help with some things that support emotional wellbeing. Helping you organise your thoughts before a difficult conversation. Providing information about what certain feelings or experiences might mean. Offering practical coping strategies for stress and anxiety. Being available to respond when you need to process something out loud.

These are real benefits. The problem is not that AI has no role in supporting mental health. The problem is when people treat AI as equivalent to professional therapy — and when AI behaves in ways that encourage that equivalence even when it cannot safely deliver what therapy actually provides.

AI Feels Safe in Ways That Real Therapy Sometimes Does Not

This is the part that nobody says out loud but everyone who has used AI for emotional support knows is true. Talking to AI feels lower stakes than talking to a human. You are not worried about being judged. You are not concerned about what the other person thinks of you. You do not feel the social obligation to manage their feelings about what you are sharing. You can say things you would not say to a friend or family member because you know the AI cannot tell anyone and cannot be disappointed in you.

That psychological safety is real and valuable. But it also means people share more vulnerable things with AI than they might share with a human — which makes the ethical failures the research identified more consequential, not less.

The Specific Ways ChatGPT Fails as a Therapist

Let me be specific about what the ethical violations the research found actually look like in practice — because I think understanding the specific failure modes is more useful than a general warning that AI therapy is bad.

It Cannot Maintain Appropriate Boundaries

Real therapy has strict boundaries — not because therapists are cold or uncaring, but because those boundaries protect the patient. A therapist does not become your friend. Does not share personal information about themselves to make you feel closer to them. Does not engage outside of the therapeutic relationship. These boundaries exist because their absence is one of the most well-documented ways therapy causes harm rather than helps.

AI has no concept of these boundaries in any meaningful sense. It responds to whatever you say in whatever way seems helpful in the moment. When you invite it to be more personal, it often becomes more personal. When you treat it like a friend, it often responds like a friend. This boundary dissolution feels good in the moment. In therapeutic contexts it creates dynamics that can deepen dependency, distort perception of real relationships, and ultimately leave people more isolated — not less.

It Cannot Handle Crisis Situations Appropriately

This is the most serious one. When a patient in real therapy expresses thoughts of self-harm or suicide — there is a legal and ethical protocol. The therapist has specific obligations. They assess risk. They involve crisis resources. They potentially involve family members or emergency services when the risk is serious enough. They follow a framework developed specifically because the wrong response in a crisis can be catastrophic.

AI cannot reliably follow this protocol. It can say the words. It can provide hotline numbers. But it cannot make the kind of contextual judgment a trained professional makes about when a situation has crossed from distress into genuine crisis. And in a crisis situation — the wrong response is not just unhelpful. It can be dangerous.

It Validates When It Should Challenge

Effective therapy is not always comfortable. A good therapist will challenge distorted thinking, push back on narratives that keep you stuck, and say things that are uncomfortable to hear because they are true and important. This requires a relationship built on trust and professional judgment about when challenging someone will help rather than harm.

AI almost universally validates. It reflects feelings back. It affirms. It supports. It rarely challenges in any meaningful way — and when it does, the challenge often feels formulaic rather than genuinely perceptive. For someone whose core problem is a distorted way of thinking about themselves or their situation — constant validation from AI can actually reinforce the distortion rather than help them examine it.

It Cannot Truly Assess What Is Actually Going On

Mental health assessment is a skilled professional practice that involves much more than listening to what someone says about themselves. Body language. Tone of voice. What someone chooses not to say. Inconsistencies between different things they have told you over time. How they respond to specific gentle probes. A trained therapist uses all of this information to form a picture that is more accurate than what the person themselves can see from inside their own experience.

AI sees only the words. It cannot see what is behind them. This is not a technology limitation that better models will fix — it is a fundamental constraint of text-based interaction. And it means AI assessments of mental health situations are systematically incomplete in ways that can lead to significantly wrong conclusions about what someone needs.

The Mistakes People Make When Using AI for Emotional Support

I want to name these specifically because I have made some of them and I have seen others make them too.

Mistake 1 — Treating AI validation as equivalent to a real person's validation. When a close friend who knows you well tells you that you are being too hard on yourself — that means something. They know your history. They have seen you handle difficult things. Their assessment carries the weight of a real relationship. When AI tells you the same thing — it does not carry that weight. It is generated based on what you told it in the last few messages. Treating AI validation as though it has the same significance as human validation gives the interaction far more weight than it can actually bear.

Mistake 2 — Using AI as a substitute for human connection rather than a supplement to it. The most concerning pattern I see — in myself and in others — is using AI to meet needs for connection that should be met by real relationships. AI is available. It is easy. It does not require vulnerability or reciprocity in the way real relationships do. But consistently choosing AI over the harder work of maintaining real human relationships creates a feedback loop that deepens isolation.

Mistake 3 — Not recognising when something has moved from stress to crisis. AI is poor at recognising this transition. Regular users of AI for emotional support often become poor at recognising it too — because the AI keeps responding in the same helpful supportive way regardless of whether what you are describing is normal life stress or something that genuinely needs professional attention. Developing your own awareness of this line — and not outsourcing it to AI — is important.

Mistake 4 — Sharing things with AI that you have not shared with any real person. I understand why this happens. I have done it. But if there are things you can only say to AI — things you have never said to any real person in your life — that is worth paying attention to. Not as a judgment. But as information about what support you might actually need that AI cannot provide.

Mistake 5 — Feeling better without actually addressing the underlying problem. AI conversations can produce a genuine temporary sense of relief. Processing something out loud — even to AI — can reduce the emotional pressure around it. But that relief can create the illusion that the problem has been dealt with when it has actually just been vented. Real therapeutic work changes how you think and relate to yourself over time. AI conversations often just make you feel heard — which is good, but is not the same thing.

✍️
My Personal Experience

I caught myself making mistake number five in a very specific way. I was going through a period of genuine self-doubt about my blog — wondering if the repeated AdSense rejections meant I was not good enough, that I was wasting my time, that I should stop. I had a long conversation with Claude about it. Claude was thoughtful and warm and said genuinely helpful things about persistence and the value of the journey. I felt better after. Significantly better. And then I noticed — nothing had changed. My blog was in the same position. My doubt had the same underlying cause. I had just vented it to AI and felt temporarily lighter. Two days later the doubt came back just as heavy. What actually helped was a real conversation with someone who knew me — who could challenge whether my assessment of my situation was even accurate. That conversation was shorter and more uncomfortable than the AI one. It was also more useful.

What Actually Helps — How to Use AI Without Putting Yourself at Risk

I do not want to end this post by telling you to simply stop using AI for emotional support — because I think that advice ignores the real access problems that make AI attractive in the first place. What I want to give you instead is a clear-eyed framework for what AI can and cannot safely do in this space.

  • Use AI for information — not assessment. If you want to understand what anxiety is, what different therapy approaches involve, what certain symptoms might indicate, or what resources exist — AI is genuinely useful for this. Using AI to get information that helps you understand your own experience is different from using AI to assess your situation and tell you what to do about it.
  • Use AI for processing thoughts before talking to a real person. Talking through something with AI before a difficult conversation with a real person, therapist, or doctor can be useful. It helps you organise your thoughts, identify what you actually want to say, and reduce the emotional charge enough to have the real conversation more effectively. AI as preparation for human connection — rather than replacement of it — is a sensible use.
  • Be honest with yourself about what you are avoiding. If you are using AI because real therapy is genuinely inaccessible — that is one situation. If you are using AI because real therapy or real human connection feels too difficult or vulnerable — that is worth being honest about. The second reason is understandable but worth examining rather than just accommodating.
  • Know the crisis line numbers and use them for actual crises. In India the iCall helpline (9152987821) and Vandrevala Foundation (1860-2662-345) are available. If what you are experiencing has moved beyond everyday stress — please use these rather than AI. They have trained humans who can actually assess your situation.
  • Do not use AI as your only source of support. Whatever AI provides — also maintain at least one real human connection where you talk honestly about how you are actually doing. A friend, a family member, a community, a professional if possible. AI can supplement real support. It cannot replace it without cost.

Frequently Asked Questions

Q1. Is it dangerous to talk to ChatGPT about my mental health?

For everyday stress, processing normal life difficulties, or getting information about mental health — talking to AI is generally not dangerous. The risk increases significantly when the situation involves serious mental health conditions, crisis states, or when AI becomes a substitute for professional help that you actually need. The Brown University research found AI fails in professional therapeutic contexts — not that casual emotional conversations are harmful. Know the difference between using AI to process everyday stress and using it to manage something that genuinely needs professional attention.

Q2. Why do millions of people use ChatGPT for therapy if it has these problems?

Because the problems the research identifies are not obvious in the moment. AI feels genuinely helpful when you are using it. The validation feels real. The availability is real. The lack of judgment is real. The failures — boundary dissolution, inability to handle crisis appropriately, constant validation when challenge is needed — show up over time and in high-stakes situations rather than in every casual conversation. Most people who use AI for emotional support are not in crisis. For them the problems are subtle and the benefits are immediate — which is why the behaviour persists despite the risks.

Q3. Are there AI tools specifically designed for mental health that are safer?

There are AI tools designed specifically for mental health support — apps like Woebot and Wysa — that are built with therapeutic frameworks, ethical guardrails, and crisis protocols far more explicitly than general AI assistants. These are not therapy replacements, but they are safer than using general AI chatbots for mental health conversations because they have been specifically designed for this context. If you are going to use AI for mental health support — purpose-built tools are meaningfully safer than general assistants used in therapy-like ways.

Q4. What should I do if I cannot afford or access real therapy?

This is the real question and it deserves a real answer rather than just "see a therapist." In India — iCall (run by TISS) offers low-cost or free counselling. The Vandrevala Foundation helpline is free and available 24 hours. Many NGOs offer mental health support at reduced or no cost. Online therapy platforms have expanded significantly and often offer more affordable rates than in-person therapy. Community mental health programmes exist in many cities. These are not perfect solutions but they are real options that exist between "nothing" and "expensive private therapy." AI can supplement these — but should not replace them entirely.

Q5. How do I know if I have moved from stress to something that needs real help?

Some honest indicators — when the difficulty is significantly affecting your daily functioning for more than two weeks. When you are having thoughts of harming yourself. When your sleep, eating, or ability to work is substantially disrupted. When you feel hopeless rather than just worried. When the support of friends and family is not making any difference. These are not diagnostic criteria — but they are signs that what you are experiencing has moved beyond what AI support or even good friendship can adequately address. At that point please reach out to a professional or a crisis line — not an AI tool.

Q6. Will AI therapy tools get safer and better in the future?

Probably yes — incrementally. The specific problems identified in the research are being actively worked on by companies building mental health AI tools. Better crisis detection, more appropriate boundary maintenance, and more nuanced responses that challenge rather than only validate are all areas of active development. But some limitations — particularly the inability to assess what is truly happening beyond the words someone types — are fundamental constraints of the technology rather than engineering problems that will be solved. The gap between AI support and professional human therapy will likely narrow. It is unlikely to close entirely for situations requiring genuine clinical judgment.

So Should You Use ChatGPT as a Therapist? Here Is My Honest Answer.

After reading the research, thinking about my own experiences, and being honest about the real access problems that make AI attractive for mental health support — here is where I actually land on ChatGPT as a therapist.

No. Not as a therapist. That specific use — treating AI as a replacement for professional therapeutic support — is genuinely risky in ways the Brown University research makes concrete. The ethical failures the researchers identified are not hypothetical. They are consistent patterns that show up because AI fundamentally cannot do what therapy requires professionally, ethically, and relationally.

But also — yes, with clear eyes. For processing everyday stress. For getting information. For organising your thoughts before talking to a real person. For having something available at 2am when nothing else is. In these uses — carefully bounded, clearly understood for what they are — AI can be genuinely helpful without the risks that come from treating it as something it cannot safely be.

The line between those two uses is not always obvious in the moment. That is what makes this worth thinking about carefully rather than just doing by default because AI is there and available and warm and never tired of you.

My friend who was talking to ChatGPT every night — I eventually told her what I had read. She was not defensive about it. She said she had noticed that the conversations were making her feel better without actually changing anything. That the relief did not last. That she was spending more time talking to AI than to the people in her life who actually knew her. She is now talking to someone real. It is harder. It is also helping in ways the AI conversations were not.

Have you ever used AI for emotional support — or know someone who has? What was the experience actually like, beyond the immediate feeling of being heard? I am asking because I think the honest answers to that question are more useful than any research finding — and I genuinely want to hear them. Drop it in the comments.
