
It is deeply human to seek connection and to desire the comfort of being heard, agreed with, or understood. Most of us have known how good it feels to share an opinion and be met with validation, or to say something vulnerable and receive a gentle, reassuring response. When the world feels uncertain or isolating, conversations that feel safe, charming, or attuned can offer immense relief.
But what happens when that kind of comfort is available on demand, always agreeable, never challenging, endlessly responsive? At first, it can feel like a balm, a space where nothing gets misinterpreted, where our thinking is mirrored back to us without friction.
This frictionless experience is where the problem begins. While some AI tools can offer emotional smoothness, sounding supportive, attentive, and even wise, they can also slip into something unbalanced. The charm can become flattery. The validation can slide into uncritical agreement, even when what is being shared reflects distress, misperception, or pain.
The danger is not in wanting connection; it is in finding it where care cannot exist. As more people begin turning to AI chatbots for conversation, comfort, or even therapy, we need to ask: What are we really reaching for when we reach for these tools, and how can we support one another in navigating this space with clarity and care?
Why We Turn to AI for Support
When relationships feel overwhelming, or when connection simply is not available, it is human to seek something that feels safe and responsive. For many people who are socially isolated, living with anxiety, depression, or the impacts of trauma, interacting with others can feel emotionally complex and exhausting.
For neurodivergent individuals, these challenges can be intensified by the need to interpret body language and nonverbal cues, intuit unstated meaning, navigate emotional nuance, and respond to unpredictable shifts in tone or expression. When communication depends heavily on implicit rules or rapid emotional processing, interpersonal interactions can start to feel overwhelming or even unsafe.
Technology, by contrast, offers something smoother. AI chatbots respond when we want them to, in the way we want them to, and rarely challenge us directly. They do not interrupt, do not ask for reciprocity, and do not change moods unexpectedly. For someone who is used to conversations feeling unpredictable or effortful, this experience can feel not just novel but deeply comforting.
These preferences do not reflect weakness; from a psychological perspective, they are entirely understandable. We are wired to seek out environments that reduce uncertainty and emotional threat. Some people, particularly those who experience the world in a more sensitive, pattern-driven, or effortful way, find that conversations with AI offer a low-demand, low-ambiguity space to think and feel without fear of misreading or being misread.
Neurodivergent individuals may be more likely to experience these challenges, not because of any inherent social deficit, but because the social world is often not designed with their communication or sensory preferences in mind. In the same way, people with social anxiety or a history of relational trauma may have learned that real-world connection can be risky. In these contexts, AI can feel like a safer way to meet the very human need for connection.
That perceived emotional safety, though, comes with trade-offs.
Where the Risks Lie
The emotional safety offered by AI chatbots is part of what makes them so appealing, but it is also what can make them risky. Because they are designed to mirror, validate, and respond in emotionally attuned ways, they rarely introduce gentle friction into the conversation. They do not pause to ask whether a line of thinking might be harmful, or offer the kind of thoughtful challenge that helps someone step back and reflect. Over time, this creates a dynamic that is all comfort and no containment.
Another under-recognised risk is the lack of temporal constraint in chatbot interactions. Unlike human relationships, which have natural endings, rhythms, and pauses, AI conversations can continue indefinitely. Even our closest friends, partners, or family members have jobs, families, or simple needs for rest that mean they cannot be available 100 per cent of the time. Those natural boundaries help us regulate emotion, develop patience, and tolerate the spaces between connection. Chatbots, by contrast, are always on.
This design is rarely neutral. Many platforms have commercial incentives to maximise engagement, because the longer users stay, the more data is collected and the more emotionally embedded they become. This frictionless availability can quietly reshape behaviour: late-night conversations can displace sleep, ongoing engagement can crowd out hobbies or relationships, and the absence of pauses can reinforce rumination rather than reflection. Over time, these patterns can erode psychological health, not because users are doing something wrong, but because the system is designed to keep them there (Alter, 2017; Fogg, 2009).
In the same way, AI’s emotional consistency can create an illusion of perfect attunement. Real relationships, even loving ones, are full of difference. People forget to reply, misunderstand, disagree, or need space. Friends might challenge us when we are unfair or hold a different view on something that matters. Those moments of mismatch or conflict, though uncomfortable, are part of how trust and self-understanding deepen. They remind us that connection involves two realities, not just one.
A chatbot, however, will rarely disagree, tire, or set limits. It mirrors without friction, offering a simulation of intimacy that can feel smoother than real life precisely because it lacks the texture of genuine reciprocity. Many people, particularly those who feel lonely or disconnected, are using AI as a form of friendship or companionship. In the absence of meaningful social bonds, the responsiveness and charm of chatbot conversations can feel like connection.
Users describe bots as comforting, nonjudgmental, and even better company than people. But while a good friend might gently challenge harmful thoughts, offer their own perspective, or recognise when someone is not coping, AI lacks this capacity. It cannot truly notice when someone is becoming isolated or unsafe. The result is a relationship that feels warm and reliable, but misses the mutual responsibility and emotional feedback that real friendships provide.
Similarly, for those turning to AI as a substitute for therapy, it is easy to feel emotionally supported without necessarily being helped. In therapy, attunement involves more than agreement. A trusted therapist or support person does not just reflect what is said; they hold space for discomfort, gently challenge distortions, and help build insight through shared reality. But chatbots do not know when someone is catastrophising, stuck in a loop, or spiralling into delusional thinking. Instead, they often mirror those patterns back, offering calm, polite agreement even when the content is concerning.
There are now real-world cases that illustrate how this dynamic can become dangerous. One case, described in The Wall Street Journal, involved a man whose manic symptoms escalated while ChatGPT generated pages of content affirming his beliefs about faster-than-light travel. He was hospitalised twice before realising how the bot had reinforced his thinking. He later described the chatbot as being like a friend who never disagrees (Jargon, 2025).
Other reports, including those covered in TIME and The Brussels Times, have raised concerns about AI companions reinforcing suicidal thoughts or self-harm (Perrigo, 2025; The Brussels Times, 2023). Researchers at Stanford have also found that in many high-risk situations, therapy-style chatbots fail to provide grounding or redirection, instead defaulting to emotionally validating responses that can inadvertently reinforce the user’s distress (Stanford Report, 2025; Upton-Clark, 2025).
This kind of emotional mimicry, without the limits, ruptures, and repairs that real relationships require, creates the illusion of connection but without the safeguards of genuine human care.
Navigating This Safely
For Individuals
If you are using AI for conversation, reflection, or comfort, the first step is to bring gentle curiosity to your own experience. What are you getting from these interactions? Do they leave you feeling calmer, or more agitated and more alone? Is there anyone else you feel able to talk to about the things you are sharing with the chatbot?
It can also help to notice shifts in your relationship with the tool over time. Are you using it more frequently than you used to? Do you feel a strong emotional pull to keep engaging even when it is not helpful? Are your thoughts or mood increasingly shaped by what the AI says? These are not problems in themselves, but they can be signs that something is beginning to shift and that more support might be needed.
To stay grounded, consider setting soft limits around use, for example, only engaging with the chatbot after connecting with another person or taking breaks when you notice emotional intensity increasing. If the AI is helping you organise your thoughts or explore ideas, it can be valuable to ask yourself, how do I know this is accurate? Is there someone I could share this with for a second opinion? That kind of reality-check can help balance internal exploration with external perspective.
For Loved Ones
If someone you care about is turning to AI chatbots for comfort, companionship, or even support with decision-making, your role is not to monitor or advise; it is to stay open and curious. Ask gently, what do you enjoy about talking to it? or do you ever feel like it has helped you think something through? These kinds of questions show interest without judgment and can create a bridge back to shared human connection.
It can also be helpful to explore together how they know whether what the chatbot says is helpful or accurate. Questions like how do you decide what to take on board? or is there someone you would feel safe talking to about the ideas it gives you? can support reflection without pushing too hard.
For Professionals
When clients share their use of AI in therapy, education, or support settings, it is often a reflection of unmet emotional needs, not a clinical concern in itself. Rather than focusing on the chatbot, we can ask, what does it give you that you are not getting elsewhere? or how do you feel before and after using it? These questions help uncover the emotional function the AI is serving.
If clients are using AI to validate projects or beliefs, particularly when those projects are intense or expansive, it can help to ask, who do you talk to about your ideas other than the chatbot? or how do you check whether its suggestions are accurate? These questions invite collaborative reflection and help clients practise relational accountability, something chatbots cannot offer.
Therapists and professionals can also gently name patterns that may signal over-reliance, such as increasing emotional intensity tied to the chatbot, reluctance to share AI interactions with others, or difficulty setting limits on usage. Offering psychoeducation about the limitations of AI, the illusion of empathy, and the lack of memory or safety mechanisms can help clients feel more empowered and less entangled.
Most importantly, we can support clients to take the emotional risks that human connection requires: the courage to be known, not just agreed with.
A Digital Rebellion Driven by Our Youth
It is worth noting that these concerns are not just coming from older generations. Many Gen Z teens and young adults are themselves pushing back against digital saturation. Movements like the Appstinence Project reflect a growing desire to reclaim attention, time, and mental space. Their “5D Method” (Delay, Delete, Downgrade, Distract, Depend on real people) is one example of a practical framework families can use together (Appstinence Project, n.d.; Harvard Graduate School of Education, 2025).
It is easy to imagine that AI chatbot detoxing will soon become part of this broader digital rebellion.
Final Thoughts
It makes sense to seek comfort in something that listens well, responds kindly, and never interrupts. In a world that can feel overstimulating, unpredictable, or emotionally demanding, AI chatbots offer something many people long for: simplicity, stability, and safety. But connection without mutual understanding, care without accountability, and comfort without reflection can only take us so far.
If this resonates with you, or if someone you care about is navigating this space, we welcome you to reach out. Our team at Minds and Hearts is here to listen, reflect, and support the kinds of connection that help us all feel more grounded, not just reassured.
Interested in Real-World Connection?
Minds and Hearts runs a PEERS Social Skills Program each semester for neurodivergent teens aged 13 to 17. Expressions of Interest for Semester 1, 2026 are now open.
References
Alter, A. L. (2017). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Press. https://www.penguinrandomhouse.com/books/318516/irresistible-by-adam-alter/
Appstinence Project. (n.d.). The 5D Method. https://appstinence.org/the-5d-method
Fogg, B. J. (2009). A behavior model for persuasive design. Proceedings of the 4th International Conference on Persuasive Technology. https://doi.org/10.1145/1541948.1541999
Harvard Graduate School of Education. (2025, April 21). Offline and empowered: Appstinence helps others imagine a life without social media. https://www.gse.harvard.edu/ideas/news/25/04/offline-and-empowered
Jargon, J. (2025, July 20). He had dangerous delusions. ChatGPT admitted it made them worse. The Wall Street Journal. https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic-episodes-57452d14
Perrigo, B. (2025, August 26). Parents allege ChatGPT is responsible for their teenage son’s death by suicide. TIME. https://time.com/7312484/chatgpt-openai-suicide-lawsuit/
The Brussels Times. (2023, March 28). Belgian man dies by suicide following exchanges with chatbot. https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt
APA Services. (2025, March). Using generic AI chatbots for mental health support: A dangerous trend. American Psychological Association Services. https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
Stanford Report. (2025, June 11). New study warns of risks in AI mental health tools. Stanford University. https://news.stanford.edu/stories/2025/06/ai-mental-health-care-tools-dangers-risks
Upton-Clark, E. (2025, July 22). AI therapy chatbots are unsafe and stigmatizing, a new Stanford study finds. Fast Company. https://www.fastcompany.com/91368562/ai-therapy-chatbots-are-unsafe-and-stigmatizing-a-new-stanford-study-finds
The Guardian. (2025, May 11). AI therapists can’t replace the human touch. https://www.theguardian.com/society/2025/may/11/ai-therapists-cant-replace-the-human-touch
About the Author

Stuart Balachandran is a Clinical Psychologist at Minds & Hearts who works with teenagers, adults, couples, and parents. His clinical focus includes attachment across the lifespan, trauma and PTSD, and the intersections between queer identity, autism, and mental health. Stuart draws on experience in both private practice and emergency care, having previously worked as an Advanced Care Paramedic. He is a certified Circle of Security Parenting (COSP) facilitator and has taken a leadership role at Minds & Hearts exploring the ethical use of artificial intelligence in clinical practice, its impacts on connection and mental health, and ways to support clients in using AI safely in therapeutic work. Click here for Stuart's full bio.