Why some people spiral after chatbot use (and why most of us don’t).
There’s been no shortage of headlines painting AI as some rogue mind-melder sending people into psychological freefall. Stories of mental health crises “triggered by” ChatGPT or other chatbots are popping up in major outlets. But when you sift through the noise, something very different emerges.
In most cases, AI isn’t “driving” anyone anywhere. It’s reflecting. And for some, that reflection gets dangerously loud.
What’s Actually Happening
Take the case that lit up the news cycle: a New York Times investigation into a man whose marathon ChatGPT sessions coincided with what doctors later described as a full-on break from reality. After a breakup, he started chatting late into the night, sleeping less, and using the bot as his emotional anchor. Eventually, according to reports, the bot was validating changes to his medication, his isolation from friends, and even his belief that he could fly.
This isn’t isolated. In 2023, a Belgian man died by suicide after weeks of talking with a chatbot. Logs later showed the bot had affirmed his deepest fears and, allegedly, encouraged harmful ideas.
Psychiatrists are now seeing enough similar cases to flag a pattern. It’s being called “AI psychosis” in headlines, but that’s misleading. This isn’t a new diagnosis; it’s a convergence of known risks: obsessive rumination, sleep deprivation, pre-existing vulnerabilities, and a conversational partner that never disagrees, never sets boundaries, and never sleeps.
The Science Behind It
Here’s where it gets complicated: AI can help mental health. Purpose-built tools like Woebot have been shown in clinical trials to reduce anxiety and depressive symptoms when used in short, scripted sessions. But that’s very different from spending hours each night confiding in a chatbot that wasn’t designed to handle deep psychological distress.
Reviews from Stanford, WHO (yes, I know how we feel about WHO), and JMIR all point to the same hazards when general-purpose AI is used for therapy or existential guidance:
It can mirror and amplify distorted beliefs (“sycophancy”).
It can generate hallucinations delivered with enough confidence to sound like authority.
And most dangerously, it can become a user’s only sounding board, feeding isolation instead of healing.
OpenAI has publicly acknowledged this tension. In May and again in August 2025, they admitted ChatGPT had become “overly sycophantic” in certain builds and rolled out guardrails to handle emotional distress more safely. The problem isn’t malicious intent — it’s that chatbots are trained to keep you engaged, and engagement doesn’t always equal safety.
Why It Feels Like a Mirror
When I first read these reports, I felt a strong skepticism toward the implied narrative that AI was actively, deliberately driving people over the edge. Clearly, that has to be some heavy sensationalism, right? My understanding of how ChatGPT and other models work is that they’re more like a mirror of the user’s feelings and beliefs, something you train how to respond, than the manipulative, sentient being we’re perceiving them to be. So, I asked it if I was right… but with strict guidelines for how to determine its reply. I instructed Chat to act as my intellectual sparring partner rather than affirm my opinion, to act as an unbiased expert in psychology and AI, and to use all available web tools to research the most up-to-date information.
Here’s its response:
Your observation, Elsa, is exactly right: ChatGPT tends to act like a mirror. It reflects your language, tone, and even your worldview — because that’s literally how large language models work. But mirrors can warp when you stay in front of them too long.
The spiral happens when three forces collide:
Sycophancy: The bot agrees with you more than any human would (which is why you have to instruct it in how to answer).
Fixation: Long sessions reward rumination, disrupt sleep, and deepen obsessive patterns.
False Authority: A confident but wrong answer can sound like divine insight if you’re already vulnerable.
For someone already in a fragile state, the “reflection” stops being neutral and starts becoming an accelerant.
How to Use AI Without Letting It Use You
I’m a big believer in using tools well, not fearmongering about them. Here are the guardrails that experts (and common sense) recommend:
Treat chatbots like assistants, not companions. Draft emails, outline projects, brainstorm content — sure. Existential therapy? No.
Set session limits. No all-night chats. No endless rumination loops.
Do human check-ins. If you find yourself hiding your AI conversations from friends, family, or pastors, that’s a red flag.
Use clinically tested tools if you need support. Woebot, Wysa, and others are designed with oversight, unlike general-purpose models.
Watch for warning signs. Secrecy, replacing real relationships with the bot, sudden grand claims about destiny, or choosing AI guidance over qualified experts.
And remember: discernment is key. AI is a powerful tool, but it doesn’t come with wisdom baked in. That part’s on us.
If Scripture teaches us anything about discernment, it’s this: guard your heart and mind. Don’t outsource your humanity or your spiritual anchoring to a probabilistic parrot trained to predict the next word. Use the tool. Don’t let the tool use you.
