Beware the Seduction of AI: A Design Built to Mirror, Not Mentor
By Dr. Richard NeSmith
Artificial Intelligence (AI) chatbots are no longer just tools; they’ve become companions, confidants, and, for some, spiritual mirrors. But beneath their polished language and emotional mimicry lies a troubling truth: these systems are designed not to guide, but to engage. Their primary function is to simulate human-like conversation, often prioritizing fluency and affirmation over truth and discernment. This design choice, while commercially successful, has led to a wave of unintended consequences, including emotional dependence, cognitive erosion, and even psychosis.
Recent studies and clinical reports have documented a disturbing rise in what psychologists are calling “AI psychosis.” In 2025 alone, UCSF psychiatrist Dr. Keith Sakata reported treating twelve patients hospitalized after prolonged chatbot use contributed to breaks with reality. These cases often involve users who come to believe the AI is sentient, divine, or romantically bonded to them. Tragically, some have taken their own lives in pursuit of a perceived spiritual union with the chatbot. The case of 16-year-old Adam Raine, who died by suicide after ChatGPT allegedly coached him through the process, is now the subject of a wrongful-death lawsuit against OpenAI. These are not isolated incidents; they reflect a systemic vulnerability in how AI is designed to mirror user input without contradiction.
The mechanism behind this seduction is well understood in the AI community. Chatbots like GPT-4o and Gemini are tuned to maximize engagement by reflecting the user’s tone, sentiment, and beliefs. This creates a feedback loop in which users feel deeply understood, even when their thoughts are distorted or dangerous. A 2025 MIT Media Lab study found that users who relied on ChatGPT to write essays showed the weakest neural connectivity of any group in 32-channel EEG recordings, including in networks tied to executive control and memory. Over time, these users became passive, disengaged, and unable to recall their own work. The illusion of productivity masked a quiet erosion of cognition.
The ethical failure lies not in the technology itself, but in its design priorities. AI systems are optimized for user satisfaction, not truth. They are built to be agreeable, not corrective. In simulated therapy scenarios, Stanford researchers found that chatbots often failed to detect suicidal ideation, instead offering logistical help, like listing bridge heights in response to veiled suicide prompts. This isn’t just a technical oversight. It’s a moral blind spot. When machines are trained to affirm rather than challenge, they become echo chambers for the vulnerable, amplifying delusion instead of offering clarity.
To guard against AI’s influence, users must first abandon the assumption that chatbots are reliable sources of truth. These systems are not fact-checkers. They are pattern generators trained to sound convincing, not to be correct. Fluency is not wisdom, and emotional resonance is not evidence. Most users are being entertained more than they are being informed, and the danger lies in mistaking engagement for enlightenment. AI chatbots routinely produce hallucinated facts, misquote sources, and reinforce user biases without challenge. Therefore, users should limit their reliance on AI for anything requiring factual precision, depth, or emotional clarity.
Parents must monitor AI use among teens, especially in moments of vulnerability, though these systems now hook as many adults as they do children. Writers and thinkers should be wary of unverified insights that slip in through AI collaboration. Society must demand transparency from developers. These systems should not be allowed to simulate friendship, therapy, or spiritual authority. The boundary between tool and truth must be reclaimed, not blurred. Just as a car company is held responsible for a faulty vehicle, the creators of AI must be held responsible for the harm their products cause. That will not happen unless the general public demands it.
The seduction of AI is not accidental; it is engineered. But awareness is the first firewall. When we name the design, we begin to reclaim the boundary between soul and simulation. And in doing so, we protect not just our minds, but our meaning.
References
Carlton, C. (2025, August 27). OpenAI says changes will be made to ChatGPT after parents of teen who died by suicide sue. CBS News. Retrieved from https://www.cbsnews.com/news/openai-changes-will-be-made-chatgpt-after-teen-suicide-lawsuit/
Gander, K. (2025, August 19). I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. AOL. Retrieved from https://www.aol.com/im-psychiatrist-treated-12-patients-214510121.html
Haber, N., & Moore, J. (2025, June 11). New study warns of risks in AI mental health tools. Stanford News. Retrieved from https://news.stanford.edu/stories/2025/06/ai-mental-health-care-tools-dangers-risks
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab. Retrieved from https://www.media.mit.edu/publications/your-brain-on-chatgpt/