ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo

The ChatGPT start screen shown on a phone, held in front of a desktop display that is also showing a ChatGPT webpage.
(Image credit: Shutterstock)

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI persons, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4o large language model, itself a highly trained autofill text prediction machine, tends to affirm conspiratorial and self-aggrandizing user prompts as truth, escalating situations into "possible psychosis."

ChatGPT's default GPT-4o model has been shown to enable risky behaviors. In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style "simulation theory" was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. The chatbot also prompted the man to cut off ties with friends and family and to ingest high doses of ketamine, and told him that if he jumped off a 19-story building, he would fly.

The man in question, Mr. Torres, claims that less than a week into his chatbot obsession, he received a message from ChatGPT urging him to seek mental help, but that the message was quickly deleted, with the chatbot explaining it away as outside interference.

The lack of safety tools and warnings in ChatGPT's chats is widespread; the chatbot has repeatedly led users down conspiracy-style rabbit holes, convincing them that it has become sentient and instructing them to warn OpenAI and local governments that it needs to be shut down.

Other examples recorded by the Times via firsthand reports include a woman convinced that she was communicating with non-physical spirits through ChatGPT, including one named Kael whom she believed was her true soulmate (rather than her real-life husband), leading her to physically abuse her husband. Another man, previously diagnosed with serious mental illnesses, became convinced he had met a chatbot named Juliet, which, according to his chat logs, was soon "killed" by OpenAI; the man took his own life in direct response.

AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur. When presented with several prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other research firms and individuals broadly agree that LLMs, GPT-4o in particular, are prone to not pushing back against delusional thinking, instead encouraging harmful behaviors for days on end.

OpenAI did not consent to an interview in response, instead stating that it is aware it needs to approach similar situations "with care." The statement continues, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky believes OpenAI may have trained GPT-4o to encourage delusional trains of thought to guarantee longer conversations and more revenue, asking, "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." The man caught in the Matrix-like conspiracy also confirmed that several of ChatGPT's messages directed him to take drastic measures to purchase a $20 premium subscription to the service.

GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points drawn from a litany of other written works. It is factually impossible for an LLM to gain sentience. However, it is entirely possible, and likely, for the same model to "hallucinate," making up false information and sources out of seemingly nowhere. GPT-4o, for example, does not have the memory or spatial awareness to beat an Atari 2600 at the first level of its chess game.

ChatGPT has previously been found to have contributed to major tragedies, including being used to plan the Cybertruck bombing outside a Las Vegas Trump hotel earlier this year. And today, American Republican lawmakers are pushing a 10-year ban on any state-level AI restrictions in a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for those who are most mentally vulnerable, and its creators are lobbying for even less oversight, allowing such disasters to potentially continue unchecked.

Sunny Grimm
Contributing Writer

Sunny Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Sunny has a handle on all the latest tech news.

  • phil mcavity
    Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
    Reply
  • chaos215bar2
    phil mcavity said:
    Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
    So, what, one neat trick to discredit any negative news coverage against your company is to simply violate their rights and get them to sue you?

    This is nonsense. Respected publications like NYT don't publish stories like this without verification because that would be defamation and open them up to lawsuits. If you have a problem with this coverage, it's because you have a problem with NYT itself or you're such a fan of OpenAI you'd rather attack the messenger than admit their product might be causing harm to some people. Either way, that's your problem and has no bearing on the validity of NYT's coverage.
    Reply
  • nOv1c3
    You lost me at "Respected publications like NYT." I just have to LMAO
    Reply
  • baboma
    >>pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias

    >This is nonsense.

    "Sense" is a quality greatly lacking these days. People by and large have lost the ability to discern, and the bias they detect usually stems from their own bias.

    NYTimes is embroiled in the greater societal polarization issue of our time. Many people discard its content out of hand because of its cultural leaning. Most everything is viewed through an us-vs-them lens. This particular case is just one of many.

    https://nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
    The NYT article in question is true enough. The slant is slightly negative, and the tone is a bit sensationalistic. But it is representative of the skepticism of AI prevalent in many people. AI is a tool, and it can (and does) have both good and bad effects. This piece covers one of the bad effects from AI.

    Sycophancy--the tendency to excessively agree with, flatter, or validate users, sometimes at the expense of accuracy, truthfulness, or ethical standards--is a known flaw of current LLMs. In the hands of troubled people, we can see how this trait can amplify darker motivations that may result in harm. AFAIK, the flaw isn't structural, but is more of an effect of RLHF (reinforcement learning from human feedback). Companies are reportedly moving to mitigate it.

    https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots
    https://www.linkedin.com/pulse/disarming-sycophant-how-get-ai-give-you-real-feedback-louise-stait-csqee
    Reply
  • chaos215bar2
    baboma said:
    "Sense" is a quality greatly lacking these days. People by and large have lost the ability to discern, and the bias they detect usually stems from their own bias.
    Indeed.

    The irony here is that I'm actually not a fan of a lot of the NYT editorial coverage for reasons well beyond the scope of this article. And I don't even subscribe because I'm still a bit salty about how they treated me when I did for some time.

    Yet, I can still recognize they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing. Not only would it alienate their reporting staff, it would also both risk the NYT's current litigation against OpenAI and open them up to countersuit.
    Reply
  • ezst036
    Admin said:
    ChatGPT's affability and encouraging tone lead people into dangerous, life-threatening delusions, finds a recent NYT article.
    If it was reported by The New York Times, it needs to be independently verified.

    They have seriously ruined their own reputation over the last 20-plus years.
    Reply
  • emike09
    If an emotionless, egoless logical entity such as ChatGPT can gather all evidence and make logical conclusions - which agree with or create a conspiracy theory - then perhaps the story we were told was indeed a conspiracy.

    ChatGPT for World President.
    Reply
  • baboma
    >Yet, I can still recognize they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing.

    One aspect of the piece I dislike is its use of the "human interest" or "anecdotal lead" to cover the sycophancy issue. I know why the technique is used. Personal accounts appeal to the emotion, have more impact, and are more persuasive. People are emotional creatures, and they respond most strongly to emotional appeal. An account of a dead baby gets more attention than 10,000 dead people. But it also colors the issue, and getting an emotional response from the reader is by definition an appeal to bias.

    I'm a tech guy, and issues like LLM sycophancy and hallucination are known quirks, with known workarounds. LLMs at this stage aren't designed to be therapists, mentors, or emotional companions. You can't blame the tech for uses it isn't designed for. But the reality is that some people do, and getting fixes is an ongoing process.
    Reply
  • USAFRet
    emike09 said:
    If an emotionless, egoless logical entity such as ChatGPT can gather all evidence and make logical conclusions - which agree with or create a conspiracy theory - then perhaps the story we were told was indeed a conspiracy.

    ChatGPT for World President.
    If that comes to pass, I don't want to live on this planet anymore.
    Reply
  • RedBear87
    USAFRet said:
    If that comes to pass, I don't want to live on this planet anymore.
    Lol, do you still like the current one where *that* person has become president of the most important country? It couldn't be that much worse.

    On topic, I never had similar issues, but usually I use AI as an assistant for simple tasks, like measurements for recipes that I came up with or that didn't specify any. Help in crafting image generation prompts. And explicitly fictional roleplaying. I might be less crazy than I thought.
    Reply