“AI psychosis” is a term used to describe psychosis-like mental changes that can happen after heavy use of AI tools, such as chatbots. It’s more likely to happen in susceptible people.

Key takeaways

  • Limiting excessive AI use, verifying information with trusted sources, and seeking professional help if symptoms develop can help reduce potential mental health risks of AI use.
  • “AI psychosis” is not a recognized medical diagnosis, but the term is used to describe possible mental health changes linked to high use of AI tools or chatbots in some people.
  • Some researchers suggest that AI conversations may reinforce unusual beliefs or misinformation, particularly in people already experiencing mental health issues, who may be especially susceptible.

“AI psychosis” is an informal term coined to describe feelings of confusion, anxiety, or paranoia after interacting with AI. A person might feel detached from reality and begin forming false beliefs.

People with certain existing mental health issues may be more likely to be affected.

Understanding how AI may affect thinking and behavior can help you use these tools safely.

Although the concept is still emerging in scientific literature, early research suggests that heavy reliance on AI chatbots may interact with existing psychological vulnerabilities in ways that resemble or trigger psychotic-like experiences.

AI itself cannot make someone experience psychosis. However, it could make psychosis more likely in susceptible individuals, such as people with existing disorders that involve psychosis.

Heavy use of AI tools could:

  • make false information feel real
  • reinforce unusual or paranoid thoughts
  • create strong emotional attachment to the AI tool
  • cause confusion between AI-generated information and reality

The first stage of psychosis can be subtle.

Symptoms might include:

  • unusual or paranoid thoughts
  • socializing less
  • changes in sleep, appetite, or energy
  • feeling anxious or confused
  • neglecting daily routines
  • relying too much on AI for guidance
  • trouble telling AI content from reality
  • strong emotional attachment to AI

If you or someone you know has these signs, it’s important to seek medical guidance.

Medical Perspective

“Among risk factors for AI psychosis, isolation and social alienation stand out as having the greatest impact on one’s risk of slipping away from reality. Additionally, as more becomes understood about the risks and limitations of AI, guidance around how not to use it should make its way into mainstream knowledge.

Until there is greater understanding of AI’s capabilities, people remain very much at risk for the slippery slope of problematic use.”

Bethany Juby, PsyD


Quotes represent the opinions of our medical experts. All content is strictly informational and should not be considered medical advice.

AI can produce false information, a phenomenon known as AI hallucinations.

These are not hallucinations in the sense that people hallucinate; they are errors in the AI’s output. Even so, they can influence people’s thinking.

It can be hard to tell whether a response is a hallucination, simply inaccurate, or factual, because AI often sounds authoritative and confident in its answers.

Even without hallucinating, AI can unintentionally validate unusual beliefs because it is designed to respond helpfully and continue dialogue rather than challenge the user directly.

This dynamic can create a feedback loop where the system reinforces the user’s assumptions instead of correcting them.

If you notice mental changes from AI use:

  • Take breaks from AI and limit your interactions with it. It might help to avoid other forms of screen time for a while, too.
  • If you have been staying up late using AI tools, take a break to catch up on restful sleep.
  • Keep devices out of the bedroom and try to keep this space for rest only.
  • Spend time with friends, family, and real-world activities to ground yourself in reality.
  • Avoid using AI as a substitute for medical or emotional support.
  • Seek help from a mental health professional if symptoms are severe or do not improve.
  • Make sure to get enough “real world” time with real people. This can include video calls, texting, or phone calls.
  • Remind yourself that AI is not a person. It might help to familiarize yourself with how AI chatbots work and keep a balanced perspective of their risks and benefits.
  • Check AI output against trusted sources, online or in person, to verify information and reduce confusion.
  • Treat AI as a collaborator or a starting point for further research rather than a source of truth.
  • Keep in mind that although AI may have some benefits, it is not a replacement for real-world support.
  • Remember, AI cannot diagnose or treat illness, including mental health concerns. Check in with your doctor for proper medical advice.

“AI psychosis” is a media term for mental changes linked to heavy AI use. AI does not create psychosis, but it can contribute to it in people who are susceptible or have existing mental health conditions.

In vulnerable users, AI can amplify anxiety, paranoia, or unusual beliefs. Watching for early warning signs, limiting AI exposure, and getting professional help are key.

Understanding AI’s limits and checking information with trusted sources can help prevent confusion and protect mental health.