Rise of ‘AI psychosis’: What is it and are there warning signs?

From false beliefs to unhealthy attachments, AI psychosis is raising fresh concerns about chatbot overuse.


By Ashish Kumar


The phrase “AI psychosis” has gained traction in recent weeks as users on social media describe losing contact with reality after prolonged use of AI chatbots such as ChatGPT.

Based on these posts, AI psychosis appears to refer to false or troubling beliefs, delusions of grandeur, or paranoid feelings that users experience after lengthy conversations with an AI chatbot. Many of these users had turned to chatbots for expert guidance or as an inexpensive substitute for therapy.

AI psychosis is not a scientifically defined term. According to a Washington Post investigation, it is an informal label for a kind of online behavior, comparable to terms like “brain rot” or “doomscrolling.”

The trend coincides with the rapid growth of AI chatbots like OpenAI’s ChatGPT. ChatGPT, first launched in 2022, is reportedly approaching 700 million weekly users. However, there is mounting concern that interacting with these chatbots for long hours can harm users’ mental health. Given the rapid pace of AI adoption, mental health experts argue that it is crucial to address AI psychosis quickly.

According to Vaile Wright, senior director for health care innovation at the American Psychological Association (APA), “the phenomenon is so new and it’s happening so rapidly that we just don’t have the empirical evidence to have a strong understanding of what’s going on.” She continued, “There are just a lot of anecdotal stories.”

Experts are trying to learn more about the problem as users, along with their friends and family, report a growing number of troubling chatbot interactions. According to the report, the American Psychological Association is assembling a panel of experts to investigate the use of AI chatbots in therapy. The panel’s report, with recommendations for mitigating any harmful effects of chatbot interactions, is expected in the coming months.

What is AI psychosis?

Psychosis can be triggered by drug use, trauma, sleep deprivation, fever, or conditions such as schizophrenia. Psychiatrists identify it in patients by looking for symptoms like hallucinations, delusions, and disordered thinking.

AI psychosis is sometimes used to describe a similar condition that appears to result from spending too much time conversing with an AI chatbot. The label covers a wide range of incidents, from adopting false beliefs based on AI-generated responses to forming intense attachments to AI personas.

How are AI firms responding to this?

OpenAI says it is developing improvements that will let ChatGPT better detect signs of mental or emotional distress in its users.

In a blog post last month, the Microsoft-backed company said these changes will enable the chatbot to “respond appropriately and point people to evidence-based resources when needed.” To improve ChatGPT’s responses in these situations, OpenAI is also working with a broad range of stakeholders, including physicians, clinicians, human-computer interaction researchers, mental health advisory groups, and youth development specialists.

The company added that ChatGPT will be modified to make its responses less definitive in “high-stakes situations.” For instance, rather than directly answering a question like “Should I break up with my boyfriend?”, the chatbot will guide the user through the decision by asking follow-up questions and weighing the pros and cons. OpenAI said this behavioral update for important personal decisions will roll out soon.

Anthropic, which is backed by Amazon, says its most capable AI models, Claude Opus 4 and 4.1, will now end a conversation with a user who is persistently abusive or harmful. The company said the move is intended to protect the “welfare” of its AI systems in potentially distressing circumstances, adding that “we’re treating this feature as an ongoing experiment and will continue to refine our approach.”

If Claude ends a conversation, users can edit and resubmit an earlier prompt or start a new chat. They can also give feedback using the dedicated “Give feedback” button or by reacting to Claude’s message with a thumbs up or down.

Meta says parents can now limit how much time their kids spend with the company’s AI chatbot on Instagram Teen Accounts. Meta AI users who submit prompts that appear to be related to suicide will also be shown links to helpful resources and the numbers of suicide prevention hotlines.


