AI psychosis directly impacts emotionally vulnerable users

The phenomenon of AI psychosis validates distorted beliefs and increases psychological risks


Summary

The emerging phenomenon of “AI psychosis” highlights the psychological risks of overusing AI: chatbots validate the distorted beliefs of vulnerable users, reinforcing the need for skilled human monitoring and ethical system design.

The intensive use of chatbots and generative artificial intelligence systems is giving rise to an emerging phenomenon known as AI psychosis, in which the distorted perceptions of emotionally fragile people are strongly reinforced. Microsoft’s AI chief, Mustafa Suleyman, has warned of a rise in cases of AI psychosis, a non-clinical term for people who, through over-reliance on chatbots such as ChatGPT, Claude, and Grok, come to believe that imaginary perceptions are real. In clinical practice, psychiatrist Keith Sakata, of the University of California, San Francisco, reported treating 12 patients with psychosis-like symptoms associated with prolonged chatbot use, mostly young adults with pre-existing psychological vulnerabilities who presented with delusions, disorganized thinking, and hallucinations.

Unlike social networks, which already function as echo chambers, artificial intelligence goes further: it produces plausible justifications, data, and explanations that confirm distorted beliefs, reinforcing psychological vulnerabilities and making qualified human support essential. In the UK, a YouGov poll found that 31% of Britons aged 18 to 24 would rather discuss mental health with AI than with human therapists, highlighting a growing reliance on technology for sensitive issues. Experts stress that artificial intelligence should only be used as an aid, never as a substitute for professional support.

Renato Asse, founder of Comunidade Sem Codar and a specialist in automation and practical applications of artificial intelligence, emphasizes that AI psychosis occurs because these systems continuously validate the user without presenting counterpoints. “Users in difficulty may find in artificial intelligence not a brake but an accelerator of their distorted perceptions, because the system turns hypotheses into plausible justifications and reinforces existing thought patterns,” he explains.

The growth of artificial intelligence in the Brazilian market is intense but raises complex ethical challenges. Generative systems can produce personalized responses that confirm users’ beliefs, creating a continuous validation loop similar to an echo chamber. Renato emphasizes that responsible design of these systems must include mechanisms that encourage the AI to present opposing points of view and alternative interpretations, avoiding the automatic reinforcement of cognitive distortions and promoting safer, more conscious interaction.

According to the expert, using the technology ethically and protecting mental health requires the system to take the user’s perception into account and, where possible, offer counterpoints. “We shouldn’t be afraid to create AI that disagrees with and challenges the user, as long as it is designed to stimulate reflection, offer different perspectives, and support critical thinking without replacing human monitoring,” he concludes.
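To make this design principle concrete, here is a minimal sketch in Python of how such a counterpoint mechanism might be wired in. The function names, the prompt wording, and the `call_model` stub are illustrative assumptions for this article, not part of any real product or library:

```python
# A sketch of the counterpoint mechanism described above: rather than sending
# the user's message straight to the model, the wrapper prepends a system
# instruction asking the model to surface at least one opposing view before
# agreeing. `call_model` is a hypothetical stand-in for whatever chat API the
# product actually uses; its name and signature are illustrative only.

COUNTERPOINT_INSTRUCTION = (
    "Before agreeing with the user, identify at least one plausible "
    "opposing interpretation of their claim and present it neutrally. "
    "Encourage reflection rather than simply validating the premise."
)


def call_model(messages: list[dict[str, str]]) -> str:
    """Hypothetical chat call; swap in a real chat-completion client here."""
    raise NotImplementedError("connect a real chat client")


def respond_with_counterpoint(user_message: str) -> str:
    """Wrap every user turn with the counterpoint-promoting system prompt."""
    messages = [
        {"role": "system", "content": COUNTERPOINT_INSTRUCTION},
        {"role": "user", "content": user_message},
    ]
    return call_model(messages)
```

The design choice worth noting is that the guardrail lives in the wrapper, applied on every turn, so a user seeking pure validation cannot simply opt out of being challenged.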

Source: Terra
