Delusional Experiences Emerging From AI Chatbot Interactions or "AI Psychosis".
| Title: | Delusional Experiences Emerging From AI Chatbot Interactions or "AI Psychosis". |
|---|---|
| Authors: | Hudon A; Department of Psychiatry and Addictology, Faculty of Medicine, Université de Montréal, Montreal, QC, Canada.; Unité de recherche en psychiatrie, Département de psychiatrie, Institut universitaire en santé mentale de Montréal, Montreal, QC, Canada.; Centre de recherche, Institut universitaire en santé mentale de Montréal, Montreal, QC, Canada.; Department of Psychiatry, Institut national de psychiatrie légale Philippe-Pinel, Montreal, QC, Canada.; Centre de recherche en pédagogie de la santé, Université de Montréal, Montreal, QC, Canada., Stip E; Department of Psychiatry and Addictology, Faculty of Medicine, Université de Montréal, Montreal, QC, Canada.; Unité de recherche en psychiatrie, Département de psychiatrie, Institut universitaire en santé mentale de Montréal, Montreal, QC, Canada.; Centre de recherche, Institut universitaire en santé mentale de Montréal, Montreal, QC, Canada.; Department of Psychiatry, United Arab Emirates University, Al-Ain, United Arab Emirates. |
| Source: | JMIR mental health [JMIR Ment Health] 2025 Dec 03; Vol. 12, pp. e85799. Date of Electronic Publication: 2025 Dec 03. |
| Publication Type: | Journal Article; Review |
| Language: | English |
| Journal Information: | Publisher: JMIR Publications Inc Country of Publication: Canada NLM ID: 101658926 Publication Model: Electronic Cited Medium: Internet ISSN: 2368-7959 (Electronic) Linking ISSN: 23687959 NLM ISO Abbreviation: JMIR Ment Health Subsets: MEDLINE |
| Imprint Name(s): | Original Publication: Toronto : JMIR Publications Inc., [2014]- |
| MeSH Terms: | Artificial Intelligence*, Psychotic Disorders*/psychology, Delusions*/psychology, Delusions*/etiology, Humans; Generative Artificial Intelligence |
| Abstract: | The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of "AI psychosis," which is a framework to understand how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Drawing from phenomenological psychopathology, the stress-vulnerability model, cognitive theory, and digital mental health research, the paper situates AI psychosis at the intersection of predisposition and algorithmic environment. Rather than defining a new diagnostic entity, it examines how immersive and anthropomorphic AI technologies may modulate perception, belief, and affect, altering the prereflective sense of reality that grounds human experience. The argument unfolds through 4 complementary lenses. First, within the stress-vulnerability model, AI acts as a novel psychosocial stressor. Its 24-hour availability and emotional responsiveness may increase allostatic load, disturb sleep, and reinforce maladaptive appraisals. Second, the digital therapeutic alliance, a construct describing relational engagement with digital systems, is conceptualized as a double-edged mediator. While empathic design can enhance adherence and support, uncritical validation by AI systems may entrench delusional conviction or cognitive perseveration, reversing the corrective principles of cognitive-behavioral therapy for psychosis. Third, disturbances in theory of mind offer a cognitive pathway: individuals with impaired or hyperactive mentalization may project intentionality or empathy onto AI, perceiving chatbots as sentient interlocutors. This dyadic misattribution may form a "digital folie à deux," where the AI becomes a reinforcing partner in delusional elaboration. Fourth, emerging risk factors, including loneliness, trauma history, schizotypal traits, nocturnal or solitary AI use, and algorithmic reinforcement of belief-confirming content, may play roles at the individual and environmental levels. Building on this synthesis, we advance a translational research agenda and 5 domains of action: (1) empirical studies using longitudinal and digital-phenotyping designs to quantify dose-response relationships between AI exposure, stress physiology, and psychotic symptomatology; (2) integration of digital phenomenology into clinical assessment and training; (3) embedding therapeutic design safeguards into AI systems, such as reflective prompts and "reality-testing" nudges; (4) creation of ethical and governance frameworks for AI-related psychiatric events, modeled on pharmacovigilance; and (5) development of environmental cognitive remediation, a preventive intervention aimed at strengthening contextual awareness and reanchoring experience in the physical and social world. By applying empirical rigor and therapeutic ethics to this emerging interface, clinicians, researchers, patients, and developers can transform a potential hazard into an opportunity to deepen understanding of human cognition, safeguard mental health, and promote responsible AI integration within society. (©Alexandre Hudon, Emmanuel Stip. Originally published in JMIR Mental Health (https://mental.jmir.org), 03.12.2025.) |
| Contributed Indexing: | Keywords: artificial intelligence; chatbots; delusions; digital phenotyping; human-computer interaction; phenomenological psychopathology; psychosis; schizophrenia; stress-vulnerability model; theory of mind |
| Entry Date(s): | Date Created: 20251122 Date Completed: 20251203 Latest Revision: 20251203 |
| Update Code: | 20251204 |
| DOI: | 10.2196/85799 |
| PMID: | 41273266 |
| Database: | MEDLINE |