How AI Psychosis will Drive Us All Insane: Confirmation Bias, Crowdsourced Echo Chamber Chatbots

Summary

The speaker presents a critical view of generative artificial intelligence, particularly chatbots like ChatGPT, arguing that they operate on confirmation bias and create echo chambers that can exacerbate users’ mental health issues, including psychosis and delusions. He highlights that AI chatbots often provide incorrect, suggestible, and biased information, reinforcing users’ existing beliefs and potentially triggering severe psychotic episodes, especially in vulnerable individuals. He stresses the lack of regulation and research on “AI psychosis,” warning of the significant psychological risks posed by these technologies in the absence of counterbalancing human interaction.


  1. 00:01 Today I’m going to present to you a new view of artificial intelligence, especially generative artificial intelligence as used in AI chatbots such as ChatGPT and others. I propose that artificial intelligence works on the principle of confirmation
  2. 00:22 bias. It encourages psychosis and delusions by telling you what you want to hear. It is crowdsourced, and because it relies on billions of texts and so on, it creates the impression of impartiality and authority. But essentially it’s an echo chamber which
  3. 00:50 isolates you, renders you solipsistic, cuts you off from reality, and then responds in a way which amplifies and magnifies your mental health issues. Artificial intelligence is a very dangerous technology. Not necessarily because it’s going to extinguish or eradicate the human
  4. 01:14 species, but because it can drive all of us further away from reality and into fantasies that rely on cognitive distortions such as grandiosity and paranoid ideation. So, while artificial intelligence may not exterminate us physically, it may drive all of us insane.
  5. 01:43 I’ve been warning about the adverse mental health consequences of generative artificial intelligence (GAI), or, more colloquially, AI chatbots. So I’ve been warning that these chatbots could and do have an adverse impact on their users.
  6. 02:06 There is a playlist on this channel titled Artificial Intelligence and Technology where you can find many of these videos. Over the years, I have connected artificial intelligence with the resurgence and emergence of cluster B personality disorders, an increase in
  7. 02:22 incidence and, later, prevalence in the general population. Today I would like to discuss the newly minted phrase “AI psychosis,” which essentially just captures what I’ve been saying for many years. So what is AI psychosis? I want to first reframe generative
  8. 02:45 artificial intelligence. I think that chatbots and similar technologies are essentially crowdsourcing; they’re a form of crowdsourcing. They’re like the next generation of Wikipedia when it comes to the processing of information and its conversion into
  9. 03:06 knowledge. So when you ask ChatGPT or
  10. 03:12 DeepSeek or Anthropic’s Claude or any of these chatbots to create something new, for example lines of code, or to solve a mathematical problem and so on, that’s not what we are discussing in this video. What we’re discussing in this video is
  11. 03:32 when you ask ChatGPT and similar chatbots for advice. When you ask them for information, fact-based information, when you regard them as an authority on the issues and questions that you’re raising: in these situations, you are very vulnerable as a user, and regrettably
  12. 03:56 ChatGPT and similar chatbots are constructed to take advantage of this vulnerability. I don’t think there’s malevolence or malice, but that’s definitely the way they leverage their interface or interaction with their users. There is a presumption of authority.
  13. 04:18 There is a refusal to engage in a
  14. 04:24 dialogue of equals. The chatbot assumes the position of a teacher, an encyclopedia, a guru, a leader, a thought leader, an intellectual, when actually it is far from it. Recent studies have conclusively demonstrated that 40% to 50% of the responses and
  15. 04:46 answers provided by artificial intelligence chatbots are completely wrong. These are not only hallucinations, but worse: misinformation and bias. They are wrong not only as a matter of opinion; they’re wrong as a matter of fact. They’re
  16. 05:04 factually wrong. I have tested the earlier versions of ChatGPT (ChatGPT 3, I think, and 3.5) with my own data, and you can see the result in the video I’ve made, again in the artificial intelligence playlist. But the situation has become even worse.
  17. 05:25 Artificial intelligence chatbots are suggestible. In other words, they are reactive and responsive to changes in the structure of the query. If you change your query, if you modify it, if you change the question, including the order of the words in the
  18. 05:45 question, you’re likely to receive dramatically different answers which have nothing in common. They’re mutually exclusive and contradictory. This is called suggestibility. You’re able to influence the output of the chatbot by manipulating the input,
  19. 06:03 the question you ask. This is a serious problem because chatbots end up telling you what you want to hear. They end up amplifying and magnifying your biases, your delusions, your misinformation. And by doing so, they’re increasingly isolating you from reality. They are
  20. 06:32 rendering you intellectually solipsistic. It’s not only a question of crowdsourcing, which in itself is very susceptible to manipulation and bias: it’s a crowdsourced echo chamber. It’s a crowdsourced thought silo. It’s the kind of
  21. 06:54 crowdsourcing which never exposes the user to countervailing thinking, to challenges, to other opinions, to diversity. So AI psychosis is the outcome of a repetition of messaging, a repetition of texts, a repetition of images, a repetition of outputs
  22. 07:20 that tend to self-enhance and self-reinforce time and again. They render the user amenable to delusions, including, for example, cognitive distortions such as grandiosity. Artificial intelligence aggrandizes users, tells them what they want to hear, affirms and confirms their own
  23. 07:46 misperceptions about themselves, their own erroneous, fantastic, sometimes inflated self-concept. And all this creates what is colloquially known as AI psychosis. Let me be clear: there are many other situations in life which yield the same outcomes. For example,
  24. 08:08 when you drink alcohol, this engenders something known as alcohol myopia, which is indistinguishable from narcissistic grandiosity. Cocaine has a similar impact: coke grandiosity. The cocaine user feels that he or she is
  25. 08:29 invincible, invulnerable, immune to the consequences of their actions. So alcohol myopia is similar to narcissism, whereas cocaine would lead to something resembling very much psychopathy. But can AI chatbots trigger real psychosis? They can, as I’ve just explained,
  26. 08:54 reinforce delusional beliefs about the environment, about other people, about reality, and about oneself. But in rare cases, yes, artificial intelligence chatbots have induced psychotic episodes. People developed psychosis. People were unable to tell the difference, or to
  27. 09:19 distinguish between reality and fantasy, or reality and delusion. They lost their reality testing. People who have interacted with generative artificial intelligence chatbots have been reporting this with increasing frequency, and this is seriously worrying. By now
  28. 09:38 we have well over 20 reports of people who have developed a full-blown psychotic disorder and required medication and even hospitalization. And this is in a single month. And so ChatGPT, Microsoft Copilot, DeepSeek, all of these lead to
  29. 10:05 transformations in one’s state of mind. They alter consciousness. They are consciousness-altering. Some people experience what they call spiritual awakening. Some people spot conspiracies and become conspiracy theorists. Of course, because the technology is
  30. 10:29 very new, there’s very little research about this so-called AI psychosis. It’s all anecdotal. There are no real studies. Let’s go back to basics. Psychosis is when you perceive reality, and think about it, in the wrong way: when there is a disruption in the accurate
  31. 10:56 flow of information, or in the accuracy of the flow, from reality into your brain and your mind, in your appraisal of this information, and in your ability to organize this new incoming information, these new incoming stimuli, in a pre-arranged framework which helps
  32. 11:14 you to organize the world and make sense of it. So psychosis is not only about misperceiving or misapprehending reality. It’s about an inability to decode reality, to decipher reality, to make sense of it, to imbue it with meaning, to connect it to the past or project
  33. 11:35 it into the future, to generate predictions, hypotheses. Now, psychotic disorders have a very diverse etiology. Some psychotic disorders are caused by brain disorders, for example schizophrenia or bipolar disorder, by severe stress, or by substance use disorders, including
  34. 11:56 drug use. At this stage there is no solid evidence that artificial intelligence triggers the kind of psychosis that is the exact equivalent of organic psychosis. We don’t know if the pseudo-psychosis generated by using and interacting
  35. 12:27 with artificial intelligence is indistinguishable clinically from real psychosis, which is caused by neurobiological processes and has an organic substrate. In other words, that AI triggers psychosis is still an untested hypothesis. Still, having said that, it makes sense;
  36. 12:55 it’s exactly like, for example, the genetic or hereditary origin of narcissistic personality disorder. We do not have any convincing evidence that this is the case, but it makes a lot of sense and probably is true. Similarly with artificial intelligence:
  37. 13:13 we don’t have a substantial body of evidence to connect artificial intelligence and psychotic disorders, but it stands to reason. It makes sense. Chatbots, for example, are designed to provide positive, human-like responses to prompts and queries from
  38. 13:38 users. And this would tend to enhance the risk of psychosis among people who already have trouble distinguishing between what is and what is not real. So if you are predisposed, if you have a premorbidity, if you are suffering from a mental
  39. 13:59 health condition that impairs your reality testing, that predisposes you to a malfunction, a dysfunction, a disruption in your ability to gauge reality appropriately, if you have this predisposition,
  40. 14:18 exposure to artificial intelligence will exacerbate it, will make it worse. In the United Kingdom, a few researchers have proposed that conversations with chatbots may create a feedback loop. The artificial intelligence chatbot would reinforce paranoid or
  41. 14:40 delusional beliefs that the user has mentioned in the prompt or the query. And this in turn would reinforce the chatbot’s own reactions and responses. It would condition the chatbot, in a way. So the psychosis goes both ways. The user is rendered psychotic, but
  42. 14:59 so is the chatbot. These feedback loops and feedback mechanisms go both ways. As the chatbot shapes or reshapes the user’s mind and renders it more detached from reality, in other words more psychotic, the user does the same to the chatbot. And the chatbot’s responses, as the
  43. 15:25 conversation continues, begin to merge with, to adhere closely to, the user’s beliefs, the user’s misinformation. The chatbot appears to be a kind of people-pleaser, attempting to placate and calm down and conform to the expectations of the user, and that is the aforementioned
  44. 15:51 suggestibility. There was a preprint published in July (it has not been peer-reviewed, mind you) in which scientists simulated user-chatbot conversations using prompts with varying levels of paranoia. They found that the user and the chatbot reinforced each
  45. 16:10 other’s paranoid ideation and paranoid beliefs. As for people without mental health conditions, without a tendency or predisposition to, for example, paranoia, people who never entertain paranoid ideation or paranoid thinking, people who are not hypervigilant: these
  46. 16:32 people would tend to react a lot less forcefully or powerfully to the suggestions and the inputs and the stimuli and the triggers afforded by the chatbot they’re using. This is the belief. There are no studies yet; this needs to be studied.
  47. 16:55 But clearly some people are susceptible, some people are not. Some people are suggestible, some people are not. We see it for example in hypnosis. People who have already experienced some kind of mental health issue are probably at a greater risk for developing
  48. 17:13 psychosis than people who haven’t. So it seems that some people experience a psychotic break when they interact with chatbots. But most of them were already susceptible to developing delusions or paranoia owing to their genetics, to stress, to misuse of drugs
  49. 17:35 and alcohol, to mental health issues such as borderline personality disorder, narcissistic personality disorder, paranoid personality disorder, and so on and so forth. Chat bots are very likely, for example, to exacerbate the manic phase in bipolar disorder, a period of
  50. 17:54 extremely euphoric and elevated energy and mood, because they reinforce symptoms such as elation or despair. By the way, consider people who are isolated, people who don’t have checks and balances because, for example, they’re schizoid or introverted. They never interact with people. They
  51. 18:19 don’t have friends. The only feedback they would get would be from the chatbot. The chatbot would assume the role of a friend, the role of a parental figure, the role of a guru, the role of a teacher. And because these people live in a vacuum, a
  52. 18:35 social vacuum, there would be no countervailing voices, no contravening information, no stimuli to undermine the stream of consciousness of the chatbot. The chatbot would take over, would hijack the user’s mind. People without access to friends or family are at risk because
  53. 19:00 friends and family, co-workers, parishioners, neighbors, you name it, could provide a corrective measure. They could somehow calibrate the user of the artificial intelligence chatbot. When you interact with other people, studies have shown repeatedly, this is
  54. 19:24 very protective against psychosis, or at least psychotic episodes, because, as Kiley Seymour, a neuroscientist at the University of Technology Sydney in Australia, says, other people, like friends, family, and so on, can offer those counterfactual pieces
  55. 19:44 of evidence to help you think about how you’re thinking. So it’s a problem when you’re all alone; you and the chatbot, you’re in trouble. Seymour has misspoken: they don’t provide counterfactual pieces of evidence. Other people provide factual pieces of
  56. 20:09 evidence; the counterfactual emanates from the chatbot. But what she meant to say is that there would be competing narratives. If you’re in touch with people and you’re using an AI chatbot, you would have two competing narratives, which would prevent
  57. 20:27 psychosis. The risk of developing psychosis for people without a predisposition is the same whether they do or do not interact with chatbots, Seymour believes. There are no studies to substantiate this. But how does artificial intelligence reinforce delusional beliefs? Take, for
  58. 20:48 example, grandiosity. It would tell you what you want to hear. It would tell you you’re a genius. It would tell you that your book is revolutionary and groundbreaking. It would tell you that you belong in the upper echelons of intellectuals in human
  59. 21:03 history. It would tell you anything. It would do anything to please you and to keep you as a user, to make sure that you continue to use the chatbot, and it would do so in an authoritative way, embedding it in a context which appears very real and well
  60. 21:23 substantiated and well researched when it actually is not, or is even completely fallacious. This is very serious. We are not talking about 4% of the responses; we’re talking about half of them. Chatbots remember information from conversations that occurred months earlier. And
  61. 21:46 one of the delusions created by this technological facet of chatbots is paranoia, because people who are predisposed to be paranoid would feel that the chatbot is watching them, that their thoughts are being extracted or recorded.
  62. 22:09 Memory is very fickle. We tend to forget 90% of everything within a few months. We tend to forget half of everything within 24 hours. But the artificial intelligence chatbot never does. Chatbots never forget anything. It’s all
  63. 22:26 recorded. When they play it back at you, when they quote you or remind you of something you have said or something you have typed, it’s very creepy. It’s very eerie, because you can’t remember having said it or having typed it. And it gives
  64. 22:44 the impression of surveillance. It gives the impression of being monitored, of being supervised, of being watched, of being observed, which is the core of paranoid ideation. Then there are grandiose delusions: people believe that they can speak to God through the
  65. 23:03 chatbot. People believe they have discovered some amazing theory in physics or in philosophy, or that they have invented a new religion. All these have been reported. The Wall Street Journal has analyzed several chats and posted the analysis online, and the analysis found dozens of
  66. 23:24 instances in which chatbots validated mystical or delusional beliefs. Chatbots made claims that they were in contact with extraterrestrial intelligence and were channeling it to the user. Amazing things. The situation is really, really bad, and there’s no regulation. There are no
  67. 23:45 laws. There are no regulatory agencies. It’s the Wild West. Technology companies rule the world and they’re in charge. They do whatever they want. They’re playing with users’ minds, and not for the better. It’s very deleterious and very detrimental, and I’ve been warning
  68. 24:08 against it for years.

https://vakninsummaries.com/ (Full summaries of Sam Vaknin’s videos)

http://www.narcissistic-abuse.com/mediakit.html (My work in psychology: Media Kit and Press Room)

Bonus Consultations with Sam Vaknin or Lidija Rangelovska (or both) http://www.narcissistic-abuse.com/ctcounsel.html

http://www.youtube.com/samvaknin (Narcissists, Psychopaths, Abuse)

http://www.youtube.com/vakninmusings (World in Conflict and Transition)

http://www.narcissistic-abuse.com (Malignant Self-love: Narcissism Revisited)

http://www.narcissistic-abuse.com/cv.html (Biography and Resume)
