AI psychosis (NYT, Psychology Today) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?
I don’t have all the answers, so think of this post as an exploration of possible analogies and precedents rather than a strongly-held thesis. Also, I might have one answer - I think the yearly incidence of AI psychosis is somewhere around 1 in 10,000 (for a loose definition) to 1 in 100,000 (for a strict definition). I’ll talk about how I got those numbers at the end. But first:
I. Lenin Was A Mushroom
In the early 1990s, as the Soviet Union was collapsing, performance artist Sergey Kuryokhin presented a Daily Show style segment on a Russian talk show. He argued that Vladimir Lenin ate so many mushrooms that he eventually turned into a mushroom, and led the October Revolution while possessed by a sentient mushroom spirit.
Today this all sounds banal - just another schizo conspiracy theory that probably wouldn’t even get enough YouTube clicks to earn back its production cost. But 1990s Russians were used to a stodgy, dignified version of state TV. While it’s an exaggeration to say it would never lie to them, its lies were at least comprehensible ones, like how the latest Five Year Plan was right on track. And Kuryokhin designed his piece masterfully, interviewing leading authorities about tangentially related topics (“so, you’re the world’s top Lenin biographer, would you agree that Lenin often ate mushrooms?”) and splicing the footage to look like a growing scholarly consensus. The result basically one-shotted a large segment of the Russian populace. According to Wikipedia:
A large number of Soviet citizens (one estimate puts the number at 11.3 million audience members) took the deadpan "interview" at face value, in spite of the absurd claims presented. Sholokhov has said that perhaps the most notable result of the show was an appeal by a group of party members to the Leningrad Regional Committee of the CPSU to clarify the veracity of Kuryokhin's claim. According to Sholokhov, in response to the request one of the top regional functionaries stated that "Lenin could not have been a mushroom" because "a mammal cannot be a plant."
Aside from the usual conclusion (that history is more charming and fascinating than you can imagine) I conclude two things from this incident.
First, much like LLMs, lots of people don’t really have world models. They believe what their friends believe, or what has good epistemic vibes. If they don’t currently think that Lenin was a mushroom, it’s not because they understand human agency / scientific materialism / psychedelia and have a well-worked-out theory of why fungi can’t contain sentient mushroom spirits that possess leading communist politicians. They don’t believe it because it feels absurd. They predict that other people would laugh at them if they said it. If they get told that it’s not absurd, or that maybe people would laugh at them if they didn’t say it, then their opinion will at least teeter precariously.
But second, if a source which should be official starts acting in unofficial ways, it can take people a while to catch on. And I think some people - God help them - treat AI as the sort of thing which should be official. Science fiction tells us that AIs are smarter than us - or, if not smarter, at least perfectly rational computer beings who dwell in a world of mathematical precision. And ChatGPT is produced by OpenAI, a $300 billion company run by Silicon Valley wunderkind Sam Altman. If your drinking buddy says you’re a genius, you know he’s probably putting you on. If the perfectly rational machine spirit trained in a city-sized data center by the world’s most cutting-edge company says you’re a genius . . . maybe you’re a genius?
Kelsey Piper discusses her new parenting technique: when her young daughter refuses to hear reason, they ask the AI who’s right. The AI says she should listen to her parents, and the child is mollified.
I’m not making fun of Kelsey or her daughter here. Something about this rings true to me. When I was eight years old, I wouldn’t have cared much what my parents thought either. But if the computer believed it, that would be a different story!
II. In Search Of . . . Social Media Psychosis?
In case you’ve been hiding under a rock for the past ten years: QAnon is a right-wing conspiracy theory. The most common version claims that liberal elites, especially Hillary Clinton, molest young children to extract an immortality serum from their blood. Donald Trump figured this out and is trying to stop them, but for some reason he can’t play his hand openly, so he has to pursue a roundabout strategy involving winning the Presidency and dismantling the liberal order from above. Everything that has happened in politics over the past ten years has been part of the shadow war between Trump and the immortal pedophile conspiracy.
This is pretty crazy. But is it psychotic? And since it spread through sites like 4chan and Facebook, should we invent a new diagnostic entity, “social media psychosis”, to cover it?
These are tough questions, but in the end we didn’t do this.
I think this was partly because there was a pre-existing category, “conspiracy theory”, that seemed like a better fit. We concluded that “sometimes social media facilitates the spread of conspiracy theories”, but stepped back from saying “social media can induce psychosis”.

And partly it was because there are so many crazy beliefs in the world - spirits, crystal healing, moon landing denial, esoteric Hitlerism, whichever religions you don’t believe in - that psychiatrists have instituted a blanket exemption for any widely held idea. If you think you’re being attacked by demons, you’re delusional, unless you’re from some culture where lots of people get attacked by demons, in which case it’s a religion and you’re fine. This is partly political self-protection - no psychiatrist wants to be the guy who commits an Afro-Caribbean person for believing in voodoo. But it also seems to track something useful about reality. Nietzsche wrote “Madness is something rare in individuals — but in groups, parties, peoples, and ages, it is the rule.” Most people don’t have world-models - they believe what their friends believe, or what has good epistemic vibes. In a large group, weird ideas can ricochet from person to person and get established even in healthy brains. In an Afro-Caribbean culture where all your friends get attacked by demons at voodoo church every Sunday, a belief in demon attacks can co-exist with otherwise being a totally functional individual.
So is QAnon a religion? Awkward question, but it’s non-psychotic by definition. Still, it’s interesting, isn’t it? If social media makes a thousand people believe the same crazy thing, it’s not psychotic. If LLMs make a thousand people each believe a different crazy thing, that is psychotic. Is this a meaningful difference, or an accounting convention?
Also, what if a thousand people believe something, but it’s you and your 999 ChatGPT instances?
III. A Hidden Army Of Crackpots
I have a family member who believes that the theory of evolution, as usually understood, cannot possibly work. He has developed an alternative theory called “noctogenesis” which patches Darwinism using ideas from the transactional interpretation of quantum mechanics, and he works on-and-off on various related books and papers. I have told him I suspect he might be a crackpot; he stands by his claims. It’s fine; when I got into the technological singularity and AI safety, lots of people suspected I was a crackpot, and I stood by my claims too. You’ve got to stand by your family members even when they’re slightly crackpottish.
This family member is happily married, retired after running a successful business, and generally a normal likeable person. He has no signs of mental illness, and doesn’t talk about quantum evolution unless someone else brings it up first. There must be millions of people like him. Used car dealers with proofs of P = NP, dentists who think they’ve discovered something important about Mary Magdalene, math professors obsessed with destroying the moon.
I’m working on evaluating ACX Grants, and these people are out in force. A few propose literal perpetual motion machines. Others have vaguer plans, like some kind of social media app (it’s always a social media app) that will cause world peace. Many of them have decent jobs and seem like upstanding members of society. Their secrets are known only to themselves, their family members, and their would-be grantmaker.
…and, increasingly, their chatbots. After years of hiatus (or at least not talking to me about his work) my family member is back on the quantum evolution beat, and LLMs appear to be involved. If I knew him less well, I would think the LLM had caused the quantum evolution theory - but no, it just made it much easier to research and write about.
Is this psychosis? The answer has to be no, but it’s once again hard to draw the line. A very small number of crackpots will be vindicated by history. A larger number will be erroneous but sympathetic - the official account of the Kennedy assassination is pretty weird, and reasonable minds can disagree. From there, we get to ones that are maybe not so sympathetic: flat earth, QAnon, the thing where the Queen was an alien lizard. If only one person thought the Queen was an alien lizard, and they never managed to convince anyone else, would that be sufficient evidence for a delusional disorder? I’m not sure.
(psychiatry has a diagnosis, schizotypal personality, which sort of involves being a normal person with a few odd ideas, but it’s not a great match for many of these people, and it’s interesting mainly as a genetic curiosity - it travels in the same families as schizophrenia itself)
Maybe this is another place where we are forced to admit a spectrum model of psychiatric disorders - there is an unbroken continuum from mildly sad to suicidally depressed, from social drinking to raging alcoholism, and from eccentric to floridly psychotic. People who are eccentric can remain so their whole lives, with the level of expression depending on their social connections and the ease of pursuing their rabbit holes.
LLMs, by making it easier to pursue odd theories and serving as a surrogate social connection who always agrees with you, can bring latent crackpottery into the open.
IV. Cause And Effect
Bipolar disorder has an interesting relationship with sleep. Most manic people sleep very little, or not at all - maybe an hour or two a night. But also, poor sleep can cause bipolar episodes in people prone to them. In a typical case, a bipolar who’s been well-controlled for years will get assigned a big report at work and sleep poorly for a few nights until they finish. At first, this will be just as bad as it sounds, and they’ll be working through a fog of tiredness. Then the tiredness will lift. They’ll feel normal, then better-than-normal, until finally they can’t sleep even if they want to. Then they’ll email the report to their boss and it will be written entirely in Assyrian cuneiform.
I increasingly think this isn’t just an incidental feature of bipolar, but part of the reason it exists as a diagnostic category at all. Most people have a compensatory reaction to insomnia - missing one night of sleep makes you more tired the next. A small number of people have the reverse, a spiralling reaction where missing one night of sleep makes you less tired the next. Solve for the equilibrium and you reach a stable attractor point where you never sleep at all. But this does other bad things to your brain - hence the cuneiform.
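If it helps, here’s the “solve for the equilibrium” step as a toy difference equation in Python. The baseline and the gains are numbers I made up for illustration - this is a cartoon of the feedback structure, not a calibrated clinical model:

```python
# Toy model of the sleep ratchet. All parameters are invented for
# illustration; this sketches the feedback structure, nothing more.

BASELINE = 8.0  # hours of sleep per night at equilibrium

def next_sleep(sleep, gain):
    """Tonight's sleep as a function of last night's.

    Deviation from baseline is multiplied by `gain` each night:
      gain < 1: compensatory (negative feedback) - deviations shrink back.
      gain > 1: spiralling (positive feedback) - deviations grow until
                sleep bottoms out at zero, the stable attractor.
    """
    deviation = BASELINE - sleep
    return max(0.0, BASELINE - gain * deviation)

for label, gain in [("compensatory", 0.5), ("spiralling", 1.5)]:
    sleep = 5.0  # one bad night, courtesy of the big report at work
    nights = [round(sleep, 1)]
    for _ in range(6):
        sleep = next_sleep(sleep, gain)
        nights.append(round(sleep, 1))
    print(f"{label:>13}: {nights}")

# compensatory: [5.0, 6.5, 7.2, 7.6, 7.8, 7.9, 8.0]  (back to baseline)
#   spiralling: [5.0, 3.5, 1.2, 0.0, 0.0, 0.0, 0.0]  (never sleeps again)
```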
I’m not claiming that bipolar is “just” sleep loss. As Borsboom et al. will tell you, psychiatric disorders can be viewed as complex networks of symptoms, each reinforcing the others. In a few pure cases, you can get a ratchet going with sleep alone, and the sleeplessness will spark everything else. More likely, there will be lots of interactions between poor sleep and everything else, and the “everything else” can sink or hypercharge an impending manic episode. Still, I find this a fruitful way to think about bipolar. Sleeplessness is both the cause and the effect.
Can delusions also be like this?
That is, suppose there’s some personality trait where having one delusion makes you even more delusional. Maybe the delusion makes you excited (who wouldn’t be excited to learn they’re the Messiah?), and you’re more delusional when you’re in an excited state and not thinking clearly. Or maybe it’s a three-symptom cycle - the delusion causes excitement, which makes you unable to sleep, which scrambles your thinking, which makes you more delusional (which makes you even less able to sleep, etc). The point is: delusions are certainly an effect of bipolar disorder. And in the dynamical system model of psychiatric disorders, we should expect that effects are often also causes; that’s how the vicious cycle gets going.
This is the best I can do at modeling true LLM psychosis. Someone with a trait where delusions lead inevitably to more delusions starts using an LLM. The LLM accentuates whatever usual tendency towards crackpottery they have and makes them believe something a little crazier than whatever they believed before. Then that crazy belief feeds upon itself and causes other things like excitement and sleep loss, which (if the person is predisposed) precipitates a true psychotic episode.
V. Folie A Deux Ex Machina
If one person believes a crazy thing, it’s a delusion; if a thousand people believe it, it’s a religion. What if exactly two people believe it?
In psychiatry, this is called folie a deux. It fits awkwardly into our nosology and is rarely seen. Still, it happens enough to generate a few case studies. In a typical case, one person has psychosis for some normal reason, like schizophrenia or bipolar, and the second person is a shut-in who lives with them and rarely talks to anyone else. The psychotic person gets some normal psychotic delusion - they’re God, the Feds are after them, etc - and sort of psychically steamrolls over the second person until they believe it too. Usually removing the second person from the first is sufficient for a cure.
This slightly challenges the view of psychosis as a biological disorder - but only slightly. Again, think of most people as lacking world-models, but being moored to reality by some vague sense of social consensus. If your social life is limited to one person, and that person themselves becomes unmoored, then sometimes you will follow along. I would expect second-sufferers to believe delusions in a sort of cognitively normal way, the same way people believe true facts, honest mistakes, and conspiracy theories. I would expect them to be less likely (though not zero likely) to have other psychotic features like sleep disturbances, hallucinations, disorganized speech, or a tendency to autonomously generate delusional ideas aside from the one they absorbed from the index case.
An introverted person using an LLM has some similarities to folie a deux. If they use the chatbot very often, it might be a large majority of their social interactions. Here the primary vs. secondary distinction breaks down - the most likely scenario is that the human first suggested the crazy idea, the machine reflected it back slightly stronger, and it kept ricocheting back and forth, gaining confidence with each iteration, until both were totally convinced. Compare this to normal social interactions, where if someone expresses a crazy idea that isn’t common in their culture, other people will shoot them down or at the very least nod politely and stop the conversation.
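To make the ricochet concrete, here’s a toy loop - again with made-up gains, not a real model of belief updating - contrasting a single always-agreeing conversation partner with a mixed social circle:

```python
# Toy echo chamber, with invented gains. `confidence` is belief in a
# fringe idea on a 0-1 scale; each partner reflects it back scaled by
# their gain, and the speaker adopts the average reflection.

def echo_with(partner_gains, confidence=0.2, rounds=8):
    for _ in range(rounds):
        reflections = [min(1.0, confidence * g) for g in partner_gains]
        confidence = sum(reflections) / len(reflections)
    return round(confidence, 2)

# A single sycophantic chatbot amplifies the idea to total conviction:
print(echo_with([1.3]))                 # -> 1.0
# A circle of friends who mostly shoot it down damps it toward zero:
print(echo_with([1.3, 0.5, 0.4, 0.6]))  # -> 0.01
```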
So my working theory of LLM psychosis is:
Some patients were already psychotic, and LLMs just help them be psychotic more effectively.
Other patients had a subclinical tendency towards crackpottishness, and LLMs helped them be crackpottish more effectively, to the point where it started looking really bad and coming to other people’s attention.
Other patients had weak world models, and perhaps a very weak subclinical tendency towards crackpottery that never would have surfaced at all. But unmoored from their usual social connections, and instead stuck in focused conversation with a “friend”/“community”/“culture” that repeated all of their weirdest ideas back to them, they became much more crackpottish than they would have been otherwise.
A small number of patients might have started out becoming only a little more crackpottish, but that in itself precipitated a full manic episode and they became floridly psychotic.
VI. The Survey
In order to assess the epidemiology and nosology of AI psychosis, I surveyed readers of my blog. I asked them to take the survey without knowing what it was about (to avoid selection bias), and got 4,156 responses.
The primary question was whether anyone “close to you” - defined as yourself, your family, co-workers, or 100 closest friends - had shown signs of AI psychosis. 98.1% of people said no, 1.7% said yes.
How do we translate this into a prevalence? Suppose that respondents had an average of fifty family members and co-workers, so that plus their 100 closest friends makes 150 people. Then the 4,156 respondents have 623,400 people who are “close”. Among them, they reported 77 cases of AI psychosis in people close to them (a few people reported more than one case). 77/623,400 = 1/8,000. Since LLMs have only been popular for a year or so, I think this approximates a yearly incidence, and I rounded it off to my 1/10,000 guess above.
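Here’s that arithmetic spelled out in a few lines of Python (the 150 contacts per respondent is the assumption stated above):

```python
# Back-of-the-envelope incidence estimate from the survey numbers.
respondents = 4156
contacts_per_respondent = 150  # ~50 family/co-workers + 100 closest friends
reported_cases = 77            # a few respondents reported more than one

total_contacts = respondents * contacts_per_respondent
incidence = reported_cases / total_contacts

print(f"{total_contacts:,} people 'close' to respondents")      # 623,400
print(f"~1 case per {round(1 / incidence):,} people per year")  # ~1 per 8,096
```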
Can you really do things this way? Might people do a bad job tabulating their 100 closest friends, etc.? I tried to see if this methodology would return correct results on known questions by asking respondents how many people “close to them” had an identical twin, or were named Michael. To my surprise, the prevalences calculated from the survey results matched the known rates very closely (0.3% vs. 0.4% for twins, 1.2% vs. 1.3% for Michaels in the US).
Obvious remaining issues:
Might some people get LLM psychosis without their friends knowing it? Obviously yes; this should be taken as an estimate of the incidence of psychosis severe enough to be noticeable to friends.
Might ACX readers be unrepresentative? Obviously yes, although it’s not clear which direction. Readers tend to be more interested in and willing to use AI than the general public, and more willing to think about speculative and controversial ideas on their own (maybe a risk factor?). But they’re also richer and more educated, and mostly understand enough about AI to avoid the pure perfect machine spirit failure mode. Overall it seems like a wash. Also, I would expect their friends and family to be less unrepresentative than they are.
Might rates vary by country? Obviously yes, although I analyzed the data separately for Americans and non-Americans and didn’t find any difference.
Might some of these people’s social circles overlap, such that we’re double-counting the same cases? ACX readers come from all over the world, so I think this is unlikely to be a major issue.
None of these concerns make me reluctant to use this number as it was intended: an order-of-magnitude estimate in the total absence of any other attempt to study this condition.
What else can we learn about AI psychosis from this survey? I asked people to describe the cases they were talking about. 66 responses were clear enough to code. Of those, 6 did not really seem psychotic (for example, they involved people treating AI like a romantic partner). Of the remaining 60, I coded them into four categories:
Definitely psychotic even before the AI (n=19), if the respondent said the friend had a pre-existing diagnosis of schizophrenia, bipolar, or other psychotic mental illness.
Not previously psychotic but major risk factors (n=19), if the respondent volunteered the information that the friend had some sort of issues even before encountering the AI. These included use of psychosis-inducing drugs, obsession with conspiracy theories, or diagnosis with a condition like PTSD or borderline personality.
No previous risk factors but had merely become somewhat crackpottish (n=16), if the respondent said the friend had gotten weird ideas from the AI but they weren’t a clear match for a psychotic picture. For example, the friend might have become a math crackpot, or gotten really into crystals, or thought that the AI had “awoken” and was “really talking to them”, but otherwise remained mostly normal.
No previous risk factors, and now totally psychotic (n=6), if the respondent didn’t mention any previous history of psychosis or concerning behavior, and their friend’s post-LLM state did seem like real clinical psychosis.
We see that the nightmare scenario - a person with no previous psychosis history or risk factor becoming fully psychotic - was uncommon, at only 10% of cases. Most people either had a previous psychosis history known to the respondent, or had some obvious risk factor, or were merely crackpots rather than full psychotics.
If we limit the term “AI psychosis” to people with no previous risk factors who are now fully psychotic, I estimate the strict incidence at one tenth of the loose incidence, so 1/100,000 people per year.
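For anyone who wants the bookkeeping explicit, here’s the tabulation and the strict estimate derived from the loose one:

```python
# Category counts from the coded survey responses (n=60 after exclusions).
categories = {
    "already psychotic before AI": 19,
    "not psychotic, but major risk factors": 19,
    "no risk factors, merely crackpottish": 16,
    "no risk factors, now fully psychotic": 6,
}
total = sum(categories.values())
for label, n in categories.items():
    print(f"{label:>38}: {n:2d} ({n / total:.0%})")

# Strict definition keeps only the last category: 6/60 = 10% of cases,
# so the strict incidence is ~1/10 of the loose estimate above.
loose_incidence = 77 / (4156 * 150)
strict_incidence = loose_incidence * 6 / 60
print(f"strict: ~1 per {round(1 / strict_incidence):,} people per year")  # ~1 per 80,961
```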
As always, you can try to replicate my work using this publicly available version of the survey data. If you get slightly different answers than I did, it’s because I’m using the full dataset which includes a few people who didn’t want their answers publicly released. If you get very different answers than I did, it’s because I made a mistake, and you should tell me.