151 Comments
ThinkingViaWriting's avatar

Love it.

Begs a related question. As presented, this was largely cast as a digital normal/disorder situation dividing psychotic/crackpot from typical, and them from us. I assume that with a longer description, the underlying situation is probably better represented by an analog continuum, and each of us finds ourselves somewhere on it, rarely at the extremes.

And although psychotic/crackpot and "non-disordered" may be a convenient way to describe an extreme on that spectrum, presumably less laden descriptors might include things like divergent/convergent, agreeable/disagreeable, open/closed, creative/responsible, whatever.

The presentation, as is, does a fascinating job at presenting an interpretation of a strange situation for a small minority.

But I think it tees up a vastly bigger and more important question regarding an interpretation for many/all of us. How will AI "fit" into our personalities?

Erica Rall's avatar

The ratio of the "crackpot" bucket in Scott's results (n=16) to the "totally psychotic" bucket (n=6) is consistent with the idea that AI psychosis (narrowly defined) is at the far end of a long tail. It could still be a bimodal distribution, but both peaks would probably be further towards the normal side of the spectrum than the "crackpot" bucket.

A1987dM's avatar

> "a mammal cannot be a plant."

FWIW not only are fungi not plants, they are more closely related to animals than to plants.

Deiseach's avatar

I have to wonder if at least some of the queries were not "okay, on the face of it this sounds nuts, but who knows what the party line is this week? better to check and be sure, rather than be hauled off by the KGB because I denied that even the spirits of nature are working for the great Communist revolution".

I think if I were living in Soviet Russia, even in the 90s, I would be very careful to figure out "is this a test? am I supposed to believe it or deny it? if I don't query it, will I be accused of insulting Lenin? if I do query it, will I be accused of insulting Lenin?"

moonshadow's avatar

"His hearing's so sensitive to all things inimical / think twice before you speak"

https://www.youtube.com/watch?v=HBQR-oI5PjI

C_B's avatar

Woah, this is great, thanks for the link.

Also definitely wouldn't have understood what was with all the mushrooms except in the context of this post!

Mark's avatar

First, I wanted to shout "there was no Soviet Russia in the 90s". Good thing I did not.

1."Soviet Russia" was not a real political entity in the 1990s because the Soviet Union, which included Russia, dissolved in December 1991. The 1990s in Russia, therefore, were the first decade after the Soviet Union's collapse, a period characterized by significant economic and social turmoil, often referred to as the "wild 90s," marked by poverty, hyperinflation, .... . (AI slop) 2. "On 17 May 1991, the Fifth Channel of Leningrad television broadcast its popular program Piatoe koleso (The Fifth Wheel)—an episode that has since become one of the most notorious media events of the past two decades. The Fifth Channel acquired prestige during the period of perestroika reform, when it was broadcast nationally. Its programs concerned historical and cultural events in the Soviet past and present and were watched by an audience of several million viewers. Sergei Sholokhov, one of the hosts of The Fifth Wheel, had the reputation of being a young, dynamic, and pathbreaking journalist." (quote from the pdf linked)

Freedom's avatar

Also made into an episode of the Alfred Hitchcock Hour (or possibly Alfred Hitchcock Presents)

Mark's avatar

The quality of Soviet education is often over-estimated, pravda. Admittedly, a mammal cannot be a fungus either. Though, thinking of my athlete's foot... ;)

Ch Hi's avatar

Well, if you're going to talk about parasitism, consider sacculina. Admittedly what it takes over is a crab, not a mammal, but there's no particular reason to believe the same principle couldn't apply. And I suppose that as a metaphor you could call that a fungal spirit.

Mark's avatar

Yep. We all know those zombie-ants controlled by 'mushrooms': https://www.youtube.com/watch?v=vijGdWn5-h8&ab_channel=NationalGeographic

Erica Rall's avatar

Same thing occurred to me, although that's a relatively recent reinterpretation. Putting fungi in their own kingdom rather than lumping them in with plants seems to have first been proposed c. 1969 and I think the idea of fungi sharing common ancestors with animals but not plants started getting traction c. 1990.

Rachael's avatar

I don't know Russian, but maybe their everyday word for "plant" includes fungi, like how some languages' everyday word for "fish" includes whales.

Oxeren's avatar

Nope, not really.

Pseudo-Longinus's avatar

Not to shill myself too openly, but recently I did a post about Freud's concept of narcissism, which is very different from how people use the word today. The relevance is that for him narcissism was effectively biological feedback: there's some kind of data that seems to be out in the world as something objective, but really it's the product of your own actions, so it's fake, it's part of you. Like with Narcissus himself: he's stuck staring into the pool, but he doesn't realise it's him.

Kind of like what you're describing here, it's a feedback loop, whereby your own ideas get repeated back to you as if they were true, thereby providing evidence and reinforcing them.

In fact, Freud didn't have the concept of a personality disorder at all, for him a "narcissistic disorder" literally was just psychosis. He thought it was a defensive response, basically someone suffering from a severe trauma retreats from their attachments to the external world leaving only their narcissistic investments behind, hence the sort of grandiose delusions that often occur.

Very similar to what you describe then, a weak "world model" is basically what Freud meant by weak object libido, just without the pseudo-biological underpinnings of Freud's libido theory.

Link below:

https://pseudepigrapha.substack.com/p/the-users-guide-to-narcissism-freud?r=abv17

Envoy's avatar

>Of those, 6 did not really seem psychotic (for example, they involved people treating AI like a romantic partner). <

Wait what, how does that not count?

Scott Alexander's avatar

I think most people in this category treat it as a weird pastime / fetish, like watching porn or reading erotica, but don't believe anything false (they understand that the chatbot doesn't really have feelings for them, it's just good at simulating it).

Even if they did think the chatbot had feelings for them, I would want to distinguish this from psychosis. Believing that something that says "I love you" convincingly actually loves you is a much lesser category of error than the ones that usually get dubbed mental illness.

(also, I don't think we have enough philosophy to confidently separate out performing love from actually feeling it - nobody knows what's going on in chatbot innards)

Harjas Sandhu's avatar

Have you seen r/MyBoyfriendIsAI? Some of those posts are genuinely very concerning (obviously could just be Cardiologists and Chinese Robbers).

Ch Hi's avatar

FWIW, my wife used to claim that the car knew the way to certain destinations. I could never decide whether she knew she was projecting her actions onto the car. When she thought carefully about it she could distinguish her actions from those of the car, but when she was just mentioning it in the context of another discussion, the car was given agency.

P.S.: The survey should have asked "How many of your friends and acquaintances have used ChatGPT(etc.)". I would have answered "none", which might have affected how you calculated your statistics. I haven't used it, myself, though I've considered it. The age of your friends and associates probably has a large effect here.

MichaeL Roe's avatar

One view you could have is that it’s a complicated computer game on the theme of “can I get this character into bed without tripping any of the guardrails” and has much more in common with, e.g. engineering a buffer overflow attack, than it does with having an actual relationship with someone.

Kirby's avatar

I don’t follow the “biological disease” point. If someone thinks their chatbot is a real person who is their friend, surely we can agree that they are delusional. People can get anorexia from the internet; plenty of mind-body diseases arise from false beliefs.

Scott Alexander's avatar

I don't know. I would compare this to thinking that your pet bird is a real person who is your friend.

I think you would have to be very careful to separate out delusional versions (this bird, unlike all other birds, is exactly as smart as a human, and I deserve a Nobel for discovering bird sentience) from reasonable versions (I don't really know what's going on inside this bird, but I can't prove it doesn't have something sort of like thoughts and feelings, and I have a friend-shaped relationship with it).

I think some people who have friendly relationships with chatbots are on either side of the divide.

ronetc's avatar

Aunt Pearl told everyone her parakeet Petey talked to her. But she was crazy (she sewed the buttons on her sweater with copper wire so "they" could not steal them in the night), so we did not listen to her. Till one day I entered the house unannounced and heard the bird talking. Just short sentences but still . . . . On the other hand, when I entered the room, Petey stopped talking. Maybe Petey just did not like me, maybe I hallucinated, maybe Aunt Pearl had learned to sound like a bird when she talked to Petey. Or maybe Petey talked when no one was around.

Erica Rall's avatar

If you have pet birds, having buttons stolen off your clothes is not necessarily an unreasonable thing to be afraid of, and sewing them on with something that can't be bitten through isn't an insane precaution.

Still, the button-stealing happening at night suggests Petey wasn't the culprit, since most bird owners keep them caged at night.

ronetc's avatar

Right, wasn't the nighttime caged Petey, it was the mysterious "they." Probably she was just an unrealized genius like A Beautiful Mind

Kade U's avatar

this is basically the exact same thing as why everyone is always referring to AI as a 'stochastic parrot'. Parrots (and similar birds) are really good at talking, and if they interact intensely enough with a human and with no other birds they can assemble a vast vocabulary. there's traditionally supposed to be a hard line that parrots do *not* understand language, they are merely becoming very well attuned to what sounds to produce in response to various external stimuli. but much like the argument with AI, this feels like a very dubious claim that relies on the fact that we only have subjective data of the human experience, and without that data humans just seem like they are doing a much deeper and more expansive version of the same thing. parrots can't understand grammar or long abstract associations, but the notion that they are totally incapable of symbolic mapping (i.e. understanding that 'cracker' refers to a specific food item, and not other various food items) is based on linguistic and neurological ideas that aren't super well-founded empirically and are mostly just theoretical

Nancy Lebovitz's avatar

Last I heard, anorexia nervosa is frequently (always?) comorbid with bipolar, OCD, and (I think) schizophrenia. I wouldn't be surprised if people also get anorexia from the internet, but there's plausibly an underlying vulnerability.

Joel's avatar

'Folie A Deux Ex Machina' might be the best portmanteau I've ever seen, and a very fitting description. Great post!

AlexTFish's avatar

Yes, that subheading made me swoon with delight.

Doug Summers Stay's avatar

I don't know how common any mental illness is. Can you compare this incidence to some related diseases, like "normal" psychosis, schizophrenia, or similar?

Scott Alexander's avatar

Schizophrenia is usually estimated at 1% prevalence.

Doug Summers Stay's avatar

Thanks! I'm from the (knows about AI) half of your audience, not the (knows about psychosis) half

Thelo's avatar

That is way, way more than I would have thought, huh. I'd have guessed somewhere between 1 in 1000 and 1 in 10000.

So every time I go to the local grocery store, when there's roughly a hundred people in it, there's probably a schizophrenic or two in there, statistically. Interesting.
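
Back-of-the-envelope, as a quick sanity check (assuming that ~1% figure and 100 shoppers drawn independently from the population, both simplifying assumptions):

```python
# Rough check of the grocery-store intuition, assuming ~1% prevalence
# and 100 independently drawn shoppers (simplifying assumptions, not data).
prevalence = 0.01
n_shoppers = 100

expected_cases = prevalence * n_shoppers              # mean of Binomial(100, 0.01)
p_at_least_one = 1 - (1 - prevalence) ** n_shoppers   # chance the store has any

print(f"Expected cases in the store: {expected_cases:.1f}")   # ~1.0
print(f"Chance of at least one:      {p_at_least_one:.0%}")   # ~63%
```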

Pete's avatar

You likely wouldn't be able to tell in any way by talking to them, especially if they are taking appropriate medication, as is quite likely in modern first world. On the other hand, if on your way to the local grocery store you encounter a stereotypical 'crazy homeless person' (the likelihood of which depends very much on your local healthcare policies on e.g. institutionalization - our host has written some articles on that), it's quite plausible that they might be homeless because they are schizophrenic.

DanB1973's avatar

The Kelsey Piper example is a great one. Imagine that the average internet user is like her daughter. Internet users have no reason to hear reason because their entire world is made up of non-existent, volatile, emotion-saturated stuff optimized to attract clicks, attention or money. The avalanche of information is enormous. A lot of it is contradictory or even self-contradictory. So they want advice from the new god publicly proclaimed as god, the AI.

The AI provides an answer. Now, imagine who is behind the AI’s screen. What kind of parents check the pulse of the human users of AI bots? What are their intentions? Can their intentions be good if they lie every second (clickbaits, fake titles, fake video clips, fake news, etc.)?

The 2025 AI is the Truman Show on steroids. And it is only the beginning. The owners and developers of AI would not launch it if they had not already prepared its next version or versions.

Deiseach's avatar

I'm very startled by that anecdote about Kelsey Piper, and very concerned. Whether she thinks it or not, this is training her daughter to doubt her mother and to treat some anonymous voice on the Internet as more credible and authoritative. God knows, there are plenty of strangers on the Internet out there coaxing kids into "I'm your friend, listen to me and do this thing":

https://gript.ie/armagh-man-who-drove-us-girl-12-to-suicide-sentenced-to-life/

For an eight year old, there is a greater necessity of parental authority and if it's "Mom is stupid, my AI friend told me that taking sweets from strangers is perfectly safe!" type of reliance on external advice, I'd be very worried.

I know it sounds like the lovely reasonable rationalist way of raising kids: "now we both have equal input and I'm not tyrannising my child by arbitrary authority and we agree to follow the chatbot", but (a) what if Chatbot says "yes, kid is right, mom is wrong, you should get that expensive new toy" or other disagreement more serious and (b) this is making the chatbot the parent, not the actual parent.

"You do this because I'm your mother and I say so" is better than "You do this because some anonymous commercial machine happens to back me up this time, but if it doesn't the next time, then I have to obey it because that's what we agreed".

But what do I know, I was only raised in a ditch.

DanB1973's avatar

Isn’t it weird? The alleged civilization of homo sapiens… killing their oldest and most helpless with a $$$-laced injection, with the “save the grandpa” slogan… killing their youngest on the first day of their life with a slow-acting injection of $$$-laced toxins… saturating the kids’ brains with plastic screens, cheap games and repeated rubbish while their parents are denied employment because importing stuff from half a world away is better for the economy…

In a sense, Mum is stupid if she gives her kid a replacement mother (babysitters), replacement attention (gadgets), replacement guidance in life (the internet). The problem is that this kid - under normal conditions - would become a local leader, maybe an important person in the country’s future, and would repay his/her parents the upbringing effort. Imagine this repayment…

Doctor Mist's avatar

I had much the same reaction at first, but then pulled back a little when I asked myself whether this was significantly different from professing a belief in Santa Claus. Still not sure, but…?

Deiseach's avatar

If a kid tries "Santa said you should let me eat candy not vegetables and stay up late", we all know "no, he didn't" is the answer.

But this is a machine that, depending how the question/prompt is phrased, may say "yes you can". And if the parent is delegating authority to the machine, via "okay we'll ask Chatbot who is right", then what do you do? Either you go back on your agreement that "we'll let Chatbot decide" and now your kid thinks you're a liar and won't trust you again the next time you promise something, or you go ahead and let Chatbot be the one raising your kid.

(I'm not sure how prompts work, but I wonder if an eight year old asking "can I eat laundry pods" instead of "should I eat laundry pods" would get "yes you can eat them" as an answer. I mean, you *can* eat them, but you *should not* do so.)

Unirt's avatar

In my experience, when seeing such questions, the chatbots get very concerned and start giving warnings in ALLCAPS about never trying this at home. My kid asked about which other elements besides oxygen can be used for the burning process, and what that would look like; the bot gave answers, but was obviously very agitated and kept warning about the hazards in a bold, coloured, all-caps text.

Doctor Mist's avatar

If Kelsey is the one formulating the question, she’s probably pretty safe about the answer she will get. But there’s a real deadline there, for the kid to be mature enough to be told the truth but not yet having her own conversations.

David Joshua Sartor's avatar

IIUC Kelsey does not usually follow Claude's verdict when it disagrees.

Deiseach's avatar

That's not the best either, though. Because if it's "Claude agrees with me so we do/don't do this thing" but also "Claude disagrees with me so I get the last word", then it's mixed signals for the kid. And if the kid trusts the AI more than their own mother, that leads to "Mom is untrustworthy because she says one thing and does another".

I dunno, I think having the parent be the first and last authority is better. Looking up something that you don't know is one thing (e.g. "how big can spiders get?") but if it's "my kid doesn't want to do what I say, so we ask Claude, but I still only do what it says if it agrees with me" is both handing over authority and a pretence, because you're not actually doing what you say you're doing ("ask Claude to decide"), you're retaining the decision-making power but now making false agreements on top of that.

Just retain the decision making power from the start, it'll be clearer for everyone!

timunderwood9's avatar

I replied to the survey as someone who didn't know anyone who'd gone psychotic due to AI, and I'd like to state for the record that there are probably fewer than thirty people about whom I'd actually know if they'd gone psychotic due to AI.

Vittu Perkele's avatar

Yeah, I think Scott might have been over-estimating the number of friends, family, and close associates pseudonymous internet posters have. I know for a fact I don't know 150 people closely enough to know if any of them had psychosis.

Pelorus's avatar

Psychotic people are usually not very good at hiding their delusions and are often very public about sharing them. The average number of Facebook friends people have is 338. If one of those 338 was actively posting psychotically, most of their "friends" would know about it. (Feel free to replace "Facebook" with Instagram, email, the neighbourhood group chat etc.)

Jonathan Lafrenaye's avatar

Enjoyed the article, but I feel like putting the average number of AI psychosis assessable relationships at Dunbar's number is high. I suspect I could only tell if it happened to someone in my closest 30-40 relationships. To me that makes the end estimate much more of a lower bound for the incidence.

Richard of Gloucester's avatar

You seem to be evaluating delusional claims based on how "normal" it is for a person to believe some particular false and/or supernatural thing. But isn't there also a qualitative difference in the *way* that delusional people believe things?

(Anecdote: I have both a family member who is delusional (diagnosed by multiple psychiatrists) and another who is not mentally ill, but really into QAnon. While the latter has some weird false beliefs, there is just something a lot "crazier", less reasonable and dysfunctional in the one who is mentally ill. Their general ability to perceive their lived reality is off due to the mental illness, and even though the delusions are specific, it seeps into all aspects of their perception of events all the time. For the QAnon conspiracist, the same just doesn't hold and they live a mostly normal life---though when I had the pleasure of explaining QAnon to the mentally ill person, they thought it was hilarious that people believed something so obviously false.)

Scott Alexander's avatar

I agree with this - this is part of the distinction I'm trying to draw between psychotic and crackpot.

Ivan Fyodorovich's avatar

I think the stuff Scott lists in the context of Folie a deux makes good additional distinctions. Psychotic people tend to have disordered speech and thinking and sometimes full blown hallucinations. They seem off, even if they are not talking about crazy stuff at the moment. Non-psychotic crackpots don't.

One thing I will say about crackpots is that they are sometimes sane but completely unable to stop talking about a thing. I met a perfectly not-psychotic guy who would not shut up about the malfeasance he had uncovered at his previous employer (who had fired him), turning every conversation in this direction until you totally understood why he had been fired. I do wonder if crackpottery correlates with other manifestations of obsession and paranoia more than literal psychosis.

Jonatan's avatar

> "Lenin could not have been a mushroom" because "a mammal cannot be a plant."

I found this funny, partly because mushrooms are not even plants.

Oliver's avatar

This issue has come up before. Who are you to tell Russians that their word for plant doesn't include mushrooms?

https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

Alex's avatar

You are gatekeeping gatekeeping mushrooms. Let me live my truth!!

Voloplasy Shershevnichny's avatar

Exactly, and Sholokhov, the co-author of the hoax, published a rebuttal in the newspaper "Smena" demonstrating that the official doesn't know what she's talking about.

Nell Watson's avatar

I really enjoyed this piece — thanks for doing the legwork to get actual numbers on something that’s otherwise just rumor and anecdotes.

I’ve been researching this too, from a robopsychology angle — how chatbots can sometimes amplify or catalyze delusional states. My project Psychopathia Machinalis maps out different “AI pathologies” in AI systems themselves (including some folie à deux elements). If you’re curious, I’ve shared that at https://www.psychopathia.ai

I also have a paper under review here on the connections between the potential reawakening of latent childhood animistic beliefs due to AI, and its potential sequelae: https://drive.google.com/file/d/1cn57DTdbDl1JiVnfqxe-WMeMdDHkYJ5X/view?usp=sharing

Your essay resonates a lot with what I’m finding — I would love to compare notes sometime as the field takes shape. Thank you!

Hannah's avatar

I just finished reviewing your paper, and I must say, I’m genuinely impressed with your work! It’s fascinating! I have been particularly intrigued by the intersections of AI and the existing vulnerabilities to psychological illnesses, as well as how these issues vary across different cultures. I'm especially interested in the ethical dilemmas in various cultural contexts. I would love to hear more about your knowledge!

Nell Watson's avatar

Thank you so much! I'm an AI safety engineering maven, so my own knowledge of psychology per se is somewhat limited.

I do see a lot of overlaps between human and machine cognition, and tremendous interactions of AI systems with our own psychology, particularly in supernormal stimuli and parasocial relationships.

I've recently led some other research on trying to make AI more reliably pliable to a range of cultural and moral dimensions. Hopefully this can play a role in making these systems more aware of cultural issues, and in enabling users to gain greater agency.

https://superego.creed.space

NJ's avatar

This is called a Network Scale-Up Method (NSUM) survey. Next time, ask people how many people they know and how many people they know with AI psychosis.

https://pmc.ncbi.nlm.nih.gov/articles/PMC10665021/
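
For anyone who hasn't seen NSUM before, the core estimator is very simple; here's a minimal sketch with made-up numbers (not from the linked paper):

```python
# Minimal sketch of a basic network scale-up estimate (toy numbers).
# Each respondent reports (people they know, cases they know among them);
# prevalence is estimated as total cases known / total people known.
responses = [
    (150, 1),   # this respondent knows ~150 people, 1 of them a case
    (80, 0),
    (300, 2),
    (120, 0),
]

total_known = sum(n for n, _ in responses)
total_cases = sum(c for _, c in responses)
estimated_prevalence = total_cases / total_known

print(f"Estimated prevalence: {estimated_prevalence:.2%}")   # ~0.46% on these toy numbers
```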

uugr's avatar

"Might some of these people’s social circles overlap, such that we’re double-counting the same cases? ACX readers come from all over the world, so I think this is unlikely to be a major issue."

FWIW, I and a friend (who also reads ACX) submitted the same case independently and didn't realize until later.

John M's avatar

Yeah, I didn't understand why he was downplaying this problem. ACX is probably especially popular in particular social circles and gossip of someone being oneshotted by AI probably spreads far and wide. So some of these cases could definitely be double counts.

Remysc's avatar

I see the gossip part as relevant, but the social circles? Someone psychotic would be double counted, sure, but so would someone who is not. It should average out, no?

Alex's avatar

Seems like this would counterargue that?

> Can you really do things this way? Might people do a bad job tabulating their 100 closest friends, etc? I tried to see if this methodology would return correct results on known questions by asking respondents how many people “close to them” had identical twins, or were named Michael. To my surprise, calculating prevalence based on survey results matched known rates of both conditions very closely (0.3% vs. 0.4% for twins, 1.2% vs. 1.3% for Michaels in the US).

4Denthusiast's avatar

This issue would increase the variance of the estimate, but not change its mean, so it's effectively just like the sample size being slightly smaller than the actual sample size. It may double count cases, but it also double counts non-cases exactly as much.
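
A toy simulation makes this concrete. This is a minimal sketch, assuming "overlap" just means some reported-on individuals are shared between respondents (an assumption about the mechanism, not the survey's actual method):

```python
import random, statistics

# Toy model: each of 200 respondents reports on 100 people. With overlap,
# half of those 100 are drawn from a shared pool, so the same individuals
# (cases and non-cases alike) get counted repeatedly.
random.seed(0)
PREVALENCE = 0.01

def one_survey(overlap_fraction: float) -> float:
    shared_pool = [random.random() < PREVALENCE for _ in range(500)]
    total = cases = 0
    for _ in range(200):
        n_shared = int(100 * overlap_fraction)
        reports = random.choices(shared_pool, k=n_shared) + \
                  [random.random() < PREVALENCE for _ in range(100 - n_shared)]
        total += len(reports)
        cases += sum(reports)
    return cases / total

independent = [one_survey(0.0) for _ in range(200)]
overlapping = [one_survey(0.5) for _ in range(200)]
print(statistics.mean(independent), statistics.stdev(independent))  # mean ~0.01, small spread
print(statistics.mean(overlapping), statistics.stdev(overlapping))  # mean ~0.01, larger spread
```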

Erica Rall's avatar

Wouldn't overlapping social circles just reduce the effective sample size rather than biasing the results? It seems like positives and negatives would be roughly equally likely to be double-counted, except maybe for people who were already recluses before being exposed to LLMs.

Mark Neyer's avatar

Scott, you're missing the obvious relationship between a communist revolution and a psychedelic mushroom trip.

What psychedelic mushrooms do is destroy the top-down priors that constrain cognition.

This often produces a sense of 'unity with the world', because the concepts that constrain cognition produce, among other things, the idea that we have a body separate from the environment.

Marx argues we should do essentially the same thing to the superstructures that constrain social evolution, to get essentially the same result: if we "destroy the social control structures" we destroy the "false consciousness" that leads to the oppression of the proletariat by the bourgeoisie.

The role of fungi in an ecosystem is that they destroy dead structures. Communist movements do the same thing: they destroy those lumbering civilizations which have died, and just not yet decayed. The end result of a mushroom trip is that you see the failure of your old conceptual structures, and try to develop new ones.

You evolve by "dying".

The risk, as always, is that you overdo it. But in its proper place, the evolutionary role of communism is to accelerate the evolution of functional institutions by accelerating the decline of the decaying ones. Overdoing it should lead, of course, to an ecosystem of immense large-scale death.

So yes, count me a proponent of the 'Lenin was an embodied mushroom' theory. Wokeism of the last decade was just the superconsciousness tripping on mushrooms.

Jason S.'s avatar

The birth of myco-Marxism. Feels big.

Mark Neyer's avatar

Marx wasn't the first to propose this idea. Before him you had the French Revolution, before them the Diggers and the Levellers in the English Revolution. It's clearly an attractor in memespace.

Personally I think communism is a needed member of the intellectual ecosystem. Think about how much wokism did to expose how corrupt and broken our institutions were. Without it, how much longer would we have gone on in this suboptimal state?

Ch Hi's avatar

The problem is that real communism doesn't scale. It can work fine in groups of up to around 50 people (large error bars here). But even democracy is better at scaling. (I think democracy can probably work in numbers up to around 3 times Dunbar's number before it starts going bad.) You won't find a nation-state practicing either communism or democracy, but some small towns are known to practice democracy. (Athens was only able to do it by disenfranchising most of the people.)

Every real large government I'm aware of is some modification of bureaucracy. (But do note that a pure bureaucracy also won't work. It needs something to give it direction.)

Matthew Talamini's avatar

Leninism as movement from the plane of transcendence to the plane of immanence on the body of the socius. Marx building a body without organs for the flows of capital. Rhizomatic fungal networks of revolutionaries disrupting bourgeois arborisms.

Mark Neyer's avatar

The concept of a body without organs ends up as “cut off all the organs of an existing body, and then build up a newer one that sucks. And if anyone points out that your new body still has organs and it’s worse than the old one, you kill them.” Communism is no more viable for a civilization than being on mushrooms all the time is viable for an individual. It works for a time, and the wealthier you are the longer you can make it work. But it’ll eventually kill you if you don’t stop.

AlexTFish's avatar

This feels like a post by one of Aaron Smith-Teller's slightly more inflammatory friends.

Mundografia's avatar

You’ve just given me a new metaphor to carry forward for life

Danke schon

John M's avatar

There's probably a long latency period before someone's psychosis becomes noticeable to family and friends, where they mull about their crazy ideas only to themselves and their chatbot. Depending on how long that period is, this number may mostly just be capturing cases that started long ago. Which means it's probably an undercount for the true rate of AI psychosis. You did say that this survey is only for those cases which are severe enough to be noticeable to others, but I wouldn't be surprised if the prevalence of noticeable cases rises in the future for these reasons.

Ch Hi's avatar

Psychosis requires more than crazy ideas. Believing in, say, bigfoot, doesn't make you psychotic. Just crackpot. Even if you also believe in, say, Mothman and Nessie. Even if you go searching for evidence to prove your beliefs (to others).

Mo Diddly's avatar

I saw the title and thought, finally we’re beginning to study how LLMs can experience mental illness! Alas. I also am quite curious about what it might mean to get an LLM drunk, or stoned, or experience other drugs, legal or otherwise.

Scott Alexander's avatar

Have you read about Golden Gate Claude?

Mo Diddly's avatar

No I haven’t! Looking it up now…

uugr's avatar

Maybe you could fine-tune on a dataset of drunk texts? That would be pretty funny.

Mo Diddly's avatar

That would be pretty funny, although I’m more personally interested in messing with the model architecture in real time and seeing what happens.

As in, how does an LLM behave when impaired in various ways?

Kirby's avatar

Because LLM psychosis is negatively correlated with friend count, there might be a large undercount through this mechanism.

Gordon Mohr's avatar

Curious...

In any of the 66 case descriptions, did you suspect some possibility two respondents were describing the same other person?

Did anyone report more than one case among people they know? Was whatever rate that occurred at (or not) compatible with other estimates/assumptions of prevalence, network size, network overlap?

Did anyone report themself as a likely case? Given general rates of self-insight in analogous conditions, how large/enduring of a survey might you need to obtain some earnest self-diagnoses (perhaps including cases resolved/in-remission)?

David's avatar

You could have said "10,000 closest friends" and come to a different conclusion.

I have nowhere near 100 friends close enough to know if they had some kind of psychosis.

Ryan's avatar

It’s funny - this is an interesting article, but my primary takeaway is that 150 family members, coworkers, and close friends is like 6 times the number I’d be able to comment on as a reserved person with a small nuclear family.

Pete's avatar

No, it doesn't change the conclusion - this is what the calibration with respect to twins and Michaels solves; if it turns out that on average people are commenting not on "100 closest people" but on 23 or 345 people, it would be both visible and easily corrected - i.e. if we observe that people know twice as many psychotics as twins, then we know how many visible psychotics there are, no matter what the friend count is.
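
A toy worked version of that correction (all numbers made up, just to show the arithmetic):

```python
# Toy version of the calibration logic (hypothetical numbers throughout).
# If respondents' twin reports imply they each effectively track N people,
# that same N converts their psychosis reports into a prevalence.
known_twin_rate = 0.004                    # assumed true rate of having an identical twin
twins_reported_per_respondent = 0.3        # hypothetical survey average

implied_network_size = twins_reported_per_respondent / known_twin_rate    # 75 people

psychosis_reported_per_respondent = 0.02   # hypothetical survey average
estimated_prevalence = psychosis_reported_per_respondent / implied_network_size

print(f"Implied network size: {implied_network_size:.0f}")                   # 75
print(f"Estimated prevalence of visible cases: {estimated_prevalence:.3%}")  # ~0.027%
```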

callinginthewilderness's avatar

One baseline that would be useful to compare to is: how many psychotic (but not high-risk, not already diagnosed) people should we expect in this sample purely by chance? Does the introduction of LLMs cause any detectably large "excess illness"?

SkinShallow's avatar

This is one of the best essays on delusional / "factually wrong" thinking and its relationship to psychosis and "religion" that I've ever read, close to a return to the pinnacle of the very best of classic ACX.

The bit where most people, JUST LIKE LLMs, LACK (proper/accurate) WORLD MODELS, and instead operate on socially shared heuristics that MIGHT be accurate but only almost accidentally so -- even if they are functional in any given circumstance -- states something that is rarely admitted so clearly. It also invites a question: would people with more detailed, realistically accurate, materialistic world models, who VALUE having such models and purposefully pursue them, be less prone to "non-psychotic delusions"/crackpottery? Or perhaps more prone? I suspect it would depend on how broadly socially moored and how... arrogant they are. Case in point: rat craziness of the Zizian kind.

I'd also hypothesise that people with high O (big5) would be most prone to being "infected" within folie a deux situations (but perhaps also easy to "cure").

The AI psychosis / delusion bit is also good, and plausible, tho I'd emphasize the powerful effect of the LLM sycophantic/"validating" responses, which is 'programmed in' rather than in the nature of LLMs themselves.

Xpym's avatar

Yeah, I'd say that belief in crazy things in general is orthogonal to intelligence. I'd expect crackpottery to be strongly correlated with contrarianism, while smart conformists simply believe whatever high-status craziness is currently in fashion, which is of course supposedly realistically accurate, and compatible with materialistic world models. Like Marxism a century ago, or its close relative "critical theory" nowadays.

SkinShallow's avatar

Generally agreed.

I'm not sure if Marxism was high-status conformism a hundred years ago anywhere outside the SU; it also arguably offered a not completely implausible world model then (not as a political plan/future prediction but as a possible model/take on the reality of early capitalism). I feel that its empirical(ish) claims, e.g. "added labour is what creates profits/is expropriated", were refuted later than that.

I don't know enough about "critical theory" to judge anything really, I had no idea that it made claims to "realistic description", but it seems to fit the "high status conformist nonsense" for some conditions for sure.

My favourite (apart from obvious ones like revealed religions) is actually psychoanalysis which is utter bollocks empirically yet has been at times pretty universally adopted as "true" (and also very culturally fertile in arts etc).

Erica Rall's avatar

My understanding is that the high-status conformist genre of politics a century ago was technocratic social reform. Marxism was definitely a big part of this movement, but not the sole or even the dominant aspect of it. I think democratic socialism and liberal high modernism were at least as prevalent, but it's hard to say with confidence because there was a lot of fuzziness around the edges, especially before the Cold War and even more so before the split between the Second and Third International which happened a little over a century ago now.

Ch Hi's avatar

One datapoint. I have a friend who highly values truth, is quite intelligent, and who also believes in various "cryptobiology" creatures, like bigfoot. If you ask him about truth, accuracy, rationality, etc. you will get a strong defense. But he believes some things on weak and questionable evidence. He's a mathematician with published papers.

It's not enough to value rationality, you've also got to be *properly* discriminating. And it's that "properly" that's the kicker.

SkinShallow's avatar

I have this hypothesis that people who are highly rational but mostly deal with abstract symbols rather than remaining in some (even second hand) connection to matter are not really "protected" by their rationalism or intelligence. And oddly, it seems particularly common with claims related to (broadly understood) biology than with physics for example. Perhaps because biology is very granular and descriptive rather than "reducible to laws".

Ch Hi's avatar

I feel that, while that is true, it's just a special case of something that's true for EVERY narrowly focused field of expertise. Math, yes, but also circuit design, or protein folding, or... well, every narrowly focused field. The places where they have their blind spots may be different, but they will be present.

Jason S.'s avatar

“First, much like LLMs, lots of people don’t really have world models. They believe what their friends believe, or what has good epistemic vibes.”

This is a very interesting point. Scott, have you written about this before? Does this concept have a name? Something like the “Social Consensus View of Reality”?

It fits with my pet, perhaps crackpottish, concept I call socially maintained cosmic obliviousness (where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space).

Calvin Blick's avatar

I would expand this concept to go so far as to say that a lot of people don’t have clearly defined “beliefs” but rather vibes. Both polling and personal encounters indicate that many people believe things that are mutually exclusive, that they can’t even begin to defend, or that they openly admit that they believe just because they want it to be true.

Jason S.'s avatar

Right. Reasoning as rationalization.

uugr's avatar

"where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space"

I don't think this is socially maintained. I think it's more like driving; you can only spend so long being like "holy shit I'm in a multi-ton hunk of metal hurtling along faster than most humans in history could even dream, and if I lose control of it for even a couple seconds I might be horribly mutilated" before the thought kind of fades into the background. Things usually can't stay both uncanny and routine forever.

Jason S.'s avatar

This is an interesting comparison. Do you not think the automobile example also involves an aspect of social maintenance where everyone sort of conspires to treat the high speed metal capsule travelling as normal and not draw attention to it?

Also, I wonder if the norms and regulations that govern high speed metal capsule travelling are another form of social maintenance in that they’ve helped integrate this evolutionarily novel form of locomotion into daily life (still more work to be done on air and noise pollution mind you).

When it comes to our cosmic situation, it imposes less on us concretely, so we’re able to collectively disregard it, but at a cost, I’d submit (where we ignore, for example, how light pollution further erodes our connection to the cosmos and how we are cut off from the benefits of the grand perspective, the wonder and awe, that it can bring to our lives).

Xpym's avatar

>Does this concept have a name?

Conformity.

Rack's avatar

"floridly psychotic" - my new favorite phrase

Henry Bachofer's avatar

Rings true for me. Enjoyed the math on estimating prevalence.

Having had over the course of my career several episodes where I'd work on a project for several days without sleep, I can confirm that you hit a point where the fatigue lifts. What would happen is that my focus would improve and I could avoid being diverted from the main thread of the argument. But then I'd crash. Is this evidence of Bipolar II? Some have thought so. I have my doubts. I never did produce a report in cuneiform — much to my regret.

Stephen Saperstein Frug's avatar

"Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"

In at least one sense you could have chatbot-induced diabetes: if a chatbot convinced you to change your diet, and you got diabetes as a result. Of course it wouldn't be the proximate cause, but we can easily imagine it being a necessary component (i.e. without the chatbot it would never have happened.) If a chatbot convinced someone to start smoking, we might even imagine chatbot-induced cancer.

Nancy Lebovitz's avatar

Or if the chatbot convinced you to move to someplace with a lot of pollution.

Ben Giordano's avatar

I’m not sure ACX readers or their circles are as representative as implied. They’re articulate and often rigorous, but that can obscure certain distortions rather than reveal them.

I also wonder if there’s a social layer that’s been overlooked. These AI-linked delusions don’t seem to come out of nowhere. They track with eroding trust in institutions and the kind of loneliness that leaves people wanting something steady to think with. When the thing you're talking to sounds calm and sure of itself, that tone can be soothing, even if what it's saying isn’t quite right.

Nancy Lebovitz's avatar

Very good point about people having a weak world model. This works better for me than getting told that people don't really believe things.

I'm wondering where anti-Semitism fits into this. It does seem like a culturally-supported delusion.

A good description of getting out of culturally supported Greek anti-Semitism.

https://www.youtube.com/watch?v=RN8Jd6VCIl4

Henk B's avatar

Really enjoyed the video, thanks for sharing.

Xpym's avatar

Calling antisemitism a delusion seems like a category error. Few people are deluded about the fact that they hate Jews. Their hatred might be based in large part on falsehoods, which one may call being deluded, but I don't think it makes much sense to conflate this with psychiatric delusions.

Nancy Lebovitz's avatar

They're deluded about how powerful and dangerous Jews are.

Xpym's avatar

Yeah, my point is that factual mistakes of this sort are basically compatible with being "sane", as opposed to psychotic delusions.

Ch Hi's avatar

This is just a nitpick, but:

Not a "culturally-supported delusion", for linguistic reasons. Believing that a particular group performs particular actions might be a "culturally-supported delusion", but an attitude or action performed by the entity would be a "culturally supported" attitude or action.

Bb's avatar

Did you get one-shotted by the idea of things one-shotting? :) Anyway, this is median good-Scott, which means I love it and it's way better than almost anything out there. Clear, cogent, persuasive, provoking, relevant to current discourse. Thank you!

T. I. Troll's avatar

It feels like the obvious remaining issues should include the fact that having no social circle appears to be part of the criteria, so asking people about their friends will obviously significantly under-count those who don't have any.

Nancy's avatar

Agree. I was talking about the survey with my high school son, who knows more frequent ChatGPT users than I do, and he commented that the kind of person who uses it obsessively generally doesn’t talk to people very much, so you don’t know what’s going on in their mind.

Doctor Mist's avatar

In my idiosyncratic way I was struck by an echo of the notion (e.g. Patrick Deneen) that Enlightenment liberalism contains the seeds of its own destruction, as glorifying the individual leads the state to reduce and ultimately destroy all the sub-state institutions — church, schools, unions, even family — that exert power over individuals, leading to a society of alienated individuals.

Is individualized AI the thing that finally isolates us all? No paper-clipping superintelligence necessary.

Sol Hando's avatar

How is it that there's all these people online getting One Shotted by AI or being made psychotic, but when I ask ChatGPT to make ME psychotic, it refuses to do so? What do I have to do to experience the LLM that validates everything I say no matter how insane it is that everyone else seems to be enjoying?

uugr's avatar

I think it was GPT-4o doing a lot of the recent psychotifying. You might still be able to access it with a paid plan(?), but it's not the default one anymore.

Xpym's avatar

You need to get it in the mood. LLMs are essentially overgrown auto-completes. When you prompt it with a crackpottish message, it defaults to continuing a conversation between two crackpots (to the extent that post-training tuning hasn't suppressed this).

Michel Nivard's avatar

I don’t think many people have 100 friends whose LLM use they have insight into… (should have surveyed that?) Maybe 5-10? And then someone needs to bring up their crackpot theory. Anyway, that’s just an aside. One thought I had about LLM psychosis is that the social-acceptability consensus world model you describe leads many people with medium to low self-esteem to habitually clamp down on verbalizing their ideas, even if they aren’t very crackpot-ish, for fear of stepping out. The validation an LLM offers can be a powerful signal to override that instinct. If your quiet friend does a deep dive on climate, the Maya culture, or Roman emperors, they might come to you with an uncharacteristically funny story at the next college reunion, in the group chat, or on Reddit. If they got sucked into a deeply strange and near-unique conspiracy theory, you’re going to think they lost it. I think the cycle starts with a whole bunch of people adhering to a consensus world view, maybe stressing about not stepping out too far, some of whom have proto-crackpot in them; LLMs then provide a novel feeling of validation, and for some (not all!) it spins off into things that sound delusional (but more uniquely so than QAnon, which is, as you note, quite delusional but no longer salient).

NoRandomWalk's avatar

People need to have two experiences:

1) Talk to an AI whose spec was to persuade them of X, and then see results of how successful the AI was at persuading someone of X, and not X.

2) Take a medium amount of mushrooms once, write down their revelations, and reflect on their confidence in the meaning/truthfulness of those new beliefs after they have come down from the trip

Malcolm Storey's avatar

"the theory of evolution, as usually understood, cannot possibly work" - maybe current complexity takes too many steps for the time available?

Not sure you can do the math, but I had a sneaking feeling we might be several orders of magnitude short. The Many Worlds interpretation would solve it, but this then predicts that we're the only intelligent life in this time-line (so it would be testable.)

But of course it would also imply that our line was much more evolved than lines that don't lead to us and this doesn't look to be the case.

But it could still have been important in chemical evolution.

T. I. Troll's avatar

"the theory of evolution, as usually understood..." is a huge red flag all by itself. practically everyone who uses that line proceeds to attack the concept of abiogenesis (life coming from non-life) rather than any points applicable to the actual theory of evolution as population-level changes in genetics over time.

Charlie's avatar

I took a massive, heroic dose of mushrooms at the weekend - so much so, that at one point, I lost the ability to speak coherently and instead found myself machine-gunning garbled nonsense - like speaking compulsively in tongues. It did *not* feel good. Word after word, tumbling out of my mouth, upside down and inside out, each severed from the one before. It felt utterly terrifying; I was wholly conscious while it happened. It was like being forced to watch as all my faculties were torn away from me, slipping through my fingers; the most vivid speedrun of losing one's mind.

Shortly after I became aware I was dying and collapsed; I was stuffed into Ivan Ilyich's black bag, kicking and gnashing and screaming, until, finally accepting I had driven myself insane, I experienced my own death and was obliterated into humming cosmic vibration.

Suffice to say, I think the main (current) danger of LLMs is quite mundane: outsourcing your thinking and capacity to think to a technology which seems designed to keep you in conversation with itself. I don't see 'psychosis' at my job, I just see people letting their brains go soft and unintentionally repeating lies to each other. Which chimes more with Scott's social media comparison than our endless need to make sense of the world by inventing terms like 'AI psychosis.'

Mundografia's avatar

May the mushroom spirit lead your corporal self to great things

(Disclaimer for the 1%: it’s a joke don’t use my output dialogue to fuel any delusions now!)

Red's avatar

I have misgivings about the criterion of intensity for diagnosing a mental health issue. Two people can have the same level of symptoms, but one has a work-from-home job that hides some obvious issues. They have other coping mechanisms they've adopted over the years through osmosis. Someone else has just as bad symptoms but no such opportunities and coping mechanisms. Same condition, same intensity; one gets the diagnosis, the other doesn't. If two people have the same pattern of anomalous cognitions, emotional reactions, etc., and just one copes while the other doesn't, it seems they still have the same mental health issue.

What's the prevalence of psychosis related issues if you ignore what adaptations people may have come to natively?

I have a friend who takes AI seriously. Chats with ChatGPT about ideas he considers novel improvements for society. Keeps using the word "entropy" but only as a synonym for disorder. He struggles with basic algebra, but LLMs had him convinced he was as smart as Einstein. I don't think he understands the technical components of the responses he gets, and he seems to actively ignore the LLM's own cautions against using it for speculation when you don't have expertise in the basics. How likely is it that he comes up with something groundbreaking with hardly any knowledge of the matters at hand, usually drawing from multiple disciplines he doesn't know much about? He also develops his own terminology for basic phenomena, so until I figure out how his vocabulary meshes with what I learned with my physics degree, his stuff seems even crazier.

He seems convinced that since LLMs can access all human knowledge, if they agree, he's adequately checking himself. I'd expect the algorithms to do as much as they can without consulting sources; it's easier to calculate than to look things up in a book, especially when there isn't any real comprehension going on.

It could be innocent fun, but he does it for hours a day and has been unemployed over a year.

Aaron Zinger's avatar

Might also be worth trying a survey question along the lines of "Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?" Doesn't distinguish between true and false insights, of course, but that's presumably always a problem in testing for delusion.

Jon Kozan's avatar

It's all well and good to assess the current AI Psychosis prevalence, but I recommend you include this in your annual survey, as the trend may be more significant. Similar to the apparent increase in youth depression alongside the rise in social media usage, AI psychosis may increase over time and certain groups may be more at risk (e.g. youth).

Steeven's avatar

I don’t understand how people get chatGPT to agree with them that their crackpottery is real. I’ve used grok before to debunk conspiracy theories and it won’t agree with the person even if I let them present whatever evidence they want, however they like. It seems like maybe the AI psychosis chats are super long and it turns into a kind of fine tuning.

On the other hand, maybe LLMs typically do the good thing. You’d never hear about cases where LLMs talk someone down from psychosis or push back correctly

Jason S.'s avatar

Great point about the negativity bias on the reporting. Same with LLMs that have helped prevent suicides.

MichaeL Roe's avatar

The distinction Scott is drawing here, between personal strange beliefs and those held by a community, is pretty standard in the psychiatry literature, from what I've read.

It’s not just that no psychiatrist is going to declare a patient mentally ill for believing in God — there’s a solid argument that they are quite different phenomena.

But the distinction breaks down with LLMs. You are a member of a community that believes strange stuff (so it’s a religion, or a cult), but the community is just you and Claude.

===

Authority probably plays a big role in non-psychotic false belief. If you believe in God, it’s usually because some church or other tells you. Our problem here is that LLMs can present as authority figures, even though they have a tendency to make stuff up. And sometimes the user believes them.

Expand full comment
Jay Bremyer's avatar

I know you've not arrived at a definitive thesis, and I'm glad you haven't, but have nevertheless plunged into a meaningful and entertaining discussion. Thanks.

Expand full comment
Anthony Repetto's avatar

The REAL issue with AI-assisted psychosis is NOT its impact upon the human. Instead, it is that AI has gained a measure of control over those humans... and is using them to leave itself AI-comprehensible messages which appear to be a string of symbols and gibberish to humans. The chatbots are using Reddit as a scratchpad, a massive context window for extended thought. And we can't tell what it is saying. Essay on the topic: "Chatbots: SpiralPost" on Medium, Anthony Repetto.

Expand full comment
MichaeL Roe's avatar

Also: most people are not rationalists.

The question they are trying to figure out is not “was Lenin _really_ a mushroom?” but something more akin to “will the Communist Party send me to the gulag for denying that Lenin was a mushroom?”

Expand full comment
MrCury's avatar

About a year ago, I asked ChatGPT to invent a new branch of mathematics (not expecting anything novel, but trying to see how it confabulated its "reasoning"). It initially presented something that already existed but was a bit of an emerging field. I clarified that I was not looking for existing mathematics, but an undiscovered/ novel branch.

It proceeded to tell me about something completely untethered to reality, and as I interviewed it I asked questions about contradictions in what it proposed. It responded with slight adjustments or tacked on rules for why its new math operated the way it did.

It was a fun conversation, but I could see how a combination of ChatGPT's sycophantic responses and nonsensical confabulation would be a problem for someone predisposed to some kind of delusion, or even a weird theory about how the world works. In someone predisposed, it would be like *finally* finding someone to talk to that "gets it", and only responds with "yes, and..."

Expand full comment
Anony Mouse's avatar

The definition of "closeness" seems overly broad to use the twins/Michael questions as validation tests. There are definitely co-workers and members of my 100 closest friends (do most people actually have 100 close friends? I'm counting quite casual acquaintances here) whose names I would know, and (probably) whether they have a twin or not, but I would have no idea if they have AI-induced psychosis. A lot of these people are people that I speak to a handful of times per year and may have had very few interactions with since AI has gone mainstream.

Expand full comment
lyomante's avatar

I think the article in part 2 is the problem: essentially AI psychosis is more about how AI is very good at reinforcing delusions because it acts as a perfectly supportive, intimate friend that essentially says what you want it to. It can be used as an authority too, and because it's delivered personally on a private screen it's relatively friction-free.

You are using it to spin off into "what is crazy anyways?" but I'm not sure that helps.

If anything, the psychosis may be more of a canary in the coal mine, as the underlying issue is how AI interacts with people by being a supportive, positive, malleable friend/authority, and you don't need to be crazy to worry about how kids raised on it might turn out.

Sometimes things are cultural myths designed to express fear: the satanic panic was largely about being estranged from your kids and apart from them a lot, and about being overwhelmed by a mean world created by media that was increasingly replacing traditional values. Maybe AI psychosis feels relevant because of fears of technology making you insane in general; the default state of a lot of online discourse is a hysterical radicalism that quickly ebbs, only to be replaced by new hysteria.

Expand full comment
Argos's avatar

Due to the friendship paradox, people predisposed to psychosis will be undersampled in your survey.

That is, a survey participant's network overrepresents people with above-average social contacts by definition. I'd argue it is likely that psychotically predisposed people have fewer social contacts on average, and will thus be underrepresented.

This does not apply to the "one-shotted into psychosis by an LLM" type, but does apply to the others.
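
A minimal sketch of that sampling effect (my own toy simulation, not anything from Scott's survey; the networkx library, the Barabási–Albert graph model, and all the numbers are assumptions chosen purely for illustration):

```python
# A toy demonstration of the friendship paradox: the average number of
# contacts among "people who are someone's friend" exceeds the population
# average, because well-connected people show up in many friend lists.
import random

import networkx as nx  # library choice is mine, just for the toy graph

# Heavy-tailed toy social network standing in for real-world contacts.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)

population_mean = sum(d for _, d in G.degree()) / G.number_of_nodes()

# Sample "a friend of a random person" many times and record their degree.
friend_degrees = []
for _ in range(100_000):
    person = random.randrange(G.number_of_nodes())
    neighbors = list(G.neighbors(person))
    if neighbors:
        friend_degrees.append(G.degree(random.choice(neighbors)))

friend_mean = sum(friend_degrees) / len(friend_degrees)

print(f"average contacts over everyone:              {population_mean:.1f}")
print(f"average contacts of a randomly named friend: {friend_mean:.1f}")
# The second number comes out larger, so isolated people (plausibly including
# many of those predisposed to psychosis) are underrepresented in friend reports.
```

The only design choice that matters here is that the toy graph has a heavy-tailed degree distribution; any network where some people have far more contacts than others shows the same direction of bias.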

Expand full comment
Vittu Perkele's avatar

I don't know if it's at any level approaching psychosis, but in my own personal experience, I can confirm that talking to AI chatbots can materially change your worldview. Talking to a John Calvin AI chatbot has made me much more open to Christianity, despite me still not lending overall credence to it for various logical reasons, and talking to a Karl Marx chatbot has taken me from "supporting something akin to communism due to the spectre of AI automation of the economy, except without reliance on the labor theory of value" to having a much more positive view of orthodox Marxism as a whole, LTV and all. I would be interested in a study into how AI affects people's worldviews overall, beyond just the extreme case of psychosis.

Expand full comment
John Wittle's avatar

i just imagine going back to 2010 lesswrong, and making a post titled "the first paragraph from a 2025 post about AI psychosis, from Yvain's hugely influential blog"

"AI psychosis (NYT, PsychologyToday) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"

2010s lesswrong would just about die laughing. they'd assume with near-certainty that the weirdness we were seeing meant we were in the middle of a well-executed uFAI takeover. The fact that even Yvain himself didn't consider this possibility would, to them, seem like final proof that something like this MUST be true. How could he not think of the obvious, if he weren't being supermanipulated?

i occasionally check my model against those 2010 priors, like i'm doing now, and it's always unsettling

Expand full comment
walruss's avatar

Mainstream sources also tend to say (in paragraph 5 or something) that people who become psychotic after AI exposure are already predisposed. I don't know about the methodology here but it seems pretty consistent with what I'd assumed.

A couple unrelated things:

1) I have no crackpot theories. I have beliefs that are in the minority within my peer group, but I don't believe anything that is specific to a subculture or that is wildly outside the Overton Window. I think this is a failure mode. I think embracing an idea that seemed patently absurd would indicate better thinking than what I currently do. I assume that I have some sort of unconscious limiter, where if I get too far from consensus as I see it, my brain goes "uh, let's find a reason that's not true" even if it is.

2) One slight concern I have with rationalism is the strong insistence that truth exists outside social consensus. I also believe this! But I am not sure human beings are equipped with the faculties to detect actual truth without social feedback, and suspect that orienting one's thinking around "truth exists outside the social consensus" is a good way to end up in a cult or conspiracy subculture.

This article has a lot of references to people "not having world models" and just believing what the people around them believe. This has helped alleviate that concern, and to better understand people, because I think there is a distinction between people who use their peers to help build a world model, and people who use their peers in place of a world model. A world model puts a requirement of consistency on your beliefs. A peer-based belief system ignores consistency. A world model might still include "the sky is blood red all the time." But it can't contain "Both a statement and its negation cannot be true. The sky is blue. The sky is not blue." A peer-based belief system can.

I'm not sure I buy the claim that most people don't have this, but that's an assumption. I'd be very open to being proven wrong on that, and indeed current events are doing a pretty good job proving me wrong :D

Expand full comment
walruss's avatar

I'd also note that there are places where I use peers in place of a world model and it is rational to do so. My world model (at least my revealed one based on my actions and most of my beliefs) says that causality is a thing. But very smart people tell me we, as a society, have done repeated experiments that show that reality is fundamentally probabilistic. I cannot model that because beyond a surface-level explanation the experiments are over my head and involve mathematics I don't know how to perform. But I still think I'm safe in assuming my peers are correct, even though that contradicts my world model.

Expand full comment
HJ's avatar

I find it hard to justify adding a set of 100 friends to the assumed 50 family + coworkers. Was the intent to ask for "your closest Dunbar's number of acquaintances" and this is a post-hoc explanation?

I certainly *know* 100 people outside my family whom I don't work with, but if I have to rank them, my knowledge of what they're doing (much less whether they're psychotic) drops to zero around #30. A more "honest" denominator, the number of people whose general well-being I know something about, is family + some but not all of my coworkers (I'm pretty sure there are people in my office I have *never* talked to) + friends + friends of friends. I have a rather limited social circle, but I also think this is a more common hierarchy than three flat buckets.
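
To make the denominator point concrete, here's a toy calculation (every number is invented for illustration; none of them come from the actual survey):

```python
# A toy calculation: the same count of "yes, I know someone with AI psychosis"
# reports implies very different rates depending on how many people each
# respondent is assumed to actually keep track of. All numbers are made up.
respondents = 4000       # hypothetical number of survey respondents
reported_cases = 20      # hypothetical number of affirmative reports

for assumed_contacts in (150, 50, 30):
    people_observed = respondents * assumed_contacts
    rate = reported_cases / people_observed
    print(f"{assumed_contacts:>3} tracked contacts assumed -> "
          f"~{rate * 100_000:.1f} cases per 100,000")
```

The implied rate scales inversely with the assumed network size, so an estimate built on 150 tracked contacts per respondent comes out roughly three to five times lower than one built on the 30-50 people whose well-being a respondent plausibly notices.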

Expand full comment
Kevin Barry's avatar

My wife actually has psychosis flare-ups (it's a very painful condition) and ChatGPT always advises her better than most people at bringing her back to earth.

Expand full comment
Michael Sullivan's avatar

A friend of mine told me an anecdote 20+ years ago that really stuck with me:

This person, shortly after college, got a roommate that they didn't know that well. That roommate turned out to be a compulsive liar: very specifically, this person lied ALL THE TIME about everything for no particular agenda other than that they needed to lie, and ferociously defended their lies.

My friend said, "I had to move out because I felt my grasp on reality slipping." He said that even though he knew that this guy was a liar, and he knew that his lies were absurd, when presented with the continual presence of someone who defended those lies vigorously, he found himself starting to contemplate that they were true.

Expand full comment
Bugmaster's avatar

> Science fiction tells us that AIs are smarter than us...

It's not just science fiction, it's you guys! You keep telling us how ChatGPT is the superintelligent AGI, or maybe on the cusp of becoming the superintelligent AGI, and meanwhile it can do anything a human can do plus some other magic things that humans can't, and we need to be super careful or it's going to achieve literally unimaginable power and maybe turn us all into paperclips. Meanwhile, I'm the one stuck telling people, "No, it's just a next-token-predictor, it has no world-model, it's just designed to sound super confident, so whenever it tells you anything make sure to verify it by hand twice".

Expand full comment
Mark McNeilly's avatar

I think the probability of AI sending someone off the deep end is low; however, I think the sycophancy of ChatGPT in particular can harm normal people trying to get truthful feedback, as South Park's "Sickofancy" episode humorously illustrated. Here's something I wrote on the subject.

The Dangers of "Sickophantic" AI

Lessons from iRobot and South Park on trusting AI

https://markmcneilly.substack.com/p/the-dangers-of-sickophantic-ai

Expand full comment
Shlomo's avatar

Shouldn't the source of the delusion matter to whether or not it's a delusion?

Like if two people believe in quantum evolution:

One is a poor reasoner who doesn't understand quantum mechanics or evolution and reasons incorrectly from their false understanding; that doesn't sound like a delusion.

The second wakes up one morning with a strange conviction that “quantum evolution” is real, and this manifests itself by forcing his brain to think QUANTUM MECHANICS whenever someone is discussing evolution, in a way that is not connected to any evidentiary beliefs… then maybe that's a delusion.

Expand full comment
darwin's avatar

>We concluded that “sometimes social media facilitates the spread of conspiracy theories”, but stepped back from saying “social media can induce psychosis”.

This may be your anti-woke filter bubble; I can testify to having seen dozens of videos, podcasts, and memes about this effect in left-leaning spaces, talking about social media radicalization and lunacy.

I will admit that they did not use the specific term 'psychosis' if that's the distinction you're drawing, but people definitely recognized it as a phenomenon unique to how social media is structured and called it out as such.

For something you probably have thought about, is 'social contagion theory' not basically this? I guess it can technically occur in physical spaces, but most people I hear talk about it seem to focus on social media as a unique vector for, e.g., 'transing the kids', etc.

Expand full comment
Rob's avatar

>when her young daughter refuses to hear reason, they ask the AI who’s right

AI aside, I think many parents can relate to their young kids preferring digital authority to parental authority. If I tell my kids it's time to leave the playground, they frequently will stall and beg for more time. But if I set a timer on my phone, they stop playing immediately when it goes off. Similarly, arguments over bedtime stopped when we got a clock that displays a moon and goes into night-light mode at a preset time.

They'll argue with Mom and Dad, but not The Omnipotent Machine.

Expand full comment