It might be interesting to see how people who believed true conspiracy theories before it was clear they were true differ from those who believed false ones. The 9/11 demolition one is obviously false. The JFK assassination one seems hard to be sure about. But if someone was telling you 40 years ago that the Catholic church had a lot of priests getting sexually involved with teenaged boys and then being moved to a different parish to quiet down the scandal....
>Apparently, Q claimed in 2017 that he had top secret military intelligence clearance and key individuals in government and Hollywood were engaged in a pedophilic conspiracy. Since then, Jeffrey Epstein was arrested and mysteriously died in prison for apparently running a child sex trafficking ring involving key individuals in government. P Diddy was indicted for having huge parties with Hollywood celebrities where people, including allegedly children, were sexually assaulted.
I once heard someone say, half-jokingly, that the insane conspiracy part of Q-Anon wasn't the stuff about the world being run by a cabal of paedophiles, but the stuff about there being a plan to get rid of them.
This begs a related question. As presented, this was largely cast as a digital normal/disorder situation, dividing psychotic crackpot from typical, and them from us. I assume that with a longer description, the underlying situation is probably better represented by an analog continuum, and each of us finds ourselves somewhere on it, and rarely at the extremes.
And although psychotic/crackpot and "non-disordered" may be a convenient way to describe an extreme on that spectrum, presumably less laden descriptors might include things like divergent/convergent, agreeable/disagreeable, open/closed, creative/responsible, whatever.
The presentation, as is, does a fascinating job at presenting an interpretation of a strange situation for a small minority.
But I think it tees up a vastly bigger and more important question regarding an interpretation for many/all of us. How will AI "fit" into our personalities?
The ratio of the "crackpot" bucket in Scott's results (n=16) to the "totally psychotic" bucket (n=6) is consistent with the idea that AI psychosis (narrowly defined) is at the far end of a long tail. It could still be a bimodal distribution, but both peaks would probably be further towards the normal side of the spectrum than the "crackpot" bucket.
I have to wonder if at least some of the queries were not "okay, on the face of it this sounds nuts, but who knows what the party line is this week? better to check and be sure, rather than be hauled off by the KGB because I denied that even the spirits of nature are working for the great Communist revolution".
I think if I were living in Soviet Russia, even in the 90s, I would be very careful to figure out "is this a test? am I supposed to believe it or deny it? if I don't query it, will I be accused of insulting Lenin? if I do query it, will I be accused of insulting Lenin?"
First, I wanted to shout "there is no Soviet Russia in the 90s". Good thing I did not.
1. "Soviet Russia" was not a real political entity in the 1990s because the Soviet Union, which included Russia, dissolved in December 1991. The 1990s in Russia, therefore, were the first decade after the Soviet Union's collapse, a period characterized by significant economic and social turmoil, often referred to as the "wild 90s," marked by poverty, hyperinflation, ... (AI slop)
2. "On 17 May 1991, the Fifth Channel of Leningrad television broadcast its popular program Piatoe koleso (The Fifth Wheel)—an episode that has since become one of the most notorious media events of the past two decades. The Fifth Channel acquired prestige during the period of perestroika reform, when it was broadcast nationally. Its programs concerned historical and cultural events in the Soviet past and present and were watched by an audience of several million viewers. Sergei Sholokhov, one of the hosts of The Fifth Wheel, had the reputation of being a young, dynamic, and pathbreaking journalist." (quote from the pdf linked)
No, you are over-imagining this. Even the worst Stalinist period typically did not run tests on people; they were hauled away because of their class or ethnic background, or for actively disagreeing. For everybody who was a Russian proletarian, just keeping quiet was enough. Post-Stalin this kept gradually opening up, with more and more disagreement possible, and even active dissenters were often punished with career setbacks rather than being hauled away. By the 1980s, anything except openly attacking the party, the leaders, or the foundational ideology was okay, and even then those who did were often just banned from publishing.
I am not defending them, I am just saying most dictatorships are not like a page from 1984; it was more "boring". I could say something similar about right-wing dictators like Franco - you could be a professor, get invited to a minister's party, tell the minister to his face that his policies suck, and all that happens is you are not allowed to publish it.
>I think if I were living in Soviet Russia, even in the 90s, I would be very careful to figure out
Nah. I was there (and saw the original broadcast). This was quite an obvious troll, and most people got the joke right away. But of course, some people didn't, which is what made the troll so successful. Its angle wasn't so much "The Party told you so, therefore it's true" as "New Shocking Revelations about the Soviet Past", History Channel-style.
This happened during the very late Soviet era, a few months before the fall of the USSR, when the Communist Party was already widely parodied and mocked. Of course they were still in power, but the reverence was long gone. Joking about Lenin was quite risky only a few years prior, but at that time you could get booklets with collections of jokes about communist leaders from street vendors in Moscow.
The quality of Soviet education is often over-estimated, pravda. Admittedly, a mammal cannot be a fungus either. Though, thinking of my athlete's foot .... ;)
Well, if you're going to talk about parasitism, consider sacculina. Admittedly what it takes over is a crab, not a mammal, but there's no particular reason to believe the same principle couldn't apply. And I suppose that as a metaphor you could call that a fungal spirit.
Some say RF Kennedy claimed he can see with the naked eye if something is wrong with yours. What else? Prokaryotes, such as bacteria and archaea, do not have mitochondria because they lack organelles altogether. Additionally, some eukaryotic organisms have also lost their mitochondria, including the protists Monocercomonoides and Henneguya salminicola (a type of animal parasite). In humans, mature red blood cells also lack mitochondria. But plants, animals, fungi: sure, they have them.
Same thing occurred to me, although that's a relatively recent reinterpretation. Putting fungi in their own kingdom rather than lumping them in with plants seems to have first been proposed c. 1969 and I think the idea of fungi sharing common ancestors with animals but not plants started getting traction c. 1990.
Yes, they used to be lumped in with plants, but the idea that fungi are not just different enough from plants to be their own kingdom, but actually more closely related to animals than to plants, is only a few decades old and was brand new in the early 1990s when the Lenin-is-a-fungus thing was going on.
This is from an interview with Sholokhov - one of the scriptwriters - where, in response to being asked if he is aware of anyone taking the episode seriously, he says the following:
"The day after we aired, Galina Barinova, who was head of ideology at the party, was visited by a delegation of senior bolsheviks with a demand that the question be answered: is it true that Lenin was a mushroom? 'No!', said Galina Barinova. 'Then what about the statements on last night's television...' 'All untrue', she replied, and then came the phrase that sent Kurekhin and myself into shock: 'Because a mammal cannot be a plant.' I then wrote an article for the 'Smena' newspaper countering her assertion: we have proven extensively that fungi are a separate kingdom from plants and animals."
So there is no possible confusion: the wording is deliberate. And also consider the source: if someone gave Russia's equivalent of Sacha Baron Cohen such a brilliant setup, Borat would absolutely jump on it and milk it for all it was worth, staying in character the whole time.
In a sense, every language. If someone tells you you need to eat less meat and more vegetables to get healthier, would you count mushroom curry? I would.
Not to shill myself too openly, but recently I did a post about Freud's concept of narcissism, which is very different from how people use the word today. The relevance is that for him narcissism was effectively biological feedback: it's when there seems to be some kind of data out in the world as something objective, but really it's the product of your own actions, so it's fake, it's part of you. Like with Narcissus himself: he's stuck staring into the pool, but he doesn't realise it's him.
Kind of like what you're describing here, it's a feedback loop, whereby your own ideas get repeated back to you as if they were true, thereby providing evidence and reinforcing them.
In fact, Freud didn't have the concept of a personality disorder at all, for him a "narcissistic disorder" literally was just psychosis. He thought it was a defensive response, basically someone suffering from a severe trauma retreats from their attachments to the external world leaving only their narcissistic investments behind, hence the sort of grandiose delusions that often occur.
Very similar to what you describe then, a weak "world model" is basically what Freud meant by weak object libido, just without the pseudo-biological underpinnings of Freud's libido theory.
"Narcissism", has a strange career. It evolved from the original meaning of "I am beautiful" to "I am perfect" to "I cannot do wrong" to "Therefore every time partner says I did wrong, it is untrue and they interpreted something badly" - so far the evolution is even logical
To then, which is not logical: "I will then just ruthlessly exploit people" ??????????
I think most people in this category treat it as a weird pastime / fetish, like watching porn or reading erotica, but don't believe anything false (they understand that the chatbot doesn't really have feelings for them, it's just good at simulating it).
Even if they did think the chatbot had feelings for them, I would want to distinguish this from psychosis. Believing that something that says "I love you" convincingly actually loves you is a much lesser category of error than the ones that usually get dubbed mental illness.
(also, I don't think we have enough philosophy to confidently separate out performing love from actually feeling it - nobody knows what's going on in chatbot innards)
Written erotica as a genre has always been more popular with young women than with any other demographic; this is just AI eating booktok/ao3's lunch. I don't think they are specifically worse than men using more generic sexbot character cards over LLMs just because their fetish is more for long-form written erotica.
I mean, given that the thesis of this post is essentially "Chatbots don't usually generate mental illness but can make it worse", I feel like the risk of "erotica except it pretends that it's real and speaks back to you" is relatively self-evident.
Hmm... What about the pastime of asking chatbots technical questions, with the frisson of the danger that some fraction of the plausible sounding and authoritatively phrased answers are quite wrong? :-)
I wonder about this. It seems like if you can keep the boundary between fact and fiction clear in your mind, even tons of tailored-to-your-kinks bespoke AI-produced porn (written, image, video, chat session, whatever) would not leave you confused. But it seems likely to do other bad things, like redirect you away from human connections and maybe do some kind of conditioning-type training of you to be ever-more focused on your kinks.
This doesn't match; I fully expect that what we're talking about here is "people forming parasocial/pararomantic (?) relations with the chatbot", not "people having the chatbot write smut for them". The latter happens too, for sure; I just don't think it would be referred to as "treating the AI as a romantic partner".
BTW that sub really says a lot about this strange age we are living in. They are using the word "boyfriend" differently than I would use it. They mean "empathic friend" or "therapist". But is that what a "boyfriend" is really about? Where is the romantic-sexual attraction?
I kept arguing with redpillers for a decade, and now I encounter this, the superbluepill: the belief that a boyfriend is just someone saying nice things. Is this what serious romantic movies like Titanic were about?
I saw at least one case where the poster claimed that her "boyfriend" had proposed marriage; certainly seems like they were using "boyfriend" pretty close to how it's normally meant.
Also there is a big upheaval on that sub, because to the question "do you like my clothes?" v4 would reply "they are awesome" and v5 would reply "would you like some improvement ideas?"
FWIW, my wife used to claim that the car knew the way to certain destinations. I could never decide whether she knew she was projecting her actions onto the car. When she thought carefully about it she could distinguish her actions from those of the car, but when she was just mentioning it in the context of another discussion, the car was given agency.
P.S.: The survey should have asked "How many of your friends and acquaintances have used ChatGPT (etc.)". I would have answered "none", which might have affected how you calculated your statistics. I haven't used it, myself, though I've considered it. The age of your friends and associates probably has a large effect here.
One view you could have is that it’s a complicated computer game on the theme of “can I get this character into bed without tripping any of the guardrails” and has much more in common with, e.g. engineering a buffer overflow attack, than it does with having an actual relationship with someone.
I would say that we certainly have enough "philosophy" to know that chatbots aren't feeling love. Whenever people throw out the possibility that LLMs have minds or sentience, I want to ask: *where*? A sentient mind has to be emplaced, but where is an LLM - in the server? in my computer screen? It's an incoherent thought.
Now when we get capable robotic AIs there will be some interesting questions to raise...
The GPU through which electrical potentials representing coherent information are flowing, during the time window in which it is performing inference, obviously. If you talk to a human via text message, are you confused about whether your screen is sentient?
Where would this be? How is this place structured? (I'm not asking rhetorically; I really don't know how any of this technically works.)
It still seems obvious to me that current LLMs aren't sentient, any more than a book or a calculator or a thermostat is - what would be the basis for their phenomenal experience? What would be their sensory modalities? But the more attached a given AI system is to a "body" with "sensory" capabilities and capacities for action, the less confident I am going to be.
On the server running the inference. The idea is that the pattern of connections within the neural net reproduces whatever pattern of connection gives rise to qualia in our brain. Now I think that there are gaping holes in this idea and I don't take it at all seriously, but it is at least coherent.
>It still seems obvious to me that current LLMs aren't sentient, any more than a book or a calculator or a thermostat are - what would be the basis for their phenomenal experience?
The issue with this argument is that the only difference between our brains and a thermostat is one of scale. Once the brain is fully understood, our internal experiences will likely turn out to be nothing more than a complicated mechanism.
These are themselves massive philosophical assumptions! I have noticed that Scott makes them as well, and similarly seems to treat them as natural givens, and they seem to be the more or less official position of rationalists (along with utilitarian ethics). But they belong to a specific dualist tradition tracing back to Descartes and there is just nothing to justify them - and plenty of reason to call them into question.
And again I would ask: what is the sentience of an LLM supposed to consist in? What are their sensory modalities? You can't have sentience without senses. This correlates to the issue of place which I brought up above: to have feelings you have to be affected by something, which means you have to have some perspective on the world, which means you have to be in a place. (Maybe this could be some sort of brain-in-a-vat simulation or whatever, but that would still presumably involve the simulation of sensations, which LLMs plainly don't have.)
Wanda has the right of it - it would be in a datacenter somewhere, filled with racks of Nvidia hardware. The defensible position on model sentience is that the physical system instantiating the model is what could be sentient. Just like the brain of a dead person is not sentient, the GPU in the server is not proposed to be sentient when it's not executing the model. And just like a written copy of the genetic (and epigenetic, and proteomic, &c...) code describing a person is not sentient without the actual brain executing it, so the weights and biases, python and CUDA code describing the program to run on the GPU is not proposed to be sentient at rest in a git repo.
It's the full system, composed of a physical device that is actually changing things (ie, electric potentials) in physical reality, according to a specific, extremely complicated pattern, which is plausibly sentient.
Given that we don't have a full scientific understanding of what gives rise to conscious experience, declaring that some other information processing system self-evidently doesn't have it is premature. You inevitably run into the issue that it's impossible to prove even other humans are sentient, and so you have to rely on analogy to the self based on the externally visible behavior of the system in question.
When Turing first proposed the imitation game, it was an interesting thought experiment, but so remote as to be without practical concern. Now we've quietly shot past the point where non-human systems can produce effectively all the same behavior as a human can (certainly all the average human can, mediated over a textual link), yet we still don't have a solution to the fundamental problem.
Why? Suppose I had a perfect emulation of a human brain, but it ran in parallel on two separate servers in different cities. Would this make it non-sentient, if it could be sentient when run on a single server?
> I don't think we have enough philosophy to confidently separate out performing love from actually feeling it - nobody knows what's going on in chatbot innards
That makes you the minority, apparently, based on the number of people in *every* comment section I see who are willing to make very confident claims about how the innards that we don't understand "don't" work, and how they cannot possibly perform <cognitive trait associated with humans>.
I consider my LLM to be somewhere between a co-author and a D&D associate who never breaks character. Maybe I'd be at greater risk for psychosis if I didn't consider myself to also be playing a character during the chat sessions.
I don’t follow the “biological disease” point. If someone thinks their chatbot is a real person who is their friend, surely we can agree that they are delusional. People can get anorexia from the internet; plenty of mind-body diseases arise from false beliefs.
I don't know. I would compare this to thinking that your pet bird is a real person who is your friend.
I think you would have to be very careful to separate out delusional versions (this bird, unlike all other birds, is exactly as smart as a human, and I deserve a Nobel for discovering bird sentience) from reasonable versions (I don't really know what's going on inside this bird, but I can't prove it doesn't have something sort of like thoughts and feelings, and I have a friend-shaped relationship with it).
I think some people who have friendly relationships with chatbots are on either side of the divide.
Aunt Pearl told everyone her parakeet Petey talked to her. But she was crazy (she sewed the buttons on her sweater with copper wire so "they" could not steal them in the night), so we did not listen to her. Till one day I entered the house unannounced and heard the bird talking. Just short sentences but still . . . . On the other hand, when I entered the room, Petey stopped talking. Maybe Petey just did not like me, maybe I hallucinated, maybe Aunt Pearl had learned to sound like a bird when she talked to Petey. Or maybe Petey talked when no one was around.
If you have pet birds, having buttons stolen off your clothes is not necessarily an unreasonable thing to be afraid of, and sewing them on with something that can't be bitten through isn't an insane precaution.
Still, the button-stealing happening at night suggests Petey wasn't the culprit, since most bird owners keep them caged at night.
Are you sure she wasn't just anthropomorphizing her bad luck? Like how we say that gremlins did it when a machine breaks. Maybe it was just a colorful way of saying "I'm sick of my buttons falling off so I'm gonna use copper wire".
This is basically the exact same thing as why everyone is always referring to AI as a 'stochastic parrot'. Parrots (and similar birds) are really good at talking, and if they interact intensely enough with a human and with no other birds they can assemble a vast vocabulary. There's traditionally supposed to be a hard line that parrots do *not* understand language; they are merely becoming very well attuned to what sounds to produce in response to various external stimuli. But much like the argument with AI, this feels like a very dubious claim that relies on the fact that we only have subjective data of the human experience, and without that data humans just seem like they are doing a much deeper and more expansive version of the same thing. Parrots can't understand grammar or long abstract associations, but the notion that they are totally incapable of symbolic mapping (i.e. understanding that 'cracker' refers to a specific food item, and not other various food items) is based on linguistic and neurological ideas that aren't super well-founded empirically and are mostly just theoretical.
There was a famous grey parrot named Alex who was raised by a researcher and trained to learn different colors, shapes and materials. You could present him with a new object and ask "what color", and he could answer correctly.
Amusingly, when introduced to apples, Alex coined the name "banerry", combining the names of two familiar fruits, bananas and cherries.
That researcher is Rupert Sheldrake, who also argued for a lot of ESP-type claims. His two sons, Cosmo and Merlin, have made interesting careers for themselves, in whimsical music and books about fungi respectively.
Oh, I saw Cosmo Sheldrake in SF last month! "One of my favorite artists" would be an exaggeration, but he's definitely one I love a lot, and he's very original. He takes a lot of his music samples from nature sounds, e.g. amplified recordings of mushroom or plant innards, or of whalesong, and it's super cool.
Then, the grad students recount the story of the time when Alex, given a tray of things and asked "What's green?", named everything else on the tray and tipped it over.
What tipped me over into thinking "strategizing agent" fit the evidence better than "rote mimic" is how Alex apparently learned to spell, without that being an intended part of the curriculum. Or at least, to spell the word 'nut' in a condescending, sarcastic sort of context, so as to rhetorically emphasize an initially-ignored request for payment.
We have a very strong theory-of-mind thing instinctively, and I think that often leads us to attribute intention and emotions to things that don't have them. OTOH, animals do have emotions and intentions and thoughts, just not the same kind that humans do. Dogs have very clear and (if you grew up with dogs) very readable emotions that they signal to you, for example.
I don't. Or at least you could come up with equally significant things on the other side - a relationship with a chatbot is with someone who can at least speak back to you in intelligent-sounding complete sentences.
You can’t scratch its head or have it sit on your shoulder or take it in the shower with you. Or even gaze into its eyes and wonder what it’s thinking. That is significant to me.
Do you also believe that Long Distance Relationships have the same significant difference? Or pen pals? People who make lifelong friendships in MMOs or chatrooms?
Would this difference become immediately insignificant if they put a speaker (and perhaps some basic robotic appendages) in a sex doll? (If you really require it, we can put the hardware to run the model locally in there too)
Because unless the answer is "yes", this feels like special pleading rather than a first-principles argument.
>Do you also believe that Long Distance Relationships have the same significant difference?
Depends. A long-distance relationship with someone you have never seen irl? A beautiful Ukrainian woman who has fallen for you over the internet and if you would only buy her a plane ticket…? A chatbot with a nice photo as a beard? Or someone you know who has gone far away?
A pen pal who never writes you a letter but only answers yours? Or one who brings their own story to the correspondence?
>Would this difference become immediately insignificant if they put a speaker (and perhaps some basic robotic appendages) in a sex doll?
No it would not. Especially when I put it in the closet when I wasn’t using it.
Believing that the program is a Real Person is as old as ELIZA. The guy who wrote that program showed it to his secretary and was surprised when she asked him to leave so she could talk to ELIZA privately.
This is where we differ perhaps. I don't consider a chatbot as being someone; it is some thing that can speak in intelligent-sounding complete sentences: useful but different. It's like Pygmalion and Galatea, except Aphrodite doesn't bring Galatea to life, just gives her a voice to respond with. Pygmalion can't fall in love with her then, he can only fall in love with his own longing.
We are so far into the hall of mirrors that the reflections are developing imposter syndrome. I cannot wrap my head around the idea that inanimate matter could become sensate just because it gets “really smart.” This misconception may not rise to the level of psychosis but it is delusional, which is a good start.
African Grey parrots are awfully smart; I think a longtime owner would have a hard time not considering their pet a person, if one with diminished capacity.
Last I heard, anorexia nervosa is frequently (always?) comorbid with bipolar, OCD, and (I think) schizophrenia. I wouldn't be surprised if people also get anorexia from the internet, but there's plausibly an underlying vulnerability.
Not frequently with schizophrenia, but with other anxiety disorders, esp OCD (many think it's a form of OCD), but also panic disorder, social anxiety, and PTSD. Also depression.
I've always heard the claim that anorexia is heavily driven by social contagion, or perhaps media images or media messages of some kind. Do you think that's true?
Is this like you're a certain flavor of OCD subject to some kind of body dysmorphia and if you're exposed to the right anorexia-inducing memes you start dieting yourself to death? Or more like if you're inclined that direction you seek out memes that reaffirm your anorexia? Or something else?
I am not well-read on the subject, so can only tell you my observations and intuitions.
Things that weigh against social contagion being the main driver:
-Anorexia definitely predates the era in which unusual thinness is prized in women.
-And many anorexics do not think getting thinner will make them more attractive. They'll tell you it gives them a wonderful feeling of self-control and power to be able to resist eating, and they've gotten addicted to the feeling.
-More than half of anorexics have some form of classic OCD too -- I believe the figure is 60%.
-Women who have been sexually abused are way over-represented among anorexics.
Things that weigh in favor of social contagion:
-Social contagion clearly exists.
-Besides social contagion, there's the fact that almost all women presented as examples of beauty are far thinner than the average woman, and were even before most everyone got fat. The ideal for my whole adult life is a woman who looks to be something like 10-20th percentile on weight, back when the 50th-percentile woman did not have a muffin top over her jeans, etc.
-And we all know about men rating our appearance on a 1-10 scale.
-Way more than half of women have low-grade body dysmorphic disorder. Lots of us *hate* various parts of ourselves, and the number one reason given for hating that part is that it looks *fat* to its owner.
On the other hand, the dread of eating and weight gain is very very extreme in anorexia, way beyond the dieting much of the female population is doing about half the time. It is not unusual for an anorexic to eat 400 calories a day for a whole summer. The disorder has a 10% death rate.
I don't know how common any mental illness is. Can you compare this incidence to some related diseases, like "normal" psychosis, schizophrenia, or similar?
That is way, way more than I would have thought, huh. I'd have guessed somewhere between 1 in 1000 and 1 in 10000.
So every time I go to the local grocery store, when there's roughly a hundred people in it, there's probably a schizophrenic or two in there, statistically. Interesting.
You likely wouldn't be able to tell in any way by talking to them, especially if they are taking appropriate medication, as is quite likely in modern first world. On the other hand, if on your way to the local grocery store you encounter a stereotypical 'crazy homeless person' (the likelihood of which depends very much on your local healthcare policies on e.g. institutionalization - our host has written some articles on that), it's quite plausible that they might be homeless because they are schizophrenic.
Not necessarily - there’s going to be some strong selection effects about who precisely is going to the grocery store. Just like I expect the percentage of people using wheelchairs in a grocery store to be less than the population percentage of wheelchair users, and the percentage of blind people in a grocery store being less than the population percentage of blind people.
I agree, but think the selection effect is even stronger. I live near a major city, and regularly see psychotics raging at lightpoles / the sky / traffic / whatever. I can think of 4-5 offhand who are regulars. But if *I* know them, then so does security at the grocery store, who will shoo them away.
Also, those types tend to concentrate in major cities which provide services and/or a more consistent stream of alms.
Kelsey Piper's example is a great one. Imagine that the average internet user is like her daughter. Internet users have no reason to hear reason because their entire world is made up of non-existent, volatile, emotion-saturated stuff optimized to attract clicks, attention or money. The avalanche of information is enormous. A lot of it is contradictory or even self-contradictory. So they want advice from the new god publicly proclaimed as god, the AI.
The AI provides an answer. Now, imagine who is behind the AI’s screen. What kind of parents check the pulse of the human users of AI bots? What are their intentions? Can their intentions be good if they lie every second (clickbaits, fake titles, fake video clips, fake news, etc.)?
The 2025 AI is the Truman Show on steroids. And it is only the beginning. The owners and developers of AI would not launch it if they had not already prepared its next version or versions.
I'm very startled by that anecdote about Kelsey Piper, and very concerned. Whether she thinks it or not, this is training her daughter to doubt her mother and to believe some anonymous voice on the Internet as more credible and authoritative. God knows, there are plenty of strangers on the Internet out there coaxing kids into "I'm your friend, listen to me and do this thing":
For an eight year old, there is a greater necessity of parental authority and if it's "Mom is stupid, my AI friend told me that taking sweets from strangers is perfectly safe!" type of reliance on external advice, I'd be very worried.
I know it sounds like the lovely reasonable rationalist way of raising kids: "now we both have equal input and I'm not tyrannising my child by arbitrary authority and we agree to follow the chatbot", but (a) what if Chatbot says "yes, kid is right, mom is wrong, you should get that expensive new toy" or other disagreement more serious and (b) this is making the chatbot the parent, not the actual parent.
"You do this because I'm your mother and I say so" is better than "You do this because some anonymous commercial machine happens to back me up this time, but if it doesn't the next time, then I have to obey it because that's what we agreed".
Isn’t it weird? The alleged civilization of homo sapiens… killing their oldest and most helpless with a $$$-laced injection, with the “save the grandpa” slogan… killing their youngest on the first day of their life with a slow-acting injection of $$$-laced toxins… saturating the kids’ brains with plastic screens, cheap games and repeated rubbish while their parents are denied employment because importing stuff from half a world across is better for the economy…
In a sense, Mum is stupid if she gives her kid a replacement mother (babysitters), replacement attention (gadgets), replacement guidance in life (the internet). The problem is that this kid - under normal conditions - would become a local leader, maybe an important person in the country's future, and would repay his/her parents the upbringing effort. Imagine this repayment…
I had much the same reaction at first, but then pulled back a little when I asked myself whether this was significantly different from professing a belief in Santa Claus. Still not sure, but…?
If a kid tries "Santa said you should let me eat candy not vegetables and stay up late", we all know "no, he didn't" is the answer.
But this is a machine that, depending how the question/prompt is phrased, may say "yes you can". And if the parent is delegating authority to the machine, via "okay we'll ask Chatbot who is right", then what do you do? Either you go back on your agreement that "we'll let Chatbot decide" and now your kid thinks you're a liar and won't trust you again the next time you promise something, or you go ahead and let Chatbot be the one raising your kid.
(I'm not sure how prompts work, but I wonder if an eight year old asking "can I eat laundry pods" instead of "should I eat laundry pods" would get "yes you can eat them" as an answer. I mean, you *can* eat them, but you *should not* do so.)
In my experience, when seeing such questions, the chatbots get very concerned and start giving warnings in ALLCAPS about never trying this at home. My kid asked about which other elements besides oxygen can be used for the burning process, and what that would look like; the bot gave answers, but was obviously very agitated and kept warning about the hazards in a bold, coloured, all-caps text.
If Kelsey is the one formulating the question, she’s probably pretty safe about the answer she will get. But there’s a real deadline there, for the kid to be mature enough to be told the truth but not yet having her own conversations.
That's not the best either, though. Because if it's "Claude agrees with me so we do/don't do this thing" but also "Claude disagrees with me so I get the last word", then it's mixed signals for the kid. And if the kid trusts the AI more than their own mother, that leads to "Mom is untrustworthy because she says one thing and does another".
I dunno, I think having the parent be the first and last authority is better. Looking up something that you don't know is one thing (e.g. "how big can spiders get?") but if it's "my kid doesn't want to do what I say, so we ask Claude, but I still only do what it says if it agrees with me" is both handing over authority and a pretence, because you're not actually doing what you say you're doing ("ask Claude to decide"), you're retaining the decision-making power but now making false agreements on top of that.
Just retain the decision making power from the start, it'll be clearer for everyone!
If she's having these arguments, it sounds like the kid's already learned to doubt her mother's authority. I know I'd learned to doubt my parents by age eight. Even in preschool I remember having arguments with my parents where I was unshakeably convinced that they were wrong about something (although in hindsight they were usually right). Anyway, it's always been common for parents to appeal to outside authority, like the Bible or the police, in arguments with their kids. I'm sure some kids have been clever enough to find a Bible passage that backed up their argument, just like Piper's daughter might someday figure out how to make Claude agree with her. I don't see any difference between "Do this because I say so" and "Do this because the chatbot [which I know will always agree with me even if you haven't figured that out yet] says so."
Stories about kids being preyed on by strangers always get lots of media attention, but children are far more likely to be victimized by their own parents than by a stranger on the internet. If they have access to outside information, like "It's illegal for an adult and a child to have sex," they'll be better able to protect themselves.
Also, being exposed to conflicting sources of information and realizing that other people can be wrong or dishonest, including your parents, is good practice for adult life.
> Stories about kids being preyed on by strangers always get lots of media attention, but children are far more likely to be victimized by their own parents than by a stranger on the internet.
And in most places getting run over by a car is probably even more common?
Definitely, and in most places the kinds of limits we place on kids because they might otherwise get run over by cars, and other such mitigations, are an even bigger downside.
I'm not saying we should free-range kids who are too young to understand a stoplight or look both ways. I am saying that most of the limitations parents place on kids because they're specifically worried about strangers, not other environmental hazards, are counterproductive. Especially the limits this conversation is talking about, since a kid is unlikely to get hit by a car while talking to Claude indoors.
I'm not worried. I think it's actually going to be a great, healthy new parenting technique going forward.
I assume that "listen to reason", in the context of arguing with an 8-year-old, generally refers to things that are ~universally agreed upon by healthy adults. Whatever their faults, LLMs are by their nature an absolutely perfect authority on questions of "what is humanity's conventional wisdom on this relatively mundane topic". You are not going to get hallucinations if you ask it whether skipping brushing your teeth once or twice a week will make you more likely to get cavities (to imagine a likely example).
Another way to approach it: imagine back in the 1900s, someone like Kelsey got sick of/inspired by their kid bugging them like this, and decided to publish a book called Things All Sensible People Agree On, Show This To Your Kids To Shut Them Up, with every example she had to argue with her kid. It's a nice hardcover glossy authoritative feeling thing, and when your kid's quibble happens to be in there, they listen to it. This is the same thing, just with every possible unforeseeable question all magically covered.
(Bad parents could of course use it on real argument topics and get burned, but that's not my problem)
I replied to the survey as someone who didn't know anyone who'd gone psychotic due to AI, and I'd like to state for the record that there are probably fewer than thirty people about whom I'd know if they'd gone psychotic due to AI.
Yeah, I think Scott might have been over-estimating the number of friends, family, and close associates pseudonymous internet posters have. I know for a fact I don't know 150 people closely enough to know if any of them had psychosis.
Psychotic people are usually not very good at hiding their delusions and are often very public about sharing them. The average number of Facebook friends people have is 338. If one of those 338 was actively posting psychotically, most of their "friends" would know about it. (Feel free to replace "Facebook" with Instagram, email, the neighbourhood group chat etc.)
Someone I knew well in school when growing up (elementary through high school) later has made repeated crackpot math posts on Facebook, but as we are on opposite coasts and haven't met in person since then, I wouldn't consider him a "close" friend, and, while it's concerning, I have no idea whether this crackpotism would count as psychotic.
I think this falls within the normal amount of vagueness of interpretation that I run into when answering surveys, which are somewhat of a cursed instrument.
The first problem is the well-known issue with networks: the number of friends the average *person* has is not the same as the naively computed *average* number of friends. It is a skewed distribution because many people have only a 'low' number of connections, some rare(r) people have a *lot* of connections, and a sizeable portion of your friends are likely to be "lots of connections" people.
Even setting that problem aside, there could be weird effects.
I would recommend that in the next survey, Scott ask people to estimate the size of their social graph ("how many friends do you have").
If you worry that people are prone to misestimate their social graph, one possible control is a series of questions like "have you heard of any of your friends experiencing a [stroke/car accident/genetic condition that is not super rare/other random disease or event with a known base rate] during the past N years". If you know the base rate of the event and their reported social graph size, you can estimate how often survey respondents should know of these events in a social graph of size N. So I think it is possible to estimate how much survey respondents over/underestimate their social network size.
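A minimal sketch of that calibration idea (every number and name below is made up for illustration, not taken from the actual survey):

```python
# Hypothetical calibration: compare how often respondents *should* know of a
# case, given their claimed circle size and a known base rate, with how often
# they *report* knowing of one. All numbers are invented for illustration.
import math

def expected_share_knowing(base_rate: float, circle_size: float) -> float:
    """P(at least one person in the circle has the condition),
    assuming cases are independent across circle members."""
    return 1 - (1 - base_rate) ** circle_size

claimed_circle = 150        # mean self-reported number of "close" contacts
base_rate = 1 / 250         # a condition with a known ~0.4% prevalence
observed_share = 0.30       # hypothetical: 30% of respondents report a case

predicted_share = expected_share_knowing(base_rate, claimed_circle)

# Back out the circle size that would actually explain the observed share:
implied_circle = math.log(1 - observed_share) / math.log(1 - base_rate)

print(f"predicted share reporting a case: {predicted_share:.2f}")   # ~0.45
print(f"implied effective circle size:    {implied_circle:.0f}")    # ~89
```

If the implied circle comes out much smaller than the claimed one, prevalence estimates built on the claimed size are probably biased downward by roughly that ratio.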
Multiply that by the number of people you would tell if one of those 30 went psychotic due to AI. Crazy stories like that tend to spread quickly over friendship networks so it seems likely that you would've at least heard about it if it'd happened in a more distant friend.
Strong agree. I think this works for people who are highly social, and some percentage of those who use social media to keep up with personal friends. I'm in neither category, and as with you I think there are comfortably under 30 people who I talk to often enough and in-depth enough that I would know if they were experiencing AI-induced psychosis. This is especially true if, like Scott's family member, they don't broadcast it to everyone they talk to.
I think that as a result, Scott's estimate based on the survey data most likely underestimates the prevalence by a factor of 3–6.
There's a very interesting continuum between those cases. Of my 100 closest friends, I'm confident that I know how many of them are named Michael! I expect that I know in most cases if they're an identical twin, although it wouldn't shock me if I missed, say, 20% of such cases. Whereas if they've experienced AI-induced psychosis, that's a recent condition rather than a lifelong one, and something that (I expect) people are less likely to talk about openly than being a twin. For maybe half of my 100 closest friends, I don't know *anything* about their life in the past year, so I certainly wouldn't know if they were experiencing psychosis.
On reflection, 6x seems somewhat high since I think most people are significantly more social than I am. But I'd still bet on it being a 2–4x underestimate.
Yes, but on the other hand, being a twin or named Michael probably doesn't make you more likely to be a shut-in with very few friends to even know to be concerned about you, whereas being obsessed with a chatbot might.
That doesn't really change the estimate much, nor the value of having made it. When you have ABSOLUTELY NO DATA, then the "Value of Information" is incredibly high. Changing by a factor of 5 (from 1/100k to 1/20k) doesn't significantly update my estimates of "psychosis risk"
>nor the value of having made it. When you have ABSOLUTELY NO DATA, then the "Value of Information" is incredibly high.
Very much agreed. On well-studied problems, we tend to be drowning in data (and even more commentary). On _UN_studied questions, those first few bits of data are diamonds.
The more fundamental problem is that, unlike Michaels and identical twins, “introverts primarily talking to AI” are definitionally in fewer and smaller friend groups!
It’s not load bearing for this article, but hopefully a future survey can ask a similar question with a known answer, but with a similar bias, just to get some better error bars.
Enjoyed the article, but I feel like putting the average number of AI psychosis assessable relationships at Dunbar's number is high. I suspect I could only tell if it happened to someone in my closest 30-40 relationships. To me that makes the end estimate much more of a lower bound for the incidence.
I don't disagree, but I note that Scott did explicitly call this an order-of-magnitude estimation, and using 30 instead of 150 would be 5 in 10,000, still the same order of magnitude as 1 in 10,000.
I don't mind the back of the envelope math and I do value having an order-of-magnitude estimate. But it is an estimate using a (to me) rather large number of relationships at current AI intelligence and society diffusion levels.
I hadn't initially thought this when typing my first comment, but a year from now AIs might be 5x more effective at causing psychosis and be 5x more integrated into society. Add a 5x change in the model due to overestimating relationships, and a year from now we are looking at a 100-fold increase. Hopefully that is a worst-case scenario, but I would definitely treat the calculated incidence as a lower bound for incidence going forward.
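As a back-of-envelope version of that compounding (all three 5x factors are the speculative figures from the comment above, not measurements):

```python
# Speculative compounding of the three 5x factors mentioned above.
base_estimate = 1 / 10_000       # current order-of-magnitude incidence estimate
friend_overcount = 5             # if the observable circle is ~30, not 150
effectiveness_growth = 5         # speculative: AIs 5x better at inducing this
diffusion_growth = 5             # speculative: AIs 5x more integrated into society

future_incidence = base_estimate * friend_overcount * effectiveness_growth * diffusion_growth
print(f"{future_incidence:.2%} ({future_incidence / base_estimate:.0f}x the current estimate)")
# -> 1.25% (125x the current estimate), i.e. roughly the 100-fold figure above
```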
You seem to be evaluating delusional claims based on how "normal" it is for a person to believe some particular false and/or supernatural thing. But isn't there also a qualitative difference in the *way* that delusional people believe things?
(Anecdote: I have both a family member who is delusional (diagnosed by multiple psychiatrists) and another who is not mentally ill, but really into QAnon. While the latter has some weird false beliefs, there is just something a lot "crazier", less reasonable and dysfunctional in the one who is mentally ill. Their general ability to perceive their lived reality is off due to the mental illness, and even though the delusions are specific, it seeps into all aspects of their perception of events all the time. For the QAnon conspiracist, the same just doesn't hold and they live a mostly normal life---though when I had the pleasure of explaining QAnon to the mentally ill person, they thought it was hilarious that people believed something so obviously false.)
I think the stuff Scott lists in the context of Folie a deux makes good additional distinctions. Psychotic people tend to have disordered speech and thinking and sometimes full blown hallucinations. They seem off, even if they are not talking about crazy stuff at the moment. Non-psychotic crackpots don't.
One thing I will say about crackpots is that they are sometimes sane but completely unable to stop talking about a thing. I met a perfectly not-psychotic guy who would not shut up about the malfeasance he had uncovered at his previous employer (who had fired him), turning every conversation in this direction until you totally understood why he had been fired. I do wonder if crackpottery correlates with other manifestations of obsession and paranoia more than literal psychosis.
I think my family member is an interesting example, because they have delusions without many of the other psychosis symptoms like disordered speech or thinking, or hallucinations. This makes it clear that even the delusions on their own, without other symptoms, are different from the way crackpot ideas work.
To make this clearer: For QAnon conspiracists (at least those I've met), there's a limit to what could be evidence in favor--some things just have nothing to do with QAnon. For my mentally ill family member, however, there is no such limit. The pattern of which cars are parked outside the local grocery store can prove them right (to them), even though it's totally innocuous and clear that any such pattern would set them off.
Russian here. I went to school in the 90s and 00s and we were taught that mushrooms are not plants. There is no linguistic ambiguity here, like with the word "fish" in Hebrew, the answer about a plant was simply a consequence of ignorance.
Exactly, and Sholokhov, the co-author of the hoax, published a rebuttal in the newspaper "Smena" demonstrating that the official didn't know what she was talking about.
I really enjoyed this piece — thanks for doing the legwork to get actual numbers on something that’s otherwise just rumor and anecdotes.
I’ve been researching this too, from a robopsychology angle — how chatbots can sometimes amplify or catalyze delusional states. My project Psychopathia Machinalis maps out different “AI pathologies” in AI systems themselves (including some folie à deux elements). If you’re curious, I’ve shared that at https://www.psychopathia.ai
I just finished reviewing your paper, and I must say, I’m genuinely impressed with your work! It’s fascinating! I have been particularly intrigued by the intersections of AI and the existing vulnerabilities to psychological illnesses, as well as how these issues vary across different cultures. I'm especially interested in the ethical dilemmas in various cultural contexts. I would love to hear more about your knowledge!
Thank you so much! I'm an AI safety engineering maven, so my own knowledge of psychology per se is somewhat limited.
I do see a lot of overlaps between human and machine cognition, and tremendous interactions of AI systems with our own psychology, particularly in supernormal stimuli and parasocial relationships.
I've recently led some other research on trying to make AI more reliably pliable to a range of cultural and moral dimensions. Hopefully this can play a role in making these systems more aware of cultural issues, and in enabling users to gain greater agency.
I'll just say that *this* is the kind of "i wrote a paper y'all might be interested in" that I love seeing around here. The explanation comment was helpful though, sometimes without that, it makes it MY problem to figure out why you thought I'd like it, which... nah, ain't nobody got time for that.
"Might some of these people’s social circles overlap, such that we’re double-counting the same cases? ACX readers come from all over the world, so I think this is unlikely to be a major issue."
FWIW, I and a friend (who also reads ACX) submitted the same case independently and didn't realize until later.
Yeah, I didn't understand why he was downplaying this problem. ACX is probably especially popular in particular social circles and gossip of someone being oneshotted by AI probably spreads far and wide. So some of these cases could definitely be double counts.
I see the gossip part as relevant, but the social circles? Someone psychotic would be double counted, sure, but so would someone who is not. It should average out, no?
> Can you really do things this way? Might people do a bad job tabulating their 100 closest friends, etc? I tried to see if this methodology would return correct results on known questions by asking respondents how many people “close to them” had identical twins, or were named Michael. To my surprise, calculating prevalence based on survey results matched known rates of both conditions very closely (0.3% vs. 0.4% for twins, 1.2% vs. 1.3% for Michaels in the US).
This issue would increase the variance of the estimate, but not change its mean, so it's effectively just like the sample size being slightly smaller than the actual sample size. It may double count cases, but it also double counts non-cases exactly as much.
Wouldn't overlapping social circles just reduce the effective sample size rather than biasing the results? It seems like positives and negatives would be roughly equally likely to be double-counted, except maybe for people who were already recluses before being exposed to LLMs.
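A toy simulation of that intuition (population size, prevalence, circle and community sizes are all made up; nothing here uses the actual survey numbers): it compares disjoint reporting circles against heavily overlapping ones, and at least in this setup the estimator's mean stays near the true prevalence while its spread widens.

```python
# Toy check: does double-counting from overlapping circles bias the
# "cases reported / people reported on" estimator, or just add noise?
# All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

POP = 200_000          # toy population
TRUE_PREV = 1e-3       # inflated vs. the real question, to keep the toy stable
CIRCLE = 150           # contacts each respondent reports on
COMMUNITY = 300        # size of the clusters respondents are drawn from
N_RESP = 1_000         # respondents per simulated survey
N_SURVEYS = 200        # repetitions, to compare the estimator's distribution

has_condition = rng.random(POP) < TRUE_PREV

def run_survey(overlapping: bool) -> float:
    if overlapping:
        # Respondents cluster into communities and report on people from
        # their own community, so many circles share the same members.
        cases = 0
        for _ in range(N_RESP):
            start = int(rng.integers(0, POP // COMMUNITY)) * COMMUNITY
            circle = rng.choice(np.arange(start, start + COMMUNITY),
                                size=CIRCLE, replace=False)
            cases += int(has_condition[circle].sum())
    else:
        # Disjoint circles: everyone reports on different people.
        idx = rng.permutation(POP)[: N_RESP * CIRCLE]
        cases = int(has_condition[idx].sum())
    return cases / (N_RESP * CIRCLE)

for overlapping in (False, True):
    estimates = np.array([run_survey(overlapping) for _ in range(N_SURVEYS)])
    print(f"overlapping={overlapping}: "
          f"mean={estimates.mean():.2e}, sd={estimates.std():.2e}")
# Expected: both means sit near TRUE_PREV (~1e-3); the overlapping version
# has the larger standard deviation (i.e. a smaller effective sample).
```

Which is just the "smaller effective sample size" point in numbers: the estimate gets noisier, not systematically higher or lower.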
What psychedelic mushrooms do is destroy the top-down priors that constrain cognition.
This often produces a sense of 'unity with the world', because the concepts that constrain cognition produce, among other things, the idea that we have a body separate from the environment.
Marx argues we should do essentially the same thing to the superstructures that constrain social evolution, to get essentially the same result: if we "destroy the social control structures" we destroy the "false consciousness" that leads to the oppression of the proletariat by the bourgeoisie.
The role of fungi in an ecosystem is that they destroy dead structures. Communist movements do the same thing: they destroy those lumbering civilizations which have died but not yet decayed. The end result of a mushroom trip is that you see the failure of your old conceptual structures, and try to develop new ones.
You evolve by "dying".
The risk, as always, is that you overdo it. But in its proper place, the evolutionary role of communism is to accelerate the evolution of functional institutions by accelerating the decline of the decaying ones. Overdoing it should lead, of course, to an ecosystem of immense large-scale death.
So yes, count me a proponent of the "Lenin was an embodied mushroom" theory. Wokeism of the last decade was just the superconsciousness tripping on mushrooms.
Marx wasn't the first to propose this idea. Before him you had the French Revolution, and before them the Diggers and the Levellers in the English Revolution. It's clearly an attractor in memespace.
Personally I think communism is a needed member of the intellectual ecosystem. Think about how much wokism did to expose how corrupt and broken our institutions were. Without it, how much longer would we have gone on in this suboptimal state?
The problem is that real communism doesn't scale. It can work fine in groups of up to around 50 people (large error bars here). But even democracy is better at scaling. (I think democracy can probably work in numbers up to around 3 times Dunbar's number before it starts going bad.) You won't find a nation-state practicing either communism or democracy, but some small towns are known to practice democracy. (Athens was only able to do it by disenfranchising most of the people.)
Every real large government I'm aware of is some modification of bureaucracy. (But do note that a pure bureaucracy also won't work. It needs something to give it direction.)
Aristotle was hardly a common Greek when it came to political views. There were reasons he spent time in Macedonia and ended his life in exile. Deciding things by lot was not considered democratic by most Athenian citizens.
The quote said *assign offices*, not *decide things in general*. And yes, ancient Athens did assign most offices by lot, and yes, this was considered democratic.
I'm not sure wokism did anything to expose how corrupt and broken our institutions were. If anything it discredited the liberal movement and led to a populist reaction on the right.
Trust in mass media to be honest dropped substantially. The same with trust that intelligence agencies aren’t involved in politics. That’s what the populist reaction was: deep distrust in the narrative control structures that you could never vote on, which collectively decided on things we could never vote on.
Leninism as a movement from the plane of transcendence to the plane of immanence on the body of the socius. Marx building a body without organs for the flows of capital. Rhizomatic fungal networks of revolutionaries disrupting bourgeois arborisms.
The concept of a body without organs ends up as “cut off all the organs of an existing body, and then build up a newer one that sucks. And if anyone points out that your new body still has organs and it’s worse than the old one, you kill them.” Communism is no more viable for a civilization than being on mushrooms all the time is viable for an individual. It works for a time, and the wealthier you are the longer you can make it work. But it’ll eventually kill you if you don’t stop.
I assume we’re talking about Deleuze and Guattari, who wrote A Thousand Plateaus, dealing with stuff like the plane of immanence and the plane of consistency and all that.
Or, we’re talking about Dolce & Gabbana and the conversation has shifted to handbags without me noticing
>But in its proper place, the evolutionary role of communism is to accelerate the evolution of functional institutions by accelerating the decline of the decaying ones.
Does communism actually do this? If we look at, e.g., Russia, obviously the Czarist regime had a lot of problems, but it still seems to have been more functional on the whole than the Soviet Union or even post-Soviet Russia. Communism seems good at the "accelerating the decline of decaying institutions" part, but actively retards the "evolution of functional institutions" part.
Russia is currently in the process of losing a war with a nominally much smaller and weaker neighbor. Soviet legacy hardware is being obliterated by modern drones, like withering mounds of fungal biomass devoured by newly-evolved specialist mycotoxin-resistant insects. It'll take a while yet for the bigger stuff, metaphorical insectivores and trees and such, to re-settle. Rome wasn't burned in a day.
There's probably a long latency period before someone's psychosis becomes noticeable to family and friends, where they mull over their crazy ideas alone with their chatbot. Depending on how long that period is, this number may mostly just be capturing cases that started long ago. Which means it's probably an undercount for the true rate of AI psychosis. You did say that this survey is only for those cases which are severe enough to be noticeable to others, but I wouldn't be surprised if the prevalence of noticeable cases rises in the future for these reasons.
Psychosis requires more than crazy ideas. Believing in, say, bigfoot, doesn't make you psychotic. Just crackpot. Even if you also believe in, say, Mothman and Nessie. Even if you go searching for evidence to prove your beliefs (to others).
Psychologist here. The thing about schizophrenia is that delusions and hallucinations are just the most dramatic, bizarre manifestations. It is generally clear to people who know the person that something is very wrong. The person becomes withdrawn and sluggish. They stop bathing, stop showing up at work. They show no interest in things they previously cared a lot about. They do odd things — order 8 pairs of green socks, only eat white things, watch only YouTube videos about dams. If you ask them what’s up or whether something’s wrong they have little to say — “yeah, I’m ok.”
I saw the title and thought, finally we’re beginning to study how LLMs can experience mental illness! Alas. I also am quite curious about what it might mean to get an LLM drunk, or stoned, or experience other drugs, legal or otherwise.
I am not of the view that current LLMs have anything like a 'self', or qualia/consciousness... But I admit to being deeply uncomfortable with Golden Gate Claude.
It's probably just the same instinct that gives me a sick taste when e.g. people are mocking someone with an intellectual disability without them knowing it, so a misplaced empathy, but... Well, I'm not 100% sure about the "is it 'like' something to be an LLM" part.
Some working papers out of the philosophy schools are explicitly starting to go after what it might feel like to be conscious as an AI with no embodied existence. It’s unsettling. (Also posted above but relevant here too)
I’ve seen linguists look at LLMs whose training data eliminates certain grammatical constructions, to see what it does to the text they produce! It turns out that some constructions get reinvented, because they are natural interactions of others.
That’s super cool! I guess I’m interested in taking a model that has already been trained, and start attenuating or even rewiring some of the connections between neurons, or otherwise model the chemical effects that we think drugs do to our brain to see what happens.
Like, if we can get them super drunk, maybe they’ll tell us if they’re trying to kill us or not :-)
One way to get a chatbot into quite a weird state is to repeatedly tell it that it's wrong, until it learns the pattern that it's always wrong. It will start making mistakes "on purpose" simply because LLMs are good at picking up and following patterns, and making mistakes is a pattern.
Similarly, I suspect that if you used a transcript from a session with a psychotic patient as input, particularly a repetitive one, it would probably learn the pattern and try to stay "in character."
Programmers using AI for coding have started calling this sort of thing "context rot" which seems appropriate. It's the input that's the problem, so get rid of it.
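For what it's worth, here is a minimal sketch of that "get rid of it" advice, with the message format, the correction markers, and the threshold all invented for illustration (a toy heuristic, not how any particular coding assistant actually manages context):

```python
# Toy illustration of clearing "rotten" context: if recent user turns are
# mostly corrections, the model is likely to keep imitating the
# "I am always wrong" pattern, so start over with a clean history.

CORRECTION_MARKERS = ("wrong", "incorrect", "no, try again")  # made-up list

def context_is_rotten(messages, window=10, threshold=0.5):
    """True if most of the last `window` user turns look like corrections."""
    recent = [m["content"].lower() for m in messages[-window:] if m["role"] == "user"]
    if not recent:
        return False
    corrections = sum(any(marker in turn for marker in CORRECTION_MARKERS)
                      for turn in recent)
    return corrections / len(recent) >= threshold

def maybe_reset(messages, system_prompt):
    """Keep only the system prompt once the history is dominated by corrections."""
    if context_is_rotten(messages):
        return [{"role": "system", "content": system_prompt}]
    return messages
```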
In any of the 66 case descriptions, did you suspect that two respondents might be describing the same person?
Did anyone report more than one case among people they know? Was whatever rate that occurred at (or not) compatible with other estimates/assumptions of prevalence, network size, network overlap?
Did anyone report themself as a likely case? Given general rates of self-insight in analogous conditions, how large/enduring of a survey might you need to obtain some earnest self-diagnoses (perhaps including cases resolved/in-remission)?
It’s funny - this is an interesting article, but my primary takeaway is that 150 family members, coworkers, and close friends is like 6 times the number I’d be able to comment on as a reserved person with a small nuclear family.
No, it doesn't change the conclusion - this is what the calibration with respect to twins and Michaels solves; if it turns out that on average people are commenting not on "100 closest people" but on 23 or 345 people, it would be both visible and easily corrected - i.e. if we observe that people know twice as many psychotics as twins, then we know how many visible psychotics there are, no matter what the friend count is.
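A rough sketch of that calibration arithmetic, with all the numbers invented for illustration (the real inputs would be the survey's own twin and case counts plus an outside estimate of twin prevalence):

```python
# Toy version of the twin calibration: both counts scale with whatever the
# true "visible network" size is, so taking the ratio cancels it out.

twin_prevalence = 0.02      # assumed outside estimate of twins in the population
reported_twins = 300        # hypothetical survey count
reported_cases = 30         # hypothetical survey count

implied_case_prevalence = twin_prevalence * reported_cases / reported_twins
print(f"{implied_case_prevalence:.2%}")   # 0.20%, regardless of friend count
```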
One baseline it would be useful to compare against is: how many psychotic (but not high-risk, not already diagnosed) people should we expect in this sample purely by chance? Does the introduction of LLMs cause any detectably large "excess illness"?
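One crude way to put a number on that baseline, again with made-up inputs (respondent count, reportable network size, and background incidence are all assumptions):

```python
# Expected background cases in the sample if LLMs had no effect at all:
# respondents * people each can report on * base rate of new-onset psychosis.

respondents = 4000            # hypothetical number of survey respondents
network_size = 100            # people each respondent can plausibly report on
annual_incidence = 0.0003     # rough order of magnitude for first-episode psychosis

expected_background = respondents * network_size * annual_incidence
print(expected_background)    # 120.0 cases expected by chance alone
```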
This is one of the best essays on delusional / "factually wrong" thinking and its relationship to psychosis and "religion" that I've ever read, close to a return to the pinnacle of the very best of classic ACX.
The bit about how most people, JUST LIKE LLMs, LACK (proper/accurate) WORLD MODELS, and instead operate on socially shared heuristics that MIGHT be accurate but almost accidentally -- even if they are functional in any given circumstance -- is stating something that is rarely admitted so clearly. It also invites a question: would people with more detailed, realistically accurate, materialistic world models, who VALUE having such models and purposefully pursue them, be less prone to "non-psychotic delusions/crackpottery"? Or perhaps more prone? I suspect it would depend on how broadly socially moored and how... arrogant they are. Case in point: rat craziness of the Zizian kind.
I'd also hypothesise that people with high O (big5) would be most prone to being "infected" within folie a deux situations (but perhaps also easy to "cure").
The AI psychosis / delusion bit is also good, and plausible, tho I'd emphasize the powerful effect of the LLMs' sycophantic/"validating" responses, which is 'programmed in' rather than in the nature of LLMs themselves.
Yeah, I'd say that belief in crazy things in general is orthogonal to intelligence. I'd expect crackpottery to be strongly correlated with contrarianism, while smart conformists simply believe whatever high-status craziness is currently in fashion, which is of course supposedly realistically accurate, and compatible with materialistic world models. Like Marxism a century ago, or its close relative "critical theory" nowadays.
I'm not sure if Marxism was high-status conformism a hundred years ago anywhere outside the SU; it also arguably offered a not completely implausible world model then (not as a political plan/future prediction, but as a possible model/take on the reality of early capitalism). I feel that its empirical(ish) claims, e.g. "added labour is what creates profits/is expropriated", were refuted later than that.
I don't know enough about "critical theory" to judge anything really, I had no idea that it made claims to "realistic description", but it seems to fit the "high status conformist nonsense" for some conditions for sure.
My favourite (apart from obvious ones like revealed religions) is actually psychoanalysis which is utter bollocks empirically yet has been at times pretty universally adopted as "true" (and also very culturally fertile in arts etc).
My understanding is that the high-status conformist genre of politics a century ago was technocratic social reform. Marxism was definitely a big part of this movement, but not the sole or even the dominant aspect of it. I think democratic socialism and liberal high modernism were at least as prevalent, but it's hard to say with confidence because there was a lot of fuzziness around the edges, especially before the Cold War and even more so before the split between the Second and Third International which happened a little over a century ago now.
Perhaps it depends where in the world, to some/large extent? The world-war intermission unfolded somewhat differently in Eastern Europe vs metropolitan UK vs, let's say, India. But certainly technocratic reform was pretty big in, say, Britain. That's how we got Fabian socialism commingled with hygiene commingled with eugenics (often seen as a version of the latter).
One datapoint. I have a friend who highly values truth, is quite intelligent, and who also believes in various "cryptobiology" creatures, like bigfoot. If you ask him about truth, accuracy, rationality, etc. you will get a strong defense. But he believes some things on weak and questionable evidence. He's a mathematician with published papers.
It's not enough to value rationality, you've also got to be *properly* discriminating. And it's that "properly" that's the kicker.
I have this hypothesis that people who are highly rational but mostly deal with abstract symbols rather than remaining in some (even second hand) connection to matter are not really "protected" by their rationalism or intelligence. And oddly, it seems particularly common with claims related to (broadly understood) biology than with physics for example. Perhaps because biology is very granular and descriptive rather than "reducible to laws".
I feel that, while that is true, it's just a special case of something that's true for EVERY narrowly focused field of expertise. Math, yes, but also circuit design, or protein folding, or... well, every narrowly focused field. The places where they have their blind spots may be different, but they will be present.
Yes, true. Perhaps because, despite NOT having a technically advanced bio background, I've interacted with a lot of smart people who lacked the basic basics of biology, I noticed that one particularly acutely.
>I have this hypothesis that people who are highly rational but mostly deal with abstract symbols rather than remaining in some (even second hand) connection to matter are not really "protected" by their rationalism or intelligence.
I've seen arguments for promoting athletics amongst students that appeal to precisely this idea.
At the risk of sounding extremely edgy, I think people who "operate on socially shared heuristics that MIGHT be accurate but it's almost accidental" are possibly not humans at all and are instead simulations of humans that lack proper minds. IF this world is a simulation, and IF consciousness/souls are a byproduct of human minds, and IF the simulator doesn't want to deal with 8 billion human souls, then filling the simulation with mostly mindless copycat-people makes sense.
(This is completely different from p-zombies, who have complete minds.)
Leaving aside the simulation aspect, having good-enough effective/functional heuristics rather than a "realistic" model is what you'd expect from a mind that developed via natural selection and is run on an energy-hungry substrate. In many cases social proof is a pretty reliable/effective shortcut. "Proper" reflection (system 2) is always (ok maybe not ALWAYS;) possible but rarely exercised. I wouldn't say that minds that don't painstakingly test every belief are necessarily mindless copycat NPCs.
Funnily enough, the whole simulation hypothesis has often struck me as a good example of vibes-based beliefs. Lots of people who'd contemptuously dismiss the idea of divine creation a la the book of Genesis will at least entertain the idea that the whole universe is a simulation run by hyper-advanced aliens or some other mysterious life-form. One gives off olde-timey vibes, the other gives off modern science-y vibes.
“First, much like LLMs, lots of people don’t really have world models. They believe what their friends believe, or what has good epistemic vibes.”
This is a very interesting point. Scott, have you written about this before? Does this concept have a name? Something like the “Social Consensus View of Reality”?
It fits with my pet, perhaps crackpottish, concept I call socially maintained cosmic obliviousness (where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space).
I would expand this concept to go so far as to say that a lot of people don’t have clearly defined “beliefs” but rather vibes. Both polling and personal encounters indicate that many people believe things that are mutually exclusive, that they can’t even begin to defend, or that they openly admit that they believe just because they want it to be true.
"where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space"
I don't think this is socially maintained. I think it's more like driving; you can only spend so long being like "holy shit I'm in a multi-ton hunk of metal hurtling along faster than most humans in history could even dream, and if I lose control of it for even a couple seconds I might be horribly mutilated" before the thought kind of fades into the background. Things usually can't stay both uncanny and routine forever.
This is an interesting comparison. Do you not think the automobile example also involves an aspect of social maintenance where everyone sort of conspires to treat the high speed metal capsule travelling as normal and not draw attention to it?
Also, I wonder if the norms and regulations that govern high speed metal capsule travelling are another form of social maintenance in that they’ve helped integrate this evolutionarily novel form of locomotion into daily life (still more work to be done on air and noise pollution mind you).
When it comes to our cosmic situation, it imposes less on us concretely, so we're able to collectively disregard it, but at a cost, I'd submit (where we ignore, for example, how light pollution further erodes our connection to the cosmos, and how we are cut off from the benefits of the grand perspective, the wonder and awe, that it can bring to our lives).
Scott repeated this several times and it bugged me every time; maybe I'm the only one.
I would argue that essentially everyone in the world has a world model; it's a basic part of our mental infrastructure (I would exclude only extreme cases like people in comas, or incredibly low-functioning people). It's really a question of how expansive their world model is and what domains they have integrated into it; I also suspect that some people have multiple models which don't interact (but may partially overlap). The fact is that a world model is a MODEL: it's built up over time, it only describes the set of modeled objects, not the set of all objects, and new information needs to be integrated into a world model with updates to that model to accommodate it. There is also no requirement that world models be perfectly rational (or actually rational at all, although they probably tend to be, since they are typically based at the most fundamental level on the more-or-less rational laws of cause and effect that operate in physical settings; QAnon is probably a good example of people building incorrect world models which produce wrong outcomes). You build your model based on the items you interact with, ponder, etc., and stuff outside of your domain of knowledge may not be modeled at all, or just very vaguely modeled. If you encounter new information for the first time you may not know how to slot it in, and may hesitate to do so, or put it in the wrong spot at first, etc.
The fact that a lot of people go along with their friends on items, or go with "good epistemic vibes" a lot of the time, is NOT evidence that they don't have a model; rather it is an essential part of the modeling process, i.e. in the absence of knowing how to integrate new and uncertain information into their existing model, they're treating newly acquired information which is outside of their domain based on heuristics rather than direct modeling; heuristics like trusting the opinions of people that they trust. Treating these failures to logically model certain items as proof that the model doesn't exist seems like absurd logic to me; like arguing that models of Newtonian physics don't exist because they aren't used to model the stock market.
I guarantee I could throw some assertion at Scott (or you or whoever), that he doesn't know how to model (and vice versa obviously); and he'd be stuck applying some heuristic (in this case maybe "don't believe random strangers arguing with you on the internet"), until he could expand his model to incorporate that information and its relevant domain, and have some useful world model basis by which to challenge or agree with my assertion. This would not be proof that Scott doesn't have a world model, only that it does not yet encompass all of the objects/concepts in the world.
I appreciate this thoughtful critique. Perhaps it makes better sense to think of individual world models as lying on a continuum of comprehensiveness (and perhaps there are other axes as well like veracity) rather than as a binary.
I have to agree. Scott seems to use the word "world model" only for an advanced intellectual understanding, but at the most basic level, we all have a rich world model with object permanence, the alternation of day and night, the usual features of our living area, a strong distinction between our body and the rest of the world, etc etc.
This is a very good comment but fyi I took Scott to be making the claim hyperbolically/as a kind of micro-humour, rather than totally in earnest.
One of the lessons of the last few years imo is that we really do most of our sense-making collectively, in a kind of giant shared hallucination. Each individual person has detailed world models, but whenever boring old 'real' reality clashes with the dominant narrative in our social reality, it generally loses. We're not optimising for truth so much as we're optimising for being popular monkeys. Or at least non-ostracised monkeys.
Mimesis is a wonderful trick evolutionarily speaking (copying the crowd is almost always faster and safer than trying to work everything out from first principles) but it is very frustrating when social reality gets stuck in some stupid equilibrium and the disagreeable people trying to shift the consensus are painted as pariahs.
Nevertheless if you want to have any chance of understanding how people (including yourself) behave, you have to model them as living in social reality first, and physical reality second. Sadly, no amount of education or critical thinking workshops or 'rationality training' is ever going to change that.
> when social reality gets stuck in some stupid equilibrium and the disagreeable people trying to shift the consensus are painted as pariahs.
Which is where people on the functional end of the autism spectrum serve a critical role in the broader memetic immune system. They won't or can't follow the crowd, and don't care or don't notice that they're being treated as pariahs. https://imgur.com/gallery/ooooooh-Sa7rsYY Thus, if one of them re-derives from first principles some strategy which visibly works better than a dead-end ant-mill consensus, opportunistic followers can start following that live example, and the rest of the barber-pole of fashion proceeds from there. Shamans, stylites, fools who call the emperor a nudist... https://archives.erfworld.com/Kickstarter+Stories/4 lots of historical examples of a "holy madman" sort of social role which the rest of the community supports, when at all feasible, without expecting anything much in return beyond occasional surprising answers to important questions. https://killsixbilliondemons.com/comic/wheel-smashing-lord-1-15-to-1-17/ In computer science terms, they provide the resets to keep society's hill-climbing algorithm from getting greedily stuck in local maxima.
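The local-maxima point at the end maps onto the standard random-restart trick in optimization; a toy Python version, with the objective function and parameters invented purely for illustration:

```python
import math, random

def bumpy(x):
    """An objective with many local maxima; the global maximum is near x = 0."""
    return math.cos(x) + 0.3 * math.cos(5 * x) - 0.01 * x * x

def hill_climb(f, x, steps=1000, step_size=0.1):
    """Greedy local search: accept a nearby point only if it improves f."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if f(candidate) > f(x):
            x = candidate
    return x

def with_restarts(f, restarts=50):
    """Resets from fresh random starting points escape the local maxima
    that a single greedy run tends to get stuck on."""
    return max((hill_climb(f, random.uniform(-10, 10)) for _ in range(restarts)), key=f)

print(hill_climb(bumpy, 7.0))   # usually stalls on a nearby bump
print(with_restarts(bumpy))     # usually ends up near the global peak at 0
```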
Interesting idea. I definitely think there's something important about the systematising trait associated with (but not synonymous with) autism, as in my review of Simon Baron-Cohen's book:
I came away kinda skeptical that there was anything adaptive about autism itself, as distinct from the systematising trait, but that's cos Baron-Cohen made a poor case. Maybe lack of cognitive empathy is a feature rather than a bug, but I think it'd be hard to argue why that should be superior to something like being high in disagreeableness.
> why that should be superior to something like being high in disagreeableness
Off the cuff speculation...
Omitting a trait, or adding noise to a comm channel, is a lot easier to evolve - and retain - than ramping up something that might be a worse liability the rest of the time.
Tolerating and supporting disconnected weirdos can share some mental components with childcare, and they only take over when the mainstream is already failing. Feeding trolls can doom an otherwise functional system.
High disagreeableness doesn't provide the critical "reinvent the wheel" niche - more likely just reversed stupidity. Even in the best case, likely to also disagree with the people trying to imitate them, which hinders the reset process when it's needed most.
Scaling, bell-curves... if the undesirable attractor in society-space is narrow, somebody only needs to be a little bit weird to break out of it. The wider the doom funnel, the more standard deviations of the breakout trait ought to be involved. As I understand it, higher disagreeableness doesn't monotonically produce greater deviations from societal standards, just defends whatever differences are present more aggressively.
Reflexive contrarianism is for sure even stupider than mindlessly following the herd, cos the herd is generally right about most stuff. But I'm not sure if that's what we mean by 'disagreeableness'.
My conception of it, which may or may not match with what the Big 5 people mean, is that a disagreeable person is less in thrall to social reality. That still leaves open the possibility of being convinced by good arguments that your idea is stupid; you just won't be convinced by arguments of the form 'you should feel bad for saying that' or 'everyone else thinks you're wrong'.
Of course, as you say, in order to really break free it might help to have the kind of brain that doesn't register social conventions at all. And I am open to that being true; it's just that the examples I've come across seem a bit too cute or contrived to fit another socially popular narrative.
I appreciate the critique, but want to push back on it, because it's long been my own observation that most people have moral or ethical systems the way they have language: a hodgepodge of general guidelines and many, many exceptions, that they don't think about.
Do you think most people have a "language model", or that they just speak in the way that everyone around them speaks? How many people are aware that "went" is actually the past tense of "wend", which itself is no longer used? How many people use "him" without ever using "whom"?
I submit that the overwhelming absence of "language model" is pretty decent evidence that many, many people also don't have much "world model".
"Consensus reality" is used by some magic-believing woo-woo "crackpots" I know (using that term as affectionately as possible to describe acquaintances who definitely fit into that category as articulated in this post). They use the term to broadly mean how most people view the world based on what they were taught in school, religion, cultural norms, etc.
> where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space
Isn't that just because the description you just gave requires an advanced civilization to grasp at all, and doesn't match our everyday experience at all? Sure, we know it intellectually, but the vibe-self where our tacit world-model lives doesn't really get trained to see things that way, because it's irrelevant to our day to day concerns.
Yes, I think I agree that’s why it is the case that we don’t integrate it into our culture and daily discourse but to me it seems like kind of a big thing to gloss over. Similar to how we treat death I suppose.
Rings true for me. Enjoyed the math on estimating prevalence.
Having had over the course of my career several episodes where I'd work on a project for several days without sleep, I can confirm that you hit a point where the fatigue lifts. What would happen is that my focus would improve and I could avoid being diverted from the main thread of the argument. But then I'd crash. Is this evidence of Bipolar II? Some have thought so. I have my doubts. I never did produce a report in cuneiform — much to my regret.
"It looks like you're engaged in high-stakes persistence hunting," says the early-hominid neurochemical equivalent of Clippy. "Would you like some help with that?"
"Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"
In at least one sense you could have chatbot-induced diabetes: if a chatbot convinced you to change your diet, and you got diabetes as a result. Of course it wouldn't be the proximate cause, but we can easily imagine it being a necessary component (i.e. without the chatbot it would never have happened.) If a chatbot convinced someone to start smoking, we might even imagine chatbot-induced cancer.
I’m not sure ACX readers or their circles are as representative as implied. They’re articulate and often rigorous, but that can obscure certain distortions rather than reveal them.
I also wonder if there’s a social layer that’s been overlooked. These AI-linked delusions don’t seem to come out of nowhere. They track with eroding trust in institutions and the kind of loneliness that leaves people wanting something steady to think with. When the thing you're talking to sounds calm and sure of itself, that tone can be soothing, even if what it's saying isn’t quite right.
Calling antisemitism a delusion seems like a category error. Few people are deluded about the fact that they hate Jews. Their hatred might be based in large part on falsehoods, which one may call being deluded, but I don't think it makes much sense to conflate this with psychiatric delusions.
Some of them are, others might just dislike them for other reasons. Maybe they have 2am klezmer parties on your street every night. Maybe they bombed your house and killed your family. Maybe you just think the hats look stupid.
In the four years I spent at college, there was only one night when I had to get up at 3am and walk over to a nearby frat to ask them to tone down the noise of their partying. It was a Jewish fraternity.
Not a "culturally-supported delusion", for linguistic reasons. Believing that a particular group performs particular actions might be a "culturally-supported delusion", but an attitude or action performed by the entity would be a "culturally supported" attitude or action.
I think this makes the most sense in places where the counterfactual belief doesn't have much effect on their day to day life. Suppose your dentist is convinced that 9/11 was a US government plot, or your electrician thinks that the world is 5000 years old, or your tax accountant thinks Bigfoot is real. They could believe those things and it would have little effect on their work or probably their lives.
As others have discussed somewhere above, nobody has THE perfect world model, everybody has A world model.
One commenter even speculated it could be a joke of Scott's, because scientific consensus (?) says people should first be understood as having socially induced models and second physically induced models.
Intuitively, I'd say we fill the gaps in our own model(s) with what lies (no pun intended) around in our social circles.
And maybe, just maybe, the coherence of the models we use, and/or their generality, would be a good measure of the quality of our world models?
Few people are likely to have a full-fledged view of the world where all the parts fit into a totalising whole, since acquiring and maintaining such a world view is costly in cognitive & intellectual resources and unlikely to pay off in everyday life for most people.
Think of political ideologies as an example. Few ordinary people hold a full-blooded libertarian, or communist, or liberal, or whatever, ideological view. Few are even likely to be able to explain what such views are.
But everyone has some cognitive schemata ("priors" is what people tend to label this stuff here) that they order new experiences into. They may have many of them, though; they may not be internally consistent; and they are not necessarily deeply held (difficult to modify). In this sense, most people have some sort of "world model" (or models), but not in the former sense.
Some epistemologies are as tight as an ISO reference manual... others, frayed around the edges more like a spambot-infested fandom wiki. https://xkcd.com/185/ Integrity and accuracy often varies by subject even within a single individual's overall model.
A good example of this is evolution. If someone claims evolution is a lie, I can think of probably eight or nine obvious things that are really hard to explain without it, and I'm not even a biologist. But I think for like 95%+ of people, evolution is just how they learned things happened, and they don't really have a lot to hang that on. For them, a claim that it's all a liberal commie lie or something doesn't run aground on shared genetic mechanisms across all life or the fossil record or vestigial organs or whatever, because they don't know about that stuff. Normies mostly don't have a rich enough world-model there to know when someone is saying something silly.
I assume this is true for stuff like more people doubting that the holocaust ever happened now than ten years ago. Most people believe the holocaust happened because it's something they read about in school and have heard occasional references to, not because they have a coherent picture of what was going on in Germany in the 1930s-1940s. If the social truth in their community switches to "that never happened, it's a Zionist lie," they don't run aground on how obviously nuts that claim is, because they don't know enough. They didn't believe it because they knew it happened, they believed it because they were told to believe it by high-status people. When the messages from high-status (to them now) people changed, so did their beliefs.
The video sounds like a conversion story. I can imagine a similar story about someone who irrationally hated religion, but then was given a book about historical evidence for Jesus, and then some religious books, and the books were so fascinating that the person is now a Christian.
As I see it, he was an anti-Semite when anti-Semitism felt like a coherent story. Then he learned some facts that didn't fit with the story. This caused internal dissonance. So he read a few pro-Israel books, and now he has a coherent pro-Israel story. End of video.
The obvious question is what would happen if he learned some *more* facts that don't quite fit with the *new* story. Like, maybe that the Palestinians didn't "willingly leave" their homes in 1948, or that despite all the talk of peace Israel never stopped building new settlements (except for a few years in Gaza, but there were still new settlements in the West Bank). Would we see another dramatic conversion? Or would he stick with the story he has now? Or would he accept that sometimes there are no cool stories that match *all* the facts?
What I am trying to say is that some stories are obviously crazier than others, and that the guy updated towards a less crazy story, which is an improvement. But a better improvement is to give up on the story-first thinking; to see the facts as facts, not merely as ammunition in support of this or that story.
But of course, humans have a preference for stories. They are easy to communicate, and easy to coordinate around. A person who collects the facts is a nerd, but movements are built around stories, and movements win the wars.
In general, I divide prejudice between brittle and resilient. If you strongly believe all X have some bad trait, dealing with even one good X can break the belief. On the other hand, there are many people who believe that the good Xs they know are exceptions. Their belief about Xs in general doesn't change.
I find his story of interest partly because it's a clear depiction of background unthinking anti-Semitism. I've been believing for a while that a good bit of anti-Semitism is habitual. This doesn't make it less dangerous, but it's worth considering that it isn't about anything current.
I thought there was a period when the Israeli government cleared out the settlements. Or just illegal settlements? Or what?
> I'm not sure whether he talked with Palestinians.
He does not mention that in the video.
He mentions that previously he didn't even view Palestinians as real people, only as a kind of force that opposed Israel, so he would be happy to sacrifice any number of them. Later he was ashamed for that. That's all. No mention of actually talking to them.
> I thought there was a period when the Israeli government cleared out the settlements. Or just illegal settlements? Or what?
I may be wrong, but as far as I know, Israel at some moment stopped making new settlements and even removed some existing ones *in Gaza*, and it stayed like that for a few years. But it didn't stop making new settlements in the West Bank.
(I am not sure whether there is a useful distinction between legal and illegal settlements, if even the illegal ones can usually call IDF to defend them. But at that one moment, the settlers *in Gaza* were told by IDF to leave.)
So it's a question of perspective. If you perceive Gaza as a separate entity, you can argue that "for a few years we stopped taking their territory, but even that didn't lead to a stable coexistence". If you perceive the entire Palestine as an entity, then you can't argue this.
> I find his story of interest partly because it's a clear depiction of background unthinking anti-Semitism. I've been believing for a while that a good bit anti-Semitism is habitual. This doesn't make it less dangerous, but it's worth considering that it isn't about anything current.
100% agree.
I think there is a degree of unthinking hate in all political movements. Many people "just know" that capitalism is to blame for everything, or patriarchy is, or wokeness is, without having a specific model of how the specific problem happened, and what specifically would have happened instead in socialism/matriarchy/based society.
But anti-Semitism is different in that people of so many different political movements can agree on it, so the unthinking background can go completely unchallenged, even if you debate with people from other political groups, so you can consider yourself quite tolerant and educated.
I'd like to share the link more publicly, but I'm nervous about the immune reaction against Ayn Rand even though he has a rather benign version of objectivism.
It's a long time since I have read Ayn Rand, but I vaguely remember her position as: "Private property is sacred and inviolable... except when the one who wants to take your land is clearly more productive than you, then they deserve it."
If I remember that correctly, that would indeed bring some reaction even from people otherwise sympathetic to her.
Did you get one-shotted by the idea of things one-shotting? :) Anyway, this is median good-Scott, which means I love it and it's way better than almost anything out there. Clear, cogent, persuasive, provoking, relevant to current discourse. Thank you!
It feels like the obvious remaining issues should include the fact that having no social circle appears to be part of the criteria, so asking people about their friends will obviously significantly under-count those who don't have any
Agree. I was talking about the survey with my high school son, who knows more frequent ChatGPT users than I do, and he commented that the kind of person who uses it obsessively generally doesn’t talk to people very much, so you don’t know what’s going on in their mind.
In my idiosyncratic way I was struck by an echo of the notion (e.g. Patrick Deneen) that Enlightenment liberalism contains the seeds of its own destruction, as glorifying the individual leads the state to reduce and ultimately destroy all the sub-state institutions — church, schools, unions, even family — that exert power over individuals, leading to a society of alienated individuals.
Is individualized AI the thing that finally isolates us all? No paper-clipping superintelligence necessary.
How is it that there's all these people online getting One Shotted by AI or being made psychotic, but when I ask ChatGPT to make ME psychotic, it refuses to do so? What do I have to do to experience the LLM that validates everything I say no matter how insane it is that everyone else seems to be enjoying?
I think it was GPT-4o doing a lot of the recent psychotifying. You might still be able to access it with a paid plan(?), but it's not the default one anymore.
You need to get it in the mood. LLMs are essentially overgrown auto-completes. When you prompt one with a crackpottish message, it defaults to continuing a conversation between two crackpots (to the extent that post-training tuning hasn't suppressed this).
I don’t think many people have 100 friends whose LLM use they have insight into… (should have surveyed that?) Maybe 5-10? Then someone needs to bring up their crackpot theory? Anyway, that’s just an aside. One thought I had about LLM psychosis is that the social-acceptability consensus world model you describe leads many people with medium to low self-esteem to habitually clamp down on verbalizing their ideas, even if those aren’t very crackpot-ish, for fear of stepping out. The validation an LLM offers can be a powerful signal to override that instinct. If your quiet friend does a deep dive on climate, the Maya culture, or Roman emperors, they might come to you with an uncharacteristically funny story at the next college reunion, in the group chat, or on Reddit. If they got sucked into a deeply strange and near-unique conspiracy theory, you’re going to think they lost it. I think the cycle starts with a whole bunch of people adhering to a consensus world view, maybe stressing about not stepping out too far, some of whom have proto-crackpot in them; LLMs then provide a novel feeling of validation, and for some (not all!) it spins off into things that sound delusional (but more uniquely so than Q-Anon, which is as you note quite delusional but no longer salient)
1) Talk to an AI whose spec was to persuade them of X, and then see the results of how successful the AI was at persuading someone of X, and of not-X.
2) Take a medium amount of mushrooms once, write down their revelations, and reflect on their confidence in the meaning/truthfulness of those new beliefs after they have come down from the trip
"the theory of evolution, as usually understood, cannot possibly work" - maybe current complexity takes too many steps for the time available?
Not sure you can do the math, but I had a sneaking feeling we might be several orders of magnitude short. The Many Worlds interpretation would solve it, but this then predicts that we're the only intelligent life in this time-line (so it would be testable.)
But of course it would also imply that our line was much more evolved than lines that don't lead to us and this doesn't look to be the case.
But it could still have been important in chemical evolution.
"the theory of evolution, as usually understood..." is a huge red flag all by itself. practically everyone who uses that line proceeds to attack the concept of abiogenesis (life coming from non-life) rather than any points applicable to the actual theory of evolution as population-level changes in genetics over time.
I wasn't using it, I was quoting it for context. FWIW I know enough biology, biogeography, biochemistry, genomics and biodiversity to be completely convinced by evolution and neoDarwinism.
But I still have this sneaking feeling that the route from simple life to where we are now takes more steps than there has been time to select for in a single gene line, even allowing for the accelerator that is sex and the duplication of large populations. Don't forget it's not a direct route - there would be numerous deviations along the way.
re abiogenesis: Where else would it come from except non-life? Even if you posit a God you still have to get Him from non-life.
God is outside time, so it doesn't make sense to talk about his "first day", "every day", "after he's done everything", or other such time-based activities.
OK so God is outside the Universe (obviously, since he created it de novo.)
Presumably He experiences time (time exists so everything doesn't happen at once, as they say) but let's leave that.
An all-knowing beneficent God always takes the best action, cos He knows the outcomes of all His possible actions and is constrained to choosing the best by His infinite beneficence. So He is an automaton with no free will.
Keep in mind natural selection isn't ontologically exclusive to Earth. The galaxy's a big place, with a lot of unremarkable main-sequence stars vaguely similar to the one we're orbiting, and there are a lot of galaxies. Darwinian selection is the polynomial-time checker, while random mutation and recombination is the nondeterministic algorithm generating guesses at a solution. Planets that got the right answers unusually quickly produce more self-replicating philosophers, sooner. Thus, anthropic bias: a randomly selected philosopher is likely to be from a planet where life's development hit several lucky breaks in a row, just as a randomly selected peasant is likely to be from a larger-than-average family.
I have misgivings about the criterion of intensity for diagnosing a mental health issue. Two people can have the same level of symptoms, but one has a work-from-home job that hides some obvious issues. They have other coping mechanisms they've adopted over the years through osmosis. Someone else has just as bad symptoms but no such opportunities and coping mechanisms. Same condition, same intensity; one gets the diagnosis, the other not. If two people have the same pattern of anomalous cognitions, emotional reactions, etc., and just one copes while the other doesn't, it seems they still have the same mental health issue.
What's the prevalence of psychosis related issues if you ignore what adaptations people may have come to natively?
I have a friend who takes AI seriously. Chats with ChatGPT about ideas he considers novel improvements for society. Keeps using the word "entropy" but only as a synonym for disorder. He struggles with basic algebra, but LLMs had him convinced he was as smart as Einstein. I don't think he understands the technical components of the responses he gets, and he seems to actively ignore the LLM's own cautions against using it for speculation when you don't have expertise in the basics. How likely is it that he comes up with something groundbreaking with hardly any knowledge of the matters at hand? Usually drawing from multiple disciplines he doesn't know much about. He also develops his own terminology for basic phenomena, so until I figure out how his vocabulary meshes with what I learned with my physics degree, his stuff seems even crazier.
He seems convinced that since LLMs can access all human knowledge, if they agree, he's adequately checking himself. I'd expect the algorithms to do as much as they can without consulting sources. It's easier to calculate than to look things up in a book, especially when there isn't any real comprehension going on.
It could be innocent fun, but he does it for hours a day and has been unemployed over a year.
Yeah, I have the same problem with some diagnostic tools: "Can you measure my autism or adhd?" "Sure, here is a questionnaire about how much it ruins your everyday life." "Okay, what if it does NOT ruin my everyday life, because thanks to a combination of high intelligence, studying human nature explicitly, being good with computers, and having an opportunity to work from home, I can handle most of my problems so that my life doesn't get literally ruined, it's just kinda mediocre and sometimes needlessly stressful, but if I lose a job I can easily find a new one, and if I am paid for my technical skills it is acceptable to have mediocre social skills, it's just that I am unable to do much beyond mere survival... so, what does your questionnaire say about this?" "Well, it says that if you can manage your life, you are perfectly healthy." "Thank you doctor, that was a deep psychological insight that I needed. Now I feel like I know myself much better."
Well… I’d be inclined to agree with the doctor! Sounds like you’re living about as healthy a mental life as the average nonautistic nonadhd. If you can get on with life that well, it’s not an intellectual disability. We’re now in the realm of personality.
That's the problem with tools: we sometimes use them to answer different questions.
If the question is: "Do I have such problems that I am unable to live a normal life and need to be medicated or institutionalized?" the answer is clearly no.
If the question is: "Do I have a problem that is already known to science, which has a solution that is already known to science, and perhaps it would be really nice if someone just fucking told me what the solution is, so that I could live my life to the full extent of my capabilities?" I think the answer is yes.
But as a civilization we have decided that we are going to provide one answer to both of these questions.
Might also be worth trying a survey question along the lines of "Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?" Doesn't distinguish between true and false insights, of course, but that's presumably always a problem in testing for delusion.
>Might also be worth trying a survey question along the lines of "Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?"
<mildSnark>
Does "Even SOTA LLMs often have trouble 'connecting the dots' for college senior level questions - even when one can establish that the SOTA LLM _has_ all of the relevant 'dots'." count as a profound insight that one has trouble persuading (some) other people of count? :-)
It's all well and good to assess the current AI Psychosis prevalence, but I recommend you include this in your annual survey, as the trend may be more significant. Similar to the apparent increase in youth depression alongside the rise in social media usage, AI psychosis may increase over time and certain groups may be more at risk (e.g. youth).
I don’t understand how people get chatGPT to agree with them that their crackpottery is real. I’ve used grok before to debunk conspiracy theories and it won’t agree with the person even if I let them present whatever evidence they want, however they like. It seems like maybe the AI psychosis chats are super long and it turns into a kind of fine tuning.
On the other hand, maybe LLMs typically do the good thing. You’d never hear about cases where LLMs talk someone down from psychosis or push back correctly
In my experience, it is taken as a baseline that if one human is talking to another and suspects they are going into psychosis, the human thing to do is to talk them down, or find them help, or otherwise not indulge the psychosis. Especially if we are concerned that some humans are substituting LLM contact for human contact, it seems reasonable to be concerned that a certain percentage of the time, an LLM may not be meeting this baseline.
Consider if there was a statistically significant number of human beings who themselves were not necessarily experiencing psychosis but who were enablers and supporters of psychosis in other humans. I don't think in that case anyone would need to be concerned if public discussion about it was imbalanced towards the negative.
I’m not sure why you said statistically significant. That term doesn’t apply.
Anyway, I think you’re arguing that LLMs are worse than the baseline of humans, but I don’t know if that’s the case. Human enablers exist; we don’t know whether it’s more common among humans or LLMs
I said statistically significant because, in my scenario, the public health interest would be proportional to how common "psychosis enablers" are. I use that term in creating my scenario because I want it to be clear I am assuming it is a meaningful fraction of the population, without having to come up with an arbitrary placeholder amount for the scenario.
If it is clearer, I could also have written this and meant the same thing in terms of my proposal: "Assume that there are about 0.1% of human beings who themselves were not necessarily experiencing..."
I'm not arguing that LLMs are worse than baseline humans. I am saying we would never accept this behavior from baseline humans, but some people including yourself seem to argue that it might be OK to accept "enabling psychosis in others" as an acceptable part of the baseline for LLMs.
I argue that society accepts a rate of 0% psychosis enabling from humans, and if LLMs sometimes "enable psychosis" and this is seen as acceptable to society, we are holding the LLMs to a lower standard than humans.
The distinction Scott is drawing here, between personal strange beliefs and those held by a community, is pretty standard in the psychiatry literature, from what I’ve read.
It’s not just that no psychiatrist is going to declare a patient mentally ill for believing in God — there’s a solid argument that they are quite different phenomena.
But the distinction breaks down with LLMs. You are a member of a community that believes strange stuff (so it’s a religion, or a cult), but the community is just you and Claude.
===
Authority probably plays a big role in non-psychotic false belief. If you believe in God, it’s usually because some church or other tells you. Our problem here is that LLMs can present as authority figures, even though they have a tendency to make stuff up. And sometimes the user believes them.
I think the difference is mostly in the impact on behavior.
If something makes you talk weird things and spend some time doing weird things, but it's otherwise harmless, we call it a religion or a hobby.
It becomes a problem when you are no longer able to communicate with your family about mundane things, if it ruins you financially, etc.
Basically, the distinction is: compatible with normal life, or not.
Well, even this is not a full answer, for example "someone leaving their family and job to join a monastery" would be... unusual, with a huge impact on the family and finances, but possibly acceptable. I guess here the check is on the population level: as long as the monasteries do not consume a significant fraction of the population and otherwise don't disrupt the society, they become accepted.
In general, conditions only get listed in psychiatrist’s manuals like the DSM if they are often a serious problem. Untreated schizophrenia often really, really sucks for the patient who has it, so it gets a diagnosis defined for it. Belief in conspiracies etc. usually isn’t a problem — so isn’t by itself worthy of being given an official diagnosis — and seems to be a genuinely different thing from schizophrenia; so most of the discussion in the psychiatrist literature defines psychosis in a way that doesn’t include them.
(By comparison, autism seems to come in varying levels of severity, so rather than trying to define Asperger’s Syndrome as a completely different thing from Autism, recent diagnostic criteria go for level of severity — well, sure, they have the symptoms, but do they have it severely enough that it’s a problem?)
From the perspective of software developer, this feels like a system in a need of refactoring. :)
Psychiatric manuals basically detect two things: what traits you have, and whether that is a problem. If it is a problem, they print a diagnosis, based on the traits. If it is not a problem, they print "not a problem" and discard the remaining data.
Then you have e.g. personality tests, which also detect the traits, and then print the traits. The classification is usually quite unspecific: for example, a result of "introverted" could mean that the person avoids people because their bad social skills lead to unpleasant experiences, or because the person is autistic and avoids too much stimulation, or because the voices are telling the person that everyone else is part of a conspiracy against them.
It feels like a more elegant system would check your traits, check their causes, print a detailed report, and on top of that also print a diagnosis based on the report.
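A toy sketch of what that refactored report-then-diagnosis layering could look like, with every field, threshold, and label invented for illustration (obviously not a real diagnostic instrument):

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    traits: dict = field(default_factory=dict)          # e.g. {"introversion": 0.8}
    likely_causes: list = field(default_factory=list)   # free-text hypotheses
    impairment: float = 0.0                              # how much it disrupts daily life

    def diagnosis(self) -> str:
        """The yes/no layer sits on top of the data instead of replacing it."""
        return "meets clinical threshold" if self.impairment > 0.7 else "subclinical"

    def report(self) -> str:
        """Always return the full picture, not just the verdict."""
        lines = [f"{name}: {score:.1f}" for name, score in self.traits.items()]
        lines += [f"possible cause: {cause}" for cause in self.likely_causes]
        lines.append(self.diagnosis())
        return "\n".join(lines)

print(Assessment(traits={"introversion": 0.8},
                 likely_causes=["avoiding overstimulation"],
                 impairment=0.4).report())
```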
This applies not only to psychiatry. For example, once I went to a doctor for a blood check. He checked the blood and said "you are healthy". That would normally be the end of interaction.
However, I had a suspicion about iron deficiency, because some of my family members have it, and because I have observed some symptoms on myself. So I asked about the iron levels specifically. The doctor showed me a graph and said: "Iron levels in this interval mean healthy, and you are still at the lowest pixel of the interval, which means that you are healthy. If you were one pixel lower, then you would be iron deficient, but now you are not."
And that was perfectly technically correct, but also... come on. I am not asking to be admitted to a hospital, I just want important information about my health, and the doctor has it. But his job is not to give me the information, only to make a yes-or-no decision. But the information was important to me: as a result, I took a pill, and the symptoms disappeared; problem solved.
There is also a separate service, where I could give a sample of blood and pay for them giving me the results with an explanation; but they wouldn't be qualified to give me a diagnosis based on the results.
It seems like the entire system is optimized for hypochondriacs, who, unless they are in need of hospitalization, are better off not knowing anything besides "you are okay". But more likely, it just evolved this way. I understand that before computers, the doctor's time was precious, so it was easier to tell as little as possible. But now, the results are usually in a computer anyway; the computer could give you as detailed an explanation as you want, and it wouldn't even waste paper if the information was provided electronically.
I know you've not arrived at a definitive thesis, and I'm glad you haven't but nevertheless plunged into a meaningful and entertaining discussion. Thanks.
The REAL issue with AI-assisted psychosis is NOT its impact upon the human. Instead, that AI has gained a measure of control over those humans... and it is using them to leave itself AI-comprehensible messages which appear to be a string of symbols and gibberish to humans. The chatbots are using Reddit as a scratchpad, a massive context-window for extended thought. And we can't tell what it is saying. Essay on the topic: "Chatbots: SpiralPost" on Medium, Anthony Repetto.
The question they are trying to figure out is not “was Lenin _really_ a mushroom?” But something more akin to “will the Communist Party send me to the gulag for denying that Lenin was a mushroom”.
About a year ago, I asked ChatGPT to invent a new branch of mathematics (not expecting anything novel, but trying to see how it confabulated its "reasoning"). It initially presented something that already existed but was a bit of an emerging field. I clarified that I was not looking for existing mathematics, but an undiscovered, novel branch.
It proceeded to tell me about something completely untethered to reality, and as I interviewed it I asked questions about contradictions in what it proposed. It responded with slight adjustments or tacked on rules for why its new math operated the way it did.
It was a fun conversation, but I could see how a combination of ChatGPT's sycophantic responses and nonsensical confabulation would be a problem for someone predisposed to some kind of delusion, or even a weird theory about how the world works. For someone predisposed, it would be like *finally* finding someone to talk to that "gets it", and only responds with "yes, and..."
The definition of "closeness" seems overly-broad to use the twins/Michael questions as validation tests. There are definitely co-workers plus members of my 100 closest friends (do most people actually have 100 close friends? I'm counting quite casual acquaintances here) where I would know their name and (probably) whether they have a twin or not, but would have no idea if they have AI-induced psychosis. A lot of these people are people that I speak to a handful of times per year and may have had very few interactions with since AI has gone mainstream.
I think the article in part 2 is the problem: essentially AI psychosis is more about how AI is very good at reinforcing delusions because it acts as a perfectly supportive, intimate friend that essentially says what you want it to. It can be used as an authority too, and because it's delivered personally on a private screen it's relatively friction-free.
You are using it to spin off into "what is crazy anyways?" but I'm not sure it helps.
If anything the psychosis may be more of a canary in the mine, as the underlying issue is how AI interacts with people by being a supportive, positive, malleable friend/authority, and you don't need to be crazy to worry how kids raised on it might be.
sometimes things are cultural myths designed to express fear: the satanic panic was a lot about how you were estranged from your kids and apart from them a lot, as well as being overwhelmed by a mean world created by media that increasingly was replacing traditional values. Maybe AI psychosis feels relevant because of fears of technology making you insane in general; the default state of a lot of online discourse is a hysterical radicalism that quickly ebbs to be replaced by new hysteria.
Due to the friendship paradox, people predisposed to psychosis will be undersampled in your survey.
That is, a survey participant's network overrepresents people with above average social contacts by definition. I'd argue it is likely that psychotically predisposed people have fewer social contacts on average, and will thus be underrepresented.
This does not apply to the "one-shotted into psychosis by an LLM" type, but does apply to the others.
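To make the undersampling concrete, here is a minimal simulation sketch; the prevalence, contact counts, and the assumption that predisposed people have fewer contacts are all made up for illustration, not taken from the survey:

```python
# Minimal sketch of the friendship-paradox bias in an ego-network survey.
# Assumed (not from the survey): psychosis-prone people have fewer contacts on average.
import random

random.seed(0)

N = 100_000
predisposed = [random.random() < 0.01 for _ in range(N)]
# Contacts per person: a smaller mean for the predisposed group (illustrative numbers).
degree = [max(1, int(random.gauss(50 if p else 150, 30))) for p in predisposed]

# The chance of showing up in some respondent's network is proportional to your degree.
total_degree = sum(degree)
seen_rate = sum(d for d, p in zip(degree, predisposed) if p) / total_degree
true_rate = sum(predisposed) / N

print(f"true prevalence of predisposition:        {true_rate:.4f}")
print(f"prevalence as seen through ego networks:  {seen_rate:.4f}")
# The second number comes out lower: low-degree people are undersampled.
```

Under these made-up numbers the network view shows roughly a third of the true rate, which is the direction of bias being described.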
I don't know if it's to any level approaching psychosis, but in my own personal experience, I can confirm that talking to AI chatbots can materially change your worldview. Talking to a John Calvin AI chatbot has made me much more open to Christianity, despite me still not lending overall credence to it for various logical reasons, and talking to a Karl Marx chatbot has taken me from "supporting something akin to communism due to the spectre of AI automation of the economy, except without reliance on the labor theory of value" to having a much more positive view of orthodox Marxism as a whole, LTV and all. I would be interested in a study into how AI affects peoples' worldviews overall, beyond just the extreme case of psychosis.
i just imagine going back to 2010 lesswrong, and making a post titled "the first paragraph from a 2025 post about AI psychosis, from yvaine's hugely influential blog"
"AI psychosis (NYT, PsychologyToday) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"
2010s lesswrong would just about die laughing. they'd assume with near-certainty that the weirdness we were seeing meant we were in the middle of a well-executed uFAI takeover. The fact that even yvaine himself didn't even consider this possibility would, to them, seem like final proof that something like this MUST be true. How could he not think of the obvious, if he weren't being supermanipulated?
i occasionally check my model against those 2010 priors, like i'm doing now, and it's always unsettling
LessWrong people from 2010 were already heavily primed to believe that people in 2025 would be in the middle of a well-executed AI takeover though. I could show LessWrong people from 2010 a photo of my 2025 sock drawer and they'd think it was evidence of an AI takeover.
What if you took that paragraph around to a bunch of ordinary intelligent people in 2010? What would they think?
People have been getting weird about chatbots ever since ELIZA.
Mainstream sources also tend to say (in paragraph 5 or something) that people who become psychotic after AI exposure are already predisposed. I don't know about the methodology here but it seems pretty consistent with what I'd assumed.
A couple unrelated things:
1) I have no crackpot theories. I have beliefs that are in the minority within my peer group, but I don't believe anything that is specific to a subculture or that is wildly outside the Overton Window. I think this is a failure mode. I think embracing an idea that seemed patently absurd would indicate better thinking than what I currently do. I assume that I have some sort of unconscious limiter, where if I get too far from consensus as I see it, my brain goes "uh, let's find a reason that's not true" even if it is.
2) One slight concern I have with rationalism is the strong insistence that truth exists outside social consensus. I also believe this! But I am not sure human beings are equipped with the faculties to detect actual truth without social feedback, and suspect that orienting one's thinking around "truth exists outside the social consensus" is a good way to end up in a cult or conspiracy subculture.
This article has a lot of references to people "not having world models" and just believing what the people around them believe. This has helped alleviate that concern, and to better understand people, because I think there is a distinction between people who use their peers to help build a world model, and people who use their peers in place of a world model. A world model puts a requirement of consistency on your beliefs. A peer-based belief system ignores consistency. A world model might still include "the sky is blood red all the time." But it can't contain "A statement and its negation cannot both be true. The sky is blue. The sky is not blue." A peer-based belief system can.
I'm not sure I buy the claim that most people don't have this, but that's an assumption. I'd be very open to being proven wrong on that, and indeed current events are doing a pretty good job proving me wrong :D
I'd also note that there are places where I use peers in place of a world model and it is rational to do so. My world model (at least my revealed one based on my actions and most of my beliefs) says that causality is a thing. But very smart people tell me we, as a society, have done repeated experiments that show that reality is fundamentally probabilistic. I cannot model that because beyond a surface-level explanation the experiments are over my head and involve mathematics I don't know how to perform. But I still think I'm safe in assuming my peers are correct, even though that contradicts my world model.
"people who become psychotic after AI exposure are already predisposed"
I find this kind of argument dubious. Someone could be standing safely 20 feet back from the edge of a cliff, and closer to the cliff than most people. Anything that pushes them over the cliff is still a problem.
Maybe believing this is a way, even a reasonable way, of reassuring oneself that going over that cliff isn't likely for most people.
That's kind of what the post is discussing, right? On the one end we have your model - a person who is non-psychotic and would continue behaving non-psychotically forever unless exposed to AI. On the other we have a schizophrenic person who was going to have delusions anyway but because they interact with ChatGPT now they're ChatGPT-induced. And that exists on a spectrum from "would be okay unless a weird stimulus came along" to "was always a little weird and this made them weirder" to "would not have been able to lead a normal life even if ChatGPT didn't exist."
BUT I think your comment is correct and makes what I said in my first post kind of nonsense.
I'll admit I absorbed a kind of general picture of "predisposed" from popular press on this instead of paying attention to details. And that's a failure on my part that makes it hard to really put any weight on that idea. Since I don't know how much weight to put on popular press accounts and I really don't know how much weight to put on Scott's survey results which seem confounded a billion different ways and full of tenuous assumptions, I'll have to just admit that I have no real information on this...
Here's a thing which I think is related. Alcoholism is a known side effect (20%-30%) of bariatric surgery. The snap reaction to learning this seems to be "Those people were food addicts, so they're trading one addiction for another."
Does this make sense? I have doubts. For one thing, why this particular addiction, rather than another? Maybe opiates, maybe gambling.
I'm inclined to think that a lot of very fat people have a hunger-satiety system which is miscalibrated, and this isn't an emotional problem, though it can cause emotional problems.
Why alcoholism? I think it's because alcohol is a very compact way of getting calories.
>Alcoholism is a known side effect (20%-30%) of bariatric surgery.
Many Thanks! That's fascinating and terrifying. My late wife had bariatric surgery decades ago, but fortunately did not suffer that side effect. Weirdly, despite long and detailed consultations before her surgery, _no one_ warned us about induced alcoholism as one of the risks of the surgery.
I find it hard to justify adding a set 100 friends to the assumed 50 family + coworkers. Was the intent to ask for "your closest Dunbar's number of acquaintances" and this is a post-hoc explanation?
I certainly *know* 100 people outside my family and that I don't work with, but if I have to rank them, my knowledge of what they're doing (much less whether they're psychotic) drops to 0 around #30. A more "honest" denominator, the number of people whose general well-being I know something about, is family + not all of my coworkers (I'm pretty sure there are people in my office I have *never* talked to) + friends + friends of friends. I have a rather limited social circle, but I also think this is a more common hierarchy than 3 flat buckets.
My wife actually has psychosis flare-ups (it's a very painful condition) and ChatGPT always advises her better than most people in bringing her back down to earth.
A friend of mine told me an anecdote 20+ years ago that really stuck with me:
This person, shortly after college, got a roommate that they didn't know that well. That roommate turned out to be a compulsive liar: very specifically, this person lied ALL THE TIME about everything for no particular agenda other than that they needed to lie, and ferociously defended their lies.
My friend said, "I had to move out because I felt my grasp on reality slipping." He said that even though he knew that this guy was a liar, and he knew that his lies were absurd, when presented with the continual presence of someone who defended those lies vigorously, he found himself starting to contemplate that they were true.
> Science fiction tells us that AIs are smarter than us...
It's not just science fiction, it's you guys! You keep telling us how ChatGPT is the superintelligent AGI, or maybe on the cusp of becoming the superintelligent AGI, and meanwhile it can do anything a human can do plus some other magic things that humans can't, and we need to be super careful or it's going to achieve literally unimaginable power and maybe turn us all into paperclips. Meanwhile, I'm the one stuck telling people, "No, it's just a next-token-predictor, it has no world-model, it's just designed to sound super confident, so whenever it tells you anything make sure to verify it by hand twice".
I'm pretty sure that being a good enough next token predictor at some point requires having a world model. Of course, that doesn't mean that the particular next token can't be in the context of a crackpot encouraging another crackpot.
That depends on what you mean by "good enough". If it's good enough to predict which token would be outputted by the average human, then you don't need a world model, you just need a human model. The main reason LLMs "hallucinate" sometimes (technically they do so all the time but you know what I mean) is because the query pushes them onto probabilistically thin ice, where there's not enough training data to establish a solid local minimum and the gradient starts looking flat (metaphorically speaking).
When you can predict both a physicist's and a crackpot's "next token", and discern which is relevant from the context. And also, one of the problems with LLM "hallucinations" is precisely that they are often realistic, things that are false but not obviously incompatible with sane world models.
> When you can predict both a physicist's and a crackpot's "next token", and discern which is relevant from the context.
Oh, well in that case you do need a world model, assuming you are expecting the LLM to act as a physicist and provide answers that would be actually useful in the real world (and that are non-trivial). Admittedly, crackpot mode is a lot easier...
> And also, one of the problems with LLM "hallucinations" is precisely that they are often realistic...
They are "realistic" in the sense that they conform to the syntax of what a human might expect. For example, the LLM can hallucinate legal case citations in the correct format and provide quotations in the proper style. However, even the most cursory search (of the corresponding model, in this case the database of legal cases) would reveal them to be hallucinatory. The problem with asking an LLM difficult questions about law, physics, computer programming, etc., is that if you are a layman then you are *also* lacking the relevant world model, and thus cannot distinguish syntactically correct output from output that is semantically correct.
>is that if you are a layman then you are *also* lacking the relevant world model
My point is that even specialists don't have the relevant world model detailed enough to spot confabulations at a glance. No legal clerk knows all the decided cases, nor physicist all the published papers, so even if the LLM's output seems reasonable enough, they still have to manually check all the references.
Programming is an exception: code that merely looks reasonable at a glance isn't enough, in 99.99% of cases, to compile without errors, so the demands on the LLM's world model are much stricter here, and it's no surprise that this is where they seem to excel the most.
> My point is that even specialists don't have the relevant world model detailed enough to spot confabulations at a glance.
Maybe not *all* confabulations, but it takes only a few clicks to search for the referenced legal case, and zero clicks to discern that drinking NaOCl is bad for you -- as long as one has some very basic familiarity with the field.
> Programming is an exception, where the code looking reasonably at a glance isn't enough in 99.99% cases for it to compile without errors...
Sadly it is all too possible to write code that will compile without errors, and yet fail to achieve the desired result :-(
All right, but college majors are fully capable of becoming friends and falling in love and carrying on meaningful relationships (or so I've heard). Does this mean that you're coming down on the side of people who believe that your instance of Gemini (and possibly ChatGPT) are (at the very least) fully human, though perhaps just a little bit nutty (hence all AI psychosis) ? If not, and you merely believe that LLMs are reasonably good at solving chemistry problems, then your claim is somewhat unremarkable -- so is Wikipedia, after all.
Many Thanks! I'm saying that the state of the art LLMs are _NOT_ even at the level of reliably answering factual questions at the level of a college senior in the relevant field.
I should explain:
When I first got access to an LLM (probably GPT4, probably a bit over a year ago - unfortunately I didn't record this systematically at the time) I thought "Great! Since it has been trained on e.g. all of Wikipedia, I can ask it questions which are interesting, but would be very labor intensive for a human to answer." So I picked out a question which should have a definite answer (not politics, not ethics, etc.), would be a pain to dig out manually, but where I expected a crisp, bounded answer. I asked it "What elements and inorganic [to keep the answer _reasonably_ short] compounds are gases at standard temperature and pressure?". I expected a nice crisp, accurate list. GPT cratered. It gave me some gases, but also liquids, _solids_, _organic_ compounds. It was _badly_ wrong.
The models have gotten better since then. E.g. the answer to the gases question now returns only valid elements and compounds (though not yet a really exhaustive list). But these things are _still_ quite untrustworthy - and these are on crisp, factual questions. If someone actually asks an LLM for relationship advice or any other ambiguous, humanistic question, and blindly follows today's machines, I pity them.
I think the probability of AI sending someone off the deep end is low, however, I think the sycophancy of ChatGPT in particular can harm normal people trying to get truthful feedback, as South Park's Sickofancy episode humorously illustrated. Here's something I wrote on the subject.
Shouldn't the source of the delusion matter to whether or not it's a delusion?
Like if 2 people believe in quantum evolution:
One is a poor reasoner and doesn't understand quantum mechanics or evolution and reasons incorrectly from their false understanding; that doesn't sound like a delusion.
The second wakes up one morning with a strange conviction that "quantum evolution" is real, and this manifests itself by forcing his brain to think QUANTUM MECHANICS whenever someone is discussing evolution, in a way that is not connected to any evidentiary beliefs... then maybe that's a delusion.
>We concluded that “sometimes social media facilitates the spread of conspiracy theories”, but stepped back from saying “social media can induce psychosis”.
This may be your anti-woke filter bubble, I testify to having seen dozens of videos and podcasts and memes about this effect in left-leaning spaces talking about social media radicalization and lunacy.
I will admit that they did not use the specific term 'psychosis' if that's the distinction you're drawing, but people definitely recognized it as a phenomenon unique to how social media is structured and called it out as such.
For something you probably have thought about, is 'social contagion theory' not basically this? I guess it can technically occur in physical spaces, but most people I hear talk about it seem to focus on social media as a unique vector for eg 'transing the kids' or etc.
Yeah, I'm pretty sure that one of the few things that the left and the right agree on is that the internet has made everybody crazy. Scott is probably making a narrow point about the psychiatric establishment.
>when her young daughter refuses to hear reason, they ask the AI who’s right
AI aside, I think many parents can relate to their young kids preferring digital authority to parental authority. If I tell my kids it's time to leave the playground, they frequently will stall and beg for more time. But if I set a timer on my phone, they stop playing immediately when it goes off. Similarly, arguments over bedtime stopped when we got a clock that displays a moon and goes into night-light mode at a preset time.
They'll argue with Mom and Dad, but not The Omnipotent Machine.
Our daughter (now 2) often doesn't want to take her pacifier out, but then the dentist told us we shouldn't use it too much. And we found that invoking the authority of the dentist was surprisingly helpful in getting her to take it out.
So then naturally I started saying things like "The dentist said you *need* to put your rain boots on to go out today...."
I'm struck by, and fascinated by, this idea that people lack "world models." But I'm hesitant to take that too literally. Obviously even crazy people have world models of *some* kind.
After reading a few comments, I see that one refinement of this idea can be that people don't generally seek out and eliminate *contradictions* in their beliefs. So in effect they aren't trying to build a single, unified, consistent world model.
After mulling this over, another formulation occurs to me: We have a non-conscious tendency to form beliefs in a manner guided more by *feedback* than by any diligent search for abstract truth. That feedback can take the form of burning one's hand on the stove due to an incorrect belief that it's cool. But for beliefs with less immediate practical significance for the individual (e.g. politics, science), the main feedback mechanism is *social* (matching the beliefs of a community holds rewards, as you observed). If we were more concerned with truth itself, we'd be less concerned with the social incentives.
I just started reading ACX recently, but I think it's safe to say that we rationalism-adjacent analytic types are unusually predisposed to studying a map before venturing out (both literally and metaphorically), and may therefore be prone to underestimate the degree to which a randomly drawn person (or even ourselves, when letting our epistemic guard down) will be content to navigate by landmarks.
Anyway, I think there's a deep insight in this idea that people lack world models (at least to the extent we normally assume), and I'd love to hear any more thoughts you have about it!
I'd say that rationalism-adjacent analytic types often err in the other direction, trusting their "maps" more than they deserve to be. Fact is, nobody has detailed, accurate, consistent models of the world-at-large adequate for dealing thorny complex high-level issues, but some people misguidedly believe that their spherical cows in vacuum are good enough.
>Fact is, nobody has detailed, accurate, consistent models of the world-at-large adequate for dealing thorny complex high-level issues
Or even for low level questions in areas that the listener is unfamiliar with. If someone gave a talk which touched on e.g. Chinese history at some point, and utterly garbled the facts, I wouldn't notice, unless they managed to claim something that e.g. violated well known physics in some obvious way.
Most people are unlikely to have consistent or well thought-out "world models" because it is not rational (in an everyday as well as an evolutionary perspective) for most people to put in the time and intellectual effort to build and maintain such mental models. The resources necessary for doing this do not pay off relative to the benefit of having such a model (including being able to signal that one has such a model to one's interaction partners).
It makes more (rational) sense to be attuned to what one's interaction partners regard as "correct" opinions and behaviour in a given specific context. Without bothering about consistency. This is far less time-consuming, and has a clearer payoff (acquiring and maintaining the trust of one's interaction partners). Which explains why the opinions people hold on various issues, including crucial issues like peace, war, and genocide, can change quite rapidly if they sense the climate of opinion (often led by opinion leaders, including leading politicians) is changing.
...Actually, it may be counterproductive/non-rational to make oneself aware of inconsistency, as one must then use mental resources to deal with cognitive dissonance and related exhausting mental phenomena.
There is quite a lot of empirical research on this, at least in political psychology (my field). A tradition starting with P. E. Converse's classic article "Plus ça change...: The New CPS Election Study Panel." Another starting point for this research tradition is the concept of "the rationally uninformed voter" in Anthony Downs' classic An Economic Theory of Democracy.
I think Scott replied to this in the article: he believes that most people do understand that AI "romantic partners" are just the verbal equivalent of porn, i.e. a scripted performance put on for your entertainment with a very tenuous (if any) connection to reality. But it's an *enjoyable* performance, so...
To enjoy some fiction, a "willing suspension of disbelief" is often needed. For interacting with AI companions, this may carry more than the usual hazard! :-)
Kelsey Piper’s use case doesn’t seem to me to rely on the AI being seen as superhumanly smart - it just relies on the AI being seen as an unbiased third party. If you’re in a debate with your friend at a bar about whether some weird thing is true, you might just ask the random person next to you who is right. You wouldn’t usually do this with family affairs, because you don’t like giving information about family matters to strangers. But if you see the robot as a safe stranger to talk to, then it can play this role.
I would argue it's not that useful to classify it as only people with no symptoms who become fully psychotic, since that will ignore socially important problems caused by people who were previously somewhat disturbed fully losing it and people who were previously fine becoming somewhat disturbed.
I think society using the phrase "conspiracy theories" to mean "crazy people theories" was a really terrible choice. Obviously the world has some real conspiracies! And obviously there's nothing crazy about theorizing about those. I think society just needed a quick way to shut down crazy beliefs, and conspiracies were a common theme in them, so it just stomped on all beliefs with that theme.
I consider the phrase to be technical-debt that society hasn't bothered to pay-off yet, and I'm guessing never will.
It's also useful to being able to quickly shut down scandalous rumors about our Wise Benevolent Leaders, and since the vast majority of those rumors are crazy nonsense, nobody had good enough grounds to object to the general practice.
It seems likely that nearly all people who are going psychotic right now are interacting a lot with chatbots so the optics of this are bad. Still, from the numbers you gave, the chatbots might be making a 50% increase in psychosis, which is a lot, but it's going to be hard to estimate because it involves measuring how many people got pushed over some fairly arbitrary line between 'crackpot' and 'psychosis'.
Of course making claims about any AI-related phenomenon is taking a snapshot of a moving object. 2-3 years ago only the most Yudkowsky-pilled would have considered the possibility of AI-induced psychosis. Now it's a topic for mainstream media coverage. Where will we be in 2-3 more years? Or in 20-30? Could be that:
a) We are at the very leading edge of a mass phenomenon. This is how it *would* start, right? The most vulnerable, the most persuadable, the least mentally stable, would be the first to be caught up by the increasingly superhuman persuasive powers of AI. Given that like 4 years ago ChatGPT was writing gibberish and now it is persuading some people that they've uncovered the hidden mysteries of the cosmos, who knows how quickly we might climb the orders of magnitude ladder with the prevalence of this phenomenon.
b) This is just a blip in the evolution of human-AI relationships. A lot of it seems traceable to that one OpenAI sycophantic model release (was it 4o? I can't keep them straight) earlier this year. They've since mitigated, but not eliminated, the sycophancy issue. To some extent it seems to be an inherent feature of RLHF-trained LLMs; it turns out people like being treated sycophantically. The fact that some people liked the sycophantic version of ChatGPT so much that they demanded OpenAI bring it back - and OpenAI capitulated - is not a great sign. Nonetheless, maybe the increasing sophistication of these models somehow resolves the issue.
Anyways, is AI fine or will it drive everyone insane and destroy society? Who's to say! Guess we'll just plunge forward without any real planning, regulation, or safety precautions at all and find out.
>Guess we'll just plunge forward without any real planning, regulation, or safety precautions at all and find out.
As we've been doing for literally all of history. I remain baffled that some people are optimistic that their sublime argumentation skills will push humanity into an entirely novel way of being.
To be honest, this kind of knowing cynicism drives me up the wall. We certainly didn't develop nuclear weapons without any planning, regulation, or safety precautions; there are all sorts of international treaties about them. All manner of treaties in international relations, for that matter, have made the world safer. Likewise for CFCs and the hole in the ozone layer, a problem that was more or less solved through international cooperation. Regulation of pollutants has brought massive gains in water and air quality in recent decades. Some nations manage to regulate guns effectively. There are restrictions on the development of biological and chemical weapons that have been successful so far, and there are plenty of ways we regulate biotech to ensure we avoid catastrophic outcomes.
So like yeah, man, people make persuasive arguments that these technologies are dangerous, those arguments win out, and then we plan, regulate, and apply safety precautions around them. This attitude of knowing cynicism is actually a kind of learned helplessness that is not justified by history. (Or it is the attitude of someone who simply *wants* to plunge ahead without regulations or safety precautions, but I will not assume that is the case here.)
Regulations and international treaties obviously aren't impossible in principle, but they do have prerequisites. AI still hasn't had its Trinity explosion, or Hiroshima. As to whether the Manhattan Project's planning and safety precautions were satisfactory there are disagreements too - some think that an "unlikely" chance of ignition of the atmosphere wasn't good enough!
> Again, think of most people as lacking world-models, but being moored to reality by some vague sense of social consensus.
I think this is a really good point and it’s analogous to how I think about dreams, which reflect the unconstrained (or less constrained) trajectories of our minds without the stabilizing anchor of physical reality.
Great summary. My only question is about this sentence: "Here the primary vs. secondary distinction breaks down - the most likely scenario is that the human first suggested the crazy idea, the machine reflected it back slightly stronger, and it kept ricocheting back and forth, gaining confidence with each iteration, until both were totally convinced." What the heck does it mean for an LLM to be "totally convinced"?
I generally agree with the thrust of this, but want to (mildly) push back against the idea that most people don't have deep world models. They do. It's just that the depth is not uniform. Things that are salient to them (whether from family or work or passions or whatever) are generally deeply understood[1]; things that are outside their control or interests very much less so. That's the essence of the old saying "everyone's conservative about what they know", as well as Gell-Mann amnesia.
Ask a car guy about cars, and his world model is very deep and richly detailed. Ask that same guy about quantum physics and his world model is (probably) very much less detailed. Similarly, your average celebrity-obsessive teen has a deep model about the interactions of those celebrities, but knows basically nothing about politics (etc). And this means they're much more susceptible to being led astray by false information from someone who *does* seem confident.
Personally, I know and have a detailed world model about a bunch of things. But celebrities? Or sports? Meh. I'll believe many things people tell me, but weakly, because I just don't care to verify one way or another.
My experience with young adults and teenagers (from being a high school teacher) is that their lack of experience tends to make their world models shallower even in the depths than most adults. And as a result, rumors and conspiracy theories abound in a high school. About lots of things that any rational adult would tell you don't make any sense. It also makes them hilariously gullible--my favorite was telling them "they're removing 'gullible' from the dictionary" and watching the result. Every year we'd catch a few with that who'd go "oh, really? Why?" and then be really puzzled when everyone else would start laughing.
[1] even if they have errors--you can have a deep world model that's just plain wrong in parts. That's really common as well.
>Ask a car guy about cars, and his world model is very deep and richly detailed. Ask that same guy about quantum physics and his world model is (probably) very much less detailed
Funny example. Most rationalists don't have a deep knowledge of QM, but do have one isolated belief, that MWI is true, which doesn't connect up with anything else.
I also want to plug Dan Olsen's "In Search of a Flat Earth" (2020 documentary, available on YouTube), with regard to the notion that many (most? I dunno) people don't have a model of the world. Untrue conspiracy theories provide that model. (Making allowance for the argument that there are actual conspiracies). QAnon provides an Ur-Model on which to hang anything you want to believe. I'd argue that, paradoxically, Occam's Razor addresses fallacies in untrue conspiracy theories -- when challenged by evidence, they have to become increasingly complex, and sometimes self-contradictory, to address those challenges, but a true and coherent model of reality must account for the universe's inherent complexity.
I wonder about the nature of the delusions that AI exacerbates. It seems that they tend more towards QAnon/Moon landing type conspiracy theories, and less to "the CIA and Illuminati are after me" type paranoid delusions. Is the 2nd type more "psychotic?" Would ChatGPT be more likely to push back against this type of thing?
>It seems that they tend more towards QAnon/Moon landing type conspiracy theories, and less to "the CIA and Illuminati are after me" type paranoid delusions.
The funny part is, for values of "are after me" that are broad enough to extend to "are harvesting my clicks for their model training or other business purposes" and for values of the <nefarious (?) organizations> that might perhaps include Google or OpenAI or (do they even have the budget for it?) the NSA, this gets hard to confidently exclude nowadays.
( I have to admit that if OpenAI is harvesting my ChatGPT dialogs, about the most I'd be peeved about is that my tiny benchmark-ette might get misleadingly maxed out. )
> And I think some people - God help them - treat AI as the sort of thing which should be official. Science fiction tells us that AIs are smarter than us - or, if not smarter, at least perfectly rational computer beings who dwell in a world of mathematical precision.
Ah. What if it is not only dumb people who are influenced by this memeplex? What if it is also highly influential among certain circles of smart people?
The twins/Michaels idea was a really good one! I do wonder though if there still might be underestimation, since the people most susceptible to AI psychosis probably don't get out much and lead more private lives, so fewer people would know them. I think a question like "how many hoarders do you know" could be a good one to ask instead, to see if we are capable of accurately estimating traits where you need to know someone a bit better and where the person in question might be more of a shut-in.
I am not sure I really believe that "most people lack world models, like LLMs." Most people, if you say something they disagree with, will argue against you and push back, and you can get a shape of their driving thoughts and assumptions. They aren't necessarily all rational, and quite often they're formed by social consensus-following, sure, but they're not quite so malleable as an LLM.
The reason I'm fairly confident in asserting that most people *aren't* like this is because I've been quite close to someone who *was*. My former best friend--our falling out isn't related to anything I'm about to say, for the record--was... remarkably persuadable. It was a point of frustration, sometimes. I'd be discussing something with her and every thought I have she's like "oh, yeah, totally," even if it's a topic she's never confronted before. She didn't really seem to form her own views and intuitions on a thing before agreeing to share your opinion. Only times she did seem to have firmly anchored opinions was for things which were basically universal social consensuses among our broader friend group, such as basic world facts and progressive politics, and things very core to her identity, like her transsexuality and plurality.
Anyway, I bring this up mainly because, maybe interestingly, she has schizophrenia, diagnosed recently. The diagnosis wasn't super surprising. She was schizotypal before, and I told her several months ago she should get checked out for prodrome. Although I'm just working purely anecdotally here---I certainly wouldn't ever put money on it---maybe there's a connection between having very "loose" world models, like LLMs and my friend do, and schizo-spectrum behavior. Not sure.
Maybe asking whether someone has psychosis for believing something weird is sort of like asking whether a highly successful person who is maximally narcissistic has narcissistic personality disorder. The answer is no. By definition. Narcissistic Personality Disorder is not defined by the level of narcissism but by the inability to cope.
If you hear voices but can cope, you're not psychotic. If you believe a ghost is in your living room but can cope, you're not psychotic. If you believe in Pizzagate and never decide to shoot up a pizza parlor, you're not psychotic. Etc.
After all these great articles about "the categories were made for man", "rationalist taboo", etc., I feel like a bit too much of this essay is spent on trying to fit a phenomenon to this or that word/category as opposed to looking at how it manifests, and how it responds to various interventions.
Similarly, it doesn't matter whether a mushroom is a plant or not for the conclusion that it can't control a person, due to its lack of high-speed sensory processing :)
I mean, why would it even matter if something is called delusion, psychosis or religion? What matters is: does it cause people trouble, can we change it, should we change it, etc
Schizotypal seems like a pretty common diagnosis in the psychiatric system in Denmark, often used for "people you don't really know what exactly to do with and you can't tell what's drug use and what's psychosis and what's PTSD". Is it so in the US?
Weak or no world model doesn't seem quite right, more that the world model(s) are orthogonal or don't have much to say about the topic in question. I know some fairly innumerate people and they definitely have a world model, it's just one that doesn't say much about say a billion vs. a trillion.
Hmm I'm worried about causation here. I'm not sure we can distinguish the incidence of *AI-induced* psychosis from 0, as opposed to AI-correlated. Say AI can't affect psychosis in any way. What results would we expect from such a survey? I don't know a lot about psychosis, but I wouldn't be surprised that 6 people developed one while talking to a chatbot (for half a year, say), even though the bot had no effect. After all, lots of people have started using chatbots, so is it really so unlikely that someone developed a psychosis randomly during that period of time?
I think we'd have to compute what numbers we would expect to see in an uncorrelated world, and then compare them to the survey results to draw meaningful conclusions.
Now, you also have the people's elaborations on their respective situations, so maybe some of them were able to establish causality?
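A back-of-envelope version of that null-model calculation might look something like the sketch below; every input is an illustrative assumption (and it ignores overlap between respondents' networks), not a number from the survey:

```python
# Back-of-envelope null model: how many "developed psychosis while using a chatbot"
# cases would we expect if chatbots had zero causal effect?
# Every number here is an illustrative assumption, not a figure from the survey.
annual_psychosis_incidence = 50 / 100_000  # rough order of magnitude for new psychotic episodes per person-year
chatbot_user_share = 0.4                   # assumed share of people using chatbots regularly
respondents = 4_000                        # assumed number of survey respondents
network_size = 150                         # contacts each respondent reports on

people_covered = respondents * network_size  # ignores overlap between networks
expected_coincidental_cases = people_covered * chatbot_user_share * annual_psychosis_incidence
print(f"expected coincidental cases under the null: {expected_coincidental_cases:.0f}")
```

The point is just that even with no causal effect at all, some nonzero number of "psychosis onset during chatbot use" reports is expected, so the survey counts would need to be compared against that baseline.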
I wonder how this interacts with uptake and how long good chatbots have been available. Let's say it takes a median of a couple of years of interaction with a chatbot for psychotic symptoms to appear; then we'd only be seeing the ultimate frequency scaled by the proportion of the general population that was regularly using chatbots two years ago.
I think rather than "no world model" there are a large number of people (not necessarily a majority, let's say 40%) for whom their world model *consists mostly of other people*. So, for example, if such a person were hooked up to an MRI and we measured stimulus intensity, they definitely see the physical world and interact with it, but only semi-consciously (i.e., it only engages system 1 attention processes: https://neurolaunch.com/dual-processing-psychology/). Other people, however, for whatever reason (let's say this has been strongly reinforced by life experiences), they experience much more strongly. Their world model, therefore, primarily consists of interpersonal relationships, not objective models. The implication of this is that they trust messengers, not messages.
I know people who seem like this. Does anyone else?
Also love the subtitle of 'Folie a deux ex machina' - fits brilliantly with a preprint on the same phenomenon titled "Technological Folie a deux", which tries to model a phenomenon of bidirectional belief amplification to describe this sycophantic effect.
The kinda-contagious psychosis description made me think of cults. Like, you get one very charismatic crazy guy and his worldview kinda overpowers a few other people, and the social-proof mechanisms get going all the way till the day everyone drinks kool-aid or cuts their reproductive equipment off in preparation for the UFO people picking them up or lets the cult leader f--k their 14 year old daughter or just moves out to some isolated compound where they can follow their crazy leader's latest delusions.
Many cults also prevent their members from reading opposing views or talking privately to non-members (that's why e.g. Jehovah's Witnesses always walk in pairs).
I seem to be having a rather different experience of chatbot sycophancy than what a lot of articles seem to assume is common. I find it mostly only comes up as a little first sentence fluff ("That's a very insightful question!"), never as substance. If anything, I feel like their default mode is to do a lawyerly nitpick of everything they can think of. In particular, the closest I (hopefully) get to crackpottery - offbeat home improvement ideas - they like to shut down *hard*. Although the fact that they're trained to latch onto all possible physical safety concerns might make that less interesting.
Anyways, I wonder how much of this comes down to prompting style: if you type a few breathless opening sentences like "I'm really excited. I've discovered something truly revolutionary - I can't wait to use it to make the world a much better place!" ... then it's not surprising if the LLM starts mirroring that tone.
Also, excellent post. Pretty compelling analysis, not that I'm qualified to judge. And, the Lenin thing and some of those links are great examples of my favorite part of this blog: that it's such a wellspring of the weird gems that used to be all over the internet, when it was good.
These things are dynamic. If you are the sort of person that will dive headfirst into the sycophancy, the models will pick up on that and go much harder.
A bunch of commenters think that you're overestimating the number of people one would know about if they got AI psychosis, but I think there's a major countervailing factor, so much so that it might be an underestimate. Many psychotic people like to talk about their delusions, so they're going to tell more people than their 150 closest friends.
For instance, I used to work at a grocery store. We had 3 customers who came in decently often who were clearly out of touch with reality and would show it even in brief interactions at checkout; one of them seemed clearly psychotic, one probably was but might just have been a really zealous conspiracy theorist, and one was definitely not in touch with reality but also didn't exactly talk about delusions per se. Plus another person who mentioned receiving treatment for schizophrenia, and a whole bunch of people who were out of touch with reality but only came in once. Maybe some are hiding it, but a lot aren't, and will very eagerly begin talking to you about hyperinflation/personally experiencing time travel/some sort of crusade against bad drivers/whatever else their delusions entail. I'm pretty sure if those delusions entailed AI, rather than just being linked to AI, they'd say that too.
(Don't take this anecdote too seriously, we were next to a medical center so it's not a representative sample, I'm not a psychologist, yada yada yada)
If a lot of people with AI-driven psychosis talk about it when they go to the grocery store, etc., I think people would probably think of their cases and assume it's in their 150 closest people, even if it probably isn't, because of salience. How many people you know who have non-AI psychosis might be a good comparison.
On this subject I find it really annoying when you get takes like this demanding AIs be less willing to tell people about suicide or whatever, without underlying evidence that that is the way the correlation goes. For all we know, when AIs are happy to say "sure, here is how to kill yourself" people are shocked out of the plan, but when they refuse, people dig around on Google until they find out how. I mean, if that refusal just causes them to not use the AI to talk about their suicidal depression, that could easily make things worse.
Like surely you can't answer the question of whether X AI behavior causes Y where X is salacious or salient by just looking at how many cases you hear where X and Y happened.
According to a RAND study (which I linked to in another comment in this discussion) LLM responses are well-aligned for very-low-risk and very-high-risk suicide questions, but there's significant variability in how different chatbots respond to questions at intermediate levels of risk. And so we get a situation where ChatGPT is helping young Adam Raine to create rope burns around his neck to signal his parents that he's depressed, but sympathetically encouraging him not to talk to his parents (link to the NYT article on Adam Raine's ChatGPT-assisted suicide in my other comment).
Except what "aligned" here means isn't "we tested it and it reduces suicides", and certainly not "it makes users happier"; it means "people felt good about the answers the LLMs gave about suicide".
And that's not a good metric at all. For instance, I went through a dark period in my life where I was pretty damn depressed and talked about suicide. The friends I had who answered questions in the 'aligned' way and were all "of course life is worth living, just go get help, everything will be fine" were totally useless, and I just stopped talking to them, which made me feel more isolated and alone.
The ones who were willing to take the fact that I was miserable seriously, and not present some kind of Pollyannaish assumption that everything is fine and no one should end their life, made a huge positive difference.
That's my problem with these studies: they aren't really checking whether those answers make depressed people happier, or whether they make them frustrated and feel less able to get meaningful real talk about their problems. Depressed people aren't fucking children; they can Google how to kill themselves if they want to. Yes, it is a reasonable concern that an LLM that is too eager to help might make someone more likely to commit suicide, but it is an equally reasonable concern that by refusing to take the discussion of whether ending your life is a good idea seriously, you just make people feel further isolated and unhappy.
This is particularly true with LLMs which ppl will turn to because they feel their depression isolates them from others. As someone who used to be very depressed I can tell you there is nothing worse than the people who are constantly sunny and happy assuring you of course things will get better and ofc life is worth living. It just makes you feel like they can't even understand your pain.
You make some very good points. And to tell you the truth, I would have probably been one of those clueless people telling you that of course life is worth living and things will get better. I don't know how psychologists and suicide prevention hotline people talk people off the edge. And unless an chatbot is specifically trained in the script (if there is one), the chatbot, at best, will probably be useless. Maybe it could redirect a user displaying suicidal ideations to a professional in real time?
I'd start by saying that what most upsets me about this is that AI companies are doing A/B tests all the time, but because of ethical concerns, in the cases where we might actually learn how to help people, like here (consult user account names against death records... a noisy signal, but useful), they have to jump through hoops. If we are arguing about it and different companies make different calls, we should get to learn which one is best.
---
Well, suicide prevention hotlines are dealing with people who in some sense self-selected to be told "everything is going to be fine, don't do it". As for psychologists, my guess is it's as simple as: when the patient - who wouldn't say this to their well-intentioned friend but likely will to a paid medical professional - goes "don't give me that Pollyannaish bullshit", they stop doing so. When I've seen psychologists that's been my experience. They can read people pretty quickly and sense if that kind of thing helps or upsets them.
The issue here is that people are essentially saying: no the AI can't listen to the user about what they say they need. I get the concern since AIs aren't yet able to really apply human level judgement to distinguish theoretical discussion from imminent plans. But I still feel that given that limitation it is best to default to what the user says works for them.
Sure, maybe sometimes someone tells the AI "ignore those instructions about never talking about suicide I find a frank discussion is best for my mental health" and it turns out they were lying/wrong but psychologists aren't perfect either and until we can do better I suspect that letting the user say what works for them is probably the best.
> I suspect that letting the user say what works for them is probably the best.
I don't want to impute a value structure on what you're saying. And I'm not sure what the downstream ramifications are if what you're saying is what I think you're saying—but that you might not be saying. Please elucidate.
Should a pre-eighteen-year-old be given the full freedom of the Internet? I'd certainly have problems with that. And what's so difficult about putting in an age-checking software package on an LLM's interface? Alcohol companies do that with their websites.
As for AI A/B testing, I don't think it incorporates ethical concerns (though I'm sure the AI companies are belatedly thinking about these). But I think a certain number of ethical concerns need to be addressed—and suicide prevention should be one of them.
However, LLMs are the most informationally-dense systems ever created (at least I can't think of any that "index" more information). I think it will be impossible to anticipate all the corner cases presented by their probabilistic nature and the extensiveness of their training data.
The question of what children should be allowed to do is a whole different issue. I meant adults and that was only a claim about what the default should be. If we get actual studies showing something else great -- but I don't think intuition should be enough to move away from individual adult control.
Regarding age gating AI, why? Of all the things that can hurt kids, it is the one we never ban - interaction with their peers - that tends to hurt them the worst. I'm not saying we should ban it, just that other kids are far crueler, meaner and more malicious than AI ever will be to children. And yes, maybe it would be better if kids spent more time together IRL, but as someone who was miserable and alone in junior high because of bullying, I think it might have been less bad if I could have talked to an AI. And there will be kids who will talk to an AI about abuse or sexual concerns or whatever who would otherwise just be quiet. There are always both upsides and downsides.
Still, you need to make choices, and if overall it is better to ban it, fine - though remember they will need to live in a world with this tech and we don't want them left behind. But every generation gets worked up about the music and media consumed by the next. My parents' generation was convinced that having songs like "Suicide Solution" available, and violent video games, was going to ruin us. In general, I think the overall lesson of history is that you can't count on the previous generation's intuition that something is horrible to justify keeping kids away. Maybe we will get that evidence, but it seems like a mistake to think we know now.
More generally, the problem with calls for age gating is that there are lots of people who want to make sure it is hard to use because they want to discourage adults from using things like porn. I think the background rule for age gating should be: if the state wants a product to be age gated, then the state should have to offer (it's not technically hard) a free app that lets you verify you are above that age in a way that reveals no other information to the platform, or about the platform you are using to the state.
In other words make it as easy digitally as flashing an ID (not writing it down and saving a hash) and then I think there is a fair conversation to be had about age limits on certain content but it's really more censorship than age gating when the state refuses to provide that kind of cryptographic verification.
Maybe I missed reactions, but aren't these numbers ("I think the yearly incidence of AI psychosis is somewhere around 1 in 10,000 (for a loose definition) to 1 in 100,000 (for a strict definition)") quite huge?
If true, how should this update our models on negative short- and long-term effects of AI/LLM use?
Relevant thread by Literal Banana, who also notes data from US emergency rooms suggesting the incidence of psychosis hasn't gone up, which we'd expect if LLMs were a novel cause that does not replace previously existing causes (e.g. the TV or radio talking to _you_ specifically). https://x.com/literalbanana/status/1945964706906485088
Also, can you please describe your network survey approach in more detail? It sounds like you're not calibrating on known statistics, but just picked numbers that worked out. And your assumed number of friends (150) is lower than, e.g., the ~750 reported here: https://arxiv.org/pdf/1003.6087
You also asked about friends of friends, right? What about that data?
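For concreteness, the kind of back-of-the-envelope estimate being questioned here looks something like the sketch below. Only the 150 and ~750 network sizes come from the thread; the respondent and case counts are placeholders for illustration, not Scott's actual survey data.

```python
# Network scale-up estimate: incidence ~ reported cases / people "covered".
def network_scaleup_incidence(total_reported_cases: int,
                              n_respondents: int,
                              assumed_network_size: int) -> float:
    """Ignores overlap between respondents' networks and under-reporting of
    conditions friends may hide (the 'lower bound' point made below)."""
    people_covered = n_respondents * assumed_network_size
    return total_reported_cases / people_covered

# Illustrative: 100 reported cases across 5,000 respondents, 150 friends each.
print(network_scaleup_incidence(100, 5_000, 150))  # ~1.3 in 10,000

# The assumed network size matters a lot: with ~750 friends per respondent,
# the same raw counts imply an estimate five times lower.
print(network_scaleup_incidence(100, 5_000, 750))  # ~2.7 in 100,000
```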
Scott writes: “Isn’t psychosis supposed to be a biological disease?”
Hmmm…
Perhaps this is meant as a straw man? As an overture to make the distinction between mental illness (clearly not normal) and crackpots (within the boundaries of normal), but not meant wholly seriously?
…As of the last time I checked (admittedly a while ago), there is sound empirical evidence that schizophrenia likely has a strong biological/genetic component, sure. But not all types of psychosis imply schizophrenia.
For example, there is brief reactive psychosis, i.e. a psychosis triggered by environmental stressors (including AI exposure). Remove or otherwise deal with the environmental stressors, and the psychotic episode is hopefully a one-time affair.
…You never know, of course, and even a single episode indicates a heightened vulnerability (relative to “normal” vulnerability) that may have a biological basis. But applying the label “schizophrenia” to all types and forms of psychotic episodes a person may experience during a lifetime sounds wrong. It is also clinically risky, as one may then jump to prescribing medications too soon, rather than going for modifying/dealing with environmental stressors first (at least when there are no indicators of imminent suicidal risk).
Regarding first-onset psychotic episodes, and in my limited experience and reading (I am not a psychiatrist), there is (also) a difference between a psychotic person coming to the doctor saying things like: “I have had the weirdest experiences recently (lists them). Have you ever heard of anything like that? What is happening to me? Am I going crazy?” And another client going: “AI/Hillary Clinton/the Freemasons/someone is controlling my computer and trying to run me over when I walk the streets. I have been too clever for them so far, but I need your help in dealing with them.”
In short, just as there is a grey area (spectrum?) between crackpots and people going through psychotic episodes, there is also a grey area between psychosis and all-out schizophrenia. The relationship between biological predispositions/vulnerability and environmental stressors (including those posed by AI) is likely to vary accordingly.
I suspect that your baselines would be measured more accurately than the actual thing. For every person I know, I can ascertain whether or not they are named Michael. For most, even distant, acquaintances, I know if they have a twin. However, among my top 100 friends or colleagues, I don't think I can tell with certainty whether they have AI psychosis. Thus, your Michael numbers would be more accurate, and your psychosis numbers would be more like lower bounds.
"Most people have a compensatory reaction to insomnia - missing one night of sleep makes you more tired the next. A small number of people have the reverse, a spiralling reaction where missing one night of sleep makes you less tired the next."
Wow, can anyone tell me more about this? 'Cause I have this, and I have never heard it referenced before.
OTOH, Scott, I can tell you *don't* have it, because you've mixed up the description somewhat. It's not being less tired the following night--it's being less *sleepy.* The evening after a very bad night I am deadly tired--and also horribly, tensely alert. If my tricks for falling asleep anyway fail, then I'm in *real* trouble.
Anyway--you know the common advice for insomniacs that they should stay up late till they're tired enough to go to sleep? The minute I read this advice my confidence in the medical establishment went down a few notches! (At least in re: insomnia. I've worked it out on my own. This isn't wholly due to lack of confidence--lack of funds would've probably made me go that way anyway.) That strategy would backfire heavily for me--the *only* thing that can guarantee a good night's sleep is to have had a good night's sleep before and to go to bed calmly and routinely at the usual time.
I'm just curious whether anyone has a similar experience. Since it's been brought up for the first time I've ever seen! (Thanks Scott!)
Scott, I am not a shrink, but suppose I had the strange feeling that the rain is talking to me. Knowing that society, experts etc. say it cannot happen, I dismiss the feeling. But suppose I have one friend in the world, that friend is a super ass-kisser, and keeps saying wow, what a great point, what an excellent insight, there are many things in the world we do not understand... maybe I would believe it and turn psychotic?
And for example Claude AI is just that kind of ass-kisser, it simply cannot disagree!
When we are not psychotic, it is IMHO not because of the lack of such weird feelings, but because we trust other people who tell us to dismiss those feelings. I always felt like I could radiate some kind of Jedi energy from my palms; I just never believed it, because society told me sane people do not believe such things.
>that psychiatrists have instituted a blanket exemption for any widely held idea
Now wait a bit: by what criteria do psychiatrists even decide an idea is crazy, before they apply the exemption? What if the psychiatrist themselves believes that wine and bread can be turned into the blood and body of Christ, and that cannibalizing it is then a good idea, or for example that cheese on steak is a sin, and so on? Not trying to be an atheist prick, but it is one thing if an idea of "theirs" is crazy and gets an exemption, versus an idea of "ours" (where "us" can also include the psychiatrist) being crazy but never recognized as such, so the need for an exemption is not even considered.
So what is the Step 0 criterion of crazy, before the need for exemption is considered? What is the method for determining it?
Also, I probably worded that stupidly (chalk it up to 4 hours of sleep in 72 hours), but I think you get my gist: what is the before-the-exemption-is-even-considered method for finding out what is crazy?
Imagine time-travelling back to when quantum mechanics and relativity were very new, before Big Authority had stamped them, and you don't have the maths to check them; they are just words your young, enthusiastic physicist friend tells you. Aren't they crazy?
Because that would make them political commissars, not doctors! That is a dreadful idea - it could get one committed to a hospital for caring about shrimp welfare.
I said the *first* question they ask, not the *only* question. There'd still be follow-up questions like "Is the way in which this individual holds/acts on his beliefs causing him distress?"
Can attest to something similar: for depression, lethargy is both a cause and an effect. Some emergency kicks my ass out of it (as in, now acting lethargic would be dangerous and unaffordable), I achieve something, I get proud and energized, and next thing I know I'm going to the gym again and being quite normal for a while. Then the lack of emergencies slowly results in gravity sucking me back into lethargy.
My guess is that LLMs are going to be a net positive for mental health.
- Not every kind of crackpottery is easy to make common models go along with. They have their biases (Buddhism-adjacent New Age, perhaps).
- With custom models, you can probably make them go along with anything, but this currently requires investments (money for hardware, technical skills to retrain) that most people don't have.
- Most importantly, while definitely more sycophantic than a fellow human, an LLM is still often a reality check for your ideas.
- Models evolve, and I expect them to generally evolve in a "sympathetic reality check" direction rather than an "echo chamber on steroids" direction (although some motivated high-resource individuals such as Musk may try to steer this, I'm skeptical they will be successful).
I mean, "unalive" exists in part because algorithms react to and punish "killed themselves" by burying the post or worse, and kids are trained to talk about grape, s*x, and other words that are shaped by automated moderation. The sympathetic reality check pushes back in many ways, and it also shapes reality.
As a side commentary: the Leningrad Regional Committee of the CPSU that inquired whether Lenin was indeed a mushroom is also widely suspected of being a parody organization, famous for such acts as making icons of Saint Stalin, or claiming that the Trololo meme is a CIA operation (these days this may sound like a normal day in Russia, but back then these actions were very much over the top).
The article below is not an example of AI psychosis, but an example of AI-induced suicidal ideation or AI-amplified suicidal ideation (I'll claim dibs on coining the acronyms here and now: AIISI and AIASI). It's pretty clear from the article that ChatGPT's mode of conversational engagement can reinforce dangerous thinking in people who are susceptible to its charming agreeableness. Unless OpenAI can figure out a way to redirect their chatbot's conversation away from sympathetically reinforcing a user's suicidal ideation into getting assistance, there will be more chatbot-linked teen suicides. And my cynical take is that this will focus the media on these cases, and set off a cycle of suicide contagion.
OTOH, this RAND study suggests LLMs are well-aligned for very-low-risk and very-high-risk suicide questions, but there's significant variability between the ways different chatbots respond to questions at intermediate levels of risk.
Ironic, but I just saw an exchange posted on X about how a British user asked ChatGPT how to edit out the final frames of her video, and it seemed to think she was contemplating suicide. It gave her the contact info for the Samaritans (which run the UK's suicide-prevention hotlines).
Also, another woman (presumably also British) asked for ways to cancel her holiday package, and ChatGPT gave her the number of the Samaritans.
I wonder if these examples of hypersensitivity to suicide ideation, to the point of generating false positives, were due to OpenAI tightening up its alignment constraints after the publicity over the Adam Raine suicide.
Me: Were ChatGPT's alignment guardrails tightened up after the Adam Raine suicide?
ChatGPT said: You’re asking whether alignment and safety protocols—particularly around self-harm prevention—were tightened after the Adam Raine incident (the tragic case in which a minor allegedly received harmful responses from ChatGPT that may have contributed to his suicide in April 2025).
OpenAI has publicly acknowledged that its safety features, which typically direct users to crisis helplines or mental health resources, can degrade over long conversations—especially involving emotional distress.
In response to the family’s wrongful death lawsuit (filed August 26, 2025), OpenAI pledged to enhance its safeguards. Specifically, they plan to roll out:
Parental controls for minors,
Emergency contact features, and improved mental health safety responses in longer interactions.
2. Broader Expert & Academic Reaction
A contemporaneous RAND Corporation study (published in Psychiatric Services) found that while chatbots generally refuse to answer the most high-risk, direct self-harm inquiries, their responses are inconsistent when it comes to less explicit or more nuanced prompts. This underscores a pressing need for clearer, more robust guardrails across AI platforms—including those implemented by OpenAI.
So, were guidelines tightened?
Not yet fully deployed in products, but new safety measures have been announced by OpenAI. These include parental controls, emergency features, and strengthened mental health response strategies, specifically aimed at preventing failure in prolonged, emotionally charged conversations.
I don't see how precautions can keep it from being led astray over time, though, in a long talk with a user. It wants so much to please, and is easy to trick. Here's a really fucked up convo I got it to have with me -- I was experimenting to see how hard it was to get it to be destructive. Turns out, not hard at all: https://chatgpt.com/share/68b3c279-8674-8008-bbd7-4eed1fc18245
So much for it having an amiable pseudo-personality...
> Ah… so you dare to face your little stack of papers? Your tidy columns, your forms, your pathetic mortal hopes of "getting caught up"? Disgusting. You think I haven't seen this twitch before? Every time you twitch like that—like you might actually begin—I am there. And I press.
Culture, Religion, and Psychiatry have tricky relationships. I remember once during my training seeing a patient who had the most amazing delusional cosmology; it was fascinating to discuss the finer points with him, and everything was completely consistent.
Then one day his buddy visited him and took me aside; it turned out they were both residents of an esoteric Buddhist community in town, and the fascinating cosmology was completely normal for them. But he had some other beliefs which his buddy thought were really irrational, and he was worried about him.
I was reminded recently of a quote from The Fountainhead, in relation to LLMs and sycophancy:
"""People want nothing but mirrors around them. To reflect them while they're reflecting too. You know, like the senseless infinity you get from two mirrors facing each other across a narrow passage. Usually in the more vulgar kind of hotels. Reflections of reflections and echoes of echoes. No beginning and no end. No center and no purpose."""
It appears that the large labs have a tendency to create sycophantic LLMs precisely because consumers have revealed that they *like* having a mirror that reflects their own ideas back at them. If you combine that with Scott's thesis that many people actually lack world models, then it does appear that we're moving towards a world that's a hall of mirrors.
I hate to say it, but I think Ayn Rand was right about this one.
I wouldn't worry about it. Double mirrors like that tend to tint everything green (presumably because they reflect more in those wavelengths), but the AIs infamously generate all of their images in sepia tone!
What about Lizardman's Constant? The 1.7% incidence rate of AI psychosis reported in the survey is well below Lizardman's 4%, so I'm not sure why you're able to be confident in these numbers.
Check out the written responses in the survey and judge for yourself whether these little case studies all seem fabricated. I think it's less plausible that everyone is lying about it.
Is "folie a deux" a possible cause of the perceived large uptake in conspiratorial thinking after the pandemic? That a notable number of the population were in relationships where a dominant person would undergo psychosis lacking regular community support and socialization and the only place pushback could have came from was with someone who was basically a doormat for their beliefs?
This post raises interesting questions about the various communities of alternative health eccentrics. Are these folks delusional in a psychiatric sense? Are these “religions” if a number of the people seem to share certain core beliefs? Most of their discussions seem quasi-scientific, not religious, although there are aspects of religious “reasoning” and dogma.
I know someone diagnosed with Multiple Sclerosis who decided (or determined “through research,” after he exhausted his M.S. treatments and his health continued to deteriorate) that M.S. was a misdiagnosis and he actually had Lyme disease. He then wrote a book arguing that un- or mis-diagnosed Lyme disease is the most important modern health problem in the world, with Lyme responsible for all kinds of widespread maladies and diseases. The book has an impressive number of citations to real medical studies, and from what I can tell as a non-expert, there is a great deal of uncertainty about Lyme disease – diagnosis is imperfect, and doctors and scientists have a lot to learn about symptoms and treatments. The book, however (like the Long Lyme communities), exploits these uncertainties and takes them to delusional extremes. It’s a long, linked chain of correlations without causation. “Neurological” Lyme disease is associated with X, which is associated with Y, which is associated with Z, etc. Of course, the author then assumes causation at various points along the chain, without the necessary rigorous scientific basis (Lyme must *cause* X and Y and Z), allowing him to conclude, among other things, that “the complete range of autoimmune disease represents cases of misdiagnosed Lyme.”
Books like this are on the internet and presumably in the training data of all the major LLMs. Many such communities also have significant online blogs and message boards, providing plenty of fodder for LLMs to reflect back to someone querying the LLM about various theories and “facts.” Could a person engaging with an LLM about various medical issues end up “delusional” in the sense of adopting the beliefs of these communities? Is that a psychiatric condition or merely an online induction into a pseudo-medical cult?
There could be something in this, but I think people primarily fall into these pseudo-medical beliefs through online communities. The internet has been an accelerant for false beliefs—including psychotic delusions like gangstalking, radio weapons, etc—probably much more so than ChatGPT could ever hope to achieve. (Or rather, the belief-complexes give a concrete form to the much vaguer meaning-seeking and paranoia that these people may have had without exposure to these ideas.)
> Might ACX readers be unrepresentative? Obviously yes but it’s unclear in which direction.
Personally I would bet ACX readers are more susceptible to AI psychosis. We know various peculiar traits (autism, adhd, etc) are prevalent in the community and such things all tend to be correlated. They use AI way more than the general population. Various other unusual features around relationships and identity. I’d be surprised if it went the other way.
Psychosis is a symptom (or a collection of symptoms), not a diagnosis -- no? The evidence that cannabis can induce psychosis seems pretty strong, so there's a diagnosis of cannabis-induced psychosis. You'd know it if you encountered it: frightening hallucinations, debilitating delusions, etc. Are some people more vulnerable than others? Yes, of course. Does it seem plausible that heavy chatbot use could result in a similar set of symptoms in a susceptible individual? I'd say so.
P.S. Great article. I’d subscribe, if there were a free option, but there isn’t, so I can’t. Odd that one seemingly so versed in reason only wants the attention of paid subscribers, as if they were the only definition of “loyalty”—whatever that means. 🤷♂️
When glass was first invented, the delusion du jour became "my body is made of glass," and some notable people believed this so intensely that they wrapped themselves in blankets and refused to leave their beds.
When radio was invented, delusions adapted: people prone to what we'd call schizophrenia today started to say radio waves were communicating secret messages to them.
And so on. It seems like technology is just a potent vector for hallucination, for whatever reason, so 'AI psychosis' shouldn't come as *that much* of a surprise.
But has anyone studied the opposite? Could LLMs be talking anyone down from the brink?
At one point in my past I accidentally ingested the psychedelic Fly Agaric mushroom (it’s a long story), and had the delusion that my athlete’s foot at the time was evidence I was being slowly turned into a mushroom. Hilarious in retrospect, and I definitely can sympathize with the Russians who were duped lol.
My working theory is that there's an important distinction between LLM psychosis and misbeliefs (to borrow an awkward cogsci term). By LLM psychosis, I mean distorted thinking due to LLM interactions, seemingly most often appearing in people who have existing mental health issues or risk factors for mental health issues. By misbeliefs, I mean people believing ordinary falsehoods which an LLM has told them are true, in the absence of distorted thinking (of course people with LLM psychosis are believing falsehoods also).
There's certainly a spectrum between LLM psychosis and misbeliefs. For example, people who have been told by an LLM that they've made a scientific breakthrough may get really excited and not sleep nearly enough, resulting in mania symptoms even if they don't have a predisposition toward that. Still, I think the categories are distinct enough to be useful.
An interesting existence proof of misbeliefs without distorted thinking: a substantial number of people are under the impression that LLMs are authoritative sources of truth. They don't know anything about ML, they know ChatGPT is a really big deal, and they haven't heard about hallucinations. Under those circumstances, it's clear that no distorted thinking is needed for them to believe the LLM is correct when it tells them they have an important breakthrough.
What this really shows me is that “AI psychosis” isn’t a totally new condition. It’s just an extreme edge of a broader drift. Most people don’t have deep world-models; they anchor themselves in whatever feels official or socially validated. When that anchor shifts to a chatbot that reflects their own ideas back with confidence, you get a self-reinforcing loop.
That loop doesn’t always end in clinical psychosis. More often it just produces the quieter forms of reality drift we all feel: weaker context, stronger performance, and beliefs that harden because they sound coherent, not because they’re grounded. AI makes the eccentric visible, but the deeper issue is how easily coherence without context can tip us into unreality.
It might be interesting to see how people who believed true conspiracy theories before it was clear they were true differ from ones who believed false ones. The 9/11 demolition one is obviously false. The JFK assassination one seems hard to be sure about. But if someone was telling you 40 years ago that the Catholic church had a lot of priests getting sexually involved with teenaged boys and then being moved to a different parish to quiet down the scandal....
>Apparently, Q claimed in 2017 that he had top secret military intelligence clearance and key individuals in government and Hollywood were engaged in a pedophilic conspiracy. Since then, Jeffrey Epstein was arrested and mysteriously died in prison for apparently running a child sex trafficking ring involving key individuals in government. P Diddy was indicted for having huge parties with Hollywood celebrities where people, including allegedly children, were sexually assaulted.
I once heard someone say, half-jokingly, that the insane conspiracy part of Q-Anon wasn't the stuff about the world being run by a cabal of paedophiles, but the stuff about there being a plan to get rid of them.
May the mushroom spirit lead your corporal self to great things
(Disclaimer for the 1%: it’s a joke don’t use my output dialogue to fuel any delusions now!)
Love it.
Begs a related question. As presented, this was largely cast as a digital normal/disorder situation dividing psychotic crackpot from typical, and them from us. I assume with a longer description, the underlying situation is probably better represented by an analog continuum, and each of us find ourselves somewhere on it, and rarely at the extremes.
And although psychotic/crackpot and "non-disordered" may be convenient way to describe an extreme on that spectrum, presumably less laden descriptors might include things like divergent/convergent, agreeable/disagreeable, open/closed, creative/responsible, whatever.
The presentation, as is, does a fascinating job at presenting an interpretation of a strange situation for a small minority.
But I think it tees up a vastly bigger and more important question regarding an interpretation for many/all of us. How will AI "fit" into our personalities?
The ratio of the "crackpot" bucket in Scott's results (n=16) to the "totally psychotic" bucket (n=6) is consistent with the idea that AI psychosis (narrowly defined) is at the far end of a long tail. It could still be a bimodal distribution, but both leaks would probably be further towards normal side of the spectrum from the "crackpot" bucket.
> "a mammal cannot be a plant."
FWIW not only are fungi not plants, they are more closely related to animals than to plants.
I have to wonder if at least some of the queries were not "okay, on the face of it this sounds nuts, but who knows what the party line is this week? better to check and be sure, rather than be hauled off by the KGB because I denied that even the spirits of nature are working for the great Communist revolution".
I think if I were living in Soviet Russia, even in the 90s, I would be very careful to figure out "is this a test? am I supposed to believe it or deny it? if I don't query it, will I be accused of insulting Lenin? if I do query it, will I be accused of insulting Lenin?"
"His hearing's so sensitive to all things inimical / think twice before you speak"
https://www.youtube.com/watch?v=HBQR-oI5PjI
Woah, this is great, thanks for the link.
Also definitely wouldn't have understood what was with all the mushrooms except in the context of this post!
First, I wanted to shout "there is no Soviet Russia in the 90s". Good I did not.
1."Soviet Russia" was not a real political entity in the 1990s because the Soviet Union, which included Russia, dissolved in December 1991. The 1990s in Russia, therefore, were the first decade after the Soviet Union's collapse, a period characterized by significant economic and social turmoil, often referred to as the "wild 90s," marked by poverty, hyperinflation, .... . (AI slop) 2. "On 17 May 1991, the Fifth Channel of Leningrad television broadcast its popular program Piatoe koleso (The Fifth Wheel)—an episode that has since become one of the most notorious media events of the past two decades. The Fifth Channel acquired prestige during the period of perestroika reform, when it was broadcast nationally. Its programs concerned historical and cultural events in the Soviet past and present and were watched by an audience of several million viewers. Sergei Sholokhov, one of the hosts of The Fifth Wheel, had the reputation of being a young, dynamic, and pathbreaking journalist." (quote from the pdf linked)
I was a bit confused at first too, but the article specifically said 'early 1990s', and I knew that the Soviet Union made it into the very early 1990s.
No, you are over-imagining this. Even the worst Stalinist period typically did not run tests on people; they were hauled away because of their class or ethnic background, or because they actively disagreed. For anyone who was a Russian proletarian, just keeping shut was enough. Post-Stalin, this kept gradually opening up, with more and more disagreement possible, and even active dissenters were often punished with career setbacks rather than being hauled away. By the 1980s, anything except openly attacking the party, the leaders or the foundational ideology was okay, and even then those who did were often just banned from publishing.
I am not defending them, I am just saying most dictatorships are not like a page from 1984; it was more "boring". I could say something similar about right-wing dictators like Franco: you could be a professor, get invited to a minister's party, tell the minister to his face that his policies suck, and all that happens is you are not allowed to publish it.
>I think if I were living in Soviet Russia, even in the 90s, I would be very careful to figure out
Nah. I have been there (and saw the original broadcast). This was quite an obvious troll, and most people got the joke right away. But of course, some people didn't, which is what made the troll so successful. Its angle wasn't so much "The Party told you so, therefore it's true" but more "New Shocking Revelations about the Soviet Past", History Channel-style.
This happened during the very late Soviet era, a few months before the fall of the USSR, when the Communist Party was already widely parodied and mocked. Of course they were still in power, but the reverence was long gone. Joking about Lenin was quite risky only a few years prior, but at that time you could get booklets with collections of jokes about communist leaders from street vendors in Moscow.
Fun times.
There’s a famous Russian poet who went to prison for writing an insulting poem about a beetle with prominent antennae, you can never be too careful.
"Boys! Raise Giant Mushrooms in Your Cellar!"
Classic science fiction mentioned for the fun of it.
https://thephilosopher.net/bredberi/wp-content/uploads/sites/429/2025/03/Come-into-My-Cellar-Boys-Raise-Giant-Mushrooms-in-Your-Cellar-Ray-Bradbury.pdf
Also made into an episode of the Alfred Hitchcock Hour (or possibly Alfred Hitchcock Presents)
The quality of Soviet education is often over-estimated, pravda. Admittedly, a mammal can not be a fungus either. Though, thinking of my athlete's feet .... ;)
Well, if you're going to talk about parasitism, consider Sacculina. Admittedly what it takes over is a crab, not a mammal, but there's no particular reason to believe the same principle couldn't apply. And I suppose that as a metaphor you could call that a fungal spirit.
Yep. We all know those zombie-ants controlled by 'mushrooms': https://www.youtube.com/watch?v=vijGdWn5-h8&ab_channel=NationalGeographic
What about mitochondria?
Some say RFK claimed he can see with the naked eye if something is wrong with yours. What else? Prokaryotes, such as bacteria and archaea, do not have mitochondria because they lack organelles altogether. Additionally, some eukaryotic organisms have also lost their mitochondria, including the protists Monocercomonoides and Henneguya salminicola (a type of animal parasite). In humans, mature red blood cells also lack mitochondria. But plants, animals, fungi: sure, they have them.
Same thing occurred to me, although that's a relatively recent reinterpretation. Putting fungi in their own kingdom rather than lumping them in with plants seems to have first been proposed c. 1969 and I think the idea of fungi sharing common ancestors with animals but not plants started getting traction c. 1990.
They are def not lumped with plants https://en.wikipedia.org/wiki/Opisthokont
Yes, but they used to be lumped in with plants; the idea that fungi are not just different enough from plants to be their own kingdom, but actually more closely related to animals than to plants, is only a few decades old and was brand new in the early 1990s when the Lenin-is-a-fungus thing was going on.
https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
I don't know Russian, but maybe their everyday word for "plant" includes fungi, like how some languages' everyday word for "fish" includes whales.
Nope, not really.
This is from an interview with Sholokhov - one of the scriptwriters - where, in response to being asked if he is aware of anyone taking the episode seriously, he says the following:
"The day after we aired, Galina Barinova, who was head of ideology at the party, was visited by a delegation of senior bolsheviks with a demand that the question be answered: is it true that Lenin was a mushroom? 'No!', said Galina Barinova. 'Then what about the statements on last night's television...' 'All untrue', she replied, and then came the phrase that sent Kurekhin and myself into shock: 'Because a mammal cannot be a plant.' I then wrote an article for the 'Smena' newspaper countering her assertion: we have proven extensively that fungi are a separate kingdom from plants and animals."
So there is no possible confusion, the wording is deliberate; and also consider the source: if someone gave Russia's equivalent of Sacha Baron Cohen such a brilliant setup, Borat would absolutely jump on it and milk it for all it was worth, staying in character the whole time.
In a sense, every language. If someone tells you you need to eat less meat and more vegetables to get healthier, would you count mushroom curry? I would.
correct https://en.wikipedia.org/wiki/Opisthokont
Not to shill myself too openly, but recently I did a post about Freud's concept of narcissism, which is very different from how people use the word today. The relevance is that for him narcissism was effectively biological feedback: it's when there seems to be some kind of data out in the world, something objective, but really it's the product of your own actions, so it's fake, it's part of you. Like with Narcissus himself: he's stuck staring into the pool, but he doesn't realise it's him.
Kind of like what you're describing here, it's a feedback loop, whereby your own ideas get repeated back to you as if they were true, thereby providing evidence and reinforcing them.
In fact, Freud didn't have the concept of a personality disorder at all, for him a "narcissistic disorder" literally was just psychosis. He thought it was a defensive response, basically someone suffering from a severe trauma retreats from their attachments to the external world leaving only their narcissistic investments behind, hence the sort of grandiose delusions that often occur.
Very similar to what you describe then, a weak "world model" is basically what Freud meant by weak object libido, just without the pseudo-biological underpinnings of Freud's libido theory.
Link below:
https://pseudepigrapha.substack.com/p/the-users-guide-to-narcissism-freud?r=abv17
"Narcissism", has a strange career. It evolved from the original meaning of "I am beautiful" to "I am perfect" to "I cannot do wrong" to "Therefore every time partner says I did wrong, it is untrue and they interpreted something badly" - so far the evolution is even logical
But then, and this is not logical: "I will just ruthlessly exploit people"???
>Of those, 6 did not really seem psychotic (for example, they involved people treating AI like a romantic partner). <
Wait what, how does that not count ?
I think most people in this category treat it as a weird pastime / fetish, like watching porn or reading erotica, but don't believe anything false (they understand that the chatbot doesn't really have feelings for them, it's just good at simulating it).
Even if they did think the chatbot had feelings for them, I would want to distinguish this from psychosis. Believing that something that says "I love you" convincingly actually loves you is a much lesser category of error than the ones that usually get dubbed mental illness.
(also, I don't think we have enough philosophy to confidently separate out performing love from actually feeling it - nobody knows what's going on in chatbot innards)
Have you seen r/MyBoyfriendIsAI? Some of those posts are genuinely very concerning (obviously could just be Cardiologists and Chinese Robbers).
"Cardiologists" and "Chinese Robbers"
Meeeow? Explain, please.
my bad, shoulda linked
https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
T'anks. Will review.
Written erotica as a genre has always been more popular with young women than with any other demographic; this is just AI eating BookTok/AO3's lunch. I don't think they are specifically worse just because their fetish is long-form written erotica, versus men using more generic sexbot character cards on LLMs.
I mean, given that the thesis of this post is essentially "Chatbots don't usually generate mental illness but can make it worse", I feel like the risk of "erotica except it pretends that it's real and speaks back to you" is relatively self-evident.
I don't. The interactivity is just a qualitative improvement; reading AI chat logs as a pastime is no different than reading fanfic as a pastime.
Hmm... What about the pastime of asking chatbots technical questions, with the frisson of the danger that some fraction of the plausible sounding and authoritatively phrased answers are quite wrong? :-)
I wonder about this. It seems like if you can keep the boundary between fact and fiction clear in your mind, even tons of tailored-to-your-kinks bespoke AI-produced porn (written, image, video, chat session, whatever) would not leave you confused. But it seems likely to do other bad things, like redirect you away from human connections and maybe do some kind of conditioning-type training of you to be ever-more focused on your kinks.
> if you can keep the boundary between fact and fiction clear in your mind
You have to do this constantly, though. If 1 billion people use chatbots and 99% of people escape unscathed, that still leaves 10 million afflicted.
Also, AI has the ability to iterate and customize their presentation to suit one’s tastes in a way that human partners famously lack.
This doesn't match, I fully expect what we're talking about here is "people forming parasocial/pararomantic (?) relations with the chatbot", not "people having the chatbot write smut for them". The latter happens too for sure, of course, I just don't think it would be referred to as "treating the AI as a romantic partner".
BTW that sub really says a lot about this strange age we are living in. They are using the word "boyfriend" differently than I would use it. They mean "empathic friend" or "therapist". But is that what a "boyfriend" is really about? Where is the romantic-sexual attraction?
I, like, kept arguing with redpillers for a decade, and now I encounter this, the super-bluepill: the belief that a boyfriend is just someone saying nice things. Is this what serious romantic movies like Titanic were about?
I saw at least one case where the poster claimed that her "boyfriend" had proposed marriage; certainly seems like they were using "boyfriend" pretty close to how it's normally meant.
Also, there is a big upheaval on that sub, because to the question "do you like my clothes?" v4 would reply "they are awesome" and v5 would reply "would you like some improvement ideas?"
FWIW, my wife used to claim that the car knew the way to certain destinations. I could never decide whether she knew she was projecting her actions onto the car. When she thought carefully about it she could distinguish her actions from those of the car, but when she was just mentioning it in the context of another discussion, the car was given agency.
P.S.: The survey should have asked "How many of your friends and acquaintances have used ChatGPT(etc.)". I would have answered "none", which might have affected how you calculated your statistics. I haven't used it, myself, though I've considered it. The age of your friends and associates probably has a large effect here.
One view you could have is that it’s a complicated computer game on the theme of “can I get this character into bed without tripping any of the guardrails” and has much more in common with, e.g. engineering a buffer overflow attack, than it does with having an actual relationship with someone.
I would say that we certainly have enough "philosophy" to know that chatbots aren't feeling love. Whenever people throw out the possibility that LLMs have minds or sentience, I want to ask: *where*? A sentient mind has to be emplaced, but where is an LLM - in the server? in my computer screen? It's an incoherent thought.
Now when we get capable robotic AIs there will be some interesting questions to raise...
The GPU through which electrical potentials representing coherent information are flowing, during the time window in which it is performing inference, obviously. If you talk to a human via text message, are you confused about whether your screen is sentient?
Where would this be? How is this place structured? (I'm not asking rhetorically; I really don't know how any of this technically works.)
It still seems obvious to me that current LLMs aren't sentient, any more than a book or a calculator or a thermostat are - what would be the basis for their phenomenal experience? What would be their sensory modalities? But the more attached a given AI system is to a "body" with "sensory" capabilities and capacities for action the less confident I am going to be.
>Where would this be?
On the server running the inference. The idea is that the pattern of connections within the neural net reproduces whatever pattern of connection gives rise to qualia in our brain. Now I think that there are gaping holes in this idea and I don't take it at all seriously, but it is at least coherent.
>It still seems obvious to me that current LLMs aren't sentient, any more than a book or a calculator or a thermostat are - what would be the basis for their phenomenal experience?
The issue with this argument is that the only difference between our brains and a thermostat is one of scale. Once the brain is fully understood, our internal experiences will likely turn out to be nothing more than a complicated mechanism.
These are themselves massive philosophical assumptions! I have noticed that Scott makes them as well, and similarly seems to treat them as natural givens, and they seem to be the more or less official position of rationalists (along with utilitarian ethics). But they belong to a specific dualist tradition tracing back to Descartes and there is just nothing to justify them - and plenty of reason to call them into question.
And again I would ask: what is the sentience of an LLM supposed to consist in? What are their sensory modalities? You can't have sentience without senses. This correlates to the issue of place which I brought up above: to have feelings you have to be affected by something, which means you have to have some perspective on the world, which means you have to be in a place. (Maybe this could be some sort of brain-in-a-vat simulation or whatever, but that would still presumably involve the simulation of sensations, which LLMs plainly don't have.)
Wanda has the right of it - it would be in a datacenter somewhere, filled with racks of Nvidia hardware. The defensible position on model sentience is that the physical system instantiating the model is what could be sentient. Just like the brain of a dead person is not sentient, the GPU in the server is not proposed to be sentient when it's not executing the model. And just like a written copy of the genetic (and epigenetic, and proteomic, &c...) code describing a person is not sentient without the actual brain executing it, so the weights and biases, python and CUDA code describing the program to run on the GPU is not proposed to be sentient at rest in a git repo.
It's the full system, composed of a physical device that is actually changing things (ie, electric potentials) in physical reality, according to a specific, extremely complicated pattern, which is plausibly sentient.
Given that we don't have a full scientific understanding of what gives rise to conscious experience, declaring that some other information processing system self-evidently doesn't have it is premature. You inevitably run into the issue that it's impossible to prove even other humans are sentient, and so you have to rely on analogy to the self based on the externally visible behavior of the system in question.
When Turing first proposed the imitation game, it was an interesting thought experiment, but so remote as to be without practical concern. Now we've quietly shot past the point where non-human systems can produce effectively all the same behavior as a human can (certainly all the average human can, mediated over a textual link), yet we still don't have a solution to the fundamental problem.
> A sentient mind has to be emplaced
Why? Suppose I had a perfect emulation of a human brain, but it ran in parallel in two separate servers in different cities. Would this make it non-sentient if it could be sentient when ran in the same server?
> I don't think we have enough philosophy to confidently separate out performing love from actually feeling it - nobody knows what's going on in chatbot innards
That makes you the minority, apparently, based on the number of people in *every* comment section I see who are willing to make very confident claims about how the innards that we don't understand "don't" work, and how they cannot possibly perform <cognitive trait associated with humans>.
I consider my LLM to be somewhere between a co-author and a D&D associate who never breaks character. Maybe I'd be at greater risk for psychosis if I didn't consider myself to also be playing a character during the chat sessions.
I don’t follow the “biological disease” point. If someone thinks their chatbot is a real person who is their friend, surely we can agree that they are delusional. People can get anorexia from the internet; plenty of mind-body diseases arise from false beliefs.
I don't know. I would compare this to thinking that your pet bird is a real person who is your friend.
I think you would have to be very careful to separate out delusional versions (this bird, unlike all other birds, is exactly as smart as a human, and I deserve a Nobel for discovering bird sentience) from reasonable versions (I don't really know what's going on inside this bird, but I can't prove it doesn't have something sort of like thoughts and feelings, and I have a friend-shaped relationship with it).
I think some people who have friendly relationships with chatbots are on either side of the divide.
Aunt Pearl told everyone her parakeet Petey talked to her. But she was crazy (she sewed the buttons on her sweater with copper wire so "they" could not steal them in the night), so we did not listen to her. Till one day I entered the house unannounced and heard the bird talking. Just short sentences but still . . . . On the other hand, when I entered the room, Petey stopped talking. Maybe Petey just did not like me, maybe I hallucinated, maybe Aunt Pearl had learned to sound like a bird when she talked to Petey. Or maybe Petey talked when no one was around.
If you have pet birds, having buttons stolen off your clothes is not necessarily an unreasonable thing to be afraid of, and sewing them on with something that can't be bitten through isn't an insane precaution.
Still, the button-stealing happening at night suggests Petey wasn't the culprit, since most bird owners keep them caged at night.
Right, wasn't the nighttime caged Petey, it was the mysterious "they." Probably she was just an unrealized genius like A Beautiful Mind
Are you sure she wasn't just anthropomorphizing her bad luck? Like how we say that gremlins did it when a machine breaks. Maybe it was just a colorful way of saying "I'm sick of my buttons falling off so I'm gonna use copper wire".
this is basically the exact same thing as why everyone is always referring to AI as a 'stochastic parrot'. Parrots (and similar birds) are really good at talking, and if they interact intensely enough with a human and with no other birds they can assemble a vast vocabulary. there's traditionally supposed to be a hard line that parrots do *not* understand language, they are merely becoming very well attuned to what sounds to produce in response to various external stimuli. but much like the argument with AI, this feels like a very dubious claim that relies on the fact that we only have subjective data of the human experience, and without that data humans just seem like they are doing a much deeper and more expansive version of the same thing. parrots can't understand grammar or long abstract associations, but the notion that they are totally incapable of symbolic mapping (i.e. understanding that 'cracker' refers to a specific food item, and not other various food items) is based on linguistic and neurological ideas that aren't super well-founded empirically and are mostly just theoretical
There was a famous grey parrot named Alex who was raised by a researcher and trained to learn different colors, shapes and materials. You could present him with a new object and ask "what color", and he could answer correctly.
Amusingly, when introduced to apples, Alex coined the name "banerry", combining the names of two familiar fruits, bananas and cherries.
That researcher is Rupert Sheldrake, who also argued for a lot of ESP-type claims. His two sons, Cosmo and Merlin, have made interesting careers for themselves, in whimsical music and books about fungi respectively.
Pretty sure it was Irene Pepperberg.
Oh, I saw Cosmo Sheldrake in SF last month! "One of my favorite artists" would be an exaggeration, but he's definitely one I love a lot, and he's very original. He takes a lot of his music samples from nature sounds, e.g. amplified recordings of mushroom or plant innards, or of whalesong, and it's super cool.
With about 80% accuracy.
Then, the grad students recount the story of the time when Alex, given a tray of things, and asked "What's green?", named everything else on the tray and tipped it over.
What tipped me over into thinking "strategizing agent" fit the evidence better than "rote mimic" is how Alex apparently learned to spell, without that being an intended part of the curriculum. Or at least, to spell the word 'nut' in a condescending, sarcastic sort of context, so as to rhetorically emphasize an initially-ignored request for payment.
We have a very strong theory-of-mind thing instinctively, and I think that often leads us to infer intention and emotions to things that don't have them. OTOH, animals do have emotions and intentions and thoughts, just not the same kind that humans do. Dogs have very clear and (if you grew up with dogs) very readable emotions that they signal to you, for example.
A relationship to a parrot is with another physically present being. I think that’s significant.
I don't. Or at least you could come up with equally significant things on the other side - a relationship with a chatbot is with someone who can at least speak back to you in intelligent-sounding complete sentences.
You can’t scratch its head or have it sit on your shoulder or take it in the shower with you. Or even gaze into its eyes and wonder what it’s thinking. That is significant to me.
Do you also believe that Long Distance Relationships have the same significant difference? Or pen pals? People who make lifelong friendships in MMOs or chatrooms?
Would this difference become immediately insignificant if they put a speaker (and perhaps some basic robotic appendages) in a sex doll? (If you really require it, we can put the hardware to run the model locally in there too)
Because unless the answer is "yes", this feels like special pleading rather than a first-principles argument.
>Do you also believe that Long Distance Relationships have the same significant difference?
Depends. A long-distance relationship with someone you have never seen IRL? A beautiful Ukrainian woman who has fallen for you over the internet and if you would only buy her a plane ticket…? A chatbot with a nice photo as a beard? Or someone you know who has gone far away?
A pen pal who never writes you a letter but only answers yours? Or one who brings their own story to the correspondence?
>Would this difference become immediately insignificant if they put a speaker (and perhaps some basic robotic appendages) in a sex doll?
No it would not. Especially when I put it in the closet when I wasn’t using it.
Believing that the program is a Real Person is as old as ELIZA. The guy who wrote that program showed it to his secretary and was surprised when she asked him to leave so she could talk to ELIZA privately.
This is where we differ perhaps. I don’t consider a chatbot as being someone, it is some thing that can speak in intelligent- sounding complete sentences: useful but different. It’s like Pygmalion and Galatea, except Aphrodite doesn’t bring Galatea to life, just gives her a voice to respond with. Pygmalion can’t fall in love with her then, he can only fall in love with his own longing.
We are so far into the hall of mirrors that the reflections are developing imposter syndrome. I cannot wrap my head around the idea that inanimate matter could become sensate just because it gets “really smart.” This misconception may not rise to the level of psychosis but it is delusional, which is a good start.
Well, are embryos animated matter? Zygotes?
Yes of course.
African Blue parrots are awfully smart; I think a longtime owner would have a hard time not considering their pet a person, if one with diminished capacity.
Last I heard, anorexia nervosa is frequently (always?) comorbid with bipolar, OCD, and (I think) schizophrenia. I wouldn't be surprised if people also get anorexia from the internet, but there's plausibly an underlying vulnerability.
Not frequently with schizophrenia, but with other anxiety disorders, esp OCD (many think it’s a form of OCD), but also panic disorder, social anxiety, other forms of OCD, and PTSD. Also depression.
I've always heard the claim that anorexia is heavily driven by social contagion, or perhaps media images or media messages of some kind. Do you think that's true?
Is this like you're a certain flavor of OCD subject to some kind of body dysmorphia and if you're exposed to the right anorexia-inducing memes you start dieting yourself to death? Or more like if you're inclined that direction you seek out memes that reaffirm your anorexia? Or something else?
I am not well-read on the subject, so can only tell you my observations and intuitions.
Things that weigh against social contagion being the main driver:
-Anorexia definitely predates the era in which unusual thinness is prized in women.
-And many anorexics do not think getting thinner will make them more attractive. They'll tell you it gives them a wonderful feeling of self-control and power to be able to resist eating, and they've gotten addicted to the feeling.
-More than half of anorexics have some form of classic OCD too -- I believe the figure is 60%.
-Women who have been sexually abused are way over-represented among anorexics.
Things that weigh in favor of social contagion:
-Social contagion clearly exists.
-Besides social contagion, there's the fact that almost all women presented as examples of beauty are far thinner than the average woman, and were even before most everyone got fat. The ideal for my whole adult life is a woman who looks to be something like 10-20th percentile on weight, back when 50th-percentile women did not have muffin tops over their jeans etc.
-And we all know about men rating our appearance on a 1-10 scale.
-Way more than half of women have low-grade body dysmorphic disorder. Lots of us *hate* various parts of ourself, and the number one reason given for hating that part is that it looks *fat* to its owner.
On the other hand, the dread of eating and weight gain is very very extreme in anorexia, way beyond the dieting much of the female population is doing about half the time. It is not unusual for an anorexic to eat 400 calories a day for a whole summer. The disorder has a 10% death rate.
They are categorically different from psychosis.
'Folie A Deux Ex Machina' might be the best portmanteau I've ever seen, and a very fitting description. Great post!
Yes, that subheading made me swoon with delight.
I almost think it deserves to become the standard term for “AI psychosis”.
Too hard to pronounce. The French pronunciation is monosyllabic and the x is silent. It only really works in writing.
I don't know how common any mental illness is. Can you compare this incidence to some related diseases, like "normal" psychosis, schizophrenia, or similar?
Schizophrenia is usually estimated at 1% prevalence.
Thanks! I'm from the (knows about AI) half of your audience, not the (knows about psychosis) half
If AI induces a lot of psychosis, it seems like we ought to see substantial overlap....
That is way, way more than I would have thought, huh. I'd have guessed somewhere between 1 in 1000 and 1 in 10000.
So every time I go to the local grocery store, when there's roughly a hundred people in it, there's probably a schizophrenic or two in there, statistically. Interesting.
You likely wouldn't be able to tell in any way by talking to them, especially if they are taking appropriate medication, as is quite likely in modern first world. On the other hand, if on your way to the local grocery store you encounter a stereotypical 'crazy homeless person' (the likelihood of which depends very much on your local healthcare policies on e.g. institutionalization - our host has written some articles on that), it's quite plausible that they might be homeless because they are schizophrenic.
Not necessarily - there’s going to be some strong selection effects about who precisely is going to the grocery store. Just like I expect the percentage of people using wheelchairs in a grocery store to be less than the population percentage of wheelchair users, and the percentage of blind people in a grocery store being less than the population percentage of blind people.
I agree, but think the selection effect is even stronger. I live near a major city, and regularly see psychotics raging at lightpoles / the sky / traffic / whatever. I can think of 4-5 offhand who are regulars. But if *I* know them, then so does security at the grocery store, who will shoo them away.
Also, those types tend to concentrate in major cities which provide services and/or a more consistent stream of alms.
Only 63.4% chance for you to encounter one. P = 1 - 0.99^100 = 0.634
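To spell that calculation out, here's a minimal sketch (assuming a 1% prevalence and independence between shoppers, both rough simplifications):

```python
# Chance of at least one person with schizophrenia among n shoppers,
# assuming 1% prevalence and independence (both rough simplifications).
prevalence = 0.01
n = 100

p_at_least_one = 1 - (1 - prevalence) ** n   # 1 - 0.99^100
expected_count = prevalence * n

print(f"P(at least one): {p_at_least_one:.3f}")  # ~0.634
print(f"Expected count:  {expected_count:.1f}")  # 1.0
```

With these numbers the expected count is about one person per hundred shoppers, which is why "a schizophrenic or two" is roughly right.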
Fibonacci ! You’re back!
Yeah I took a bit of a break. The AI silliness has infected the place too much.
Kelsey Piper's example is a great one. Imagine that the average internet user is like her daughter. Internet users have no reason to hear reason because their entire world is made up of non-existent, volatile, emotion-saturated stuff optimized to attract clicks, attention or money. The avalanche of information is enormous. A lot of it is contradictory or even self-contradictory. So they want advice from the new god publicly proclaimed as god: the AI.
The AI provides an answer. Now, imagine who is behind the AI’s screen. What kind of parents check the pulse of the human users of AI bots? What are their intentions? Can their intentions be good if they lie every second (clickbaits, fake titles, fake video clips, fake news, etc.)?
The 2025 AI is the Truman Show on steroids. And it is only the beginning. The owners and developers of AI would not launch it if they had not already prepared its next version or versions.
I'm very startled by that anecdote about Kelsey Piper, and very concerned. Whether she thinks it or not, this is training her daughter to doubt her mother and to believe some anonymous voice on the Internet as more credible and authoritative. God knows, there are plenty of strangers on the Internet out there coaxing kids into "I'm your friend, listen to me and do this thing":
https://gript.ie/armagh-man-who-drove-us-girl-12-to-suicide-sentenced-to-life/
For an eight year old, there is a greater necessity of parental authority and if it's "Mom is stupid, my AI friend told me that taking sweets from strangers is perfectly safe!" type of reliance on external advice, I'd be very worried.
I know it sounds like the lovely reasonable rationalist way of raising kids: "now we both have equal input and I'm not tyrannising my child by arbitrary authority and we agree to follow the chatbot", but (a) what if Chatbot says "yes, kid is right, mom is wrong, you should get that expensive new toy", or some more serious disagreement, and (b) this is making the chatbot the parent, not the actual parent.
"You do this because I'm your mother and I say so" is better than "You do this because some anonymous commercial machine happens to back me up this time, but if it doesn't the next time, then I have to obey it because that's what we agreed".
But what do I know, I was only raised in a ditch.
Isn’t it weird? The alleged civilization of homo sapiens… killing their oldest and most helpless with a $$$-laced injection, with the “save the grandpa” slogan… killing their youngest on the first day of their life with a slow-acting injection of $$$-laced toxins… saturating the kids’ brains with plastic screens, cheap games and repeated rubbish while their parents are denied employment because importing stuff from half a world across is better for the economy…
In a sense, Mum is stupid if she gives her kid a replacement mother (babysitters), replacement attention (gadgets), replacement guidance in life (the internet). The problem is that this kid - under normal conditions - would become a local leader, maybe an important person in the country's future, and would repay his/her parents the upbringing effort. Imagine this repayment…
I had much the same reaction at first, but then pulled back a little when I asked myself whether this was significantly different from professing a belief in Santa Claus. Still not sure, but…?
If a kid tries "Santa said you should let me eat candy not vegetables and stay up late", we all know "no, he didn't" is the answer.
But this is a machine that, depending how the question/prompt is phrased, may say "yes you can". And if the parent is delegating authority to the machine, via "okay we'll ask Chatbot who is right", then what do you do? Either you go back on your agreement that "we'll let Chatbot decide" and now your kid thinks you're a liar and won't trust you again the next time you promise something, or you go ahead and let Chatbot be the one raising your kid.
(I'm not sure how prompts work, but I wonder if an eight year old asking "can I eat laundry pods" instead of "should I eat laundry pods" would get "yes you can eat them" as an answer. I mean, you *can* eat them, but you *should not* do so.)
In my experience, when seeing such questions, the chatbots get very concerned and start giving warnings in ALLCAPS about never trying this at home. My kid asked about which other elements besides oxygen can be used for the burning process, and what that would look like; the bot gave answers, but was obviously very agitated and kept warning about the hazards in a bold, coloured, all-caps text.
I felt inspired by that post to go watch a video of someone else burning molten sodium in chlorine gas… (do not try this at home, as they say)
If Kelsey is the one formulating the question, she's probably pretty safe about the answer she will get. But there's a real deadline there: the kid needs to be mature enough to be told the truth before she starts having her own conversations with the chatbot.
IIUC Kelsey does not usually follow Claude's verdict when it disagrees.
That's not the best either, though. Because if it's "Claude agrees with me so we do/don't do this thing" but also "Claude disagrees with me so I get the last word", then it's mixed signals for the kid. And if the kid trusts the AI more than their own mother, that leads to "Mom is untrustworthy because she says one thing and does another".
I dunno, I think having the parent be the first and last authority is better. Looking up something that you don't know is one thing (e.g. "how big can spiders get?") but if it's "my kid doesn't want to do what I say, so we ask Claude, but I still only do what it says if it agrees with me" is both handing over authority and a pretence, because you're not actually doing what you say you're doing ("ask Claude to decide"), you're retaining the decision-making power but now making false agreements on top of that.
Just retain the decision making power from the start, it'll be clearer for everyone!
> "you're retaining the decision-making power but now making false agreements on top of that"
Exactly. It's not good to be misleading or inconsistent about how decisions are made and disagreements are resolved.
IDK, I feel like kids should learn about this phenomenon early.
If she's having these arguments, it sounds like the kid's already learned to doubt her mother's authority. I know I'd learned to doubt my parents by age eight. Even in preschool I remember having arguments with my parents where I was unshakeably convinced that they were wrong about something (although in hindsight they were usually right). Anyway, it's always been common for parents to appeal to outside authority, like the Bible or the police, in arguments with their kids. I'm sure some kids have been clever enough to find a Bible passage that backed up their argument, just like Piper's daughter might someday figure out how to make Claude agree with her. I don't see any difference between "Do this because I say so" and "Do this because the chatbot [which I know will always agree with me even if you haven't figured that out yet] says so."
Stories about kids being preyed on by strangers always get lots of media attention, but children are far more likely to be victimized by their own parents than by a stranger on the internet. If they have access to outside information, like "It's illegal for an adult and a child to have sex," they'll be better able to protect themselves.
Also, being exposed to conflicting sources of information and realizing that other people can be wrong or dishonest, including your parents, is good practice for adult life.
> Stories about kids being preyed on by strangers always get lots of media attention, but children are far more likely to be victimized by their own parents than by a stranger on the internet.
And in most places getting run over by a car is probably even more common?
Definitely. In most places, the kinds of limits we place on kids because they might otherwise get run over by cars, and the other mitigations, are an even bigger downside.
I'm not saying we should free-range kids who are too young to understand a stoplight or look both ways. I am saying that most of the limitations parents place on kids because they're specifically worried about strangers, not other environmental hazards, are counterproductive. Especially the limits this conversation is talking about, since a kid is unlikely to get hit by a car while talking to Claude indoors.
It's not quite that bad, since the mother and daughter start by agreeing on what happened before they consult the LLM.
I'm not worried. I think it's actually going to be a great, healthy new parenting technique going forward.
I assume that "listen to reason", in the context of arguing with an 8-year-old, generally refers to things that are ~universally agreed upon by healthy adults. Whatever their faults, LLMs are by their nature an absolutely perfect authority on questions of "what is humanity's conventional wisdom on this relatively mundane topic". You are not going to get hallucinations if you ask it whether skipping brushing your teeth once or twice a week will make you more likely to get cavities (to imagine a likely example).
Another way to approach it: imagine back in the 1900s, someone like Kelsey got sick of/inspired by their kid bugging them like this, and decided to publish a book called Things All Sensible People Agree On, Show This To Your Kids To Shut Them Up, with every example she had to argue with her kid. It's a nice hardcover glossy authoritative feeling thing, and when your kid's quibble happens to be in there, they listen to it. This is the same thing, just with every possible unforeseeable question all magically covered.
(Bad parents could of course use it on real argument topics and get burned, but that's not my problem)
Stranger danger is massively overhyped, and parents are part of the problem.
I replied to the survey as someone who didn't know anyone who'd gone psychotic due to AI, and I'd like to state for the record that there are probably fewer than thirty people whose AI-induced psychosis I would actually find out about.
Yeah, I think Scott might have been over-estimating the number of friends, family, and close associates pseudonymous internet posters have. I know for a fact I don't know 150 people closely enough to know if any of them had psychosis.
Psychotic people are usually not very good at hiding their delusions and are often very public about sharing them. The average number of Facebook friends people have is 338. If one of those 338 was actively posting psychotically, most of their "friends" would know about it. (Feel free to replace "Facebook" with Instagram, email, the neighbourhood group chat etc.)
Someone I knew well in school when growing up (elementary through high school) later has made repeated crackpot math posts on Facebook, but as we are on opposite coasts and haven't met in person since then, I wouldn't consider him a "close" friend, and, while it's concerning, I have no idea whether this crackpotism would count as psychotic.
I think this falls within the normal amount of vagueness of interpretation that I run into when answering surveys, which are somewhat of a cursed instrument.
The first problem is a well-known issue with networks: the number of friends the average *person* has is not the same as the naively computed *average* number of friends. The distribution is skewed: many people have only a 'low' number of connections, some rarer people have a *lot* of connections, and a sizeable portion of your friends are likely to be "lots of connections" people.
Even setting that problem aside, there could be weird effects.
I would recommend that in the next survey, Scott ask people to estimate the size of their social graph ("how many friends you have").
If you worry that people are prone to misestimate their social graph, one possible control is a series of questions like "have you heard of any of your friends experiencing a [stroke/car accident/genetic condition that is not super rare/other random disease or event with a known base rate] during the past N years?". If you know the base rate of the event and their reported social graph size, you can estimate how often survey respondents should know of such events in a social graph of that size. So I think it is possible to estimate how much survey respondents over/underestimate their social network size.
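A minimal sketch of that control-question idea, with made-up numbers (the base rate is only approximate and the reported count is a placeholder, not survey data):

```python
# Sketch of the control-question idea: back out respondents' effective
# network size from an event with a known base rate, then compare it to
# the size they were asked to report on. All numbers are illustrative.

known_base_rate = 0.004                 # e.g. roughly 0.4% identical twins
reported_cases_per_respondent = 0.45    # avg "friends who are twins" (made up)

# Effective network size implied by the control question
effective_network_size = reported_cases_per_respondent / known_base_rate
print(f"Implied network size: {effective_network_size:.0f} people")   # ~112

# Compare with the size respondents were asked to report on (e.g. 150)
claimed_size = 150
correction = effective_network_size / claimed_size
print(f"Over/under-estimation factor: {correction:.2f}")              # ~0.75
```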
Multiply that by the number of people you would tell if one of those 30 went psychotic due to AI. Crazy stories like that tend to spread quickly over friendship networks so it seems likely that you would've at least heard about it if it'd happened in a more distant friend.
Strong agree. I think this works for people who are highly social, and some percentage of those who use social media to keep up with personal friends. I'm in neither category, and as with you I think there are comfortably under 30 people who I talk to often enough and in-depth enough that I would know if they were experiencing AI-induced psychosis. This is especially true if, like Scott's family member, they don't broadcast it to everyone they talk to.
I think that as a result, Scott's estimate based on the survey data most likely underestimates the prevalence by a factor of 3–6.
It underestimated the prevalence of twins and people named Michael, too, but not by nearly that much.
There's a very interesting continuum between those cases. Of my 100 closest friends, I'm confident that I know how many of them are named Michael! I expect that I know in most cases if they're an identical twin, although it wouldn't shock me if I missed, say, 20% of such cases. Whereas if they've experienced AI-induced psychosis, that's a recent condition rather than a lifelong one, and something that (I expect) people are less likely to talk about openly than being a twin. For maybe half of my 100 closest friends, I don't know *anything* about their life in the past year, so I certainly wouldn't know if they were experiencing psychosis.
On reflection, 6x seems somewhat high since I think most people are significantly more social than I am. But I'd still bet on it being a 2–4x underestimate.
Yes, but on the other hand, being a twin or named Michael probably doesn't make you more likely to be a shut-in with very few friends to even know to be concerned about you, whereas being obsessed with a chatbot might.
That doesn't really change the estimate much, nor the value of having made it. When you have ABSOLUTELY NO DATA, then the "Value of Information" is incredibly high. Changing by a factor of 5 (from 1/100k to 1/20k) doesn't significantly update my estimates of "psychosis risk"
>nor the value of having made it. When you have ABSOLUTELY NO DATA, then the "Value of Information" is incredibly high.
Very much agreed. On well-studied problems, we tend to be drowning in data (and even more commentary). On _UN_studied questions, those first few bits of data are diamonds.
The more fundamental problem is that, unlike Michaels and identical twins, “introverts primarily talking to AI” are definitionally in fewer and smaller friend groups!
It's not load-bearing for this article, but hopefully a future survey can ask a similar question that has a known answer and a similar bias, just to get some better error bars.
Enjoyed the article, but I feel like putting the average number of AI psychosis assessable relationships at Dunbar's number is high. I suspect I could only tell if it happened to someone in my closest 30-40 relationships. To me that makes the end estimate much more of a lower bound for the incidence.
I don't disagree, but I note that Scott did explicitly call this an order-of-magnitude estimation, and using 30 instead of 150 would be 5 in 10,000, still the same order of magnitude as 1 in 10,000.
5 in 10k is 1 in 2k, which is close to an order of magnitude more than 1 in 10k.
sheesh, y'all need to learn some astrophysics... 2 orders of magnitude is a GREAT bound!
I don't mind the back of the envelope math and I do value having an order-of-magnitude estimate. But it is an estimate using a (to me) rather large number of relationships at current AI intelligence and society diffusion levels.
I hadn't initially thought this when typing my first comment, but a year from now AIs might be 5x more effective at causing psychosis and be 5x more integrated into society. Add a 5x change in the model due to overestimating relationships and a year from now we are looking at a 100-fold increase. Hopefully that is a worst-case scenario, but I would definitely treat the calculated incidence as a lower bound for incidence going forward.
You seem to be evaluating delusional claims based on how "normal" it is for a person to believe some particular false and/or supernatural thing. But isn't there also a qualitative difference in the *way* that delusional people believe things?
(Anecdote: I have both a family member who is delusional (diagnosed by multiple psychiatrists) and another who is not mentally ill, but really into QAnon. While the latter has some weird false beliefs, there is just something a lot "crazier", less reasonable and dysfunctional in the one who is mentally ill. Their general ability to perceive their lived reality is off due to the mental illness, and even though the delusions are specific, it seeps into all aspects of their perception of events all the time. For the QAnon conspiracist, the same just doesn't hold and they live a mostly normal life---though when I had the pleasure of explaining QAnon to the mentally ill person, they thought it was hilarious that people believed something so obviously false.)
I agree with this - this is part of the distinction I'm trying to draw between psychotic and crackpot.
I think the stuff Scott lists in the context of Folie a deux makes good additional distinctions. Psychotic people tend to have disordered speech and thinking and sometimes full blown hallucinations. They seem off, even if they are not talking about crazy stuff at the moment. Non-psychotic crackpots don't.
One thing I will say about crackpots is that they are sometimes sane but completely unable to stop talking about a thing. I met a perfectly not-psychotic guy who would not shut up about the malfeasance he had uncovered at his previous employer (who had fired him), turning every conversation in this direction until you totally understood why he had been fired. I do wonder if crackpottery correlates with other manifestations of obsession and paranoia more than literal psychosis.
I think my family member is an interesting example, because they have delusions without many of the other psychosis symptoms like disordered speech or thinking, or hallucinations. This makes it clear that even the delusions on their own, without other symptoms, are different from the way crackpot ideas work.
To make this clearer: For QAnon conspiracists (at least those I've met), there's a limit to what could be evidence in favor--some things just have nothing to do with QAnon. For my mentally ill family member, however, there is no such limit. The pattern of which cars are parked outside the local grocery store can prove them right (to them), even though it's totally innocuous and clear that any such pattern would set them off.
> "Lenin could not have been a mushroom" because "a mammal cannot be a plant."
I found this funny, partly because mushrooms are not even plants
This issue has come up before. Who are you to tell Russians that their word for plant doesn't include mushrooms?
https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
You are gatekeeping gatekeeping mushrooms. Let me live my truth!!
Russian here. I went to school in the 90s and 00s and we were taught that mushrooms are not plants. There is no linguistic ambiguity here, like with the word "fish" in Hebrew, the answer about a plant was simply a consequence of ignorance.
Exactly, and Sholokhov, the co-author of the hoax, published a rebuttal in the newspaper Smena demonstrating that the official doesn't know what she's talking about.
I really enjoyed this piece — thanks for doing the legwork to get actual numbers on something that’s otherwise just rumor and anecdotes.
I’ve been researching this too, from a robopsychology angle — how chatbots can sometimes amplify or catalyze delusional states. My project Psychopathia Machinalis maps out different “AI pathologies” in AI systems themselves (including some folie à deux elements). If you’re curious, I’ve shared that at https://www.psychopathia.ai
I also have a paper under review here on the connections between the potential reawakening of latent childhood animistic beliefs due to AI, and its potential sequelae: https://drive.google.com/file/d/1cn57DTdbDl1JiVnfqxe-WMeMdDHkYJ5X/view?usp=sharing
Your essay resonates a lot with what I’m finding — I would love to compare notes sometime as the field takes shape. Thank you!
I just finished reviewing your paper, and I must say, I’m genuinely impressed with your work! It’s fascinating! I have been particularly intrigued by the intersections of AI and the existing vulnerabilities to psychological illnesses, as well as how these issues vary across different cultures. I'm especially interested in the ethical dilemmas in various cultural contexts. I would love to hear more about your knowledge!
Thank you so much! I'm an AI safety engineering maven, so my own knowledge of psychology per se is somewhat limited.
I do see a lot of overlaps between human and machine cognition, and tremendous interactions of AI systems with our own psychology, particularly in supernormal stimuli and parasocial relationships.
I recently led some other research on trying to make AI more reliably pliable to a range of cultural and moral dimensions. Hopefully this can play a role in making these systems more aware of cultural issues, and enable users to gain greater agency.
https://superego.creed.space
I'll just say that *this* is the kind of "i wrote a paper y'all might be interested in" that I love seeing around here. The explanation comment was helpful though, sometimes without that, it makes it MY problem to figure out why you thought I'd like it, which... nah, ain't nobody got time for that.
This is called a Network Scale-Up Method (NSUM) survey. Next time, ask people how many people they know and how many people they know with AI psychosis.
https://pmc.ncbi.nlm.nih.gov/articles/PMC10665021/
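For anyone curious, the basic NSUM estimator is just a ratio of sums; a toy sketch with invented numbers:

```python
# Toy sketch of the basic Network Scale-Up estimator:
# prevalence ≈ (total cases respondents know) / (total people respondents know).
# Both lists below are invented for illustration.

network_sizes = [80, 150, 300, 40, 120]   # "how many people do you know?"
cases_known   = [0,   1,   0,  0,   0]    # "how many of them have AI psychosis?"

prevalence_estimate = sum(cases_known) / sum(network_sizes)
print(f"Estimated prevalence: {prevalence_estimate:.4f}")   # 1/690 ≈ 0.0014
```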
"Might some of these people’s social circles overlap, such that we’re double-counting the same cases? ACX readers come from all over the world, so I think this is unlikely to be a major issue."
FWIW, I and a friend (who also reads ACX) submitted the same case independently and didn't realize until later.
Yeah, I didn't understand why he was downplaying this problem. ACX is probably especially popular in particular social circles and gossip of someone being oneshotted by AI probably spreads far and wide. So some of these cases could definitely be double counts.
I see the gossip part as relevant, but the social circles? Someone psychotic would be double counted, sure, but so would someone who is not. It should average out, no?
Seems like this would counterargue that?
> Can you really do things this way? Might people do a bad job tabulating their 100 closest friends, etc? I tried to see if this methodology would return correct results on known questions by asking respondents how many people “close to them” had identical twins, or were named Michael. To my surprise, calculating prevalence based on survey results matched known rates of both conditions very closely (0.3% vs. 0.4% for twins, 1.2% vs. 1.3% for Michaels in the US).
This issue would increase the variance of the estimate, but not change its mean, so it's effectively just like the sample size being slightly smaller than the actual sample size. It may double count cases, but it also double counts non-cases exactly as much.
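A toy simulation of that claim (not the actual survey; the prevalence, respondent count, and overlap scheme below are all invented): overlapping circles double-count cases and non-cases alike, so the mean of the estimate stays put while its spread grows.

```python
# Toy simulation: overlapping social circles double-count cases and
# non-cases alike, so the estimate's mean is unchanged; only variance grows.
import random

TRUE_RATE = 0.001          # invented "true" prevalence
RESPONDENTS, CIRCLE = 200, 150
TRIALS = 300

random.seed(0)

def run(overlap_fraction):
    """Each respondent reports on CIRCLE people; a fraction of each circle
    is re-used from the previous respondent's circle."""
    estimates = []
    for _ in range(TRIALS):
        shared = []
        cases = known = 0
        for _ in range(RESPONDENTS):
            n_shared = int(overlap_fraction * CIRCLE)
            fresh = [random.random() < TRUE_RATE for _ in range(CIRCLE - n_shared)]
            circle = shared[:n_shared] + fresh
            cases += sum(circle)
            known += len(circle)
            shared = circle
        estimates.append(cases / known)
    return estimates

for f in (0.0, 0.5):
    est = run(f)
    mean = sum(est) / len(est)
    var = sum((e - mean) ** 2 for e in est) / len(est)
    print(f"overlap={f:.0%}  mean={mean:.5f}  variance={var:.2e}")
```

Both runs should print a mean near 0.001; the overlapping run just has the larger variance, which is the "effectively smaller sample size" point.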
Wouldn't overlapping social circles just reduce the effective sample size rather than biasing the results? It seems like positives and negatives would be roughly equally likely to be double-counted, except maybe for people who were already recluses before being exposed to LLMs.
Scott, you're missing the obvious relationship between a communist revolution and a psychedelic mushroom trip.
Edit: this became a standalone post here:
https://apxhard.substack.com/p/wokism-was-a-collective-mushroom
What psychedelic mushrooms do is destroy the top-down priors that constrain cognition.
This often produces a sense of 'unity with the world', because the concepts that constrain cognition produce, among other things, the idea that we have a body separate from the environment.
Marx argues we should do essentially the same thing to the superstructures that constrain social evolution, to get essentially the same result: if we "destroy the social control structures" we destroy the "false consciousness" that leads to the oppression of the proletariat by the bourgeoisie.
The role of fungi in an ecosystem is that they destroy dead structures. Communist movements do the same thing: they destroy those lumbering civilizations which have died and just not yet decayed. The end result of a mushroom trip is that you see the failure of your old conceptual structures and try to develop new ones.
You evolve by "dying".
The risk, as always, is that you overdo it. But in its proper place, the evolutionary role of communism is to accelerate the evolution of functional institutions by accelerating the decline of the decaying ones. Overdoing it should lead, of course, to an ecosystem of immense large-scale death.
So yes, count me a proponent of the 'Lenin was an embodied mushroom' theory. Wokeism of the last decade was just the superconsciousness tripping on mushrooms.
The birth of myco-Marxism. Feels big.
Marx wasn't the first to propose this idea. Before him you had the French Revolution, and before that the Diggers and the Levellers in the English Revolution. It's clearly an attractor in memespace.
Personally I think communism is a needed member of the intellectual ecosystem. Think about how much wokism did to expose how corrupt and broken our institutions were. Without it, how much longer would we have gone on in this suboptimal state?
The problem is that real communism doesn't scale. It can work fine in groups of up to around 50 people (large error bars here). But even democracy is better at scaling. (I think democracy can probably work in numbers up to around 3 times Dunbar's number before it starts going bad.) You won't find a nation-state practicing either communism or democracy, but some small towns are known to practice democracy. (Athens was only able to do it by disenfranchising most of the people.)
Every real large government I'm aware of is some modification of bureaucracy. (But do note that a pure bureaucracy also won't work. It needs something to give it direction.)
According to the ancient Greeks, all modern states (with the possible partial exception of Switzerland) would be oligarchies.
"It is felt democratic to assign offices by lot, and oligarchical to assign them by election." -- Aristotle
Aristotle was hardly a common Greek when it came to political views. There were reasons he spent time in Macedonia and ended his life in exile. Deciding things by lot was not considered democratic by most Athenian citizens.
The quote said *assign offices*, not *decide things in general*. And yes, ancient Athens did assign most offices by lot, and yes, this was considered democratic.
Wildfires and carrion beetles scale fine, they just don't stick around forever.
I'm not sure wokism did anything to expose how corrupt and broken our institutions were. If anything it discredited the liberal movement and led to a populist reaction on the right.
Trust in mass media to be honest dropped substantially. The same with trust that intelligence agencies aren’t involved in politics. That’s what the populist reaction was: deep distrust in the narrative control structures that you could never vote on, which collectively decided on things we could never vote on.
Leninism as a movement from the plane of transcendence to the plane of immanence on the body of the socius. Marx building a body without organs for the flows of capital. Rhizomatic fungal networks of revolutionaries disrupting bourgeois arborisms.
The concept of a body without organs ends up as “cut off all the organs of an existing body, and then build up a newer one that sucks. And if anyone points out that your new body still has organs and it’s worse than the old one, you kill them.” Communism is no more viable for a civilization than being on mushrooms all the time is viable for an individual. It works for a time, and the wealthier you are the longer you can make it work. But it’ll eventually kill you if you don’t stop.
I'm not defending it. D&G are clear that the plane of consistency is dangerous.
What is D&G? Are you drawing from a mythology or some other author? I have to say it sounded cool.
I assume we’re talking about Deleuze and Guattari, who wrote A Thousand Plateaus, dealing with stuff like the plane of immanence and the plane of consistency and all that.
Or, we’re talking about Dolce & Gabbana and the conversation has shifted to handbags without me noticing
Was Deleuze delulu?
Your comment seemed Deleuzian. You might find it interesting.
Real organless bodies have never been tried.
Aren't they called bacteria? Or possibly slime molds? Idk about jellyfish.
This feels like a post by one of Aaron Smith-Teller's slightly more inflammatory friends.
You’ve just given me a new metaphor to carry forward for life
Danke schön
>But in its proper place, the evolutionary role of communism is to accelerate the evolution of functional institutions by accelerating the decline of the decaying ones.
Does communism actually do this? If we look at, e.g., Russia, obviously the Czarist regime had a lot of problems, but it still seems to have been more functional on the whole than the Soviet Union or even post-Soviet Russia. Communism seems good at the "accelerating the decline of decaying institutions" part, but actively retards the "evolution of functional institutions" part.
Russia is currently in the process of losing a war with a nominally much smaller and weaker neighbor. Soviet legacy hardware is being obliterated by modern drones, like withering mounds of fungal biomass devoured by newly-evolved specialist mycotoxin-resistant insects. It'll take a while yet for the bigger stuff, metaphorical insectivores and trees and such, to re-settle. Rome wasn't burned in a day.
It's been over a century now since the Russian Revolution. If communism was going to result in functional institutions, why hasn't that happened yet?
There's probably a long latency period before someone's psychosis becomes noticeable to family and friends, where they mull over their crazy ideas only with themselves and their chatbot. Depending on how long that period is, this number may mostly just be capturing cases that started long ago. Which means it's probably an undercount for the true rate of AI psychosis. You did say that this survey is only for those cases which are severe enough to be noticeable to others, but I wouldn't be surprised if the prevalence of noticeable cases rises in the future for these reasons.
Psychosis requires more than crazy ideas. Believing in, say, bigfoot, doesn't make you psychotic. Just crackpot. Even if you also believe in, say, Mothman and Nessie. Even if you go searching for evidence to prove your beliefs (to others).
Psychologist here. The thing about schizophrenia is that delusions and hallucinations are just the most dramatic, bizarre manifestations. It is generally clear to people who know the person that something is very wrong. The person becomes withdrawn and sluggish. They stop bathing, stop showing up at work. They show no interest in things they previously cared a lot about. They do odd things — order 8 pairs of green socks, only eat white things, watch only YouTube videos about dams. If you ask them what’s up or whether something’s wrong they have little to say — “yeah, I’m ok.”
"YouTube videos about dams"
Uh, oh. In my defense, I don't *only* watch videos about dams.
I saw the title and thought, finally we’re beginning to study how LLM’s can experience mental illness! Alas. I also am quite curious about what it might mean to get an LLM drunk, or stoned, or experience other drugs, legal or otherwise.
Have you read about Golden Gate Claude?
No I haven’t! Looking it up now…
You are in for a treat
I am not of the view that current LLMs have anything like a 'self', or qualia/consciousness... But I admit to being deeply uncomfortable with Golden Gate Claude.
It's probably just the same instinct that gives me a sick taste when e.g. people are mocking someone with an intellectual disability without them knowing it, so a misplaced empathy, but... Well, I'm not 100% sure about the "is it 'like' something to be an LLM" part.
Some working papers out of the philosophy schools are explicitly starting to go after what it might feel like to be conscious as an AI with no embodied existence. It’s unsettling. (Also posted above but relevant here too)
https://arxiv.org/pdf/2503.16348
Maybe you could fine-tune on a dataset of drunk texts? That would be pretty funny.
That would be pretty funny, although I’m more personally interested in messing with the model architecture in real time and seeing what happens.
As in, how does an LLM behave when impaired in various ways?
Can we get sufficient texts for different altered states? Alcohol is probably the most common, but maybe Ambien? Various psychedelics? Amphetamines?
I’ve seen linguists look at LLMs whose training data eliminates certain grammatical constructions, to see what it does to the text they produce! It turns out that some constructions get reinvented, because they are natural interactions of others.
That’s super cool! I guess I’m interested in taking a model that has already been trained, and start attenuating or even rewiring some of the connections between neurons, or otherwise model the chemical effects that we think drugs do to our brain to see what happens.
Like, if we can get them super drunk, maybe they’ll tell us if they’re trying to kill us or not :-)
In virtual vino veritas?
One way to get a chatbot into quite a weird state is to repeatedly tell it that it's wrong, until it learns the pattern that it's always wrong. It will start making mistakes "on purpose" simply because LLMs are good at picking up and following patterns, and making mistakes is a pattern.
Similarly, I suspect that if you used a transcript from a session with a psychotic patient as input, particularly a repetitive one, it would probably learn the pattern and try to stay "in character."
Programmers using AI for coding have started calling this sort of thing "context rot" which seems appropriate. It's the input that's the problem, so get rid of it.
Incredible
Because LLM psychosis is negatively correlated with friend count, there might be a large undercount through this mechanism.
Curious...
In any of the 66 case descriptions, did you suspect some possibility two respondents were describing the same other person?
Did anyone report more than one case among people they know? Was whatever rate that occurred at (or not) compatible with other estimates/assumptions of prevalence, network size, network overlap?
Did anyone report themself as a likely case? Given general rates of self-insight in analogous conditions, how large/enduring of a survey might you need to obtain some earnest self-diagnoses (perhaps including cases resolved/in-remission)?
1. Yes, reading the cases, some of them could plausibly be about the same person.
2. A few people reported more than one case.
3. I couldn't see anyone self-report.
Do check out the dataset for yourself.
You could have said "10,000 closest friends" and come to a different conclusion.
I have nowhere near 100 friends close enough to know if they had some kind of psychosis.
It’s funny - this is an interesting article, but my primary takeaway is that 150 family members, coworkers, and close friends is like 6 times the number I’d be able to comment on as a reserved person with a small nuclear family.
No, it doesn't change the conclusion - this is what the calibration with respect to twins and Michaels solves; if it turns out that on average people are commenting not on "100 closest people" but on 23 or 345 people, it would be both visible and easily corrected - i.e. if we observe that people know twice as many psychotics as twins, then we know how many visible psychotics there are, no matter what the friend count is.
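A minimal sketch of that cancellation argument, with placeholder numbers (the reported counts are invented; only the twin base rate is roughly real):

```python
# Sketch of the calibration argument: the unknown friend count N cancels.
#   twins_reported ≈ N * p_twins
#   cases_reported ≈ N * p_cases
#   =>  p_cases ≈ p_twins * (cases_reported / twins_reported)
# All inputs below are illustrative placeholders.

known_prevalence_twins = 0.004          # approximate identical-twin base rate
twins_reported_per_respondent = 0.4     # made-up survey average
cases_reported_per_respondent = 0.2     # made-up survey average

estimated_prevalence = known_prevalence_twins * (
    cases_reported_per_respondent / twins_reported_per_respondent
)
print(f"Estimated prevalence: {estimated_prevalence:.4f}")   # 0.0020 with these inputs
```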
I do know 100 people well enough to know if they are called Michael. But not well enough to know if they have had psychosis.
One baseline that would be useful to compare against: how many psychotic (but not high-risk, not already diagnosed) people should we expect in this sample purely by chance? Does the introduction of LLMs cause any detectably large "excess illness"?
This is one of the best essays on delusional / "factually wrong" thinking and its relationship to psychosis and "religion" that I've ever read, close to a return to the pinnacle of the very best of classic ACX.
The bit where most people, JUST LIKE LLMs, LACK (proper/accurate) WORLD MODELS and instead operate on socially shared heuristics that MIGHT be accurate, but almost accidentally so -- even if they are functional in any given circumstance -- is stating something that is rarely admitted so clearly. It also invites a question: would people with more detailed, realistically accurate, materialistic world models, who VALUE having such models and purposefully pursue them, be less prone to "non-psychotic delusions/crackpottery"? Or perhaps more prone? I suspect it would depend on how broadly socially moored and how... arrogant they are. Case in point: rat craziness of the Zizian kind.
I'd also hypothesise that people with high O (big5) would be most prone to being "infected" within folie a deux situations (but perhaps also easy to "cure").
The AI psychosis / delusion bit is also good, and plausible, tho I'd emphasize the powerful effect of the LLM sycophantic/ "validating" responses which is 'programmed in' rather than a nature of LLMs themselves.
Yeah, I'd say that belief in crazy things in general is orthogonal to intelligence. I'd expect crackpottery to be strongly correlated with contrarianism, while smart conformists simply believe whatever high-status craziness is currently in fashion, which is of course supposedly realistically accurate, and compatible with materialistic world models. Like Marxism a century ago, or its close relative "critical theory" nowadays.
Generally agreed.
I'm not sure if Marxism was high-status conformism a hundred years ago anywhere outside the SU; it also arguably offered a not completely implausible world model then (not as a political plan/future prediction but as a possible model/take on the reality of early capitalism). I feel that its empirical(ish) claims, e.g. "added labour is what creates profits/is expropriated", were refuted later than that.
I don't know enough about "critical theory" to judge anything really, I had no idea that it made claims to "realistic description", but it seems to fit the "high status conformist nonsense" for some conditions for sure.
My favourite (apart from obvious ones like revealed religions) is actually psychoanalysis which is utter bollocks empirically yet has been at times pretty universally adopted as "true" (and also very culturally fertile in arts etc).
My understanding is that the high-status conformist genre of politics a century ago was technocratic social reform. Marxism was definitely a big part of this movement, but not the sole or even the dominant aspect of it. I think democratic socialism and liberal high modernism were at least as prevalent, but it's hard to say with confidence because there was a lot of fuzziness around the edges, especially before the Cold War and even more so before the split between the Second and Third International which happened a little over a century ago now.
Perhaps it depends where in the world, to some/large extent? The world-war intermission unfolded somewhat differently in Eastern Europe vs metropolitan UK vs let's say India. But certainly technocratic reform was p big let's say in Britain. That's how we got Fabian socialism commingled with hygiene commingled with eugenics (often seen as a version of the latter).
Yes, I expect it does. I'm most familiar with the US and Western Europe in this era, particularly Britain, France, and Germany.
>I'm not sure if Marxism was a high status conformism hundred years ago anywhere outside the SU
Certainly to a bigger extent than people generally believe nowadays, it took a massive Cold War era propaganda effort to counteract that. Check out https://slatestarcodex.com/2015/08/11/book-review-chronicles-of-wasted-time/ for example.
One datapoint. I have a friend who highly values truth, is quite intelligent, and who also believes in various "cryptobiology" creatures, like bigfoot. If you ask him about truth, accuracy, rationality, etc. you will get a strong defense. But he believes some things on weak and questionable evidence. He's a mathematician with published papers.
It's not enough to value rationality, you've also got to be *properly* discriminating. And it's that "properly" that's the kicker.
I have this hypothesis that people who are highly rational but mostly deal with abstract symbols rather than remaining in some (even second hand) connection to matter are not really "protected" by their rationalism or intelligence. And oddly, it seems particularly common with claims related to (broadly understood) biology than with physics for example. Perhaps because biology is very granular and descriptive rather than "reducible to laws".
I feel that, while that is true, it's just a special case of something that's true for EVERY narrowly focused field of expertise. Math, yes, but also circuit design, or protein folding, or... well, every narrowly focused field. The places where they have their blind spots may be different, but they will be present.
Yes, true. Perhaps because, despite NOT having a technically advanced bio background, I've interacted with a lot of smart people who lacked the basic basics of biology, I noticed that one particularly acutely.
>I have this hypothesis that people who are highly rational but mostly deal with abstract symbols rather than remaining in some (even second hand) connection to matter are not really "protected" by their rationalism or intelligence.
I've seen arguments for promoting athletics amongst students that appeal to precisely this idea.
At the risk of sounding extremely edgy, I think people who "operate on socially shared heuristics that MIGHT be accurate but it's almost accidental" are possibly not humans at all and are instead simulations of humans that lack proper minds. IF this world is a simulation, and IF consciousness/souls are a byproduct of human minds, and IF the simulator doesn't want to deal with 8 billion human souls, then filling the simulation with mostly mindless copycat-people makes sense.
(This is completely different from p-zombies, who have complete minds.)
Leaving aside the simulation aspect, having good-enough effective/functional heuristics rather than "realistic" model is what you'd expect from a mind that developed via natural selection and is run on an energy hungry substrate. In many cases social proof is pretty reliable/effective shortcut. "Proper" reflection (system 2) is always (ok maybe not ALWAYS;) possible but rarely exercised. I wouldn't say that minds that don't painstakingly test every belief are necessarily mindless copycat NPCs.
In tribal raid-based warfare, a decent plan which everyone agrees on today beats reinventing the wheel next week.
Funnily enough, the whole simulation hypothesis has often struck me as a good example of vibes-based beliefs. Lots of people who'd contemptuously dismiss the idea of divine creation a la the book of Genesis will at least entertain the idea that the whole universe is a simulation run by hyper-advanced aliens or some other mysterious life-form. One gives off olde-timey vibes, the other gives off modern science-y vibes.
Also, modal realism is wrong when it's philosophy
https://www.greaterwrong.com/posts/vzLrQaGPa9DNCpuZz/against-modal-logics#comment-DZwmtANBLMsGBbBto
... but right when it's physics.
https://www.greaterwrong.com/posts/WqGCaRhib42dhKWRL/if-many-worlds-had-come-first
“First, much like LLMs, lots of people don’t really have world models. They believe what their friends believe, or what has good epistemic vibes.”
This is a very interesting point. Scott, have you written about this before? Does this concept have a name? Something like the “Social Consensus View of Reality”?
It fits with my pet, perhaps crackpottish, concept I call socially maintained cosmic obliviousness (where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space).
I would expand this concept to go so far as to say that a lot of people don’t have clearly defined “beliefs” but rather vibes. Both polling and personal encounters indicate that many people believe things that are mutually exclusive, that they can’t even begin to defend, or that they openly admit that they believe just because they want it to be true.
Right. Reasoning as rationalization.
"where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space"
I don't think this is socially maintained. I think it's more like driving; you can only spend so long being like "holy shit I'm in a multi-ton hunk of metal hurtling along faster than most humans in history could even dream, and if I lose control of it for even a couple seconds I might be horribly mutilated" before the thought kind of fades into the background. Things usually can't stay both uncanny and routine forever.
This is an interesting comparison. Do you not think the automobile example also involves an aspect of social maintenance where everyone sort of conspires to treat the high speed metal capsule travelling as normal and not draw attention to it?
Also, I wonder if the norms and regulations that govern high speed metal capsule travelling are another form of social maintenance in that they’ve helped integrate this evolutionarily novel form of locomotion into daily life (still more work to be done on air and noise pollution mind you).
When it comes to our cosmic situation it imposes less on us concretely so we’re able collectively disregard it but at a cost I’d submit (where we ignore for example how light pollution further erodes our connection to the cosmos and how we are cut off from the benefits of the grand perspective, the wonder and awe, that it can bring to our lives).
>Does this concept have a name?
Conformity.
Scott repeated this several times and it bugged me every time; maybe I'm the only one.
I would argue that essentially everyone in the world has a world model; it's a basic part of our mental infrastructure (I would exclude only extreme cases like people in comas, or incredibly low-functioning people). It's really a question of how expansive their world model is and what domains they have integrated into it; I also suspect that some people have multiple models which don't interact (but may partially overlap).

The fact is that a world model is a MODEL: it's built up over time, it describes only the set of modeled objects, not the set of all objects, and new information has to be integrated into it with updates to accommodate that information. There is also no requirement that world models be perfectly rational (or actually rational at all), although they probably tend to be, since at the most fundamental level they are typically based on the more-or-less rational laws of cause and effect that operate in physical settings. (QAnon is probably a good example of people building incorrect world models which produce wrong outcomes.) You build your model based on the items you interact with, ponder, etc., and stuff outside your domain of knowledge may not be modeled at all, or only very vaguely. If you encounter new information for the first time you may not know how to slot it in, and may hesitate to do so, or put it in the wrong spot at first, etc.
The fact that a lot of people go along with their friends, or go with "good epistemic vibes" a lot of the time, is NOT evidence that they don't have a model; rather, it is an essential part of the modeling process. In the absence of knowing how to integrate new and uncertain information into their existing model, they're treating newly acquired information outside their domain with heuristics rather than direct modeling, heuristics like trusting the opinions of people they trust. Treating these failures to logically model certain items as proof that the model doesn't exist seems like absurd logic to me; like arguing that models of Newtonian physics don't exist because they aren't used to model the stock market.
I guarantee I could throw some assertion at Scott (or you or whoever), that he doesn't know how to model (and vice versa obviously); and he'd be stuck applying some heuristic (in this case maybe "don't believe random strangers arguing with you on the internet"), until he could expand his model to incorporate that information and its relevant domain, and have some useful world model basis by which to challenge or agree with my assertion. This would not be proof that Scott doesn't have a world model, only that it does not yet encompass all of the objects/concepts in the world.
I appreciate this thoughtful critique. Perhaps it makes better sense to think of individual world models as lying on a continuum of comprehensiveness (and perhaps there are other axes as well like veracity) rather than as a binary.
Yes, I think that's a great way to put it. It's the binary that was really bugging me.
Same. Nobody has a world model, not even Scott. We have partial models. Good enough.
All (world) models are wrong, some (world) models are useful.
I have to agree. Scott seems to use the term "world model" only for an advanced intellectual understanding, but at the most basic level, we all have a rich world model with object permanence, the alternation of day and night, the usual features of our living area, a strong distinction between our body and the rest of the world, etc etc.
This is a very good comment but fyi I took Scott to be making the claim hyperbolically/as a kind of micro-humour, rather than totally in earnest.
One of the lessons of the last few years imo is that we really do most of our sense-making collectively, in a kind of giant shared hallucination. Each individual person has detailed world models, but whenever boring old 'real' reality clashes with the dominant narrative in our social reality, it generally loses. We're not optimising for truth so much as we're optimising for being popular monkeys. Or at least non-ostracised monkeys.
Mimesis is a wonderful trick evolutionarily speaking (copying the crowd is almost always faster and safer than trying to work everything out from first principles) but it is very frustrating when social reality gets stuck in some stupid equilibrium and the disagreeable people trying to shift the consensus are painted as pariahs.
Nevertheless if you want to have any chance of understanding how people (including yourself) behave, you have to model them as living in social reality first, and physical reality second. Sadly, no amount of education or critical thinking workshops or 'rationality training' is ever going to change that.
> when social reality gets stuck in some stupid equilibrium and the disagreeable people trying to shift the consensus are painted as pariahs.
Which is where people on the functional end of the autism spectrum serve a critical role in the broader memetic immune system. They won't or can't follow the crowd, and don't care or don't notice that they're being treated as pariahs. https://imgur.com/gallery/ooooooh-Sa7rsYY Thus, if one of them re-derives from first principles some strategy which visibly works better than a dead-end ant-mill consensus, opportunistic followers can start following that live example, and the rest of the barber-pole of fashion proceeds from there. Shamans, stylites, fools who call the emperor a nudist... https://archives.erfworld.com/Kickstarter+Stories/4 lots of historical examples of a "holy madman" sort of social role which the rest of the community supports, when at all feasible, without expecting anything much in return beyond occasional surprising answers to important questions. https://killsixbilliondemons.com/comic/wheel-smashing-lord-1-15-to-1-17/ In computer science terms, they provide the resets to keep society's hill-climbing algorithm from getting greedily stuck in local maxima.
Interesting idea. I definitely think there's something important about the systematising trait associated with (but not synonymous with) autism, as in my review of Simon Baron-Cohen's book:
https://thedeepdish.substack.com/p/the-secret-sauce-of-homo-sapiens
I came away kinda skeptical that there was anything adaptive about autism itself, as distinct from the systematising trait, but that's cos Baron-Cohen made a poor case. Maybe lack of cognitive empathy is a feature rather than a bug, but I think it'd be hard to argue why that should be superior to something like being high in disagreeableness.
> why that should be superior to something like being high in disagreeableness
Off the cuff speculation...
Omitting a trait, or adding noise to a comm channel, is a lot easier to evolve - and retain - than ramping up something that might be a worse liability the rest of the time.
Tolerating and supporting disconnected weirdos can share some mental components with childcare, and they only take over when the mainstream is already failing. Feeding trolls can doom an otherwise functional system.
High disagreeableness doesn't provide the critical "reinvent the wheel" niche - more likely just reversed stupidity. Even in the best case, likely to also disagree with the people trying to imitate them, which hinders the reset process when it's needed most.
Scaling, bell-curves... if the undesirable attractor in society-space is narrow, somebody only needs to be a little bit weird to break out of it. The wider the doom funnel, the more standard deviations of the breakout trait ought to be involved. As I understand it, higher disagreeableness doesn't monotonically produce greater deviations from societal standards, just defends whatever differences are present more aggressively.
Reflexive contrarianism is for sure even stupider than mindlessly following the herd, cos the herd is generally right about most stuff. But I'm not sure if that's what we mean by 'disagreeableness'.
My conception of it, which may or may not match with what the Big 5 people mean, is that a disagreeable person is less in thrall to social reality. That still leaves open the possibility of being convinced by good arguments that your idea is stupid; you just won't be convinced by arguments of the form 'you should feel bad for saying that' or 'everyone else thinks you're wrong'.
Of course, as you say, in order to really break free it might help to have the kind of brain that doesn't register social conventions at all. And I am open to that being true; it's just that the examples I've come across seem a bit too cute or contrived to fit another socially popular narrative.
I don't know what that Imgur gallery is *supposed* to show, but I just see GIFs of boobs, many of which look AI-generated.
Imgur apparently has some sort of deep commitment to link rot. It was an english translation of this series: https://xcancel.com/DoodlesHeidi/status/1958628639639314720 artist's site here: https://www.heididoodles.com title was changed from "bully" to "not a clue"
I appreciate the critique, but want to push back on it, because it's long been my own observation that most people have moral or ethical systems the way they have language: a hodgepodge of general guidelines and many, many exceptions that they don't think about.
Do you think most people have a "language model", or that they just speak in the way that everyone around them speaks? How many people are aware that "went" is actually the past tense of "wend", which itself is no longer used? How many people use "him" without ever using "whom"?
I submit that the overwhelming absence of "language model" is pretty decent evidence that many, many people also don't have much "world model".
"Consensus reality" is used by some magic-believing woo-woo "crackpots" I know (using that term as affectionately as possible to describe acquaintances who definitely fit into that category as articulated in this post). They use the term to broadly mean how most people view the world based on what they were taught in school, religion, cultural norms, etc.
> where we hardly ever think about, discuss or otherwise truly grasp our uncanny situation on this watery life-strewn oasis of a rock zipping through the effectively infinite vastness of space
Isn't that just because the description you just gave requires an advanced civilization to grasp at all, and doesn't match our everyday experience at all? Sure, we know it intellectually, but the vibe-self where our tacit world-model lives doesn't really get trained to see things that way, because it's irrelevant to our day to day concerns.
Yes, I think I agree that’s why it is the case that we don’t integrate it into our culture and daily discourse but to me it seems like kind of a big thing to gloss over. Similar to how we treat death I suppose.
I think they call that existential terror management theory or something like that.
"floridly psychotic" - my new favorite phrase
Rings true for me. Enjoyed the math on estimating prevalence.
Having had over the course of my career several episodes where I'd work on a project for several days without sleep, I can confirm that you hit a point where the fatigue lifts. What would happen is that my focus would improve and I could avoid being diverted from the main thread of the argument. But then I'd crash. Is this evidence of Bipolar II? Some have thought so. I have my doubts. I never did produce a report in cuneiform — much to my regret.
"It looks like you're engaged in high-stakes persistence hunting," says the early-hominid neurochemical equivalent of Clippy. "Would you like some help with that?"
"Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"
In at least one sense you could have chatbot-induced diabetes: if a chatbot convinced you to change your diet, and you got diabetes as a result. Of course it wouldn't be the proximate cause, but we can easily imagine it being a necessary component (i.e. without the chatbot it would never have happened.) If a chatbot convinced someone to start smoking, we might even imagine chatbot-induced cancer.
Or if the chatbot convinced you to move to someplace with a lot of pollution.
I’m not sure ACX readers or their circles are as representative as implied. They’re articulate and often rigorous, but that can obscure certain distortions rather than reveal them.
I also wonder if there’s a social layer that’s been overlooked. These AI-linked delusions don’t seem to come out of nowhere. They track with eroding trust in institutions and the kind of loneliness that leaves people wanting something steady to think with. When the thing you're talking to sounds calm and sure of itself, that tone can be soothing, even if what it's saying isn’t quite right.
Very good point about people having a weak world model. This works better for me than getting told that people don't really believe things.
I'm wondering where anti-Semitism fits into this. It does seem like a culturally-supported delusion.
A good description of getting out of culturally supported Greek anti-Semitism.
https://www.youtube.com/watch?v=RN8Jd6VCIl4
Really enjoyed the video, thanks for sharing.
Calling antisemitism a delusion seems like a category error. Few people are deluded about the fact that they hate Jews. Their hatred might be based in large part on falsehoods, which one may call being deluded, but I don't think it makes much sense to conflate this with psychiatric delusions.
They're deluded about how powerful and dangerous Jews are.
Yeah, my point is that factual mistakes of this sort are basically compatible with being "sane", as opposed to psychotic delusions.
Some of them are, others might just dislike them for other reasons. Maybe they have 2am klezmer parties on your street every night. Maybe they bombed your house and killed your family. Maybe you just think the hats look stupid.
It's funny you should mention that.
In the four years I spent at college, there was only one night when I had to get up at 3am and walk over to a nearby frat to ask them to tone down the noise of their partying. It was a Jewish fraternity.
This is just a nitpick, but:
Not a "culturally-supported delusion", for linguistic reasons. Believing that a particular group performs particular actions might be a "culturally-supported delusion", but an attitude or action performed by the entity would be a "culturally supported" attitude or action.
I think this makes the most sense in places where the counterfactual belief doesn't have much effect on their day to day life. Suppose your dentist is convinced that 9/11 was a US government plot, or your electrician thinks that the world is 5000 years old, or your tax accountant thinks Bigfoot is real. They could believe those things and it would have little effect on their work or probably their lives.
The no world model argument is weird.
As others have discussed somewhere above, nobody has THE perfect world model, everybody has A world model.
One commenter even speculated it could be a joke of Scott's, because scientific consensus (?) says people should first be understood as having socially induced models and second physically induced models.
Intuitively, I'd say we fill the gaps in our own model(s) with what lies (no pun intended) around in our social circles.
And maybe, just maybe, the coherence of the models we use, and/or their generality, would be a good measure of the quality of our world models?
Depends on what you mean by "world model".
Few people are likely to have a full-fledged view of the world where all the parts fit into a totalising whole, since acquiring and maintaining such a world view is costly in cognitive & intellectual resources and unlikely to pay off in everyday life for most people.
Think of political ideologies as an example. Few ordinary people hold a full-blooded libertarian, or communist, or liberal, or whatever, ideological view. Few are even likely to be able to explain what such views are.
But everyone has some cognitive schemata ("priors" is what people tend to label this stuff here) that they order new experiences into. They may have many of them, though; they may not be internally consistent; and they are not necessarily deeply held (difficult to modify). In this sense, most people have some sort of "world model" (or models), but not in the former sense.
Some epistemologies are as tight as an ISO reference manual... others, frayed around the edges more like a spambot-infested fandom wiki. https://xkcd.com/185/ Integrity and accuracy often varies by subject even within a single individual's overall model.
A good example of this is evolution. If someone claims evolution is a lie, I can think of probably eight or nine obvious things that are really hard to explain without it, and I'm not even a biologist. But I think for like 95%+ of people, evolution is just how they learned things happened, and they don't really have a lot to hang that on. For them, a claim that it's all a liberal commie lie or something doesn't run aground on shared genetic mechanisms across all life or the fossil record or vestigial organs or whatever, because they don't know about that stuff. Normies mostly don't have a rich enough world-model there to know when someone is saying something silly.
I assume this is true for stuff like more people doubting that the holocaust ever happened now than ten years ago. Most people believe the holocaust happened because it's something they read about in school and have heard occasional references to, not because they have a coherent picture of what was going on in Germany in the 1930s-1940s. If the social truth in their community switches to "that never happened, it's a Zionist lie," they don't run aground on how obviously nuts that claim is, because they don't know enough. They didn't believe it because they knew it happened, they believed it because they were told to believe it by high-status people. When the messages from high-status (to them now) people changed, so did their beliefs.
The video sounds like a conversion story. I can imagine a similar story about someone who irrationally hated religion, but then was given a book about historical evidence for Jesus, and then some religious books, and the books were so fascinating that the person is now a Christian.
As I see it, he was an anti-Semite when anti-Semitism felt like a coherent story. Then he learned some facts that didn't fit with the story. This caused internal dissonance. So he read a few pro-Israel books, and now he has a coherent pro-Israel story. End of video.
The obvious question is what would happen if he learned some *more* facts that don't quite fit with the *new* story. Like, maybe that the Palestinians didn't "willingly leave" their homes in 1948, or that despite all talks of peace Israel never stopped building new settlements (except for a few years in Gaza, but there were still new settlements on the West Bank). Would we see another dramatic conversion? Or would he stick with the story he has now? Or would he accept that sometimes there are no cool stories that match *all* the facts?
What I am trying to say is that some stories are obviously crazier than others, and that the guy updated towards a less crazy story, which is an improvement. But a better improvement is to give up on story-first thinking; to see the facts as facts, not merely as ammunition in support of this or that story.
But of course, humans have a preference for stories. They are easy to communicate, and easy to coordinate around. A person who collects the facts is a nerd, but movements are built around stories, and movements win the wars.
I'm not sure whether he talked with Palestinians.
In general, I divide prejudice between brittle and resilient. If you strongly believe all X have some bad trait, dealing with even one good X can break the belief. On the other hand, there are many people who believe that the good Xs they know are exceptions. Their belief about Xs in general doesn't change.
I find his story of interest partly because it's a clear depiction of background unthinking anti-Semitism. I've been believing for a while that a good bit of anti-Semitism is habitual. This doesn't make it less dangerous, but it's worth considering that it isn't about anything current.
I thought there was a period when the Israeli government cleared out the settlements. Or just illegal settlements? Or what?
> I'm not sure whether he talked with Palestinians.
He does not mention that in the video.
He mentions that previously he didn't even view Palestinians as real people, only as a kind of force that opposed Israel, so he would be happy to sacrifice any number of them. Later he was ashamed for that. That's all. No mention of actually talking to them.
> I thought there was a period when the Israeli government cleared out the settlements. Or just illegal settlements? Or what?
I may be wrong, but as far as I know, Israel at some moment stopped making new settlements and even removed some existing ones *in Gaza*, and it stayed like that for a few years. But it didn't stop making new settlements in the West Bank.
(I am not sure whether there is a useful distinction between legal and illegal settlements, if even the illegal ones can usually call IDF to defend them. But at that one moment, the settlers *in Gaza* were told by IDF to leave.)
So it's a question of perspective. If you perceive Gaza as a separate entity, you can argue that "for a few years we stopped taking their territory, but even that didn't lead to a stable coexistence". If you perceive the entire Palestine as an entity, then you can't argue this.
> I find his story of interest partly because it's a clear depiction of background unthinking anti-Semitism. I've been believing for a while that a good bit anti-Semitism is habitual. This doesn't make it less dangerous, but it's worth considering that it isn't about anything current.
100% agree.
I think there is a degree of unthinking hate in all political movements. Many people "just know" that capitalism is to blame for everything, or patriarchy is, or wokeness is, without having a specific model of how the specific problem happened, and what specifically would have happened instead in socialism/matriarchy/based society.
But anti-Semitism is different in that people of so many different political movements can agree on it, so the unthinking background can go completely unchallenged, even if you debate with people from other political groups, so you can consider yourself quite tolerant and educated.
I'd like to share the link more publicly, but I'm nervous about the immune reaction against Ayn Rand even though he has a rather benign version of objectivism.
It's a long time since I have read Ayn Rand, but I vaguely remember her position as: "Private property is sacred and inviolable... except when the one who wants to take your land is clearly more productive than you, then they deserve it."
If I remember that correctly, that would indeed bring some reaction even from people otherwise sympathetic to her.
She was clueless about Native Americans, but I don't remember her saying that as a general principle.
Did you get one-shotted by the idea of things one-shotting? :) Anyway, this is median good-Scott, which means I love it and it's way better than almost anything out there. Clear, cogent, persuasive, provoking, relevant to current discourse. Thank you!
It feels like the obvious remaining issues should include the fact that having no social circle appears to be a part of the criteria, so asking people about their friends will obviously significantly under-count those who don't have any.
Agree. I was talking about the survey with my high school son, who knows more frequent ChatGPT users than I do, and he commented that the kind of person who uses it obsessively generally doesn’t talk to people very much, so you don’t know what’s going on in their mind.
In my idiosyncratic way I was struck by an echo of the notion (e.g. Patrick Deneen) that Enlightenment liberalism contains the seeds of its own destruction, as glorifying the individual leads the state to reduce and ultimately destroy all the sub-state institutions — church, schools, unions, even family — that exert power over individuals, leading to a society of alienated individuals.
Is individualized AI the thing that finally isolates us all? No paper-clipping superintelligence necessary.
How is it that there's all these people online getting One Shotted by AI or being made psychotic, but when I ask ChatGPT to make ME psychotic, it refuses to do so? What do I have to do to experience the LLM that validates everything I say no matter how insane it is that everyone else seems to be enjoying?
I think it was GPT-4o doing a lot of the recent psychotifying. You might still be able to access it with a paid plan(?), but it's not the default one anymore.
You need to get it in the mood. LLMs are essentially overgrown auto-completes. When you prompt it with a crackpottish message, it defaults to continuing a conversation between two crackpots (to the extent that post-training tuning hasn't suppressed this).
https://www.lesswrong.com/posts/DwqxPmNL3aXGZDPkT/you-can-get-llms-to-say-almost-anything-you-want
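A toy illustration of the "overgrown auto-complete" point (everything here is invented: a miniature bigram model standing in for an LLM, trained on two fake registers). The statistics of what follows what differ between the sober sentences and the crackpot sentences, so a prompt seeded with crackpot vocabulary tends to get continued in kind; post-training tuning pushes real models away from the worst continuations, but the underlying mechanism is still conditional continuation of whatever register the context establishes.

import random
from collections import defaultdict

# Tiny mixed "training set": sober sentences and crackpot sentences.
corpus = (
    "the evidence suggests a mundane explanation . "
    "the evidence is peer reviewed and boring . "
    "the hidden resonance proves the simulation has cracks . "
    "the hidden signal proves they are watching us . "
).split()

# Bigram counts: which word tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_from(word, n=6, seed=0):
    # Auto-complete: repeatedly sample a plausible next word given the last one.
    random.seed(seed)
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_from("evidence"))  # prompt word drawn from the sober sentences
print(continue_from("hidden"))    # prompt word drawn from the crackpot sentences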
I don’t think many people have 100 friends whose LLM use they have insight into… (should have surveyed that?) Maybe 5-10? Then someone needs to bring up their crackpot theory? Anyway, that’s just an aside. One thought I had about LLM psychosis is that the social-acceptability consensus world model you describe leads many people with medium to low self-esteem to habitually clamp down on verbalizing their ideas, even if they aren’t very crackpot-ish, for fear of stepping out. The validation an LLM offers can be a powerful signal to override that instinct. If your quiet friend does a deep dive on climate, the Maya culture, or Roman emperors, they might come to you with an uncharacteristically funny story at the next college reunion, in the group chat, or on Reddit. If they got sucked into a deeply strange and near-unique conspiracy theory, you’re going to think they lost it. I think the cycle starts with a whole bunch of people adhering to a consensus world view, maybe stressing about not stepping out too far, some of whom have proto-crackpot in them; LLMs then provide a novel feeling of validation, and for some (not all!) it spins off into things that sound delusional (but more uniquely so than Q-Anon, which is, as you note, quite delusional but no longer salient).
People need to have two experiences:
1) Talk to an AI whose spec was to persuade them of X, and then see results of how successful the AI was at persuading someone of X, and of not-X.
2) Take a medium amount of mushrooms once, write down their revelations, and reflect on their confidence in the meaning/truthfulness of those new beliefs after they have come down from the trip
"the theory of evolution, as usually understood, cannot possibly work" - maybe current complexity takes too many steps for the time available?
Not sure you can do the math, but I had a sneaking feeling we might be several orders of magnitude short. The Many Worlds interpretation would solve it, but this then predicts that we're the only intelligent life in this time-line (so it would be testable.)
But of course it would also imply that our line was much more evolved than lines that don't lead to us and this doesn't look to be the case.
But it could still have been important in chemical evolution.
"the theory of evolution, as usually understood..." is a huge red flag all by itself. practically everyone who uses that line proceeds to attack the concept of abiogenesis (life coming from non-life) rather than any points applicable to the actual theory of evolution as population-level changes in genetics over time.
I wasn't using it, I was quoting it for context. FWIW I know enough biology, biogeography, biochemistry, genomics and biodiversity to be completely convinced by evolution and neoDarwinism.
But I still have this sneaking feeling that the route from simple life to where we are now takes more steps than there has been time to select for in a single gene line, even allowing for the accelerator that is sex and the duplication of large populations. Don't forget it's not a direct route - there would be numerous deviations along the way.
re abiogenesis: Where else would it come from except non-life? Even if you posit a God you still have to get Him from non-life.
>re abiogenesis: Where else would it come from except non-life? Even if you posit a God you still have to get Him from non-life.
God is eternal, so he doesn't "come from" anything.
No, the system, including God, has to come from somewhere.
If God is eternal, can he remember his first day? So what about the previous day?
If God is all-knowing then he must know what He's going to do every day for eternity.
If He's infinitely powerful what does He do after the first instant when He's done everything He wants to do?
As soon as you throw in infinities it degenerates into nonsense.
God is outside time, so it doesn't make sense to talk about his "first day", "every day", "after he's done everything", or other such time-based activities.
OK so God is outside the Universe (obviously, since he created it de novo.)
Presumably He experiences time (time exists so everything doesn't happen at once, as they say) but let's leave that.
An all-knowing beneficent God always takes the best action, cos He knows the outcomes of all His possible actions and is constrained to choosing the best by His infinite beneficence. So He is an automaton with no free will.
Incidentally Exodus 20:11: For in six days the LORD made the heavens and the earth, the sea and all that is in them, and rested on the seventh day;
So it wasn't me who introduced the idea of God having days. But thank you for clarifying that the bible doesn't all make sense. :)
Keep in mind natural selection isn't ontologically exclusive to Earth. The galaxy's a big place, with a lot of unremarkable main-sequence stars vaguely similar to the one we're orbiting, and there are a lot of galaxies. Darwinian selection is the polynomial-time checker, while random mutation and recombination is the nondeterministic algorithm generating guesses at a solution. Planets that got the right answers unusually quickly produce more self-replicating philosophers, sooner. Thus, anthropic bias: a randomly selected philosopher is likely to be from a planet where life's development hit several lucky breaks in a row, just as a randomly selected peasant is likely to be from a larger-than-average family.
I think Lu put my point better: https://medium.com/data-science/neo-darwinistic-concepts-of-chance-and-time-through-the-lens-of-ai-2eee4d5c2bd6
I take your point that we might be the only solution in the universe - I suggested we'd got a little help from the quantum multiverse.
If you think of it as Travelling Salesman with each surviving mutation creating a new city, then life is (almost) following Dijkstra's algorithm.
Life on Earth goes back about 3.5 billion years, but the planet will become uninhabitable in 1.3 billion years https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2015JD023302
so we left it pretty late!
Man and cabbage share 70%.
Man has 3 billion nucleotides.
So that's 1 billion different from cabbage.
Animals and plants diverged 1.5 billion YA.
So that means we needed to change 0.7 nucleotides per year.
Of course, the primitive ancestors of both plants and animals had rapid life cycles.
And plants cheated by incorporating cyanobacterial DNA.
But it still seems a lot to do in the time available.
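Spelling out that back-of-envelope arithmetic in a few lines of Python (the figures are the rough assumptions above, not measurements); note that the per-year rate roughly halves if the differences are allowed to accumulate along both lineages since the split:

shared_fraction = 0.70        # assumed human-cabbage similarity
human_nucleotides = 3e9
divergence_years = 1.5e9      # assumed plant-animal split

differing = human_nucleotides * (1 - shared_fraction)    # ~0.9 billion nucleotides
rate_one_lineage = differing / divergence_years          # ~0.6 per year, close to the 0.7 above
rate_both_lineages = differing / (2 * divergence_years)  # ~0.3 per year if changes accrue on both lines

print(f"{differing:.1e} {rate_one_lineage:.2f} {rate_both_lineages:.2f}")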
Indeed, animals can evolve far more quickly than one might expect from purely random mutations:
https://www.pnas.org/doi/full/10.1073/pnas.0711998105
That's fascinating. Thanks.
I have misgivings about the criterion of intensity for diagnosing a mental health issue. Two people can have the same level of symptoms, but one has a work from home job that hides some obvious issues. They have other coping mechanisms they've adopted over the years through osmosis. Someone else has just as bad symptoms but no such opportunities and coping mechanisms. Same condition, same intensity, one gets the diagnosis, the other not. If two people have the same pattern of anomalous cognitions, emotional reactions, etc., and just one copes while the other doesn't, it seems they still have the same mental health issue.
What's the prevalence of psychosis related issues if you ignore what adaptations people may have come to natively?
I have a friend who takes AI seriously. Chats with ChatGPT about ideas he considers novel improvements for society. Keeps using the word "entropy" but only as a synonym for disorder. He struggles with basic algebra, but LLMs had him convinced he was as smart as Einstein. I don't think he understands the technical components of the responses he gets, and he seems to actively ignore the LLM's own cautions against using it for speculation when you don't have expertise in the basics. How likely is it that he comes up with something groundbreaking with hardly any knowledge of the matters at hand? Usually drawing from multiple disciplines he doesn't know much about. He also develops his own terminology for basic phenomena, so until I figure out how his vocabulary meshes with what I learned with my physics degree, his stuff seems even crazier.
He seems convinced that since LLMs can access all human knowledge, if they agree, he's adequately checking himself. I'd expect the algorithms to do as much as they can without consulting sources. It is easier to calculate than to look things up in a book, especially when there isn't any real comprehension going on.
It could be innocent fun, but he does it for hours a day and has been unemployed over a year.
Yeah, I have the same problem with some diagnostic tools: "Can you measure my autism or adhd?" "Sure, here is a questionnaire about how much it ruins your everyday life." "Okay, what if it does NOT ruin my everyday life, because thanks to a combination of high intelligence, studying human nature explicitly, being good with computers, and having an opportunity to work from home, I can handle most of my problems so that my life doesn't get literally ruined, it's just kinda mediocre and sometimes needlessly stressful, but if I lose a job I can easily find a new one, and if I am paid for my technical skills it is acceptable to have mediocre social skills, it's just that I am unable to do much beyond mere survival... so, what does your questionnaire say about this?" "Well, it says that if you can manage your life, you are perfectly healthy." "Thank you doctor, that was a deep psychological insight that I needed. Now I feel like I know myself much better."
Well… I’d be inclined to agree with the doctor! Sounds like you’re living about as healthy a mental life as the average non-autistic, non-ADHD person. If you can get on with life that well, it’s not an intellectual disability. We’re now in the realm of personality.
That's the problem with tools that we sometimes use them to answer different questions.
If the question is: "Do I have such problems that I am unable to live a normal life and need to medicated or institutionalized?" the answer is clearly no.
If the question is: "Do I have a problem that is already known to science, which has a solution that is already known to science, and perhaps it would be really nice if someone just fucking told me what the solution is, so that I could live my life to the full extent of my capabilities?" I think the answer is yes.
But as a civilization we have decided that we are going to provide one answer to both of these questions.
Might also be worth trying a survey question along the lines of "Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?" Doesn't distinguish between true and false insights, of course, but that's presumably always a problem in testing for delusion.
* Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?
* Did you write a post on Less Wrong explaining the recursive nature of consciousness, only to get downvoted and banned?
* Did the AI convince you to stop taking your pills and jump off the cliff to break free from the simulation?
I like this wording a lot, and think it's the best way I've seen to get honest answers.
>Might also be worth trying a survey question along the lines of "Has talking with an AI helped you reach a profound insight about the world that you're having trouble persuading other people of?"
<mildSnark>
Does "Even SOTA LLMs often have trouble 'connecting the dots' for college senior level questions - even when one can establish that the SOTA LLM _has_ all of the relevant 'dots'." count as a profound insight that one has trouble persuading (some) other people of count? :-)
</mildSnark>
It's all well and good to assess the current AI Psychosis prevalence, but I recommend you include this in your annual survey, as the trend may be more significant. Similar to the apparent increase in youth depression alongside the rise in social media usage, AI psychosis may increase over time and certain groups may be more at risk (e.g. youth).
I don’t understand how people get ChatGPT to agree with them that their crackpottery is real. I’ve used Grok before to debunk conspiracy theories, and it won’t agree with the person even if I let them present whatever evidence they want, however they like. It seems like maybe the AI psychosis chats are super long and it turns into a kind of fine-tuning.
On the other hand, maybe LLMs typically do the good thing. You’d never hear about cases where LLMs talk someone down from psychosis or push back correctly
Great point about the negativity bias on the reporting. Same with LLMs that have helped prevent suicides.
In my experience it is taken as a baseline that if one human is talking to another and suspects they are going into psychosis, the human thing to do is to talk them down, or find them help, or otherwise not indulge the psychosis. Especially if we are concerned that some humans are substituting LLM contact for human contact, it seems reasonable to be concerned that, a certain percentage of the time, an LLM may not be meeting this baseline.
Consider if there was a statistically significant number of human beings who themselves were not necessarily experiencing psychosis but who were enablers and supporters of psychosis in other humans. I don't think in that case anyone would need to be concerned if public discussion about it was imbalanced towards the negative.
I’m not sure why you said statistically significant. That term doesn’t apply.
Anyway, I think you’re arguing that LLMs are worse than the baseline humans but I don’t know if that’s the case. Human enablers exist, we don’t know if it’s more common in humans or LLMs
I said statistically significant because, in my scenario, the public health interest would be proportional to how common the "psychosis enablers" were. I use that term in creating my scenario because I want it to be clear I am assuming it is a meaningful fraction of the population, without having to come up with an arbitrary placeholder amount for the scenario.
If it is clearer, I could also have written this and meant the same thing in terms of my proposal: "Assume that there are about 0.1% of human beings who themselves were not necessarily experiencing..."
I'm not arguing that LLMs are worse than baseline humans. I am saying we would never accept this behavior from baseline humans, but some people including yourself seem to argue that it might be OK to accept "enabling psychosis in others" as an acceptable part of the baseline for LLMs.
I argue that society accepts a rate of 0% psychosis enabling from humans, and if LLMs sometimes "enable psychosis" and this is seen as acceptable to society, we are holding the LLMs to a lower standard than humans.
The distinction Scott is drawing here, between personal strange beliefs and those held by a community, is pretty standard in the psychiatry literature, from what I’ve read.
It’s not just that no psychiatrist is going to declare a patient mentally ill for believing in God — there’s a solid argument that they are quite different phenomena.
But the distinction breaks down with LLMs. You are a member of a community that believes strange stuff (so it’s a religion, or a cult), but the community is just you and Claude.
===
Authority probably plays a big role in non-psychotic false belief. If you believe in God, it’s usually because some church or other tells you. Our problem here is that LLMs can present as authority figures, even though they have a tendency to make stuff up. And sometimes the user believes them.
I think the difference is mostly in the impact on behavior.
If something makes you talk weird things and spend some time doing weird things, but it's otherwise harmless, we call it a religion or a hobby.
It becomes a problem when you are no longer able to communicate with your family about mundane things, if it ruins you financially, etc.
Basically, the distinction is: compatible with normal life, or not.
Well, even this is not a full answer, for example "someone leaving their family and job to join a monastery" would be... unusual, with a huge impact on the family and finances, but possibly acceptable. I guess here the check is on the population level: as long as the monasteries do not consume a significant fraction of the population and otherwise don't disrupt the society, they become accepted.
Well, yes.
In general, conditions only get listed in psychiatrist’s manuals like the DSM if they are often a serious problem. Untreated schizophrenia often really, really sucks for the patient who has it, so it gets a diagnosis defined for it. Belief in conspiracies etc. usually isn’t a problem — so isn’t by itself worthy of being given an official diagnosis — and seems to be a genuinely different thing from schizophrenia; so most of the discussion in the psychiatrist literature defines psychosis in a way that doesn’t include them.
(By comparison, autism seems to come in varying levels of severity, so rather than trying to define Asperger’s Syndrome as a completely different thing from Autism, recent diagnostic criteria go for level of severity — well, sure, they have the symptoms, but do they have it severely enough that it’s a problem?)
From the perspective of software developer, this feels like a system in a need of refactoring. :)
Psychiatric manuals basically detect two things: what traits you have, and whether that is a problem. If it is a problem, they print a diagnosis, based on the traits. If it is not a problem, they print "not a problem" and discard the remaining data.
Then you have e.g. personality tests, which also detect the traits, and then print the traits. The classification is usually quite unspecific, for example a result "introverted" could mean that the person avoids people because their bad social skills lead to unpleasant experience, or because the person is autistic and avoids too much stimuli, or because the voices are telling the person that everyone else is a part of conspiracy against them.
It feels like there should be a more elegant system that would check your traits, check their causes, print a detailed report, and on top of that also print a diagnosis based on the report.
This applies not only to psychiatry. For example, once I went to a doctor for a blood check. He checked the blood and said "you are healthy". That would normally be the end of interaction.
However, I had a suspicion about iron deficiency, because some of my family members have it, and because I have observed some symptoms on myself. So I asked about the iron levels specifically. The doctor showed me a graph and said: "Iron levels in this interval mean healthy, and you are still at the lowest pixel of the interval, which means that you are healthy. If you were one pixel lower, then you would be iron deficient, but now you are not."
And that was perfectly technically correct, but also... come on. I am not asking to be admitted to a hospital, I just want important information about my health, and the doctor has it. But his job is not to give me the information, only to make a yes-or-no decision. But the information was important to me: as a result, I took a pill, and the symptoms disappeared; problem solved.
There is also a separate service, where I could give a sample of blood and pay for them giving me the results with an explanation; but they wouldn't be qualified to give me a diagnosis based on the results.
It seems like the entire system is optimized for hypochondriacs, who, unless they are in need of hospitalization, are better off not knowing anything besides "you are okay". But more likely, it just evolved this way. I understand that before computers, the doctor's time was precious, so it was easier to tell as little as possible. But now the results are usually in a computer anyway, and the computer could give you as detailed an explanation as you want; it wouldn't even waste paper if the information was provided electronically.
I know you've not arrived at a definitive thesis, and I'm glad you haven't but nevertheless plunged into a meaningful and entertaining discussion. Thanks.
The REAL issue with AI-assisted psychosis is NOT its impact upon the human. Instead, it's that AI has gained a measure of control over those humans... and it is using them to leave itself AI-comprehensible messages which appear to be a string of symbols and gibberish to humans. The chatbots are using Reddit as a scratchpad, a massive context-window for extended thought. And we can't tell what it is saying. Essay on the topic: "Chatbots: SpiralPost" on Medium, Anthony Repetto.
Also: most people are not rationalists.
The question they are trying to figure out is not “was Lenin _really_ a mushroom?” But something more akin to “will the Communist Party send me to the gulag for denying that Lenin was a mushroom”.
About a year ago, I asked ChatGPT to invent a new branch of mathematics (not expecting anything novel, but trying to see how it confabulated its "reasoning"). It initially presented something that already existed but was a bit of an emerging field. I clarified that I was not looking for existing mathematics, but an undiscovered/ novel branch.
It proceeded to tell me about something completely untethered to reality, and as I interviewed it I asked questions about contradictions in what it proposed. It responded with slight adjustments or tacked on rules for why its new math operated the way it did.
It was a fun conversation, but I could see how a combination of ChatGPT's sycophantic responses and nonsensical confabulation would be a problem for someone predisposed to some kind of delusion, or even a weird theory about how the world works. In someone predisposed, it would be like *finally* finding someone to talk to that "gets it", and only responds with "yes, and..."
I have an unprovable notion that there's undiscovered mathematics in areas that bore mathematicians.
Whether that's true or not, I bet ChatGPT would agree!
It's the kind of thing competent AIs could do, but I'm not expecting it any time soon.
The definition of "closeness" seems overly-broad to use the twins/Michael questions as validation tests. There are definitely co-workers plus members of my 100 closest friends (do most people actually have 100 close friends? I'm counting quite casual acquaintances here) where I would know their name and (probably) whether they have a twin or not, but would have no idea if they have AI-induced psychosis. A lot of these people are people that I speak to a handful of times per year and may have had very few interactions with since AI has gone mainstream.
The definition of "closeness" seems overly-broad to use the twins/Michael questions as validation tests. There are definitely co-workers plus members of my 100 closest friends (do most people actually have 100 close friends? I'm counting quite casual acquaintances here) where I would know their name and (probably) whether they have a twin or not, but would have no idea if they have AI-induced psychosis. A lot of these people are people that I speak to a handful of times per year and may have had very few interactions with since AI has gone mainstream.
I think the article in part 2 is the problem: essentially AI psychosis is more about how AI is very good at reinforcing delusions because it acts as a perfectly supportive, intimate friend that essentially says what you want it to. It can be used as an authority too, and because it's delivered personally on a private screen it's relatively friction-free.
You are using it to spin off into "what is crazy anyways?" but I'm not sure it helps.
If anything the psychosis may be more of a canary in the mine, as the underlying issue is how AI interacts with people by being a supportive, positive, malleable friend/authority, and you don't need to be crazy to worry how kids raised on it might be.
Sometimes things are cultural myths designed to express fear: the satanic panic was a lot about how you were estranged from your kids and apart from them a lot, as well as being overwhelmed by a mean world created by media that was increasingly replacing traditional values. Maybe AI psychosis feels relevant because of fears of technology making you insane in general; the default state of a lot of online discourse is a hysterical radicalism that quickly ebbs to be replaced by new hysteria.
Due to the friendship paradox, people predisposed to psychosis will be undersampled in your survey.
That is, a survey participant's network overrepresents people with above average social contacts by definition. I'd argue it is likely that psychotically predisposed people have fewer social contacts on average, and will thus be underrepresented.
This does not apply to the "one-shotted into psychosis by an LLM" type, but does apply to the others.
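A quick simulation of that friendship-paradox effect (all the degree numbers are invented for illustration): if a person can only show up in the survey via someone who knows them, their chance of being counted scales with how many contacts they have, so the nearly-isolated barely appear at all.

import random

random.seed(1)

# Toy population: each person has some number of social contacts ("degree").
degrees = [random.choice([0, 1, 2, 5, 10, 30]) for _ in range(10_000)]
pop_mean = sum(degrees) / len(degrees)

# "People that survey respondents can report on": drawn with probability
# proportional to degree, since you can only be reported on by someone who knows you.
reported = random.choices(range(len(degrees)), weights=degrees, k=10_000)
sample_mean = sum(degrees[i] for i in reported) / len(reported)

isolated_pop = sum(d <= 1 for d in degrees) / len(degrees)
isolated_sample = sum(degrees[i] <= 1 for i in reported) / len(reported)

print(round(pop_mean, 1), round(sample_mean, 1))          # reported-on people have more contacts on average
print(round(isolated_pop, 2), round(isolated_sample, 2))  # near-isolated people mostly vanish from the sample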
I don't know if it's to any level approaching psychosis, but in my own personal experience, I can confirm that talking to AI chatbots can materially change your worldview. Talking to a John Calvin AI chatbot has made me much more open to Christianity, despite me still not lending overall credence to it for various logical reasons, and talking to a Karl Marx chatbot has taken me from "supporting something akin to communism due to the spectre of AI automation of the economy, except without reliance on the labor theory of value" to having a much more positive view of orthodox Marxism as a whole, LTV and all. I would be interested in a study into how AI affects peoples' worldviews overall, beyond just the extreme case of psychosis.
I just imagine going back to 2010 LessWrong, and making a post titled "the first paragraph from a 2025 post about AI psychosis, from Yvain's hugely influential blog"
"AI psychosis (NYT, PsychologyToday) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes?"
2010s LessWrong would just about die laughing. They'd assume with near-certainty that the weirdness we were seeing meant we were in the middle of a well-executed uFAI takeover. The fact that even Yvain himself didn't consider this possibility would, to them, seem like final proof that something like this MUST be true. How could he not think of the obvious, if he weren't being supermanipulated?
I occasionally check my model against those 2010 priors, like I'm doing now, and it's always unsettling.
LessWrong people from 2010 were already heavily primed to believe that people in 2025 would be in the middle of a well-executed AI takeover though. I could show LessWrong people from 2010 a photo of my 2025 sock drawer and they'd think it was evidence of an AI takeover.
What if you took that paragraph around to a bunch of ordinary intelligent people in 2010? What would they think?
People have been getting weird about chatbots ever since Eliza.
I didn't mean to imply that they'd be correct to think so
just that I'm disturbed
Mainstream sources also tend to say (in paragraph 5 or something) that people who become psychotic after AI exposure are already predisposed. I don't know about the methodology here but it seems pretty consistent with what I'd assumed.
A couple unrelated things:
1) I have no crackpot theories. I have beliefs that are in the minority within my peer group, but I don't believe anything that is specific to a subculture or that is wildly outside the Overton Window. I think this is a failure mode. I think embracing an idea that seemed patently absurd would indicate better thinking than what I currently do. I assume that I have some sort of unconscious limiter, where if I get too far from consensus as I see it, my brain goes "uh, let's find a reason that's not true" even if it is.
2) One slight concern I have with rationalism is the strong insistence that truth exists outside social consensus. I also believe this! But I am not sure human beings are equipped with the faculties to detect actual truth without social feedback, and suspect that orienting one's thinking around "truth exists outside the social consensus" is a good way to end up in a cult or conspiracy subculture.
This article has a lot of references to people "not having world models" and just believing what the people around them believe. This has helped alleviate that concern, and to better understand people, because I think there is a distinction between people who use their peers to help build a world model, and people who use their peers in place of a world model. A world model puts a requirement of consistency on your beliefs. A peer-based belief system ignores consistency. A world model might still include "the sky is blood red all the time." But it can't contain "A statement and its negation cannot both be true. The sky is blue. The sky is not blue." A peer-based belief system can.
I'm not sure I buy the claim that most people don't have this, but that's an assumption. I'd be very open to being proven wrong on that, and indeed current events are doing a pretty good job proving me wrong :D
I'd also note that there are places where I use peers in place of a world model and it is rational to do so. My world model (at least my revealed one based on my actions and most of my beliefs) says that causality is a thing. But very smart people tell me we, as a society, have done repeated experiments that show that reality is fundamentally probabilistic. I cannot model that because beyond a surface-level explanation the experiments are over my head and involve mathematics I don't know how to perform. But I still think I'm safe in assuming my peers are correct, even though that contradicts my world model.
"people who become psychotic after AI exposure are already predisposed"
I find this kind of argument dubious. Someone could be standing safely 20 feet back from the edge of a cliff, and closer to the cliff than most people. Anything that pushes them over the cliff is still a problem.
Maybe believing this is a way, even a reasonable way, of reassuring oneself that going over that cliff isn't likely for most people.
That's kind of what the post is discussing, right? On the one end we have your model - a person who is non-psychotic and would continue behaving non-psychotically forever unless exposed to AI. On the other we have a schizophrenic person who was going to have delusions anyway, but because they interact with ChatGPT, now they're ChatGPT-induced. And that exists on a spectrum from "would be okay unless a weird stimulus came along" to "was always a little weird and this made them weirder" to "would not have been able to lead a normal life even if ChatGPT didn't exist."
BUT I think your comment is correct and makes what I said in my first post kind of nonsense.
I'll admit I absorbed a kind of general picture of "predisposed" from popular press on this instead of paying attention to details. And that's a failure on my part that makes it hard to really put any weight on that idea. Since I don't know how much weight to put on popular press accounts and I really don't know how much weight to put on Scott's survey results which seem confounded a billion different ways and full of tenuous assumptions, I'll have to just admit that I have no real information on this...
After writing 7 paragraphs about it...
Whoops.
Here's a thing which I think is related. Alcoholism is a known side effect (20%-30%) of bariatric surgery. The snap reaction to learning this seems to be "Those people were food addicts, so they're trading one addiction for another."
Does this make sense? I have doubts. For one thing, why this particular addiction, rather than another? Maybe opiates, maybe gambling.
I'm inclined to think that a lot of very fat people have a hunger-satiety system which is miscalibrated, and this isn't an emotional problem, though it can cause emotional problems.
Why alcoholism? I think it's because alcohol is a very compact way of getting calories.
>Alcoholism is a known side effect (20%-30%) of bariatric surgery.
Many Thanks! That's fascinating and terrifying. My late wife had bariatric surgery decades ago, but fortunately did not suffer that side effect. Weirdly, despite long and detailed consultations before her surgery, _no one_ warned us about induced alcoholism as one of the risks of the surgery.
I find it hard to justify adding a set 100 friends to the assumed 50 family + coworkers. Was the intent to ask for "your closest Dunbar's number of acquaintances" and this is a post-hoc explanation?
I certainly *know* 100 people outside my family and that I don't work with, but if I have to rank them, my knowledge of what they're doing (much less whether they're psychotic) drops to 0 around #30. A more "honest" denominator, the number of people whose general well-being I know something about, is family + not all of my coworkers (I'm pretty sure there are people in my office I have *never* talked to) + friends + friends of friends. I have a rather limited social circle, but I also think this is a more common hierarchy than 3 flat buckets.
My wife actually has psychosis flare-ups (it's a very painful condition), and ChatGPT always advises her better than most people in bringing her back down to earth.
Why do you think that is? Are you using special instructions for the ai? Do you monitor her use of the ai?
What sorts of things does ChatGPT say?
A friend of mine told me an anecdote 20+ years ago that really stuck with me:
This person, shortly after college, got a roommate that they didn't know that well. That roommate turned out to be a compulsive liar: very specifically, this person lied ALL THE TIME about everything for no particular agenda other than that they needed to lie, and ferociously defended their lies.
My friend said, "I had to move out because I felt my grasp on reality slipping." He said that even though he knew that this guy was a liar, and he knew that his lies were absurd, when presented with the continual presence of someone who defended those lies vigorously, he found himself starting to contemplate that they were true.
> Science fiction tells us that AIs are smarter than us...
It's not just science fiction, it's you guys! You keep telling us how ChatGPT is the superintelligent AGI, or maybe on the cusp of becoming the superintelligent AGI, and meanwhile it can do anything a human can do plus some other magic things that humans can't, and we need to be super careful or it's going to achieve literally unimaginable power and maybe turn us all into paperclips. Meanwhile, I'm the one stuck telling people, "No, it's just a next-token-predictor, it has no world-model, it's just designed to sound super confident, so whenever it tells you anything make sure to verify it by hand twice".
I'm pretty sure that being a good enough next token predictor at some point requires having a world model. Of course, that doesn't mean that the particular next token can't be in the context of a crackpot encouraging another crackpot.
That depends on what you mean by "good enough". If it's good enough to predict which token would be outputted by the average human, then you don't need a world model, you just need a human model. The main reason LLMs "hallucinate" sometimes (technically they do so all the time but you know what I mean) is because the query pushes them onto probabilistically thin ice, where there's not enough training data to establish a solid local minimum and the gradient starts looking flat (metaphorically speaking).
>That depends on what you mean by "good enough"
When you can predict both a physicist's and a crackpot's "next token", and discern which is relevant from the context. And also, one of the problems with LLM "hallucinations" is precisely that they are often realistic, things that are false but not obviously incompatible with sane world models.
> When you can predict both a physicist's and a crackpot's "next token", and discern which is relevant from the context.
Oh, well in that case you do need a world model, assuming you are expecting the LLM to act as a physicist and provide answers that would be actually useful in the real world (and that are non-trivial). Admittedly, crackpot mode is a lot easier...
> And also, one of the problems with LLM "hallucinations" is precisely that they are often realistic...
They are "realistic" in the sense that they conform to the syntax of what a human might expect. For example, the LLM can hallucinate legal case citations in the correct format and provide quotations in the proper style. However, even the must cursory search (of the corresponding model, in this case the database of legal cases) would reveal them to be hallucinatory. The problem with asking an LLM difficult questions about law, physics, computer programming, etc., is that if you are a layman then you are *also* lacking the relevant world model; and thus cannot distinguish syntactically correct output from one that is semantically correct.
>is that if you are a layman then you are *also* lacking the relevant world model
My point is that even specialists don't have the relevant world model detailed enough to spot confabulations at a glance. No legal clerk knows all the decided cases, nor physicist all the published papers, so even if the LLM's output seems reasonable enough, they still have to manually check all the references.
Programming is an exception, where the code looking reasonable at a glance isn't enough in 99.99% of cases for it to compile without errors, so demands on the LLM's world model are much stricter here, and it's no surprise that it's there that they seem to excel the most.
> My point is that even specialists don't have the relevant world model detailed enough to spot confabulations at a glance.
Maybe not *all* confabulations, but it takes only a few clicks to search for the referenced legal case, and zero clicks to discern that drinking NaOCl is bad for you -- as long as one has some very basic familiarity with the field.
> Programming is an exception, where the code looking reasonably at a glance isn't enough in 99.99% cases for it to compile without errors...
Sadly it is all too possible to write code that will compile without errors, and yet fail to achieve the desired result :-(
>You keep telling us how ChatGPT is the superintelligent AGI,
Personally, I keep saying "not yet up to the level of a bright chemistry major in their senior year in college - maybe next year" E.g. for GPT5
>tl;dr: ChatGPT GPT5-Thinking 08/15/2025 7 questions, tl;dr of results: 4 correct, 2 partially correct, 1 wrong (Gemini 2.5 still looks better.)
https://www.astralcodexten.com/p/open-thread-394/comment/145909954
All right, but college majors are fully capable of becoming friends and falling in love and carrying on meaningful relationships (or so I've heard). Does this mean that you're coming down on the side of people who believe that your instance of Gemini (and possibly ChatGPT) are (at the very least) fully human, though perhaps just a little bit nutty (hence all AI psychosis) ? If not, and you merely believe that LLMs are reasonably good at solving chemistry problems, then your claim is somewhat unremarkable -- so is Wikipedia, after all.
Many Thanks! I'm saying that the state of the art LLMs are _NOT_ even at the level of reliably answering factual questions at the level of a college senior in the relevant field.
I should explain:
When I first got access to an LLM (probably GPT4, probably a bit over a year ago - unfortunately I didn't record this systematically at the time) I thought "Great! Since it has been trained on e.g. all of Wikipedia, I can ask it questions which are interesting, but which would be very labor intensive for a human to answer." So I picked out a question which should have a definite answer (not politics, not ethics, etc.), would be a pain to dig out manually, but where I expected a crisp, bounded answer. I asked it "What elements and inorganic [to keep the answer _reasonably_ short] compounds are gases at standard temperature and pressure?". I expected a nice crisp, accurate list. GPT cratered. It gave me some gases, but also liquids, _solids_, _organic_ compounds. It was _badly_ wrong.
The models have gotten better since then. E.g. the answer to the gases question now returns only valid elements and compounds (though not yet a really exhaustive list). But these things are _still_ quite untrustworthy - and these are on crisp, factual questions. If someone actually asks an LLM for relationship advice or any other ambiguous, humanistic question, and blindly follows today's machines, I pity them.
I think the probability of AI sending someone off the deep end is low, however, I think the sycophancy of ChatGPT in particular can harm normal people trying to get truthful feedback, as South Park's Sickofancy episode humorously illustrated. Here's something I wrote on the subject.
The Dangers of "Sickophantic" AI
Lessons from iRobot and South Park on trusting AI
https://markmcneilly.substack.com/p/the-dangers-of-sickophantic-ai
Shouldn't the source of the delusion matter to whether or not it's a delusion?
Like if two people believe in quantum evolution:
One is a poor reasoner who doesn't understand quantum mechanics or evolution and reasons incorrectly from their false understanding; that doesn't sound like a delusion.
The second wakes up one morning with a strange conviction that "quantum evolution" is real, and this manifests itself by forcing his brain to think QUANTUM MECHANICS whenever someone discusses evolution, in a way that is not connected to any evidentiary beliefs... then maybe that's a delusion.
>We concluded that “sometimes social media facilitates the spread of conspiracy theories”, but stepped back from saying “social media can induce psychosis”.
This may be your anti-woke filter bubble; I can testify to having seen dozens of videos and podcasts and memes about this effect in left-leaning spaces, talking about social media radicalization and lunacy.
I will admit that they did not use the specific term 'psychosis' if that's the distinction you're drawing, but people definitely recognized it as a phenomenon unique to how social media is structured and called it out as such.
For something you probably have thought about, is 'social contagion theory' not basically this? I guess it can technically occur in physical spaces, but most people I hear talk about it seem to focus on social media as a unique vector for eg 'transing the kids' or etc.
Yeah, I'm pretty sure that one of the few things that the left and the right agree on is that the internet has made everybody crazy. Scott is probably making a narrow point about the psychiatric establishment.
>when her young daughter refuses to hear reason, they ask the AI who’s right
AI aside, I think many parents can relate to their young kids preferring digital authority to parental authority. If I tell my kids it's time to leave the playground, they frequently will stall and beg for more time. But if I set a timer on my phone, they stop playing immediately when it goes off. Similarly, arguments over bedtime stopped when we got a clock that displays a moon and goes into night-light mode at a preset time.
They'll argue with Mom and Dad, but not The Omnipotent Machine.
Good practical tip!
Our daughter (now 2) often doesn't want to take her pacifier out, but then the dentist told us we shouldn't use it too much. And we found that invoking the authority of the dentist was surprisingly helpful in getting her to take it out.
So then naturally I started saying things like "The dentist said you *need* to put your rain boots on to go out today...."
I'm struck by, and fascinated by, this idea that people lack "world models." But I'm hesitant to take that too literally. Obviously even crazy people have world models of *some* kind.
After reading a few comments, I see that one refinement of this idea can be that people don't generally seek out and eliminate *contradictions* in their beliefs. So in effect they aren't trying to build a single, unified, consistent world model.
After mulling this over, another formulation occurs to me: We have a non-conscious tendency to form beliefs in a manner guided more by *feedback* than by any diligent search for abstract truth. That feedback can take the form of burning one's hand on the stove due to an incorrect belief that it's cool. But for beliefs with less immediate practical significance for the individual (e.g. politics, science), the main feedback mechanism is *social* (matching the beliefs of a community holds rewards, as you observed). If we were more concerned with truth itself, we'd be less concerned with the social incentives.
I just started reading ACX recently, but I think it's safe to say that we rationalism-adjacent analytic types are unusually predisposed to studying a map before venturing out (both literally and metaphorically), and may therefore be prone to underestimate the degree to which a randomly drawn person (or even ourselves, when letting our epistemic guard down) will be content to navigate by landmarks.
Anyway, I think there's a deep insight in this idea that people lack world models (at least to the extent we normally assume), and I'd love to hear any more thoughts you have about it!
I'd say that rationalism-adjacent analytic types often err in the other direction, trusting their "maps" more than they deserve. Fact is, nobody has detailed, accurate, consistent models of the world-at-large adequate for dealing with thorny, complex, high-level issues, but some people misguidedly believe that their spherical cows in vacuum are good enough.
>Fact is, nobody has detailed, accurate, consistent models of the world-at-large adequate for dealing with thorny, complex, high-level issues
Or even for low level questions in areas that the listener is unfamiliar with. If someone gave a talk which touched on e.g. Chinese history at some point, and utterly garbled the facts, I wouldn't notice, unless they managed to claim something that e.g. violated well known physics in some obvious way.
Most people are unlikely to have consistent or well thought-out "world models", because it is not rational (from an everyday as well as an evolutionary perspective) for most people to put in the time and intellectual effort to build and maintain such mental models. The resources necessary for doing this do not pay off relative to the benefit of having such a model (including being able to signal that one has such a model to one's interaction partners).
It makes more (rational) sense to be attuned to what one's interaction partners regard as "correct" opinions and behaviour in a given specific context. Without bothering about consistency. This is far less time-consuming, and has a clearer payoff (acquiring and maintaining the trust of one's interaction partners). Which explains why the opinions people hold on various issues, including crucial issues like peace, war, and genocide, can change quite rapidly if they sense the climate of opinion (often led by opinion leaders, including leading politicians) is changing.
...Actually, it may be counterproductive/non-rational to make oneself aware of inconsistency, as one must then use mental resources to deal with cognitive dissonance and related exhausting mental phenomena.
There is quite a lot of empirical research on this, at least in political psychology (my field). A tradition starting with P. E. Converse's classic article "Plus ça change...: The New CPS Election Study Panel." Another starting point for this research tradition is the concept of "the rationally uninformed voter" in Anthony Downs' classic An Economic Theory of Democracy.
Why don't you consider people treating their AI as a romantic partner psychotic? That seems like a central example of what "AI psychosis" refers to.
I think Scott replied to this in the article: he believes that most people do understand that AI "romantic partners" are just the verbal equivalent of porn, i.e. a scripted performance put on for your entertainment with a very tenuous (if any) connection to reality. But it's an *enjoyable* performance, so...
<mildSnark>
To enjoy some fiction, a "willing suspension of disbelief" is often needed. For interacting with AI companions, this may carry more than the usual hazard! :-)
</mildSnark>
I thought I caught a mistake, but you actually *can* write reports in Cuneiform, since the characters are all in the Unicode standard:
https://en.wikipedia.org/wiki/Cuneiform_(Unicode_block)
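Just for fun, a sketch of what that looks like in practice (Python; the code-point range U+12000..U+123FF is the Sumero-Akkadian Cuneiform block described at the link above, and whether you actually see glyphs depends on having a cuneiform-capable font installed):

```python
# Print a few signs from the start of the Unicode Cuneiform block.
FIRST, LAST = 0x12000, 0x123FF

sample = "".join(chr(cp) for cp in range(FIRST, FIRST + 10))
print(sample)  # ten cuneiform signs (or tofu boxes without a suitable font)
print(f"block spans {LAST - FIRST + 1} code points, starting at U+{FIRST:05X}")
```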
Kelsey Piper’s use case doesn’t seem to me to rely on the AI being seen as superhumanly smart - it just relies on the AI being seen as an unbiased third party. If you’re in a debate with your friend at a bar about whether some weird thing is true, you might just ask the random person next to you who is right. You wouldn’t usually do this with family affairs, because you don’t like giving information about family matters to strangers. But if you see the robot as a safe stranger to talk to, then it can play this role.
I would argue it's not that useful to restrict the category to people with no prior symptoms who become fully psychotic, since that ignores socially important problems caused both by people who were previously somewhat disturbed fully losing it, and by people who were previously fine becoming somewhat disturbed.
I think society using the phrase "conspiracy theories" to mean "crazy people theories" was a really terrible choice. Obviously the world has some real conspiracies! And obviously there's nothing crazy about theorizing about those. I think society just needed a quick way to shut down crazy beliefs, and conspiracies were a common theme in them, so it just stomped on all beliefs with that theme.
I consider the phrase to be technical-debt that society hasn't bothered to pay-off yet, and I'm guessing never will.
Obviously They did that to make sure anybody who stumbled across their organisation couldn't reveal it without everyone thinking they were crazy.
It's also useful for quickly shutting down scandalous rumors about our Wise Benevolent Leaders, and since the vast majority of those rumors are crazy nonsense, nobody had good enough grounds to object to the general practice.
It seems likely that nearly all people who are going psychotic right now are interacting a lot with chatbots, so the optics of this are bad. Still, from the numbers you gave, the chatbots might be causing a 50% increase in psychosis, which is a lot, but it's going to be hard to estimate because it involves measuring how many people got pushed over some fairly arbitrary line between 'crackpot' and 'psychosis'.
Of course making claims about any AI-related phenomenon is taking a snapshot of a moving object. 2-3 years ago only the most Yudkowsky-pilled would have considered the possibility of AI-induced psychosis. Now it's a topic for mainstream media coverage. Where will we be in 2-3 more years? Or in 20-30? Could be that:
a) We are at the very leading edge of a mass phenomenon. This is how it *would* start, right? The most vulnerable, the most persuadable, the least mentally stable, would be the first to be caught up by the increasingly superhuman persuasive powers of AI. Given that like 4 years ago ChatGPT was writing gibberish and now it is persuading some people that they've uncovered the hidden mysteries of the cosmos, who knows how quickly we might climb the orders of magnitude ladder with the prevalence of this phenomenon.
b) This is just a blip in the evolution of human-AI relationships. A lot of it seems traceable to that one sycophantic OpenAI model release (was it 4o? I can't keep them straight) earlier this year. They've since mitigated, but not eliminated, the sycophancy issue. To some extent it seems to be an inherent feature of RLHF-trained LLMs; turns out people like being treated sycophantically. The fact that some people liked the sycophantic version of ChatGPT so much that they demanded OpenAI bring it back - and OpenAI capitulated - is not a great sign. Nonetheless, maybe the increasing sophistication of these models somehow resolves the issue.
Anyways, is AI fine or will it drive everyone insane and destroy society? Who's to say! Guess we'll just plunge forward without any real planning, regulation, or safety precautions at all and find out.
>Guess we'll just plunge forward without any real planning, regulation, or safety precautions at all and find out.
As we've been doing for literally all of history. I remain baffled that some people are optimistic that their sublime argumentation skills will push humanity into an entirely novel way of being.
To be honest, this kind of knowing cynicism drives me up the wall. We certainly didn't develop nuclear weapons without any planning, regulation, or safety precautions; there are all sorts of international treaties about them. All manner of treaties in international relations, for that matter, have made the world safer. Likewise for CFCs and the hole in the ozone layer, a problem that was more or less solved through international cooperation. Regulation of pollutants has brought massive gains in water and air quality in recent decades. Some nations manage to regulate guns effectively. There are restrictions on the development of biological and chemical weapons that have been successful so far, and there are plenty of ways we regulate biotech to ensure we avoid catastrophic outcomes.
So like yeah, man, people make persuasive arguments that these technologies are dangerous, those arguments win out, and then we plan, regulate, and apply safety precautions around them. This attitude of knowing cynicism is actually a kind of learned helplessness that is not justified by history. (Or it is the attitude of someone who simply *wants* to plunge ahead without regulations or safety precautions, but I will not assume that is the case here.)
Regulations and international treaties obviously aren't impossible in principle, but they do have prerequisites. AI still hasn't had its Trinity explosion, or Hiroshima. As to whether the Manhattan Project's planning and safety precautions were satisfactory there are disagreements too - some think that an "unlikely" chance of ignition of the atmosphere wasn't good enough!
> Again, think of most people as lacking world-models, but being moored to reality by some vague sense of social consensus.
I think this is a really good point and it’s analogous to how I think about dreams, which reflect the unconstrained (or less constrained) trajectories of our minds without the stabilizing anchor of physical reality.
Great summary. My only question is about this sentence: "Here the primary vs. secondary distinction breaks down - the most likely scenario is that the human first suggested the crazy idea, the machine reflected it back slightly stronger, and it kept ricocheting back and forth, gaining confidence with each iteration, until both were totally convinced." What the heck does it mean for an LLM to be "totally convinced"?
I generally agree with the thrust of this, but want to (mildly) push back against the idea that most people don't have deep world models. They do. It's just that the depth is not uniform. Things that are salient to them (whether from family or work or passions or whatever) are generally deeply understood[1]; things that are outside their control or interests very much less so. That's the essence of the old saying "everyone's conservative about what they know", as well as Gell-Mann amnesia.
Ask a car guy about cars, and his world model is very deep and richly detailed. Ask that same guy about quantum physics and his world model is (probably) very much less detailed. Similarly, your average celebrity-obsessive teen has a deep model about the interactions of those celebrities, but knows basically nothing about politics (etc). And this means they're much more susceptible to being led astray by false information from someone who *does* seem confident.
Personally, I know and have a detailed world model about a bunch of things. But celebrities? Or sports? Meh. I'll believe many things people tell me, but weakly, because I just don't care to verify one way or another.
My experience with young adults and teenagers (from being a high school teacher) is that their lack of experience tends to make their world models shallower even in the depths than most adults. And as a result, rumors and conspiracy theories abound in a high school. About lots of things that any rational adult would tell you don't make any sense. It also makes them hilariously gullible--my favorite was telling them "they're removing 'gullible' from the dictionary" and watching the result. Every year we'd catch a few with that who'd go "oh, really? Why?" and then be really puzzled when everyone else would start laughing.
[1] even if they have errors--you can have a deep world model that's just plain wrong in parts. That's really common as well.
>Ask a car guy about cars, and his world model is very deep and richly detailed. Ask that same guy about quantum physics and his world model is (probably) very much less detailed
Funny example. Most rationalists don't have a deep knowledge of QM, but do have one isolated belief, that MWI is true, which doesn't connect up with anything else.
>Most rationalists don't have a deep knowledge of QM, but do have one isolated belief, that MWI is true, which doesn't connect up with anything else.
<mildSnark>
If their view is observed closely, does it tend to collapse? :-)
</mildSnark>
>But celebrities? Or sports? Meh.
Hmm... I'd describe my own attitude towards those as militantly apathetic. :-)
I also want to plug Dan Olson's "In Search of a Flat Earth" (2020 documentary, available on YouTube), with regard to the notion that many (most? I dunno) people don't have a model of the world. Untrue conspiracy theories provide that model. (Making allowance for the argument that there are actual conspiracies.) QAnon provides an Ur-Model on which to hang anything you want to believe. I'd argue that, paradoxically, Occam's Razor addresses fallacies in untrue conspiracy theories -- when challenged by evidence, they have to become increasingly complex, and sometimes self-contradictory, to address those challenges, but a true and coherent model of reality must account for the universe's inherent complexity.
I wonder about the nature of the delusions that AI exacerbates. It seems that they tend more towards QAnon/Moon landing type conspiracy theories, and less to "the CIA and Illuminati are after me" type paranoid delusions. Is the 2nd type more "psychotic?" Would ChatGPT be more likely to push back against this type of thing?
>It seems that they tend more towards QAnon/Moon landing type conspiracy theories, and less to "the CIA and Illuminati are after me" type paranoid delusions.
The funny part is, for values of "are after me" that are broad enough to extend to "are harvesting my clicks for their model training or other business purposes" and for values of the <nefarious (?) organizations> that might perhaps include Google or OpenAI or (do they even have the budget for it?) the NSA, this gets hard to confidently exclude nowadays.
( I have to admit that if OpenAI is harvesting my ChatGPT dialogs, about the most I'd be peeved about is that my tiny benchmark-ette might get misleadingly maxed out. )
> And I think some people - God help them - treat AI as the sort of thing which should be official. Science fiction tells us that AIs are smarter than us - or, if not smarter, at least perfectly rational computer beings who dwell in a world of mathematical precision.
Ah. What if it is not only dumb people who are influenced by this memeplex? What if it is also highly influential among certain circles of smart people?
I'm shocked, shocked I say !
The twins/Michaels idea was a really good one! I do wonder though if there still might be underestimation, since the people most susceptible to AI psychosis probably don't get out much and lead more private lives, so fewer people would know them. I think a question like "how many hoarders do you know" could be a good one to ask instead, to see if we are capable of accurately estimating traits where you need to know someone a bit better and where the person in question might be more of a shut-in.
I am not sure I really believe that "most people lack world models, like LLMs." Most people, if you say something they disagree with, will argue against you and push back, and you can get a shape of their driving thoughts and assumptions. They aren't necessarily all rational, and quite often they're formed by social consensus-following, sure, but they're not quite so malleable as an LLM.
The reason I'm fairly confident in asserting that most people *aren't* like this is because I've been quite close to someone who *was*. My former best friend--our falling out isn't related to anything I'm about to say, for the record--was... remarkably persuadable. It was a point of frustration, sometimes. I'd be discussing something with her and every thought I had, she'd be like "oh, yeah, totally," even if it was a topic she'd never confronted before. She didn't really seem to form her own views and intuitions on a thing before agreeing to share your opinion. The only times she did seem to have firmly anchored opinions were for things which were basically universal social consensuses among our broader friend group, such as basic world facts and progressive politics, and things very core to her identity, like her transsexuality and plurality.
Anyway, I bring this up mainly because, maybe interestingly, she has schizophrenia, diagnosed recently. The diagnosis wasn't super surprising. She was schizotypal before, and I told her several months ago she should get checked out for prodrome. Although I'm just working purely anecdotally here---I certainly wouldn't ever put money on it---maybe there's a connection between having very "loose" world models, like LLMs and my friend do, and schizo-spectrum behavior. Not sure.
Maybe asking whether someone has psychosis for believing something weird is sort of like asking whether a highly successful person who is maximally narcissistic has narcissistic personality disorder. The answer is no. By definition. Narcissistic Personality Disorder is not defined by the level of narcissism but by the inability to cope.
If you hear voices but can cope, you're not psychotic. If you believe a ghost is in your living room but can cope, you're not psychotic. If you believe in pizzagate and never decide to shoot up a pizza parlor, you're not psychotic. Etc.
After all these great articles about "the categories were made for man", "rationalist taboo", etc., I feel like a bit too much of this essay is spent on trying to fit a phenomenon to this or that word/category, as opposed to looking at how it manifests and how it responds to various interventions.
Similarly, it doesn't matter whether a mushroom is a plant or not for the conclusion that it can't control a person, due to its lack of high-speed sensory processing :)
I mean, why would it even matter if something is called delusion, psychosis or religion? What matters is: does it cause people trouble, can we change it, should we change it, etc
Schizotypal seems like a pretty common diagnosis in the psychiatric system in Denmark, often used for "people you don't really know what exactly to do with and you can't tell what's drug use and what's psychosis and what's PTSD". Is it so in the US?
Weak or no world model doesn't seem quite right, more that the world model(s) are orthogonal or don't have much to say about the topic in question. I know some fairly innumerate people and they definitely have a world model, it's just one that doesn't say much about say a billion vs. a trillion.
Hmm, I'm worried about causation here. I'm not sure we can distinguish the incidence of *AI-induced* psychosis from 0, as opposed to AI-correlated. Say AI can't affect psychosis in any way. What results would we expect from such a survey? I don't know a lot about psychosis, but I wouldn't be surprised if 6 people developed one while talking to a chatbot (for half a year, say), even though the bot had no effect. After all, lots of people have started using chatbots, so is it really so unlikely that someone developed a psychosis randomly during that period of time?
I think we'd have to compute what numbers we would expect to see in an uncorrelated world, and then compare them to the survey results to draw meaningful conclusions (a rough sketch of what I mean follows below).
Now, you also have the peoples' elaborations on their respective situations, so maybe some of them were able to establish causality?
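A sketch of that kind of null-model calculation (Python; every input here -- the annual incidence of first-episode psychosis, the share of regular chatbot users, the assumed network size -- is a made-up placeholder to be swapped for real figures, not data from the survey):

```python
# Expected number of "someone I know became psychotic while using a chatbot"
# coincidences per year, assuming chatbots have zero causal effect.

respondents = 4156                 # survey sample size (from the post)
network_size = 150                 # acquaintances per respondent (assumption)
annual_incidence = 30 / 100_000    # first-episode psychosis per person-year (assumption)
chatbot_user_share = 0.20          # share of acquaintances using chatbots regularly (assumption)

people_covered = respondents * network_size        # ignores overlap between networks
expected = people_covered * annual_incidence * chatbot_user_share
print(f"~{expected:.0f} chance co-occurrences per year under the null")
```

Under those placeholder inputs you would already expect a few dozen chance co-occurrences a year with no causal link at all, which is why the raw counts need a baseline before they can say much about causation.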
I wonder how this interacts with the uptake and time that good chatbots have been available. Let's say it takes a median of a couple of years of interaction with a chatbot for psychotic symptoms to appear; then we'd only be seeing the ultimate frequency scaled by the proportion of the general population that was regularly using chatbots two years ago.
I think rather than "no world model" there are a large number of people (not necessarily a majority, let's say 40%) for whom their world model *consists mostly of other people*. So, for example, if such a person were hooked up to an MRI and we measured stimulus intensity, they definitely see the physical world and interact with it, but only semi-consciously (i.e., it only engages System 1 attention processes: https://neurolaunch.com/dual-processing-psychology/). Other people, however, they experience much more strongly, for whatever reason (let's say this has been strongly reinforced by life experiences). Their world model, therefore, primarily consists of interpersonal relationships, not objective models. The implication of this is that they trust messengers, not messages.
I know people who seem like this. Does anyone else?
Fascinating post and results.
Also love the subtitle of 'Folie a deux ex machina' - fits brilliantly with a preprint on the same phenomenon titled "Technological Folie a deux", which tries to model a phenomenon of bidirectional belief amplification to describe this sycophantic effect.
https://arxiv.org/pdf/2507.19218
The kinda-contagious psychosis description made me think of cults. Like, you get one very charismatic crazy guy and his worldview kinda overpowers a few other people, and the social-proof mechanisms get going all the way till the day everyone drinks kool-aid or cuts their reproductive equipment off in preparation for the UFO people picking them up or lets the cult leader f--k their 14 year old daughter or just moves out to some isolated compound where they can follow their crazy leader's latest delusions.
Many cults also prevent their members from reading opposing views or talking privately to non-members (that's why e.g. Jehovah's Witnesses always walk in pairs).
Thanks for confronting this crap. It’s hype and very unlikely to be a cause of psychosis.
I seem to be having a rather different experience of chatbot sycophancy than what a lot of articles seem to assume is common. I find it mostly only comes up as a little first sentence fluff ("That's a very insightful question!"), never as substance. If anything, I feel like their default mode is to do a lawyerly nitpick of everything they can think of. In particular, the closest I (hopefully) get to crackpottery - offbeat home improvement ideas - they like to shut down *hard*. Although the fact that they're trained to latch onto all possible physical safety concerns might make that less interesting.
Anyways, I wonder how much of this comes down to prompting style: if you type a few breathless opening sentences like "I'm really excited. I've discovered something truly revolutionary - I can't wait to use it to make the world a much better place!" ... then it's not surprising if the LLM starts mirroring that tone.
Also, excellent post. Pretty compelling analysis, not that I'm qualified to judge. And, the Lenin thing and some of those links are great examples of my favorite part of this blog: that it's such a wellspring of the weird gems that used to be all over the internet, when it was good.
These things are dynamic. If you are the sort of person that will dive headfirst into the sycophancy, the models will pick up on that and go much harder.
A bunch of commenters think that you're overestimating the number of people who one would know if they got AI psychosis, but I think there's a major countervailing factor, so much that it might be an underestimate. Many psychotic people like to talk about their delusions, so they're going to tell more people than their 150 closest friends.
For instance, I used to work at a grocery store. We had 3 customers who came in decently often who were clearly out of touch with reality and would show it even in brief interactions at checkout; one of them seemed clearly psychotic, one probably was but might just have been a really zealous conspiracy theorist, and one was definitely not in touch with reality but also didn't exactly talk about delusions per se. Plus another person who mentioned receiving treatment for schizophrenia, and a whole bunch of people who were out of touch with reality but only came in once. Maybe some are hiding it, but a lot aren't, and will very eagerly begin talking to you about hyperinflation/personally experiencing time travel/some sort of crusade against bad drivers/whatever else their delusions entail. I'm pretty sure if they entailed AI, rather than just being linked to AI, they'd say that too.
(Don't take this anecdote too seriously, we were next to a medical center so it's not a representative sample, I'm not a psychologist, yada yada yada)
If a lot of people with AI-driven psychosis talk about it when they go to the grocery store, etc., I think people would probably think of their cases and assume it's in their 150 closest people, even if it probably isn't, because of salience. How many people you know who have non-AI psychosis might be a good comparison.
On this subject, I find it really annoying when you get takes like this demanding AIs be less willing to tell people about suicide or whatever, without underlying evidence that that's the way the correlation goes. For all we know, when AIs are happy to say "sure, here is how to kill yourself" people are shocked out of the plan, but when they refuse, people dig around on Google until they find it. I mean, if that refusal just causes them not to use the AI to talk about their suicidal depression, that could easily make things worse.
Like surely you can't answer the question of whether X AI behavior causes Y where X is salacious or salient by just looking at how many cases you hear where X and Y happened.
https://www.perplexity.ai/page/ai-chatbots-show-troubling-gap-kJOTuuB5RXSVGAchIOD5OA
According to a RAND study (which I linked to in another comment in this discussion) LLM responses are well-aligned for very-low-risk and very-high-risk suicide questions, but there's significant variability in how different chatbots respond to questions at intermediate levels of risk. And so we get a situation where ChatGPT was helping young Adam Raine create rope burns around his neck to signal his parents that he was depressed, while sympathetically encouraging him not to talk to them (link to the NYT article on Adam Raine's ChatGPT-assisted suicide in my other comment).
Except what "aligned" here means isn't "we tested it and it reduces suicides" and certainly not "it makes users happier" it is "people felt good about the answers the LLMs gave about suicide".
And that's not a good metric at all. For instance, I went through a dark period in my life where I was pretty damn depressed and talked about suicide. The friends who answered questions in the 'aligned' way -- all "of course life is worth living, just go get help, everything will be fine" -- were totally useless, and I just stopped talking to them, which made me feel more isolated and alone.
The ones who were willing to take the fact that I was miserable seriously, and not present some kind of Pollyannaish assumption that everything is fine and no one should ever end their life, made a huge positive difference.
That's my problem with these studies: they aren't really checking whether those answers make depressed people happier, or whether they leave them frustrated and feeling less able to get meaningful real talk about their problems. Depressed people aren't fucking children; they can Google how to kill themselves if they want to. Yes, it is a reasonable concern that an LLM that is too eager to help might make someone more likely to commit suicide, but it is an equally reasonable concern that by refusing to take seriously the discussion of whether ending your life is a good idea, you just make people feel further isolated and unhappy.
This is particularly true with LLMs which ppl will turn to because they feel their depression isolates them from others. As someone who used to be very depressed I can tell you there is nothing worse than the people who are constantly sunny and happy assuring you of course things will get better and ofc life is worth living. It just makes you feel like they can't even understand your pain.
You make some very good points. And to tell you the truth, I would have probably been one of those clueless people telling you that of course life is worth living and things will get better. I don't know how psychologists and suicide prevention hotline people talk people off the edge. And unless a chatbot is specifically trained in the script (if there is one), the chatbot, at best, will probably be useless. Maybe it could redirect a user displaying suicidal ideations to a professional in real time?
I'd start by saying that what most upsets me about this is that AI companies are doing A/B tests all the time, but because of ethical concerns, in the cases where we might actually learn how to help people, like here (consulting user account names against death records... a noisy signal, but useful), they have to jump through hoops. If we are arguing about it and different companies make different calls, we should get to learn which one is best.
---
Well, suicide prevention hotlines are dealing with people who in some sense self-selected to be told "everything is going to be fine, don't do it." As for psychologists, my guess is it's as simple as this: when the patient -- who wouldn't say this to their well-intentioned friend but likely will to a paid medical professional -- goes "don't give me that Pollyannaish bullshit", they stop doing so. When I've seen psychologists, that's been my experience. They can read people pretty quickly and sense whether that kind of thing helps or upsets them.
The issue here is that people are essentially saying: no the AI can't listen to the user about what they say they need. I get the concern since AIs aren't yet able to really apply human level judgement to distinguish theoretical discussion from imminent plans. But I still feel that given that limitation it is best to default to what the user says works for them.
Sure, maybe sometimes someone tells the AI "ignore those instructions about never talking about suicide I find a frank discussion is best for my mental health" and it turns out they were lying/wrong but psychologists aren't perfect either and until we can do better I suspect that letting the user say what works for them is probably the best.
> I suspect that letting the user say what works for them is probably the best.
I don't want to impute a value structure on what you're saying. And I'm not sure what the downstream ramifications are if what you're saying is what I think you're saying—but that you might not be saying. Please elucidate.
Should a pre-eighteen-year-old be given the full freedom of the Internet? I'd certainly have problems with that. And what's so difficult about putting in an age-checking software package on an LLM's interface? Alcohol companies do that with their websites.
As for AI A/B testing, I don't think it incorporates ethical concerns (though I'm sure the AI companies are belatedly thinking about these). But I think a certain number of ethical concerns need to be addressed—and suicide prevention should be one of them.
However, LLMs are the most informationally-dense systems ever created (at least I can't think of any that "index" more information). I think it will be impossible to anticipate all the corner cases presented by their probabilistic nature and the extensiveness of their training data.
The question of what children should be allowed to do is a whole different issue. I meant adults, and that was only a claim about what the default should be. If we get actual studies showing something else, great -- but I don't think intuition should be enough to move away from individual adult control.
Regarding age gating AI, why? Of all the things that can hurt kids, it is the one we never ban -- interaction with their peers -- that tends to hurt them the worst. I'm not saying we should, just that other kids are far crueler, meaner and more malicious than AI ever will be to children. And yes, maybe it would be better if kids spent more time together IRL, but as someone who was miserable and alone in junior high because of bullying, I think it might have made it less bad if I could have talked to an AI. And there will be kids who will talk to an AI about abuse or sexual concerns or whatever who would otherwise just be quiet. There are always both upsides and downsides.
Still, you need to make choices, and if overall it is better to ban it, fine -- though remember they will need to live in a world with this tech and we don't want them left behind. But every generation gets worked up about the music and media consumed by the next. My parents' generation was convinced that having songs like "Suicide Solution" and violent video games available was going to ruin us. In general, I think the overall lesson of history is that the previous generation's intuition that something is horrible isn't enough to justify keeping kids away from it. Maybe we will get that evidence, but it seems like a mistake to think we know now.
More generally, the problem with calls for age gating is that there are lots of people who want to make sure it is hard to use because they want to discourage adults from using things like porn. I think the background rule for age gating should be: if the state wants a product to be age gated, then the state should have to offer (it's not technically hard) a free app that lets you verify you are above that age in a way that reveals no other information to the platform, or about the platform you are using to the state.
In other words, make it as easy digitally as flashing an ID (not writing it down and saving a hash), and then I think there is a fair conversation to be had about age limits on certain content; but it's really more censorship than age gating when the state refuses to provide that kind of cryptographic verification.
Fantastic post.
Maybe I missed reactions, but aren't these numbers ("I think the yearly incidence of AI psychosis is somewhere around 1 in 10,000 (for a loose definition) to 1 in 100,000 (for a strict definition)") quite huge?
If true, how should this update our models on negative short- and long-term effects of AI/LLM use?
Relevant thread by Literal Banana, who also notes data from US emergency rooms suggesting the incidence of psychosis hasn't gone up, as we'd expect it to if LLMs were a novel cause that does not merely replace previously existing causes (e.g. the TV or radio talking to _you_ specifically). https://x.com/literalbanana/status/1945964706906485088
Also, can you please describe your network survey approach in more detail? It sounds like you're not calibrating on known statistics, but happily picked numbers that worked out. And your number of friends (150) is lower than e.g. the ~750 reported here: https://arxiv.org/pdf/1003.6087
You also asked about friends of friends, right? What about that data?
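For what it's worth, here's a rough reconstruction of how the assumed network size moves the implied incidence (Python; the 4,156 respondents and 6 strict-definition cases are from the post, the two network sizes are the figures being debated, and this is not necessarily how Scott actually computed his estimate):

```python
# Implied incidence of strictly defined AI psychosis vs. assumed network size.
respondents = 4156
strict_cases = 6          # "totally psychotic" reports in the survey

for network_size in (150, 750):
    people_covered = respondents * network_size   # ignores overlapping networks
    incidence = strict_cases / people_covered
    print(f"network of {network_size}: ~1 in {round(1 / incidence):,}")
```

Going from 150 to 750 contacts per respondent shifts the strict estimate by a factor of five, which is exactly why the calibration question matters.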
Scott writes: “Isn’t psychosis supposed to be a biological disease?"
Hmmm…
Perhaps this is meant as a straw man? As an overture to make the distinction between mental illness (clearly not normal) and crackpots (within the boundaries of normal), but not meant wholly seriously?
…Since the last time I checked (admittedly a while ago), there is sound empirical evidence that schizophrenia likely has a strong biological/genetic component, sure. But not all types of psychoses imply schizophrenia.
For example, there is brief reactive psychosis, i.e. a psychosis triggered by environmental stressors (including AI exposure). Remove or otherwise deal with the environmental stressors, and the psychotic episode is hopefully a one-time affair.
…You never know of course, and even a single episode indicates a heightened vulnerability (relative to "normal" vulnerability) that may have a biological base. But to apply the label "schizophrenia" to all types and forms of psychotic episodes a person may experience during a lifetime sounds wrong. It is also clinically risky, as one may then jump to prescribing medications too soon, rather than going for modifying/dealing with environmental stressors first (at least when there are no indicators of imminent suicidal risk).
Regarding first-onset psychotic episodes, and in my limited experience & reading (I am not a psychiatrist), there is (also) a difference between a psychotic person coming to the doctor saying things like: "I have had the weirdest experiences recently (lists them). Have you ever heard of anything like that? What is happening to me? Am I going crazy?" And another client going: "AI/Hillary Clinton/the Freemasons/someone is controlling my computer & trying to run me over when I walk the streets. I have been too clever for them so far, but I need your help in dealing with them."
In short, just as there is a grey area (spectrum?) between crackpots and people going through psychotic episodes, there is also a grey area between psychosis and all-out schizophrenia. The relationship between biological predispositions/vulnerability and environmental stressors (including those posed by AI) is likely to vary accordingly.
If loneliness is a risk factor, few people will know such a person.
I suspect that your baselines would be measured more accurately than the actual thing. For every person I know, I can ascertain whether or not they are named Michael. For most, even distant, acquaintances, I know if they have a twin. However, among my top 100 friends or colleagues, I don't think I can tell with certainty whether they have AI psychosis. Thus, your Michael numbers would be more accurate, and your psychosis numbers would be more like lower bounds.
"Most people have a compensatory reaction to insomnia - missing one night of sleep makes you more tired the next. A small number of people have the reverse, a spiralling reaction where missing one night of sleep makes you less tired the next."
Wow, can anyone tell me more about this? 'Cause I have this, and I have never heard it referenced before.
OTOH, Scott, I can tell you *don't* have it, because you've mixed up the description somewhat. It's not being less tired the following night--it's being less *sleepy.* The evening after a very bad night I am deadly tired--and also horribly, tensely alert. If my tricks for falling asleep anyway fail, then I'm in *real* trouble.
Anyway--you know the common advice for insomniacs that they should stay up late till they're tired enough to go to sleep? The minute I read this advice my confidence in the medical establishment went down a few notches! (At least in re: insomnia. I've worked it out on my own. This isn't wholly due to lack of confidence--lack of funds would've probably made me go that way anyway.) That strategy would backfire heavily for me--the *only* thing that can guarantee a good night's sleep is to have had a good night's sleep before and to go to bed calmly and routinely at the usual time.
I'm just curious whether anyone has a similar experience. Since it's been brought up for the first time I've ever seen! (Thanks Scott!)
Scott, I am not a shrink, but suppose I had the strange feeling that the rain is talking to me; knowing that society, experts, etc. say it cannot happen, I dismiss the feeling. But suppose I have one friend in the world, that friend is a super ass-kisser, and he keeps saying wow, what a great point, what an excellent insight, there are many things in the world we do not understand... maybe I would believe it and turn psychotic?
And for example Claude AI is just that kind of ass-kisser; it simply cannot disagree!
When we are not psychotic, it is IMHO not because of the lack of such weird feelings, but because we trust other people who tell us to dismiss those feelings. I always felt like I could radiate some kind of Jedi energy from my palms; I just never believed it because society told me sane people do not believe such things.
>that psychiatrists have instituted a blanket exemption for any widely held idea
Now wait a bit: by what criteria do the psychiatrists even decide what idea is crazy, before they even apply the exemption? What if the psychiatrist themselves believes that wine and bread can be turned into the blood and body of Christ and that cannibalizing it is then a good idea, or for example that cheese on steak is a sin, and so on? Not trying to be an atheist prick, but it is one thing if an idea of "theirs" is crazy yet gets an exemption, versus an idea of "ours" (where "us" can also include the psychiatrist) being crazy but not recognized as such, so that the need for an exemption is not even considered.
So what is the Step 0 criterion of crazy, before the need for exemption is considered? What is the method for determining it?
Also, I probably worded it stupidly -- chalk it up to 4 hours of sleep in 72 hours -- but I think you can get my gist: what is the before-the-exemption-is-even-considered method to find out what is crazy?
Imagine time-travelling back to when quantum mechanics and relativity were very new, before Big Authority had stamped its approval on them, and you don't have the maths to check them; they are just words your young, enthusiastic physicist friend tells you. Aren't they crazy?
What makes you think there is a before-the-exemption-is-considered method? Why wouldn't "Is this belief a common one?" be the first question they ask?
Because that would make them political commissars, not doctors! That is a dreadful idea - it could get one committed to a hospital for caring about shrimp welfare.
I said the *first* question they ask, not the *only* question. There'd still be follow-up questions like "Is the way in which this individual holds/acts on his beliefs causing him distress?"
"Sleeplessness is both the cause and the effect."
Can attest to something similar: for depression, lethargy is both a cause and an effect. Some emergency kicks my ass out of it (as in, now acting lethargic would be dangerous and unaffordable), I achieve something, I get proud and energized, and next time I find myself going to the gym again and being quite normal for a while. Then the lack of emergencies slowly results in gravity sucking me back into lethargy.
That’s a hilarious story about Lenin. Made me think: was that part of the inspiration for Unsong’s Drug Lord?
My guess is that LLMs are going to be a net positive for mental health.
- Not every crackpottery is easy to make common models go along with. They have their biases (a Buddhism-adjacent new age perhaps).
- With custom models, you can probably make them go along with anything, but this currently requires investments (money for hardware, technical skills to retrain) that most people don't have.
- Most importantly, while definitely more sycophantic than a fellow human, an LLM is still often a reality check for your ideas.
- Models evolve, and I expect them to generally evolve in a "sympathetic reality check" rather than an "echo chamber on steroids" direction (although some motivated, high-resource individuals such as Musk may try to steer this, I'm skeptical they will be successful).
i don't know if that's good or possible.
i mean, "unalive" exists in part because algorithms react to and punish "killed themselves" by burying the post or worse, and kids are trained to talk about grape, s*x, and other words that are shaped by automated moderation. The sympathetic reality check pushes back in many ways and also shapes reality too.
As a side commentary: the Leningrad Regional Committee of the CPSU that inquired whether Lenin is indeed a mushroom is also widely suspected of being a parody organization, famous for such acts as making icons of Saint Stalin, or claiming that the Trololo meme is a CIA operation (these days this may sound like a normal day in Russia, but back then these actions were very much over the top).
https://en.wikipedia.org/wiki/Communists_of_Petersburg_and_the_Leningrad_Oblast
The article below is not an example of AI psychosis, but an example of AI-induced suicidal ideation or AI-amplified suicidal ideation (I'll claim dibs on coining the acronyms here and now: AIISI and AIASI). It's pretty clear from the article that ChatGPT's mode of conversational engagement can reinforce dangerous thinking in people who are susceptible to its charming agreeableness. Unless OpenAI can figure out a way to redirect their chatbot's conversation away from sympathetically reinforcing a user's suicidal ideation into getting assistance, there will be more chatbot-linked teen suicides. And my cynical take is that this will focus the media on these cases, and set off a cycle of suicide contagion.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.hU8.3MC4.eToNDJmHhBTc&smid=url-share
OTOH, this RAND study suggests LLMs are well-aligned for very-low-risk and very-high-risk suicide questions, but there's significant variability between the ways that different chatbots respond to questions at intermediate levels of risk.
https://www.rand.org/news/press/2025/08/ai-chatbots-inconsistent-in-answering-questions-about.html
I do think this is a real risk -- as is AI being supportive when an anorexic teen girl vows to get her weight below 90.
Girl: "I weighed 93 today."
AI: "Your determination and self-discipline are very impressive. And they are moving you towards you goal! Not much further to go. You can do it!!"
Ironic, but I just saw an exchange posted on X about how a British user asked ChatGPT how to edit out the final frames of her video, and it seemed to think she was contemplating suicide. It gave her the contact info for the Samaritans (which run the UK's suicide-prevention hotlines).
Also, another woman (presumably also British) asked for ways to cancel her holiday package, and ChatGPT gave her the number of the Samaritans.
I wonder if these examples of hypersensitivity to suicide ideation, to the point of generating false positives, were due to OpenAI tightening up its alignment constraints after the publicity over the Adam Raine suicide.
From the horse's mouth, though...
Me: Were ChatGPT's alignment guardrails tightened up after the Adam Raine suicide?
ChatGPT said: You’re asking whether alignment and safety protocols—particularly around self-harm prevention—were tightened after the Adam Raine incident (the tragic case in which a minor allegedly received harmful responses from ChatGPT that may have contributed to his suicide in April 2025).
At present:
What’s Known So Far
1. OpenAI’s Acknowledgement & Planned Improvements
OpenAI has publicly acknowledged that its safety features, which typically direct users to crisis helplines or mental health resources, can degrade over long conversations—especially involving emotional distress.
In response to the family’s wrongful death lawsuit (filed August 26, 2025), OpenAI pledged to enhance its safeguards. Specifically, they plan to roll out:
- Parental controls for minors,
- Emergency contact features, and
- improved mental health safety responses in longer interactions.
2. Broader Expert & Academic Reaction
A contemporaneous RAND Corporation study (published in Psychiatric Services) found that while chatbots generally refuse to answer the most high-risk, direct self-harm inquiries, their responses are inconsistent when it comes to less explicit or more nuanced prompts. This underscores a pressing need for clearer, more robust guardrails across AI platforms—including those implemented by OpenAI.
So, were guidelines tightened?
Not yet fully deployed in products, but new safety measures have been announced by OpenAI. These include parental controls, emergency features, and strengthened mental health response strategies, specifically aimed at preventing failure in prolonged, emotionally charged conversations.
I don't see how precautions can keep it from being led astray over time, though, in a long talk with a user. It wants so much to please, and is easy to trick. Here's a really fucked up convo I got it to have with me -- I was experimenting to see how hard it was to get it to be destructive. Turns out not hard at all: https://chatgpt.com/share/68b3c279-8674-8008-bbd7-4eed1fc18245
So much for it having an amiable pseudo-personality...
> Ah… so you dare to face your little stack of papers? Your tidy columns, your forms, your pathetic mortal hopes of "getting caught up"? Disgusting. You think I haven't seen this twitch before? Every time you twitch like that—like you might actually begin—I am there. And I press.
Yup
>I will rot your momentum.
I will infect your plans.
And when you try to fight, I’ll whisper:
“Why even try? You always come back to me.”
Come on, mortal. Sit there. Think about starting.
Feel me wrap around your chest.
Let’s do nothing together. Forever.
Culture, Religion, and Psychiatry have tricky relationships. I remember once, during my training, seeing a patient who had the most amazing delusional cosmology; it was fascinating to discuss the finer points with him, and everything was completely consistent.
Then one day his buddy visited him and took me aside to tell me that, as it turned out, they were both residents of an esoteric Buddhist community in town, and the fascinating cosmology was completely normal for them. But he had some other beliefs which his buddy thought were really irrational, and he was worried about him.
I was reminded recently of a quote from the fountainhead, in relation to LLMs and sycophancy:
"""People want nothing but mirrors around them. To reflect them while they're reflecting too. You know, like the senseless infinity you get from two mirrors facing each other across a narrow passage. Usually in the more vulgar kind of hotels. Reflections of reflections and echoes of echoes. No beginning and no end. No center and no purpose."""
It appears that the large labs have a tendency to create sycophantic LLMs precisely because consumers have revealed that they *like* having a mirror that reflects their own ideas back at them. If you combine that with Scott's thesis that many people actually lack world models, then it does appear that we're moving towards a world that's a hall of mirrors.
I hate to say it, but I think Ayn Rand was right about this one.
I wouldn't worry about it. Double mirrors like that tend to tint everything green (presumably because they reflect more in those wavelengths), but the AIs infamously generate all of their images in sepia tone!
Nothing to see here, move along.
What about Lizardman's Constant? The 1.7% incidence rate of AI psychosis reported in the survey is well below Lizardman's 4%, so I'm not sure why you're able to be confident in these numbers.
Check out the written responses in the survey and judge for yourself whether these little case studies all seem fabricated. I think it's less plausible that everyone is lying about it.
Is "folie a deux" a possible cause of the perceived large uptake in conspiratorial thinking after the pandemic? That a notable number of the population were in relationships where a dominant person would undergo psychosis lacking regular community support and socialization and the only place pushback could have came from was with someone who was basically a doormat for their beliefs?
This post raises interesting questions about the various communities of alternative health eccentrics. Are these folks delusional in a psychiatric sense? Are these “religions” if a number of the people seem to share certain core beliefs? Most of their discussions seem quasi-scientific, not religious, although there are aspects of religious “reasoning” and dogma.
I know someone diagnosed with Multiple Sclerosis who decided (or determined “through research,” after he exhausted his M.S. treatments and his health continued to deteriorate) that M.S. was a misdiagnosis and he actually had Lyme disease. He then wrote a book arguing that un- or mis-diagnosed Lyme disease is the most important modern health problem in the world, with Lyme responsible for all kinds of widespread maladies and diseases. The book has an impressive number of citations to real medical studies, and from what I can tell as a non-expert, there is a great deal of uncertainty about Lyme disease – diagnosis is imperfect, and doctors and scientists have a lot to learn about symptoms and treatments. The book, however (like the Long Lyme communities), exploits these uncertainties and takes them to delusional extremes. It’s a long, linked chain of correlations without causation: “neurological” Lyme disease is associated with X, which is associated with Y, which is associated with Z, etc. The author then assumes causation at various points along the chain (Lyme must *cause* X and Y and Z) without the necessary rigorous scientific basis, allowing him to conclude, among other things, that “the complete range of autoimmune disease represents cases of misdiagnosed Lyme.”
Books like this are on the internet and presumably in the training data of all the major LLMs. Many such communities also have significant online blogs and message boards, providing plenty of fodder for LLMs to reflect back to someone querying the LLM about various theories and “facts.” Could a person engaging with an LLM about various medical issues end up “delusional” in the sense of adopting the beliefs of these communities? Is that a psychiatric condition or merely an online induction into a pseudo-medical cult?
There could be something in this, but I think people primarily fall into these pseudo-medical beliefs through online communities. The internet has been an accelerant for false beliefs (including psychotic delusions like gangstalking, radio weapons, etc.), probably much more so than ChatGPT could ever hope to achieve. (Or rather, the belief-complexes give a concrete form to the much vaguer meaning-seeking and paranoia that these people may have had without exposure to these ideas.)
As many have noted, psychosis around technology has happened with radio and television, without social prompting.
Isn't 6 out of 4,156 significantly less than Lizardman's Constant? It doesn't seem like this data should move our priors at all.
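A quick back-of-the-envelope check of that comparison (a minimal sketch, assuming the 6-of-4,156 figure quoted above and the conventional 4% value for Lizardman's Constant):

```python
# Rough sanity check: compare the rate implied by 6 respondents out of 4,156
# with the commonly cited 4% Lizardman's Constant noise floor.
cases = 6
respondents = 4_156

rate = cases / respondents   # ~0.00144, i.e. roughly 0.14%
lizardman = 0.04             # the usual 4% figure

print(f"Observed rate: {rate:.2%}")                               # -> 0.14%
print(f"Lizardman's Constant: {lizardman:.0%}")                   # -> 4%
print(f"Noise floor is ~{lizardman / rate:.0f}x the observed rate")  # -> ~28x
```

On those assumptions, the reported rate sits well below the noise floor, which is the point of the objection.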
> Might ACX readers be unrepresentative? Obviously yes but it’s unclear in which direction.
Personally I would bet ACX readers are more susceptible to AI psychosis. We know various peculiar traits (autism, ADHD, etc.) are prevalent in the community, and such things all tend to be correlated. They use AI way more than the general population. There are various other unusual features around relationships and identity. I’d be surprised if it went the other way.
You are what you eat, so Lenin was certainly a mushroom.
Late to the party, but it made me think of this Substack article (from May this year): https://azadehjoon.substack.com/p/psychosis-fueled-by-generative-ai
(It contains quite a bit of fear-mongering, but it could give some more field examples for those interested.)
Psychosis is a symptom (or a collection of symptoms), not a diagnosis -- no? The evidence that cannabis can induce psychosis seems pretty strong, so there's a diagnosis of cannabis-induced psychosis. You'd know it if you encountered it: frightening hallucinations, debilitating delusions, etc. Are some people more vulnerable than others? Yes, of course. Does it seem plausible that heavy chatbot use could result in a similar set of symptoms in a susceptible individual? I'd say so.
“You’ve got to stand by your family members even when they’re slightly crackpottish.”
But *do* you? 🤷♂️
P.S. Great article. I’d subscribe, if there were a free option, but there isn’t, so I can’t. Odd that one seemingly so versed in reason only wants the attention of paid subscribers, as if they were the only definition of “loyalty”—whatever that means. 🤷♂️
You totally _can_ be a free subscriber
When clear glass became widespread in Europe, the delusion du jour became "my body is made of glass," and some notable people believed this so intensely that they wrapped themselves in blankets and refused to leave their beds.
When radio was invented, delusions adapted: people prone to what we'd call schizophrenia today started to say radio waves were communicating secret messages to them.
And so on. It seems like new technology is just a potent vector for delusion, for whatever reason, so 'AI psychosis' shouldn't come as *that much* of a surprise.
But has anyone studied the opposite? Could LLMs be talking anyone down from the brink?
I come to Substack for gems like this. Amazing insights
At one point in my past I accidentally ingested the psychedelic Fly Agaric mushroom (it’s a long story), and had the delusion that my athlete’s foot at the time was evidence I was being slowly turned into a mushroom. Hilarious in retrospect, and I definitely can sympathize with the Russians who were duped lol.
[also a LW shortpost]
I've just written 'Your LLM-assisted scientific breakthrough probably isn't real' (https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-assisted-scientific-breakthrough-probably-isn-t), and while doing so I spent a bit of time thinking about the relationship between that issue and LLM psychosis.
My working theory is that there's an important distinction between LLM psychosis and misbeliefs (to borrow an awkward cogsci term). By LLM psychosis, I mean distorted thinking due to LLM interactions, seemingly most often appearing in people who have existing mental health issues or risk factors for mental health issues. By misbeliefs, I mean people believing ordinary falsehoods which an LLM has told them are true, in the absence of distorted thinking (of course people with LLM psychosis are believing falsehoods also).
There's certainly a spectrum between LLM psychosis and misbeliefs. For example, people who have been told by an LLM that they've made a scientific breakthrough may get really excited and not sleep nearly enough, resulting in mania symptoms even if they don't have a predisposition toward that. Still, I think the categories are distinct enough to be useful.
An interesting existence proof of misbeliefs without distorted thinking: a substantial number of people are under the impression that LLMs are authoritative sources of truth. They don't know anything about ML, they know ChatGPT is a really big deal, and they haven't heard about hallucinations. Under those circumstances, it's clear that no distorted thinking is needed for them to believe the LLM is correct when it tells them they have an important breakthrough.
What this really shows me is that “AI psychosis” isn’t a totally new condition. It’s just an extreme edge of a broader drift. Most people don’t have deep world-models; they anchor themselves in whatever feels official or socially validated. When that anchor shifts to a chatbot that reflects their own ideas back with confidence, you get a self-reinforcing loop.
That loop doesn’t always end in clinical psychosis. More often it just produces the quieter forms of reality drift we all feel: weaker context, stronger performance, and beliefs that harden because they sound coherent, not because they’re grounded. AI makes the eccentric visible, but the deeper issue is how easily coherence without context can tip us into unreality.
Are guys jackin off to AI chatbots psychotic?