883 Comments

Motivated reasoning, subconscious. A 10% (or whatever) chance of AI Doom is such a scary idea, scarier even than nuclear war etc., that I need to counter that fearful emotion. Hence coffee and halibut.

Expand full comment

I think the entire argument is against expertise, especially regarding "new technologies". It's "Experts tell you something, but look, here experts were wrong about something similar, so just distrust experts in general".

And people remember "experts were humiliatingly wrong" stories more vividly than "experts warned us of something and managed to prevent it" stories, and more vividly still than "experts didn't warn us and some disaster happened" stories; those are the least memorable of all because there's no heroic protagonist, only the lack of one.

Expand full comment
Apr 25·edited Apr 25

I suggest that these are people experiencing epistemic learned helplessness.

cf. https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/

> "And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals."

> "And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky."

> "And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting."

Presented with a flood of logical steps by someone capable of arguing circles around me, adding up to a conclusion that is deeply counterintuitive; and presented perhaps also with a similarly daunting flood of logical steps by someone else also capable of arguing circles around me, adding up just as convincingly to the opposite conclusion... what is one to do?

One can sway back and forth, like a reed in the wind, according to whichever super-convincing argument one read last.

Or one can throw one's hands up in the air and say: "There are hundreds of incredibly specific assumptions combined with many logical steps here. They all look pretty convincing to me; and actually, so do the other guy's. So clearly someone made a mistake somewhere, possibly more than one person and more than one mistake; certainly that's more likely than both the mutually contradictory yet convincing arguments being right simultaneously; but I just can't see it. What now? What other information do I have? Well, approximately ten gazillion people have predicted world-ending events in the past, and yet here we all are, existing. So I conclude it's much more likely that the weirder conclusion is the one that's wrong."

Condense that to a sentence or two, couple it with an average rather than expert ability to articulate, and you arrive at coffeepocalypse.

From the above essay:

> "Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications."

AIpocalypse is just another idea to add to that list.

Expand full comment

I think the steelman for coffee or "failed predictions of disaster" arguments is based on Lindy Effect sorts of things (well, humanity's been around a quarter million years; plagues, volcanoes, hurricanes, tidal waves, wars and genocides have all happened and we're still here, sooo....), and maybe, for a few better-read arguers with better epistemics, pointing to nuclear-disaster-style arguments about how hard it would REALLY be to actually kill every human.

I personally don't buy either of these arguments (for one thing, we've been through at least one and probably two significant "close to extinction" genetic bottlenecks during our time as humans https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2842629/), but they're a better class of argument than "we don't worry about halibuts" or whatever.

Expand full comment

I think the steelman of the argument they are making is: for almost every new technology, there have been people making dire warnings and apocalyptic predictions about it (e.g. trains would kill everyone over a certain speed, video games would corrupt our youth, nuclear power, etc.).

These arguments are easy to make because new technologies are scary to many people and it is very hard to definitively show something that is new and unproven is safe. Nearly all of these arguments have turned out to be mostly or completely wrong.

Therefore when someone says AI could kill everyone, your prior should be that it almost certainly won’t.

Whilst I can broadly accept that, it isn't an argument for simply ignoring evidence about AI; it just says that before you examine the subject, that should be your starting point.

You could also make the bolder claim that the burden of proof is on those claiming AI will cause calamity, given how often those sorts of arguments have been made and how beneficial new technologies have been on average. But I’m not sure I would go that far.

Expand full comment

I think the argument isn't against disaster specifically. It's against our ability to predict anything accurately. The argument isn't "The smart guys predicted a disaster but no disaster happened, so no disaster will happen this time". It's "The smart guys predicted something and were wrong, so smart guys aren't as good at prediction as they think".

Bringing up examples where no disaster was predicted but a disaster happened anyway doesn't refute them. It supports them. Bringing up examples of accurate predictions refutes them. However, in practice this is hard to refute because predictions are rarely *completely* accurate. Even getting things directionally right is very hard, let alone getting the timing, magnitude and exact character of a big future change correct. Also, an isolated correct prediction can be written off as luck, or the "stopped clock" effect. You need a long history of correct prediction to show that prediction works.

I think the easiest way to gauge which side of this you fall on is the extent to which you believe in the Efficient Markets Hypothesis.

Expand full comment

Most here will disagree, but "Number of times the world has been destroyed by new technology" seems like a reasonable reference class for AI risk.

Expand full comment

I think part of the problem is exactly defining what risks AI poses. There is a difference between a hand-wavy nanobot apocalypse and something like "It could disrupt multiple industries completely and immediately, which will lead to instability in the economy".

But big picture, absolutely none of this matters, because we can clutch our pearls here in the United States, or even pass regulation, trying to slow down technical progress (good luck), but the reality is progress will just march forward in Russia and China and India.

Seems to me the only safety we can get is to try to make the US the leader in this technology, instead of de facto ceding leadership to China.

Expand full comment

I'm puzzled by your puzzlement. The (indeed silly) "coffeepocalypse" argument just highlights how new things instinctively and generically cause panic, even if unjustified—and a fortiori when there's actually some inherent danger to them. It says: you should significantly lower your prior regarding the risk associated with new developments.

Expand full comment

I think it would be best rephrased as "There are people who have a reflexive mindset against new ideas. It's a good idea to get rid of such a mindset". Global cooling and overpopulation may not be technologies, but they are ideas, just as AI doom, AI paradise, and pretty much every other AI-related prediction and concept are.

One problem here is that it's probably a good idea that we do have at least *some* people who are true curmudgeons, willing to challenge anything and everything, simply because it's good for everything to go through a challenge. It would be great in general to acknowledge that society benefits from a vast number of different mindsets and ways of reacting to things.

Expand full comment

I appreciate the steelmanning, but isn't this really an argument aimed at fellow anti-AI-riskers to help them feel superior? I just can't take this charitably at all.

Expand full comment

If humans insist on dopey analogies: if you play Russian roulette often enough, the probability that you will kill yourself approaches 100%.
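A quick sketch of the arithmetic behind that analogy, assuming one bullet in a six-chamber revolver that is re-spun before every pull:

$$P(\text{still alive after } n \text{ pulls}) = \left(\tfrac{5}{6}\right)^n \approx 16\% \text{ at } n=10,\ \approx 1\% \text{ at } n=25,$$

so the chance of eventually killing yourself does approach 100% as $n$ grows.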

Expand full comment

> Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.

Right, that's an obvious strawman. The real argument goes more like this:

"Here's an example of a time someone was worried about something due to employing a particular pattern of reasoning. The something didn't happen, and the manner in which it didn't happen helped to expose the flaws in the pattern of reasoning. You are worried about AI due to employing the same pattern of reasoning, riddled with the same flaws. Therefore, you are likely to be just as wrong as that other guy."

Expand full comment

A maximally hostile and bad explanation of the prevalence of the halibut method: for any question that seems hard, most people won't even try to engage with it at an object level. What they do instead is look for any similar disagreements they remember, note which side won them, and try to pick the winning side to maintain social status. They get helpfully halibuted onto the 'right' side of the argument by the nudge in the problem presentation, and from then on they are invested in getting their side of the argument to win, spreading halibut arguments further. Note that at no point did they reflect on what the actual answer would be - that would be a waste of resources.

Expand full comment

Contra the coffeepocalypse argument: the coffeepocalypse is real and true. It's a drug that has infected the entire world, and even people who don't drink coffee are mostly addicted through other sources of caffeine (tea, coke, energy drinks, etc.). We can't fathom living without it; we've got dispensers in every company, college, and rest room. We feel like shit waking up until we get our first dose, and getting a hit together is one of our main ways of socializing. Our world runs on caffeine.

And the case against AI is the same: we're facing a potential future where the entire human civilization is dependent on it. Of course the same case can be made for cheap energy, for cars, or for something else (insert that picture of a 26-lane highway for impact). We're living on layers upon layers of technology and infrastructure we can't live without. Can the pile eventually crumble under its own weight?

I admit, it's not exactly (at all?) the original coffeepocalypse argument. But I stand by it.

Expand full comment

For me the best version of coffeepocalypse is as a demonstration that no matter how obviously harmless a new technology is, people will be extremely worried about it and say it will destroy everything. This is such a consistent rule that it even applies to *coffee* (but NB also all other new things) so you shouldn't take any information from people being extremely worried about a particular new technology. I don't think this is actually true or valid, but it's the most optimistic interpretation I can think of.

Expand full comment
Apr 25·edited Apr 25

It's a social argument! It's not meant to be deployed against "AI is dangerous", but against "Various people say AI is dangerous, therefore AI is dangerous". If you don't trust your ability to evaluate the technical arguments, all you have is what various people are saying, and this kind of thing becomes a critical consideration. The news says "Geoffrey Hinton, very respected AI expert, says AI is very dangerous, so maybe you should think so too". In that situation "A lot of authoritative people have thought things were dangerous in the past and been wrong" is a point very worth considering.

Even technically minded people can do this. If you've never thought deeply about the technical details of AI risk, but heard some bad version once and dismissed it from that point forward, then when you find someone suggesting AI is dangerous, you may assume they got there by adopting the conclusions of others, and try to convince them that this way of reasoning is error-prone.

Edit: If this is the case, the most productive response may be "I don't think AI is dangerous because others believe it, I think it's dangerous for the following technical reasons that have nothing to do with deferring to others' conclusions..."

Edit2: In the same way, I have often used the Rutherford and Szilard argument *as a response to* the then-common argument "experts predict AGI is a long way away, therefore it's a long way away", to which saying "sometimes experts predict things are a long way away when they aren't at all" is perfectly reasonable

Expand full comment

If we are going to use the ostrich as a metaphor, I think it would be symmetrical to have a fable for the other side also. Perhaps Chicken Little saying the sky is falling? That actually turns out to be very old; one of the jatakas has Buddha, in one of his past lives, calming the fears raised by a rabbit after a similar minor incident . . .

Expand full comment

I have to admit I started out puzzled over the whole AI Safety issue, for several reasons:

1. Nothing I've heard of in the realm of AI comes close to anything I'd regard as "Intelligence", and I haven't heard of any major steps in that direction.

2. Since nothing we have, or are contemplating, comes close to "Intelligence", we really can't plan an effective protection against the next-several-generations version that might conceivably be a threat.

3. Who would ever give any automated system enough power to control important things in real life anyway? It makes some interesting premises for movies (Like Colossus: The Forbin Project, or War Games, or 2001: A Space Odyssey), but really those are more ridiculous than interesting.

I mainly started to take the idea semi-seriously because Scott takes it very seriously, and Scott is not only very smart, but very focused on good reasoning and careful consideration of evidence in context. At least in some areas. ;-)

But I keep returning to something I saw in one of the Substacks, which looked at the evolution of LLMs and GPT in particular. If I remember right, it extrapolated versions of GPT and concluded that GPT-4 used some enormous amount more training data and computing power than GPT-3, that GPT-5 would need very much more, and that GPT-6 would need more computing power than the world could provide, and still would be unlikely to approach anything like independent intelligence. So I think the dangers from AI are still (almost) entirely theoretical, and far in the future, and so far away from the current world that preparations are (probably) useless. But I'm nevertheless glad that smart, conscientious people like Scott are devoting some thought and energy to the issue, just in case it does turn out to be important.

I was amused that Scott mentioned my favorite current issue: global warming. I'm not sure of his overall stance, and maybe he was only listing issues that others might find relevant, but "People didn’t worry enough about global warming" seems like the opposite of his argument. Judging by media coverage and politicians' rhetoric, people seem to be worrying a very great deal about global warming, even though there's very little evidence that it has caused significant damage (at least compared to other environmental issues of the past and present) or that it will in the future. When I make this claim, I don't consider media reports to be evidence, even the media reports that say "Scientists say". I go back to the actual scientific claims in actual scientific publications, as compiled by the IPCC, and find some reasons for concern, but no current or projected problems that would be anywhere near the magnitude of the problems caused by serious efforts to rapidly decarbonize the world's economy, which seems to be the only solution on offer.

But maybe my reasoning is no better than the head-in-the-sand example. I often point out that the sky-is-falling reasoning common in public discussion is unsubstantiated, and the proposed solutions won't come close to solving the alleged problem, but even if that's true, it doesn't prove that there's no apocalypse coming. So maybe it's good that people are willing to spend a few hundred billion dollars on things that probably won't make much difference. Except that I still think we could spend those hundreds of billions on other things that would make a difference in other areas.

Expand full comment

The coffee argument attempts to bolster the idea that risks that seem ridiculous are ridiculous and that things suggested to pose theoretical risks via novel predicted pathways are usually wrong. It differs from plague and war in that plague and war are bad in very predictable ways and that coffee and AI were suggested to be bad in heretofore unforeseen ways that sound stupid.

"Computers will make computers so smart they could outsmart everyone at everything, at which point they'll kill us all" is in fact a novel and stupid sounding argument. I just happen to think it's probably true.

Expand full comment

My take is related to "Twitter derangement syndrome". I often see smart people on Twitter transition from reasonable tweets (with biases, of course), where they skip the dumb people on the other side of an argument and focus on its interesting ideas, to pure meme broadcast. Musk, for instance.

I think this happens because (a) takes on different issues aren't independent but correlated, and (b) the algorithm pushes you into the pattern of reading the smart people on your side of the correlation and the dumb people on the other side.

So you get ego-massage ("I also think what this very smart person thinks - I could have said that if I had more time...") and outrage ("I can't believe what's in the heads of the people who think the opposite of me.")

At this point you may start to become tribal; *you* change. What's the point of reason if the other side doesn't accept it? Just firehose them with vibes.

So my take on the coffeepocalypse argument is that they are no longer trying to persuade you. They are trying to influence the 99% of people who are tribal, the ones they see making equally dumb arguments.

Could it be that the more smart people act tribally, the more one does oneself? TDS is infectious.

Perhaps the 'grey' tribe subconsciously operates with a worldview analogous to trickle-down economics: the tribe will adopt whatever its own thought leaders think, and those thought leaders are susceptible to argument, a redistribution of the wealth of the rich-in-reasoning. But social media isn't well designed for that to work well.

This is all stated in black and white; I don't mean it so - we all have these elements running through us at different times of day and on different issues. TL;DR: it's not you the person is trying to convince. That's not where the engagement reward comes from anymore.

Not a new argument on reflection. Oh well.

Expand full comment

I don't understand why you feel the need to understand these arguments. Between this and the last one, https://www.astralcodexten.com/p/contra-hanson-on-medical-effectiveness, it almost seems like you're beating up on the intellectual weaklings.

Why not ask something like what is it about YouTube that attracts bad comments?

Idiots abound. Someone is ALWAYS wrong on the internet. https://xkcd.com/386/

Expand full comment

It's an error of overgeneralisation:

People worried coffee would be dangerous, and were wrong. That correctly leads to the conclusion that *it's possible to worry about new things being dangerous, and be wrong*, but it absolutely *does not* lead to the conclusion that *it's possible to worry about new things being dangerous, and be wrong, therefore any time anyone worries about new things being dangerous, they are wrong*.

The halibut example is a little confusing because it flips the negative ("not") into an affirmative ("are"). Suppose it were this instead, leaving the NOT in place:

A guy thought a halibut was a type of cow. He was wrong. It's not a type of cow. Therefore AI is not a type of cow.

Now, it's almost certainly true that AI is not a type of cow (unless the Mootrix hypothesis is actually true), but the fact that AI is almost certainly not a type of cow does not logically flow from the fact that a halibut is not a type of cow. The two statements are unrelated.

The same goes for the fact that coffee turned out not to be dangerous (except in edge cases where people drank over the LD50 of caffeine, which is about 10 grams or 100 cups of coffee, and died of a heart attack). The fact that coffee is mostly not dangerous is completely independent of the as-yet-unknown safety or danger of as-yet-uninvented AI.

The better argument for me is to think about the Omohundro AI drives argument, with relevance to human intelligence. Omohundro argued that without specific reasons not to, any AI with motivational drives would behave like a human psychopath in its pursuit of resources. AI at that level doesn't exist yet, but there are plenty of examples of the Unfriendly Human Problem, aka human psychopaths, or just plain mean people. The fact that Unfriendly Humans *can exist* strikes me as far more relevant to the hypothesis that Unfriendly AIs *may be able to exist*.

By analogy with humans with specific types of brain injury (my field), most present AIs are analogous to being disabled in many ways. ChatGPT is blind, for example: you can't show it anything, it can only respond to text. It's also pseudo in its theory-of-mind capabilities - you could cry while typing a message to it and it wouldn't know, although it might respond sympathetically if you typed about how sad you were. By extension, it can't really empathise, no matter what it "types" in response. It is psychopathic by default because no one has solved the AI empathy problem yet. If we solved the AI problem for general intelligence but didn't solve the empathy problem, we might be in trouble. The fact that high-IQ psychopaths do exist means that psychopathic AI *might* be able to exist, but is not a proof or certainty.

Expand full comment

Likely this was said above and I missed it, but the Russell argument from nuclear fission is (arguably) drawn from a much smaller reference class: the first time we invented a plausibly catastrophic technology, some experts confidently predicted it was impossible. Thus Russell's argument should be more convincing than the coffee argument, which is presumably sampled from a much larger class of worries.

Expand full comment

For me, it's about remembering past moral panics. I'm not saying that AI couldn't be really bad. I'm just saying that lots of ostensibly smart people believing it's going to be bad is not, in itself, a strong argument. This is especially relevant considering the information cascades that do happen in our community. "Eliezer believes in AI doom and he's really smart in [some other domain], therefore I'll update towards his position on AI doom".

I'm pretty much an AI bear. While I agree that AGI is not impossible, I don't see strong evidence that it's around the corner. In fact, I have the same frustration as Scott in reverse. Lots of people gesturing wildly at stuff that is clearly not AGI, then expecting me to freak out, without addressing the core of my skepticism.

P.S.: My favorite moral panic I remember from childhood is the idea that we were going to bury ourselves in mountains of trash. I looked up some old news clips from the late 80's in the archive of my local tv channel and it was quite funny to watch.

Expand full comment
Apr 25·edited Apr 25

"I would like to understand the mindset of people who make arguments like this"

I think a lot of people who do this aren't making logical arguments. They're signalling some combination of

1. Relaxed and confident leader: "don't be anxious about this stuff, be chill. Look, I'm chill."

2. Be optimistic despite things in fact being bad: *artillery raining down on trenches* "We've always gotten through wars in the past, this time we'll make it too."

3. Caution against the village weirdo, adversary, and/or "boy who cried wolf."

4. etc.

These things are social signals and/or "just-so" stories (the heuristic arguments you allude to). Yes, on further inspection they aren't logically correct, but they do have utility, which is why they have survived as standard ways of communicating. I'm not defending their behavior, just observing!

In terms of Zvi's simulacra levels, people are operating on different simulacra levels and jumping around the levels. When you engage on level 1, suddenly it's a snap to reality instead of social signals and other goals, so it's a sudden incongruous incompatibility with what they're saying. You're talking about different things. It's like the fearless leader getting annoyed "Hey what are you doing saying we might lose the war, I'm trying to calm everyone down so we can keep people showing up to work."

Then there's also stuff like "do I want to think of myself or be perceived as the sort of person who would lie or be incorrect about something" and/or true curiosity about the truth, which is why they don't have a response to you or are even surprised/annoyed that you've said what you've said.

In terms of behavioral biology, you're snapping on the executive function/logical reasoning parts of the brain and asking them to allocate more compute here when someone was going more off vibes, social calculation, feels. This can be uncomfortable or even perceived as socially adversarial.

Disclaimer: I made it all up and you probably know most of what I wrote. To be fair, I understand a large purpose of the post was to introspect about our own ways in which we make these same mistakes, rather than point out that people make a certain category of mistakes.

Expand full comment

People making these kinds of “Thing A didn’t happen therefore Thing B also won’t happen” assume that (or act as if) your worry about Thing B is based solely on feeling the emotion of apocalyptic fear for the very first time, and submitting that alleged novelty of emotion as your only evidence that worrying about Thing B is rational. So, they believe that if they can attack the novelty of that feeling (“people have felt apocalyptic fear before!”) your argument has crumbled. They are not thinking of your apocalyptic fear about Thing B as a downstream consequence of other evidence at all, so they don’t think they have to address the upstream evidence.

Expand full comment

Rhetoric. Shiny things own us. Here's a nice similarity, there's a touching picture, there's a charismatic person.

Noticing these influences and fighting them, so as to consider only the rational backbone of things, is for very few... if any.

Expand full comment

Could it be that someone just wanted to write about coffee and knew that pretending his coffee-story was actually about AI would get more clicks for the coffee-story?

Expand full comment

I think people like Eliezer and Connor Leahy predict doom so confidently and occupy so much attention space here that they cause these arguments to look good as existence proofs.

Even putting those cases aside, people typically ascribe high confidence to most public claims people make and this implicitly lowers the bar for what counts as acceptable criticism. Original claim-makers are trying to model the world; critics are just trying to raise considerations.

Expand full comment

I think this is a typical failure of thinking probabilistically. A lot of people basically think all probabilities are zero or 1; they hear someone say the probability of AI killing us all is non-zero, and hear that it's 100%. The coffee argument is a perfectly fine argument that the probability of AI killing us all is not 100%; having demonstrated it's not 100%, they conclude it's zero.
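A minimal sketch in Python, with entirely made-up numbers, of how a single "past scare fizzled" observation should move the probability: down a little, not all the way to zero.

```python
# Illustrative only: made-up prior and likelihoods, just to show that evidence
# moves a probability along a continuum rather than snapping it to 0 or 1.
def bayes_update(prior, p_evidence_if_doom, p_evidence_if_safe):
    """Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * (p_evidence_if_doom / p_evidence_if_safe)
    return posterior_odds / (1 + posterior_odds)

prior = 0.10  # hypothetical prior that AI kills us all
# "A past techno-panic (coffee) fizzled" is nearly as likely in doom worlds as in
# safe worlds, so it carries only a weak likelihood ratio against doom.
posterior = bayes_update(prior, p_evidence_if_doom=0.8, p_evidence_if_safe=1.0)
print(f"{posterior:.3f}")  # ~0.082: lower than 0.10, but nowhere near 0
```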

Expand full comment

I'm not registered with twitter (and don't want to be). All I see at the 'read the full thread here' link is the tweet with the coffeecup image. I might be missing the link on the page, but I have looked and can't see anything obvious. Is there a way for me to read it?

Expand full comment

I believe you are overthinking this.

First, most people haven't studied AI or any of the other potential global threats in detail, so they default to the easiest and most reassuring response. Anticipate this sort of reasoning and when you encounter it, engage them in a discussion that brings out the bad things that HAVE come true. Keep in mind that people don't want to seem ignorant, so this is also a way for them to avoid a discussion they know, they don't know too much about. But they actually know more than they think. You can tease out what they do know and provide them with facts they didn't know using all sorts of techniques such as storytelling etc. You can even discuss literature and movies which is an easy way to get people to engage. Talk about Isaac Asimov and the three laws. Or the movies Colossus and War Games.

Second, be sure to acknowledge their other, more legitimate concern, which is that it is difficult to know what to worry about or what the best solution for a problem is, because science, by its nature, changes as more is learned. Vitamin C is good for you (Linus Pauling is a genius), then it's not (Linus Pauling is a crackpot), then it's good again. If you worried about every possible bad thing that might happen, you could drive yourself nuts. So just be optimistic and focus on every wrong prediction when some distressing topic comes up.

Third, you need to steer the discussion away from heuristics so it addresses the known and unknown facts. AI may pose a threat, but governments are trying to work together to head off the problem. Then discuss why it is important for the public to understand the threat so that they can encourage their lawmakers to enact legislation or enter into treaties to address it. The same can be said about climate change. Make it personal. For instance, I am old enough to remember when we had something of a winter in South Florida. Moderate temperatures for over a month instead of one day in January. We need to be better prepared for the next pandemic. My sister died because of Covid. We have to stop Putin--most of my family in Europe were killed by Hitler.

Don't be put off by wrong thinking. Just get people to think better.

Expand full comment

The coffee example is particularly bad because he refuted himself in the same thread.

“Kings made arguments against coffee because it was a threat to their rule”

“It was indeed a threat to their rule”

“Therefore, they were ridiculous to believe that”

AI arguments made in this manner would look like:

“Humans made arguments about AI killing them all”

“It did indeed kill them all”

“Therefore they were ridiculous to believe that”

Expand full comment

The important part is the similarity of characteristics.

For the nuclear chain reaction example, the point isn't just that 'someone was wrong'. It's that even the most informed experts, who had all the available evidence, and who were completely convinced, were embarrassingly wrong. It's also that nuclear fission was cutting-edge science at the time. That papers like to publish strong and sensational claims. That the scientific community reacted the way it did. The governments reacted to it in a specific way as well. It's all this squished into a specific example.

The idea is that you're trying to work out whether something like AI risk is overrated or underrated by comparing it to other claims that share characteristics.

It's like this: "People were wrong about coffee and this whole AI situation feels kinda similar to the coffee one. It's similar because the same kinds of people are saying the same kinds of things and reacting in the same kinds of ways. It's similar because the reporting reads similarly and the arguments for it have similar characteristics, and so on. It walks and quacks just like a duck, so it probably is." That's the steelman.

The AI situation does share a lot of characteristics with other apocalyptic predictions and they're characteristics that make it harder to believe. (Just like we find it difficult to believe claims that share characteristics with pseudoscience, cults, scams, phishing, etc).

To most people, AI risk sounds like this: "Right now, thousands of the world's brightest minds are recklessly working towards creating software that will, most likely, wipe out humanity. We don't know exactly how, but we're very certain that it'll be a disaster. All these smart people are being foolish in thinking the software is safe and helpful. It's not bad the way social media is bad (societal transformation); it's bad as in meteor-hitting-the-planet bad (civilization ending)."

That's a wacky sounding claim that sounds a lot like loads of other wacky sounding claims.

Expand full comment

I think your basic summary of the argument is wrong: You rephrase it as "Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen."

But instead it is the somewhat similar argument "Here's an example of a time someone was worried about something, but it didn't happen. Therefore you should be less confident that AI, which you are worried about, will happen."

In other words, it is an argument against "we should be certain AI is dangerous" and not an argument for "we should be certain AI is harmless"

Unsurprisingly, if taken to mean "we should be certain AI is harmless," it falls flat.

Expand full comment

I'm surprised you don't get it. It's motte-and-bailey!

Bailey: Look at the people I liken to my opponents! Look how dumb they are, and by extension my opponents!

Motte: Here is a curious story that undeniably happened. I think it's similar to AI worries that we all already know are unfounded, and story similarity is subjective, so you can't prove me wrong.

Expand full comment

> And as some people on Twitter point out, it’s wrong even in the case of coffee! The claimed danger of coffee was that “Kings and queens saw coffee houses as breeding grounds for revolution”. But this absolutely happened - coffeehouse organizing contributed to the Glorious Revolution and the French Revolution, among others. So not only is the argument “Fears about coffee were dumb, therefore fears about AI are dumb”, but the fears about coffee weren’t even dumb.

Fears about coffee weren't even dumb, point #2: just look at what it's done to our own society. We've used coffee to normalize the notion of drug addiction. "I can't function without a cup of coffee in the morning" has gradually gone from a punchline to a simple "yeah, I hear ya" statement, because it's not just that weirdo over there who's an addict; virtually everyone in our civilization today, even a surprisingly high percentage of young children who we would never think of giving alcohol or tobacco to, is addicted to caffeine.

Forget marijuana; the true "gateway drug" is coffee.

Expand full comment

"I think it has to carry an implicit assumption of “…and you’re pretty good at weighing how much evidence it would take to prove something, and everything else is pretty equal, so this is enough evidence to push you over the edge into believing my point.”"

Kind of tangential, but this is the same kind of naive Bayesian fallacy that I was getting at in the lab leak vs zoonosis post. "Weighting evidence" implies that you have a small set of hypotheses and are adding log odds to promote them relative to each other, but this has the problem that it's not taking dependencies into account.

What you really need to use the evidence for is something like building a high-fidelity graph of what factors are relevant. There's tons of ways that Russia has a bunch of legible military advantages, but ultimately these just cash out to the one fact that Russia is bigger than Ukraine (and maybe also historically invested more into its military). Once you take that into account, you already expect each piece of Russia's military to be bigger than Ukraine's, and so it's of ~no evidentiary value to realize this for the particulars.

Rather, the evidence would have to come from other independent sources of information, e.g. morale, allies, quality, etc. (which turned out to favor Ukraine). The key here is that they have to be independent of each other, yet correlated with the outcome of interest, which happens most easily if they are the largest-magnitude causes of the outcome of interest.

Not sure my points here have any straightforward connection to the coffee argument. I guess the coffee argument is basically just telling people not to Aumann-update on this because it postulates a general factor of technology opposition which makes the discourse worthless as a signal.
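A toy calculation (illustrative numbers only, not anything from the post) of the double-counting problem with the Russia example: if three "legible advantages" are all downstream of the single fact that Russia is bigger, naively adding their log-odds counts that one fact three times.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

prior = 0.5                      # start agnostic about a quick Russian victory
lr = math.log(2)                 # treat "bigger army/tanks/planes" as 2:1 evidence each

naive = sigmoid(logit(prior) + 3 * lr)    # count the correlated signals separately
deduped = sigmoid(logit(prior) + 1 * lr)  # count their shared cause ("Russia is bigger") once

print(f"naive: {naive:.2f}, deduplicated: {deduped:.2f}")  # naive: 0.89, deduplicated: 0.67
```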

Expand full comment

I don't want to defend the coffee argument on its merits, but it feels like a much less stupid claim than "medicine doesn't work" which you took much more seriously for some reason.

Expand full comment

“once people were worried about coffee, but now we know coffee is safe. Therefore AI will also be safe.”

I think this is not the argument at all. In fact, what you identify as the conclusion is really a PREMISE of the argument. To me the argument reads as:

1. Worrying about a coffeepocalypse is absurd.

2. People expressed worry about a coffeepocalypse once.

3. The reason was that these people were not honest but pretended to worry about it for an ulterior motive, namely power.

4. This is evidence that people sometimes malign beneficial technologies for ulterior reasons even if this is absurd.

5. Worrying about AI is absurd.

6. So the best explanation is that AI-doomers are dishonest and really driven by ulterior motives, such as power.

I think this is still a bad argument, but this is what it actually says.

Expand full comment

This is really interesting to me. The argument intuitively makes sense to me, and your "gotcha" response would seem like arguing in bad faith if I encountered it in the wild. I am also very much not the target group for this website, this is my first time posting, and I'm not sure if I can explain it in a way that would make sense when the original wouldn't. I'll give it a try though!

I think you're fundamentally misunderstanding the argument. You're taking this way too literally. I think translated it means something more like,

"Most people's objections to new things and new technologies are not data based. People have an emotional reaction to new things, simply based on them being new, but this is bad practice, a bias if you will. Not all new things are bad: here is an example. You should judge AI safety on its own merits, not just because it's new."

I think many of these people also hold the implicit view that the other evidence for AI worry doesn't stack up, that there are no heavy arguments for it (to use your terminology) apart from fear of the unknown. But I don't think that's true for everybody saying this.

Most people's fears about AI exist simply because it's new. Yours are not, so this argument isn't aimed at you. They're basically saying: I think you're only worried about AI because it's new and people are scared of new things. A good-faith response would be more like "I have reasons X and Y to be worried about AI," rather than changing the discussion to be about whether people should be scared of new things or not.

Expand full comment

What they’re really saying is pretty simple: “It’s perfectly obvious to me that <coffee,global cooling,…> wasn’t bad despite lots of very-smart-people thinking it was, and I’m saying the same thing about AI. Since my track record on these other matters is so good, you should believe me on AI too.”

An “argument from prior track record” is actually a solid argument – eg it’s the argument for trusting any scientific theory. Unfortunately in this case the track record is being arrived at after the fact and highly selectively.

Expand full comment

My steel man of the coffee argument:

“There are nearly always predictions of widespread doom as a result of new technologies, even when they’re obviously innocuous like coffee. Given how rarely new technology triggers an apocalypse, we should be skeptical of doomer arguments about AI.”

Obviously this leans on coffee doomsaying as emblematic of something that happens constantly, but it absolutely does, for every major new technology I can think of.

Stuart's argument about nuclear skepticism can be construed the same way: there are pretty much always scientifically qualified skeptics of new technology, so the fact that an authority figure thinks something won't work shouldn't provide a major update, unless you understand the mechanism by which it's impossible (the laws of physics, for perpetual motion) yourself.

Expand full comment

The claim that there were moral panics that turned out to be nothing is rightly dismissed as an argument against AI concerns.

The claim that we missed some consequences that we should have panicked about before is also true, but tells us nothing about the specifics of AI doom scenarios, which have to be argued separately.

Expand full comment

I'll admit I only skimmed the original thread, but it doesn't seem to me like the coffee analogy is an *argument*. That is, he's not saying "AI will be fine *because* coffee was fine." He's just saying "AI will be fine, and after it's fine it will seem as normal as coffee does now."

Like most posts online, it's not meant to convince anyone. It's just an anecdote for the people who already agree to feel good about.

Expand full comment

It's because doomsaying is an especially powerful form of persuasion, as it can justify people or society taking actions that would otherwise be completely insane or taboo (in our case, violating individual liberties, massive economic extraction, etc - in the past/other places perhaps it justified eliminating all the idolators, or everybody leaving the island, or rebelling against Rome, etc).

Therefore, those with agendas that are unpopular can exploit doomsaying to advance their (perhaps unrelated) goals, and it makes sense to build a strong epistemic immunity to believing any random doomsayer who wanders along.

Climate change is vulnerable to this. Many sincere people believe that it is a major threat that requires society-level disruption to solve. Many others want a particular *type* of societal disruption, and argue that we must implement their program because it's the only way to prevent climate change. That, in turn, makes it much easier to dismiss the sincere & concerned.

It doesn't help that many doomsayers are straight-up cranks (Mayan calendar/2012?), which makes the immunity relatively easy to develop.

Jeremiah turned out to be right; so did Cassandra. But Isaac Luria sure didn't (https://en.wikipedia.org/wiki/List_of_messiah_claimants).

Expand full comment

With regard to nuclear chain reactions, I feel like it's a more valid form of argument - the statement there is "X is impossible because an expert said so", which is then disproven. This is basically just restating the appeal-to-authority fallacy with an example.

However, the fact that someone has used an appeal-to-authority fallacy doesn't change whether or not the underlying thing could be impossible. You cannot use an appeal to authority to prove whether nuclear chain reactions are possible or whether AI is dangerous. By the same token, showing that something is an appeal-to-authority fallacy does not prove either of those statements.

The difference between the statements is:

Nuclear chain reactions being impossible was an appeal to authority which was wrong; thus we should ignore all appeals to authority.

Coffee being bad was an appeal to authority which was wrong, therefore all appeals to authority are wrong, therefore AI is good because anti-AI arguments have used appeals to authority.

Expand full comment
Apr 25·edited Apr 27

Surely the steelman version goes something like...

* Humans keep doing this thing where they see something new and get hysterical about how [new thing] is going to bring about doom. This even happens when [new thing] is 100% harmless and cannot possibly cause doom, like coffee.

* Therefore, humans clearly have a significant false positive bias when it comes to calculating whether new things will cause doom.

* Therefore, the smart thing to do is have a strong prior against any claims that some new tech will cause doom.

---

Like, more than anything else this just seems to be an argument about the false positive rate. If some new medical test comes back saying you have cancer, but you then find out that the cancer-ometer ALWAYS says 'cancer' no matter what, you're probably going to ignore the result. The coffee argument is the same.

Dr S: We've put 'AI' into the new doom-ometer to see if it's going to cause doom. The results are scary: it reports a very high possibility of doom!

Dr J: Remember when we put a cup of coffee into the doom-ometer, and it reported a very high possibility of doom? I'm starting to think that thing isn't very reliable.
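Written out as one Bayes step, assuming a doom-ometer that fires on essentially everything, so that $P(\text{alarm}\mid\text{doom}) \approx P(\text{alarm}\mid\text{no doom}) \approx 1$:

$$P(\text{doom}\mid\text{alarm}) = \frac{P(\text{alarm}\mid\text{doom})\,P(\text{doom})}{P(\text{alarm}\mid\text{doom})\,P(\text{doom}) + P(\text{alarm}\mid\text{no doom})\,P(\text{no doom})} \approx P(\text{doom}),$$

i.e. the alarm leaves you at your prior; it only carries information to the extent the two likelihoods differ.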

Expand full comment

Random hat into the ring: it's a proof-by-contradiction, because they (I) see the standard AI Doomer position as "AI *could* be bad, therefore it *will* be bad". Thus, pointing out all of the other times people freaked out about "could happen==will happen" and then it didn't happen, weakens that type of claim in general. It's very much a pop-debate "dunking on Twitter users" type of argument, but I think that's the core.

Expand full comment

If this was a genuine attempt to understand this reasoning, I'm concerned.

Humans have pattern-seeking brains. They also have threat-detecting brains. Therefore new ideas and technologies are often met with threat predictions based on hypothetical reasoning built in the absence of concrete empirical evidence. Often this reasoning seems silly in retrospect. Therefore when we see this kind of argument we should be concerned that we are falling into a known bias pattern.

This isn't the knockout argument proponents think it is but it also isn't an insane one. I'm not well-integrated into the rationalist community but I've spent a lot of time trying to understand this position of (relative) certainty on AI risk and still every AI risk argument sounds more or less like this to me: https://xkcd.com/748/

Just hypothetical built on hypothetical until the end-result is exciting and world-changing and disastrous. There are lots of reasons I feel this way but one is for sure that "here's a danger scenario I've come up with based on motivated reasoning, I now think it's more likely than it is" is a known failure state for the human mind.

Expand full comment

In their minds it's a wild, low-probability idea vs. something that we can see concretely. Obviously the thing we can see concretely has far more weight. It's like how we argue from real life to justify something in a storybook or refute it as absurd.

Expand full comment

My hypothesis is that the vast majority of people making these kinds of arguments aren't actually trying to be right--they're trying to be interesting. In fact, I'll go ahead and propose that most conversational assertions aren't primarily meant to convince anyone of anything; they're meant to garner attention to the person making the assertion. In other words, they aren't primarily informational statements at all, they are bids for social status. "Coffee was safe and therefore AI will be too" doesn't mean what it appears to mean, it actually means "I'm an interesting person who says provocative things, pay attention to me." This carries further implications regarding what "winning an argument" actually means to most people. The rise of social media can have only exacerbated this effect.

We're just lucky we live in a universe where being factually correct is sometimes interesting.

Expand full comment

The coffeepocalypse argument is a cousin of a thought exercise cognitive therapists ask people to do when their conviction that something is going to happen is based mostly on a *feeling.* So for instance very depressed people often do not try going out for a walk or calling a friend because it feels so true that doing either not only will not make them feel better, it will make them feel worse. So you ask them if they can think of any times when something felt very plausible to them, but they later discovered they were wrong, and most people can think of one. The point of the exercise is to help the person move feeling out of the evidence category, where it doesn't belong.

So I think that is the way the AI coffeepocalypse argument is supposed to work. It's only suitable for people who were genuinely worried about their coffee consumption because the idea was in the air that it was bad for you, and are genuinely worried about AI for the same reason -- the idea of AI doom is in the air. You might think that's a pretty small subset of people, but actually I think that for most people, including me, it is simply impossible to make a decent prediction about how the whole AI thing is going to turn out, and also very hard to just have a wait-and-see attitude, so we fall back on our natural fear of the unknown, plus the AI worry that's in the air, and treat them as evidence.

So while I agree that the coffee argument is not exactly intellectually rigorous, I think it has a point, & the point is to get people to remind themselves that their uneasy feeling about AI is not evidence. So for practical purposes, the argument should be useful to almost everyone who got worried about coffee when there was chatter about its dangers, and is currently worried about AI. I guess we should also exclude the tiny fraction of people who understand AI well enough to have grounds for thinking they have some ability to predict how things will work out. (I'm inclined to think they don't have much more ability to make accurate predictions than anyone else, but they think they do.)

Expand full comment

I think part of what motivates people firmly in this camp, like Marc Andreessen, is that the current technology has moved in a somewhat different direction than Safety folks in general thought it would. Correct me if I'm wrong, but didn't Eliezer fail to see the whole "inscrutable matrices"* thing, and especially LLMs, making it this far? So I think the thing folks in his camp aren't outright saying is "these specific thought patterns don't seem to be predictive."

I really like reading Eliezer because he lights up parts of my brain that don't light up elsewhere, and he takes me in directions I wouldn't go on my own. But I also don't overall agree with him on a lot of things. A lot of that disagreement comes from experience building complicated stuff (I'm a Product Manager, I know, barf, and I readily concede probably almost everyone here is smarter than me on a lot of different dimensions) and having an experience of just how often reality pushes back at you and says "Oh wait, this is even harder than you originally thought, even after you took into account that it was going to be harder than you originally thought."

I'm sure people like Marc Andreessen, who build companies, have these feelings on steroids. Whenever you try to make something do one specific thing, it's really hard to get it to settle into that state. And I know the safety argument is that you don't need a specific state to kill everyone, but I just don't agree.

It's so hard to get "things" to do "stuff" that I have a hard time accepting that a nascent superintelligence will be able to do almost anything on purpose without a lot of effort. Especially when what seems like superintelligence today isn't agentic,** but is taking the form of these sort of weird Thought Crystals that no one anticipated. I've read the Instrumental Convergence arguments, big chunks of the Sequences, etc., and while I do think there are *major* dangers probably not too far away from where we are today, I don't think the shape of them is what a lot of people think they are.

Anyhow, if I can’t steel-man a particular argument someone is advancing, i try to see if there’s a nearby argument I can steel-man and I think this is it.

My own system of building emotive intuition pumps (I spend a lot of time thinking about what it's like to *be* a data packet in my day job, and I've gotten pretty good at doing that to catch weird timing errors, etc., that other people don't catch) tells me we're probably going to get weird emotionless golem agents at first. They won't precisely want anything, and commands like "figure out what to do and then go and do it" will probably have very limited efficacy, while at the same time asking specific questions or giving it requirements and having it fulfill them will get breathtakingly awesome. I think all intelligence looks like "inscrutable matrices" at least in some part, and the next step beyond the one we're at right now will be building structures on top of those to shift the weights around to pre-set states. And then a bunch of breakthroughs and we'll get real-time updating, and that one in particular alarms me, but by that point I think our theories of mind are going to get generally better.

*Mechanistic interpretability seems to be making them somewhat more scrutable, so you could say this was another wrong prediction.

**I do think LLM’s have general intelligence, but are not agents in a meaningful sense

Expand full comment

one of the top 50 most annoying things about "AI discourse" is that people love to "argue" by debating which metaphor applies

"AI is like coffee"

"nuh uh AI is like a malicious god"

"nuh uh AI is like cold fusion"

"nuh uh AI is like this one movie"

"nuh uh AI is like this entirely different movie"

i still hold out hope that somebody involved in this wretched pit of a "discourse" will actually work from the facts we already know (AI is algorithms running on computers, we know quite a lot about the capabilities and limitations and resource needs of algorithms running on computers, we know your brain is not an algorithm running on a computer...) but i'm not holding my breath

Expand full comment

"But then how does any amount of evidence prove an argument?"

The only thing that *proves* an argument about some future event is actually observing that event. Russia can have a 3:1 manpower advantage, better generals, better tanks, and every expert in the world could predict that they'll win easily, but none of that is proof. The proof is in the pudding, as they say.

And I think that might be the root cause of your frustration about AI arguments. No one has made the super-intelligent AI pudding yet. I think that where people come down on the debate about AI risk largely reflects their own temperament and deeply ingrained biases, and both sides have people that are making bad arguments to confirm their biases.

My advice is stop trying to understand all these arguments as well thought-out and rational attempts to discover the Truth, and start trying to understand them as flawed attempts at persuasion by appealing to emotion and common biases, very much like political debates.

Expand full comment

It seems like the basic problem here is trying to find a general heuristic for evaluating some class of arguments without digging in and thinking them through. That's a necessary part of life--there are absolutely people who will claim that *everything* is an existential threat to humanity, if they think it will win them their argument. But the best it can do is give you a very rough guide.

Over the years, I've probably heard a million different claims about things causing cancer--chocolate, or red meat, or saccharine, or Red Dye #2, or whatever. Most of these were probably nonsense, driven by p-value fishing and a media file-drawer effect. And yet, smoking and breathing in asbestos fibers and getting sunburned a lot all actually *do* cause cancer. You can and should dismiss most such claims with a heuristic (media sources always report on crap like this and it turns out to be one n=14 study in mice involving 1000x the dose of the carcinogen humans are ever exposed to), but you also need another heuristic to decide whether to take the claim seriously enough to dig in a bit.

Once you're to the point of writing a think piece on why your quick heuristic for dismissing a claim of this kind makes sense, it seems like you've kind-of gotten yourself wrapped around the axle. The heuristic is useful for keeping you from changing your diet or spending hours digging into evidence every time the local TV news reports on that n=14 study in mice, but it isn't useful for actually evaluating a claim that something causes cancer. That can only be done by digging in and thinking hard and evaluating evidence.

Another way of thinking about these heuristics is as a kind of base rate. Prominent claims that X is going to lead to the extinction of mankind don't have a great track record, so we should start out skeptical. But then you need to dig into the available evidence. (Also, there's an anthropic-principle thing going on there that makes claims that something will wipe out mankind a little funny to evaluate--everyone who ever evaluates such a claim will find that it hasn't happened yet, even in a universe where {hostile AI, an engineered plague, an asteroid impact, an ecological catastrophe} does eventually drive mankind to extinction.)

Of course, the problem wrt AI risk is that the evidence is hard to understand and interpret. We are speculating about capabilities that we don't know computers will ever have, but that would be really dangerous for humans if they did.

There's a parallel argument about AI risk that says more-or-less "we thought AI would never do X, but now it's doing X better than humans, therefore people who argue that AI will never turn us all into paperclips are also wrong." As a heuristic for dismissing arguments, it's not so helpful here. But maybe it can inform our base rate or priors.

My sense, as someone outside the AI field, is that it's hard to make convincing arguments that AI will never be able to do X (for any X), but that doesn't mean that AI will eventually do every X we can think of.

Expand full comment

The argument you agree with, but question if it may be fallacious:

"People were wrong when they said nuclear reactions were impossible. Therefore, they might also be wrong when they say superintelligent AI is possible."

*is* a form broadly misused/abused - for pseudoscience or outright supernatural, e.g. Electric Universe, Velikovsky, astrology, ancient aliens, God.

"Is this form of argument valid?" isn't the right question here, as it most argument forms can be used for the invalid. It boils down the independent question of whether AGI/ASI in particular should be considered in the same category as pseudoscience, and clearly here we'd all point to other evidence to show that it's different.

Expand full comment

To me, it sounds like an informal "Bayesian" argument. The author of the tweet is presented with a conundrum of the form:

- There is a subset of clever people that, applying a stylized model, have come out with a potentially catastrophic outcome that would materialize in the middle or long-run.

Then he wants to estimate how often a group of clever people who came out in the past with a catastrophic scenario have been correct. Of course, such a base rate is pretty difficult to compute unless you can survey all intellectual discussions in recorded history (without significant bias) and then determine what counts as a "catastrophic prediction" and which ones actually came true. Thus, he takes a small, quasi-random sample of N=1 catastrophic predictions, and since that one didn't come true, he concludes that "clever people predicting terrible things" are usually not right.

I think the minor point, that clever people using simplified models of the world are usually wrong in their long-term predictions, is broadly correct, and I could back it up with a large N. Whether this says anything about AI risk, I'm not so sure. After all, whether AI risk is real or not isn't a random event. But here, many of you defend "non-frequentist" probabilities and use base rates in your reasoning that are similar to what the author of the tweet is doing. As an example: suppose you have a neuroscience study. You can't properly evaluate it, and as such you don't know whether the result would replicate. After a quick googling, you find a survey of N=300 papers in the area, not including the one you are interested in, that says that only 40% of them replicate. Would it be so incredibly wrong to estimate the probability of the study you're interested in replicating as 40%? What about N=299 papers? Lower the N enough and you get to the Dan Jeffries tweet.
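A minimal sketch of that base-rate arithmetic, assuming a flat prior over the unknown rate (my own illustration, not anything from the tweet): the N=300 survey pins the estimate down tightly, while the N=1 sample of failed doomsday predictions leaves you almost as uncertain as you started.

```python
from scipy.stats import beta

def base_rate_estimate(successes, n, prior_a=1, prior_b=1):
    # Posterior over an unknown base rate after observing `successes`
    # out of `n` cases, starting from a flat Beta(1, 1) prior.
    a = prior_a + successes
    b = prior_b + (n - successes)
    mean = a / (a + b)
    low, high = beta.ppf([0.05, 0.95], a, b)  # 90% credible interval
    return mean, (low, high)

# N=300 replication survey, 40% replicating: a tight estimate near 0.40.
print(base_rate_estimate(successes=120, n=300))
# N=1 failed doomsday prediction: the interval is so wide it says almost nothing.
print(base_rate_estimate(successes=0, n=1))
```

On this toy calculation, 120 of 300 gives a 90% interval of roughly 0.35 to 0.45, while 0 of 1 spans roughly 0.03 to 0.78.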

Expand full comment

I understand and sympathize with your frustration, but this is just another example of a common conundrum many people here face. Once you have some basic understanding of epistemology, most people seem like drooling idiots.

Expand full comment

So there's a lot of people outside the rationalist community who don't think "x-risk" when they think about AI, and if you mentioned it to them they'd probably say it was some kind of techbro delusion. Even so, they still hate current AI technology. I've seen people react with visceral disgust at anything that seems AI-generated or uses AI—they're not *worried* about it, they're *offended* by it.

The coffeepocalypse argument thread opens specifically with reference to *backlash*, suggesting it's not really addressing risk but talking more about those kinds of reactions. This is given explicitly a couple of times: "Both forces fought back against the society-shifting power of coffee just as we are seeing media companies battle AI companies today for the societal shifting power of AI." "This pattern plays out again and again and again in history. New technology. Resistance. Resistance overcome."

(Of course this doesn't invalidate your general argument about using isolated incidents as proof.)

Expand full comment

Rutherford died in 1937. We can be confident he said nothing, accurate or otherwise, the day before Chicago Pile 1.

Expand full comment
Apr 25·edited Apr 25

COVID predictions are a useful metric for judging which public intellectuals are "pretty good at weighing...evidence".

Everyone wrote extensively about it, and the duration was short enough that we get to judge whether they were correct or not.

Expand full comment

I think something completely different is happening:

1. "I'm trying to predict something in advance."

2. "Here's an example of a failed/successful prediction.

3. Given we now retrospectively know the results of this other prediction, it should be possible to have confidence in this new prediction prospectively."

The logical error, which is extremely common, comes with the assumption that a reasonable person could have had the same prospective clarity as the retrospective observer. We should know these are qualitatively different states, but we mix them up all the time.

In the same way it was impossible for someone to have confidence in coffee, given the prospective uncertainty, it would have been impossible for someone to have confidence in a nuclear chain reaction. Nobody KNEW... until it happened, and suddenly everyone knew! It's difficult to wrap your brain around this concept without practice, because once you know a thing it is hard to unknow it to the point where you can understand the person who is still struggling under uncertainty. Very easy to spot in someone else.

Expand full comment

If the target audience (or even the poster) doesn't have a grasp on the specifics of the AI, the actual argument for or against AI doom is simply "do I think these people are right?"

Many people, especially smart people, will see other smart people supporting AI-doom, but rather than engaging with the logic, they are trying to evaluate the people (trust-the-science types maybe).

If your evaluation of AI-doom is "are these people usually right?" then it's quite reassuring to see that smart people have been wrong about doomsday scenarios in the past. It doesn't have anything to do with the underlying logic of the doomsday scenario.

Honestly this doesn't seem that hard?

Expand full comment
Apr 25·edited Apr 25

Bach wrote a Cantata addressing the Coffeepocalypse. It's funny and light-hearted. The music is good enough, but clearly not his best.

https://en.wikipedia.org/wiki/Schweigt_stille,_plaudert_nicht,_BWV_211

Expand full comment

Is this like the "trapped prior" thing you were talking about, with the same argument moving different people in different directions?

Expand full comment

Perhaps taking them as “arguments” instead of rationalizations and virtue signaling is the problem? With a social expectation of “having an opinion” in our contemporary dystopia, most people (who in my observation rarely think deeply about anything) reach for familiar justifications that resonate with them. Perhaps it seems more erudite to them than simply saying “I’m with #team-utter-bullshit not #team-absolute-tripe”? 🤷‍♂️ At least in the USA, we’re accustomed to bipolar arguments defined by the extremes with demonization and dismissal of the opposite as the standard modus operandi.

Expand full comment

Tweet #5's claim that "What they were and are really arguing about is power." is an important and true point. Like Chapman belabors in Better Without AI, we were more "intelligent" than chimpanzees for a long time without being appreciably more powerful than them. Our power came from building cultural, colonial knowledge stores, which is often what we mean by "technology". (It's not just a matter of technology, but also maintaining the institutions that can reliably translate those ideas into changes in the physical world.)

In the case of coffee, this new change in the world ended up having a democratizing effect; more cognition available to the average Joe without specific dispensation from kings, politicians, and alcohol manufacturers. I think Prof. Russell is trying to take an accelerationist, technophilic approach that basically all new technology that serves as cognitive prosthesis is democratizing in this same way. I personally don't agree with that take, but I can see how this is an attempted pattern match on a deeper level than "here's a wrong prediction, so there will be others."

So I agree that we should look at power, not intelligence, because intelligence does not trivially map to power and the correlation is weaker at the high end, not stronger. I also think that coffee has obviously different affordances than AI, and so their relationships to power are dramatically different. Coffee just serves to stimulate you out of the low end, giving you a bit of intelligence that gives you a bit of power on the low end where they still correlate. AI is a case that can pollute or take control of our collective knowledge stores, and is reaching much higher levels of intelligence, so the relationship to power is much more idiosyncratic and needs to be evaluated on its own merits. But I do think it is coherently arguing "better thinking technologies uplift us all" (even if I disagree with that argument) and does not reduce to just "here's one failed prediction, so this other prediction must fail".

Expand full comment

The coffee/AI thing sounds like an appeal to absurdity. AI danger sounds absurd to them, so they compare it to something they think you'll agree is absurd, but this doesn't work well because the underlying reasons are so different.

Expand full comment

If I were generous, I think the argument intended is actually a clumsily communicated version of "the likelihood is low, based on the recurring trend of alarmism over new tech typically being overblown". Yes, that is not an adequate means to evaluate risk, which ought to be done case by case, but it is all the layman has to rely on here (which is nearly all of us, because we aren't flies on the wall privy to all the right information). Call it agnosticism, brushing off the question with an affirmation that "everything will be ok", as opposed to a serious evaluation. Even after sifting through all the data we have, they're not going to make a confident assessment themselves. I know I can't.

When all you have is spurious speculation on incomplete information, the temptation is to bias to not panicking. These events aren't *exactly* completely out of the public's control qua pressure, but they might as well be if there is so little confidence about danger any which way.

Expand full comment

> Whenever I wonder how anyone can be so stupid, I start by asking if I myself am exactly this stupid in some other situation.

Here is a candidate scenario: try writing a piece on the absolute risk of democracy *as it is* in the US, as compared to alternative forms of government, including the one we have except transparent and truthful (behaving as it is claimed to behave), as well as other forms of government that have not been tried.

Unfortunately, there is a "do you have the balls to write it" aspect to this challenge.

Expand full comment

Yep, that horse is dead.

Expand full comment

Nit-pick on one of your examples: it's actually not hard to counter a 3:1 troop advantage, as long as you're on the defense. (Well, it is hard, but only in the sense that everything about war is hard and awful.) Defending troops are expected to lose about 1/3 as many as attacking troops. Casualty ratios in modern war have much more to do with who is on the offensive than other things you'd think would matter more.

Expand full comment
Apr 25·edited Apr 25

I believe it's a case of the boy-who-cried-wolf phenomenon, combined with a failure to recognize the significance of fat tails--a dangerous combination.

Expand full comment
Apr 25·edited Apr 25

(1) I don't think people are anti-AI safety, they're anti-'the sky is falling, the sky is falling!' warnings about "if we don't solve AI safety, we're all going to be turned into paperclips"

(2) This is because it has not yet been established that the Paperclip Maximiser AI is possible. There are a *ton* of Underpants Gnomes arguments about "AI is coming and it'll get out and take over the world and that's why we have to be sure we align it to human values*!", at least to my admittedly jaundiced view.

(3) Hence why one side relies on "all these previous examples of dire warnings never came true, therefore I am not going to worry about AI Doom" and the other side relies on "all these dismissals of new technology were wrong, and thus the dismissal of agentic AI is wrong".

(4) As I see the current implementation of AI, which is "replace human customer service with chatbots", "use AI to create clickbait articles" and "sign up for our seminar on how to implement AI in your business to sell people even more crap", I find it harder and harder to reconcile this with the fears/hopes that "once we crack AI, it will recursively self-improve to be super-duper god tier intelligence level and take over the world since it will be so much smarter than us mere humans, it will solve all the problems we can't solve due to not understanding the deep laws of the universe and will turn the world into utopia/dystopia".

*These just ever so coincidentally turn out to be the exact same as "21st century liberal San Franciscan values"; nobody ever seems to think we need to align AI to spread the Divine Mercy Devotion, for example.

https://en.wikipedia.org/wiki/Divine_Mercy_(Catholic_devotion)

Expand full comment

My guess about the psychology here is that people who make this sort of argument see the world as a place where things *just tend to work out*. The course of history follows a trend of general improvement; we have (self-evidently) not yet destroyed ourselves, despite people occasionally worrying about that; therefore they see concerns about the possible reversal of this trend - the possibility of a future being catastrophically bad - as disproved by the whole general history of civilization. The coffee thing is just a synecdoche for that whole history. And it's a good example for their purposes because it sounds particularly absurd (people thought *coffee* would lead to disaster???), and for them the whole idea of worrying about catastrophe is absurd.

From this perspective catastrophe is impossible. The psychological appeal of this attitude is pretty obvious, I think.

Expand full comment

... why can't "if no coffeepocalypse therefore AI is safe" simply be an obviously stupid argument?

Because a world-leading AI expert like Yann LeCun retweeted it?

I think what happened here is you have been trolled. It's super fun to troll the AI safety people.

Expand full comment

In my admittedly poor experience, people seldom debate; they just fight and play signalling games to make alliances and minimize the costs they must incur fighting. Therefore, blatant fallacies are a telltale that they’re communicating at the upper simulacrum levels (<https://www.lesswrong.com/tag/simulacrum-levels>). It can be translated as, “Just shut up and support my side, or else”. By trying to debate, you’re challenging them, and they respond accordingly.

Of course, given how much you’ve written about all sorts of rationality topics, and how much I’ve learned from you, I’m pretty sure you’re well aware of these facts, so there’s probably some signalling game going on here, too, that I fail to perceive as I clumsily reply to the object-level question.

Expand full comment

I think to steelman these arguments you need to interpret them as counterarguments to arguments from authority. "Yes, there are high status people who are worried about AI X-risk, but high status people are not always right, see coffee and nuclear fission."

Most AI X-risk arguments are not produced as arguments from authority, and they offer pretty detailed substantive arguments, so you might question why anyone would make this counterargument. The reason may be that even a very elaborate substantive argument is perceived as an argument from authority if one is too bored to read further than the title and the author's credentials. So it is not inconceivable that the vast majority of consumed AI X-risk arguments are consumed as arguments from authority.

Expand full comment

Why does the coffee argument work and the halibut argument doesn't?

1. Most of us don't put halibut in our bodies on a regular basis.

2. Coffee obviously contains a drug that impacts both mood and metabolism. Where there are obvious impacts, there may be hidden impacts. To summarize, coffee is an active substance in a way that halibut is not.

So, coffee is an analog for a powerful and possibly covert substance, while halibut is an analog for an inert substance.

The only exception is when men get slapped with a fish on Monty Python.

Expand full comment

"And plenty of prophecies about mass death events have come true (eg Black Plague, WWII, AIDS)"

WWII is definitely fitting given the very explicit arguments that another war would emerge without the League of Nations or some other coordinated peace effort. I feel less comfortable with the other two as analogies, but I don't know the full history leading up to them. I don't think it's possible to steelman the original argument especially well, but I think you're missing a distinction between mass death events with unsurprising causal agents (e.g. disease, war) and those with surprising ones (e.g. global cooling, AI).

Expand full comment

It's basically a calibration/bias argument. The same argument as one we regularly deploy against claims about some new miracle cure or religious miracle -- we have good reason to believe there is a bias here, and you should update more on the possibility that you are being affected by it.

Basically, the argument has the form:

1) We've seen that people seem to be particularly inclined to *think* that new technology poses a grave risk (or is all good but I'll just describe this side).

2) However, in fact people who make these arguments have repeatedly and regularly been wrong. Maybe not universally wrong but wrong far more than their assignments of probability should allow.

3) You are a person and we have every reason to believe all the considerations that caused those past judgements to be overconfident/poorly calibrated affect you so therefore you should do the same thing you would do if you found out that your bets on sports systematically assigned your favorite team an unduly high chance of victory and adjust downward.

--

And I think this argument has a great deal of force. I mean, why should calibration from past predictions you made be different in kind from calibration based on predictions made by other people?

Yes, maybe you are more different from those people than you are from your past self, so you should be somewhat less confident that the adjustment is beneficial, but still somewhat confident (and unless you have specific reason to believe you are less likely to be affected, lower confidence shouldn't affect the magnitude or direction of the best adjustment).

Expand full comment

Good reasoning. One question. You wrote: "People didn’t worry enough about global warming, OxyContin, al-Qaeda, growing international tension in the pre-WWI European system, etc, until after those things had already gotten out of control and hurt lots of people."

How has global warming hurt lots of people? I see how it could in the future. What is the evidence that it has already done so?

Expand full comment

The message is more to embolden than refute.

"Think AI Doomers are absurd? Well they have more of a chance of winning than you think, because of historically identifiable neurosis. So you should really care about defeating them."

As with most communication, it's necessary to give it the aesthetics of refutation.

Expand full comment

> physicist Ernest Rutherford declared nuclear chain reactions impossible less than twenty-four hours before Fermi successfully achieved a nuclear chain reaction.

It's not particularly useful evidence on its own, but it's an effective rebuttal to "Here is a domain expert who thinks that artificial super intelligence is nothing to worry about, therefore you all should chill".

Expand full comment
Apr 25·edited Apr 25

So other comments are already saying this, but I might as well throw in my 2 cents corroborating it: this Twitter thread doesn't read to me as about AI safety in terms of x-risk at all.

The point in your footnote seems relevant to me. The argument, such as I understand it, isn't that the French monarchy were wrong to be worried about revolutionaries. It's that the French monarchs were terrible and the coffee-fueled revolutionaries were right to be revolutionaries. The monarchy = AI safety, stuffy walled-garden elites who just want to preserve their power; OpenAI (or whoever) = the coffee producers fueling the revolution that will make a better world for all heedless of whether or not it supports the dominant elite. In other words, it's e/acc.

This argument makes no sense if it's applied to the Eliezer Yudkowsky form of AI safety, because a proper agentic superintelligence isn't like a coffee-fuelled revolutionary at all. But it makes a lot of sense applied to the forms of AI safety I am more likely to see farther away from that, which is much less about extinction and much more about unemployment. In particular I'm thinking of visual artists who *HATE* AI art with a furious, burning passion. This is a group I interact with every day, and their AI safety opinions have nothing to do with LessWrong or the associated diaspora. They're not afraid of the AI itself, they're afraid of other people using the AI to take their jobs. The coffeehouse analogy seems appropriate here, even if you don't agree with the conclusion: control of visual arts right now really is a kind of walled garden, and AI tools really do offer outsiders a way to be productive outside that system of control.

From the thread: "After a technology triumphs, the people who come later don't even realize the battle happened. To them it's not even technology anymore. It's a part of their life, just another thing. They just use it." This is cold comfort if you visualize the post-singularity world as a mass of undifferentiated gray goo/cloud of sexless hydrogen. I want my species to stay alive and it doesn't help knowing that the gray goo considers its life "normal"! But Daniel Jeffries clearly isn't visualizing this, considering his post-singularity future has human kids in it at all. He's visualizing a world which is much like ours, only with a powerful new technology (safely under the control of humans) and a new historical footnote about the failed attempt to prevent its spread.

What your response to the thread really makes me think is that AI x-risk cannot get away with de-emphasizing the "existential" aspect of it. If I were responding to this Twitter thread, I'd be saying: no, dude, this form of coffee isn't being used by other humans with some different ideas about who should run our institutions, it's being used by an *unfathomable greater-than-human intelligence that nobody understands or knows how to influence*. It's not a concern about what our grandkids will think, but whether they will have a chance to ever think anything at all.

Expand full comment

How much value does describing an argument as “stupid” add to this post?

Expand full comment

To put the point differently, suppose you, Scott Alexander, had predicted 50 times in the past that Bitcoin prices would plummet (somehow specified) in the next month with 90% confidence. In fact, in only 10% of those cases did you turn out to be correct.

Surely, the next time you make the same prediction it's reasonable for me to respond: but every time you make this prediction, it only turns out to be correct 10% of the time; therefore you should adjust your confidence down to 10%, based on the observed data. It wouldn't be very convincing for you to reply: how is the fact that I made incorrect estimates about these other things (what Bitcoin would do in 8/22, 9/22, ...) relevant to my estimate about this different thing (what Bitcoin will do in 5/24)?

So how is this situation different than that example?

Well the predictions are about less similar things than in this example. Fine, but they are still all about new technology causing problems so there is a common description that one could reasonably appeal to (just have to think that these judgements are likely to be similarly affected ... reasonable prior)

You didn't personally make them, some other person did? Again, this seems only a matter of degree. You might be more different from those people than from your past self, but that doesn't destroy the argument, it only makes it slightly weaker.
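A toy sketch of that adjustment, using the hypothetical numbers above; the "similarity" parameter is my own assumption, standing in for how comparable the new prediction is to the old ones:

```python
# Toy recalibration: 50 past predictions, each stated at 90% confidence,
# of which only 10% came true.
past_predictions = 50
stated_confidence = 0.90
observed_hits = 5                      # 10% of 50

empirical_rate = observed_hits / past_predictions   # 0.10

def recalibrate(stated, empirical, similarity=1.0):
    # Pull the stated confidence toward the empirical track record.
    # similarity=1.0 treats the new prediction as fully comparable to the
    # old ones; a smaller value is the "other people / only loosely similar
    # predictions" case, where the adjustment is weaker.
    return (1 - similarity) * stated + similarity * empirical

print(recalibrate(stated_confidence, empirical_rate))        # 0.10
print(recalibrate(stated_confidence, empirical_rate, 0.5))   # 0.50
```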

Expand full comment

(year 2500, humans are long extinct, the AI society is conflicted about a new potential risk) see, in the early 2000s there was a panic about AGI, but it turned out great!

Expand full comment

Most people are using the totally wrong category of reference classes to think about this altogether.

This article and the comments around it are sort of emblematic of this discussion being "when was the last time you made a screwdriver that ended the world? No? Okay stop worrying."

People broadly consider the A.I. to be an inanimate thing, a simple machine, that's just particularly clever or powerful, like a computer or atomic bomb.

The real intuition-pumping question should be "Do smarter agents conquer dumber ones?". And here the reference class is chock full of affirmative examples. Homo sapiens sapiens, smartest of the hominids, more or less extincted the Neanderthals and our other then-extant relatives. Humans, smartest of the animals, have conquered the entire world and quite definitively control the lives of the chimpanzees, orangutans, and bonobos, who differ from us in DNA by only a few percent. Advanced human cultures assimilate or eradicate less advanced ones. In a thought experiment, if we took the 10% most intelligent humans and gave them one half of the planet, gave the 10% least intelligent the other half, and eradicated the middle 80%, the smarter group would fully and definitively control the fate of the planet, the human race, and that of the dumber group. There's a popular shared intuition that if an alien race shows up and it's smarter than us, we should be scared (and we should be).

A.I. isn't a fancy screwdriver, it's an Alien on the horizon and we aren't sure if they are smarter than us or not. That drives the risk intuition.

The argument then has to become "is it even possible to construct an artificial agent that is smarter than humans". That may be debatable, but the heuristic not to worry about the coffee or the video games or the slate tablet doesn't apply.

Expand full comment

Scott, have you tried literally asking some of the coffeepocalypse people about this? That is, reaching out, pointing out why the argument seems extremely weak to you (and me), and asking why they find it compelling?

Expand full comment

No thoughts on the coffee analogy, but I do have a minor point to make about the bad things that did come true - both the forecasted and the unforecasted:

> And plenty of prophecies about mass death events have come true (eg Black Plague, WWII, AIDS).

> People didn’t worry enough about tobacco, and then it killed lots of people. People didn’t worry enough about lead in gasoline, and then it poisoned lots of children. People didn’t worry enough about global warming, OxyContin, al-Qaeda, growing international tension in the pre-WWI European system, etc, until after those things had already gotten out of control and hurt lots of people.

The Black Plague and AIDS are like past plagues. WWII is like past wars. Tobacco and lead are like past unintentional poisons. OxyContin abuse is like past opiate problems. al-Qaeda is like past terrorists. Pre-WWI tensions are like past pre-war tensions.

The only exceptional example is (human-influenced) global warming. Even there, only the cause is exceptional; a changing climate is a challenge humans have faced repeatedly.

I think this is part of why it's hard to shift low priors on existential risk from AI.

Expand full comment

The important feature of the coffeepocalypse discussion is not the argument. There doesn't need to be an argument (in the philosophical sense) for it to do its work. The argument is orthogonal to the work.

(TLDR: Scott is working at level 1; coffeepocalypse is at level 4.)

I'm about to say a bunch of unprovable stuff. To know this stuff, you have to have done some emotional work to discover it for yourself. It may seem wrong to many people. That's fine. Maybe it'll be a guidepost to somebody, of the kind that I needed once up on a time, but didn't find.

Much of politics operates via taboo. Arguments reproduce memetically, and as they spread, political actors attempt to fasten various taboos to them. Who is allowed to make such-and-such argument? What kind of person is allowed to draw which conclusions? What evidence is it permissible to consider in connection to some particular issue? These are questions that, philosophically speaking, are meaningless failure modes called ad hominem. In politics, however, the involvement of taboo with an argument is a powerful force.

I'll give one example, from far afield. It's not a great example, because it's a weak taboo -- but I can't give a better one. (It's taboo to call a strong taboo a taboo -- strong taboos are "just being a decent person" -- so I would be violating taboo by mentioning the strongest, most important ones here.) Here's the example: it's often asserted that men shouldn't opine on the issue of abortion. Philosophically, this is an ad hominem that any freshman could identify. (The gender of the person making an argument doesn't make the argument better or worse.) Politically, however, it's very relevant. Whether, where and how much this taboo is considered to be in effect is a political issue that people fight over.

If you want to see taboo in action in politics, look at tweets starting with the word "Imagine". These are almost all accusations of taboo-breaking. A tweet like that has a bunch of functions: it informs the reader of the existence of that taboo; it shames the target; it informs bystanders that they, too, can shame their neighbors for breaking the taboo. If the tweet falls flat (say, if the person tweeting gets ridiculed by lots and lots of people), it tells bystanders that this taboo is no longer in operation. Most of the time when taboos are implemented, it's much, much, much more subtle than these "Imagine" tweets. It happens in omissions, minor rudenesses, who gets left out of an event, in the structure of jokes, etc.

(Many taboos are good! I am not against taboos. We need them. Also, we should understand how they work.)

One of the most common (and subtle) ways taboo is used politically is to limit which ideas can be brought to bear on an issue. Because it takes work to search around and find all the relevant concepts involved in considering an issue, it's easy to taboo people into not doing that work.

Similarly, one of the features of moral panics is that the people promoting the panic use their social power to enforce a taboo against referring to the idea of moral panics in connection with their issue. When there's a real moral panic going on, you cannot call it a moral panic without consequences. If you're American, you can definitely think back and find examples of this. I'm not willing to break taboo publicly, though, so that's work you'll have to do yourself.

So, the coffeepocalypse discussion serves a bunch of purposes, just by publicly connecting the idea of moral panic to AI risk. We can gauge the strength of the moral panic (if one exists) by the response to the coffeepocalypse discussion. People who have been in a bubble where nobody ever connects the idea of moral panic to AI risk (if such bubbles exist) might realize that it's a relevant idea, and now they have a new tool to use in their thinking, even if they end up deciding that AI risk isn't a moral panic.

Also, now that the idea of moral panic has been spread around in connection with AI risk, it will be that much harder for a group that wants to start a moral panic about AI risk to enforce the standard taboo against thinking of the issue as a moral panic -- because lots of people have already had the taboo thought. It's like an inoculation.

In conclusion: just reminding people that moral panics exist is productive, because there's a whole avenue of political action in which powerful people use taboo to make people forget, ignore or downplay concepts that aren't favorable to them. If you're somebody who thinks AI risk is very real, even you don't want a moral panic about it, since they're so frequently counterproductive. Just keep on presenting your evidence that the danger is real, and try not to do stuff that makes you look like you're in a moral panic.

Expand full comment

I think a possible point of the coffee argument is something like, “The arguments you’re making that AI poses an existential threat sound as plausible to me as the arguments that coffee posed a serious threat to society back then or that global cooling or overpopulation were existential risks in their time. That is, most of these arguments depend on constructing a long chain of reasoning, a bold just-so story, of how something terrible will happen because of this newly discovered thing (or newly appreciated risk). We know very little about this new thing and have no precedent of it causing harm at all, yet you want us to cede control over it to the state or embrace Luddism to ameliorate the risk. If you tried hard enough, you could construct equally plausible (or implausible?) theories of how attempting to avoid the risk might itself result in catastrophic harm. Note that these supposed risks are vastly different from the risk of pandemics or wars causing harm. There is historical precedent for the destructiveness of those. Not so for AI, global cooling, or population growth.” Maybe that’s not what the coffee argument really means, but it is something I would like an answer to from proponents of AI risk.

Expand full comment

This version of the argument seems especially lazy. The worry about coffee wasn't that it would kill everyone (which it didn't), but that coffeehouses would be a venue for independent thought and would undermine the monarchy. If you look around the world today, we *don't* have that many monarchies left. So maybe the people were *right* to be worried about coffee.

But I think what these arguments *want* to be are like the ones where they point out old opinions about newspapers causing problems that sound eerily similar to modern complaints about cell phones. Like if your argument says "A, B and C therefore we should be worried about AI", but they point out that 500 years ago people said "A, B and C therefore we should be worried about coffee" and those fears turned out to be unfounded, it suggests that your argument is not as sound as you think it is.

I guess this still puts it in the existence-proof category, but instead of being an existence proof of "not all technologies kill everyone" (which is really weak, since you don't think that all or most technologies kill everyone, just that this particular one might), it's "not all technologies for which your arguments apply kill everyone" (which would be somewhat stronger, since you presumably think that your arguments about AI ought to be enough to imply that it is at least moderately likely that AI will kill everyone, and therefore a counter-example should substantially lower your confidence).

Expand full comment
Apr 25·edited Apr 25

"He pointed out that physicist Ernest Rutherford declared nuclear chain reactions impossible less than twenty-four hours before Fermi successfully achieved a nuclear chain reaction."

This is somewhat garbled.

1) Szilard theorized nuclear chain reaction shortly after reading Rutherford's "moonshine" declaration in the newspaper, but it took about 10 more years for Fermi to build Pile-1.

Also some mitigating (?) factors in favor of Rutherford:

2) The moonshine declaration was in the context of a recent experiment that had split the atom using protons. This is much harder than using neutrons, since protons are repelled by the nucleus. A nice and somewhat thematic way to finish this point would be to add "And he didn't consider using neutrons because the neutron hadn't been discovered yet". But in fact Rutherford's colleague Chadwick had discovered the neutron in 1932 (and was later awarded the Nobel prize for it), and Rutherford himself had theorized its existence beforehand; the source I see for the moonshine quote is 1933. I'm not sure why Rutherford didn't consider using neutrons.

3) Part of the reason he was proven wrong so quickly was that Szilard read what he said in the newspaper and decided to prove him wrong.

Expand full comment

I think it's like when you try to convince someone with a saying, like "Better safe than sorry." Of course this means nothing - sometimes a risk is worth taking and sometimes it's not. But if you've already decided that the risk is not worth taking, then you use the saying to illustrate your point.

Expand full comment

Regarding the footnote, one of my more minor unpopular opinions that I like to float from time to time is that widespread caffeine addiction is a very under-discussed problem that is probably actually causing a significant amount of harm to society.

Expand full comment

I think it's ultimately just a bad extrapolation of an argument that is, or at least can be, fairly compelling.

If a particular person, or group of people, "P," has repeatedly predicted that event X will happen by a specified date, only to have said date come and go without incident, it really does seem as if one can safely discount P's next prediction that X will happen. I mean, they've gotten it wrong a bunch of times before, why should anyone pay attention to them this time around? Stated abstractly, the principle is "We can safely disregard predictions by P about X."

To make the argument concrete, use "climate doomsayers" or "preachers predicting the date of the Rapture" as P with corresponding values of X.

The mistake is that instead of disregarding predictions by P about X, the coffeepocalypse argument seems to be saying "Because we can safely disregard predictions by P about X, we can also safely disregard predictions by anyone about Y."

The reason it can seem superficially plausible is that it's a misapplication of a useful principle. I think you skipped over that, hence your confusion.

The coffeepocalypse argument is still bogus, but I think I understand how/why the mistake happens.

Expand full comment

Dunno, I'm not so worried about autonomous AIs destroying humans, but rather about some humans figuring out how to use AI to take over the world. Something like the "western steppe herder" culture, which about 5,000 years ago expanded across Eurasia, taking 9/10 of the preexisting male population out of the gene pool.

Expand full comment

My best guess is that the logic is not actually "people were worried and didn't need to be, therefore they don't need to be worried about this." I think it's more an affirmative belief in humanity's collective ability to react and adapt to problems, which generalizes more to future problems than just remembering moral panics.

Expand full comment

I think the steelman is that these aren't formal arguments at all. In the best light, they are invitations to consider a situation through a lens you might not have yet--heuristic activation, as you say, the kind of reasoning you only really need to be exposed to once. Suggestions that the situations /could/ be analogous, not that they definitely are, which might cause someone to nudge their credence away from the zeitgeist if they have not yet considered the possibility and don't want to drill into the particulars at all.

That said, I've seen this argument made in other contexts (relentlessly, on the harms of smartphones/social media) by people who absolutely do want to drill into the particulars and convince you that they are correct and this is another 'moral panic' cycle. (Aside: don't they have to be grounded in fear of moral violation to be moral panics? Aren't they just...panics, otherwise?)

Expand full comment

I suspect the footnote is the best part of this post. Reminds me of Neal Stephenson's Baroque books.

Expand full comment

Perhaps they think that AI is obviously a good thing (you can easily think of its benefits), in the same way coffee is obviously a good thing (you can easily think of its benefits), and they are not concerned about something obviously good going bad. The same doesn't apply to AIDS, climate change, Al-Qaeda, etc., that most people would not have thought as obviously good things in the first place.

Expand full comment

The argument is really "They said it before and it didn't happen", rather than, "It's been said before and it didn't happen." It's an argument against the argument from the authority of elite public opinion. For the common people, the argument for X from elite public opinion is pretty good. "They" say chemotherapy delays death from cancer. "They" say Man really landed on the Moon. But even for a low-IQ person, the counterargument, "But they've always been wrong when they're forecasting doom" is a good one. It can be elaborated as "They've always been wrong when they're talking about technological doom and they make money on the supposed solution."

I'm an example of this myself with AI, though in a modified way. I'm high-IQ. But I don't know much about AI, and even less about AI Doom. I won't take up that difficult topic just because I hear some smart people are worried. Smart people are too often foolish, e.g. Y2K destruction of the power plants unless we pay millions to programmers, global frying unless we pay millions to climate scientists.

Expand full comment

I wonder if it's something like this:

Yesterday, I had a minor panic episode. During the episode, I remember thinking, "I feel so stressed out right now--that *must* mean something in my life is horribly wrong, or else why would I feel this way?" I was taking my belief itself as evidence for my belief.

I think we sometimes do this collectively too. We have a moral panic about something and we tacitly think, "Well, if it wasn't a big deal, we wouldn't be having a moral panic." We take the collective belief as its own evidence. The legitimate use case of a coffeepocalypse argument is to remind us that the prevalence of a belief by itself can't be all our evidence, because sometimes folks are just dead wrong.

But, of course, people use it much less carefully than this legitimate use case.

Expand full comment

Here's a proposition for you to consider. Everybody making these arguments is actually failing to articulate their real position. What they actually believe goes something like:

"AI doomerism looks an awful lot like these other sorts of doomerism and, sure, I've heard some compelling stuff about how frightening AGI might be, but these arguments are mostly inscrutable in a way that's commensurate with my goofy doomerism heuristics. Therefore I'll be putting AI doomerism in that basket for now."

They're effectively trying to tell you that, from their perspective, AI doomerism looks more like [Y2K panic] than [Donald Trump getting elected panic]. Most people aren't able to articulate their broad heuristics with any degree of precision, so I doubt you'd get anything coherent out of pushing them on this.

Expand full comment

I think the best analogy is with conspiracy theories. If someone comes up to me and says that China is secretly controlling the world with radio disturbances based out of Antarctica, how much time should I spend digging into this claim versus dismissing it out of hand? The relevant reference class may be how often "wacky things people have said online", such as coffee apocalypses, are worth investing time in seriously considering. Pointing out that people make a lot of weird-sounding claims on the Internet that turn out to be wrong is relevant. This isn't an argument that's going to be convincing to someone who has looked into these arguments, but I think the steelman here is that the person is deciding whether you're a crank they can ignore.

Expand full comment

The irony about the coffeepocalypse thread is that the fear mongers were right. Coffee houses did help brew revolution: revolutions which have since overthrown the political power of nearly every monarchy on the planet!

Expand full comment

I think the coffee thing is a story telling example which is being used to imply a base rate that most predictions of doom are false. You got very close to that here 'to gesture at a principle of “most technologies don’t go wrong”?' and then got distracted by talking about how the argument has also been used for things which aren't technologies. This may be true, but: 1/ if the base rate is predictions of doom rather than technologies going wrong it doesn't matter. 2/ They are talking about technologies here so it doesn't matter where else it has been applied, just whether it holds here.

Expand full comment

The coffee argument reminds me of why Twitter's "For You" feed is so annoying to use: a large number of viral tweets make a similar annoyingly false statement that falls apart if you think about it for a second. Some accounts are genuinely dumb; some (most?) are doing it on purpose to generate clicks.

The Notes feature kind of helps but sadly your account doesn’t suffer significant penalties for posting clickbait BS so people keep doing it.

Expand full comment

I think the footnote ("coffee really was dangerous") makes clear that you've slightly misunderstood what that thread is going for. He's not arguing that sometimes things aren't dangerous! He's arguing that sometimes things are inevitable, and we struggle against them but they just kind of happen and eventually we accept the new world. He says this explicitly mid-thread.

This is still a coffeepocalypse argument, broadly speaking, it's just a different coffeepocalypse argument. It's a single example of people worrying about a new thing and then *getting over it* (as distinct from "admitting they were wrong").

In terms of Aaronson's "Five Worlds of AI," the thread is grappling with the question of Futurama vs AI-Dystopia, while dismissing Paperclipalypse out of hand.

Expand full comment

I'm 63. The societal fear of something that will end the world has been pretty constant all my life. The expected cause keeps changing, but the fear/anticipation has been constant. Lately it's been global warming. Before that overpopulation, nuclear war, etc.

So I've grown to think that this is some kind of cognitive bias humanity as a collective has. AFAIK, all societies on average expect the world to end soon. Christians have believed it for 2000 years, and still keep the faith. Maybe it's some kind of projection of the fact of our individual mortality, IDK.

So now that the AI-pocalypse starts to fill this need among a section of the population, my first reaction is "oh, the end of the world cause is shifting again".

All that said, I of course agree that this time *could* be different and the broken clock might be right this time.

Expand full comment

(I'm writing this before looking at any of the comments, because I want to get across what I think *I'm* doing with this sort of argument and not be primed or influenced by what other people think *they're* doing with same).

This is very interesting, because I make this kind of argument constantly, and I'm surprised you don't understand its purpose. People really do think in very different ways.

You seem to think the argument is "people were worried about x and x didn't happen, therefore nothing people are worried about ever happens". When it's actually "people had a certain kind of argument for worrying about x, x didn't happen, you're using the same kind of argument for worrying about AI, therefore your worries about AI can be ignored."

It's an argument that there must be a hidden flaw in this kind of argument (even if we can't see it) because it's failed before.

Analogy: I say my electoral model says Trump will win this year's election. You point out that my model also said he'd win in 2020. Should I paraphrase your reply, like this post, as "I can think of one time Trump didn't win, therefore Trump never wins"? No, I shouldn't. What you're saying is not "therefore Trump won't win", it's "therefore your model is demonstrably flawed and should be ignored".

If people were saying in the 1950s that nuclear weapons, if not banned, would destroy us (based on facts that follow from the demonstrable power of the weapons and the nature of technological development), and we weren't destroyed...then anyone saying now that AI, if not banned, will destroy us (based on facts that follow from the demonstrable power of AI and the nature of technological development) should be ignored. It's not enough to simply explain their reasons for thinking AI will destroy us, even if they look airtight (since the nuclear reasons may have looked airtight). It's not even enough to give reasons why AI is completely different from nuclear power, because people in the 50s could have given reasons why nuclear power was different from every other past technology. It's necessary to give reasons why the *fundamental structure of the argument* being made now is different from the one being made in the 50s. (For example: this time there's a mathematical a priori argument, not just an evidential one, or something similar). Replace the nuclear example with any other you like.

Now none of this applies if you're making a probabilistic argument like "60% of the times circumstances x have occurred there has been a catastrophe, and circumstances x are now occurring". But like you said, that's almost impossible to count up, and it's usually not the AI-risk argument. The argument is usually "for these plausible-sounding reasons the prima facie assumption should be that AI is a threat." And once it's pointed out that claimed threats with similar plausible-sounding reasons have not ended up being threats, and thus the reasons are not so plausible after all (even if we can't see why), the burden is on *you* to either explain how the structure of the argument is fundamentally different, or to rework it as a probabilistic one. Until you do that, the prima facie assumption for worrying about AI stands refuted.

Expand full comment

I think it's just an attempt to signal leadership and competence. Some bad economic event happens, and there's one guy running around saying "this is going to destroy the company!" Another guy is like "nah man, something like this happened in '87, it didn't destroy us then, it won't now". That's not a good argument to make, the other thing happened 40 years ago, a lot has changed. But that second guy is implicitly saying "I know about things like this, keep cool, we'll get through it, and obviously somebody like me who has this attitude should be the leader of that response rather than the crazy panic guy over there."

Reacting dramatically to a possible danger makes you look childish, and dismissing concerns with an appeal to past experience makes you look like the experienced adult leader. I don't think it's a claim about real probabilities.

Expand full comment
Apr 25·edited Apr 25

I think it's a side effect of stronger resistance to fear (and guilt) arguments, which have been the de facto standard way lobbying groups G (government or other) have attempted bulk influence in Western democracies since WWII.

The pattern is "you should do Y to avoid the catastrophic X outcome", or "not doing Y cause many to suffer X". In then end, doing Y does not seem to change X much, but profit G while making the life of those doing Y harder.

This pattern is so constant that public resistance to it increases, leading to more and more extreme versions of the pattern and more and more resistance. I see it like an immune system response to fear and guilt propaganda.

In the case of AI, maybe the immune response is unfortunate: the fact that those who should do Y are not the general public but very profitable megacorps makes it more likely to be a genuine risk rather than yet more propaganda... But on the other hand, the megacorps are quite immune to fear and guilt. So expect research to continue as usual anyway :-).

Expand full comment

Okay here's my steelman of the argument. Suppose we're arguing about whether resources are going to run out. Someone points out "here are all these cases of people boldly predicting that resources would run out and them not doing it." That would seem a perfectly kosher argument.

Now, in the case of AI, the inductive trend is a bit different. They're arguing something like "it's easy to argue that some future thing is dangerous and likely to majorly disrupt the world, and to find smart people making long arguments for it, but it usually isn't." Pointing to one case doesn't show that, but most predictions that the future will be significantly worse have been incorrect.

Expand full comment

I think you are missing the heuristics that people are using to make these arguments.

When they point to the coffeepocalypse-type doomers being wrong a lot, they have a few heuristics that make AI risk also seem like something they don’t need to worry about:

1. Predictions are really, really hard. If someone is worried about something far in the future, they are almost always wrong. If you look at what people 100 years ago were worried about for today, 99.9% of it would be completely irrelevant. This doesn’t mean you can completely ignore them, but you should downgrade your concerns.

2. It’s very hard to predict human innovation. If the prediction is actually right and creates a big problem, that leads to a flood of human capital trying to solve the problem. This has solved countless problems in the past, and while it is impossible to know ahead of time what the solution will be, it’s likely someone will find one. This isn’t an argument to stop investing in AI; it actually argues for the opposite. However, it’s a heuristic for why the typical person isn’t worried.

3. Even if the prediction is right, there are also timing risks to consider. Even if someone had rightfully predicted 50 years ago that AI safety would be a big problem, there was very little anyone could have done to lower the risk at that point. There’s a good chance people are worrying about this too early, which makes it even less likely that it’s something to worry about right now.

Expand full comment

I think the coffee thread would be more fairly summarized as:

AI is like coffee and other past technologies because it is currently facing opposition from powerful institutions ("media") who feel threatened by its potential to disrupt existing power structures and industries. And since the controversy is "really about power," it will play out the same way coffee did.

That's not really an argument. It's an assertion.

Expand full comment

Russell's example makes sense to me because it sounds like he's arguing for "Clarke's First Law" which is pretty well known

> When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

I'm still not sure what the coffeepocalypse version is trying to argue for, though

Expand full comment
Apr 25·edited Apr 25

Will there be another book review contest in 2025?

Or will GPT-5 have made book reviewers and human writing obsolete?

Expand full comment

I think global warming is the strong version of the coffeepocalypse.

My understanding is that the current effects of warming may be net positive. A Lancet study reported that cold stress deaths have decreased twice as fast as heat stress deaths have increased over the last decade or so.

Not only may we be wrong that warming represents the general violation of our Edenic past with all weather getting worse all at once, but the assumption that this is so means we are utterly blind to the costs

It is very possible that corn ethanol subsidies have singlehandedly outweighed all warming costs through starvation in food-stressed countries, yet there is so little interest in the possibility that, when I attempted to estimate the potential number of starvation deaths, I found essentially nothing in the literature since about 2010 that even considers it.

AI may be dangerous, but the assumption that it is probably infinitely dangerous is inherently dangerous itself. It may be right but if we allow the concern to go unexamined then we will experience some kind of corn ethanol subsidy cost times ten without thinking carefully about it. I don't think this invalidates AI risk mitigation but it shows that we need to not only estimate the necessary steps to hedge against AI risk, but also the likelihood that they will have a net expected benefit

A military strike destroying the capability to produce AI enabling GPUs may be necessary to credibly frustrate dangerous AGI - but does it actually have a chance of success commensurate with the guaranteed cost? Should we spend infinite money to prevent 10% more warming because of tail risks or is that actually a poor choice at some margin because we may also be preventing the tail benefit of delaying the next ice age and we just aren't thinking about it?

Expand full comment

I think what might be going on here is that reporters reported on a real thing and then industry people responded by doing something and then the apocalypse didn't happen and reporters never reported on that part. So people assume that the problem wasn't real, not that someone fixed it.

See: https://www.vox.com/future-perfect/24121450/honeybee-population-extinct-prediction

Expand full comment

They aren't implicitly making an argument. They're implicitly sharing a personal narrative.

When I was a kid, elites told me that I should be really worried about overpopulation and role-playing games. And I trusted them, and worried about those things. Then I found out that neither of these things were problems at all. When I was older, elites told me I should be really worried about violent videogames and assault rifles. But it turns out lots of people play videogames and are just fine, and most crimes are committed with handguns, not assault rifles. Then, most recently, I heard this crazy idea from elites that I should be worried about coffee. But this time at least I knew better and ignored it. And sure enough coffee isn't a problem. So when you tell me that elites are worried about AI, I don't buy it. Elites are always saying we should be worried about something but I've learned to ignore them.

The critical thing is that the teller of the narrative is not really distinguishing between different camps of elites; but this is not something so easily done.

Expand full comment

> I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right?

How about limiting it to examples of successful predictions of catastrophe-inducing events, civilization-ending events, human extinction events, or eschatological events? The history of experts predicting these events has been spotty at best. ;-) In my lifetime, I've lived through at least half a dozen EOTWAWKI (End Of The World As We Know It) events that scientific* experts predicted. And I've lived through three or four religiously predicted eschatological events (that I know of)...

..."And I feel fine," to quote the popular song.

If you were to say that AI will have unintended consequences, some of them negative, I'd be more inclined to take you seriously. Heck, I can already see it's distorting our knowledge base with "hallucinated" data. But predicting that coffee, horseless carriages, flying machines, and Lindy Hopping will have negative social consequences is not on the same order as predicting an EOTWAWKI-level event.

*scientific in the rationalist sense of the word — as opposed to religiously motivated doomers.

Expand full comment

I think Scott is seriously missing the point of that thread and then the straw-manned version can't be steel-manned again.

The actual content is not "once people were worried about coffee, but now we know coffee is safe. Therefore AI will also be safe." Rather it is something like "Once people tried to heavily regulate coffee under various pretexts (and there were some true believers in those pretexts). But really the powerful people who made that into a political question rather than some cult's weird food taboo wanted to regulate it because social change threatened their position. In the long run it didn't work and couldn't have but it hurt the places that tried it. Same with AI now".

The guy doesn't try to engage the true believers on either AI or coffee doomerism, because he takes neither group seriously, but that doesn't round off to any of Scott's versions of badly engaging the true believers.

Expand full comment

I use the "here's a specific example of something not happening" argument as an attempt to call to mind what I believe to be a general pattern. I do try to make it clear that I'm making the argument that there is in fact a pattern, e.g. "Remember how in the 50s, leading AI researchers thought that machine learning could be fully solved by a few people in a few months? And 50 years later, thousands of engineers working together finally made a modicum of progress? Yeah I think it's going to go like that. The rate of progress on brand new technologies is always wildly overestimated, often by factors or hundreds or thousands, and I don't see why this would be different."

The first half of that argument is the specific example, and the second half is the attempt to generalize. I also would not be convinced solely by e.g. providing specific counter examples such as the progress on nuclear chain reactions, but I would potentially be convinced by a specific counter example, plus a reason to believe that the provided counter example is more analogous to AI than the "machine learning will be done in a few months" example.

Expand full comment

My attempted steel man:

1. Most people can't evaluate the object level arguments for AGI risk. Even attempted brief explanations tend to assume a ton of background reading, and more general math than the general public actually knows.

2. Doomers usually address this by citing experts. Lots of top people in AI have expressed concern, and you don't need a lot of personal understanding to tell that the guys who literally wrote the most popular college textbook on AI know a lot about AI.

3. But such experts don't have a great track record. A real evaluation would be hard (but maybe ACX worthy) so here's some anecdotes.

4. With neither object level arguments we can understand nor experts we can trust, we're left with our priors: most things don't destroy the world.

Expand full comment

No one will ever be able to prove the negative, that an AI doom scenario definitely will not happen. The onus remains on the AI doomers to demonstrate their scenarios represent material threats.

Expand full comment

IMO, the important part of the linked thread is the tweets about *why* various groups opposed coffee houses, based on political incentives. (eg "Kings and Queens saw them as breeding grounds for revolution")

Many people use a sort of political reasoning about everything - and by that, I don't just mean about red tribe vs. blue tribe, I mean that instead of reasoning about evidence for or against a position, they think it's most important to think and reason about *who* is on each side of an issue, and what their underlying political, economic, or psychological motivations for that position might be. The implication is "here's *why* the tribe arguing AI risk to you isn't trustworthy, because their views of the issue are corrupted in the same way as the coffee skeptics"

This lens is even more obvious when people make the arguments that "AI doomers put intelligence on a pedestal because they are nerds, and fundamentally *want* intelligence to be so important", or "AI doomers are scaring people to hype up their technology, or trying to bring about regulatory capture". Some of these arguments make more sense than others, but they are all fundamentally arguing about who to *trust*, not arguing the merits of the issue itself.

This can be maddening if you want to debate the substance of the actual issue, but I would say that for lots of situations, it's a workable way of reasoning about the world. If you're deciding whether or not to take a new drug, or put your retirement savings in a new investment vehicle being pitched to you...some people with the right educational background and time on their hands can dig into the issue fully and come to a rational conclusion based on the facts, but most people should and do start with "who do I trust about this".

Expand full comment

Scott, sometimes you engage in strawmanning on stilts, all the while purporting that you are trying to steelman. I don't think you do this deliberately. I think you may be sincerely trying to steelman. But boy are you really bad at it when you don't sympathize with the argument in question.

The meta-argument here is: Humans have a powerful tendency to catastrophize, not generically, but about observable material trends, among which technological change is a prominent but not exclusive example. This tendency shapes the interpretation of evidence and results in motivated reasoning, especially the motivated construction of rational-sounding models to support the story. Not every concern about observable conditions or trends fits this pattern, but some do. When one observes an intellectual movement that matches this pattern strongly, it is reasonable to take that matching into account in the critical evaluation of its arguments.

The object-level argument would then be: AI Safety pattern-matches this tendency in ways X, Y, and Z.

Whether or not such an object-level argument is valid would need to be worked out on its details. But the attempt to dismiss the consideration at the meta-level is not at all convincing.

Expand full comment

If you want to understand the argument, you could do worse than start with the mission statement of the Pessimists Archive, which collects evidence of past moral panics: "Pessimists Archive is a project to jog our collective memories about the hysteria, technophobia and moral panic that often greets new technologies, ideas and trends." https://pessimistsarchive.org/

Expand full comment

Did you really not just ask the guy making the coffeepocalypse arguments what he meant?

Expand full comment

Coffeepocalypse is like stereotyping or triggers. If you're taught to fear black people, your gut impression of the odds a particular black stranger is hostile will overstate the evidence. Almost all humans instinctively fear apocalypse from transformative new things, so you have to discount their fear a certain amount to estimate the true weight of evidence they've seen that a transformative new thing will be apocalyptic. It's not that there's no evidence, but their emotion is a bad guide to the evidence they've seen.

Expand full comment

What was early realization of "humans creating potentially catastrophic climate change" in the epistemic environment like? Does it feel similar to the slowly dawning AI X-risk? I suppose the major difference is most normies are inherently vaguely fearful of "AI" whereas they weren't of climate change.

Expand full comment

I respectfully disagree with your steel man arguments. My steel man version of the coffee thread:

1) New technologies regularly face backlashes, attacks, and opposition by authority figures and competing interests

2) Improved intelligence is simply too important to be stopped, therefore it won’t be stopped.

I do agree that the author just assumes in the end that it will all be OK. This is a really bad assumption. It is possible that AI will be overwhelmingly positive, that it will be extremely influential but in both negative and positive ways (mixed), that it will not be influential at all, or that it will be catastrophic. That said, a lot of the backlash will be by partisans, competing interests and threatened authorities. And it is inevitable and it will not be stopped. It will at best be partially managed, with the potential that it is mismanaged just as likely as managed. A better example here would be nuclear energy rather than coffee.

Expand full comment

I am yet again begging that someone who has more free time (or research assistants/interns) than me make a dataset of supposed tech panics, and include variables like "did the people panicking about X include people who were creating X?" and "were the claims in some way accurate (e.g., did coffeehouses threaten monarchies)?"

https://forum.effectivealtruism.org/posts/6ercwC6JAPFTdKsy6/please-someone-make-a-dataset-of-supposed-cases-of-tech

Expand full comment

I think you wrote about this once, but some people are incentivized to get engagement on what they write, not make good arguments. Coffee is a common thing lots of people are familiar with, it seems extremely boring, so it invokes an easy emotional response to say "people once worried about coffee." So they don't even ask questions like the ones you do here--once they've found something that might make a good article/tweet, they just run with it, because it doesn't matter. Probably also combined with standard confirmation bias + signaling tribal allegiance/outgroup disagreement.

(I think it's also the kind of thing that appeals to certain people, specifically nerds, who are likely to be interested in the history of random things and also to have an opinion on AI risk... I wonder if this was the result of someone reading about the history of coffee and then loosely tying it to AI risk for engagement).

Expand full comment

Going off at a tangent, I had a vague memory of something to do with coffee and the Church.

Yes, turns out there is an urban legend that coffee, when first introduced into Europe, was regarded as a heathen drink of the infidel and some in the Church wanted it banned, until a pope (allegedly Clement VIII) tried it and liked it:

https://en.wikipedia.org/wiki/Pope_Clement_VIII#Coffee

"Coffee aficionados often claim that the spread of its popularity among Catholics is due to Pope Clement VIII's influence. According to the legend, there was opposition to coffee as "Satan's drink", leading to the pope's advisers asking him to denounce the beverage. However, upon tasting coffee, Pope Clement VIII declared: "Why, this Satan's drink is so delicious that it would be a pity to let the infidels have exclusive use of it. Clement allegedly blessed the bean because it appeared better for the people than alcoholic beverages. The year often cited is 1600. It is not clear whether this is a true story, but it may have been found amusing at the time."

First coffee, then cocaine? I think stimulant enjoyers have much to thank the Church for!

"Vin Mariani was a coca wine and patent medicine created in the 1860s by Angelo Mariani, a French chemist from the island of Corsica. ...Between 1863 and 1868 Mariani started marketing a coca wine called Vin Tonique Mariani (à la Coca du Pérou) which was made from Bordeaux wine and coca leaves.

The ethanol in the wine acted as a solvent and extracted the cocaine from the coca leaves. It originally contained 6 mg of cocaine per fluid ounce of wine (211.2 mg/L), but Vin Mariani that was to be exported contained 7.2 mg per ounce (253.4 mg/L), in order to compete with the higher cocaine content of similar drinks in the United States.

...Pope Leo XIII and later Pope Pius X were both Vin Mariani drinkers. Pope Leo appeared on a poster endorsing the wine and awarded a Vatican gold medal to Mariani for creating it."

https://en.wikipedia.org/wiki/Vin_Mariani#/media/File:Mariani_pope.jpg

Expand full comment

>And plenty of prophecies about mass death events have come true (eg Black Plague, WWII, AIDS).

I know Chesterton predicted after WWI that the peace with Germany would be short-lived, but who prophesied the Black Plague or AIDS?

Expand full comment

Which examples or analogies seem compelling appears to depend largely on what reference class people (typically unconsciously, intuitively) place AI risk in.

The disagreements resulting from this difference are particularly strong and noticeable in the case of AI risk (and for the record, as other commenters have suggested, to me the best reference class seems to be, roughly, "what happens to an ecosystem when a more intelligent species appears in it", which, plus a separate argument for why this is likely to happen within a timeframe we care about, is cause for concern), but I suspect that the problem of finding a good reference class may also be behind many other sources of confusion and/or disagreement.

Here I find myself unsure of how to check whether one's choice of reference class is good, or what sort of things to do to improve it. Presumably someone has written about it at some point, so if anyone has any relevant links I'd appreciate it.

Expand full comment

I think for coffee it is an appeal to scale / shock factor that is supposed to be convincing. Look how even something as huge as one of the most popular drinks in the world (coffee, drunk by over a billion people every day) was once thought to be dangerous. It's trying to suggest that even though the potential impact of AI is massive, we should be skeptical of the fear, with the history of coffee as a similarly large-scale but pointless panic.

I don't agree with this logic for similar reasons as you've described but that's my take on what the thinking is of people who make it.

Expand full comment

I think there’s general skepticism against certainty of either total doom or total utopia. There’s an association between pundits and general self-promotion with these sorts of predictions. “Market collapse”, “super volcanos”, etc. It seems to me most of these people just want something for themselves, like status, instead of actually trying to save humanity. I don’t trust them.

I believe in AI risk because I’m a student of the topic. But I’m generally skeptical of doom predictions. I can see others doing the same if they don’t know enough about AI.

Expand full comment
Apr 25·edited Apr 25

== Overpopulation / global cooling as proxy for AI Safety: ==

The implication is: “these are obviously as relevantly similar as two things can be”. People making this claim experience a strong intuition that the implication is true, and are surprised that you don't. They're also dismayed that you ask them to go the considerable length of justifying, or even explicating this.

== Coffee: ==

Attention grab, nothing more; unlike the other more serious, intellectually honest (if incompetent) claims.

== Nuclear reactions: ==

This goes to a specific aspect that people might actually be ~100% sure about: a lower bound on how long before something big happens it can no longer seem impossible. For nuclear reactions, people might think a year, a month, a week. You say it's less than a day, and you make them think maybe they should also update this lower bound for AI Safety.

== Russian tanks: ==

At least in this case, there's also the “good optimizer” assumption: if Russia has the resources to make its tanks that much better, it would be stupid not to put some resources into being OK in most of the other 99 aspects. Doesn't work for AI Safety though.

Expand full comment

I think the implicit argument of the coffeepocalypse thing is that people in general tend to overestimate the probability of disaster, and that therefore if people are predicting, say, a 10% possibility of disaster, that should be taken as evidence that the actual likelihood is much lower than 10%, more like 2 or 3 percent.

Expand full comment

In most of his writings that I've found, he focuses less on the claims of existential risk and more on the smaller cultural battleground of AI harming education, mental health, sundering the social fabric, etc.

In a substack essay he wrote with a similar tone, he lists various technologies that "people thought were going to devastate society or cause mass destruction":

> Television was going to rot our brains. Bicycles would wreck local economies and destroy women's morals. Teddy bears were going to smash young girl's maternal instincts. During the insane Satanic Panic in the 1980s which blamed Dungeons and Dragons and other influences like Heavy Metal music for imaginary murderous Satanic cults rampaging across the USA, people actually went to jail for abuse of children that did not happen. (https://danieljeffries.substack.com/p/effectively-altruistic-techno-panic)

And then he throws in in the supposition that existential risks are a weird fantasy in the same vein.

I think the issues with his line of argument tend to boil down to false equivalency:

- Most of his examples don't have anything to do with existential risk, but he uses these examples to bolster an argument which ends up dismissing existential risk (e.g. I don't think anyone postulated that the invention of bicycles would cause "mass destruction" in the way we understand it, so it seems weird to lump it in the argument)

- It's only in the past century we have the technology to actually wipe out the world (Genghis Khan-tier military tactics are small-scale compared to atomic bombs) so older panics tend to have not had as much existential gravity

He also seems to reflexively dismiss caution toward hypothetical scenarios that we cannot currently fathom. That is, if they did happen, hypothetical problems could probably be fixed by hypothetical solutions, so we shouldn't be scared by the hypothetical problems.

Expand full comment

You're misunderstanding the target of these arguments. They're not trying to discredit the AI risk position as a platonic rhetorical object -- they're trying to discredit *the people holding that position*. Essentially, their belief is that anti-AI is a position that has been reached not through rational evaluation of facts, but by fear of the unknown. The argument is then:

1) We now understand that there's no reason to fear coffee (or whatever).

2) However, some people at the time were afraid of coffee, and here are the arguments they used against it.

3) Since these arguments turned out to be laughably false, the people making them couldn't have been acting rationally; they must have been poisoned by fear or some other emotion that got in the way of clear thinking. Maybe they were even being purposely dishonest out of self-interest.

4) You should notice the structure -- not just the substance -- of these badly-motivated arguments so that you're better able to recognize them when you don't have the benefit of hindsight.

5) Anti-AI crusaders use the same argumentative structure as anti-coffee crusaders.

6) Therefore, anti-AI crusaders are probably motivated by fear of the unknown or material self-interest, just like the coffee people we laugh at today.

7) Therefore, we should treat them the same way we treat other ill-motivated people like partisans and cranks -- by not even giving their arguments the time of day.

You're susceptible to your own version of this for the same reason everyone is susceptible to ad hominem: you hate the target and want an easy way to mock and dismiss them without thinking too hard about what they're saying. Ad hominem will always be the most rhetorically effective fallacy.

Expand full comment

Russell's argument works because it's implicitly much more specific—it's focusing on "this seems sci-fi"-incredulity. A stronger, more explicit form of it would be either the rock with "nothing interesting happens, ever", or Yudkowsky's examples of people calling the idea of powered flight ridiculous (because it seemed "sci-fi").

Expand full comment

Reminds me of a volcano story on wikipedia: "Captain Marina Leboffe's barque Orsolina left the harbor with only half of his cargo of sugar loaded, despite shippers' protests and under threat of arrest. Leboffe, a native Neapolitan, reportedly told the port authorities, "I know nothing about Mt. Pelée, but if Vesuvius were looking the way your volcano looks this morning, I'd get out of Naples!" (https://en.wikipedia.org/wiki/1902_eruption_of_Mount_Pel%C3%A9e) All people in town except for three died in the eruption.

Expand full comment

I think what your opponents are trying to do is point out the similarities between the false apocalypse predictions and the AI apocalypse predictions. Your job is to point out the similarities between the true apocalypse predictions and the AI apocalypse predictions. Also, the black plague, in the grand scheme of things, was more of a necessary thing than a bad thing (with apologies to the periwig maker).

AI will be good for mankind but bad for psychiatrists. Sort of like how the black plague was good for the Enlightenment but bad for the Church. Your opposition to AI is a manifestation of your innate survival instincts. You are a much better writer than psychiatrist, so you should just embrace the change. If you think about it, you can help far more people with your writing than you can with your psychiatry practice. Even if AI does kill off a lot of people, it will probably do so in a very humane way (like persuading them to embrace hedonism in lieu of reproduction, something many western governments already do).

Expand full comment

I agree that the coffee/global cooling/whatever arguments aren't very good, but I also think you are sort of fundamentally misunderstanding the work they are doing. You are treating these as arguments by which a person reasoned their way to a conclusion, using illogic. But that's not what people are doing at all. Rather, they have already arrived at a conclusion and they are using analogies to undermine the credibility of opposing viewpoints.

In other words, you are treating the argument as this: "People once incorrectly thought that overpopulation is a problem. Therefore AI is safe."

What people are actually saying is this: "Concerns about the dangers of AI are overblown for a bunch of reasons that I am taking as given. People, including experts, have often in the past incorrectly freaked out about new technologies, and you should be wary of such claims."

I am not surprised that you find this form of argument frustrating, because it doesn't actually engage with the substance of AI safety concerns. But it explicitly isn't designed to. These are statements by people who otherwise aren't convinced about the danger of AI, warning other people not to be seduced by credible-sounding arguments made by people with highly esoteric knowledge.

As others have pointed out, it very much is a sort of anti-expertise argument, but not in the mold of "experts are an evil/degenerate cabal trying to fool you." Rather it's more, "being an expert doesn't automatically make you right, especially in emerging domains of knowledge, as these examples show." And this is actually true! I get why it's annoying to have this point constantly waved in your face by people who aren't actually saying where they disagree with your arguments, but that is what is going on.

Expand full comment

Scott, sometimes the other side is just stupid and wrong, and you need to squish them instead of persuading them, because they have a vested financial interest in NOT being persuaded.

It would be nice if maybe YOU could update YOUR priors along the lines of "85% of the population mean well and can be persuaded by reason and logic, but 15% of the population is just evil, stupid, or insane."

Expand full comment

You're right. You should perhaps be more frank in your view that the Coffeepocalypse argument is idiotic. And this comes from a guy very skeptical about the AI catastrophe.

Expand full comment

I don't think you're giving a complete summary of the argument in the linked thread. I would call it a form of bulverism, since they're describing possible motivations for bad faith AI risk arguments. Similarly to how the beer lobby spread unfounded concerns about coffee to protect its market share, groups today could be spreading unfounded concerns about AI.

I think that's true. There's a debate between Ben Shapiro and Tucker Carlson that touches on AI driven trucks. Tucker said that he'd prefer that the government make up some safety risks if the alternative was the sudden destruction of trucking industry employment. Even if AI had no risks at all, there would be incentives to regulate it citing false risks.

As the term "bulverism" describes, this is all removed from whether AI is dangerous. They're not arguing that AI is safe, they are describing some bad faith reasons people would say its dangerous.

Expand full comment

This made me laugh because I use the exact same thought experiment. "Whenever I wonder how anyone can be so stupid, I start by asking if I myself am exactly this stupid in some other situation." It is so hard to be aware of one's own blind spots that any strategy that allows one to look in a convex mirror around the corner is valuable.

We are talking about prediction. One hopes to have confidence in the accuracy of prediction. Yet with multiple variables and incomplete knowledge of all variables and forces at play, outcomes are generally uncertain. I could be hit by a truck this afternoon or the killer bees could get me like they got D. O_ in our town.

People fear bad outcomes. They desire control over outcomes. They desire relief from the anxiety about bad outcomes. For many people, a comforting story is all it takes, perhaps merely as distraction.

We do our best, and try to monitor outcomes in progress so quick course corrections are possible.

As a curious atheist, I like the attitude of the Old Order Mennonites, who pray not for outcomes, but for the grace to accept God's inscrutable will.

Expand full comment

> The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.

This is actually a pivotal issue in explaining AI safety. People don't like being utterly decimated by another person, especially not when talking about objective reality.

However, if it was that simple, people probably would have figured out workarounds during the 20th century or earlier. There's another layer: backchaining to justifications, as described by Yudkowsky 15 years ago: https://www.lesswrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection

Expand full comment

I had a theory. As a matter of ground truth, over about a year there are now about 1000 AI pause advocates. Meanwhile Nvidia sold out a stadium, and the industry announced, in just the last few weeks, plans to dump 650 billion (!) more into AI over the next 2-3 years.

There are obviously millions of new AI fans added monthly. For right now, where errors from AI are mostly harmless while the benefits increase monthly, doomers lost. There's nothing to discuss, no reason to engage with them. Hence comparing their position to people fearing coffee.

It's a dismissal, in the same way nobody bothers anymore to convince Amish to use technology. There is nothing to discuss and the evidence speaks for itself which side is right.

Expand full comment

You already got it, it is the heuristic thing. See also, the Ancient Greeks complained about youth culture, so all complaints about youth culture are spurious.

I am pretty sure you have written on this before, with "birds of a feather" and "opposites attract." People will uncritically use little aphorisms to support their points even while equally believable, totally contradictory aphorisms exist.

Expand full comment

I have not commented on a SSC post in about ten years, so forgive me if my language is antiquated, but I said my objection to this post on the 'scord and was told it was wise and I should post it here.

I think that, in general, the most parsimonious explanation for a disproof of a seemingly nonsensical claim is that somebody is actually making the claim. For some reason, you seem not to have considered this, despite it being the most obvious and straightforward explanation.

For example: even limited to my own browsing history, if I type "guardian.com ai" or "futurism.com ai" into my address bar, I will be confronted with about ten separate articles that claim AI will "do harms", "perpetuate harms", "pose risks", et cetera -- and that AI is in some way associated with pedophiles, scammers, Nazis, bros, et cetera.

Most of the time, there is not a real coherent argument being advanced in these articles -- it's just a random incident of a dumb or malicious act being somehow connected to AI; sometimes they quote a Rent-an-Expert who talks vaguely about "real harms". The general idea, typically made by implicature but sometimes made explicitly, is that AI is a thing which exists, and it didn't exist before, and it will clearly change the world in some way, and there are some harms, so it is bad, and should be (considered shameful / heavily regulated / made illegal / etc). If you'd like, I can provide a couple dozen articles and tweets and blog posts that articulate this vague point.

I think this is a pretty dumb argument to make, but people really do make it, hundreds of times, every day, and I think the most obvious explanation for the coffee disproof is that it is a response to this.

Expand full comment

My assumption about what's going on here is that someone found some interesting historical facts about the social reception of coffee, and found that a good way to make their interesting historical facts go viral is to tie it to something that people are already predisposed to argue about. The tie-in to AI apocalypse is weak, but primes it for virality, and then when you put actually fun facts on the inside (even if they're not well-connected), it succeeds in going viral.

Prediction - if you could somehow do the same thing with cute pictures of capybaras rather than actually fun facts about the history of coffee, it would go equally viral.

Expand full comment

Alarmist arguments about human extinction or massive disaster have a long pedigree. For example, just before the LHC was turned on, doomers claimed that it would destroy the world through micro black holes or whatever. Given that all those predictions failed to come to pass, the chance that some new technology will cause human extinction must be very small. I don't know why AI doomsters like you obtusely fail to see this simple point.

I'll give a somewhat parallel scenario from the history of science. One reason the first law of thermodynamics was accepted as true was the long history of failure of every perpetual motion machine. After this uninterrupted series of failures many science academies stopped considering any claim of perpetual motion. When physicists wondered if the absence of perpetual motion machines may be a natural law they birthed the science of thermodynamics.

To take another example, people tried for 2000 years to prove the parallel postulate of Euclid. All those attempts came to naught. Then finally in the 19th century, mathematicians accepted the futility of such attempts and developed non-euclidean geometry.

Similarly you should recognize the failed past predictions of doom and accept the fact that p(doom) must be very low.

Expand full comment

The more general case is the "People were wrong in the past, therefore you're wrong now" argument. It's probably about 40% of internet arguments.

It's actually a good argument though, in the sense that it really does change people's minds. (It's not a good argument in the sense that it's more likely to change their minds towards truth than towards falsehood; it's not an asymmetric weapon.)

It can be very effective against an irrational person suffering an irrational fear, which is why people reach for it by default. If your kid is worried about monsters coming in the night, you can point out all the previous times that they've been worried about the exact same thing and that it hasn't happened. Or an adult hypochondriac worried that their itchy forehead means they have cancer will inevitably be reminded of the hundreds of other tiny symptoms that they've previously worried were cancer which turned out not to be cancer. Heck, I've used this argument against myself when I've had irrational worries.

Expand full comment

The kings who were scared of coffee were obviously correct. Coffee-houses were breeding-grounds for revolution, they became gathering places for radicals pretty much immediately. Caffeine helps people think and one of the things they thought about was "how to overthrow the king".

I think this guy is probably exaggerating the degree to which there was a moral panic about coffee but, also, people at the time were completely justified in worrying about the social implications of a powerful new stimulant that everyone was going to drink all the time.

What about tobacco? There was no moral panic about tobacco. Europeans seem to have accepted it pretty quickly as a cool new foreign habit. But, tobacco turns out to have secret qualities that poison and kill a huge percentage of the people who consume it. The introduction of tobacco was actually incredibly dangerous, and we would be better off if there had been a huge clampdown on tobacco the second it was introduced.

On the other other hand, there was an early panic about the potato. People suspected it was poisonous (which it low key kind of is under some circumstances) and spent a couple of centuries refusing to cultivate it on a large scale. But the potato is actually a fantastic calorie-dense crop and not dangerous at all, so probably the unwarranted fear of the potato was incredibly bad and resulted in mass peasant starvation.

Are there any early modern examples of moral panics getting it right? Like did they discover crystal meth in 1600 and say "yeah, we're definitely not using this"? I don't know.

The lesson seems to be that there's nothing to learn from any of these incidents about whether fear of new technologies is inherently good or inherently bad. You probably just have to look at each thing on a case-by-case basis.

In terms of your model of what this guy is saying, though - he's making an argument that doesn't stand up because it feels right to him and it justifies something he wants to believe. That's 99.9% of all human political activity. Personally I find it difficult to model 99.9% of all human political activity, so I guess I can sympathise, but I'm also not sure there's much point agonising over it.

Expand full comment

Steelman: "the fact that there exist people who are very certain AI will kill us is at best weak evidence that AI is dangerous, because there have been people who think X is dangerous for every X in history, so I shouldn't update on the fact that you hold this opinion."

(this is not an argument against the *content* of any argument you make for AI being dangerous, but such arguments can be very hard to evaluate)

Expand full comment

Well there's the obvious problem of arguing against multitudes. But trying to break it down:

1: You think that AI poses an existential risk but you are wrong and it doesn't. I know that the possibility that you might be wrong must seem very implausible to you, but it is actually very plausible. As proof of this please consider this example of people being wrong.

2: You are saying that AI poses an existential risk, but you are arguing using reason and rationality which never works. You should argue using analogies instead. And if you do use analogies, like the one comparing AI to aliens or something then you're using the wrong analogies. To be safe you should stick to analogies about coffee.

3: You are a prophet of doom. Prophets of doom have a very bad track record. People who predict doom are usually wrong, and when doom happens it is rarely predicted except in the vaguest of terms.

4: You are not my people and I need to signal to my tribe by dunking on you.

5: Roll to disbelieve. I know it may seem like I'm abdicating my duty to fairly interrogate your arguments so here are some examples where I would have been right to do so.

6: People like you are so stupid, just like these other people who were also wrong! I just can't get over how wrongity wrong WRONG you are!

7: I think your arguments are suggestive and it makes me uneasy but I don't feel like I have the capacity to properly evaluate the evidence or your arguments, and I don't think you do either. Until we have better evidence in hand it is important to avoid causing a panic. Here are some examples of people overreacting to things that turned out to be false or overblown.

Expand full comment

I think this is just rhetoric and they're not being epistemically rigorous. The idea is that this line of reasoning feels similar to that line of reasoning, which was wrong, so this line of reasoning is likely wrong too. This is rhetorically effective but should probably be discarded. Although I admit I have made these arguments in the past, I think I was doing the equivalent of adding an emotional flourish to the underlying argument.

Expand full comment

I think that the reason you can't read any effective logic into it is that there is no logic to it. It's better to understand the argument as an appeal to emotion via pattern-matching.

It's not meant for people who are trying to think through things. It's meant for people who are already convinced they've thought things through enough, and it's meant to evoke a sneer. The implication is that there's a specific type of person who is always on the side of panicking about the future and they flip out over things we now all drink with breakfast.

Seen from that light it doesn't need to have actual logic on its side. It's meant only to evoke emotion by conjuring a specter of a similar situation, with the implication that if we'd sneer at one we should sneer at the other.

A lot of argumentation follows this pattern, I'd think. There are vast classes of argumentation out there that appear to be arguments on the surface but have no actual logic to them; their sole purpose is to pattern-match with people's pre-existing biases to conjure up certain emotions. It's raw pathos, except in a more subtle form--the pathos is triggered by the implications of the message rather than any overt emotion-evoking behavior on the part of the speaker.

Expand full comment

I think the point of the argument is to argue against a background assumption that people's worries about new technologies are reasonable.

Suppose you thought that anytime someone seriously doubted a new technology, they were basically reasonable.

Then someone told you that many people doubted the goodness of coffee as a new technology.

You then might update quite a bit! You might think "Huh, not only can lots of worried people be mistaken about new technologies, but they can be saying things *totally unconnected* to reality. I guess I will not care that much merely *that* many people are saying negative things about a technology, as it turns out large groups of people can be way off-base. I should update to using my inside view here to weight their reasonableness."

And perhaps the next step for many might be "And of course, I have not considered the arguments and haven't formed much of an inside view, so I guess I am not going to personally do much about the situation, given that I don't personally have an inside view telling me to do so, and other people cannot be really trusted to do my reasoning for me."

Expand full comment

I think that, coming from a lot of people, you are too quick to dismiss the idea that this is just an anti-doomsday heuristic.

Global cooling and overpopulation are (were?) presented in more apocalyptic terms than just "here is the percent of people who will die." Not everyone breaks things down as you do. They FEEL apocalyptic in a way that AIDS and even a big war do not.

And on the scale of apocalypses, people have a heuristic of what can cause major destruction: war, disease, and asteroids, really. If you are arguing for a path outside of that, they reject it and cite the previous failures (outside the three exceptions).

Having said that, I think sometimes it's a weird sort of "I am right about things and other people are wrong" heuristic. And I have actually heard smart people say it in fish-like scenarios. These other people were wrong once, so you are also wrong and I am right. When challenged on that logic, though, they admit it's not central to their argument.

Expand full comment

Trying harder to defend the argument, perhaps we can think of it as a "Baby's First Argument Against Argument From Authority". It's not intended for people like Scott who are already well versed in the arguments either way; it's intended for Sally Q. iPhone, who has just seen her first ever TikTok video about "some people say AI will kill us all" and is worried.

The argument "Yeah but some people say that sort of thing all the time" will calm her down and bring her back to a sort of epistemic neutrality appropriate for people who aren't experts and are unlikely to become experts.

Expand full comment

I learned early in med school to never say never, and to never say always. Cuz when dealing with biologic systems, best to remain alive to the possibility of exceptions.

But here, we are talking about machine/math systems. Might a product of such systems be susceptible to absolutes? And if so, which end of that dichotomy should/would it turn out to be?

So this is the fool’s errand part of all of this: we can’t even be certain if we can be certain, let alone have any confidence whatsoever in whatever prediction we might make about the outcome, represented in percentages that bespeak only partial certainty, on an answer that will actually be definitively categorical.

I don’t even understand how you try to be “less wrong” when you have absolutely nothing upon which to derive a prior probability…and no data with which to modify said priors. That’s not to say an absence of evidence means we are absolutely in the clear. But it seems anything other than an “I don’t know” represents hubris without foundation.

Expand full comment

I think there's a subtlety you didn't cover (not that it makes the argument much better). The argument isn't for people who know about AI, or even computers - it is for laypeople who cannot assess AI risk directly for themselves at all, and so are wholly dependent on what experts in the field tell them. This argument provides an example of when experts were all whipped up for nothing, suggesting that experts now could be all whipped up for nothing.

Admittedly, this angle doesn't really make the argument any more logically coherent. But it does give it a bit more effectiveness as a cover to illogically ignore a problem some people don't understand and don't want to think about.

Expand full comment
Apr 26·edited Apr 26

If people are doing argument by analogy, they at least need to use close analogies.

In the nineteenth century there was a case of superior intelligence afflicting large groups of people: the colonisation of Africa.

But it seems to me that what mattered was mastery of logistics, operations, and tactics: fire and movement. Not intelligence in and of itself. Chemistry, logistics, and kinetics.

(Slightly prior to that various Bantu peoples had swept through Africa, uh, subjugating, the incumbents in an arc from West Africa to East Africa and then down to South Africa. Better operational organisation.)

There were cases of superior intelligence (and firepower) not making much difference: Europe vs the Ottoman Empire, and Afghanistan.

These analogies suggest everything will be terrible/will be fine/might go either way.

Expand full comment

I think most people do a lot of their reasoning by analogy, or "heuristics" as you put it. It can work well, as long as you don't stop at the first one that you come up with. Ideally you think of several that match, and then several more that don't, and you compare them and see where they differ, and what the significant factors were, and then look at the actual thing you want to examine, and see how they apply. And then you think rationally about it, because all of this is a way to provide data, not an actual analysis of data. (One of the problems with trying to think rationally from first principles is knowing what fundamentals are important. This bypasses that step, and gets you a lot of useful fundamentals to start with.) I do this a lot; if there were a way to look back at my comments, you'd probably see a bunch of it, where I reference various things, flit from analogy to analogy, and mix metaphors beyond their breaking point.

This approach to reasoning breaks down when people only come up with one analogy, or only come up with analogies that support the side they favor, or don't actually think about it rationally afterward.

You can see a similar effect with, say, some types of prejudice. Here is an example of a type X person committing a crime. Here is another type X person. Clearly, they will commit a crime! It's taking the first association that comes to mind, and running with it.

Expand full comment

One thing coffeepocalypse arguments seem to have in common is only the barest mention of AI.

Which is strange. They don't look at AI and compare it to coffee. They don't seem to analyse specific anti AI arguments and compare them to anti coffee arguments.

Do a ctrl R for the word "AI" and often you couldn't tell what they were arguing about.

Is it that their model of debate says the winner is whoever claims more valid facts? Like they are greatly prioritizing "Is it true?" over "is it relevant to AI?" and just spouting a bunch of facts that they think have any non-zero bayes factor, however small.

If you are imagining an opponent who never concedes any point, that opponent would have to say "no one was ever worrying about coffee destroying the world" or "coffee did destroy the world". Never "so what?"

Expand full comment

Some people think that being contrarian makes them look smart, by contrast to the gullible masses or "sheeple" as they probably call them. So if everyone seems to be panicking about AI doom round the corner, then the mavericks' natural inclination is to scoff at the risks and deny them, even if their justification (if any) is fallacious.

Expand full comment

I think the core idea behind this type of sentiment is a general disbelief in deductive logical reasoning when it comes to making complicated predictions.

There are a lot of things working against it. People generally tend to optimize for selecting socially advantageous beliefs over correct ones. This is a strong subconscious pressure, and forming social groups based on shared beliefs and principles makes it stronger. It's fairly easy to beat people in arguments when you are more intelligent and oratorically eloquent than them, even when the viewpoint you are arguing is less accurate than theirs. Furthermore, only a handful of historical geniuses have ever been able to make unique testable predictions about the world from just philosophical principles that held up to empirical observation and experimental scrutiny, and it's reasonable to have a strong prior in the direction of not believing people when they implicitly carry themselves as if they are once-in-a-century geniuses. You could obviously object to this and say that people use deductive reasoning to make complicated predictions all of the time, but most people have a certain scale/complicatedness cutoff point, and it usually lies way below making pessimistic predictions about superintelligence.

I personally find this way of thinking to be dispositionally annoying. However, I come from a family of some people who straddle a sweet spot between normalcy and intelligence; they have the intelligence to where they could participate in cool subcultures and intellectually screen weird beliefs if they wanted to, but choose to be 'normal' instead. And so if I were to ask my mother, or my grandmother before she passed, why they weren't scared of the existential dangers of superintelligence, and pressed them on it enough, I imagine that they would say some non-me-worded version of the above paragraph.

Expand full comment

Many people are indeed very, very bad at constructing reasoned arguments, so I absolutely agree that "coffeepocalypse didn't happen" arguments are out there in large numbers.

They don't understand that this is only one part of this rhetorical tactic, the "what" that didn't happen. Their next sentence should be the "why" - and hopefully the why for coffee or whatever matches up to the why for AI.

It's unfortunate because in order to steelman this kind of argument you have to invent out of whole cloth their next line of reasoning. For example with the "perpetual motion will never happen" argument, the next step should be "because it goes against the 2nd law of thermodynamics". This one in particular won't work for AI because there's no physical law preventing consciousness emerging from microprocessors. Similar case for the "controlled nuclear fission is impossible" - why? Is there some reason proscribing such a thing?

So you are correct, the arguments you are seeing don't make sense because the people making such arguments do not even realize they aren't making an argument at all.

As for why the AI apocalypse won't occur, the reason is simple: AI doesn't exist.

Expand full comment

I take the argument as, "Negativity bias makes us think the world is always ending. Here are some examples where we freaked out over nothing.... But the fact that we survived all those things suggests that the world is a lot more resilient than we think."

Expand full comment

I think I can answer the question, "What is this coffeepocalypse guy doing?" more generously and effectively than you have here: He's trying to reframe the debate.

He's specifically not engaging with the question, "Could AI be dangerous?" He is suggesting that it is a waste of time to think about that question. He suggests that everyone engaged in the AI safety debate is, consciously or unconsciously, engaged in status games; therefore it is better to simply refuse to engage in any such debate.

He doesn't really support any of these claims. I think he just offers an existence proof that it is possible to play status games and to engage in motivated reasoning in the course of a status competition. Nonetheless, I think exhortations to reframe can in general be quite useful and good, because they present alternative viewpoints outside the current framing of the debate.

In this particular case, he is clearly very unaware that these possibilities have been chewed over by AI safety people. It's an ignorant argument. But I think the form of the argument is fine.

Expand full comment

Coffeepocalypse is a sound meta-argument: "the fact that lots of apparently smart people are predicting AI doom is not in itself evidence of very much, because lots of smart people have predicted doom many times before and consistently been wrong". That is, it's a caution against appeal to authority. It's a non sequitur in response to anyone who is already engaging with AI doom arguments on the object level.

Expand full comment

Isn't this just reasoning by analogy? The stronger the isomorphism between the two cases, the more we update. If someone compares the invention of coffee to nuclear fission, that's not very isomorphic. But if someone compares fission to fusion, that's highly isomorphic. At the extreme end, if someone makes a claim about red nuclear warheads, it would cause us to update strongly about blue nuclear warheads.

Expand full comment

It's unsurprising you failed to explain how the author got from the coffee analogy to the "AI is safe and useful" conclusion, because *it's not the conclusion*. The thread author didn't try to explain whether or not AI will be safe. The author obviously *assumes* it will be safe and useful, and explains what he believes to be the reason people still fight over it, by way of the coffee analogy. Even if his analogy were fatally flawed, it would, within the author's framework, at most require a new explanation (something other than power struggle), or even just a better analogy.

Please note: This comment does not imply that I share any of the author's opinions or their opposite.

Expand full comment

I've got recorded in my calendar one of those "predictions to check back on". This one is from Slashdot on April 29, 2004:

> Diamond Age Approaching?

>

> Posted by CmdrTaco on Thursday April 29, 2004, @01:55PM from the i-friggin-hope-so dept.

>

> CosmicDreams writes "The CRN (Center for Responsible Nanotechnology) reports that nanofactories (like the ones that were installed in every home in Neal Stephenson's Diamond Age) will arrive "almost certainly within 20 years". In short they claim that molecular nanotechnology manufacturing will solve many of the world's problems, catalyze a technologic revolution, and start the greatest arms race we've ever seen. They conclude the risks are so great that we should discuss how to deal with this technology so that we don't kill each other when it arrives."

Expand full comment

I suspect the reason why popular discussions about AI safety are so confusing is that people say "superhuman AI is safe" when what they actually mean is "superhuman AI is *impossible*".

In a narrowly technical sense, yes, that is correct reasoning: if something cannot exist, it also cannot harm you. But for the sake of clarity of argument, it would be much better if people separated "it is not dangerous *because* it is not possible" from "even assuming that it is possible, it is still safe".

What we get instead is people playing politics, so the ones who believe "it is not possible" and the ones who believe "even if possible, it is safe" see each other as allies. An argument that supports an ally is good. But an argument that hurts my ally without hurting me is not taken too seriously.

So the entire argument, fully untangled, would probably be something like: "If you believe that coffee is harmless, that is an argument for our side, because it means that the set of things people worry about needlessly is greater. But even if you believe that coffee is harmful, I don't really care about that, because unlike coffee, superintelligent AI does not exist, silly!"

In the rationalist lingo, "coffee is harmless, despite people once believing otherwise" is a soldier on our side, but is not a crux. If this soldier hurts you, good. If it dies instead, no big deal.

Expand full comment

When people bring up the coffee story, they aren't actually trying to make a rational argument. They're telling a story, and it just happens to look like an argument. People update on stories all the time, by the way. That's what parables and myths are, and it's how people spread wisdom for millennia prior to rationality. That mental machinery hasn't gone away.

Expand full comment

Most things are neither affirmatively safe nor affirmatively unsafe. The vast majority of human developments are of no consequence to history. Inductively, it stands to reason that we should expect most hyped developments to be of similarly limited consequence. As with any inductive reasoning, we would occasionally be wrong, but we would more often be right.

Expand full comment

I have a suspicion that the coffee story is not an argument but a metaphor. Some people will (incorrectly) update on it, but really the story is just meant to clarify the thinking of those who already have a vague picture of how AI is going to affect society.

Many AI doomers think that the story in "Don't Look Up" has surprising similarities to the AI story. But the movie is not actually an _argument_ in favor of being an AI doomer; it just resonates with doomers who have other reasons to believe that AI is dangerous.

Expand full comment

Because most AI safety arguments are based on My Little Transhumanism: Technology is Magic, and assume it will somehow become this master self-aware manipulative villain despite no evidence that technology can do this.

It's not AI safety in the more mundane senses, like AI-driven unemployment and its economic effects, or people using AI to increase spam or centralize economic power, or the way its edge cases create more work for the fewer people remaining. It's mostly projecting existential fears onto the latest new thing.

I mean, just because AI can replicate a human behavior doesn't mean it has the potential to take on every aspect of human behavior in a more perfect form. Technology never breaks upward; it's not as though the more sophisticated cars get, the more likely they are to evolve into herd animals and book it to the plains to forage. They just break differently.

Expand full comment

Everything is dangerous/everything is fine:

Coffee ended up being an existential threat to certain countries when it attracted the right people at the right time to create a certain outcome (in the doomer's case, a bloody and destructive revolution; in the optimist's case, a potentially better/fairer political system for future humans).

Nuclear weapons will be an existential threat if/when the right people at the right time utilize these weapons to create a certain outcome (in the doomer's case, extinction; in the optimist's case, a fresh start?).

AI will be an existential threat if/when the right people at the right time utilize these tools to create a certain outcome (in the doomer's case, extinction; in the optimist's case, earth and nature get a chance to recover from our rough-shod resource rampage).

Expand full comment

I think I can see it?

I think it's reasoning by emotion. The coffee danger (as I always understood it) doesn't go back to revolutionary times, but to advice given to folks with high blood pressure and cholesterol in the '60s: that coffee would make it worse. (Now the opposite is believed to be true, if we should believe them.)

That's within living memory. And within living memory are emotional decisions: I like coffee, coffee is fun, I want to drink coffee, I love the buzz, what's this about blahdeblah cholesterol folderol? The emotional weight outweighed (in my dad's case when he was told this) the cholesterol story. I remembered this when I drank my first coffee, and I think of it at least a little bit whenever I drink another.

So it was emotions that won out over what appeared to be scientific evidence. The emotions were doing the heavy lifting here, and when an argument comes up that drags coffee into it, it drags along the emotional reasoning that was done (and proven "right"), which gets to speak its voice in the argument. So: they were wrong about coffee, so by association they are wrong about AI dangers too, a debate that has a strong emotional component on both sides, or so it appears to me.

I think that's why other arguments with the same frame, but which don't drag coffee in, don't seem to work the same way. I just heard that whatever you do while consuming caffeine (smell coffee, have friends over, sit at a crowded cafe, whatever else might be going on), the caffeine makes you like that thing more. Could caffeine itself be making this argument, or at least part of it?

Expand full comment

Here’s the steel man:

Humans have a tendency to over-predict doom scenarios.

Because of this, when we’re worried about doom we should be careful and adjust our priors.

Historical examples help with adjusting priors.

My own position: superhuman AI will be the biggest change in history, which makes priors less useful. We're more in first-principles territory.

Say we invent a new drug and people now live to age 1,000. How will this change society? Answer: I’m not sure, but past times when lifespan went up 10 years are probably not super relevant.

Expand full comment
Apr 27·edited Apr 27

As far as I understand the AI safety debate (on a scale of 1 being "never heard of it" and 10 being "Eliezer Yudkowsky" I'd put myself at like *maybe* a 4), the biggest problem with having strong opinions in *any* direction on AI safety is that all the smartest people who most care about this problem seem to say they have absolutely no idea what to do. This isn't a strong argument to stop them from trying stuff, but it's also not a great argument for throwing massive amounts of money at the problem, shutting down GPU farms, lobotomizing AI development with cumbersome regulation, etc.

Maybe the coffeepocalypse argument is badly gesturing at: "By your own admission, you're worried by a gut-feeling-based guess at the probability of a doomsday scenario, a guess that emerged from a long process of thinking and speculating about black-box technology (black-box because you don't know how it works, or because it hasn't been invented yet) that you have also admitted you have no idea what to do about. What the heck else can you do in this situation but wait and hope it turns out fine? And, hey, look, there have been times when people faced similar radical uncertainty and we aren't extinct."

My own position is something like: "Nothing will change until somebody accidentally blows something important up with AI, and/or a rogue one blows something up. Hope the explosion isn't existential. At that point, many more smart people will get interested in the problem, and if nothing else we may succeed in strangling it with regulation that doesn't fix alignment but just makes it impossible to do anything with it, like how it takes a million dollars to build a one-stall toilet in San Francisco."

For people who are *really* convinced it's going to kill us all, I'd say the best approach is to try to strangle it with regulation: by pretending you are super worried that AIs will discriminate against black people, that the deep state will use them to track right-wing people, that GPU farms are bad for global warming and/or cause all kinds of other weird health and environmental problems, that the *wrong people* will use them to spread misinformation, steal elections, turn the youth trans, etc., and, well, you get the gist. Lobotomize it. Do to it what was done to nuclear power.

Expand full comment
Apr 27·edited Apr 27

According to Wikipedia, Rutherford said that "anyone who looked for a source of power in the transformation of the atoms was talking moonshine." Searching "moonshine" on LessWrong yields the following post by Scott Alexander from 2015:

https://www.lesswrong.com/posts/CcyGR3pp3FCDuW6Pf/on-overconfidence

As well as these posts by other authors:

https://www.lesswrong.com/posts/3qypPmmNHEmqegoFF/failures-in-technology-forecasting-a-reply-to-ord-and#Case__Rutherford_and_atomic_energy

https://www.lesswrong.com/posts/S95qCHBXtASmYyGSs/stuart-russell-ai-value-alignment-problem-must-be-an

https://www.lesswrong.com/posts/gvdYK8sEFqHqHLRqN/overconfident-pessimism

https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence

https://www.lesswrong.com/posts/B823KjSvD5FnCc6dc/agi-as-a-black-swan-event

https://www.lesswrong.com/posts/wpsGprQCRffRKG92v/catastrophic-risks-from-ai-4-organizational-risks

I make no claim right now, but others can read and ask themselves which posts have which reasoning errors.

Also, does anyone have citations on the Rutherford quotation? I can only find citations from the early thirties, and none from the day before the first successful artificial nuclear chain reaction (which was in 1942).

Expand full comment

The argument is implicitly about base rates. Here are a couple of analogous examples:

1) The math department gets a lot of well-meaning letters from amateurs claiming to solve the Riemann Hypothesis. Over the last 100 years, none of these letters have solved the Riemann Hypothesis. Therefore, the one that just arrived today probably doesn't solve it either; we're not going to get excited about it.

2) Every year, there's a popular science article to the tune of "Cure for Cancer Finally Discovered!" We don't have a cure for cancer. Therefore, when the next popular science article about cancer cures comes out, we'll ignore it.

So similarly, the anti-AI-safety argument goes:

Every time we develop a new technology, a lot of smart and well-meaning people argue that it's actually Bad and that it poses some kind of Existential Risk (railroads, bicycles, electricity, human flight, etc. etc.). But new technology is usually Good, and we haven't killed ourselves yet. Therefore, now that a lot of smart and well-meaning people are arguing that AI is Bad and poses an Existential Risk, we shouldn't pay that much attention.
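A minimal sketch of the base-rate logic behind this kind of argument, with numbers made up purely for illustration (they are not the commenter's figures): if alarms get raised at nearly the same rate whether or not a danger is real, an alarm by itself barely moves the posterior.

```python
# Illustrative Bayes update for the base-rate argument above.
# All numbers are made up for the sake of the sketch, not taken from the comment.

prior_real = 0.10        # assumed prior that a given hyped danger is real
p_alarm_if_real = 0.90   # chance smart people raise the alarm when the danger is real
p_alarm_if_not = 0.80    # chance they raise the alarm anyway (hype, negativity bias)

# P(alarm) by the law of total probability
p_alarm = p_alarm_if_real * prior_real + p_alarm_if_not * (1 - prior_real)

# Bayes' rule: P(real | alarm)
posterior_real = p_alarm_if_real * prior_real / p_alarm

print(f"prior = {prior_real:.2f}, posterior given alarm = {posterior_real:.3f}")
# prints: prior = 0.10, posterior given alarm = 0.111
```

With these assumed likelihoods the alarm moves the estimate only from 0.10 to about 0.11, which is the "we shouldn't pay that much attention" intuition in numerical form; a doomer would of course contest the assumed numbers rather than the arithmetic.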

Expand full comment

Pre-GPT4 this was pretty close to my take on AI alignment. Now I actually do take it seriously and think we desperately need to stop building frontier models, but I want to try to communicate why coffeepocalypse doesn't seem to me like a stupid counter and is likely something I might have brought up as a counter-argument to AI safety when I thought AI safety was dumb.

Most doomsday predictions involve _stories_: first A happens, then B, then C, and then the world ends. You can say "Well, what if A doesn't happen?" and the prophesier might say "Well, then A' will happen, and that could also lead to B," and so on and so forth.

And so you get the impression that the prophet has fallen into the trap of having located a random hypothesis and is now using stories to defend it; essentially, that they're in soldier mindset. Instead of countering each and every story, it's easier to try to show them the trap they're falling into by giving examples of other people falling into the same trap, and hope they'll recognize it, discard the random hypothesis, and go back to looking for truth rather than writing a story about the future, which will almost certainly not come true, precisely because most stories about the future don't come true.

Coffeepocalypse isn't an argument _against_ AI safety; it's an argument against confusing stories with predictions, which is such a common error that it's easy to assume people arguing that AI is dangerous are making it too.

Expand full comment

I don't know what it's about for others, but for me it's about worry overload. Almost anything you see or read is someone trying to convince you to worry about something. Cries for attention. Now, a priori there is an infinity of possible bad outcomes to anything, but there will only be one actual outcome. So by definition, infinity minus one of the things one can worry about never happen. And I can't be worrying about the infinity all the time. So I only worry about the immediately obvious next pressing problem that I have influence over in my own life. AI is not one of them. Oh, and one more thing. Bad things happened in the past, with or without prediction. But absolute doom never happened. It may, of course, happen one day. So my prior here is: however bad past things turned out, humanity overcame them. So, one step at a time.

Expand full comment

I guess it's 1) a prior that AI won't be bad, plus 2) "lots of people being scared of AI and supporting their fear with arguments isn't evidence worth updating on; lots of people have always been scared, and have always supported their fear with arguments, about everything."

Expand full comment