One of the most common arguments against AI safety is:
Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.
I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.
The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!
Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
You can read the full thread here, but I’m warning you, it’s just going to be “once people were worried about coffee, but now we know coffee is safe. Therefore AI will also be safe.”1
I keep trying to steelman this argument, and it keeps resisting my steelmanning. For example:
Maybe the argument is a failed attempt to gesture at a principle of “most technologies don’t go wrong”? But people make the same argument with things that aren’t technologies, like global cooling or overpopulation.
Maybe the argument is a failed attempt to gesture at a principle of “the world is never destroyed, so doomsday prophecies have an abysmal track record”? But overpopulation and global cooling don’t claim that everyone will die - just that a lot of people will. And plenty of prophecies about mass death events have come true (eg Black Plague, WWII, AIDS). And none of this explains coffee!
So my literal, non-rhetorical question is “how can anyone be stupid enough to think this makes sense?” I’m not (just) trying to insult the people who say this; I consider their existence a genuine philosophical mystery. Isn’t this, in some sense, no different from saying (for example):
I once heard about a dumb person who thought halibut weren’t a kind of fish - but boy, that person sure was wrong. Therefore, AI is also a kind of fish.
The coffee version is:
I once heard about a dumb person who thought coffee would cause lots of problems - but boy, that person sure was wrong. Therefore, AI also won’t cause lots of problems.
Nobody would ever take it seriously in its halibut form. So what is it about reskinning it as a story about coffee that makes it more credible?
Whenever I wonder how anyone can be so stupid, I start by asking if I myself am exactly this stupid in some other situation. This time, I remembered one of Stuart Russell’s pro-AI-risk arguments. He pointed out that the physicist Ernest Rutherford declared nuclear chain reactions impossible less than twenty-four hours before Leo Szilard discovered the secret of the nuclear chain reaction. At the time, I thought this was a cute and helpful warning against being too sure that superintelligence was impossible.
But isn’t this the same argument as the coffeepocalypse? A hostile rephrasing might be:
There is at least one thing that turned out to be possible. Therefore, superintelligent AI is also possible.
And an only slightly less hostile rephrasing:
People were wrong when they said nuclear chain reactions were impossible. Therefore, they might also be wrong when they say superintelligent AI is impossible.
How is this better than the coffeepocalypse argument? In fact, how is it even better than the halibut argument? What are we doing when we make arguments like these?
Some thoughts:
As An Existence Proof?
When I think of why I appreciated Prof. Russell’s argument, it wasn’t because it was a complete proof that superintelligence was possible. It was more like an argument for humility. “You may think it’s impossible. But given that there’s at least one case where people thought that and were proven wrong, you should believe it’s at least possible.”
But first of all, one case shouldn’t prove anything. If you doubt you will win the lottery, I can’t prove you wrong - even in a weak, probabilistic way - by bringing up a case of someone who did. I can’t even prove you should be humble - you are definitely allowed to be arrogant and very confident in your belief you won’t win!
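To spell out the lottery arithmetic (a toy illustration with made-up round numbers): as long as you already knew the lottery occasionally pays out, someone else’s win is independent of your own ticket, so it moves your odds by exactly nothing.

```latex
% Illustrative numbers only.
\[
  P(\text{I win}) \approx 10^{-8},
  \qquad
  P(\text{I win} \mid \text{somebody, somewhere, once won})
  = P(\text{I win}) \approx 10^{-8}.
\]
```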
And second of all, existence proofs can only make you slightly more humble. They can refute the claim “I am absolutely, 100% certain that AI is/isn’t dangerous”. But not many people make this claim, and it’s uncharitable to suspect your opponent of doing so.
Maybe this debate collapses into the debate around the Safe Uncertainty Fallacy, where some people think if there’s any uncertainty at all about something, you have to assume it will be totally safe and fine (no, I don’t get it either), and other people think if there’s even a 1% chance of disaster, you have to multiply it by the size of the disaster and end up very concerned (at the tails, this becomes Pascalian reasoning, but nobody has a good theory of where the tails begin).
I still don’t think an existence proof that it’s theoretically possible for your opponent to be wrong goes very far. Still, this is sort of what I was trying to do with the dam example here - show that a line of argument can sometimes be wrong, in a way that forces people to try something more sophisticated.
As An Attempt To Trigger A Heuristic?
Maybe Prof. Russell’s argument implicitly assumes that everyone has a large store of knowledge about failed predictions - “no heavier-than-air flying machine is possible”, “there is a world market for maybe five computers”. You could think of this particular failed prediction as an attempt to trigger people’s existing stock of memories of how often predictions turn out to be false.
You could make the same argument about the coffeepocalypse. “People worried about coffee but it was fine” is intended to activate a long list of stored moral panics in your mind - the one around marijuana, the one around violent video games - enough to remind you that very often people worry about something and it’s nothing.
But - even granting that there are many cases of both - are these useful? There are many cases of moral panics turning out to be nothing. But there are many other cases of moral panics proving true, or of people not worrying about things they should worry about. People didn’t worry enough about tobacco, and then it killed lots of people. People didn’t worry enough about lead in gasoline, and then it poisoned lots of children. People didn’t worry enough about global warming, OxyContin, al-Qaeda, growing international tension in the pre-WWI European system, etc, until after those things had already gotten out of control and hurt lots of people. We even have words and idioms for this kind of failure to listen to warnings - like the ostrich burying its head in the sand.
(and there are many examples of people predicting that things were impossible, and they really were impossible, eg perpetual motion).
It would seem like in order to usefully invoke a heuristic (“remember all these cases of moral panic we all agree were bad? Then you should assume this is probably also a moral panic”), you need to establish that moral panics are more common than ostrich-head-burying. And in order to usefully invoke a heuristic against predicting something is impossible, you need to establish that failed impossibility proofs are more common than accurate ones.
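One way to see what’s missing (my own back-of-the-envelope framing, with no real counts behind it): the heuristic is only as strong as a base rate like the one below, and citing coffee adds a single tick to one tally without telling you anything about the ratio.

```latex
% Schematic only - nobody has actually compiled these counts.
\[
  P(\text{turned out fine} \mid \text{people worried})
  \;\approx\;
  \frac{\#\,\text{panics that fizzled}}
       {\#\,\text{panics that fizzled} \;+\; \#\,\text{warnings that came true}}
\]
```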
Establishing those base rates seems somewhere between “something nobody has ever done” and “impossible in principle”. Insisting on it would eliminate 90%+ of discourse.
See also Caution On Bias Arguments, where I try to make the same point. I think you can rewrite this section to be about proposed bias arguments (“People have a known bias to worry about things excessively, so we should correct for it”). But as always, you can posit an opposite bias (“People have a known bias to put their heads in the sand and ignore problems that it would be scary to think about or expensive to fix”), and figuring out which of these dueling biases you need to correct for is the same problem as figuring out which of the dueling heuristics you need to invoke.
What Is Evidence, Anyway?
Suppose someone’s trying to argue for some specific point, like “Russia will win the war with Ukraine”. They bring up some evidence, like “Russia has some very good tanks.”
Obviously this on its own proves nothing. Russia could have good tanks, but Ukraine could be better at other things.
But then how does any amount of evidence prove an argument? You could make a hundred similar statements: “Russia has good tanks”, “Russia has good troop transport ships”, “the Russian general in the 4th District of the Western Theater is very skilled” […], and run into exactly the same problem. But an argument that Russia will win the war has to be made up of some number of pieces of evidence. So how can it ever work?
I think it has to carry an implicit assumption of “…and you’re pretty good at weighing how much evidence it would take to prove something, and everything else is pretty equal, so this is enough evidence to push you over the edge into believing my point.”
For example, if someone said “Russia will win because they outnumber Ukraine 3 to 1 and have better generals” (and then proved this was true), that at least seems like a plausible argument that shouldn’t be immediately ignored. Everyone knows that having a 3:1 advantage, and having good generals, are both big advantages in war. It carries an implied “and surely Ukraine doesn’t have some other advantage that counterbalances both of those”. But this could be so plausible that we accept it (it’s hard to counterbalance a 3:1 manpower advantage). Or it could be a challenge to pro-Ukraine people (if you can’t name some advantage of your side that sounds as convincing as these, then we win).
And it’s legitimate for someone who believes Russia will win, and has talked about it at length, to write one article about the good tanks, without explicitly saying “Obviously this is only one part of my case that Russia will win, and won’t convince anyone on its own; still, please update a little on this one, and maybe as you keep going and run into other things, you’ll update more.”
Is this what the people talking about coffee are doing?
An argument against: you should at least update a little on the good tanks, right? But the coffee thing proves literally nothing. It proves that there was one time when people worried about a bad thing, and then it didn’t happen. Surely you already knew this must have happened at least once!
An argument in favor: suppose there are a hundred different facets of war as important as “has good tanks”. It would be very implausible if, of two relatively evenly-matched competitors, one of them was better at all 100, and the other at 0. So all that “Russia has good tanks” is telling you is that Russia is better on at least one axis, which you could have already predicted. Is this more of an update than the coffee situation?
My proposed answer: if you knew the person making the argument was deliberately looking for pro-Russia arguments, then “has good tanks” updates you almost zero - it would only convince you that Russia was better in at least 1 of 100 domains. If you thought they were relatively unbiased and just happened to stumble across this information, it would update you slightly (we have chosen a randomly selected facet, and Russia is better).
If you thought the person making the coffee argument was doing an unbiased survey of all times people had been worried, then the coffee fact (in this particular time people worried, it was unnecessary) might feel like sampling a random point. But we have so much more evidence about whether things are dangerous or safe that I don’t think sampling a random point (even if we could do so fairly) would mean much.
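To make the selection-effect point concrete, here is a toy Bayes calculation - my own sketch, with invented numbers that have nothing to do with the actual war:

```python
# A toy Bayes calculation of the selection-effect point above.
# All numbers are invented: imagine two worlds, "Russia stronger" (better on
# 70 of 100 facets of war) and "Ukraine stronger" (Russia better on only 30
# of 100), with a 50/50 prior. The evidence is the report "Russia is better
# on this one facet (tanks)."

def posterior(prior, p_report_if_stronger, p_report_if_weaker):
    """P(Russia stronger | report), by Bayes' rule."""
    joint_stronger = prior * p_report_if_stronger
    joint_weaker = (1 - prior) * p_report_if_weaker
    return joint_stronger / (joint_stronger + joint_weaker)

PRIOR = 0.5

# Cherry-picking reporter: they went looking for *some* facet where Russia is
# better, and such a facet exists in either world, so the report was nearly
# certain to appear either way. Likelihood ratio ~1: the posterior barely moves.
print(posterior(PRIOR, 0.999, 0.999))  # 0.5

# Unbiased reporter: they sampled one facet at random and told us who is better
# on it. Now the report is 70% likely in one world and only 30% in the other.
print(posterior(PRIOR, 0.7, 0.3))      # 0.7
```

By that logic, a coffee anecdote produced by someone specifically hunting for reassuring precedents carries a likelihood ratio of roughly one, which matches the intuition that it proves nothing.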
Conclusion: I Genuinely Don’t Know What These People Are Thinking
I would like to understand the mindset of people who make arguments like this, but I’m not sure I’ve succeeded. The best I can say is that sometimes people on my side make similar arguments (the nuclear chain reaction one) which I don’t immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.
If you see me making an argument that you think is like coffeepocalypse, please let me know, so I can think about what factors led me to think it was a reasonable thing to do, and see if they also apply to the coffee case.
. . . although I have to admit, I’m a little nervous asking for this. Douglas Adams once said that if anyone ever understood the Universe, it would immediately disappear and be replaced by something even more incomprehensible. I worry that if I ever understand why anti-AI-safety people think the things they say count as good arguments, the same thing might happen.
And as some people on Twitter point out, it’s wrong even in the case of coffee! The claimed danger of coffee was that “Kings and queens saw coffee houses as breeding grounds for revolution”. But this absolutely happened - coffeehouse organizing contributed to the Glorious Revolution and the French Revolution, among others. So not only is “fears about coffee were dumb, therefore fears about AI are dumb” a bad argument - the fears about coffee weren’t even dumb.