911 Comments

Motivated reasoning. Subconscious. A 10% (or whatever) chance of AI Doom is such a scary idea, more so even than nuclear war etc., that I need to counter that fearful emotion. Hence coffee and halibut.

Expand full comment

You beat me to it. You see this all the time in my field (teaching). I say, “Kids these days are not alright,” and then someone pulls up a quote from Plato saying that the kids in his day weren’t alright either, as if the fact that someone in the past was concerned about young people proves that any problems today are illusory.

Expand full comment
Apr 25·edited Apr 25

I completely agree with your point - but disagree with your example of it! The point of the Plato quote isn't to say "kids in classical Greece weren't okay" halibut-style; the point is that Plato's description of kids is *so similar* to present-day descriptions of kids that people literally can't tell whether the kids being described come from classical Greece or present-day Europe unless they're told the origin of the quote - and therefore there probably isn't a downward trend.

A more interesting argument that can be made from the same observation is that the reason why Plato's description is so uncannily similar to ours is that "how kids look to an elder" is a sort of constant, and kids will always look that way to elders no matter what the kids are actually like, therefore we need proper metrics rather than just elders' impressions.

(Not endorsing any of these arguments myself, by-the-by, just setting them out because I think you aren't doing them justice!)

Expand full comment

It might sound similar because that quote isn’t from Plato but is from someone’s dissertation a hundred years ago.

Expand full comment

Oh wow, I never knew that. Well-caught!

Expand full comment

I'm not sure I know it now.

Expand full comment

Which one are you thinking of? Because I’ve seen people quoting Plato (framed as Socrates):

> The teacher in such case fears and fawns upon the pupils, and the pupils pay no heed to the teacher or to their overseers either. And in general the young ape their elders and vie with them in speech and action, while the old, accommodating themselves to the young, are full of pleasantry and graciousness, imitating the young for fear they may be thought disagreeable and authoritative.

http://www.perseus.tufts.edu/hopper/text?doc=Plat.+Rep.+8.562&fromdoc=Perseus%3Atext%3A1999.01.0168#note-link1

Expand full comment

But that quote isn't a complaint about "kids these days", it's just a description of a hypothetical scenario.

Expand full comment

I know the Plato quote is fake, but it's worth remembering that Plato lived at a time of great Athenian decline. The kids of his day were the ones who would grow up to lead Athens into a bunch of badly-conceived wars that would end with Athens as a Macedonian vassal. The philosophers who came before him were having civilised dialogues; the philosophers twenty years younger than him were lying around in the street wearing barrels.

Expand full comment

> A more interesting argument that can be made from the same observation is that the reason why Plato's description is so uncannily similar to ours is that "how kids look to an elder" is a sort of constant, and kids will always look that way to elders no matter what the kids are actually like, therefore we need proper metrics rather than just elders' impressions.

(a) That quote is fake anyway.

(b) Athens suffered a decline in power and importance during the fourth century which it never recovered from, so "Fourth-century Athens was in a similar situation to us" isn't quite as reassuring as the quotemongers seem to think.

(c) Even if the quote were actually true, you'd still need way more datapoints to show that "'how kids look to an elder' is a sort of constant", and the mere fact that everybody always brings up this one specific quote should make you suspicious that these other datapoints don't actually exist.

Expand full comment

Agree with (a) and (b), but for (c) I think we actually do have some datapoints for the phenomenon: our experience of how we were regarded by elders when we were kids, and our experience of how we find ourselves regarding kids now that we're elders ourselves.

I do think there are also other works of literature that reference this phenomenon - Wodehouse and Jerome K. Jerome spring to mind, but I'm sure there are others besides - and suspect the "Plato" quote is favoured simply because it appears to show the phenomenon being present over the longest timescale (and possibly because Plato has considerably more cultural clout than P.G. Wodehouse...)

That being said: I'm personally not convinced the phenomenon actually exists - in other words I'm not convinced that "how kids look to an elder" actually is a constant - I'm just suggesting that, should one wish to argue that it does, one has more ammunition than simply that "Plato" quote.

Expand full comment

The other literature you mention is all from the post-industrial West, which is historically unusual in many respects. To show that the phenomenon is actually universal, we'd have to look at data from a variety of different societies and time periods.

Expand full comment
Apr 26·edited Apr 26

The "Plato" quote, IIRC goes on about how kids are lazy and uncouth, disrespectful and slovenly.

Kids these days are anxious and depressed. Is different.

Expand full comment

I always took that to prove that kids are perpetually NOT alright. Most people make their worst mistakes between the ages of 15 and 22.

Expand full comment

"The Winter's Tale":

"I would there were no age between ten and three-and-twenty, or that youth would sleep out the rest, for there is nothing in the between but getting wenches with child, wronging the ancientry, stealing, fighting — Hark you now. Would any but these boiled brains of nineteen and two-and-twenty hunt this weather? They have scared away two of my best sheep, which I fear the wolf will sooner find than the master."

Expand full comment

"Wronging the ancientry" is a GREAT phrase! Words to live by.

Expand full comment

"Wronging the ancientry"

Sir, your comment is quite ambiguous. Are you for or against such wronging? ;-)

Expand full comment

Many thanks for such a wonderful quotation.

“They have scared away two of my best sheep” —now that's a tragedy.

Expand full comment

Maybe they were influencer sheep. Sheep tend to flock, after all. If the best sheep go, the remainder are soon to follow.

Expand full comment

I read a book where a character was complaining about youth. "At age 10 they should be shut up in a barrel and fed through the bung hole. At 18 you decide whether to let them out or drive in the bung."

I read some strange books.

Expand full comment

Heinlein. I think it's one of the Lazarus Long quotes (but I'm willing to be corrected). It's been misattributed to Mark Twain. And the lower and upper bounds of the age range change as it's been misquoted.

Expand full comment

Vaguely surprised it isn't Oscar Wilde, it has that smart guy attitude.

OTOH, Wilde probably wouldn't get within a hundred miles of child raising.

Expand full comment

In my experience, 9-year-old Boy Scouts are the worst people on the planet. I run when I hear them coming—they are quite audible at a distance.

Expand full comment

Attributed to Mark Twain

Expand full comment

The real issue is culture drift, as Hanson has been exploring lately. The kids have, in fact, been getting worse continuously for several millennia, by all the standards both originally and incrementally. If you think they're getting better (or at least were until you were full grown) you can make an argument for that.

Expand full comment

>The kids have, in fact, been getting worse continuously for several millennia, by all the standards both originally and incrementally.

Kids of the last couple of generations are stronger, smarter, and faster, and read and write more, than almost any generation before them. To the people of ancient Athens, the kids of today would on average be unusually tall, buff, and intellectually capable, their skin unblemished by pockmarks.

Most of that is due to them being one of the few generations in history to get such good nutrition, such broad access to information, and such a low disease burden, but still.

https://xkcd.com/1414/

https://www.smbc-comics.com/index.php?db=comics&id=2337

Expand full comment

I started out being mildly disappointed by your comment, then visually skipped down to where you were posting xkcd and smbc links and thought "oh yeah, ok".

No, friend, you skipped the actual point, which is where the standards are coming from. Yes, by your standards today, being taller and handsomer and smarter is worth being utterly morally corrupt and worthless in terms of piety or physical courage.

I'm sorry that you hallucinated that I said "worse by all standards" but that's on you. Maybe reading comprehension hasn't improved by as much as people who make comics for a living think.

Expand full comment

"by all the standards"

Are we talking about the same Athenians?

They blurred the line between morality and physical beauty so much that a sexy person stripping was a plausible defence because beauty was seen as a sign of having the favour of the gods.

And of course they'd view us as awful in some ways, we don't worship their gods, we don't follow their social customs. Awful!

The point is that social customs are always changing, generation after generation.

Some people interpret change as social decay: they see the youth moving away from the almost-random set of morals they themselves embraced (on the day when society finally, mostly, figured out morality, just as they hit young adulthood) and decide that means society is going down the toilet.

Though that would of course also apply if time were reversed. Someone brought up in the 18th-century Catholic Church, faced with a future that included the Sacred Band of Thebes, would see that as a sign of awful moral decay.

Because socially popular morality is an almost random walk over time. As flighty as fashion. Depart in either direction through time and pretty soon society starts to look terribly immoral.

Expand full comment

Now it sounds like you're partly agreeing with me, so I'll be charitable. Yes, all cultures drift over time (Robin Hanson has been writing some great stuff on this very recently). But specifically by the standards of 3000 BC the ancient Athenians were already hopelessly degenerated in ways at least analogous to how we are hopelessly degenerated by Athenian standards. If you just average the values of the human population of, say, the steppes on westward every 500 years or so, you'd observe a slow decline calibrated by the original standards of, say, the Yamnaya or by increments (each 500-year block looking at the more recent).

But no, I'm not talking about surface-level ideology (like which gods or what customs). Your bit about interpreting change as decay is the bog-standard explanation and requires no elaboration. I'm making a different point, which is that actual substantive change has been happening over time, and that this change would be seen as bad by people from the past. We think of ourselves as better than the people in the past - you wouldn't lynch a black man for being seen with a white woman, or (further back) capture an escaped slave at gunpoint to return him to the authorities. But -could- you, if you wanted to? Probably not. You definitely couldn't row a longship up an estuary to kill armed men, or live out of a wagon while conquering your way across Eurasia on horseback. We'd like to paper over this, but the fact is that our own ancestors would think we suck - unblooded boys who in their manners and customs are more like girls even than boys. And that's fine. That's what becoming civilized looks like. But let's not kid ourselves about "lol the ancients were just boomers who thought the kids were no good".

Expand full comment

Have the kids ever been alright? They certainly weren't when I was in high school...

Expand full comment

I think that was Cicero?

Expand full comment

Sure, it’s no rebuttal to a general argument that Kids Aren’t Alright. But I think he’s fair game when people are arguing about a specific failure mode. Specifically that the kids are a certain kind of wrong.

https://www.themotte.org/post/890/culture-war-roundup-for-the-week/192595?context=8#context

Expand full comment
Apr 25·edited Apr 25

Good job patting yourself on the back for having explained away the outgroup's behaviour as a consequence of them having characteristic personality flaws?

The intro to Scott's post only works due to the arbitrary restriction to technologies in

> Maybe the argument is a failed attempt to gesture at a principle of “most technologies don’t go wrong”?

Drop the qualifier that the thing to go wrong must be technological, and you are one step away from a good candidate for a heuristic that people in the real world actually use - "most threats drummed up by people who stand to gain from you believing them don't play out". Surely this heuristic has its utility, especially if the Cassandra is sufficiently socially removed from the target that she could shrug off being revealed as a fraud later. The prototypical catechism on applying it takes the form of "sketchy outsider tries to scare you and then make you buy his product" - think Melville's "Lightning-rod Man" (which I actually had as school literature, for English class in Germany). I'd reckon this is why this anti-AI-risk argument is also often accompanied by hinting that AI safetyists are icky.

It's also unsurprising that people would feel betrayed if you pulled a reverse card on the heuristic - these sorts of simple rules are often perceived by the holder as hard-earned pieces of life wisdom that comprise their advantage over the younger and more naive, and to insinuate that they actually don't give an unambiguous answer is to devalue them and by extension the advantage.

Expand full comment

Cassandra is an odd thing to call the type of person you are describing, given that part of her curse was that she was always actually right.

Expand full comment

Yeah! Surely there were other prophets who were wrong and unpunished! Use those guys as examples!

Oh wait, you can't because they were WRONG.

Expand full comment
Apr 25·edited Apr 25

The classic modern example of the character you're describing is the con man "Professor" Harold Hill, from the musical "The Music Man", who drums up a moral panic over any arbitrary recent change in order to sell the townspeople a bogus solution:

https://youtu.be/LI_Oe-jtgdI?si=ffuhyZe1rD9GBrTB

Oh, we've got Trouble

Right here in River City

With a capital T and that rhymes with P and that stands for Pool!

Expand full comment

Many Thanks! That song always comes to my mind in any discussion about moral panics. :-)

Expand full comment

Not necessarily subconscious. Putting on your Talmudic thinking cap, first you decide what the right answer should be based on your experience, or what you think the world should be like, and then choose your evidence.

Expand full comment

This is definitely correct, but I would say all reasoning is motivated, so you still have to look a little deeper to figure out what makes the motivation or reasoning Bad. Comparing the Coffeepocalypse arguments to Scott's counterarguments, obviously we both think Scott's reasoning is better and more convincing, but why? Well, he goes through the steps of considering counterarguments and examples and then dismissing them for coherent reasons...but what makes those reasons/reasoning coherent? Well, I think one of Scott's main motivations is to have a worldview where his beliefs about how the world works actually align with how the world really works. In other words, his motivation is primarily to "Truth."

Of course, we also know Scott has a higher prior than many that any sufficiently advanced AI will destroy or enfeeble Humanity, and he is also motivated by that. I think the Coffeepocalypse arguments are intended to lower that prior; it's not convincing because the reasoning is Bad and someone of Scott's caliber will see right through that. But most people are not up to Scott's caliber of reasoning and therefore may lower their prior for AGI bringing Ruin, and thus will be less motivated to resist it. Again, I don't think their arguments are intended to be consistent, they're just trying to move priors one way or another. They don't work on me, but mostly because I share the prior that AGI is potentially dangerous, and one of my main motivations is having a worldview that would be accepted and explicable to Scott Alexander and his readers.

Expand full comment

I think the entire argument is against expertise, especially regarding "new technologies". It's "Experts tell you something, but look, here experts were wrong about something similar, so just distrust experts in general".

And people more vividly remember "experts were humiliatingly wrong" stories than "experts warned us of something and managed to prevent it" stories. Also more than "experts didn't warn us and some disaster happened" - those stories are the least memorable because there's no heroic protagonist, just the lack of one.

Expand full comment
author

I feel like if this was the argument, they would have made more of an effort to show that it was "experts" who were against coffee. Instead they blame kings, queens, and alcohol vendors.

Expand full comment

At least to me these register as critiques not so much of experts as of predictions in general. "People are bad at predicting things. Nothing ever happens, so you shouldn't worry."

Expand full comment

My new motto.

Expand full comment
Apr 25·edited Apr 25

I'm thinking of the person I know who always makes a similar argument, and I think she aims it at the conjunction of experts + long term predictions, especially catastrophic ones.

Scott wrote:

"And plenty of prophecies about mass death events have come true (eg Black Plague, WWII, AIDS)."

The person I'm thinking about would say that none of those were predicted enough in advance to qualify as long term predictions.

Expand full comment

There's the notorious documentary, filmed in 2019 and released on Netflix in January of 2020, that went into detail about how the Wuhan wet market was a very good candidate for the origin of the world's next big pandemic...

Expand full comment
Apr 25·edited Apr 25

It’s worse than that. People are bad at predicting things, what they predict usually doesn’t happen, and when big things happen they usually weren’t predicted.

So, by this logic predictions of AI doom should make me update that it’s *less* likely to happen!

Expand full comment

In the coffee example, you're right. But the argument is usually framed around experts.

Also, experts might be the wrong term here. Maybe it's closer to "elites", which Kings and Queens definitely fall under. I could imagine someone saying "President Clinton claimed X but actually Y happened", about, say, the economy. And this would count as proof that elites get things wrong, even if Clinton isn't actually an "economic expert".

Expand full comment

There are much more concrete examples re. the economy. President Obama famously claimed that, without his proposed stimulus, unemployment would reach 8%, but the stimulus would prevent this result. We got his stimulus, and unemployment went over 10%, and stayed above 8% for two years. Now, I don't know whether any (let alone most) economists would have supported his claim in 2009, so this doesn't exactly reflect on economists' capabilities. But it sure does reflect on politicians' and other "thought leaders'" use of expertise claims to advance their agenda.

Expand full comment

To be fair, presidents have a huge incentive to express overconfidence in their policies, since it directly relates to their electability. Unfortunately, we don't live in a society epistemically humble enough that people are more likely to vote for a candidate who expresses "The best evidence I have suggests this will probably help."

To an extent, what presidents try to do probably reflects what they think are good ideas. But what they say very likely isn't an accurate reflection of what they think.

Expand full comment

Sure - politicians do politics. Maybe, sometimes, they really think their policies will benefit their constituents in significant ways (as opposed to getting more votes to get power, or to stay in power).

But my point was that people who claim the authority of experts to advance their agendas deserve fully as much skepticism as people who advance their agendas based on no support from anyone.

Expand full comment

I know it's tangential, but creating a forecast that underestimated the unemployment rate by 2% doesn't strike me as an especially damning indictment of economists. I'm happy they got it within 5%.

Expand full comment

The claim you attribute to Obama may be famous, but I haven’t seen any evidence that he actually made that claim.

If I may offer a guess as to what you are referring to: On January 9, 2009, Christina Romer and Jared Bernstein were not working in the White House but they had a semi-official status because they had been chosen for jobs in the White House starting on January 20, when Obama would be inaugurated. They wrote a paper projecting the effects of a stimulus package, which included a graph projecting unemployment with and without a stimulus. It showed unemployment topping out at 8% with a stimulus and 9% without.

Their baseline projection turned out to be wrong: in February 2009, when the actual stimulus package passed, unemployment was already at 8.3%.

Expand full comment

OK - so it was (some) economists.

Expand full comment

Undermining experts is about undermining their claim to expertise. Hence comparing them to non-expert authority figures.

Expand full comment

I almost take it as the opposite: moral panics by people with little-to-no understanding of the issue, but high status, are common. As an example of where this argument somewhat works, I'd provide most "normal" concerns about AI (e.g. taking jobs or misinformation).

Where it falls flat is in explaining Geoffrey Hinton, Stuart Russell, Yoshua Bengio, etc.

Expand full comment

If you tried to make the analogy closer to Hinton, Russell, and Bengio you'd get the "moral panic" raised by Manhattan Project scientists relating to the nuclear bomb potentially resulting in human extinction.

Expand full comment

Re: "the experts" - did you take counterfactuals into consideration, as in how planet Earth could have been had we not placed so much faith in experts, and thus perhaps had taken a less foolish path(s)?

Expand full comment

This counterfactual is pretty easy: without trusting experts we'd still be living in caves. We especially trust experts every day of our lives. I flew on a plane just the other day!

Expand full comment

What is it that you are describing, technically?

Expand full comment

It's a big pretty white plane with red stripes, curtains in the windows and wheels and it looks like a big Tylenol.

- Stephen Stucker, Airplane!, 1980

Expand full comment

>This counterfactual is pretty easy, without trusting experts we'd still be living in caves.

Well said, and quite literally true! Every time I step into a building, I'm relying on the structural engineers and the people who designed and ran all the processing steps for all of the materials in the building. _I_ certainly don't have the relevant tensile and compressive strengths and safety margins at hand.

Expand full comment

I disagree with the phrase "humiliatingly wrong" because it implies that these people suffer some kind of reputational damage.

Expand full comment

More explicitly than just “experts,” it’s distrust of Silicon Valley tech bros who want to feel important and be at the center of the narrative. Crypto hype is fresh on everyone’s minds - the blockchain bros cried wolf, and then when everyone came to look and the wolf wasn’t there they said, “no wait you just can’t see it, only we can see it because we’re better/smarter/more technical than you!”

Most of the media is buying the AI-sooner narrative because it gets clicks, but people are also understandably exasperated at media and negativity bias and doomerism in general. There are a LOT of reasons to be skeptical of the incentives of everyone hyping this narrative, because they’re people/industries who desperately want/need to be the center of attention.

Expand full comment

Bitcoin is still bullshit (its primary use case is facilitating illegal transactions, aka literally committing crimes), but it continues to remain expensive bullshit.

Expand full comment

Even worse, people remember "experts warned us of something and managed to prevent it" stories AS "experts were humiliatingly wrong". Climate change deniers will cite ozone layer depletion and Y2K as examples of the experts warning about things that didn't happen.

Expand full comment

Y2K didn't happen because people spent a lot of time and money to fix it.

Expand full comment

Yes, that's my point. But some people see it as an example of a baseless scare.

Expand full comment

As the Y2K deadline approached, not many people, including elites or experts, were saying that the problem was largely fixed, or that there wouldn’t be serious if not catastrophic problems because the appropriate people had taken appropriate action. In retrospect, the lesson seems to be that the expenditure of money, time, and effort to avoid catastrophe was sufficient to avoid most serious problems. (Of course, we can’t run the counterfactuals of spending 10% or 20% or 50% less time, money, and effort.) In any case, back in late 1999, there were still plenty of Y2K doomers who had more-or-less sophisticated (but, in retrospect, incorrect) arguments about why the Y2K problems had not been – and even “could not be” - adequately addressed. Still - it’s hard to know how to apply the lessons of Y2K to AI.
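(For readers who only know Y2K by reputation, the underlying bug really was mundane. The sketch below is a hypothetical Python illustration, not code from any actual system: two-digit years stop subtracting sensibly once "00" means 2000, and remediation largely meant finding and patching code like this, often with the "windowing" trick shown in the second function.)

```python
# Hypothetical illustration of the classic Y2K bug: years stored as two digits.
def age_in_years(birth_yy, current_yy):
    # Pre-remediation logic: silently assumes both years fall in the 1900s.
    return current_yy - birth_yy

print(age_in_years(65, 99))  # 34  -- correct in 1999
print(age_in_years(65, 0))   # -65 -- nonsense on 2000-01-01

def age_in_years_windowed(birth_yy, current_yy, pivot=30):
    # A common remediation ("windowing"): treat 00-29 as 2000s, 30-99 as 1900s.
    def expand(yy):
        return (2000 if yy < pivot else 1900) + yy
    return expand(current_yy) - expand(birth_yy)

print(age_in_years_windowed(65, 0))  # 35 -- behaves sensibly across the rollover
```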

Expand full comment

Yup. At the time, I thought that there was a lower bound on the amount of damage that Y2K would do, from the roughly 25% of the economy (roughly half of small businesses) that did nothing to avoid the problem. I was wrong - and I'm still not sure why the final impact, even on that sector, was so small. I've never been so happy to _be_ wrong.

Expand full comment

That's interesting - is it possible that small businesses mainly used off the shelf software that had been fixed by the vendor, so even if the business didn't care about Y2K, if they happened to update their software before the millennium, it got fixed anyway? But then it's hard to imagine businesses that didn't care about Y2K would be diligent about software updates, especially because updating would be more manual and complicated back then.

Or maybe organisations that were small enough not to have done anything were also not that reliant on computers anyway, so when they ran into problems they could just do things manually?

Expand full comment

Many Thanks!

>But then it's hard to imagine businesses that didn't care about Y2K would be diligent about software updates, especially because updating would be more manual and complicated back then.

Agreed!

>Or maybe organisations that were small enough not to have done anything were also not that reliant on computers anyway, so when they ran into problems they could just do things manually?

Could be, though even tiny organizations generally need to send out invoices and keep books.

Expand full comment
Apr 25·edited Apr 25

I suggest that these are people experiencing epistemic learned helplessness.

cf. https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/

> "And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals."

> "And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky."

> "And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting."

Presented with a flood of logical steps by someone capable of arguing circles around me, adding up to a conclusion that is deeply counterintuitive; and presented perhaps also with a similarly daunting flood of logical steps by someone else also capable of arguing circles around me, adding up just as convincingly to the opposite conclusion... what is one to do?

One can sway back and forth, like a reed in the wind, according to which super convincing argument one last read.

Or one can throw one's hands up in the air and say: "There are hundreds of incredibly specific assumptions combined with many logical steps here. They all look pretty convincing to me; and actually, so do the other guy's. So clearly someone made a mistake somewhere, possibly more than one person and more than one mistake; certainly that's more likely than both the mutually contradictory yet convincing arguments being right simultaneously; but I just can't see it. What now? What other information do I have? Well, approximately ten gazillion people have predicted world-ending events in the past, and yet here we all are, existing. So I conclude it's much more likely that the weirder conclusion is the one that's wrong."

Condense that to a sentence or two, couple with an average rather than expert ability to articulate, and you arrive at coffeepocalypse.

From the above essay:

> "Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications."

AIpocalypse is just another idea to add to that list.

Expand full comment

People who believe in AI doom, even a moderate amount (more than 1% chance), seem to me to be invoking a Pascal's Mugging, but vehemently disagree with my conclusion that they're similar.

In both cases I agree with you that nobody seems to have a good counterargument against AI doom/Pascal, but unless they already agree with the conclusion, people are not at all *convinced* by that argument.

Telling me there's a non-zero chance of AI doom doesn't tell me how I should or shouldn't lead my life. There's a non-zero chance that an asteroid will destroy the planet this year as well, but almost nobody treats the possibility of that as a legitimate reason to live in fear or build underground bunkers.

Because total destruction of everything we value is part of the equation, all answers to the question of "how seriously should we take this" come out to infinity for any doomsday prediction. We obviously cannot react to every such infinite concern with proportionate resources (we can't go spending all of earth's resources every day), and for every doomsday except one that we ourselves agree with, we think the actual appropriate level of resources to spend is in fact close to zero. The problem is multiplying by infinity, and the solution is to take infinity out of the equation. People who agree with the infiniteness of the problem don't do that; they keep infinity in the equation and wonder why everyone else doesn't get the same answer.
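(To spell out the arithmetic of that objection: a naive expected-loss calculation is probability times size of loss, so if the loss is treated as infinite, then p × ∞ = ∞ for any p > 0, and a one-in-a-million doomsday and a 50% doomsday both come out "demanding" unlimited resources. Replacing the infinite loss with some large but finite value is what "taking infinity out of the equation" means here.)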

Expand full comment

A >1% chance of something happening isn't a Pascal's Mugging, it's an "it will happen at least one time out of 100". A planet-killer asteroid hitting earth has an incredibly lower chance of happening anytime soon, and yet people do seriously worry about that (see eg https://en.wikipedia.org/wiki/Asteroid_impact_prediction).

Expand full comment

We actually have *NO* idea how likely AI doom is. It could in fact be far, far less likely than an asteroid. Also, with asteroids we can try to game out the possibility of one hitting us and wiping out humanity, but even knowing it's very unlikely doesn't tell us it's impossible even on short timeframes.

That's why I refer to people who believe AI doom may happen (even if they also think it pretty unlikely) compared to people who don't. Pascal's wager was originally about believing in God, and how it makes sense to believe in God even if the probability of God existing was low. As someone who already believes in God this argument makes perfect sense to me! But, I totally understand why this is not at all convincing to someone who doesn't already believe in God.

Expand full comment
Apr 25·edited Apr 25

>We actually have *NO* idea how likely AI doom is

Then you are just arguing probability numbers. This has nothing to do with multiplying by infinity; people who argue coherently for AI risk do so because they assign a high probability of AI doom coming to pass - not primarily because the outcome is infinitely negative.

People who are worried about infinite negative utility are clearly making a Pascal-class mistake, and it is not a correct argument against AI.

Expand full comment

We have evidence for large asteroids hitting Earth in the past, so there is a non-zero probability of one hitting Earth in the future. You can argue about how likely and how soon, but you cannot argue with non-zero.

We have *no* prior evidence for AI, or AI doom. Saying AI is like humans is just wrong; we're pretty sure that we're not replicating human thinking in there.

So there's a more uphill battle there.

Expand full comment

Do we have evidence of AI being smarter than humans? Say in Chess or Go? Or maybe image recognition? Do we have evidence in our historical record of one intelligence wiping out another intelligence?

Expand full comment

On top of that, I'd argue that we have a lot of evidence *against* the possibility of functionally omniscient/omnipotent entities.

Expand full comment

I'm saying that their belief that AI doom is likely is unjustified. They cannot provide evidence that the chances are actually high, because it's an unprecedented question that depends on future technological advancements that we cannot know are even possible. Recursive self improvement is an idea that may or may not be true, not a fact that we can plan around.

I believe that the counter-argument to the fact that AI doom is an unjustified belief is to talk about how serious it would be for us, if true. That's Pascal.

Expand full comment

The issue I've come up against is that the people who argue for catastrophic AI risk tend to disregard other catastrophic risks that strong AI would hedge against. AI strong enough to pose an existential risk could also help fight climate change, find alternative energy sources, reduce car deaths, find cures for cancer, eliminate the need for human drudgery, and do polygenic screening which, when combined with embryonic selection, could reduce illness and death in general across a population. There is currently a 100% chance you will die at some point. What weight should you assign to addressing that issue? The accelerationist argument is almost always framed as 'accelerationists hate human beings' which is a severe weakmanning of the accelerationist arguments.

I do think we should be arguing relative probability numbers as well as we're able to. But in practice, infinity or no, people strongly tend to choose one goal or catastrophe and let it fill their entire mental field of vision. Most people just REALLY don't like to have to juggle 7 mental balls or consider that multiple inputs may contribute to a single result.

Expand full comment

> There is currently a 100% chance you will die at some point.

Well, the heat death of the universe is probably going to be a problem, but the chances of a well-preserved cryonics patient being restored to health in some form or other is greater than zero.

Expand full comment

Dunno about cancer and polygenic screening, but I think we've got adequate non-AI-based options lined up for energy, climate change, drudgery, homelessness, and car deaths, it's mostly just a matter of legislation and budgeting at this point.

First, broad regulatory exemptions for solar PV (and, optionally, wind) construction. Anybody objecting on environmental-impact grounds needs to show that a given project would somehow be worse than an equivalent coal plant, at least until there are literally no more grid-scale coal-burning generators operating anywhere in the world.

Second, Georgist tax on land value, replacing most other taxes.

Third, GPS tracker (and, optionally, ignition-lockout breathalyzer) as mandatory automotive safety equipment. All roads become hassle-free toll roads, no more DUI hit-and-runs, no more selective enforcement of traffic laws based on individual biases. Urban planners could experiment with complex schemes of road-use fees varying by time of day - or even outright banning cars from particular areas - at the push of a button, confident that an updated map would be on everyone's dash faster than you can say "in fifty yards, turn left."

Revenue would increase while enforcement costs plummet. Spend the initial wave of money (and surplus solar and wind power) on accelerated weathering: grind up a mountain's worth of olivine and dump the resulting powder in the ocean, thereby absorbing excess CO2.

Once that situation is stabilized, celebrate by implementing universal basic income. Anyone who doesn't like drudgery will spend some of their share on avoiding it, thus bidding up the price of that dwindling supply of drudgery-tolerant labor, until most dirty jobs cross the threshold of "more cost-effective to automate away," while remaining automation-resistant ones pay so well as to be prestigious by default.

Of course, people won't just sit around idle. They'll use that security and abundance to start systematically solving all the problems they were previously obligated to ignore, due to being too busy flattering some middle-manager's ego in exchange for subsistence, or too broke to do anything but get in arguments on the internet.

Expand full comment

That we have no idea how likely it is is certainly in dispute: https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist

Expand full comment

I like your last argument. Very clearly no one actually believes AI will cause the death of all humans because their behavior would be radically changed.

Expand full comment

>because their behavior would be radically changed

It depends on the time scale and the probability and the person's time horizon.

I'm 65, so I don't lose sleep over much that happens more than 20 years from now. My default assumption is AGI succeeding and probably (with _HUGE_ uncertainties) eventually replacing humans (but possibly preserving large chunks of our culture).

Expand full comment
founding

I took a two thirds paycut and moved from a place I liked more to a place that I liked less in order to do my part about preventing the AI apocalypse; I think that counts as strong evidence that I believe in it.

[Why didn't I do <other thing you think AI risk implies>? Well, probably because I didn't think AI risk implied that thing, or it doesn't match up with my personal competencies, or whatever.]

Expand full comment

Increasingly many people in the EA AI safety community have taken fairly drastic actions. But you're right that basically none of them think it's a 100%, or even >95%, certainty, and the possibility that they'll have to live with the consequences of their actions keeps them from doing anything even more drastic (e.g. spending all their money or getting themselves imprisoned or...)

Expand full comment

...what?

every time I see someone make this argument, the actual changes they recommend are pretty nonsensical

the truth is, i have absolutely no idea what to do with this information. every "extreme action justified by extreme threat" i can think of has a negative expected value

i suspect those are the same extreme actions you are thinking of, but since you don't actually believe, you don't feel obligated to continue the chain of thought to its conclusion

Expand full comment

Pascal's mugging is, and always has been, about prospect theory and human psychology. The solution is very simple: stop assigning non-negligible probabilities to extremely unlikely events. Pascal's mugging is about a problem with human psychology—it shows that drawing our attention to an extremely unlikely, but potentially highly-important event, causes us to hugely overestimate its probability.

Expand full comment

I think the steelman for coffee or "failed predictions of disaster" arguments is based on Lindy Effect sort of things (well, humanity's been around a quarter million years, plagues, volcanoes, hurricanes, tidal waves, wars and genocides have all happened and we're still here, sooo....), and maybe for some few better read and better epistemic-ed arguers, pointing to nuclear-disaster style arguments about how hard it would REALLY be to actually kill every human.

I personally don't buy either of these arguments (for one thing, we've been through at least one and probably two significant "close to extinction" genetic bottlenecks during our time as humans https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2842629/), but they're a better class of argument than "we don't worry about halibuts" or whatever.

Expand full comment

I don't think this study really relates to the question of human extinction - reading through it, it seems to be more about the "genetic bottleneck" of a small group of humans moving either out of Africa or across the Bering Strait, not all of humanity being close to extinction ("we" haven't been through a bottleneck if "we" includes Africans).

Expand full comment

You know, you're totally right, thank you. On further research, it looks like the bottleneck I was thinking of was around 900k years ago, before we were human, and the population went from ~100k individuals to ~1k, and stayed low for ~100k years. So still a "close to extinction" bottleneck, but for human ancestors vs anatomically modern humans.

Expand full comment

I think the steelman of the argument they are making is: for almost every new technology there have been people making dire warnings and apocalyptic predictions about it (e.g. trains would kill everyone over a certain speed, video games corrupt our youth, nuclear power, etc.).

These arguments are easy to make because new technologies are scary to many people and it is very hard to definitively show something that is new and unproven is safe. Nearly all of these arguments have turned out to be mostly or completely wrong.

Therefore when someone says AI could kill everyone, your prior should be that it almost certainly won’t.

Whilst I can broadly accept that, it isn’t an argument to just ignore evidence about AI, just that before you examine the subject that should be your starting point.

You could also make the bolder claim that the burden of proof is on those claiming AI will cause calamity, given how often those sorts of arguments have been made and how beneficial new technologies have been on average. But I’m not sure I would go that far.

Expand full comment

I think it's even broader than this. There is a market for consuming doomsday prophecy. So, writers produce doomsday prophecy and are incentivized to make it as plausible as possible to gain market share. This has shifted over my lifetime from nuclear war to the ozone layer, global warming, climate change, COVID, and now AI. If you put my casual knowledge against any of the top writers and say, "respond to my specific arguments otherwise you are wrong", I lose every time.

In this paradigm, most doomsday authors will switch to whichever doomsday narrative is most salient over time. And, I think this is what we saw with Scott, Yud, etc. They all switched to COVID for a bit, and now that that has lost steam, they are back on AI. I think their COVID record is good evidence that they are overreacting to AI. The historical overreaction to coffee shows that this is a normal thing to happen in societies.

Expand full comment

> If you put my casual knowledge against any of the top writers and say, "respond to my specific arguments otherwise you are wrong", I lose every time.

+1. This is the core of it. I'm reminded of conspiracy theorists who insist that you're not entitled to disagree with them until you've watched [some random 4h long youtube video], or read Marx cover to cover.

Sneering about epistemic learned helplessness is dishonest, when it just isn't practical for me to spend months looking into everything in detail. I do take AI risk seriously, and I've engaged (admittedly somewhat superficially) with many of the arguments. What else do you want me to do?

Expand full comment
Apr 25·edited Apr 25

As a conspiracy theorist, I have the same complaint about Rationalists, Normies, and The Experts.

> What else do you want me to do?

I would like you to wonder what is true, using an appropriate form of logic. Knowledge that there is more than one form is typically only cognitively available when discussing logic itself directly/abstractly, and this knowledge availability phenomenon applies to many things other than types of logic.

Expand full comment

> As a conspiracy theorist, I have the same complaint about Rationalists, Normies, and The Experts.

Exactly. In the end we all have to apply heuristics to decide what is worth paying attention to.

> I would like you to wonder what is true

I do. But I am a very small being in a very large world.

Expand full comment

> In the end we all have to apply heuristics to decide what is worth paying attention to.

This is so? There is no alternative?

> I do.

This is so? Is "do" a binary?

Expand full comment

>This is so? There is no alternative?<

The alternative is "pay attention to things at complete random". You have finite scope, your choice is how to aim.

Expand full comment

Yes, you get me to come back tomorrow if you tell me how bad things will be. In my old age I find I've become less and less certain about anything. I don't know enough to have an opinion either way. And so for most things, "I don't know" is the right response. And arguing with other people who mostly know nothing is just a waste of my time. So why do it?

Expand full comment

Because the facts in your last sentence are actually predictions/opinions?

Expand full comment

Because if all the wisest people take this option, all the actual actions will get taken by unwise people, and I for one don't want to live in that world.

(Cf that one line in HPMOR about how "good people" are often too busy wringing their hands over what is ethical to take actually helpful actions)

Expand full comment

Do you have any examples of any of the people you are talking about (or anyone prominent, really) actually saying covid would end the world (a "doomsday prophecy")? It's been a few years, but my memory from early in the pandemic is a lot heavier on people who said things like "it will be over by May" and "I'd be surprised if there are even 10,000 deaths in the United States" than people who overestimated the actual number of deaths, which is in the millions in the U.S.

Expand full comment

I also am pretty sure Scott and Yud kept doing AI posts in 2020-2021!

Expand full comment

Just re-read Scott's post from March 2020.

https://slatestarcodex.com/2020/03/02/coronavirus-links-speculation-open-thread/

Scott advocates for healthy quarantine, fomite prevention measures, home healthcare, contact tracing, wearing masks, wearing full respirators, and zinc supplements, and praises the Chinese lockdowns. He mostly ignores costs associated with these measures. It's an impressive list of predictions that would be adopted by global policy makers while also being ineffective and in some cases disastrous, especially to the global poor.

Meant "doomsday" as people predicting doom, not the literal end of the world. Maybe I am using the word incorrectly.

Expand full comment

If everyone had followed those recommendations in March 2020 then we probably would have eradicated the virus entirely by July 2020 and got on with the rest of our lives, with far less total disruption, probably with fewer total days of lockdown.

Expand full comment

Nearly every government on earth tried this and failed.

Expand full comment

No, they tried something that looked vaguely like it and failed. Locking down until eradication (and then keeping borders closed) was the best move at the time.

Locking down until almost-eradication is a huge fucking waste of time and money which just happens to look a lot like locking down until eradication.

Expand full comment
Apr 25·edited Apr 25

>than people who overestimated the actual number of deaths

I was an (obscure!) overestimator.

In the early days, when, at one point, the death rate looked like 7% and the fraction of the population who needed to have gotten it and become immune to stop the pandemic needed to be around 50%, I expected the worldwide death toll to be on the order of 2.8x10^8. I was wrong. The final toll seems to be about 40X smaller: https://www.worldometers.info/coronavirus/ gives 7x10^6.
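(For reference, the arithmetic behind that order-of-magnitude estimate, assuming a world population of roughly 8 billion at the time: 8x10^9 people × 0.5 infected × 0.07 fatality rate ≈ 2.8x10^8 deaths; and 2.8x10^8 / 7x10^6 ≈ 40, hence "about 40X smaller".)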

Expand full comment

"Doomsday prophecy" is perhaps hyperbolic, but

But Scott was wearing a full respirator out in public in maybe Feb 2020. I didn't keep up with their output at the time, but my sense was that they were advocating for more caution and for lots of interventions social distancing, travel restrictions, healthy quarantine, vaccination, boosting, etc. But then corrected faster than other authors when these ended up not doing much.

I do expect a similar course with AI.

Expand full comment

> And, I think this is what we saw with Scott, Yud, etc. They all switched to COVID for a bit, and now that that has lost steam, they are back on AI. I think their COVID record is good evidence that they are overreacting to AI.

This seems like a pretty unkind thing to say, so it better be true. Do you have any actual examples of these people making "doomsday prophesies" about COVID?

Expand full comment
Apr 25·edited Apr 25

The term "doomsday prophesy" is unfair, and unkind, and overly provacative. I like Scott and find him interesting and worth reading.

As for "overreacting", I think the history speaks for it's self. Here is Scott in march 2020 very intelligently advocating for all the ineffectual and counterproductive NPIs that became public policy.

https://slatestarcodex.com/2020/03/02/coronavirus-links-speculation-open-thread/

Expand full comment

I don't think that post matches the way you describe it here, let alone the sort of thing you claimed in your original comment. Most of it seems to hold up pretty well even in hindsight, let alone compared to the level of information that was available at the time. In general the course of the COVID pandemic seems like a big win for Scott and others in the community who were saying early on, "hey this could be a really big deal" over the people who were saying it was nothing to worry about.

Can you point to a specific claim in the post you linked that seems like a hysterical overreaction?

Expand full comment

His advocacy for healthy quarantine is a hysterical overreaction. This was easily the worst public policy decision of the pandemic.

"The story I’m hearing from most smart people is that China has done amazing work and mostly halted their epidemic. This is very impressive...Nobody’s too optimistic that democratic countries can follow their lead, though"

There is also a section titled: "Self-quarantine starting now vs. later?"

Expand full comment

China's early coronavirus response worked, as far as I can tell. Where they dropped the ball was that they didn't distribute the mRNA and other vaccines in favor of a Chinese one that was significantly less effective and continued to have a strict "zero COVID" lockdown policy long after vaccinations with Western vaccines would have made it no longer worthwhile.

Expand full comment

"advocacy for" is a bit of a stretch relative to the actual content of the post IMO. But I will grant that that part was a bit overzealous.

Still, given the overall impact of COVID and our society's response to it (whether or not you agree with the latter), Scott comes out of it looking a lot better than the people who were saying it would be no big deal. I don't think it makes a very good argument that he's prone to making mountains out of molehills.

Expand full comment

There's also a difference between warnings and prophecies. Basically, warnings are conditional, prophecies aren't. If someone says "X will happen", and it doesn't, they were wrong. But if someone says "X will happen if we don't do Y", and we did Y, they weren't wrong.

Expand full comment

> most doomsday authors will switch between the most salient doomsday narrative over time. And, I think this is what we saw with Scott, Yud, etc. They all switched to COVID for a bit, and now that that has lost steam, they are back on AI.

I don't see what happened as particularly fallacious. They've been concerned about AI, and there isn't a global pandemic, so they just talk about AI. Next, they're concerned about AI, and there's also a global pandemic, so they talk about both. Then the pandemic goes away, and they're back to AI.

These people aren't one-note pundits; they're talking about what's on their mind, and if the entire world is going nuts due to a pandemic, this is going to be reflected in their output, one way or another. And I don't recall any instance of them treating covid as being even a serious threat to human civilization, let alone human existence.

I'd also disagree with the characterization of them as "doomsday authors". Scott certainly isn't: he writes about a lot of stuff. Eliezer probably regrets not being a more effective doomsday prophet about AI, but he's not a general-purpose doomsday prophet: he saw the danger of AI early, and devoted a lot of his life to trying to stop it, and his advocacy is simply a part of that. It's the difference between a war journalist who travels the world from one war zone to another, and a war journalist who only happens to be covering a war because their country is currently being invaded.

Expand full comment

> Eliezer probably regrets not being a more effective doomsday prophet

AI doomsayers need a mass-market, inexpensive public ritual, along with some really expensive remedies, to really take off. The public ritual keeps the issue "top of mind" for the public, provides controversies for the popular press to write about, and provides a mechanism for social enforcement. The expensive remedies create profitable businesses who can bribe politicians and fund doomsayers. For climate change these are recycling/green energy. For COVID it was masks/medical care. Not sure what it could be for AI, but the first to figure it out makes billions.

Expand full comment

So AI doesn't actually fit the pattern you are trying to make it fit?

Also masks were never a profitable business, so the model doesn't work there very well.

To be clear, I do think the point that there are lots of doomsayers provides a prior against these sorts of arguments in general, but I think it also clearly does not produce such a strong prior against them that we can safely assume they are all false.

It seems like a situation where skepticism but attention is warranted, and an awareness that there is something in a lot of our minds that really likes believing doomsday arguments, even when they aren't very good.

Having said that, I think the case for AI being dangerous enough that we should be very cautious in developing it is solid, and we probably would agree on a lot of specifics in terms of that if we pinned them down carefully enough.

Expand full comment

Broadly agree.

> Also masks were never a profitable business, so the model doesn't work there very well

Masks were the public ritual and had nothing to do with money. The 50K insurance payments for treating patients w/ COVID, vaccines, Remdesivir, etc. were the expensive remedies.

Expand full comment

> And, I think this is what we saw with Scott, Yud, etc. They all switched to COVID for a bit, and now that that has lost steam, they are back on AI.

If this is what you sincerely believe, then you fail at understanding quantitative arguments. The worry about AI is that it could kill literally everyone. (That is the worry for "Scott, Yud, etc."; other people worry about other things, for example AI saying something politically incorrect, or failing to pay artists for using their art as inspiration.)

There was never a similar worry about COVID; instead it was that it could kill a lot of people... and it actually did, but far fewer than expected. So the "doomsaying" at worst meant overestimating the number of victims by one or two orders of magnitude (and even this only assuming that all the vaccination etc. had zero effect).

There are people out there who worry about crime, car accidents, fires, floods, and various other things that sometimes kill people. Should we also add them to the list of doomsayers who merely follow the popularity incentives?

Expand full comment

They overestimated the risks for COVID. So, I think it's reasonable to assume they are overestimating on AI.

I think they have brains that subconsciously fill in holes of uncertainty with doom. It's part of why they are popular, but it also leads to overestimation of risk as a power law of uncertainty: 1-2 orders of magnitude for COVID, ~7 for AI.

Expand full comment

Oh c'mon, pretty much everyone "switched to COVID" in 2020-2022, and Scott and Yud and the EAs have been on about AI safety *long* before it was cool

Expand full comment

Why wouldn't you go that far? Why wouldn't the burden of proof be on the people making a claim, especially a claim that would necessitate significant changes in how every person on earth lives their lives?

Expand full comment

Fair point. I guess it depends on definitions, but in my head putting the burden of proof on the AI risk lot (i.e. showing a decent chance AI is likely to kill us all) would mean, at a minimum, making them show it to the UK civil trial standard (i.e. more likely than not). I wouldn’t be comfortable for life to go on as normal if there was a 49% chance they were right.

Expand full comment

I would go further than that, proof that there's a 5% chance of doom seems like a decent place to put serious effort into preventing catastrophe. I've seen a LOT of conjecture about what the probability of doom is, and a whole lot of reasoning about how it could happen, but just about zero proof. If this was a court of law, the case would be thrown out as conjecture.

We can respond to conjecture, but such a situation doesn't require action, especially specific actions. I happen to think that being pragmatic about limiting AI is a good idea. For instance, forbidding recursive training (AI learning from experience to train itself) is probably a good idea. But we should be clear that there's zero proof of AGI being possible, let alone any of the further concerns about how that could lead to doom (often also involving technology we don't even know to be possible, like nanites).

Expand full comment
Apr 25·edited Apr 25

For the record, it wouldn't take sci-fi nanotechnology for an AI to be able to destroy human civilization using only an Internet connection. If enough novel killer viruses struck humanity at the same time, we'd go down like the American Indians did to European diseases. Creating a virus given a DNA sequence is well within the capability of current technology, and there are companies that you can pay to send you custom DNA sequences as long as they don't match a database of known dangerous viruses. If an AI *could* design killer viruses (admittedly a difficult task) and then bribe/threaten/trick people into assembling them (something much easier than designing one), it might not kill literally everyone, but it could certainly leave humanity in no position to oppose it.

Expand full comment

zero proof of AGI being possible?

do you not consider humans to be such proof? i certainly do. we seem to be fully generally intelligent, we certainly count as an existential threat to all beings less intelligent than we are, and we sprang out of incremental improvements to non-generally-intelligent designs

i have some trouble understanding people who are not immediately swayed by this argument, which seems utterly ironclad to me, but i guess if anyone has any novel arguments i'd like to hear them

Expand full comment

Zero proof of any of the specific things (foom/takeoff, specifically) that would lead to AGI takeover. I didn't say, and wouldn't say, that AGI is impossible.

To put some of my cards on the table, I'm religious, so it's not at all weird to me to consider human intelligence potentially separate from machine intelligence.

Secondly, there is nothing at all weird about saying that we may be on the wrong track for AGI, or that true AGI would be far more complex than anything we've even attempted at this point.

Expand full comment

Ah, I must have misparsed then

to be fair, you did say "But we should be clear that there's zero proof of AGI being possible"

Expand full comment

One reason: because winning arguments and optimal avoidance of risk may look identical, but they are not.

Another reason: you are considering "that would necessitate" to be necessarily factual - this may be fine if you're operating within a thought experiment, but we can't live within thought experiments. Or can we? 🤔

Expand full comment
Apr 25·edited Apr 25

> Why wouldn't the burden of proof be on the people making a claim,

The burden of proof is indeed on those making a claim, which includes those who claim AI will NOT be an existential risk. Agnosticism on the outcome is the right logical position, but then you're still left in the position of evaluating how plausible a danger it is, and thus whether some investment in mitigations is warranted (like the Y2K bug).

> especially a claim that would necessitate significant changes in how every person on earth lives their lives?

AI doesn't exist yet. How could mitigations against something that doesn't yet exist necessitate significant changes in how every person lives their lives?

Expand full comment

Well, consider the necessary mitigations. There are people around the world riding hell-bent for leather to produce human or superhuman level AI, and a regime that was capable of suppressing them would be…truly intrusive.

Expand full comment

Why is an oppressive regime a necessary mitigation? Why is intensively funding research into AI safety not a sufficient mitigation?

Expand full comment

It may not be.

But research into AI safety has not borne much fruit yet — certainly no silver bullet. That very fact might, to some eyes, suggest that slowing down would be prudent, but there is no sign of that happening. So an oppressive regime might seem, to other eyes, the only choice.

There’s also the fact that AI research is pretty easy to hide, compared to, say, nuclear-weapons research. A global compact to restrict it would be so *very* easy to cheat on.

Expand full comment

AI safety has also not seen much funding. What's the ratio of funding between "training/developing new AI" vs. "*ensuring* AI outcomes"? 100:1? 1000:1? possibly even worse.

My point is that a lot of people are working on AI so as to achieve specific outcomes. AI safety research generally aligns with the goal of *ensuring* specific outcomes, but it's not typically seen in that light, and most people have instead been content with "has a 2% chance of going wrong". There's value in closing that gap to zero, and I consider that within the realm of AI safety, because what is AI safety if not "does what it's supposed to and doesn't go wrong in unexpected ways"?

Expand full comment

People always say that.

But making these giant neural nets takes serious compute. It's harder than making drugs, comparable to enriching uranium. And the world is trying to suppress the first one, and mostly managing to suppress the second.

Expand full comment

Asking someone to prove a negative is bad faith, isn't it?

Expand full comment

https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative

Russell may not have any reason to believe there is a teapot in orbit, but if someone walks up to him and says there definitely isn't a teapot there, he's absolutely going to ask for evidence.

Expand full comment

Which either leaves us with nothing at all to say about anything we cannot directly observe, or working from other principles we can examine more directly.

The likelihood of a teapot being in orbit cannot be considered 50/50 just because someone claims it's there. Unless shown positive reason to expect the teapot to be there, we should start the conversation with something *significantly* less than 50%, likely some small fraction of one percent. This is necessary from logical and practical standpoints. Logical, given we have no mechanism for how a teapot would get there, and practical because we can't respond with action to every assertion made.

Assertions are practically free, while action is many orders of magnitude more expensive.

Given that starting point, we would be perfectly justified in saying he was wrong even if we can't prove the negative. Or, more likely, to ignore the hypothetical and spend no resources trying to prove or disprove the conjecture or remedy any issues the argument brings up. We should spend zero dollars trying to protect orbital satellites from this teapot, for instance.

Expand full comment

I mostly agree, so now the question is whether Russell's teapot has the same probabilities as AI being an extinction level threat, so that you can clearly and obviously conclude which is more plausible and which requires the greater burden of evidence.

It seems clear that an intelligence greater than human intelligence is possible, as evolution would disfavour maximal intelligence for various reasons. Therefore greater intelligence is possible. We know of no reason why intelligence should be correlated with moral superiority, at least in a system that did not evolve as humans did, where cooperation was key to survival, therefore we should assume there is no intrinsic relationship between the two.

So it seems that an intelligence greater than human intelligence, and not necessarily morally superior to humanity, is eminently possible, and that we are plausibly on that path given the economic and strategic incentives there are to replace expensive, fallible humans.

There are clear steps in this deduction that would permit empirical refutation, such as some intrinsic association between intelligence and morality, so it's falsifiable in the course of standard research on AI, particularly AI safety research.

Expand full comment

Agreed that this is the steelman, further strengthened by the esotericism of the subject and how early we still are in the game. It's intuitively clear how nuclear weapons can destroy civilisation, and it's reasonably clear how a pandemic might, but not how and why the lovably-bumbling "glorified autocomplete" evolves into something that does, even by reaching for those other means. To persuade an ordinary layperson you have to keep hammering at the physical mechanism. Pascalian (or near-Pascalian) thought experiments are just not compelling, and stuff like the AI-box hypothesis is downright counter-productive.

Expand full comment

What does "it" and "is" refer to in this context, both colloquially and technically?

Expand full comment

The first instance of 'is' affirms my view of the correctness and truth of the claim in the comment I replied to. The two subsequent instances, coupled in contractions with their 'it's, assert common knowledge of how more stereotypical x-risks might play out. The final 'is' asserts my opinion of the negative persuasive power of the AI-box thing and by implication of other elegant but weird rhetorical techniques used by the doomers.

I'd do the 'are's, but, well, I can't quite be... bothered.

Expand full comment
Apr 25·edited Apr 25

So, you are contemplating a model. Why not contemplate the thing itself? Isn't that the goal here, or is this more so a sort of social activity, that perhaps comes with unwritten rules?

Expand full comment

Is "your prior should be" necessarily optimal as a method?

Expand full comment

I'm sorry, but isn't this identical to the argument he addressed in the heuristics section?

Expand full comment

Not exactly; the heuristics section talks about whether moral panics in general are correct (which is a fair point), but I think if you focus just on the technology worries and moral panics, you can make a much stronger case that most of them have turned out to be nonsense.

In that section Scott is looking at this as evidence for the anti-AI-risk side (for which it is pretty weak), but if you look at it as shaping what your priors should be before you jump into the evidence on AI, it is much stronger.

Expand full comment

<mild snark>

One could have accurately claimed that cigarettes and automobiles would each have killed millions of people over the course of a century... Of course, millions of deaths _sounds_ apocalyptic but actually isn't...

</mild snark>

Expand full comment

Well it certainly killed them less dramatically than World War II did.

Expand full comment
Apr 25·edited Apr 25

True! Many Thanks! And the body count is smaller too, for automobiles (for tobacco in all forms, WHO gives around 7 million a year currently, so over a century it _might_ exceed WWII's 70-85 million - not clear how much of this is cigarettes specifically).

Expand full comment

Interestingly, with most past technologies, the people making doomer arguments about it were the people not involved with it, while the people involved with it were the ones who were dismissive. With AI we seem to have the reverse pattern, which doesn't directly tell us anything except that it's different.

Expand full comment

I think the argument isn't against disaster specifically. It's against our ability to predict anything accurately. The argument isn't "The smart guys predicted a disaster but no disaster happened, so no disaster will happen this time". It's "The smart guys predicted something and were wrong, so smart guys aren't as good at prediction as they think".

Bringing up examples where no disaster was predicted but a disaster happened anyway doesn't refute them. It supports them. Bringing up examples of accurate predictions refutes them. However, in practice this is hard to refute because predictions are rarely *completely* accurate. Even getting things directionally right is very hard, let alone getting the timing, magnitude and exact character of a big future change correct. Also, an isolated correct prediction can be written off as luck, or the "stopped clock" effect. You need a long history of correct prediction to show that prediction works.

I think the easiest way to gauge what side of this you fall on is the extent to which you believe in the Efficient Markets hypothesis.

Expand full comment

Most here will disagree, but "Number of times the world has been destroyed by new technology" seems like a reasonable reference class for AI risk.

Expand full comment

Maybe, but if the world were destroyed, we wouldn't be here to talk about it so this number will always be zero.

Expand full comment

Arguably, the world was indeed destroyed at least a couple of times, for certain values of "world" and "destroyed". The extinction of the dinosaurs is one obvious global example; the Black Death and any of the various genocides in human history are local examples. None of those cases depended primarily on new technologies, however.

Expand full comment

I agree.

War, famine, and disease were the historic (local) x-risks. Notably, war often leads to famine and disease by disrupting supply chains and capital infrastructure.

Expand full comment

Well, that, and also killing a bunch of people. I would say that war is indeed a significant risk, especially given modern weapons.

Expand full comment

Sure, but who gives a shit about *local* x-risks, right?

Expand full comment
Apr 25·edited Apr 25

The people exterminated by them cared, presumably?

Bad for book sales though.

Expand full comment

Dead people can't complain, thankfully.

Expand full comment

In risk-analysis I believe they have a system for assessing the probability of risks that must always turn up nil for observer-selection-effect reasons: I gather they look at the near-misses, then apply a correction factor based on the average ratio of near-misses* to incidents within similar fields.

(* presumably near-misses for less-observer-selection-effect-susceptible risks..)

For example: for Titanic-level maritime disasters, even though they're hard to do statistics about because the frequency is less than once per lifetime, the incident:near-miss ratio is about 1:100 (source: vague half-remembered lecture I once attended, possibly whilst not entirely sober..) so if you see 10 near-misses in a given time-frame you can suppose a roughly 10% chance of an incident within the same time-frame.
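A minimal sketch of that back-of-the-envelope calculation, assuming incidents really do occur at a fixed ratio to near-misses (the 1:100 ratio and the count of 10 near-misses are just the illustrative numbers from the half-remembered lecture above, not real data):

```python
# Back-of-the-envelope estimate: assume incidents occur at a fixed ratio to
# near-misses. Both the 1:100 ratio and the near-miss count are illustrative.
def estimated_incident_probability(near_misses, incidents_per_near_miss=1 / 100):
    """Expected incidents implied by the observed near-misses, capped at 1
    so it can be read loosely as a chance over the same time-frame."""
    return min(1.0, near_misses * incidents_per_near_miss)

print(estimated_incident_probability(10))  # 0.1, i.e. roughly a 10% chance
```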

Can we do this with AI doom? Probably not, because there probably aren't any incidents similar enough to AI doom with a known incident:near-miss ratio. However, to address Peter's phrasing: we do have data on nuclear war near-misses - and there are certainly more than enough of those to make one unwilling to just assume new technology won't destroy large parts of the world that one cares about.

Expand full comment

Experts say that (at least some of) the negative consequences of what we've *already* done to the environment will arrive in our future.

Expand full comment
Apr 25·edited Apr 25

Having read through other comments, I'm convinced that while superficially appealing, this is actually the *wrong* reference class, and it may explain why some of the non-doomer arguments seem like a combination of willful blindness and philosophically-baffling moonlogic to the AGI pessimists.

"New technology" as a reference class implicitly means "tools" -- things that expand the capacities of individual humans or groups of humans while fully subject for all intents and purposes to their human masters.

The (or at least a primary) doomer argument is that the reference class for AGI is totally outside of this paradigm. Instead it's "inventing a cognitively-superior organism that, for any conceivable agentic goal, will necessarily compete with other previously-existing species for resources to accomplish it." This is a story that *already happened* with natural GI and the outcome was that every other sentient species lost, hard. Hell, humans broadly *like* charismatic megafauna and they've still been devastated in population numbers where they weren't simply subject to extinction because humans don't like them enough relative to other priorities and would prefer to put their habitats (if not just their bodies) to other uses. Whereas one expects an agentic AGI to be as indifferent to humans as we are to bacteria, and for humans to have roughly the same capacity to challenge AI dominance as, like, parrots do to humans.

Meanwhile, the AGI's resource needs and ambitions have no natural upper bounds, which as a corollary means competition with humans that it will always, always win.

The relevant reference class isn't "new tool makes bigger bang," (scary though that is on its own), it's "the impersonal force of evolution results in the most cognitively-capable species being best adapted to its environment and winning resource competition, mopping the floor with every less intelligent species, and controlling whatever resources it feels like, to the exclusion of potential competitors and the deliberate extermination of pest species."

Expand full comment

Yeah, the argument that is most striking to me is to ask where Neanderthals and Homo erectus are now. They could tell you about the danger from the competition of a superior intellect.

Expand full comment

And dogs, cats and, more recently, coyotes, tell us about the opportunities created by superior intellects.

Expand full comment

Being a dog is not my ambition.

Expand full comment

They seem to be content in their lot and to be thriving compared to most other species… pretty much ever.

To steel man your argument, it is that similar species competing for a given niche leads to a winner and losers/extinction. This is indeed well documented.

On the other hand, complementary species could support and thrive together. Interestingly, this would imply that the solution isn’t via slightly smarter AI, which could be in competition, but rather substantially more powerful and smarter AI.

Expand full comment

Your call, of course. But pethood is not for me.

Expand full comment

There are thousands of people on this planet with enough power to completely derail or even eliminate your life with the stroke of a pen, and face zero negative consequences for it. The only reason they don't is because they don't care enough about you to do that. Unless you have the potential to be a billionaire or a world leader, all people like you and me can aspire to be is a comfortable, placid resource for people with actual power. They're the only ones on the planet who aren't pets (at best) to someone else. And if you say no, you have dignity in this arrangement, well, you potentially can have that with AI overlords too.

Expand full comment

Wow. Dark. Heavy.

Expand full comment
Apr 25·edited Apr 25

Tell that to chickens, cows, pigs, and all the other animals being tortured on factory farms.

Expand full comment

I agree that an entity whose capabilities have "no upper bounds" is likely to be quite bad news for us (arguably, the *needs* of humanity likewise have no upper bounds, but our capabilities are bounded). However, it's not enough to merely posit such an entity; humans have been inventing gods and demons throughout pretty much our entire history, and there's no reason to believe that any of them do or can exist. If you want to convince me to take specific steps and contribute specific resources toward stopping your uber-entities, you must first demonstrate that they are at the very least plausible, if not actually likely. Unfortunately, by positing unbounded entities (either truly unbounded, or "merely" effectively unbounded, as in e.g. "capable of doing anything they want to anything within the Solar System"), you are kind of shooting yourself in the foot, since your burden of proof becomes likewise unbounded.

Expand full comment
Apr 25·edited Apr 25

The statement about lacking upper bound was about needs rather than capabilities — the idea being that the AI need only be superior to humans to win in any competition for resources, and (barring implausible artificial restrictions like “make exactly ten paperclips and no more” rather than “survive and make paperclips”), there’s no upper limit to AI demand for resources and attendant competition with humans for same.

The specific statement was “Meanwhile, the AGI's resource needs and ambitions have no natural upper bounds, which as a corollary means competition with humans that it will always, always win.”

I am thus making a much less strong claim about the necessary capabilities of the AI than you are imputing to me — it needs to only be better than humans, not arbitrarily better than humans.

(I should note that I actually do think arbitrarily high capabilities in a practical sense are likely under a recursive self-improvement scenario, which seems ex ante extremely plausible, but I don’t think it’s a necessary element of the claim.)

Expand full comment
Apr 25·edited Apr 25

Understood, and I withdraw my objection -- but I'd still argue that the needs of humanity are ultimately unbounded, seeing as we are driven to reproduce without limit other than resource constraints. In fact, given a zero-sum competition between two societies of humans, one with greater needs than the other, the outcome is uncertain unless you also factor in their *capabilities*. Or to put it another way, an unemployed bum has a greater *need* for a sandwich than a billionaire, but the billionaire is still much more likely to obtain the sandwich.

Expand full comment

I would agree that the needs of humanity are unbounded, except that it seems like material success is correlated with sub-replacement fertility empirically speaking. But I think the issue is moot in any event: assuming infinite needs by both humans and an AI (the AI having unbounded needs basically because that's just the way that utility-function-maximization works for any plausible goal beyond "power down and stay powered down," and for the same reason being a lot more single-minded about working to meet its goals than humans empirically are about reproducing), if the AI is more capable you have both conflict over resources and AI victory.

Expand full comment

Agreed, in any conflict the more capable side would usually win; but you are now talking about *capabilities* (just as I was), not merely *needs*.

Expand full comment

Humans have been inventing such things for ages. But now we have things that can talk sensibly, win chess games etc.

They aren't yet uber-powerful. But they clearly exist, and those demons didn't.

> you are kind of shooting yourself in the foot, since your burden of proof becomes likewise unbounded.

That is really not how reasoning works. Like at all.

Occam's razor penalizes the complicated specific hypothesis. Not the possibility of something being smart/powerful.

Expand full comment
Apr 26·edited Apr 26

If you propose someone or something that is reasonably smart, or strong, or fast, then the burden of proof is still on you, but it is quite modest. We can observe many such beings all around us already, after all.

If you propose someone or something that is the smartest, strongest, or fastest person in the world, then the burden of proof is much higher, but perhaps still doable. It is not clear whether intelligence, strength, or speed can even be sorted like that (e.g. bulldogs have extremely strong jaws but cannot lift heavy objects like humans can), but perhaps you could still make a compelling case, if you bring up a mountain of evidence.

But if you are proposing an entity that is not merely the smartest/strongest/fastest person in the world, but so superior to everyone else as to be literally unimaginable -- then I am not sure how you could justify it. What kind of evidence could you bring up for an entity that is definitionally incomprehensible ? You could propose it in the abstract, but you have just defined yourself out of the ability to comprehend it, which means that you could not possibly have sufficient evidence, since looking at evidence is *how* we tend to comprehend things.

Expand full comment

> If you propose someone or something that is the smartest, strongest, or fastest person in the world, then the burden of proof is much higher, but perhaps still doable.

The Astronauts returning from the moon were the fastest humans ever. So are you saying that the Apollo missions had some exceptionally high burden of proof?

I mean sure, no one else is going that fast. But no one else is getting on a giant rocket.

Before the first atomic weapons were made, the scientists were talking about explosions enormously bigger than anything that had come before. The evidence they had was some fairly modest particle physics experiments. Yet they correctly deduced such weapons were possible. And if physics had been slightly different, and nuclear explosions were a trillion times more powerful than chemical instead of a million, the evidence required to show this would not have been greater.

> What kind of evidence could you bring up for an entity that is definitionally incomprehensible ?

Superintelligent AI isn't definitionally incomprehensible. I don't know what limits your imagination is under. But I can at least try to understand some aspects of it.

Deep Blue is better at chess than me. So I can't predict its exact moves (if I could, I could play chess as well). But I can understand what its code is doing, what a minimax search tree with heuristics is. I can understand why such a thing would be good at chess. And I can predict it will win. That is quite a lot of understanding.
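(For anyone unfamiliar with the term, here is a minimal, hypothetical sketch of what "a minimax search tree with heuristics" means; the `game` interface, the `heuristic` function, and all the names are placeholders for illustration, not Deep Blue's actual code.)

```python
# Depth-limited minimax with a heuristic evaluation: the rough shape of a
# classical chess engine's search. `game` and `heuristic` are hypothetical
# placeholders, not anyone's real engine code.
def minimax(game, state, depth, maximizing, heuristic):
    if depth == 0 or game.is_terminal(state):
        return heuristic(state)  # estimate how good this position looks
    values = [
        minimax(game, game.apply(state, move), depth - 1, not maximizing, heuristic)
        for move in game.legal_moves(state)
    ]
    return max(values) if maximizing else min(values)
```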

Reasoning about minds smarter than you is hard. Sometimes I use analogies involving things I understand but that dumber minds (human or animal) don't. Sometimes I use abstract maths.

There is a lot I don't know, but some things I can make a guess at.

> but you have just defined yourself out of the ability to comprehend it, which means that you could not possibly have sufficient evidence

If the AI produces a valid proof of the Riemann hypothesis and designs a fusion reactor, I don't need to know how it works internally. I don't need that much understanding. The concept of "very intelligent" predicted that the AI could do those things, and most other concepts didn't.

If you saw the fusion reactor in front of you, clearly designed by an AI and clearly working, would you say it was dumb?

Expand full comment

> So are you saying that the Apollo missions had some exceptionally high burden of proof?

Yes, and it was met with sufficient evidence. The same goes for atomic science.

> And if physics had been slightly different, and nuclear explosions were a trillion times more powerful than chemical instead of a million...

Yes, if the laws of physics were completely different, lots of things would become possible (or impossible). I don't think you and I, not being physicists, can speculate on it in any kind of an informed fashion (well, at least I know I'm not a physicist).

> Deep Blue is better at chess than me. So I can't predict its exact moves (if I could, I could play chess as well). But I can understand what its code is doing...

Are you saying that Deep Blue is "superintelligent" ? I don't think so:

> If you saw the fusion reactor in front of you, clearly designed by an AI and clearly working, would you say it was dumb?

It depends. I think that Deep Blue is, in a way, really dumb. It's a program designed to play chess, and that's the only thing it can do. I may not fully understand how it works (other than knowing how alpha-beta pruning works in general), but I am fairly confident that it's a relatively simple tool. A machine-learning system designed to build fusion reactors would be a much more complex tool, but it would still be exactly as dumb as Deep Blue.

But you are not proposing such a system; rather, you are proposing an independent autonomous agent that is not merely good at designing reactors or playing chess, but is good at doing literally everything. And not merely good, and not merely better at it than the best humans, but literally beyond (perhaps far beyond) the upper limit of human capabilities. This is why AI-doomers are afraid of AI: not because it could build a better reactor or play a better game of chess, but because it would be able to easily outwit literally everyone at everything. I think this does add up to being "incomprehensible", in addition to being completely unprecedented (outside of the realm of fantasy), so yes, I'm not sure what kind of evidence you could muster towards such a grand claim.

I acknowledge that an AGI (assuming such a thing is possible at all) doesn't necessarily have to be so overwhelmingly smart. Technically, it could be something like a top-notch physicist who is better at particle physics than any living human physicist. This would make the AGI technically "superhuman", but not much of a threat, because there's really not much you can do *just* by being able to think really hard about particle physics.

Expand full comment

We have evidence in favor of superhuman capabilities for arithmetic, chess, go, poker¹, next-token prediction², maybe Dota2³, Atari games⁴, and protein folding⁵. We also know that evolution can design very efficient and even self-repairing objects, which humans can't. Existing computers are far below physical limits to computation⁶.

Humans like von Neumann and Ramanujan demonstrate the existence of general and specialized cognitive capabilities far above the mean human level, respectively.

A relatively small advantage in technology and tactics has been sufficient for crushing victory of a small group over a large group of humans⁷.

¹: https://en.wikipedia.org/wiki/Libratus

²: https://www.lesswrong.com/posts/htrZrxduciZ5QaCjw/language-models-seem-to-be-much-better-than-humans-at-next

³: https://en.wikipedia.org/wiki/OpenAI_Five

⁴: https://en.wikipedia.org/wiki/MuZero#Reactions_and_related_work

⁵: https://en.wikipedia.org/wiki/AlphaFold#AlphaFold_2,_2020

⁶: https://www.nature.com/articles/35023282

⁷: https://www.lesswrong.com/posts/ivpKSjM4D6FbqF4pZ/cortes-pizarro-and-afonso-as-precedents-for-takeover

Expand full comment
Apr 26·edited Apr 26

Yes, of course; but I could make an even longer list if I included the capability to multiply large numbers (and large sets of such numbers), route network traffic, manage fuel injection schedules in real time, and so on. I could in fact expand this list to the ability to crush hard objects or fly across continents or even extract energy from sunlight. I don't see how my list would be any more (or less) valid than yours. And yet you are not overly concerned about the dangers of hydraulic presses; so why are you overly concerned about the dangers of e.g. go-playing systems ?

Humans like von Neumann and Ramanujan do indeed demonstrate "superhuman" performance (I mean, they're technically human but I understand your meaning) on a variety of tasks, yet both of them were highly specialized in their areas. From what I can tell, even von Neumann was fantastic at mathematics and physics (and computer science, arguably a branch of mathematics); pretty good at organizing highly specialized committees; possibly possessed of an eidetic memory; and pretty close to human-average on the vast majority of other tasks.

The only non-trivial point on your list that I can agree with is #7:

> A relatively small advantage in technology and tactics has been sufficient for crushing victory of a small group over a large group of humans

But this sweeps a massive amount of complexity under the rug. Terms like "technology" and "tactics" imply a degree of self-guided autonomy and adaptivity that no present-day machine learning system could remotely approach. It's like saying that swords pose an unprecedented level of existential risk because in a war the army with better swords will probably win (all other things being equal). This is technically true, but not because of some kind of an unprecedented level of danger posed by swords. It's the humans wielding them who are the danger (and sadly a well-precedented one).

Expand full comment

I considered non-cognitive abilities, but those didn't seem super relevant. Yes, humans have created devices that are much faster/powerful than many animals, but speed/power is not the thing that gives humans advantage over other animals.

I put multiplying numbers in the arithmetic bucket.

The list was a collection of systems that have superhuman cognitive abilities in narrow ranges, which (according to me) are evidence that superintelligence is not impossible.

I think that von Neumann was actually much better than most other people in a wide variety of tasks, e.g. he was a consultant for a while, getting paid large sums by his clients, and learned six languages before the age of eight¹. I think that e.g. Ramanujan was probably a better mathematician, but much much more cognitively specialised.

> Prisoner’s dilemma says that he and his collaborators “pursued patents less aggressively than they could have”. Edward Teller commented, “probably the IBM company owes half its money to John Von Neumann.” (pg. 76)

I agree that he was not flawless, as is detailed nicely in this post: https://musingsandroughdrafts.com/2019/10/26/some-notes-on-von-neumann-as-a-human-being/. Although the list seems pretty slim on concrete cognitive tasks that he was actually bad at, not just moral failures.

> But this sweeps a massive amount of complexity under the rug. Terms like "technology" and "tactics" imply a degree of self-guided autonomy and adaptivity that no present-day machine learning system could remotely approach.

I don't necessarily disagree with that :-)

I was merely arguing that the gap between agentic dangerous AI systems and us would not need to be unboundedly large. It is an entirely different line of argument to show that we're anywhere near that.

¹: I have not heard

Expand full comment

> but speed/power is not the thing that gives humans advantage over other animals.

Maybe I'm being pedantic, but this is not the case at all ! Humans are endurance hunters, which gave us an early edge, due to being able to run down prey animals. We also have better physical equipment than many animals, while being deficient in some domains. For example, dogs have better smell but we have better vision. But yes, our cognition is also superior to that of any animal (that we know of anyway).

> The list was a collection of systems that have superhuman cognitive abilities in narrow ranges, which (according to me) are evidence that superintelligence is not impossible.

Right, but my point is that superhuman performance on an extremely narrow task does not translate into "intelligence" (insofar as the term has a meaningful definition). Calculators can multiply numbers very quickly, but are not "intelligent" in any sense of the word; and this would not change even if you bought yourself a huge crate of them.

Perhaps more importantly, "intelligence" alone doesn't buy you much. As I'd said before, a superintelligent box that sits under the desk and thinks very hard without interacting with anyone or anything might as well be equivalent to a dumb old brick. What AI-doomers are worried about is not "superintelligence" but rather "super-capabilities", and these are subject to much more stringent physical constraints.

> and learned six languages before the age of eight¹. ¹: I have not heard

Sorry, I'm having trouble parsing this -- not heard what ?

> I was merely arguing that the gap between agentic dangerous AI systems and us would not need to be unboundedly large.

That is true, but the AI-doom argument relies on this gap being unboundedly large. The AI is not merely smart, it is "super"-intelligent; it cannot merely position troops better than human generals, it can win every battle despite being completely outnumbered and outgunned, etc. The AI who is a slightly better general than its human counterpart would be a problem, but not an unprecedented problem, as the stories of humans like Caesar and Napoleon illustrate.

Expand full comment

Since you put "no natural upper bounds", here's a pedantic list of upper bounds:

Holographic principle information processing density

energy available to cool everything on earth's surface

Speed of light limits on communication and cognition

Halting problem

Note that these upper bounds are """fairly far away""", but you know, failing to mention them probably also means that 10-20% improvement over human capabilities in all fields is also impossible. This is an unfortunate fact of how logic works, and scientists are racing to find new forms of proof that disallow this, because they have motivated reasoning, and I have every man reasoning.

Expand full comment
Apr 25·edited Apr 25

Both you and Bugmaster appear to be misreading the actual context of that claim, which was regarding resource demands rather than capabilities having no natural upper bound. Capabilities need merely exceed those of humans, and then basic agentic goals like “survive and reproduce” or “make paperclips” have no natural upper limit on the resource demands they create, and hence on the conflict (won by the AI, because it’s smarter than humans) between it and other species.

Expand full comment
Apr 25·edited Apr 25

I agree I misread, but by complete stupid coincidence those are also indirect constraints on the ability to gain resources (i.e. AI will be bounded by the light cone; cooling will limit the rate, and indirectly the total amount, of resources acquired, due to the 2nd law of thermodynamics). Of course, if that was the argument I wanted to make, I should have made it, instead of an unrelated one.

Oh also, I was being sarcastic, and so I think I am your mind clone when it comes to object-level points. Let's echo chamber, I mean, combine our proofs.

Expand full comment

I'm confused.

>failing to mention them probably also means that 10-20% improvement over human capabilities in all fields is also impossible.

Specifically for the speed of light limit, specifically for communications, barring a surprising discovery in physics, we are already at the physical limit for communications speed, since we routinely use e.g. radio to communicate, so we can't get even a 1% improvement in the communications delay to e.g. our satellites and space probes.

I'm unsure whether I'm agreeing with you or disagreeing.

Expand full comment
Apr 25·edited Apr 25

To be clear, I was being sarcastic, because I think naming physical bounds that are orders of magnitude away as constraints on near-term improvements is a dumb argument.

Secondly, I said "communication and cognition". Yes the *latency* may be at or close to a physical minimum, but the *bandwidth* may not be, considering how little information English contains, and how slowly it gets produced relative to good thoughts (tm). Also yeah if hypothetically if we get a giant honkadonka badonker's brain (A REAL TERM USED BY A REAL X RISKER), we're talking more about "limited by bus speed" (aka Einstein traveling at light speed thought experiment train fast) rather than limited by extremely slow human reflexes (regular train fast).

I think we're just talking and neither actively agreeing or disagreeing.

Expand full comment

Many Thanks!

Expand full comment

I think part of the problem is exactly defining what risks AI pose. There is a difference between a hand-wavy nanobot apocalypse and something like “It could disrupt multiple industries completely and immediately, which will lead to instability in the economy”.

But big picture, absolutely none of this matters, because we can clutch our pearls here in the United States, or even pass regulation, trying to slow down technical progress (good luck), but the reality is progress will just march forward in Russia and China and India.

Seems to me the only safety we can get is to try to make the US the leader in this technology, instead of de facto ceding leadership to China.

Expand full comment

Yes, this is a vital point that just gets whistled past. The closest I've ever seen a doomer get to addressing it is "the Chinese government is notoriously pragmatic and careful and would never take such a risk" which is so disconnected from reality it leaves one staring in awe.

Expand full comment