530 Comments
[Comment deleted]

walruss:

It would certainly be more effective if it wasn't obviously starting from the premise: "ignore people who say AI isn't dangerous just because a bunch of predictions about AI danger have failed" and working backwards. As-is I picked up on that instantly and read the whole argument with that premise (which I disagree with) in mind - I had a prior assumption that the argument wouldn't stand scrutiny and so I scrutinized it.

And it would probably have helped disguise that premise if the article had left AI as an exercise for the reader. But once you're trying to influence people by disguising your premise, you're 100% no longer doing rationalism.

In a perfect world, the article would start with some form of "hey I want to argue that you should ignore all the evidence that contraindicates my pet cause, so I'm going to start that argument with something we probably agree on and work towards that," and in a perfect world I'd read everything with the same level of scrutiny whether it confirms or rejects my prior beliefs. But neither human minds nor persuasive writing works that way.

But even in this imperfect world, it would be a bit manipulative to just leave the main point of the article out altogether.

MicaiahC:

Which prediction about AI danger has been falsified? I see this claim shared semi-frequently, but I want an actual URL in the best case, a person and roughly what they said in the median case, and just the person in the worst case.

I've been seeing this claim tossed around, but only either as bare assertions that it has happened, with no supporting evidence, or as links to a post which is in fact consistent with the current world.

walruss:

I'm not sure that's relevant here - the argument by Scott is that even if we assume there's a falsified prediction, that shouldn't necessarily make us change our minds. That's the logic I'm criticizing and the existence of an actual claim isn't needed to criticize it.

To be honest, I'm not sure where the idea that a bunch of AI danger predictions have been found false comes from either. Given that it's used as an example, I assumed they existed but I'd have no idea where to find them.

My personal view is that claims of mundane AI danger seem fairly well borne-out while claims of AGI danger all seem to be of a form, "But of that day and hour knoweth no man, no, not the angels of heaven, but my Father only" and those aren't falsifiable predictions.

MicaiahC:

Scott isn't talking about falsified predictions but about another party's claims that predictions have been falsified, so I believe your interpretation of "Scott sees the weakness of his position and comes up with an edifice to justify it" is wrong, compared to "Scott sees the weakness of the opposing position and wants to communicate exactly why he thinks it's unpersuasive." It sure seems relevant, if we're going to speculate about Scott's thinking, to actually understand it; and if he *didn't* think the claims were falsified, it seems uncalled for to just assume his position is weak.

I'm mostly annoyed that Scott talks about mildly interesting things that are common-sensical, and in come much more boring interpretations that round off Scott's positions to "yay X" or "boo Y" and then don't demonstrate understanding of the simplest point made. Maybe I *should* be annoyed at Scott for including AI risk, since it is so apparently distracting.

walruss:

"Some people argued - the AI safety folks freaked out about how AIs of 10^23 FLOPs might be unsafe, but they turned out to be safe. Then they freaked out about how AIs of 10^24 FLOPs might be unsafe, but they turned out to be safe. Now they’re freaking out about AIs of 10^25 FLOPs! Haven’t we already figured out that they’re dumb and oversensitive?"

While the "some people argued" implies he might not accept the premise in general, he's clearly accepting it for the purpose of this argument. *Assuming this is true,* he argues, we should still ignore this evidence, or at least not use it to justify rejecting the predictor's opinion in the future, because AI will inevitably become dangerous so eventually the prediction will be right.

Without the AI argument I'm not sure what he's trying to say - he swings between obvious examples that don't have any bearing on anything (e.g. just because a person's predicted death doesn't come to pass doesn't make them immortal; at some level of consumption any substance is bad for you), and examples where this principle assumes the truth of the thing being debated (e.g. Russia might use nuclear weapons if we keep escalating in Ukraine, Biden's dementia is inevitable). He then tries to distinguish that from other kinds of panic-mongering that he's already written about derisively, even though the distinction is suspect at best.

I like Scott! I'm not even sure he's wrong about AI. But this article is bad. It's incoherent as a principle even without the AI bit. Admittedly there's a possibility that without the AI bit I wouldn't have noticed the incoherence but that says more about me than it does about the argument.

MicaiahC:

> *Assuming this is true,* he argues, we should still ignore this evidence, or at least not use it to justify rejecting the predictor's opinion in the future, because AI will inevitably become dangerous so eventually the prediction will be right.

He's not saying that you should ignore the evidence, just that the "evidence" isn't ipso facto strong. If someone actually made a specific prediction about AI danger at a certain FLOPs count, I believe Scott would say that they *had* been falsified. He's saying that skeptics are *rounding off* a much more generic "AI is worrying at some point" to "AI is worrying now". Those are very different models. This is exactly why he brings up the dying example! It is a much less radical prediction to say someone is going to die at some point than it is to predict a specific death time. And since freaking out at 10^X FLOPs is analogous to the stricter condition of predicting a death year, you should use whatever intuitions you have to relax the condition of disproof. See the points about Bayesian updating when the person you are arguing with has a low probability of danger vs. an extremely high one, which is the insight he's trying to convey.

> It's incoherent as a principle even without the AI bit. Admittedly there's a possibility that without the AI bit I wouldn't have noticed the incoherence but that says more about me than it does about the argument.

I feel like if you make an assumption ahead of time about what to expect, and then notice that the post becomes incoherent in light of it, you would at least consider that the frame you are using is bad and systematically makes you worse at understanding the point. Seriously, I'm half convinced that you are trying to weaponize your lack of comprehension. This is not to say that Scott has failed at communicating, but your replies here read much more like a person trying to willfully misunderstand something and then complaining that nothing makes sense.

Satya Benson:

Suppose I think something is inevitably coming and it’s just a matter of time before it does, but actually this is false.

When should I make the switch to believing that it’s not coming eventually? How high of a dose repeated how many times could the patient take before the doctor should say, well I guess it really is harmless and I was wrong all along?

Satya Benson:

Obviously 'make the switch' isn't a very Bayesian thing to say. I *mean* how should I update that probability, lol

Sam W:

I think the problem is that single data points carry little weight when it comes to determining whether a threshold for an effect exists. If you can approach that threshold carefully in safe conditions with statistical confidence (e.g. a large double-blind study or whatever, for a drug), then that helps confirm the existence of the threshold, and if it doesn't, you can do another study that carefully explores larger dosages, until you've covered the relevant space of interesting dosages.

In the case of thresholds where we all could potentially die if we accidentally reach it unprepared, it seems trickier.

Vlaakith Outrance:

If I thought something was "inevitably coming", I hope I'd have very strong arguments for making that case in the first place. Then the probability would be updated as time goes on and new information can be tacked on to the original case. I think there's a spectrum between the outlandish and the probable, and the updating of that probability will be affected by the nature of the claim as well as its duration.

If I believe "AI will inevitably bring about the end of civilization", it's unlikely I'll adjust my probability downward rapidly, because any time I see progress in AI, in my framework it adds to that probability even if the end of civilization hasn't materialized in any tangible way. In this case I would probably try to come up with reasonable arguments against my case of inevitability, because I'm not sure I want to live with such confidence in a belief that has such a long expiry date.

If I believe "a recession will inevitably hit the US economy before 2028", we're in much easier territory. I will update my belief as macroeconomic data comes along, I will stay informed by reading the financial stability reports of the leading central banks, and I will adjust my belief downward or upward as time goes on, with a "time value" aspect that decreases more and more rapidly as we approach 2028.

Arrk Mindmaster:

Here are a couple of things inevitably coming: an extinction-level extraterrestrial impact, and the Yellowstone supervolcano erupting. I don't see how anyone can dispute either of these things happening, and we can even give an approximate time range for when they are likely, but no one can tell within a useful time range when they will occur.

On the other hand, some people believe the Rapture to be inevitable, and it is even stated that no one will know in advance that it's here. It seems to be opinion-based whether the Rapture is coming, depending on who you ask.

Vlaakith Outrance:

Satya Benson was specifically talking about believing in something that is inevitably coming, but that turns out to be false. I agree that certain examples like religious ones are firmly opinion-based and therefore cannot be much changed by incoming information, except if the person slowly becomes an atheist or changes religion as they update their belief system.

I like your example of the Rapture; I believe it is inevitably *never* going to happen, and I don't have space in my framework for updating my probability on that one. If that's false, I'll be dead wrong (ha!), never having given it a second thought.

Nancy Lebovitz:

The person wouldn't need to change religion, just change to a non-apocalyptic version of their religion.

Jeffrey Soreff:

Partially agreed. It is arguable as to whether the non-apocalyptic version is the "same" religion. ( And I'd give better than 50/50 odds that, given a few members each of the apocalyptic and non-apocalyptic versions in the same room, that they will indeed argue :-) )

Amaury LORIN:

I can dispute the extinction-level extraterrestrial impact.

Here's a possibility which results in never-asteroid-impact: no impact for long enough that humanity creates a space program able to deflect (or otherwise deal with) incoming asteroids; humanity flourishes and manages to keep the program going until the far future is so unrecognizable that the notion of an asteroid impact loses meaning (e.g. Earth is destroyed by anthropogenic means, humanity uploads into the matrix and turns the solar system into computronium, humanity survives long enough for the Sun to destroy the Earth).

For the Yellowstone supervolcano, I don't have a concrete example but I would be surprised if there wasn't some possibility of geo-engineering that would get rid of the risk.

Arrk Mindmaster:

I can revise the details: the extraterrestrial object is coming and, absent action, will cause an extinction-level event (ELE). So if people don't exist, it will still happen. If people can destroy and/or deflect it, it would still count as having been inevitably coming - just as a hurricane still inevitably arrives even if, knowing it's coming, you evacuate and your death is no longer inevitable.

Viliam:

Believing that something is inevitable also isn't very Bayesian.

https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities

You should have a certain non-zero probability for the possibility that the thing never happens, and some probability distribution for when exactly it is supposed to happen if it happens.

For example, you are 99% sure that X happens, and you believe the chances are uniformly distributed among the following 1000 days.

If it doesn't happen the first day, stay calm. Even the hypothesis "it will happen" has assigned 99.9% probability that it will not be on the first day. So your probability only moves a little.

But after 990 of those 1000 days have passed without the thing happening, your beliefs should be about 50:50. The "not happen" hypothesis has a 1% prior and a 100% data match; the "happen" hypothesis has a 99% prior but only a 1% data match.
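
A quick sketch of that arithmetic, using exactly the numbers above (99% prior, uniform over 1000 days):

```python
# Sketch of the update described above: 99% prior that X happens,
# spread uniformly over the next 1000 days, vs. 1% that it never happens.
PRIOR_HAPPEN = 0.99
PRIOR_NEVER = 0.01
TOTAL_DAYS = 1000

def p_happen_given_silence(days_passed):
    """Posterior that X still happens, given no event in `days_passed` days."""
    like_happen = (TOTAL_DAYS - days_passed) / TOTAL_DAYS  # P(no event yet | happens)
    like_never = 1.0                                       # P(no event yet | never)
    numerator = PRIOR_HAPPEN * like_happen
    return numerator / (numerator + PRIOR_NEVER * like_never)

print(p_happen_given_silence(1))    # ~0.990 -- barely moves after one quiet day
print(p_happen_given_silence(990))  # ~0.497 -- roughly 50:50, as stated above
```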

Alex:

Gwern has a very nice essay on this transcribed on his site:

https://gwern.net/doc/statistics/bayes/hope-function/1994-falk

"The Ups and Downs of the Hope Function In a Fruitless Search"

"On Bayesian updating of beliefs in sequentially searching a set of possibilities where failure is possible, such as waiting for a bus; the psychologically counterintuitive implication is that success on the next search increases even as the total probability of success decreases."

Melvin:

Just in a normal Bayesian sort of way. Your hypothesis is that the 10am bus is on its way. You start with a 99% prior that it is on its way since about 1% of buses get cancelled. If it's 10:03 and the bus hasn't arrived yet, you ask yourself "given that the bus is on its way, what's the probability it will be at least three minutes late?" And so forth.

When Fidel Castro reaches 90 and is still not dead, you don't need to start adjusting too far away from the hypothesis that he's a normal human being with a normal lifespan, but if he reaches 150 then you're going to need to consider other possibilities.

Scott Alexander:

I think this is a more complicated version of the same math, which makes it annoying and hard to follow and which I was hoping to avoid. It would look something like:

- Hypothesis 1: Castro is mortal, with his actuarial table looking like that of any other human

- Hypothesis 2: Castro is immortal (maybe he has discovered the Fountain of Youth).

...and then you update your balance between those hypotheses as time goes on. Given that (1) still leaves a little probability mass on Castro living to 100, 110, etc, and (2) starts with very low probability, in practice even Castro living to 110 should barely change your balance between these hypotheses. Once you get to some level where (1) is making very confident predictions and being proven wrong (eg if Castro is still alive at 150) then at some point 2 becomes more probable. You're doing this in the background as you're doing all the other updates discussed above, but hopefully within a normal regime it doesn't become relevant.
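
A minimal numerical sketch of this two-hypothesis setup; the one-in-a-million prior on "immortal" and the survival probabilities below are invented placeholders, not numbers taken from the comment above:

```python
# Mortal-vs-immortal toy update. The prior on "immortal" and the survival
# probabilities passed in below are made up purely for illustration.
PRIOR_IMMORTAL = 1e-6
PRIOR_MORTAL = 1 - PRIOR_IMMORTAL

def p_immortal(p_alive_given_mortal):
    """Posterior on 'immortal' given that Castro is still alive.

    p_alive_given_mortal: P(alive at this age | ordinary actuarial table).
    Under 'immortal', P(alive) is taken to be 1.
    """
    numerator = PRIOR_IMMORTAL * 1.0
    return numerator / (numerator + PRIOR_MORTAL * p_alive_given_mortal)

print(p_immortal(0.02))   # alive at ~100: posterior still tiny (~5e-5)
print(p_immortal(1e-9))   # alive at ~150: the "immortal" hypothesis now dominates (~0.999)
```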

El Holandés Volador:

What would that look like in a case like the doctor and the experimental drug, or AI risk, where there's no standard graph to compare to and no clear "hmm, they/we should've been dead by now" point?

dogiv:

In principle you should still be able to define your prior probability distribution over the LD50 of the drug; it will just take more work since you don't have good data for comparable situations. For AI risk it's even harder because the choice of independent variable is not obvious (time? compute? quantity of training data?) but once you pick one, the math is the same.
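
As a rough illustration of what "define your prior over the LD50" could look like in code - the lognormal-ish prior, the dose grid, and the all-or-nothing toxicity model are all assumptions made for this sketch:

```python
import numpy as np

# Hypothetical sketch: a discretized prior over the unknown harmful-dose
# threshold, updated each time a given dose is survived. The prior shape,
# dose grid, and all-or-nothing toxicity model are invented for illustration.
doses = np.logspace(0, 4, 200)                            # candidate thresholds
prior = np.exp(-0.5 * (np.log(doses) - np.log(100)) ** 2)  # lognormal-ish bump near 100
prior /= prior.sum()

def update_on_survived_dose(belief, survived_dose):
    # Surviving a dose rules out thresholds at or below it in this crude model.
    likelihood = (doses > survived_dose).astype(float)
    posterior = belief * likelihood
    return posterior / posterior.sum()

posterior = update_on_survived_dose(prior, survived_dose=50.0)
print(prior @ doses, posterior @ doses)  # posterior mean threshold shifts upward
```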

Adrian:

The choice of prior for AI risk is also a lot less obvious than for mortality or even the toxicity of unknown drugs, though.

Guy:

A mixture distribution over the beliefs of leading AI researchers would be reasonable.

Kenny Easwaran:

Anyone who tells you there are standard graphs that let you do all of Bayesian reasoning is lying to you.

Bayesianism is just a set of constraints of coherence among your different theories, but doesn't do the work of finding theories for you.

MM:

Given that we now know that most or all of the "people living to 125" stories are false (they have all the hallmarks of mistaken birth dates or fraud, and they are uncheckable due to e.g. city hall burning down), you should probably start updating around 110.

dogiv:

I thought the record of 122 was still considered pretty reliable though?

Mr. Doolittle:

Because it's such an outlier, there is actually a pretty big skeptical movement about it. It's been a while, but I read both the strong pro and con cases for it, and overall I lean slightly towards the age being real. The alternative theory revolves around her unmarried daughter assuming her identity to inherit her financial situation.

Oliver:

All stories of people living to 120 turn out to be false apart from one. In the Slate Star Codex days there was a debate on whether the Jeanne Calment claim was likely to be false. I haven't seen anyone update on this.

Catmint:

At that point I'd be updating towards "lied about birth date", not towards "found the fountain of youth".

Ryan W.:

As I mentioned in my response, it seems worth breaking out near term predictions that have gradual onset of symptoms.

"The chance of Castro dying of cancer" is something that should probably be predicted 3 to 6 months in advance. So if you can verify that Castro does not have cancer now, your prediction of Castro dying in one month from cancer might strongly defy the actuarial tables. Provided you trust the doctors examining Castro.

Jim Hays:

Yeah, in particular this quote below is wrong, unless Scott is referring to a more sophisticated actuarial table than the one from the SSA (https://www.ssa.gov/oact/STATS/table4c6.html):

"Your probability that he dies **in any given year** should be the actuarial table. ... If Castro seems to be **in about average health** for his age, nothing short of discovering the Fountain of Youth should make you update away from the actuarial table."

Older adults in average health are much less likely to die than the actuarial table's probability of dying that year, since the deaths for that age will be dominated by those in below average health with gradual onset issues. This is much less true of a young person, where accidents are more prevalent.

Brian:

Logically, I think such failed predictions **should** be evidence against inevitability. We should reduce our probability somewhat, even in the inevitability case, and these should count as failed predictions. It's just that those predictions have a much lower prior initially. If Castro lives to 500, I'm definitely going to be assigning more weight to the possibility that he's immortal than I do now.

If we're playing higher/lower for some number between 1 and 100, and I've reached 99 and am told it's still higher, I know it's 100 - but only if my assumptions were right. If we began with a 0.1% probability that the number might be outside the range, then really my new position ought to be ~91% it's 100, ~9% it's outside the range. If that number is really "Putin's threshold for escalation", then "never" *is* increased by each failed prediction, simply because it takes a proportional share of the eliminated probability space. It just doesn't mean it's particularly high, if we started with a low prior.

But to someone who began with a much higher prior against escalation, potentially those failed predictions could push their "Never escalates" probability to somewhat probable levels.
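
For what it's worth, the ~91%/9% split in the higher/lower example above checks out under those assumptions:

```python
# Higher/lower example from above: 0.1% prior that the number is outside
# 1-100, otherwise uniform over the 100 values; we've been told "higher" at 99.
P_OUTSIDE = 0.001
P_EACH_IN_RANGE = (1 - P_OUTSIDE) / 100   # prior on any particular in-range value

p_is_100 = P_EACH_IN_RANGE / (P_EACH_IN_RANGE + P_OUTSIDE)
print(p_is_100)       # ~0.909 -- "it's 100"
print(1 - p_is_100)   # ~0.091 -- "it's outside the range"
```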

Donald:

It's more that, if we start with a prior of:

- 40% on all the missiles are rusted solid and won't fly,

- 20% on you'd need to nuke Moscow,

- 20% on Putin actually following through on his threats, and

- 20% on random provocation N setting him off (say, 0.1% each on threats 1 to 200).

With a prior like that, we can say that yes, Putin has threatened nukes 136 times already - but there is still a 0.1% chance that he launches nukes at Ukraine's 137th attempt to defend itself.
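
Spelling that prior out numerically (treating the "follows through on his threats" bucket as ruled out by 136 non-launches is an assumption of this sketch, not something stated above):

```python
# The bucketed prior from the comment above, updated on 136 threats with no
# launch. Treating "follows through on his threats" as falsified by those
# 136 misses is an assumption made for this sketch.
priors = {
    "missiles_rusted": 0.40,            # never launches
    "only_if_moscow_nuked": 0.20,       # these provocations never trigger it
    "follows_through": 0.20,            # would already have launched
    "random_threshold_1_to_200": 0.20,  # one provocation N in 1..200 triggers
}
threats_so_far = 136

likelihoods = {
    "missiles_rusted": 1.0,
    "only_if_moscow_nuked": 1.0,
    "follows_through": 0.0,
    "random_threshold_1_to_200": (200 - threats_so_far) / 200,
}

unnormalized = {k: priors[k] * likelihoods[k] for k in priors}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

# Chance that threat #137 is the one that finally triggers a launch:
p_next = posterior["random_threshold_1_to_200"] / (200 - threats_so_far)
print(round(p_next, 4))  # ~0.0015 -- same ballpark as the 0.1% figure above
```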

Gres:

Under this model, Scott would be right. Following your logic 64 times would give a 6.4% chance of nuclear war, which would be bad.

My model is more, ‘Putin’s threat lost its force when he complained so loudly earlier in the war, letting Ukraine strike Russia with ATACMS was safer this time than it would have been at the start of the war’. My position on AI risk is similar, enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI, if they have enough practice stopping bad guys with other 10^26 FLOPS AIs, even if the 10^27 FLOPS AI itself is the bad guy.

Donald:

> enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI,

Can enough good guys with a T-rex stop a bad guy with Godzilla?

There are 2 questions here.

1) Power scaling. One order of magnitude is bigger than the gap between humans and monkeys.

2) What is the state of the 10^26 FLOP AIs? Are all those AIs flawlessly trustworthy? Are all the humans and AIs working together to bring down the 10^27 FLOP AI?

Alternate vision: some of the 10^26 FLOP AIs are American, some Chinese, some owned by OpenAI, DeepMind, Microsoft, Baidu, etc. The Russians, Ukrainians, Koreans, Israel, and Iran are all trying to use their 10^26 AIs in active conflict against each other.

The AIs display all the worst behaviour of recent chatbots, unruly toddlers, and drugged-up monkeys. These little chaos gremlins delight in finding novel and creative ways to disobey their instructions. They aren't smart enough to break out of their cage. A drone AI might perform flawlessly in simulations, but once it's in a real drone, it just draws contrail dicks in the sky until it runs out of fuel and crashes in a field.

Gres:

Ten monkeys could absolutely stop one human if you dropped the human in a jungle. My instinctive response is that each new AI will be better at manipulating geopolitical conflicts, but the geopolitical conflicts will get harder to manipulate each time that happens. I don't know if that's how it'd play out. Would Trump have been 10x more effective if he'd been 10x smarter, or would he have filled the same niche?

Donald:

> if you dropped the human in a jungle.

Perhaps. I do feel that dropping a human in a jungle with no warning or preparation or tools is a bit skewing the hypothetical towards the monkeys.

If the human had some training in relevant skills, and a few weeks before the monkeys showed up, they could probably make a bow and arrow or something.

What if you dumped 10 monkeys in New York for every person living there?

And this doesn't even address why the monkeys would want to team up against the human, as opposed to fighting the other monkeys.

darwin:

Part of the answer here is that one person's predictions alone are not enough to answer this question well. You could make up a bunch of factors in order to do Bayesian math about it, but you'll do much better by doing things like asking other credible people for their predictions and looking at the direct proximal evidence again.

For example: if one person predicts something, then the same person predicts it again, then the same person predicts it again, maybe that thing isn't happening.

If one person predicts it, then two people predict it, then ten people predict it, maybe you are climbing up the tails of a normal distribution of predictions, towards the center of the distribution where lots of people predict it and it's likely to actually happen.

Kenny Easwaran:

That's just the general problem of induction, and there is no general answer to it. After some number of swans, all of which have been white, I should start to think that all swans are white. But any rule for how to do it goes wrong in some worlds. All we can say is that you should have some internally coherent policy for updating (i.e., if you say "the next swan is 75% going to be black", and it isn't, then you should shift the relevant hypotheses by a factor of 3 to 1), and that it's better if you can be lucky so that your coherent policy actually lines up with the parts of the world that you know nothing about yet.

walruss:

I almost posted about this, actually. Because not only should you update every time you see a white swan, you should also update a very, very little bit every time you see a non-white object that isn't a swan (all claims imply their contrapositives, after all).

The thing is that I think most of these discussions of Bayesian thought are kinda similar to looking at 3 swans and a red car and either guessing that all swans are white, or saying "most animals aren't monochromatic, and the three swans we've seen aren't strong evidence anyway, so we'll just keep assuming we're right."

Kenny Easwaran:

If you’ve got a random sampling model of the universe, and you think most things aren’t swans and most things aren’t white, then that’s right (and Janina Hosiasson wrote a good paper giving that Bayesian solution to the paradox of the ravens back in the 1930s).

But if you don’t have a random sampling of the universe model, you can actually get positive instances being disconfirming. For instance I currently believe very strongly that all unicorns are pink (because I believe very strongly that there are zero of them) but if I saw an actual pink unicorn, I would no longer be so sure. Similarly, if someone is highly confident that there are no rats in Alberta (ie that all rats live outside Alberta), then observing a rat just on the outside of the line would likely make them less confident, even though it’s an instance of the generalization.

walruss:

Thanks! These are all great notes, and I'll for sure look up that paper.

I still think Bayesian reasoning, when you have an extremely small portion of all the possible evidence (because we can't access most of it, or because a lot of it is in the future, or because we're trying to reason about a one-time event and can't repeat it under different conditions to help us update our intuition), tends closer to the kind of heuristic guessing it's supposed to protect against than a lot of rationalists are comfortable with.

It's better than just straight-up using bias to make all your choices, but not by much. Especially if you've written a bunch of recent articles about how you shouldn't update your beliefs based on argument, real-world events, or people you trust being wrong.

MugaSofer:

There are broadly speaking two kinds of "inevitable" event - those with a fixed probability for a given unit time (e.g. "there's an X% chance of an asteroid hitting Earth" or "there's an X% chance of a nearby star going supernova"), and those with an increasing probability over time (e.g. "humans eventually get old and die"). The OP is about the second kind.

For the first kind, you should steadily update downwards each time the event fails to happen in proportion to your probability estimate - e.g. if you think there's a 50% chance of nuclear war per year, this can be disproven fairly quickly, while if you think there's a 1% chance of nuclear war per year it'll take longer.

For the second type, you should update downwards more strongly as time goes on, since your theory is making incredibly strong predictions. A baby not dying of old age proves nothing, even a 90-year-old not dying proves little, but a 130-year-old not dying is pretty suggestive and a 200-year-old not dying of old age proves a lot.

In the case of ASI, living for a year without a Singularity proves little, living for 50 years proves a lot more, 100 years even more, and living for 1000 years probably settles it.

This is, of course, annoying for people who don't expect to live 1000 or even 100 years. There are at least some other observations one can make besides "has ASI appeared", like trying to graph rates of AI progress, looking for "warning shots", and looking at analogous events, much like one might eat a little of a substance and try to gauge how likely it is to poison you in large amounts based on things like taste and stomach cramps.
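
A sketch of the two update patterns described above; the hazard rates are arbitrary placeholders. A fixed per-period hazard shaves off only a little credence each quiet period, while a hazard that grows over time gets hammered once its near-certain predictions start failing:

```python
# Constant-hazard vs. increasing-hazard updating; all rates are invented.
def update(p_threat_real, p_event_this_period_if_real):
    """Posterior that the threat is real after one uneventful period."""
    numerator = p_threat_real * (1 - p_event_this_period_if_real)
    return numerator / (numerator + (1 - p_threat_real))

# First kind: fixed 1%-per-year hazard -- 50 quiet years only dent the belief.
p = 0.9
for _ in range(50):
    p = update(p, 0.01)
print(round(p, 3))  # ~0.845 -- still fairly high after 50 quiet years

# Second kind: hazard that rises with age -- late quiet periods are damning.
p = 0.9
for hazard in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
    p = update(p, hazard)
print(round(p, 3))  # ~0.004 -- the threat hypothesis is nearly dead
```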

Matthias Görgens:

Here's an interesting (thought) experimental setup for this:

You have 100 boxes, initially all empty. You have an associate prepare them the following way: first she flips a coin; if it lands heads, she doesn't do anything; if tails, she puts a ball in one random box.

Later you open the boxes one by one. Each time, what's the probability that the next box you open contains the ball? At each step, what's the probability that the ball is in one of the remaining boxes?

I haven't done the math, but I think the probability that the ball is in the next box should go up on each step, but the probability that the ball is there at all goes down.

(I think this is easier to see, with an alternative but equivalent setup: you double the number of boxes to 200, skip the coinflip and have your assistant always place the ball somewhere, but then you only open 100 boxes.)
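
Working through the numbers for this setup (50% chance of no ball, otherwise uniform over the 100 boxes) bears the guess out: the chance that the very next box holds the ball creeps upward, while the chance that the ball exists at all falls.

```python
# The 100-box thought experiment: 50% chance there is no ball at all,
# otherwise it sits in one of the 100 boxes uniformly at random.
P_BALL = 0.5
N_BOXES = 100

for opened in (0, 50, 90, 99):
    # P(all opened boxes empty | ball exists) = (N_BOXES - opened) / N_BOXES
    joint_ball = P_BALL * (N_BOXES - opened) / N_BOXES
    p_ball_somewhere = joint_ball / (joint_ball + (1 - P_BALL))
    p_next_box = p_ball_somewhere / (N_BOXES - opened)
    print(opened, round(p_ball_somewhere, 3), round(p_next_box, 4))

# P(ball anywhere) falls from 0.5 toward ~0.01, while P(ball in the very
# next box) rises from 0.005 toward ~0.0099 (= 1/101), matching the
# equivalent 200-box version described above.
```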

Zygohistomorphic:

Gwern hosts at his website a writeup on this sort of situation. You are correct that your probability of the ball being in the next box increases, but your probability for it being in any box decreases.

https://gwern.net/doc/statistics/bayes/hope-function/1994-falk

Shaked Koplewitz:

In some of these cases you can get intermediate warning signals before the cliff: for example, some political observers noticed Biden was doing almost no media appearances in the last year or two and raised their suspicions, and Putin can escalate in other ways (e.g. by sending Yemen advanced anti-ship missiles) before jumping straight to nukes. Depending on how severe the risk is and how sure you are that you'll get an early warning, it can be better to have a policy of just waiting for that.

(This probably isn't the case for AI, where I think we're likely to get a fire alarm but most fire alarms are already pretty catastrophic and may not leave us enough time to respond after they happen).

Thegnskald:

Political observers were noticing Biden doing unusually few media and public appearances *during his campaign*, and the observers on the right frequently mocked him for "campaigning from his basement". This tendency didn't stop during his presidency, although it certainly and obviously got worse.

Like, the Republicans were right about this one, full stop.

ProfGerm:

Scott's lack of acknowledgement of his own bias, and that of many prominent media personalities, really weakens that example and IMO the overall piece.

Ignoring that bias is one example of ignoring the lack of information for most people in these situations. If there's a whole staff or a whole structure strongly influenced in favor of presenting a certain image, like limited appearances only during certain daytime hours, limited press questions, etc, then whatever updates an observer makes will be wildly skewed.

Maybe there are good reasons for Scott et al. not to trust Republican reporters, but those reasons aren't that they were wrong about Biden; he doesn't trust them about Biden because of whatever that past incorrectness was (but really, mostly because of bias and social bubbles).

Richard Gadsden:

The argument during the 2020 campaign was that he was campaigning from his basement because of COVID, and Democrats more generally were more inclined to participate in NPIs than Republicans. This made for a strong alternative explanation, especially as so many other, younger, more active, Democrats also campaigned largely via Zoom (the "from his basement" bit was just the location of his studio in his house in Delaware).

The bit that is relevant is that other Democrats started making a lot more media/public appearances in 2021 and (especially) 2022, and Biden didn't (he made a few more, but not a lot more). That should have been more suspicious, but so few people were making the (IMO, in retrospect, correct) case - that 2020 was COVID, but 2022 was a sign that he was deteriorating (whether that was dementia or physical deterioration is irrelevant) and being protected from the public - that I didn't come across a clear version of that argument until well into 2023, after which I did find it relatively convincing.

Thegnskald:

I don't blame you, to be clear! I blame the reporting malpractice. (And one notable difference to pay attention to during COVID, arguing against that interpretation: How much Zoom did Biden actually do, as opposed to purely scripted and strictly controlled recordings? Biden's media reclusiveness was weird and stood out even against the background COVID reclusiveness.)

Which, to be clear, didn't actually end; see all the people who were entirely blindsided by the fact that Trump won at all, much less won as thoroughly as he did.

But, like ... that's not just evidence about Biden. It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.

Jeffrey Soreff:

>It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.

True, unfortunately. Seeing the Biden v Trump debate, a natural question is indeed "So what _else_ are the media hiding from me?"

Richard Gadsden:

Given that the media were themselves blindsided, the hiding is blinkers rather than intentional.

Jeffrey Soreff:

Many Thanks! Could be - it isn't too clear how the concealment of Biden's cognitive decline was divided between the White House staff and news media. All I can tell from my position was that I kept seeing 'Biden is fine', 'Biden is fine', (I'm mostly remembering New York Times coverage here) then "We defeated Medicare." - oops... (Also - were the media truly blindsided, surprised by what they saw, or did they overestimate what they could get away with?)

Ryan W.:

I didn't stay up to date on the American 2024 election in general. And while, if someone wins an election, I expect their close supporters to prop them up with a good solid stick for the rest of their term regardless of their actual physical or mental health, if someone is clearly incompetent I would have expected a functioning party apparatus to use the re-election as an opportunity to swap the declining candidate out for another candidate.

The outcome of "wait until, inevitably, the problems are blatantly obvious, even though they must have been privately obvious long before, and even though there was an opportunity to back a different candidate" is such a poorly thought out strategy in an overarching sense that I wouldn't have expected a functioning political party to pursue it. And especially not without more pushback from inside the Democratic tent.

Chastity:

If he were senile in 2020 how did he beat Trump in their debates?

Thegnskald:

Senility isn't usually a boolean; you don't flip from not-senile to senile from one day to the next. You have good days and bad days, with the frequency and intensity of bad days slowly increasing and the frequency and intensity of good days slowly decreasing. His 2024 debate performance was a particularly bad day - see all the people confused by how lucid he's been behaving recently. Biden now has more control over his media presence and simply isn't showing up on the days when he wouldn't really be there to show up.

Erez Reznikov:

Such a good point. And generally, in many verticals it's like the problem of security services, which either seem redundant and money down the drain if nothing actually happens, or like the most important thing in the world if shit hits the fan and the security saves you. In both cases, of course, the security service did the same job; we just dismiss it when it's uneventful. Same with pessimistic predictions: they could be right to be cautious even if nothing actually happened. As a parent I've learnt this very quickly, being the more cautious parent.

TGGP:

Security services exist because we have experience with the security breaches which we want them to prevent. We don't have experience with whatever Scott is worried about with AI.

Dweomite:

Arguably we do. We have examples of new technologies that conferred large military and/or economic advantages to the first people to get them. We have examples of colonial empires where one nation got a big enough relative advantage to conquer large parts of the world. We have the example of humans evolving capabilities that let them take over the world in a way that other animals were basically powerless against. You could play a game of reference class tennis, at least.

But I also think there's a really important higher-level point here about the fact that if you choose to ignore all threats that you don't have precedents for, then it's impossible to stop world-ending threats, because it will always be the case that the world hasn't ended before.

Imagine a billion hypothetical versions of the world, each with different threat profiles. Some of them, fortunately, have no world-ending threats to worry about. Some are at risk of meteors, some of runaway greenhousing, some of alien invasions, others of various other threats. You don't initially know which world you're in, though you might be able to figure it out (or at least narrow it down) by examining the available evidence.

If you do your homework and plan carefully, you might be able to anticipate and ward off some of the threats your world faces. But if you have a policy of ignoring unprecedented threats, then every world that faces a world-ending threat will just die. You might be lucky and be in one of the worlds with no such threats, but investigating whether the world has *already* ended does not tell you whether you're in one of those worlds or not, so it doesn't save you if you're in one of the dangerous worlds.

JamesLeng:

That's only true of threats which are world-ending in an all-or-nothing sort of way. A world which took respiratory disease risks seriously (perhaps if the reaction against chemical weapons in WWI had escalated into a broader moral panic over air quality?) might have transitioned away from coal, toward some combination of nuclear, wind, and solar, soon enough that the greenhouse effect never became a notable concern.

Similarly, some world which aggressively expanded into asteroid mining might have set up a system capable of deflecting naturally-occurring dinosaur killers as an incidental side benefit of salvaging the debris from industrial accidents, or thwarting malicious attempts to bombard specific cities.

In both cases, the potential world-ending threat was real, and required real effort to solve, but that effort was undertaken in response to related, non-world-ending threats for which there was abundant precedent.

Ryan W.:

Yeah. I remember a press conference from some time before ChatGPT where some military representative was insisting that AI would not be given the kill switch. And I'm sitting there thinking "well, you still use land mines, right?" The claim that the military would keep the kill switch in human hands if there was an advantage to it not being in human hands is hard to reconcile with current military practices.

Regarding being unable to stop all world-ending threats while the world still exists, I suspect that while there are people like the ones you describe, many people require mechanistic proof that the world *could* plausibly end before they act. Fear of global nuclear war increased significantly after the first atomic bombs were dropped. Models of climate change, coupled with clear evidence that the models work in the near term (as well as feasible methods to actually address climate change), are required by many for action on climate change. Such requirements increase our exposure to danger. But they also filter out a tremendous number of threats that are unlikely to materialize, so we're not running around like Chicken Little, unable to function.

We are constantly beset by a million perils. Isolating one and saying "why aren't we doing anything about X?" is a good rhetorical strategy if one is worried about X. But the rare honest answer to that question tends to include acknowledging the true current weight of threats A through W.

If the Doors of Perception were cleansed, we would see the world as it really is... too complex to think about.

TGGP:

I would say those "reference classes" are analogies, which I regard as some of the least persuasive kinds of arguments:

https://entitledtoanopinion.wordpress.com/2009/02/04/what-evidence-is-convincing/

Analogies frequently hide how much we know (and don't know) about a subject by substituting a different subject.

Donald:

True. And if you follow that heuristic, you will get bitten by any problem that is significantly novel.

New things exist. New things can sometimes be predicted theoretically.

TGGP:

I agree new things exist. I'm more skeptical of our ability to understand them via theory in advance.

Micah Zoltu:

I think it is important to keep in mind the cost of the counterfactual. One may believe that *not* escalating with Russia could have negative consequences, if one thinks Putin is winning and his winning could be very bad for humanity.

On the topic of AI, which I suspect is what this post is really about, I think the same is true. Even if we know we are in an infinitely escalating game, we cannot ignore counterfactuals like "if we slow advancement in the US, it will still happen in China; it will still happen inside the government; people will continue to die of horrible diseases because we can't cure them; we will not generate technological advancements that could allow post-scarcity; etc."

Yes, at some point AI probably will be able to out-compete individual unmodified humans. However, knowing that doesn't mean that we should stop developing AI because the alternative (no AI in the future, or no open source AI in the future, or only governments have AI in the future) may be much worse for most of us than the future where AI out-competes unmodified humans.

Loris:

Yes, I think particularly in the case of Ukraine/Russia, this is essential.

It's not really escalating to match your opponent's level of aggression - and Russia has been firing missiles into Ukraine for 3 years now.

In 2014, Russia took Crimea, and the West did basically nothing. Well, maybe we learned that Russia was willing to annex neighbouring countries.

If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction, it's what the Russians say they will do.

I looked at the extensive list of examples in the "genre of commentary". A list of seven sources - superficially that's impressive. But it's all tweets. It doesn't really surprise me that Scott can find a few twats making erroneous claims. I'm sure you could do that about anything! But looking more closely, some of them don't even make the argument Scott claims.

The real arguments for why Russia won't escalate are more extensive, and nuanced, and cover things like it not being in Putin's interest to start WWIII e.g. when he's hoping Trump will offer more favourable terms in a few months.

Really, the point isn't that Russia won't possibly ever start WWIII at some point, it's that Russia has a history of making false threats and it's a mistake to take them at their word. In fact, the first tweet in the list says this explicitly:

Slazorii : "Putin made nuclear threats when the west "escalated" to provide tanks, when they provided HIMARS, when they provided ATACMS, when they provided F16s... nuclear blackmail is simply a feature of ru foreign policy. he will not be deposed if ukraine continues to strike inside russia."

Claiming this is equivalent to "Russia will never initiate WWIII under any circumstances" is... well, it's ... I'm going to be diplomatic and say it's a mistake.

Doctor Mist:

> If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are non left. This is not even a prediction, it's what the Russians say they will do.

Um, what? I don’t say you are wrong, but I must have missed a press release.

Even in the old days of the USSR, I think this was more often stated as that communism would inevitably sweep the world rather than that the USSR would inevitably conquer it.

Loris:

They talk about this on Russian state media a lot.

I'm not sure this is the best example, but here's one example, translated courtesy of Russian Media Monitor: https://www.youtube.com/watch?v=T6g6hlvT1Po

partial transcript (retyped by me):

The west is truly waging a war against us.

Is there any doubt that they are waging a war against us?

No, there is no doubt, we understand that.

We should ask, so we know it for later - where should we plant our flag next?

Where should we stop after liberating Ukraine of this disease, since they are waging a war against us?

Or this one: https://www.youtube.com/watch?v=GKDxGX2llqk

Interesting for having a visitor say something they weren't meant to.

partial transcript:

(political scientist) We keep talking about Ukraine. In reality, no-one cares about Ukraine.

They have the following goal -

Russia is embarked on a certain expansionist course. This is a fact. And not only in Ukraine, by the way.

NATO countries, European countries, want to somehow halt this expansionist course.

They don't have a concept of how to accomplish that. They can't do it, because -

(host interrupts) it's not an expansion, it's defending natural interests. We should take a short break.

(At this point, the autogenerated transcript says "and as a result of our own safety, we need to take a break.")

Those are both a few months old. I am sure there are more examples on there, and more elsewhere. I'm not talking about the (numerous, constant) threats, although those do make it harder to find the official statements or analysis. I'm pretty sure I've also seen videos of several Russian or allied officials making such statements, but I don't know how I'd find those.

But if you don't think an overall impression of the official state media counts, maybe you would consider that it's not just me that thinks this.

https://www.bbc.co.uk/news/world-europe-68692195

Quote of analysis by Frank Gardner, at the end of this article (from March this year):

//

The latest warning from Poland's prime minister echoes what his neighbours in the Baltic states have been saying for some time; if Russia can get away with invading, occupying and annexing whole provinces in Ukraine then how long, they fear, before President Putin decides to launch a similar offensive against countries like theirs, that used to be part of Moscow's orbit?

[...]

Vladimir Putin, who critics say has just "reappointed himself" to a fifth presidential term in a "sham election", has recently said he has no plans to attack a Nato country.

But Baltic leaders like Estonia's Prime Minister Kaja Kallas say Moscow's word cannot be trusted. In the days leading up to Russia's full-scale invasion of Ukraine in February 2022 Russia's Foreign Minister Sergei Lavrov dismissed Western warnings of the imminent invasion as "propaganda" and "Western hyperbole".

//

Doctor Mist:

Fair enough. That's still quite a ways from "there will be [no countries] left."

God knows I’m no fan of either Russia or Putin. (I sent flower seeds to the Russian Embassy back when that was a thing.) But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence, and is met with the kind of concerted opposition that the West has deployed, I can understand them feeling like they need even more of a buffer than they thought, and while I don’t consider acting on that feeling to be in any way defensible, I would not call it equivalent to gunning for world conquest.

(But if I were in Moldova or Latvia or even Poland, I’d be keeping my powder dry.)

Perhaps I’m just over-reacting to a bit of hyperbole on your part?

Darkside007:

> But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence

The full list of countries this language applies to is:

- Russia

That's the *only* state that has *ever* declared that the countries neighboring it exist only as human shields that will get torched to slow a foreign invasion. And the Russians have always considered their sphere of influence to be "everywhere too weak to stop them". The whole "sphere of influence" concept, where one great power basically rules everyone it can through cat's-paws and sock puppets, is a distinctly Russian concept.

And that is a distinction from empires, where kings would owe public fealty to their emperor. The Russian model is a group of "independent" countries with sovereign rights and accountability for their "independent" actions. So, for example, Moscow could order a hit, East Berlin security would carry it out, and because East Germany was (not really) an independent country the response would hit East Germany, not Russia.

Doctor Mist:

Cuban Missile Crisis? Monroe Doctrine? British Empire?

The original Mr. X:

> If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction, it's what the Russians say they will do.

Counterpoint: Russia's military capacity has been sufficiently degraded by three years of war that there's no prospect in the foreseeable future of Russian tanks rolling across to the Atlantic. At this stage, the risk of escalation provoking WW3 is bigger than the risk of de-escalation emboldening Putin to march across Europe, because the latter scenario is no longer remotely possible, if it ever was.

Loris:

But you realise that /is/ what stopping them entails, right?

The way you put that, I think 'the foreseeable future' is probably shorter than you are assuming. Russia has a massive capacity to consolidate and reform.

In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.

Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.

The original Mr. X:

> In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.

Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round. Not to mention, the Russian Federation currently has a TFR of just 1.5, putting it at no. 170 out of 204 countries and territories. IOW, Russia simply doesn't have the demographics to keep throwing young men into the meatgrinder, even if it manages to restock its materiel supplies and keep its armaments production on a long-term war footing.

> Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.

I think "lost some ground" is underplaying it a bit TBH; Russia basically lost all the territorial gains it had made since Peter the Great's time. It's possible that, in another three hundred years' time, Russia will be back to its pre-1991 borders (although the ROI on wars of conquest is lower now than it was for most of the past three centuries), but TBH I don't think this possibility is worth risking a nuclear war over.

Shaked Koplewitz:

> Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round

Worth noting that it does have the support of the world's current industrial superpower (which might have a lot of spare weapons to throw around once it's done with its current plans for Taiwan). Their TFR problem is harder (but OTOH Eastern Europe's is even lower, and even Western Europe's isn't much higher).

Loris:

Your attempt at formatting isn't doing anything... I'm sorry, I don't know how to quote properly here either.

To the first point, I don't think Russian production towards the end of WWII had anything much to do with western supplied aid. They did that themselves. Western aid helped with the war effort, sure... but that's something of a different matter, and not really relevant here.

The fertility rate isn't a constant, and is easily changed in a sufficiently authoritarian regime. Furthermore, when Russia has captured territory, it conscripts the population and uses them to attack the next target. This is what it has done to the adult male population of the annexed areas of Crimea, Donbas and Luhansk.

To your second point, okay, what's your plan? Do nothing whenever Russia threatens to use nukes, to avoid starting WWIII?

Supporting Ukraine in any way is "risking a nuclear war", because Putin is more than willing to threaten starting one whenever aid is provided to Ukraine.

Here's how I see your strategy going down:

Without support, Ukraine will fall, and Russia will take all of it, in fairly short order.

There will be a pause while the population is subjugated.

However, without much delay, Moldova will also be taken. I don't think there's really any question about that - it's a small, poor country and Russia already has a foothold there.

Georgia will also be claimed at some point, and the West won't be able to respond without "risking nuclear war".

A couple of additional neighbours may also be taken in sequence, but once all the immediately unstable and unprotected regions around it have been claimed for Russia, there will be a few years while it consolidates and re-arms.

All its neighbours will be panicking and building up their defences, and maybe a few more pacts will be signed. However, these mean very little, because remember, we can't risk nuclear war, and providing support to another country risks that. So all countries look to their own defence.

Meanwhile, Russia infects neighbouring regions with partisans, randomly attacks places and makes claims about how opposing forces did it... you know, the usual, for modern-day Russia.

When the time is right for them, Russia invades a small part of a NATO country. Nowhere populous or important, just some backwater nobody cares much about. NATO doesn't do anything, because of course that would be risking WWIII.

The precedent having been set, Russia assimilates the rest of that country.

Then it repeats this process, from a slightly superior position each time.

Where in that do you intervene? The least dangerous point is at the very beginning, and the second-least is as soon as possible.

Expand full comment
The original Mr. X's avatar

Firstly, I said that *at this stage*, the risk of escalating the war is bigger than the risk of Putin being emboldened by peace proposals, so you can drop that silly "We can't do anything whatsoever" strawman you've constructed.

Secondly, you're being extremely blase about how easy it is to control tens of millions of people who really don't want you to control them, particularly because, in your scenario, those tens of millions would have to be armed (hard to conscript them into expanding your empire without giving them weapons, after all).

Expand full comment
Julius's avatar

Russia has lost a huge amount of what you might call its "soviet inheritance" of tanks and APCs. But structurally speaking, its military is much healthier than it was at the beginning of the war. And furthermore, I would argue that there are much worse things than an invasion that Putin can do with control of Ukraine.

In 2022 the military was incredibly brittle because of the overwhelming amount of cronyism and embezzlement. If they had been pushed harder in that state, I think they would have completely disintegrated. But the sluggish response by western powers gave them time to begin addressing the widespread corruption. And the seriousness with which Putin treated the war provided political cover for ordinary Russians to start airing these issues publicly. In fact, complaining about corruption in the ministry of defense is basically the only allowed form of political criticism. It's far from being solved, but it seems like the message has been received that the military is no longer just for show, and results actually matter.

But beyond that, I think Putin's experience with Syria has provided him an effective playbook for weakening Europe. I suspect that even if given the chance, Putin would not do a thunder run to the Polish border. He would take Kyiv and install a friendly government, while allowing some of the military to retreat to the west. This would divide the country in a way that leaves the "true" government economically unsustainable and in a constant state of war.

The result is a failed state, and a long, slow refugee crisis that can be channeled into the rest of Europe to bleed them economically and further increase support for far-right anti-immigration parties. If you thought the situation with Syria was bad, just wait until it's a country with twice the population and they're right next door. And Europe and the US have already demonstrated that they don't have the appetite to meaningfully engage the Russian military in order to stabilize a situation like Syria.

Expand full comment
Bardamu's avatar

Here’s a counterpoint: a system’s purpose is what it does. When we observe Russia, what we see is a country that is, relative to its modern history, probably at its smallest geographical extent. Certainly Russian territory is very small compared to its Imperial or Soviet height.

In contrast, the extent of US client states is the widest it has ever been. The US has allies and vassals across the entire globe, and every decade a new member joins closer and closer to Russia. It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes.

Therefore, if we are to evaluate which perspective more closely resembles reality, the Russian claim that the US is bent on world domination has a much closer resemblance to reality than the American claim that Russia is an aggressor nation planning to march across Europe. Now, we may disagree with the Russian conclusion that it therefore has the right to enforce border security, but it is inaccurate to say that there’s clear evidence that Russia plans to invade all of Europe.

Historically, these kinds of border skirmishes and proxy wars between empires are extremely common, and have only culminated in global warfare a few times in about 700 years, which is a pretty good track record of border wars being limited in scope. Therefore, I would strongly disagree that this indicates Russia plans to invade “the entire world” or some other claim.

Expand full comment
dionysus's avatar

"It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes."

What is the strongest evidence you have that color revolutions have been used by the US to overthrow governments and install friendly regimes? Do you think Russia has also tried to overthrow governments and install friendly regimes?

Expand full comment
Bardamu's avatar

My evidence is the extensive, well-documented involvement of the CIA and US Government in counter-Soviet revolutions in Latin America and in the Middle East (including funding for, among other orgs, ISIS when it was fighting the Syrian Government), as well as the fact that color revolutions almost exclusively strike US-unfriendly governments and replace them with US-friendly governments.

Sure, maybe Russia does get involved in foreign revolutions. It certainly did when it was a member of the Soviet Bloc, but considering it has literally one ally now and the US dominates half the globe, it seems likely that only one of these countries has been successful in this practice.

Expand full comment
John Schilling's avatar

"Color Revolutions" are, pretty much by definition, populist uprisings in favor of democracy and against authoritarian governments. If they don't meet that standard, we just call them "revolutions".

So color revolutions are inherently aligned with the interests of anyone who likes democracy and dislikes authoritarian regimes. Being on the same side as every color revolution ever is evidence that one is on Team Good Guy, not that they are the ones secretly causing all the color revolutions. I mean, color revolutions almost exclusively strike New-Zealand-unfriendly governments and replace them with New-Zealand-friendly governments; are they part of the conspiracy?

Expand full comment
Bardamu's avatar

The fact that you think the decision between a democratic and non-democratic government is “Team Good Guy vs Team Bad Guy” is the core of the problem. Democratic governments are not inherently good, non-Democratic governments are not inherently bad. “Democratic” Britain terrorized colonial Ireland, India, Africa, and Asia for centuries. “Democratic” America started a war in the Middle East on false premises, killing millions of Iraqis to secure oil fields for Europe and the US.

“Autocratic” Singapore under LKY turned a third world unlivable slum into a first world country in a generation. “Autocratic” El Salvador turned one of the most violent countries in the world into one of the safest places in North America.

Among the long list of horrible, horrible things the US government and its core military and police organs have done in living memory:

* Perpetrated a policy of implicit ethnic cleansing against Indigenous Americans.

* Funded ISIS, creating the largest terrorist organization the world has ever known

* Burned the Waco Compound to the ground and took triumphal photos on the smoldering corpses of literal children

* Waged terror wars against Vietnamese and Cambodian civilians

* Funded regimes in Latin America that committed ethnic cleansing and terrorized their own populations.

* Funded a military coup in Indonesia that is responsible for massacring over 60,000 people

* Funded ISIS, creating the largest terrorist organization the world has ever known

* Experimented on US citizens through the MK Ultra program, for which it has never apologized.

* Conspired to murder US citizens during Operation Northwoods to justify an invasion of Cuba.

* Illegally detained and tortured Arabs and Pashtuns during the Afghanistan and Iraq wars

* Used drone strikes against American civilians without legal justification

* Keeps prisoners in prison longer than their established sentence to sell them for labor in jobs which include, among other things, fighting wildfires, which has an extremely high casualty rate. This is systemic to such an extent that even Kamala Harris was found to do this during her AGship in California.

* Did I mention they funded ISIS, the largest terrorist organization the world has ever seen?

And I am supposed to believe that funding color revolutions means the USG is actually the good guy because this government is “pro democracy”?

Expand full comment
smopecakes's avatar

I believe the rapid self-iterative foom concept also makes a case for not creating severe AI regulation. If this is impossible for humans to handle, then we should want AI to advance into self-iterative attempts that fail in some way and alert everybody, in the hope that pre-foom AI can identify and deal with it better than us. This is all at its most possible when hardware is less advanced.

Expand full comment
TK-421's avatar

smopecakes gets it. The safest day to develop AGI was yesterday, the next best is today, and the most dangerous is tomorrow. Your best chance of safety is getting to it as early as possible, because the resources it has will be at their most limited and the number of experiments is at a minimum and at a scale controllable by orgs / govs.

You really do not want to try holding back the tide until compute costs are low enough for your average white-collar professional, or teenager if you wait longer, to train current-level frontier models.

Expand full comment
MicaiahC's avatar

I don't understand why this is necessary.

If you believe that a future accident could happen, the thing to do isn't to wait for the accident to happen and then *hope* that the mitigation afterwards to the accident will work out, it's to prevent the accident in the first place!

See this comment by gwern: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned/comment/ckwcde6QBaNw7yxCt

People who claim that they understand what will drive others to action do not have good predictive records on what would have driven people to action on previous "warning shots". Their confidence re: the existence of warning shots is almost entirely from confirmation bias. If you don't believe me, then before reading the comment below, please register your belief states about:

What metric causes people to take things seriously, for something like nuclear accidents, pandemic response or biological attacks.

And how confident you would be that, if that metric were fulfilled, you would get some dramatic action of some kind.

And see how general your metric is compared to your internal felt sense of what would prompt responses for AI.

Then read the followup comment by gwern:

https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=PnxH3WMKChSHfTfJ

If you are well calibrated, then I'd be at least slightly self-interested, since it means you have passed the *absolute minimum* test most beliefs should pass, but even then, for me personally it's much more likely you're fudging your own memories about how inaccurate your positions were.

(Edit: fixed the link problem from the reply)

Expand full comment
TK-421's avatar

FYI, your second comment link is malformed and (at least for me on Chrome) does not deep link to the comment. It was pretty easy to find the one you meant though.

I think you are also missing the point. I'm not making any claim on what warning shots would or would not spur action or may or may not exist.

I am saying that if the statement "AGI / superintelligence can be achieved through scaling current methods" is true - something that AI safety proponents invoke when saying that we're on a dangerous path, need to pause, AI will inevitably become dangerous, etc. - then you are better off building it as soon as you possibly can. It is not without risk to do so, but it is a relatively lower risk.

Otherwise you're gambling that compute will never get cheap enough for me, or the many other people like me, to afford the compute that frontier labs require today. That is a bad gamble. It is inherently easier to control a few processes that require massive investment than if you're in a hundred flowers blooming scenario.

Expand full comment
MicaiahC's avatar

I think your model implicitly relies on a notion of warning shots. If we *did* advance frontier models, and then the large companies acted without control, in what way is it actually better than the "Cyborg Oprah gifts the audience 10 SotA superhuman models every week" world? It still relies on the idea that the control you'd want for the mass-consumption case is what you'd get for the elite-production case, which does not appear to be the case now (and in fact you are arguing against control regulation that could not happen in the "compute is cheap" world).

(Whoops. Let me try to fix the link)

Edit: also I guess I read too much into the parent message of "it fails and everyone is alerted", which you don't necessarily agree with

Expand full comment
TK-421's avatar

Your edit has it right. I have no idea if there will be warning shots or any other form of early warning. My point is that as a simple physical matter, it is always easier to exert control over fewer massive centralized things than if you have many smaller decentralized things.

Easier. Not guaranteed. There are no absolutes. You are correct that the outcome could still be the same as in Cyborg Oprah world.

I oppose control regulation - and I think you're conflating control regulation with actual control - that would slow down AGI development because all signs point to the "compute is cheap" world as the one we inhabit. Either scaling will not work and it doesn't matter either way, or it will and is thus inevitable. Delay only moves you further into the danger zone when I can earn cash back buying today's frontier lab levels of compute on credit.

No one knows how much an AGI will cost to train in compute, but if it's possible via scaling it fundamentally doesn't matter. Eventually it's $25-250k in today-equivalent dollars. The metaphorical day after that, it's $2,500 in equivalent dollars. Etc.

As a relative matter do you think it's safer to develop something potentially dangerous when it's being done by a single organization versus ten thousand or ten million or ten billion?

Expand full comment
Jeffrey Soreff's avatar

Is this the "hardware overhang" argument?

Expand full comment
Ch Hi's avatar

The problem is that this is a domain-specific argument. In several domains AI can already outperform humans, sometimes to an extreme level. And we don't have an AGI, so there's no competition in areas where that is needed.

AI isn't a single thing. Chess-playing agents are AI, but they only threaten chess players as chess players. LLMs operate in a more general domain, but it's still a restricted one. (But actors and authors are right to be worried.) Etc.

Expand full comment
MRosenau's avatar

In this blog post, things are considered from a Bayesian perspective; when it comes to the total costs, one would probably have to analyse them using game theory. But the Bayesian perspective has not lost any of its validity. If, for example, one were to find that after each round of escalation in the Ukraine war the number of voices who think Putin is a bluffer increases significantly, one could conclude that this confidence is not supported by facts. In practice, of course, this is difficult, as published opinions often pursue goals of their own.

As far as AI is concerned, the ultimate danger is not outcompeting humans, but rather their destruction - or was the same thing meant here? In any case, the topic is not a ban on AI development, but the question of how to recognize whether the next step will cross a threshold beyond which someone else will decide for (or against) us.

Expand full comment
Peregrine Journal's avatar

Also, it's a bad fit because states do not inevitably escalate in response to increasing provocations; under that feedback system everyone in the world would already be dead. The terminal inevitability, analogous to Castro's death, is that one side sues for peace.

Increasing Ukraine's ability to hold Russian territory at risk much more rapidly increases the chance of a balanced end state than it encourages Putin to opt for national suicide.

Expand full comment
Ajb's avatar

"But it’s not true that at some point the Republicans have to overthrow democracy, and the chance gets higher each election."

No, but we can see that the US system can't cope with either of the main parties becoming authoritarian. The design was that the House, Senate, and Presidency would have to work together and this would be a moderating influence, but in a world where the probability of each party winning is somehow driven to 50%, all that does is reduce the probability of the "bad party" getting complete control to one out of eight, with a new try at each election; a half-life of just over 20 years.
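
(A rough sketch of the arithmetic behind that half-life figure, under the commenter's assumptions of independent 50/50 races and one "try" per four-year presidential cycle:)

```python
import math

# Assumptions copied from the comment above: House, Senate, and Presidency are
# independent 50/50 races, re-rolled once per four-year presidential cycle.
p_sweep = 0.5 ** 3                                      # 1/8 chance one party gets complete control
cycles_to_half = math.log(0.5) / math.log(1 - p_sweep)  # solve (7/8)^n = 1/2 for n
print(p_sweep)               # 0.125
print(cycles_to_half)        # ~5.2 cycles
print(cycles_to_half * 4)    # ~20.8 years, i.e. a half-life of "just over 20 years"
```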

Expand full comment
apxhard's avatar

The historical trend here is that there’s a cycle of authoritarianism, in which each side feels threatened by the other and then responds.

Framing the question this way - “will they really be Hitler this time?” - is what people used to justify things like the FBI strong-arming social media companies into censorship. Or a culture of censorship that we know caused thousands of kids to get medically sterilized because nobody in the medical profession was willing to say, “this is wrong.”

So if you fear authoritarianism, the historical evidence says we have to oppose it wherever we see it, and not ignore it when our allies do it. The Democrats didn’t run a primary and tried to use the courts to keep their opponent from being re-elected. It didn’t work, and hopefully that provokes some soul-searching about whether it is wrong to try to demonize your opponents and use whatever legal apparatus you can bring to bear on them.

Fearing authoritarianism only from one side, while giving the other a free pass, is a recipe for more authoritarianism.

Expand full comment
Doctor Hammer's avatar

Well put.

Expand full comment
Mark Elliott's avatar

This is a great, nuanced point about how the practice of low-level authoritarianism can increase the future probability/magnitude. This is my great fear with the Trump administration.

Trump has obviously had many scummy business/legal dealings over the decades, but the choice to pursue legal action against him (and not against thousands of other scummy business people) once he opted to run for 2024 was so obviously politically motivated. I'm desperately hoping that we can avoid an escalating tit-for-tat leveraging of courts/regulations/IRS (and, God forbid, CPS) to punish/harass political enemies. I'd actually like to see Trump make some high-profile pardons of Hunter Biden and other Dems, which would hopefully put the brakes on this potential escalation.

Expand full comment
TheKoopaKing's avatar

The federal indictments against Trump are very open and shut. He should be in jail right now for falsifying electoral votes and obstructing the FBI investigation at Mar-a-Lago. There is no comparison between Trump and the so-called "Biden crime family." House Republicans led an impeachment inquiry against Biden and 1) Don't recommend impeachment, 2) Don't recommend charges when Biden is out of office, 3) Have secured 0 federal indictments, and 4) There appear to be no plans for follow-up investigations.

Hunter Biden is charged over gun possession while using drugs - not for any corruption - and Joe Biden has promised not to pardon him. Trump meanwhile promises to prosecute "the enemy within", hold military tribunals, and pardon everybody involved on Jan 6. There is no equivalency here - Trump and Trump-supporting Republicans are far more corrupt than the fantasies of whatever right-wing media corporation is losing their defamation suit for intentionally lying about a public figure to their viewers this week.

Expand full comment
Desertopa's avatar

The Democrats didn't run a competitive primary, and I think it was a strategic mistake for them not to field more competitive candidates in the last four years, but their not having one isn't a sign of authoritarianism, it's the same reason the Republicans didn't run a primary in 2020: their candidate was already the President. After Biden stepped down from running, there are other candidates who most likely would have been more electable than Kamala, but rather than selecting the most electable candidate, the party ran his Vice President, the only person who the existing rules allowed to continue his campaign given that it was too late to run a primary.

As far as "using the courts to try to prevent him from being re-elected," Trump was impeached by the House after Jan 6th, but the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address.

There are other prospective crimes which he had already been involved in by that point, and then others which took place later. The Republicans in the Senate outright stated that this was a matter for the courts to address, and then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach. If we remain agnostic about whether Trump actually committed crimes, as a matter of principle I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means.

Expand full comment
Paul Botts's avatar

"the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address....then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach."

Exactly. On this particular topic the GOP has utterly outplayed the Dems in the court of public opinion, the way a housecat outplays a half-dead mouse.

"I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means."

Yep. And the SCOTUS has effectively ratified that as our new system for presidential elections.

Expand full comment
Edward Scizorhands's avatar

Most countries on earth don't have primaries. Even the US didn't have primaries where the *voters* could pick the candidate until the 1970s. Those times and places weren't authoritarian.

Expand full comment
Russell Hogg's avatar

But doesn't this assume the parties are monolithic? That isn't true even here in the UK and from what I understand US politicians are a lot more independent minded. Some Republicans and Democrats may even be anti authoritarian!

Expand full comment
Ch Hi's avatar

Many individuals are anti-authoritarian, but neither major party is, nor are any of the dominant figures in either party.

Expand full comment
Russell Hogg's avatar

Right. But you only need a few voting with the opposition to put the kibosh on whatever the plan is don’t you?

Expand full comment
anomie's avatar

Whatever was the case before, the current Republican party is about as monolithic as you can reasonably get. Most of the never-Trumpers have already defected.

Expand full comment
Shankar Sivarajan's avatar

Can we see that? That model of the US system ceased to be plausible in 1796, as soon as Washington left office. There have been two dominant parties since then (though of course, the two parties are no longer the Democratic-Republicans and the Federalists).

Expand full comment
Edward Scizorhands's avatar

The check wasn't "these institutions are in control of different parties" it's "these institutions are going to fiercely hold on to their own power and not cede it to any other."

But Congress has been perfectly willing to not do its job, letting the President do things that are Congress's responsibility (e.g. tariffs) or not bothering with making sure a law is constitutional and making SCOTUS the bad guy who says "no you can't ban flag burning you idiots."

Expand full comment
Paul Botts's avatar

Very true.

Expand full comment
Cjw's avatar

We have had authoritarian presidencies during Lincoln's term, Wilson's, and FDR's. In Lincoln's case it wound down within a decade of the war ending, in Wilson's the public reacted against it and chose "normalcy" at the voting booth. FDR's left a lasting legacy of empowered unaccountable agencies with massive discretion and power, that continues to be problematic today because it diminishes the role of Congress (which Congress has gone along with for various ideological or electorally-motivated reasons.) But we still have cycles where that authoritarian executive bureaucracy gets politically challenged, as Reagan did, as Trump is doing now, and arguably the Democrats did in the midterm cycle of '06. (If you are of the opinion that Nixon was an authoritarian, you have another easy example to add to this list.)

The American public has been able to fend this off and swing the pendulum back numerous times. It was only during major wars that anyone got very close to dangerous authoritarianism, and in each case the people wearied of war and wartime restrictions. We are not a country that rallies around grand national projects for very long, and outside of such projects we have a very low tolerance for authority. The US people can cope with this, whether or not the system as designed is fending it off in the way the founders planned it.

Expand full comment
Godoth's avatar

"No, but we can see that the US system can't cope with either of the main parties becoming authoritarian."

This is certainly an opinion you can have, but neither it nor the implication smuggled into it (that one of the parties has become authoritarian) is immediately evident. The USA is still going. The Democrats want to reduce the power of the Republicans and the Republicans appear to sincerely wish to diminish the power of the government. None of this seems particularly authoritarian.

Expand full comment
TheKoopaKing's avatar

The next VP says he wouldn't have certified Trump's loss in the 2020 election and that Trump should ignore the courts if they rule in ways he doesn't like. It's very obvious Republicans want to anoint Trump king so they can reap the short-term benefits of staying by his side. If Republicans aren't authoritarian, why did they vote not to impeach Trump when he falsified electoral votes, tried to direct his DoJ to confiscate votes from the states, and sent the Jan 6th mob to pressure Pence and House Republicans to certify his fraudulent electoral votes? This is what authoritarian dictators all over the world do to make it seem like they have the consent of the people to rule.

Expand full comment
Melvin's avatar

Part of the problem is that "authoritarianism" isn't particularly well defined, meaning that both parties constantly see whatever the other one is doing as a sign of creeping authoritarianism. We perhaps need to break down the word "authoritarian" into a few different terms. Consider the following:

Country A is a libertarian utopia where you can do anything you want, except criticise libertarianism. To protect libertarianism, criticism of libertarianism is severely punished.

Country B is a theocracy where everybody is obliged to worship the sun god every day. But this is fine, because there's full and free democratic elections each year and the theocracy party always wins with 90% of the vote.

Which of these countries is more authoritarian? I'm not sure the question makes sense, they're both deeply imperfect in two generally authoritarian directions. (To be clear, these two examples don't represent the left and the right, they're just two examples of different ways to be imperfect.)

Expand full comment
AKD's avatar
Nov 22 (edited)

EDIT: This post is wrong and was clarified in the replies.

Your framework misses the Peter Schiff case (the name is a stand-in for a certain genre of Austrian economist), who predicts a recession and/or mass inflation every year. We *can* safely ignore their constant doomsaying, given the total lack of credibility, but nevertheless, in some sense, it's almost certain that as time passes we will hit a recession and/or mass inflation. If I understood you correctly, we can't dismiss the Peter Schiffs. But we should.

Expand full comment
Chastity's avatar

Maybe you can dismiss the Peter Schiffs, but you can't dismiss that there will be a recession or mass inflation at some point in the future.

Expand full comment
Andrew Wurzer's avatar

You could dismiss that there will be a recession / mass inflation at some point in the future, but it would require an extraordinarily good foundation.

You can dismiss the Peter Schiffs. If someone constantly predicts a cyclical thing, they will eventually be right. We should *not* say they predicted the event in any meaningful way.

Expand full comment
Zakharov's avatar

The difference between the Schiff case and the Castro case is that the longer Castro lives, the more likely he is to die in the next year. The longer a country goes without a recession, the less likely it is to have a recession in the next year. In both of these cases, and unlike the nuclear escalation or AI safety cases, we have a fairly large sample size of reference cases.
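
A toy numerical sketch of the two hazard shapes being contrasted here; both curves and all the numbers are invented for illustration, not fitted to any data:

```python
import math

def annual_death_prob(age, a=1e-4, b=0.085):
    # Increasing hazard (Castro case): the chance of dying in the next year
    # rises the longer you have already lived. Gompertz-style toy curve.
    return min(1.0, a * math.exp(b * age))

def annual_recession_prob(years_since_last, start=0.25, floor=0.08, decay=0.3):
    # Decreasing hazard (the claim above): the chance of a recession in the
    # next year falls as an expansion matures. Whether real economies behave
    # this way is exactly what the rest of the thread argues about.
    return floor + (start - floor) * math.exp(-decay * years_since_last)

for t in (1, 5, 10, 20):
    print(t, round(annual_death_prob(80 + t), 3), round(annual_recession_prob(t), 3))
```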

Expand full comment
MM's avatar
Nov 22 (edited)

"The longer a country goes without a recession, the less likely it is to have a recession in the next year"

That's not really true. If you haven't experienced a recession at a time when it actually matters to you (rather than your parents worrying about stuff you don't really care about), then you're less likely to think that it will happen.

Couple that with older people dying off, and there's a tendency to start thinking "This time we've got a handle on things and it won't happen again."

Until it inevitably does. Generally for the same reasons it happened last time.

This runs counter to Bayes, but when you have a situation where everyone updating toward a lower probability (as per Bayes) causes the actual probability to go up (because the protective measures put in place are being torn down as a result of those updates), then Bayes is not all that useful.

Expand full comment
AKD's avatar

Yep. That clarifies it. So it's not just that X will almost certainly happen, but that the probability of X happening soon increases the more X doesn't happen. ("Soon-ness" doesn't have to be literally temporal, as the drug dose case illustrates, but the analogy is close enough.)

Expand full comment
Scott Alexander's avatar

I think this is similar to the Biden dementia case or the Republican dictator case.

We should calculate a per year risk of a recession (does this go up every year as you get distance from the last recession? I don't know and would be interested to hear economists' opinions!)

Then if we originally expected Peter Schiff to be an authority, we could update to a higher probability based on his word.

Then, as Schiff is proven wrong, we un-update back to our prior.
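
A minimal sketch of that update-then-un-update dynamic; the base rate and likelihoods below are made-up placeholders, not estimates:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # One application of Bayes' rule for a binary hypothesis ("recession next year").
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

base_rate = 0.10   # assumed per-year prior probability of a recession

# While we still treat the forecaster as an authority, a doom forecast is
# assumed to be much more likely if a recession really is coming:
p_after_forecast = bayes_update(base_rate, 0.9, 0.3)
print(round(p_after_forecast, 2))   # ~0.25

# Once repeated misses show the forecast fires regardless of what the economy
# does, the two likelihoods are equal, the forecast carries no information,
# and we are back at the prior:
p_after_discredit = bayes_update(base_rate, 0.3, 0.3)
print(round(p_after_discredit, 2))  # 0.10 -- un-updated back to the base rate
```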

Expand full comment
Doctor Hammer's avatar

Regarding increasing probability of recession as a function of distance from the last one: this economist says yes, but with caveats. I think the best macro model of an economy is a drunk staggering down a hallway. The ideal path he could follow, were he perfectly sober, would be to just walk straight down the hall over time, but being drunk he tends to swing and stagger one direction or the other until he rather painfully hits the wall and course corrects towards the middle. Then he goes too far in that correction, painfully hits the other side of the wall, corrects in the other direction, etc. He also has a bottle of Wild Turkey he's currently pulling off.

So, if an observer says "Huh... he hasn't hit the wall in a while... I guess he's not going to anymore?" the correct response is "nah, just means he is due." At the same time, hitting the wall recently might make it more likely he badly over-corrects and hits the other wall sooner than expected. All else equal, having just hit a wall and recovered should make it less likely he hits again soon, but really all you can say is that if he hasn't hit in a while, he's due.

That assumes of course you do not have good data on what his current trajectory and speed look like, and whether or not his kids have left toys all over the hallway for him to stumble on, or how far down that bottle of Wild Turkey he's gotten in the past hour. If someone says "Oh, he's going to hit again! Look how fast he is moving towards that RC car!" that's a more compelling argument than "It's been a while", but only if you can confirm that he is moving fast and there is in fact an RC car there. That's hard to do, so commentators often claim knowledge of speed and toys to enhance their reputation at little cost.

Expand full comment
TGGP's avatar

Australia hasn't had a recession in decades. On a related note, their central bank has been targeting 4% inflation over that time period. Many people thought it was impossible to have a "soft landing", in which inflation was brought down without sparking a recession/unemployment, but that happened recently. Perhaps it's possible for central bankers to sober up!

Expand full comment
Doctor Hammer's avatar

The first search result for “Australian recession” was a BBC article on how Australia was plunging into a recession in 2020. Granted, it claimed that it was the first since 1990, which might even be true. It is possible that central bankers can avoid recessions, but it is also true that they can lie about economic data. The USSR had amazing growth year over year according to the official data, yet never quite managed to catch up to the USA.

Expand full comment
Zakharov's avatar

Australia, like almost every country in the world, had a recession in 2020 due to covid.

Expand full comment
TGGP's avatar

A genuine Real Business Cycle recession in the modern era!

Expand full comment
TGGP's avatar

I think we have much better access to reliable economic data in Australia than we did with the USSR.

Expand full comment
Doctor Hammer's avatar

You know, I would have thought so about the US as well, but the last few years have been really bad for official statistics. I suspect it is not USSR levels of lies, but I am much less sure. There is a really big weak point in the system, in that the single source of truth has a very strong incentive to represent that truth one way, and there is little auditability.

Expand full comment
D Cubed's avatar

In the case of Ukraine you also have to think of it the other way. Russia is acting imperialist: it attacked Georgia and we slapped it on the wrist; then it took Crimea and we sent slightly more aid to Ukraine; then it staged coups and sent arms to the Donbas; then it invaded Ukraine. So as much as we can ask what brings us closer to war with Putin, we also need to ask what is enough to deter his expansionist tendencies.

So the other side is eventually some number of "teeth" will deter his advance on other nations.

Expand full comment
Marian Kechlibar's avatar

Or, as Václav Havel used to say, "the problem with Russia is that it doesn't exactly know where it ends".

Expand full comment
Arbituram's avatar

This is a fabulous quote and I can't believe I haven't heard it before.

As to the top-level quote, yes, not deterring aggression is arguably what *started* WW2, with Britain and France shrugging off their obligations to Ethiopia (as a fellow League of Nations member; good review here: https://journals.openedition.org/cdlm/7428) and not responding to Hitler breaching various treaties one after another (remilitarisation of the Rhineland, supporting a coup and then Anschluss in Austria, marching into the Sudetenland...). In no case are imperialists 'sated'; appeasement of the land-hungry is a delusion (you can appease other appetites: greed, say, but not land hunger. You've always got a newly insecure border).

Expand full comment
Melvin's avatar

If Britain and France had gone to war with Italy over Ethiopia, is it really likely that this avoids WW2 (or even the European theatre thereof)? My guess is that it just makes Britain and France weak and war-weary by the late 1930s when Germany decides to start the real thing.

Expand full comment
Arbituram's avatar

Genuine question: do you think Britain closing the Suez Canal to Italian ships would have triggered a declaration of war from Italy? It's possible, but I wouldn't think so given Italy's naval strength. This could have caused Italy to join the German fighting earlier, but I'm not sure (as in actually unsure) how much difference that would make.

Regarding France, it's hard to see how much weaker they could have been, given they fell in six weeks, only slightly longer than it takes to walk from the Brandenburg Gate to the Arc de Triomphe, walking 8 hours a day...

Expand full comment
Humphrey Appleby's avatar

While excessive caution and too much appeasement was the problem in 1938, the problem in 1914 was too little caution and not enough appeasement. You can't replace strategy by a rock that says `always escalate.'

Expand full comment
Arbituram's avatar

I won't directly argue against your 1914 point, but I have reached complete hopelessness about reaching a neat reason for the outbreak of the First World War. The more I read, the less confident I become. This is not only due to the extreme geopolitical complexity at the time, with multiple imperial expansions, shifting alliances, etc., but the (I think) underrated point that the actors themselves were much less coherent than modern or WW2 entities, so analysing WW1 in terms of "France" or "Germany" can be quite misleading.

I'm heavily influenced by the book "Sleepwalkers" here (it's a difficult but excellent read, but it couldn't not be difficult and be true to its core message: https://www.amazon.co.uk/Sleepwalkers-How-Europe-Went-1914/dp/0141027827).

Although no government even today is a perfectly unitary actor, the fragmentation of power and direction in WW1 was true to an extent difficult to imagine now, with diplomats and generals regularly going rogue. Different bureaucracies vied against each other for resources, not just through appeal (like inter-service rivalry today) but through direct action. There was also a much less clear division between elected and unelected power, with uncertainty about the influence of monarchs within their respective countries adding to the fog (e.g. communication between the monarchs of Germany, Britain, and Russia).

Expand full comment
BlaMario's avatar

So you're saying that Putin *did* escalate several times, just didn't go straight for the nukes? That seems the opposite from the claim that he's just bluffing and would never escalate.

Also, what coups?

Expand full comment
anomie's avatar

...But they have nukes. A lot of nukes. The only reason they're able to get away with any of this is because they're effectively holding humanity hostage. What we're doing right now is the correct decision: slowly whittle them down in the hope that the civilians and military turn against their leadership. We're kinda screwed otherwise.

Expand full comment
Nuño Sempere's avatar

This is a game in which you can have many different cruxes and steps. If your post is the second step vs a naïve consideration, then...

a third step might be to notice that actually, per anthropic effects you shouldn't update that much on Putin not using nukes, or you shouldn't update on AI not killing everyone yet (because if either had happened there would be fewer observers).

A fourth step might be to notice that actually a) you can still update on precursors of existential risks, on things that are correlated with or directly cause risks later (https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/), and b) a bunch of AI risk theorists/pundits have basically retreated to being unfalsifiable, and find it extremely hard to make statements about what will happen "before the end of times".

Expand full comment
Scott Alexander's avatar

What do you think of https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow ?

Also, I disagree with your framing ("retreated to being unfalsifiable"). Some things are just inherently hard to falsify. If one of your employees is a Russian spy, what should you expect to see before they betray you? Unless you're really good at counterintelligence, nothing in particular, because Russian spies try hard to pretend that they're not Russian spies. If you were to accuse the person who said this of "retreating to unfalsifiability", you would just be preemptively declaring that no Russian spies could possibly exist.

Expand full comment
Russell Hogg's avatar

I know this isn't the subject of the post, but is the big fear about AI that it will somehow gain self-awareness and decide to kill us all? Or is it that some human bad actor/lunatic will use its awesome powers to kill us all? I don't really worry about the first, but the second seems pretty credible.

Expand full comment
Edmund's avatar

A mixture of the second one and neither. (A lot of AI risk discussion is about something *like* the first one, but you shouldn't think of it in terms of "gaining self-awareness", more like… leaving the bath running. A non-sentient but very powerful intelligence causing damage along the same lines as a self-driving car driving off a cliff on a much grander, possibly civilisation-destroying scale, not because it's "alive" or turned "evil" but simply because of a random quirk in its programming that has skewed its directives.)

Expand full comment
TGGP's avatar

And we should take that seriously to the extent we actually see self-driving cars doing that.

Expand full comment
Edmund's avatar

I've had enough trouble trying to code things for myself that I take seriously the basic Sorcerer's-Apprentice/Paperclip-Maximiser scenario of an algorithm doing the letter of what I ask it for, and not what I actually wanted, with harmful results — all without stopping to check and giving me time to amend it, because that's not part of the algorithm. On a bigger scale… that could hurt people, even without human malice being involved. A military A.I. which decides on its own terms to target friendly civilians it didn't even occur to its handler to designate as "off limits" because even a human war criminal wouldn't randomly bomb their own population like that, but the drone's algorithm comes up with some significant probability that it would help in a 4D chess way (false flag?), and blam, a hundred kids are full of holes before anyone can switch the damn thing off. Doesn't seem like science fiction to me.

I'm more skeptical of Yudkowsky's framework where the neural network hallucinates completely absurd instructions of its own accord, although again, that doesn't feel outside the realm of lived experience on a smaller scale. Sometimes you'd ask the early LLMs for a recipe and they'd spit out Harry Potter fanfiction or vice versa. It doesn't seem absurd to me to worry that an A.I. you'd asked to spit out formulas for efficient industrial yogurt-processing enzymes would decide to spit out subtly-deadly poisons instead. I worry about this less than "what we ask for but Too Much" because GPT etc. seem actually pretty good at training out the most flagrant derailments.

Expand full comment
TGGP's avatar

Yeah, and the question will be how often do we accidentally poison ourselves with vs without AI, just as we can compare human vs AI drivers.

Expand full comment
The Ancient Geek's avatar

And we made it agentive in the first place, and we put it in charge, etc.

Expand full comment
Edmund's avatar

Well yes, but half of Silicon Valley keeps breathlessly promising that they'll be able to do exactly that within the fortnight. It's not that I buy the hype completely, but I'd be surprised if, given the technology actually existing, they *held off* on those sorts of applications. The gravity to create the Torment Nexus is enormous.

Expand full comment
darwin's avatar

Self awareness is orthogonal to the question of whether it will kill us all.

People use 'self-awareness' as a stand-in for 'agentic', ie acts spontaneously and pursues goals over time. But we already have ai assistants that act agentically in those ways, you can just do it with a 'while' loop and some memory, no self-awareness required.

The AI in a video game can put together a plan to kill the player and implement it over time without being self-aware. There's no reason an ai using access to real-world tools and resources can't do the same in principle, it just has to get enough resources and have a reason to do so in its programming.

Expand full comment
Jeffrey Soreff's avatar

There is also a third, rather banal possibility: AI gets improved to the point where it can do every economically and militarily important task significantly more cheaply than a human. Then every organization (barring a rounding error's worth of those that explicitly have employing humans in their charter) replaces every human in them with an AI (possibly gradually). Essentially just quietly outcompeting us in every niche we occupy. AI does _not_ have to be awesome for this to happen.

Expand full comment
Performative Bafflement's avatar

> Essentially just quietly outcompeting us in every niche we occupy. AI does _not_ have to be awesome for this to happen.

This is what I don't understand when people talk about stochastic parrots and how dumb AI is.

Have you seen *other people??*

AI is *already* smarter and more conscientious and helpful than a full 80% of humanity. If we had functional robot bodies with G4o minds, there would *already* be mass joblessness and panics, because they could replace a full 50%+ of people TODAY, with no further improvement. Hallucinations can be handled the same way they are with baristas and waitresses today - repeating your order back, and correcting if it's wrong.

And the fact that everybody is working on improvement (including NVIDIA and Hugging Face working on putting LLM minds into robots), shows if you just look an eye-blink down the road, this is obviously going to happen.

Expand full comment
Jeffrey Soreff's avatar

LOL! Many Thanks!

>Have you seen _other people??_

I have to admit that I'm rarely in a position to discuss anything with any complexity with other people, other than here (which is a very select group!).

Expand full comment
TGGP's avatar

Russian spies are something that has existed over the years, and thus we have some understanding of. Fears about AI are still speculative rather than based on experience.

Expand full comment
Vakus Drake's avatar

This is akin to saying nuclear war is unfalsifiable and so we shouldn't take seriously concerns about it. Plenty of things like nuclear war or AI need you to make inferences based on smaller scale versions of the big thing people are warning about, otherwise you'll always dismiss the fear right until it comes true.

Many of the smaller scale things people warned about AI doing have already happened, people just move the goalposts afterwards. Which would be like saying that we only have evidence of a couple of nukes being dropped in war, so it's silly to think nukes could threaten the entire globe.

Is there literally any scenario where AI does something scary enough to make you worry about it prior to it already reaching human level intelligence? Or is the equivalent of a nuclear war the only thing that would make you take the threat seriously?

Saw a good comment linked here where Gwern talks about this: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned/comment/ckwcde6QBaNw7yxCt

The main takeaway being that even when AI does things that people would have previously said would be extremely concerning people just move the goalposts and dismiss any such behavior as just a silly bug that can just be patched out later.

Expand full comment
Andrew Wurzer's avatar

"just a silly bug that can just be patched out later."

Have those people ever been wrong about this? I get that the theory is that you eventually give the AI so much power that the bug leads to devastation. What I don't agree with is that the bug is likely to lead to the destruction of humanity. I would expect there to be a series of accidents that caused mass casualties prior. There don't *have* to be, but that seems much more likely to me. Just like it's more likely that a bug I write into my code will calculate a price incorrectly that will cost a customer some money than it is I create a bug that causes the system to wire transfer 100% of the company's cash to some random other account.

Expand full comment
MicaiahC's avatar

I think this intuition is invalid. It'd be like saying "well, 9/11 couldn't happen without the Taliban trying to crash small planes into small buildings". Or "this too-good-to-be-true investment can't be a Ponzi scheme, because you would expect the Ponzi schemer to start by stealing small amounts of money first". Or "I can't be getting checkmated, since you would expect some pieces near the king to die 10 turns before I am".

The world isn't just small linear perturbations around the status quo, especially when there's another intelligence optimizing against you. And like, bugs which make companies go bankrupt ~instantly do exist, and did not have the history of smaller accidents beforehand.

Expand full comment
Andrew Wurzer's avatar

Er, the Taliban? You mean Al Qaeda? Which did attack US targets - it even attacked the WTC previously! And we did have warning signs. Now, that didn't enable us to predict the specifics or the magnitude of the September 11 attack, but that attack was also nowhere even remotely close to disabling or destroying the US. It's an example of exactly the kind of "typical" scenario I'm talking about.

Expand full comment
VivaLaPanda's avatar

++ to this

I think there’s very little reason, given AI capability advancement curves, to expect some kind of “0-100” bootstrapping curve that skips over low-level risks and goes right to extinction events. If that’s the case, then you should argue about what that curve of low-level events might look like and make predictions accordingly.

Expand full comment
MicaiahC's avatar

You are committing the same mistake that gwern is talking about: you are drawing a post hoc line around categories that you could not have drawn ahead of time. If you're talking about preventative measures that are useful, the idea that knowing there is at least one terrorist group out there which has committed terrorist acts would enable you to stop 9/11 is facile, yet that is the degree to which your previous statement needs to be true in order for your current counterargument to apply.

Because otherwise the point you are making is that "the signs were there, and 9/11 still happened, this proves that my idea of warning shots are correct".

Also I notice that you've shifted your position from "warning shots enable responses" to "well, maybe the warning shot didn't work but at least I was only shot in the foot instead of the head", with no acknowledgement there was any position change.

Expand full comment
Andrew Wurzer's avatar

You're correct on both counts. I'll have to rethink through this.

Expand full comment
Layton Yon's avatar

Putin is currently significantly constrained: he wants to keep control of Russia, and this war has if anything made him more secure in that position. It's shown that the West can't depose him without military force. Therefore, if you think of Putin's primary goal as being staying in power (of a Russia that's a significant world power), he's severely limited in his possible responses. He has no reason to use nukes until the West threatens his power, and the West can't threaten his power until they have troops actively pushing into Russia. Therefore, anything up to NATO troops in Russia is basically impossible to respond to, as Putin has, unfortunately for him, shown himself to be rational.

Expand full comment
Marian Kechlibar's avatar

"his war has if anything made him more secure in that position"

So far, but the outcome of the war is uncertain, and if it looks too much like a loss, he may follow Nicholas II or even Gaddafi.

A lot depends on the "home front". The Russian economy is holding up so far, but it is under significant strain. People get killed at the front, and their absence is felt in the form of a missing workforce. Russia makes money on oil and gas, but neither is particularly expensive, and if Trump somehow manages to sink oil prices to ~50 USD per barrel, the margins won't be great. (The Russian breakeven price is 44 USD per barrel.) The Central Bank is fighting depreciation of the ruble by burning reserves, but those reserves aren't endless, and a significant interest-rate hike was necessary to keep inflation under control; that makes investments expensive, etc.

This is a very messy problem that can't be solved by force alone and Putin knows that.

Expand full comment
Xpym's avatar

>the West can't threaten his power until they have troops actively pushing into Russia

A bad enough economic collapse might also do it. Of course, the West already does its best to make that happen; there's not much room for escalation there.

Expand full comment
1123581321's avatar

No, we're not doing enough. We're not even doing 10% of enough, because we (the West) are weak, disunited, and worried about the wrong risks all the time.

Example: one thing that would likely bankrupt Russia is low oil prices, just like in the 80's (back then with the help of the Afghan war and the anti-drinking campaign sucking the state's revenues dry). So the most powerful thing we could do is open the oil spigot, just like in the 80's. But my guess is we need a Reagan for that, and we have... what we have.

Expand full comment
Edward Scizorhands's avatar

Right, if the US were serious, we'd be pumping oil and gas at record levels.

Expand full comment
1123581321's avatar

And it's not like we have to double the supply or something that drastic. Oil prices are set on the margin and are very sensitive to the supply/demand balance (one factor being the limited storage). Increasing the supply by 5 to 10% is likely all that is needed to halve the market price. We'd need to commit to supporting some domestic production, like higher-cost frackers, to ensure they stay in business, but it's ultimately peanuts in the larger context. But that would require leadership, and honest leveling with the public, and we haven't had that at least since the elder Bush. And no, Trump ain't it either.

Expand full comment
Edward Scizorhands's avatar

Definitely, if we really wanted Russia to suffer, then every year since 2021 we would be pumping at least 4% more oil than the year before.

Expand full comment
Paul Botts's avatar

LOL... I believe "1123581321" is so stuck in their priors that your fact-based sarcasm is zipping past unnoticed. I noticed it, though, and kudos.

Expand full comment
quiet_NaN's avatar

> It's shown that the West can't depose him without military force.

Trying to depose Putin *with* military force would immediately trigger WW3. As much as the West dislikes Putin, it does not dislike him nearly enough for that.

Given that, I think that most cases in which the West deposes Putin would be assassinations, i.e. non-military means. I don't exactly consider this likely, but still way more likely than the US deciding to give Putin the Saddam treatment.

Up to 2010, the West by and large had no reason to depose Putin. Sure, he had shown some aggression against other former Soviet states outside Europe, but the West did not care much. The basic assumption was that he would be fine with him and his cronies getting fabulously rich by exporting resources via state-controlled companies. This perception changed with Ukraine, when it became clear that he was striving for Tsarist glory.

> Therefore, basically anything up to NATO troops in Russia is basically impossible to respond to, as Putin has, unfortunately for him, shown himself to be rational.

There is no single 'rational' behavior. There are different branches of decision theory, which recommend different actions. LessWrong has a lot on that.

Expand full comment
Vlaakith Outrance's avatar

This happens a lot in financial markets commentary, and it makes highly paid "strategists" at large banks rake in both attention and criticism. Some have made a career out of constantly calling recessions and market crashes. To add to your list of Eventuallys, eventually the stock market will suffer from a large decline. It will happen again and again, unpredictably so, as risk-taking in an economy reaches a turning point or a left tail event rears its ugly head. The eternal crash caller will use that clout to further their career. There is simply no money to be made in an attention economy (or a general short-term view on stock market trajectories) by preaching for a low-but-constant level of caution that should be updated as new macroeconomic data comes along. The inability of market participants to realize they are in a bubble as it balloons in scale is another consequence of the general lack of caution in the average person's mental framework.

I agree with your take that we should all do this in a Bayesian way, but evidently most people don't. What I'm more concerned about is how we can get them to see the value in moderation, when almost nothing in the average person's media diet is geared towards moderation.

Expand full comment
Doctor Hammer's avatar

I think these are all very good points. I would only add that it might actually be impossible to realize one is in a bubble, regardless of caution in the person's mental framework, almost by definition. I suspect that is why they are so common, in that the signs are subtle enough or observationally equivalent enough to non-bubble circumstances, that people can't figure it out till it is too late. The super cautious never get involved, but not because of special knowledge, just a tendency to always sit things out.

I don't know for sure of course, just a thought of mine.

Expand full comment
TGGP's avatar

On the contrary, per Scott Sumner there is no such thing as an economic "bubble", unless there are also "anti-bubbles", which we don't really have a phrase for because people irrationally don't even consider that symmetrical concept. There are just prices going up some times, and down some times.

Expand full comment
Doctor Hammer's avatar

I think Scott, as presented here at least, is wrong there; things don’t have to have symmetrical opposites to exist. Even if it were true, the anti-bubble would be the Keynesian savings trap, where everyone expects the worst and hoards, driving an otherwise fine economy into the ground until people snap out of it.

Expand full comment
TGGP's avatar

I prefer to think of it in terms of "the plucking model", which is part of why I don't buy Austrian Business Cycle Theory.

https://webhome.auburn.edu/~garriro/fm1pluck.htm

Expand full comment
TGGP's avatar

I guess I've confused things slightly by adding an extra Scott. Things indeed don't have to be symmetrical, as the plucking model (from my other comment) isn't. But that model was developed by looking at the data and seeing which correlation fit better. "Bubble" theories to me seem to rely on intuition, and the analogy of a physical bubble popping. People will say there was a "bubble" post facto, even when a later rise in the price of that same asset would appear to undermine that story, and nobody other than Sumner & Kevin Erdmann seems to pay attention.

Expand full comment
Doctor Hammer's avatar

That story is basically mine, a stumbling correction up and down around what would be the ideal path. Depending on how one defines bubble, I suppose, but I would define it as over investing past the optimal level due to limited information and momentum. The bubble pops, people over correct, then the price goes towards the optimal price. It is a disequilibrium story in effect. I don’t think people can see that they are in a bubble with certainty because they don’t have enough info, and only post facto can it be identified (and might be misidentified of course).

Expand full comment
TGGP's avatar

There's always limited information. The EMH just says prices reflect the information at that time.

Expand full comment
Some Guy's avatar

Not sure if you listen to podcasts (I do less now than I used to) but Lex Fridman had a great one with Dario Amodei where he touched on this. His thinking was basically, yes what you’re saying is true (or at least that’s my sense of it) but you still have to consider there’s a Boy Who Cried Wolf Economics going on, where you have to conserve your warnings to make sure they maintain a strong psychological signal. I thought it was very well-reasoned and he had some good, practical ideas for how to approach safety. You may know all of this already, of course.

My sense on AI risk is that the real, new danger will start once you have continuous learning. Something beyond better scams, etc. Everything we train models on now comes from humans, so it almost forces a human context into the models, and then they become static and unchanging once they complete training. You can do something like Golden Gate Bridge Claude, but for “be a good person”, to those models, and I see that as being quite safe (I also think you get a natural limit from training on human data, but we will see. Something like “the smartest person who could ever be, in all subjects people know things about.”)

But when you get continuous learning, you get data that’s not generated from a human context and that will be pretty scary if that’s not incredibly well thought through, and there are approaches there I can imagine that put me firmly into Yud territory. Although I’m still hopeful that there’s a sort of “Argument from Bears” thing going on in the background, which is that bears can pretty much kill any human they want, but by and large don’t, because they weren’t specifically evolved to kill humans.

Expand full comment
Scott Alexander's avatar

Yeah, I think there's both a supply-side injunction (don't warn too early, because people will incorrectly update on it) and a demand-side injunction (don't incorrectly update on other people's warnings).

Expand full comment
Vakus Drake's avatar

Are you aware of instrumental convergence or do you just not think it applies to the Argument from Bears for some reason?

Since it sure seems like if bears were smart enough to reshape the world according to their preferences, that could go very badly for humans. Not because the bears suddenly gain a desire to hunt humans, but simply because they end up industrializing so much that humans are forced to the margins where bear industrialization hasn't yet spread, and are potentially driven extinct in much the same way we've driven other species extinct. Again, not driven by the bears' malice; it's just that in this scenario we happen to be in the way.

In much the same way, you can imagine a scenario where AGI might wipe us out without even having any intention to do so. It's just that at a certain point its industrial efforts end up competing with our own (creating a pragmatic incentive to eliminate us), or rendering the earth unlivable for humans.

Expand full comment
Some Guy's avatar

I am aware of instrumental convergence but I think there are a few giant monkey wrenches to achieving it practically. I also don’t think agency is automatic in the same way I believe Eliezer does (with lots of hand waving on specifics). Like I can imagine a machine that can give instructions and make any possible compound, molecule, or organism, but is no more self-aware than a snail. It would depend a lot on the particulars, but I don’t think you step from “here is a bunch of data on chemical processes” to “I think therefore I am”, or even fuzzy agency, at any level of compute before you yourself are basically a superhuman. To frustrate you further, my best guess is that an Adversary AI could kill all humans but probably not all life on Earth, or really much of anything a few star systems away. I think that as you get more flexible in what you can do, smarter about how you do it, and about how you evaluate goals, then at certain levels of intellect you become self-terminating, and there are also evolutionary forces at play. You can survive, but I think only by being kind of boring. Most goals are kind of stupid from the computational lens, and you need to preserve enough stupidity to want to do them. I’ve sometimes wondered if that’s an answer to the Fermi paradox.

Expand full comment
Vakus Drake's avatar

Have you considered the arguments Gwern presents in: https://gwern.net/tool-ai ?

Expand full comment
Some Guy's avatar

I read it and I think I still object. I like to let novel takes stew a bit but this seems close to what I had previously read so I want to make sure I can see the new planes and angles he brings.

Expand full comment
JamesLeng's avatar

Humans are easy to domesticate, and industrial infrastructure is mostly species-agnostic. Sir Bearington of Svalbard seems more likely to flex aristocratic status by hiring Navy SEALs as escorts on a penguin-hunting safari, than chase them off for being "in the way."

Expand full comment
anomie's avatar

...Why the hell would bears have an aristocracy or care about status?

Expand full comment
JamesLeng's avatar

We're talking about hypothetical highly-intelligent bears who develop an industrial base of their own, right? And are successfully competing with humans for access to natural resources? Seems like that would require some degree of hierarchical social organization on the part of the bears, and while performative displays of technologically exaggerated versions of traditional survival skills aren't strictly necessary, they seem plausible enough given the rest.

Expand full comment
Vakus Drake's avatar

The issue is that in this scenario the bears are smarter and more technologically advanced, by analogy to superintelligence. So how exactly are humans supposed to compete with bears who are as far above us as we are above chimpanzees? I mean, even chimps at least have the advantage of being a lot stronger than humans.

Expand full comment
JamesLeng's avatar

Find something that needs to be done as part of the broader industrial economy, which humans enjoy doing but bears mostly prefer to avoid. Invest in vertical integration, and automation of the complementary tasks, to reduce costs, and otherwise take appropriate steps to ensure steady demand. Avoid threatening the bears' core interests... except in clear-cut self-defense, or when necessary to compel their fulfillment of contractual obligations, and even then only as a last resort, with great care taken to frame the oppressor's probable response as violating bear-on-bear social norms more severely than the protest.

Expand full comment
Vakus Drake's avatar

This answer only seems to make sense if the gulf between humans and superintelligent bears isn't that great, since proposing that chimpanzees follow your advice here to avoid humans driving them extinct would seem pretty silly.

>Find something that needs to be done as part of the broader industrial economy, which humans enjoy doing but bears mostly prefer to avoid

Aside from the gulf in intelligence, this particular advice doesn't work when engaging with agents capable of self-modification, or of creating new agents with the first agent's desired preferences.

Plus, even if you ignored that problem, humans are really unreliable in many ways. So one needs to justify why baseline humans would do better than artificially created human knock-offs, or humans subjected to mind control to make them more controllable/reliable.

Expand full comment
JamesLeng's avatar

Full self-modification is inherently unstable. Given an assumption that efficiently creating subagents with arbitrarily defined, stable preferences is possible at all, most of the AI alignment debate is irrelevant, pending an explanation of why we couldn't just do that ourselves in the first place.

Answers to political questions in general stop making sense when you assume one side has no consistent preferences, and wields vague godlike powers that get redefined whenever specifics might be slightly convenient to the other side.

That overall strategy I described works for anything from chimps to cheese mold; humans merely happen to be able to spell it out explicitly rather than deriving by trial and error. Cats and dogs and horses seem to do alright. Consider: https://interface.fandom.com/wiki/The_Cat%27s_Narrative

Gut flora are about as many orders of magnitude below us in intelligence as could meaningfully be measured at all, but when one of their trade unions calls a general strike, the macrobiome typically reaches for appeasement rather than sending in nanoscopic Pinkertons. We can - in theory, and sometimes even in practice - replace all their essential functions with synthetic substitutes... but for the most part it's difficult, dangerous, disgusting, and unnecessary, so we don't.

Expand full comment
Rockychug's avatar

I generally agree with the point made here, but in this specific instance of Republicans coming to power, I strongly believe there are hints that it's more likely (albeit still unlikely) they'll throw away democracy than it was 4 years ago, so I'd completely understand why someone's model would update in this direction.

Otherwise, I'm probably nitpicking, but I think your wording about the Russia-Ukraine war is a bit unfortunate in this post, for example:

"Putin’s level of response to various provocations"

Provoking means that there is an intention from the West/NATO to make Putin angry and to incite a reaction, whereas characterizing Putin's actions as "responses" strongly decreases his level of responsibility in the whole situation. Putin is the aggressor here, and assisting Ukraine in its defense is not a provocation.

Expand full comment
Jeffrey Soreff's avatar

>Putin is the aggressor here,

Agreed

>and assisting Ukraine in its defense is not a provocation.

I don't see this as relevant to assessing risks. My guess is that it is in NATO's long-term interest (and Taiwan's interest) to have Putin lose as much as possible in his war, but it is still in our interest not to make him think that he has nothing further to lose by launching the nukes, and thereby have him actually launch them. E.g. if we knew for a fact that he'd launch if missiles hit within 50 km of Moscow, but not otherwise, we'd want to arm Ukraine with missiles set to reach 51 km from Moscow.

Expand full comment
Humphrey Appleby's avatar

More like 100 km from Moscow. Just in case the missiles overperform, or get very favorable winds or something. In sight of the cliff face, but not right at it.

Expand full comment
Jeffrey Soreff's avatar

That's fair. Yes, one wants a margin for error. Many Thanks!

Expand full comment
apxhard's avatar

Ostensibly, zero isn’t a probability. Doesn’t this mean we should expect some nonzero chance that Fidel Castro will never die? Or that he’ll come back from the dead?

Expand full comment
Scott Alexander's avatar

I mean, sure, it's always possible (in 2000) that we would invent immortality tech before he died, but this is a 0.001% (or whatever) chance and I don't think it changes things much. You could have some tiny probability mass on the invention of immortality (which nothing about Castro's current age or health updates) and then update the rest normally.
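
A minimal numerical sketch of that kind of two-part prior, with invented numbers (the 0.001%-ish immortality mass and the toy annual survival rate are placeholders, not estimates):

```python
# Minimal sketch of the two-part prior described above, with invented numbers:
# a tiny fixed mass on "immortality tech arrives before Castro dies", plus an
# ordinary mortality model for everything else.
p_immortal = 1e-5        # the "0.001% (or whatever)" mass
annual_survival = 2 / 3  # toy chance a 90-something survives another year

def p_alive_after(years: int) -> float:
    """P(still alive after `years` more years) under the mixed prior."""
    return p_immortal + (1 - p_immortal) * annual_survival ** years

for years in (5, 15, 40):
    print(f"alive after {years:>2} years: {p_alive_after(years):.8f}")
# Over ordinary horizons the immortality mass barely moves the answer; it only
# starts to matter once the mortal model's own probability shrinks below ~1e-5.
```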

Expand full comment
Deiseach's avatar

Like Granny Weatherwax - check if there was a sign saying I ATEN'T DEAD 😁

Expand full comment
Padraig's avatar

Zero is a probability - it means something doesn't happen. For practical purposes, a quantum fluctuation with probability 10^-10^100 can be rounded to zero. Once Castro's dead he stays dead.

In old age (above 90 or so), statistics suggest that there's about a 1/3 chance of death each year, and surviving is more or less independent from year to year. So: imagine rolling a die; if you get a 1 or 2, Castro dies, otherwise he lives another year. Eventually he's going to die. Of course, it's possible (but not probable) that you roll 15 times without seeing a 1 or 2 - that's the probability that Castro makes it to 105 (sketched numerically at the end of this comment). Two things to note:

- as you increase the population of 90 year olds you study, you expect to see the age of the oldest survivor increase.

- you need an exponentially increasing number of 90 year olds to get linear growth in lifespan.

It's very different to say that mathematically it's possible to live forever (i.e. in an infinite-monkeys scenario, one of them never rolls a 1 or 2) than to claim that Castro will (i.e. that if you start rolling dice you will never hit a 1 or 2).
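
A small code sketch of this dice model, using the illustrative 1/3 annual death chance from above (the cohort calculation assumes independent, identically distributed lifespans):

```python
# The dice model above (illustrative numbers): each year past 90 there is an
# independent 1/3 chance of death.
p_survive = 2 / 3

# Chance a given 90-year-old reaches 105, i.e. survives 15 consecutive rolls:
print(f"P(reach 105) = {p_survive ** 15:.4%}")   # roughly 0.23%

# Expected age of the oldest survivor in a cohort of n ninety-year-olds,
# using P(oldest reaches at least 90 + k) = 1 - (1 - (2/3)^k)^n.
def expected_oldest(n: int, horizon: int = 300) -> float:
    return 90 + sum(1 - (1 - p_survive ** k) ** n for k in range(1, horizon + 1))

for n in (10, 100, 1_000, 10_000):
    print(f"cohort of {n:>6}: oldest expected to reach ~{expected_oldest(n):.1f}")
# The record age grows roughly linearly in log(n): each extra year or two of
# maximum lifespan requires an exponentially larger cohort, as the comment says.
```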

Expand full comment
sam's avatar

The Putin example is different from the others because the principal arguer for caution is Putin himself: he both sets out what his red lines are and then decides, in the event they're crossed, whether to push the doomsday button. Further, he is motivated to lie about his red lines to scare off Western support for Ukraine. The others, on the other hand, have caution-arguers more or less independent of the bad event they're warning against. The latter group will presumably be right eventually, in the stochastic sense, but I think it's totally reasonable to discount someone who has repeatedly set red lines and then refused to enforce them.

Scott's right that there _is_ doubtless some level of action that would provoke a doomsday response from Putin, but that's not necessarily related to his claimed red lines; we have to try and figure out what that actual line is from more-or-less first principles, precisely because Putin's discredited himself.

Expand full comment
Andrew Wurzer's avatar

Well said.

Expand full comment
Jeffrey Soreff's avatar

Seconded! I just now reached sam's comment, and have said essentially the same thing.

Expand full comment
Salemicus's avatar

There are two problems here - the general and the specific.

The general problem is that discrediting a new hypothesis makes us revert to our priors - but if our priors are wildly different, then we haven't got anywhere. Moreover, a prediction being necessarily true "eventually" is not necessarily meaningful. If a drug is as toxic as water, I say your worries about its "eventual toxicity at some dose" are *false*, never mind that you could choke on a sufficiently large pill.

Specifically on AI, I would be *very* surprised if, in 2500, a hostile AI could hack important systems, because I would expect those systems to be guarded by advanced AI, and I think there are strong reasons to believe "defence" has a structural advantage over "attack." In fact, I expect the danger of AI to decline over time, as the first-mover advantage evaporates. Now, that's my prior and I can't prove it, but that goes back to the general point that "reverting to priors" doesn't really help when we don't have an actuarial table for the event in question.

Expand full comment
MoltenOak's avatar

> "defence" has a structural advantage over "attack."

Could you elaborate on your reasoning here? It seems very apparent (and I remember reading) that attack is easier than defence on a complex system, since, to be successful, the attackers have to find a *single* exploitable weak point whereas the defenders have to protect/fix *all* potential weak points. Put another way: with sufficient recon, the attackers need to break the ONE weakest link in the chain, whereas the defenders need to ensure ALL chain links are as strong as they can be. Additionally, even identifying which chain links/entry points exist can be difficult in a complex system. And determining how weak they actually are requires thinking like the attacker.
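
A toy version of that universal-vs-existential asymmetry, with invented numbers and the (contested, see downthread) assumption of a single layer of independent entry points:

```python
# Toy "weakest link" model (invented numbers): a single-layer system with n
# independent potential entry points, each exploitable with probability p.
def p_breach(n: int, p: float) -> float:
    """Chance the attacker finds at least one exploitable entry point."""
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(f"{n:>4} entry points at p=1% each -> breach chance {p_breach(n, 0.01):.1%}")

# Flip it around: to keep the overall breach chance below 1%, the defender has
# to push the per-component flaw rate down in proportion to the system's complexity.
for n in (10, 100, 1000):
    required_p = 1 - (1 - 0.01) ** (1 / n)
    print(f"{n:>4} entry points -> each must be exploitable with p < {required_p:.4%}")
```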

Expand full comment
TGGP's avatar

Shouldn't that predict that attacks will become stronger than defense and thus systems will stop getting complex? That doesn't seem to be what's happening. We live in a world where civilization is going strong rather than at the mercy of raiders https://westhunt.wordpress.com/2017/10/25/the-immortal-storm/

Expand full comment
Asahel Curtis's avatar

Cyber attacks really are getting stronger over time, and we've just decided to live with having our personal information stolen fairly regularly. The cost of cyber attacks is low and the benefits of complex systems are high.

Expand full comment
MoltenOak's avatar

If the defense remained static, the attack would be cost-efficient to execute, and the attackers had a low probability of getting caught, then yes, attackers could break a system once and then keep exploiting it until the system broke down. But defenses evolve and people can/do get caught. With some good weapons and preparation, it's probably easy to rob (parts of) a bank or a shop, and some people do. But they're also risking a lot each time.

Expand full comment
Salemicus's avatar

Firstly, your attitude to attack versus defense is out of date in terms of cybersecurity. The idea now is Zero Trust. If you imagine a medieval castle, breaching one weak point on the perimeter wall is not enough, because there are layers.

But secondly and more importantly, the question is not whether attack is easier than defence in a vacuum. Sure, there are certain advantages to the attacker, but remember that defense is conducted legally, is far better financed than attack, etc. So there's never a true vacuum, it's all contextual. The question is whether we think an *AI* cybersecurity arms race favours attack or defence. And there it seems clear that defence has the edge. I as the defender can run recursive queries with the same AI you would use to attack me, until no flaws remain. Moreover, cybersecurity is hard because it is complex. Advanced AI makes it increasingly tractable. So the higher the level of AI both sides have available, the more the defense is helped.

Expand full comment
MoltenOak's avatar

> your attitude to attack versus defense is out of date in terms of cybersecurity.

That's quite possible, I'm not an expert in this area and am not keeping up too much with it. Nevertheless, as far as I know, "there's no such thing as perfect security" has been a long-running mantra in the cyber security world.

To show that your attitude is not so out of date, perhaps you could kindly provide some references of experts or relevant organizations stating "actually, the mantra is false and there (theoretically) IS such a thing as perfect security" (that doesn't involve blowing up all the computers).

I doubt you'll find anything, but if you do, I'd be genuinely curious to see! My own search has mainly turned up this: https://cert.europa.eu/publications/security-advisories/2021-017/

> The idea now is Zero Trust. If you imagine a medieval castle, breaching one weak point on the perimeter wall is not enough, because there are layers.

Fair enough, that does make it significantly harder for the attacker. Not to overly dwell on the example, but it's not like medieval castles were perfectly safe thanks to such a structure. Less nitpickily, my having to pass through several layers simply means I either have to find some kind of flaw in each, or find a way around this issue (since the defender has to ensure that it's impossible to breach the inner walls without breaching all outer ones first).

> the question is not whether attack is easier than defence in a vacuum. Sure, there are certain advantages to the attacker, but remember that defense is conducted legally, is far better financed than attack, etc. So there's never a true vacuum, it's all contextual.

Fair enough, those are valid points regarding non-AI hacking. I'd like to point to the existence of state-sponsored hacking/cyber-spying to prove the existence of well-financed, legal (as in my not having to worry about going to jail for it) cyber attacks. And to the CVEs of huge and valuable companies like Google (https://www.cvedetails.com/vendor/1224/) to demonstrate that money alone is no guarantee of cyber security.

> I as the defender can run recursive queries with the same AI you would use to attack me, until no flaws remain.

This assumes that

- you have access to the exact same AI(s) as I do (which I could change immediately by giving it a single further training example you don't have), and know which one(s) those are

- you can make a complex modern system flawless (in a security sense) without an unreasonable amount of resources (including time and effort)

- that you are willing to repeat this process any time anything significant changes in your system which might introduce a vulnerability, or any of the systems your system depends on change

- etc.

I think you can see how one can reasonably disagree about - or at least doubt - these premises. Generally speaking, I'm not convinced we will get to a point where cybersecurity is "solved" in the sense you seem to imply here. For any given system (which is static, including all dependencies and externally connected interfaces!), it's probably possible to make it perfectly secure in the relevant sense (e.g. by not letting it interact with anything in any way). But to do so without sacrificing a significant amount of its use? Difficult. And to do so without a mind-boggling amount of resources (since you literally need to check *everything*)? Doubtful.

Remember that we are not even currently sure whether the cryptography we are using is secure in principle. (From https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science: "Do one-way functions exist?", "Is public-key cryptography possible?")

> Moreover, cybersecurity is hard because it is complex. Advanced AI makes it increasingly tractable. So the higher the level of AI both sides have available, the more the defense is helped.

I don't see how that follows. It is complex for both sides, and AI makes it more tractable for both sides (easier to spot vulnerabilities, then exploit/fix them). And the presence of AI doesn't change the fundamental issue: a defender has to defend *everything* (universal quantifier), whereas an attacker has to break *something* (existential quantifier), even if it's several things.

Expand full comment
Salemicus's avatar

No-one is saying that cybersecurity is perfect, just that "weakest point in the chain" is not a good analogy any more.

The point is not that we will ever get "perfect" or "solved" security (any more than medieval castles were "perfect") but whether it will get comparatively easier or harder to attack or defend as AI advances.

Expand full comment
MoltenOak's avatar

> No-one is saying that cybersecurity is perfect

Sorry, I thought that was what you meant by "I as the defender can run recursive queries with the same AI you would use to attack me, until *no flaws remain*." (emphasis mine) I guess I took it too literally.

> just that "weakest point in the chain" is not a good analogy any more.

Fair :)

Expand full comment
JamesLeng's avatar

If you have to get through the outer wall before you can start probing the inner wall for weak points, the inner wall doesn't need to be perfect; it just needs to be solid enough that you won't find something exploitable before defenders notice the flaw in the outer wall, which becomes easier for them to do when you're making such intensive use of that specific flaw to smuggle in probes.

Expand full comment
Hank Wilbon's avatar

Agree. It is not obvious that AI will ever be very dangerous. Odds are at least as good that advanced AI will make the world much safer. All other technological advances have made the world safer (for humans), even the tech made specifically as weapons. The way to bet is that further AI advancement will make humans increasingly safe.

It is unreasonable to equate the inevitability of an individual human death with the mere possibility that AI might one day be very dangerous.

Expand full comment
demost_'s avatar

I think this article has picked very, very bad examples.

For the US presidency, just replace "republicans" by "Trump", and the whole argument collapses. And actually, this holds even if you replace Trump with any other politician. If politician A has served 8 terms, what is the probability that they have turned their country into a dictatorship? That probability is really, really high, by all the data that we have. Even if you start with very decent politicians, being in power for a long time turns politicians into dictators.

And Penny Panic does not exactly argue that the situation is the same as in 2016. She argues

1) I was concerned in 2016.

2) Since then, Trump has acted to tear down checks and balances.

3) Trump has not succeeded, but due to his partial success, and his increased government experience, it may be easier for him this time.

4) Hence, now we should be more concerned than in 2016.

You can disagree with many of those steps. But saying that we are all oblivious and should assign the same probability now as in 2016 is a gross misrepresentation of the debate, regardless of what probabilities you assign.

Expand full comment
Deiseach's avatar

Penny Panic has been arguing:

(1) 2016 was not right! The election was fixed/stolen! Trump did not win fair and square! Russians hacked the voting machines! Question the results, this is your civic and patriotic duty!

(2) 2020 is fine! Most secure and safest election ever! Impregnable voting machines! Election denialism is the biggest threat to our democracy!

(3) 2024 is not fine! The election was fixed/stolen! Trump did not win fair and square! Musk/Russians hacked the voting machines! Election denialism is your civic and patriotic duty!

I'm not inclined to listen to Penny any time, no matter what she is squawking about as she runs through the town square crying that the sky is falling.

https://www.bbc.com/news/articles/cy9j8r8gg0do

https://www.cbsnews.com/news/election-conspiracies-persist-even-with-different-outcomes-2024-election/

EDIT: This equally applies to Hettie Hysteria who is squawking on the other side about cheese pizza etc.

Though the hair-pulling in the aftermath is a guilty pleasure, even if it is unedifying for both sides. The "13 keys to the White House" guy seemingly lost his cool and said that disagreeing with him was blasphemy?

https://www.newsweek.com/allan-lichtman-cenk-uygur-piers-morgan-fight-1988715

"At one point in the conversation, Lichtman said that he would not stand for "blasphemy" against his reputation, saying: "I admitted I was wrong. I don't need you to call me stupid. I will not sit here and settle for personal attacks or blasphemy against me."

Uygur replied: "Blasphemy against you? Who the hell are you, are you Jesus Christ?"

At this point, everyone needs a nice sit down and a cup of tea and some quiet time.

EDIT EDIT: Maybe watch some Chinese AI-generated video to relax? 😁

https://x.com/Eivor_Koy/status/1859127741209321951

Expand full comment
demost_'s avatar

Huh? I don't see a connection with whether elections are rigged or not.

I have read some articles about whether or not the US democracy is robust against a Trump presidency, and I haven't read even once the opinion that the election was stolen or rigged. Whatever person you are talking about, they have nothing to do with the Penny Panic from my comment.

Expand full comment
Deiseach's avatar

You are fortunate if you're not coming across online "guys, hold on tight, Harris and her campaign are in it for the long run, she's going to call for recounts and challenge the results, Trump is *not* president" messaging.

Granted, there's not a *lot* of it about, but I am seeing it here and there.

https://www.aljazeera.com/news/2024/11/9/fact-check-did-20-million-democratic-votes-disappear

https://www.abc.net.au/news/2024-11-08/democrat-election-conspiracy-theories/104573550

I don't take it seriously, but it is funny to see people who would have said that accusations about the 2020 election were all fake and conspiracy now making the same kind of accusations themselves.

Expand full comment
Vaclav's avatar

There are dumb, hypocritical people on every side of every issue. Your anti-anti-Trump shtick is not a good substitute for actual thought.

Expand full comment
dionysus's avatar

When did Penny Panic ever argue that the 2024 election was fixed/stolen?

Expand full comment
George H.'s avatar

Hmm, (since I haven't thought about Rome yet today.) I don't see much chance for any particular politician. But I do think there is a chance that the US populace gets so sick of the political games that each party plays that we all say, "F-it, give me a Caesar who can cut through all this BS." And my prior would be that a US Caesar could come from any political party. (or even no party.)

Expand full comment
TGGP's avatar

Trump is not an intelligent guy who seems to learn from his mistakes. He's also going to be as old as Biden at the end of his term, with all the dementia-risk that entails. I wouldn't put much stock in his "increased government experience".

Expand full comment
demost_'s avatar

I don't care too much about the US, but the argument I heard most was roughly the following:

Trump did not enter his first presidency with a plan for whom to put in positions of power to make substantial changes. He took a very long time to decide on his cabinet, and many positions were eventually filled with old-school politicians who would oppose many of his plans. He also did not anticipate much of the resistance from government agencies.

For his second term, he was much quicker in selecting his administration, chose people who are much less likely to oppose his plans, and is determined to remove many of the leading figures of government agencies. Even if Trump himself becomes too demented to pursue his plans further, the people that he installed into power will pursue them.

I do find this convincing. Not necessarily in the sense of making it likely that Trump will dismantle democracy, but in the sense of an increased likelihood that Trump will get more of his agenda implemented than during his first term. Independent of whether I like the agenda or not. Do you disagree with that?

Expand full comment
TGGP's avatar

Trump is just not good at hiring people who further his plans. Him having to fire people he'd hired was not something he just had to learn about when he started, it was a recurring feature of his administration. One could also say that a recurring feature of working in the Trump administration is disliking Trump.

Expand full comment
Wuffles's avatar

> Trump is not an intelligent guy who seems to learn from his mistakes

Er, I think his cabinet selections alone have rather disproved that point. Whether or not you agree with them, most people have concluded they are pretty much all Trump loyalists that will not undermine his agenda. That seems to indicate a rather high degree of learning from past mistakes.

Expand full comment
TGGP's avatar

His administration hasn't begun. Nobody has actually taken office, so we haven't seen whether they would undermine him. We've just seen him favor people who suck up to him, which is not new.

Expand full comment
Edward Scizorhands's avatar

> Trump is not an intelligent guy who seems to learn from his mistakes

He seems to be putting only loyalists into positions, as opposed to competent people with qualified resumes.

Expand full comment
TGGP's avatar

I agree he's not selecting based on competence. But it remains to be seen whether they'll be any more loyal to him than anybody else who's worked under him.

Expand full comment
Clutzy's avatar

I mean, she may argue that, but there isn't really any evidence for it. The evidence for Biden-based authoritarianism is much stronger, between the similarity of J6 to the Reichstag fire (cui bono?), the subsequent heavy-handed prosecutions out of line with precedents for other political protestors, and then the prosecutions of the former president himself on the thinnest of grounds.

Expand full comment
Chastity's avatar

It was not thin grounds. January 6 was a coup attempt: there was a coherent plan that Donald Trump was attempting to push to prevent Joe Biden from becoming President, and when Mike Pence refused to go along with it, he sent an angry mob at the Capitol and watched as they beat the shit out of cops because he approved and loved what they did.

Expand full comment
Andrew Clough's avatar

Just wanted to point out that our base rate for whichever party establishing a dictatorship in the US shouldn't be nearly as low as .001%. Presidential (as opposed to parliamentary) democracies tend to be unstable, but the US has been blessed for most of its history with low ideological partisanship, except early on, when the Republicans displaced the Whigs, and in the current moment. I guess I have to thank Andrew Jackson for that. A longer version here https://www.vox.com/2015/3/2/8120063/american-democracy-doomed

Expand full comment
Scott Alexander's avatar

Agreed, I was using this as a deliberately extreme example but I agree it's higher than that.

Expand full comment
Kimmo Merikivi's avatar

>Eventually at some level of provocation, Putin has to respond, and the chance gets higher the more serious the provocations get.

I disagree with this, as there's also a level of response at which Putin feels he has to give up and fold (that is, continuing the war becomes a greater risk to his life and power than stopping it: right now the oligarchs have reasons to be happy, as the war gives them opportunities to steal, and his power is secured by the supposed threat of an outside enemy, but there are countervailing effects like sanctions, an overheating economy and the Ukrainian strategic air campaign hurting the oligarchs' interests, and resources that could be used to keep the population in check being wasted in meat assaults). And reasonably, that level of response (e.g. giving Ukraine the means to kick Russia out of Ukraine's internationally recognized borders) arrives well before it's even remotely likely he'll resort to nuclear weapons (e.g. Western allies nuking Moscow).

What would be an analogy to this? Let's suppose there's a theorem (it's exceedingly unlikely there is such a theorem, but let's suppose) that proves 10^24 exaflops is sufficient to create an AI that can provably solve the alignment problem. And we have really really strong reasons to believe existential risk from AI cannot materialize until 10^27 exaflops. Consequently, while it is true that naively pushing more and more compute will at some eventual point pose AI-related risk, our best model of development suggests that some lesser and almost certainly safe additional investment will eliminate the subsequent risk entirely.

And while it's irrelevant to the specific argument being made here, let's not forget that there are other considerations: the general game theory around blackmail; the near-guarantee of nuclear proliferation if it is shown that nuclear powers can get away with blackmail; the risks of NOT acting (mass destruction with conventional weapons - cities like Mariupol have fared no better than if they had been struck by a strategic nuclear weapon - torture, genocide, and the use of people from occupied territories as cannon fodder in the future invasions that Putin has promised will happen once they win, and which, unlike his nuclear blackmail, his past actions show he means); the moral duty to help the invaded; the ongoing immiseration of the Russian people for as long as the fascists are allowed to reign; and the deepening co-operation between neo-Axis powers, such as Russia sharing nuclear technology with countries like North Korea in exchange for military aid...

Expand full comment
Arbituram's avatar

Agree with the above. Unfortunately if the past couple of decades have taught us anything it's that countries should rush to get nukes as soon as possible and never give them up no matter what; remember, Ukraine had a huge nuclear arsenal post USSR collapse and gave it up in exchange for guarantees from Russia, the USA and the UK. Oops.

(I'm aware of the argument that Ukraine didn't have control systems etc. to *use* the nukes, but don't find that persuasive. They could have figured it out.)

Expand full comment
The Ancient Geek's avatar

So you think Russia would have taken it lying down if Ukraine had kept its nukes and built control systems?

Expand full comment
Arbituram's avatar

At that particular point in history? Yes.

Expand full comment
anomie's avatar

Putin is 72 years old. He's not at the age where he has much to gain by saving his own life. What's the point of living a few extra years as a king of nothing?

Expand full comment
JamesLeng's avatar

What's the point of surviving all that KGB infighting to become king in the first place? At some point, doing whatever will keep his head attached to his neck and his bloodstream polonium-free is just force of habit.

Expand full comment
moonshadow's avatar

Some things are like black swans. Each time you look for one and fail to find one is a little bit more evidence that none exist.

Other things are like Russian roulette. Each time you pull the trigger and survive, your estimate of the risk in pulling the trigger again needs to go /up/, not down.

The hard part is knowing which is which.
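
A small numerical contrast between the two cases, with made-up numbers; the roulette half assumes the no-re-spin variant (with a fresh spin each time, the risk stays flat at 1/6):

```python
# Black-swan case: Bayesian update on "black swans exist" after k failed
# searches, assuming each search would find one with probability q if they exist.
def p_exists_after(k: int, prior: float = 0.5, q: float = 0.1) -> float:
    miss_if_exists = (1 - q) ** k
    return prior * miss_if_exists / (prior * miss_if_exists + (1 - prior))

# Roulette case (six chambers, one bullet, no re-spin): after surviving k pulls,
# the chance that the *next* pull fires is 1 / (6 - k).
def p_next_pull_fires(k: int) -> float:
    return 1 / (6 - k)

for k in range(5):
    print(f"after {k} tries: P(swans exist) = {p_exists_after(k):.3f}, "
          f"P(next pull fires) = {p_next_pull_fires(k):.3f}")
# One estimate drifts down with each non-event; the other mechanically climbs.
```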

Expand full comment
Kimmo Merikivi's avatar

Right, and I posit that Putin's nuclear blackmail is a black swan. There is a set of actions for which we have already reasoned from first principles that he might resort to nuclear weapons (a military invasion into Russia threatening the continued existence of the state and Putin at its helm), and there's a set of actions we don't believe will result in a nuclear response (for starters, everything that took place during the Cold War and didn't launch a nuclear war, up to and including pilots of the great powers using the most modern military aircraft in proxy conflicts, and everything that has happened in the current conflict so far, up to and including drone attacks against the Kremlin, attacks with Western missiles against targets that according to Russian law are part of Russian territory, and an invasion and continued occupation of internationally recognized Russian territory).

Threats of a nuclear response to x are some evidence of a nuclear response if x happens. But unfulfilled threats we have seen for the umpteenth time are evidence of those threats being a bluff - evidence for the theory that Russia bluffs because it's the only thing they CAN do against the overwhelmingly superior military-industrial capacity of Ukraine's allies, were those allies to cease self-deterrence and get serious - and that should significantly increase our confidence in the original working theory (whatever it takes to let Ukraine recover its internationally recognized territory isn't among the nuclear-escalatory set of actions).

Expand full comment
Mark's avatar

+1

Expand full comment
1123581321's avatar

"Some things are like black swans. Each time you look for one and fail to find one is a little bit more evidence that none exist."

Taleb would have a fit if you told him that. No. The whole point of the black swan example is that looking for one and not finding it provides no evidence that none exists. Maybe none exist, but maybe you're just looking in the wrong places, and black swans are actually quite common across the pond. Which is exactly what happened with actual black swans.

As a similar example: you look for this ridiculous animal called a "snake", what with it having no legs and stuff, and you never find one no matter how hard you look, and now you're convinced this snake thing can't possibly exist - you have all the evidence of it not being there.

Then you go to the nearby city of Dublin and board a ship to the continent...

Expand full comment
The Ancient Geek's avatar

That's a point against using Bayes as an epistemological one-stop shop. You need an explanatory model as well.

Expand full comment
Pope Spurdo's avatar

I'm not sure that this matters, but the Biden-dementia discussion wasn't really about predicting the future; it was about two competing interpretations of the present. Biden would do something odd - nibble on his wife's finger on stage, claim to have been raised in both Jewish and Puerto Rican neighborhoods, explain that he tied an onion to his belt because it was the style at the time, fall up a staircase - and the Republicans would interpret it as a dementia sign, while the Democrats would dismiss it as something that he had been doing for decades, or "just a stutter." Neither side was being completely honest (some of the videos were less bad in context; Biden unquestionably had lost a step since the 2012 debate with Ryan), but they were interpreting (read: spinning) what they saw, not predicting what would happen.

The debate debacle didn't necessarily cause the Democrats to change their predictions of how demented Biden would be in 2027. It was evidence of present decline that couldn't be plausibly spun as something else, or explained away as deceptive editing. I don't know if Biden actually has dementia, but the vibe that everyone watching got was clearly that this guy should be playing shuffleboard, not making real-time decisions on drone strikes.

Expand full comment
Deiseach's avatar

What fascinates me in all this is that after that one debate, the Democrats threw Biden under the bus. But at the same time, he was perfectly fine to keep being president.

And he seems to be going around doing the presidential things - he went on a trip to the Amazon recently? The first US president to do so? If he's really not capable anymore, why are they letting him do this? And now I see there is some question about him "wandering off" - I think it was more he walked away and didn't take questions and went offstage, as it were, rather than confusedly wandering off, but it still is mind-boggling.

https://www.youtube.com/watch?v=MhL8XcEceKA

Either he's not able to run a campaign and so isn't able to run the presidency, or he is capable but one bad performance threw the Democratic Party into a panic, *or* Biden isn't running the administration in any meaningful fashion, it's all unelected advisers and backroom boys, so no worries, just let Grampa live out his last few days at home.

I don't know which it is and I sort of think it might be important to know? We can't say "ah well, everything is quiet and fine, let him pass through the lame duck period in peace" because Ukraine! Putin! The Middle East! The ICC issuing a warrant for the arrest of Netanyahu as a war criminal! Biden/his administration seem to be making decisions right now involving all these live and hot issues, and if he genuinely is not mentally with-it, then that's risky to say the least.

Though I am sympathetic to the claim that Biden is so angry over how he was shoved aside, that he is enjoying Harris' defeat and he's doing all he can to spite the party after how they treated him.

Expand full comment
Mark's avatar

Is it either/or? A POTUS is expected (to appear) to be up to it, whatever comes his way, thus Biden was obviously unable to win again after that debate put an end to appearances. But in real life, no human person is really up to it. In most cases, it is playing along with some decisions here and there. As long as Biden (or anyone) is still able to listen to his advisors during his few better hours a day and give the nod to "we do what we are expected to do, I propose to adjust by a degree or two more of x/y, if that's ok, Mr President", he is 'capable enough' to end his term.

Expand full comment
ProfGerm's avatar

Hearkens back to Hillary's infamous "3AM phone call" ad, and in the liberal golden dream of The West Wing there was an arc where "who was running the country" while Bartlet was in surgery was a serious, bordering on treasonous, question (S2E18, "For ninety minutes that night, there was a coup d'état in this country" - Toby Ziegler).

These days, despite Russia being in an active war, there seems to be much less concern about emergencies and procedure. The man's hardly answered a press question in years! May our relatively peaceful moment hold, because he might be up for some routine before the sundowning sets in, but unfortunately, there's a lot of hours left after sundown.

Expand full comment
Andrew Wurzer's avatar

I cannot disagree strongly enough with this argument. Yes, the president will never be perfect. We don't really think they will be. But we expect at least the kind of sharpness a normal reasonably intelligent person is capable of -- i.e., alert and capable of making semi-decent decisions at any time they are required.

Also, perception means a whole hell of a lot when it comes to politics. Looking incapable can invite aggression.

His cabinet should have removed him.

Expand full comment
Jeffrey Soreff's avatar

Basically agreed. To whom has Biden delegated nuclear launch authority? _Is_ there anyone?

Expand full comment
Mark's avatar

Isn't that when their appointed vice-president comes in? - It is not that I disagree with you, Jeffrey and Andrew. I do wish for stricter health/IQ etc. testing before taking office (it would have to be before one can win the primary). In practical terms, things get very complicated when deciding at what point an acting president is no longer capable enough to stay president. Biden was and is a very experienced politician; I am still more worried about the world the day Trump takes over than about the few weeks till then. Politicians can do limited good, but unlimited bad. A Taoist approach is fine.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>Isn't that when their appointed vice-president comes in?

Good question! I know that I don't know if Biden has in fact delegated nuclear launch authority to Harris, at the present time (or whenever it became apparent to her and the rest of the POTUS staff that Biden had deteriorated to the point that was visible in his debate with Trump).

>In practical terms, things get very complicated when to decide at what point an acting president is no longer capable enough to stay president.

True! Regrettably, we may get to see this sometime in Trump's term...

I have a dream, that someday, the major parties will improve their process of selecting nominees so that the victory shouts of

"Marginally Lesser Evil! Marginally Lesser Evil! Marginally Lesser Evil!"

will no longer ring out across the land.

>Politicians can do limited good, but unlimited bad.

Pretty much agreed. Politicians can't destroy the universe, but glazing the northern hemisphere is a sufficiently unpleasantly close approximation for me.

Expand full comment
Joshua Greene's avatar

Overall, I agree that Biden should be / should have been removed as president. The US system appears to be unable to remove presidents who are unfit for office. AFAIK, even the congressional Republican efforts to impeach Biden don't focus on his fitness, but instead are directed toward the Hunter Biden mess. Which, for me, is potentially sufficient grounds for disqualification, but (1) it's striking that incompetence isn't the primary claim, and (2) my standards for who should be POTUS are so much higher than the US electorate's that my personal opinion is hardly relevant.

>>he is capable but one bad performance threw the Democratic Party into a panic

According to polling (ha ha, I know), Biden was already on track to lose the election badly before the debate and the debate performance sealed the deal. I don't think the Democratic Party calculation was whether Biden was capable of running a campaign, but whether he was capable of winning.

Expand full comment
Andrew Wurzer's avatar

For whatever it's worth, the 25th Amendment to our constitution provides specifically for scenarios where the president is not fit. It's not a feature of the laws that we cannot do it; it's a feature of the politicians.

Expand full comment
Joshua Greene's avatar

That's true. I was referring to the empirical inability to remove presidents, even though constitutional mechanisms exist to do it.

Expand full comment
complexmeme's avatar

It's really only designed well enough to work in situations where the President is _incapacitated_; it would be a complete mess if there were disagreement between VP+Cabinet and the President about whether the President was capable of doing the job.

Expand full comment
Andrew Wurzer's avatar

If there's disagreement among officials, then it SHOULD be very hard to do. The idea is that if everyone agrees he's unfit, it should be easy - and if everyone agrees, it is not hard. It would be bad if it were an easy way to remove the president in general.

Expand full comment
Joshua Greene's avatar

You are probably right in practice, but at least the process is clearly defined to resolve the argument. That contrasts, for example, with elements that make section 3 of the 14th amendment sufficiently vague as to be practically null (this is a positive statement about the state of the world, not a normative statement about whether it should be like this.)

Expand full comment
John Schilling's avatar

We have the 25th Amendment, yes, but for about a hundred years we have also had a de facto procedure where the Chief of Staff and the First Lady get together and gather a committee to decide what the Executive Branch is going to do, and pretend that it's still the President who is making those decisions. This has worked tolerably well so long as the President isn't so completely incapacitated that he can't be propped up in front of a teleprompter every once in a while.

It may of course fail in the future, possibly even in the next two months. But, to the people making the decisions, it's less scary than the 25th Amendment.

Expand full comment
Sui Juris's avatar

There’s a real banter outcome within reach where the Republicans impeach Biden between now and inauguration day, and get enough Democratic votes to remove him so as to make Kamala Harris the first woman president.

Expand full comment
Joshua Greene's avatar

Is there enough time for them to run through the process? I'm guessing no, but don't know the details.

For those who need a definition,

Banter Outcome: The funniest outcome, or the outcome that provides the most irony, amusement, etc (Urban Dictionary)

Expand full comment
Kenneth Almquist's avatar

On March 11, 2020, Trump addressed the nation from the Oval Office and claimed:

> To keep new cases from entering our shores, we will be suspending all travel from Europe to the United States for the next 30 days. The new rules will go into effect Friday at midnight.

This caused real disruption in the lives of Americans who were in Europe at the time, as they struggled to get back to the United States before the Friday deadline when travel would supposedly be suspended. Unless you think Trump was intentionally trying to make the pandemic worse, which gets into cartoon villain territory, he was confused about his own policy on the most important challenge facing the nation at the time.

Given that Trump didn’t know that his Administration had decided not to suspend all travel from Europe to the United States, it seems reasonable to suppose that Trump had no meaningful input into the decision not to suspend all travel. I don’t think the evidence you present concerning Biden is on the same level.

https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-address-nation/

Expand full comment
Vitor's avatar

There's no law that says AI *has* to be dangerous at some technological level. Maybe it is not dangerous at any level. Maybe the "level" actually consists of multiple dimensions (e.g. compute and training data), and we're advancing in one but not the other. I'm highly uncertain about all of this. Where do you get your certainty from?

Expand full comment
Hilarius Bookbinder's avatar

The longer an extremist politician is in power, the more emboldened he becomes, just like a successful petty thief who keeps upping her game until she is the local crime boss. Suppose that each year the extremist is in power, he gets 2% closer to crossing the line into full dictatorship. Not an implausible supposition: the extremist gets more of his cronies into government positions, malignly influences pre-existing institutions, etc. Viewed this way, the problem is just another variant of the Castro or AI examples.

Expand full comment
Aleks's avatar

Scott, I love you, but I think this is wrong and I would challenge this:

> Eventually at some level of provocation, Putin has to respond, and the chance gets higher the more serious the provocations get.

Two claims:

1. Throughout the war, Putin never *responds* – he's always the one to escalate first. Example: North Korea involvement. And when his escalation is met, he doesn't escalate further *as a response* – usually the opposite happens. Example: Kursk incursion.

2. Chance of escalation doesn't depend on what the world outside does or doesn't do – only depends on Putin's direct political goals (which also don't include nuclear war).

Putin only pretends to play some kind of poker with the US; his real game is how much money is in his bank and how low the risk of his own death is. He doesn't care about "retaliating" as long as these goals are met. His worldview is highly malleable. If anything, he's strengthened by "enablers" that play along with his game. Example: during Wagner's coup, Putin laid low and wasn't even close to the kind of person he pretends to be right now.

At the very least, the quote from the article is a wrong framing that perceives Putin as passive background, something like a tornado or a tsunami, versus being an active agent.

Source: I lived in Russia for 10 years and there's some stuff that democratic-born people can't intuitively grasp about dictators. Putin's mentality is much closer to that of a street gangster, a stooge, than to that of an Italian mafioso.

Edit:

Inspiration: Mostly this analysis by an opposition Russian journalist https://www.youtube.com/watch?v=TwZgC2udXnA

I also realized I didn't give alternative claim, which is unfair. This is an alternative model I'd propose:

> In case Western help actually directly threatens Putin's life or political stability, he goes 100% silent

> In case Western help only damages the Russian army or Russian people, the saber-rattling strategy is intentional – it works great for the internal and external audience, capturing a few points to help his army. No action is needed, because it already works

Expand full comment
demost_'s avatar

Very interesting analysis! I recently read a somewhat related analysis about China: that China uses a strategy of slow and gradual escalation, always testing how far they can go. But when they meet resistance, they stop escalating almost immediately.

The claim was by Brahma Chellaney, an Indian geostrategy expert.

He gave two examples of Chinese retraction:

- In 2020 China secretly advanced their positions in the Himalayas during winter (while in former years both sides would retreat from their highest-altitude positions in winter). India reacted by confronting them, leading to dozens of casualties on both sides. Since then, China has not escalated, and recently both sides have signed a treaty to de-escalate the situation.

- In the South China Sea, China built up small islands to substantiate their claims on disputed regions. First with small ships, and when the US didn't react, the ships and islands became larger. In a different region, at Second Thomas Shoal, the Philippines opposed the island-building. China did not pursue this further, and has now signed a treaty with the Philippines to de-escalate the situation at this reef.

His prediction was that China will likely use similar strategies against Taiwan: not an all-in assault, but many small escalating steps (they are already doing plenty of those). Eventually cutting internet cables and blocking travel routes. Then they see whether and how the US reacts to that, and continue escalating - or not.

Expand full comment
thefance's avatar

I don't follow the Ukrainian War closely, but I vaguely remember takes about Putin wanting to restore the glory of Imperial Russia, or something like that. How much weight do you put on claims about nationalistic motives like this, as a former resident?

Expand full comment
BlaMario's avatar

> Example: North Korea involvement. And when his escalation is met, he doesn't escalate further *as a response* – usually the opposite happens. Example: Kursk incursion

Didn't the Kursk incursion happen *before* the North Korean involvement? How can it then be an example of Putin's escalation being met?

There was a recent article in FT claiming that Russia refrained from targeting Ukraine's electric network last year because they had a mutual agreement. Then Ukraine escalated and Russia counter-escalated.

Expand full comment
Dan L's avatar

Enough people have made the generalized argument that the one I'd like to make wouldn't stick out, so I'm going to instead engage with a particular part to an annoying degree of specificity.

> I don’t actually know anything about Ukraine, but a warning about HIMARS causing WWIII seems less like “this will definitely be what does it” and more like “there’s a 2% chance this is the straw that breaks the camel’s back”. Suppose we have two theories, Escalatory-Putin and Non-Escalatory-Putin. EP says that for each new weapon we give, there’s a 2% chance Putin launches a tactical nuke. NEP says there’s a 0% chance. If we start out with even odds on both theories, after three new weapons with no nukes, our odds should only go down to 48.5% - 51.5%.
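(Spelling out the arithmetic in that quoted model, as a minimal sketch - the 50/50 prior, the 2%-per-weapon figure, and the three deliveries are all assumptions taken from the quote, not from anything real:)

```python
# Bayesian update over the two toy theories from the quoted passage.
# Assumed inputs (from the quote, not real data): a 50/50 prior,
# a 2% nuke risk per new weapon system under "Escalatory Putin" (EP),
# and a 0% risk under "Non-Escalatory Putin" (NEP), with three quiet deliveries.
prior_ep, prior_nep = 0.5, 0.5
p_quiet_given_ep = (1 - 0.02) ** 3   # ~0.941
p_quiet_given_nep = 1.0

posterior_ep = (prior_ep * p_quiet_given_ep) / (
    prior_ep * p_quiet_given_ep + prior_nep * p_quiet_given_nep
)
print(f"P(EP | three quiet deliveries) = {posterior_ep:.3f}")  # ~0.485, i.e. 48.5%
```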

This is a bad model. Either you're tracking individual weapons systems and your odds should be approaching epsilon, or you're allowing delivery of W88 thermonuclear warheads to only count as a 2% incremental risk. Time to switch models.

"Oh, I meant a 2% risk of every marginal increase in effectiveness." Can you operationalize that in a way that doesn't immediately twist into a pretzel upon contact with a DoD specs sheet? How many different models of the M1 Abrams are there *really*?

IMO, the focus on weapon systems is entirely a distraction - the relevant underlying variable is the progression of the war and not the tools used in the process. (There was a real threat of nuclear escalation Oct. 2022, and that had less to do with HIMARS and more to do with the collapse of the Kharkiv front.) This resists easy quantification but too bad, that's a streetlight fallacy problem. It also makes apologia *far* more obvious, since it's closer to arguing desired ends than desired methods.

"There is a 2% chance Escalatory-Putin will start WWIII every time he is refused an annexation" is a plausible theory, but it implies very different things about the situation and the arguer.

Expand full comment
anomie's avatar

That is a good point: as long as the extra armaments are only serving to keep the war at a stable equilibrium, there shouldn't be an increased chance of nuclear retaliation... in theory, at least.

Expand full comment
John Schilling's avatar

This. There is approximately zero chance of Putin noticing a Ukrainian armored division about to thunder-run the streets of Moscow while the Russian army flees for the Urals, and says "Oh but they're only using Ukrainian tanks under the cover of Ukrainian-made drones, so I can't *nuke* them". There is also approximately zero chance of the Russian army marching into Kyiv and saying "we had to do this under the fire of way too many ATACMs; I guess we have to nuke Warsaw and Helsinki now".

The only questions that matter are, what level of defeat do we think can be safely imposed on Russia, and what level of imminent defeat will result in Putin going nuclear. Specific weapons, targets, or tactics are irrelevant unless they change the outcome of the war, and whatever outcome we're going for has the same finite risk of escalation no matter what weapons etc are used to reach it.

Expand full comment
MVDZ's avatar

I feel like the analogies are based on a category error. Examples 1 and 2 are pretty much absolute, as in everyone dies and all substances become toxic at a certain level. The Biden example is just very likely based on what we know about old people and Biden's behavior, but there are also plenty of old people who die with their wits intact.

The Putin example is much closer to where you want to go with AI, in that there's a consequence we can't know beforehand but we can expect to exist. But there's a defined space in which those unknown consequences live. We know he'll nuke back if the US nukes Moscow: okay, there's a known consequence there. We also know he doesn't drastically escalate if we give ATACMs with limited range. Okay, a known un-consequence there.

What makes this complicated is that 'escalation' is a fluid concept. For some people, looking funny in their direction is enough cause for an ass-whooping. Other people don't care. We don't know exactly what Putin considers enough escalation, but we can infer somewhat based on his past behavior where the line will be, and take risks accordingly.

With AI there are multiple fluid concepts involved. What is intelligence? What is consciousness? What is autonomy?

We don't even know if large data-fed models will eventually lead to something actually intelligent, rather than what current LLMs are, i.e. a pretty narrow tool for pattern recognition. We don't even know if computing as we know it can model what we perceive as intelligence. And we certainly don't know that it'll lead to something which then has the inclination and ability to act autonomously.

Then there are so many secondary issues, like: can we solve the planet's energy constraints, and wean ourselves off fossil fuels, in time to get enough energy to a superintelligence? Who knows. I don't have to try and make an exhaustive list, you get the idea.

So we can't assume AI is getting powerful. But for your argument to work I don't think you have to make that assumption.

Say you're driving a car with two passengers. One passenger says 'you better slow down I think there's a brick wall across that hill'. The other says 'brick wall? Nahhhhh that's not true my dude, step on it!'. Since you can *imagine* the first person to be right, even without knowing anything about the likelihood of it being true, it's probably prudent to slow down. The fact that it's possible at all and the consequences of it happening are dire, means you have to treat it as reality and act accordingly.

The same goes for most if not all existential risks. We can imagine a super plague happening. We can imagine a large asteroid wiping us out. And in the past we were able to imagine man-made climate change happening (now we know it does). All those things have some level of real probability in a way that 'an interdimensional gateway opens up and sentient dragons attack us' doesn't.

Of course, having a sense of certainty about the probability of things helps to decide where to invest your caution. But that only really becomes relevant once you know what you're talking about, like 'how much is too much for Putin', which is a fluid concept with an unknown outcome but a known range of tradeoffs.

AI might be a complete unknown, but we can reasonably imagine it to be horrible *if* it ever happens. Therefore, caution is warranted.

Expand full comment
10240's avatar

Related: Against taking the wrong moral from the tale of the boy who cried wolf: The boy eventually did get eaten by a wolf! If a boy falsely cries wolf many times, that's dangerous for him, and it justifies not blindly believing him the next time, but it doesn't justify blindly assuming he's wrong. And while in that story only the boy got eaten, so one could say it's his problem, in many cases where "crying wolf" is brought up as an analogue, if there's actually a wolf around, it will eat us all if we ignore it, not just the people who have falsely cried wolf before.

----

Regarding the specific examples:

Putin has to respond to some level of escalation, but that doesn't mean he'd respond with strikes against suppliers of Ukraine if those don't attack Russia directly (and arguably it should be up to Ukraine's leaders to decide how to manage the risk of retaliation against Ukraine), nor that he'd respond with nuclear strikes to non-nuclear attacks, much less at a point when Russia still holds more territory in Ukraine than the other way around (and if that stopped being true, the West would probably stop supplying further attacks by Ukraine on the basis of being no longer needed). The Schelling point the West should defend is that supplying Ukraine with any weapon doesn't count as joining the war, and Russia doesn't dare respond with a direct attack against us because we'd attack it back directly. Holding back out of a fear of escalation weakens that Schelling point, and risks acknowledging a different implicit Schelling point where we have less freedom of action.

Someone doesn't have to get demented at some point before dying, or before serving out their term limits.

It's also unclear that every drug has to become dangerous before you just can't fit more in your stomach, or that repeatedly doubling the dose isn't a safe way to reach a point where it's either effective, or you notice a side-effect that's bad enough for you to stop experimenting on yourself, but not enough to cause permanent harm.

Conversely, if there were a 5% chance of Republicans establishing a dictatorship per term, that would be, for many people, a strong reason not to vote for them, and the fact that it hasn't happened so far doesn't disprove it. (Of course we'd also have to estimate the risk of Democrats establishing a dictatorship, or a de facto dictatorship if people never voted Republican out of concern about them establishing a dictatorship, no matter how poorly Democrats govern.)

----

So IMO "does it have to happen at some point" isn't a good determinant of whether to dismiss arguments for caution. Something can be a significant risk even if it doesn't have to eventually happen; and conversely, even if something has to happen at some point, there may be a strong reason to think there is a concrete point up to which there's no need to worry about it, and we're nowhere near that point.

Expand full comment
Mark's avatar

Ukraine and Putin, my topik(sic): The escalation risk, as in "escalating by dropping nuclear bombs on Western capitals" (Russian state TV threatens London with it regularly), is in the same category as the R/D party establishing a dictatorship: it won't happen by chance! Kids, did you all forget the Cold War? Ok, let a boomer explain: a state is considered justified in using nuclear weapons under very few conditions beyond having been nuked itself. In Putin's war against Ukraine, that would mean being a) attacked (the small area in Kursk could count, but see: no nukes - Putin even downplayed the incursion to signal he won't go nuclear about it) AND b) threatened with defeat including loss of territory/regime change - and NONE of that is on the table. Thus even NATO joining Ukraine at this point and creating a no-fly zone with F-35s has about as much likelihood of causing Armageddon as Trump establishing a dictatorship in the US. Maybe not zero, but close, and NOT something that increases each day (as does the likelihood of Jimmy Carter's demise), no matter how much or how little we step up our help for Ukraine.

See it from Putin's side:

The worst case is: Russia goes back to her official borders (as before 2014). The most likely case: a stop of the invasion at mostly the current lines. Neither of those outcomes justifies the use of nuclear weapons. In case of a tactical nuke, the US/NATO seems to have promised massive conventional intervention. In case of a nuke on Kyiv: fly, don't drive, to the nearest desert island.

Disclosure: I live in Germany and have friends in the occupied territory; a German friend of mine had to go to therapy when the big war started, as she was massively frightened of Putin nuking us.

Expand full comment
Kimmo Merikivi's avatar

I think your reasoning and conclusions are generally correct, but while no explicit argument was made to the contrary, I think it needs to be stressed that the relevant actor is Putin, and not Russian Federation. I believe Putin doesn't particularly care if a hundred million Russians live in misery, if hundreds of thousands of Russians die in war, if Russia loses its perceived great power status, if Russian territory gets invaded and tens of thousands of people are displaced semi-permanently, etc. The only relevant term in his utility function is the welfare of numero uno, and that is indirectly influenced by the fate of Russian Federation (e.g. if things start going really bad and even wartime opportunities to steal don't compensate for other forms of economic hardship, the oligarchs have an incentive to stage a palace coup), but it's wrong to think of anthropomorphized Russia as the actor here looking after its territorial integrity or what not. If it was Russia looking after its rational self-interest, it wouldn't have invaded in the first place, or if they had invaded then immediately said "sorry about that, we'll return to 1992 borders immediately": industrial war is an inherently negative-sum game that you cannot materially win, as the things of value (industry and infrastructure) are destroyed in the process of conquest. Putin, in contrast, can justify the continuation of his kleptocratic rule by an outside enemy: he NEEDS war for its own sake, although he does prefer to be remembered as a conquering Tsar like Peter I (a Tsar who absolutely didn't work for the benefit of the Russian people one may note).

Expand full comment
Mark's avatar

I wrote "Russia" only once, as a shorter stand-in for "the government of Russia, ie today: Putin". I agree with you, that Putin does not really care that much about Russia/the population - less Russians are less people to share the export-revenue with. Still, he could justify the use of a few nukes if Ukraine went to occupy all south of Tula for good - and NATO would not feel like retaliating then.

Putin "needing" this war is something I strongly disagree with. Russians after a century of KGB and 25 years rule of a KGBshnik, won't revolt. This is not France 1869 or Argentina 1982.

Expand full comment
Kimmo Merikivi's avatar

I agree a popular revolt is exceedingly unlikely (at least unless the population is pushed to unrealistic extremes like WW2 levels of casualties, which is itself exceedingly unlikely), but even dictators need some supporters to delegate power to, and if they collectively decide that having Putin at the helm no longer serves their interests, it's good riddance Putin. Or, in other words, a palace coup. That's not at all unthinkable, and I would assign a decent bit of probability mass (some 10%) to a successful palace coup happening and in the aftermath determining the outcome of the war. Indeed, we already kind of had one attempt with Prigozhin. However, the actual trigger is unlikely to be visible from the outside.

Expand full comment
Nancy Lebovitz's avatar

If AI is any good at helping people with complicated projects, it might already be used by terrorists.

Admittedly, this is an ad, but Notion AI sounds pretty good as a memory aid for your own work.

https://youtu.be/fOm6dAaKjNE?t=466

https://www.notion.com/product/ai?utm_source=yt&utm_medium=influencer&utm_campaign=PhilEdwards

What do you make of fears about RNA vaccines? Sooner or later there may well be a new drug which works out badly, especially since there's no way to test for all the possible drug interactions and I don't think they try very hard to deal with that unmanageable problem.

Expand full comment
TGGP's avatar

There have already BEEN drugs that worked out badly! https://westhunt.wordpress.com/2016/08/11/vioxx/

Expand full comment
YesNoMaybe's avatar

I started using Notion around the time they introduced their AI assistant. I asked it about hotkeys for the Notion desktop client, as I like using hotkeys but didn't feel like searching their docs. It was unable to answer the question correctly within 3 tries, so I had to look it up. What a joke.

I occasionally get prompted about some of its features, like how it can autofill columns in db tables for you. I tried it once for shits and giggles. Went exactly like you'd expect.

Expand full comment
Nancy Lebovitz's avatar

Then maybe we should hope that terrorists do try to use it.

Expand full comment
YesNoMaybe's avatar

> 3 suspected terrorists found dead following an explosion in an apartment in<city>. Police found <bomb-making equipment> and a damaged laptop at the scene. Burned into the laptop screen were the Notion logo and the words "... light the fuse and wait for 1 minute"

Expand full comment
The Birds 'n' the Bayes's avatar

One thing that really matters and often gets forgotten is the *shape* of the probability distribution.

Take a simple example where you're waiting at a bus stop with no timetable or departure board or anything, having just missed the last bus and for whatever reason your expectation of how long you'll wait for a bus is 10m, with some uncertainty around that. Essentially there's a distribution of intervals between buses and you're drawing one from it. Now you wait for 8m and no bus has turned up yet. How long do you think you have left to wait? It depends on the *shape* of the distribution of intervals (or your probability distribution for your one draw), not just its width.

Two things have happened: For one, time has passed, so whenever the bus is eventually going to get here, it's closer now than it was 8m ago; For two, you've learned the bus definitely wasn't going to turn up in the first 8m. Assuming your probability distribution was wide enough that any of it at all was below 8m, your learning it didn't arrive in the first 8m shifts your expectation back so that you now expect it some amount of time later than 10m after you first arrived. But also, time has passed. Which effect dominates?

If your probability distribution is normal, then the passage of time effect dominates, and your expected time remaining to wait is always coming down, as your expected point of bus arrival recedes slower than you approach it. If your probability distribution is log normal, then the reverse is true, and your expected arrival time recedes faster than you move through time towards it. The longer you wait, the longer you expect to have to keep waiting.

There are special cases: Trivially, if you have a delta distribution (you know exactly when the bus will arrive) then there is no learning effect, and for every minute that passes, you expect to wait one minute less. If you have a uniform distribution, then for every minute that passes, the expected point of arrival recedes two minutes, so you always expect to wait as long as you already have waited (this is a bit hard to conceptualise for this example because no actual bus service has busses arrive at a uniform distribution of intervals).

The most annoying one is if the distribution of intervals is exponential (or the distribution of arrivals is Poisson): In this case the two effects precisely balance, so however long you expected to wait when you arrived, you always expect to wait exactly this long beyond whatever time you've waited so far. People at the bus stop start mocking you, saying the bus is "always ten minutes away, just like it was ten minutes ago" as if this is an obvious knock-down argument that you must be talking nonsense, and no matter how much you explain Poisson distributions to them, they'll just laugh at you until they suddenly get run over by the bus, which always arrives ten minutes earlier than you just said it would so they blame you for that as well.
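(A minimal Monte Carlo sketch of the effect described above; the three distributions and the roughly 10-minute means are illustrative assumptions, not estimates from any real bus data:)

```python
# Expected remaining wait, given you've already waited t minutes without a bus,
# under three assumed interval distributions that each have a mean of about 10.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
waits = {
    "normal":      np.clip(rng.normal(10, 3, n), 0, None),
    "exponential": rng.exponential(10, n),
    "lognormal":   rng.lognormal(np.log(10) - 1.25**2 / 2, 1.25, n),  # mean ~10
}

for name, x in waits.items():
    for t in (0, 4, 8, 12):
        remaining = x[x > t].mean() - t
        print(f"{name:11s} after {t:2d} min: expect about {remaining:4.1f} more")
```

The normal case shrinks towards zero, the exponential case stays pinned near ten minutes, and the lognormal case keeps growing - which is exactly the contrast described above.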

These are idealised distributions in which the bus could reasonably turn up at exactly the same time as the previous bus and could take an arbitrarily long amount of time to arrive. Real situations have some strange limit effects that mean they work a little differently. But they do provide important insight. The distribution of human lifetimes is roughly normal (which is why, even though the expected age at death of a 70-year-old is higher than that of a 50-year-old, because we know the 70-year-old didn't die between 50 and 70, it's not so much higher that their expected time left to live goes up across that period). Therefore, as one keeps "getting lucky", one still wants to be more and more worried. The lifetime distribution of fluorescent light bulbs is exponential, so there's never a sensible time to pre-emptively change them; they're always good as new until they suddenly fail. The distribution of how long a sticky hook thing stays on a wall is surely log normal; half the time they fall off within seconds, but if they hold for a few minutes you're probably golden for the foreseeable.

So, what is the shape of the distribution for the escalation point at which Putin goes nuclear? The answer to that question can tell you whether the lack of nuclear retaliation so far should make us more or less confident that further escalation will also go well. Those making the argument "well it's gone well so far so hang the naysayers and let's speed up the escalation" are implicitly assuming log normal or some similar distribution; those saying "every time we push it, the ice is thinner and thinner, for goodness' sake stop before your luck runs out" are implicitly assuming normal or something similar. I've never seen anyone actually trying to work it out! I don't have an intuitive guess!

On the other hand, I have a strong intuition that AGI timelines are log normal or close to it, so with every passing day, and particularly every new more powerful model that doesn't cause a big issue, I do actually feel a bit safer. I'm eliding here a bit between the timeline in years vs the point in training FLOPs at which we have a serious problem, and given that training FLOPs advance exponentially in time I probably need to think harder about that, but that's not for this comment.

Expand full comment
Voloplasy Shershevnichny's avatar

This is an excellent comment.

Expand full comment
MicaiahC's avatar

+2, it's so good that I subscribed to his newsletter.

Expand full comment
Kalimac's avatar

I've actually met a Trump supporter who claimed that it was certain that Trump wouldn't do in office any of the dictator-like things he's been saying he would do, and that I should know this is the case.

The supporter's reason was that Trump didn't do any of those things in his first term, after saying he would do them, so he won't do them this time either.

But even if the premise were true (which it isn't), the consequent does not follow. (Orban's first term in office was hardly the dictatorship he's running now.) What stopped Trump before was the adults in the room, and Trump has now learned that if he hires adults, i.e. qualified competents, they'll interfere with his plans. And so far, his nominations have demonstrated that he's learned that lesson and is going in quite a different direction.

Expand full comment
Thegnskald's avatar

A reminder that the media, and everybody around Biden, was lying to you about Biden right up until the moment the lie became too obvious to the public to sustain, and then people who had enough experience around Biden to know the truth feigned surprise about it.

Remember that article you wrote about things the media would and wouldn't do? About how an advanced understanding of the media lets you infer things the media doesn't actually tell you? The media did things you didn't think they would do. Pay attention to that, it's important.

Being blunt to the point of rudeness, I think you're trying to wrap everything in a logical bundle that makes it so you don't have to actually update on what happened; it wasn't about the people you trusted lying to you, and it certainly couldn't be that the Republicans were right - it was about Republicans crying wolf until they happened to be right because they were making a claim that would eventually be right anyways.

Expand full comment
thefance's avatar

Scott, imma have to agree with Thegnskald. I can understand the mistake of denying Biden's dementia, and it's admirable to admit fault. But "GOP cried wolf" is pure, colombian cope.

Expand full comment
Timothy M.'s avatar

I have a couple major issues with this claim.

First, anything that acts like "the media" is some planned, monolithic organ is clearly false. Even when Biden had a lock on the Democratic party and it was still political heresy to question his reelection campaign, there were a ton of articles cataloging his every verbal misstep and name mix-up, with every one noting it "amplified concerns about his age" or similar I'm-telling-you-what-to-think-but-in-the-third-person language.

Second, Presidents have tons of staff and are carefully stage-managed (when they allow themselves to be, anyway). We might not have another FDR-with-polio scenario these days, but we can totally have a Reagan-with-maybe-Alzheimer's without anybody getting a conclusive sense of it. You're making a very strong claim that this had already failed so completely that the media DEFINITELY KNEW that Biden had lost it and willfully covered this up (WHILE running a bunch of articles about him mixing up names of world leaders). This is a much less plausible explanation than "his staff did a good job selectively putting him in front of people on good days".

Expand full comment
Thegnskald's avatar

Except there were frequent cases of his staff -not- picking good days. They -didn't- do a good job hiding it; the reporting you consumed did.

You're still acting like this was a surprise nobody saw coming, when half the country saw it coming and had been trying to get you to pay attention for literal years.

Expand full comment
Shankar Sivarajan's avatar

The "cheapfake" tactic they tried was pretty funny though.

Expand full comment
Timothy M.'s avatar

The same reporting I mentioned that constantly pointed out his verbal missteps? I'm not saying I'm surprised Biden is aging out/has aged out of competence; I'm saying the media didn't DEFINITELY know this and willfully lie about it. They covered Biden's behavior and mistakes and at least made it clear he wasn't performing as well as he used to. My take on this election was that we basically had two candidates who were obviously so far past their prime as to be embarrassing and depressing for everybody. (I mean, plus the Senate.)

By the logic you're describing, do you think half of the media is DEFINITELY LYING about Trump? Half the country thinks he's terrible and incoherent and so forth as well, and has been warning about him for years and increasingly pointing to his lengthy rambling and non sequiturs and so forth.

Expand full comment
Thegnskald's avatar

First, what's this partisan nonsense in the second paragraph? You think I'm right-wing or something?

I bring that up first because at this point, yeah, sure, I'm right-wing, and I became right-wing around the time when the left decided that publicly noticing that Biden didn't have his full mental faculties was a right-wing position, because at this point the left is just the people who haven't been kicked out yet.

So, no. I'm not buying that entire first paragraph. The coverage wasn't a factual "Biden made a mistake today and called somebody by the wrong name"; it was, and only several days after the right had started making it too loud to ignore, "Well, that's perfectly normal and fine, we all do that sometimes". Whatever the behavior, there was a rationalization for why it was perfectly fine and nothing to be concerned about, and for why they hadn't even bothered reporting on it initially but the right-wing misinformation engine just wouldn't let this one go, so as responsible political commentators they had to correct the record.

Expand full comment
Shankar Sivarajan's avatar

> You think I'm right-wing or something?

> yeah, sure, I'm right-wing

You're literally doing the meme!

https://www.reddit.com/r/MemeTemplatesOfficial/comments/qtgdnd/will_smith_but_not_because_im_black/

Expand full comment
Thegnskald's avatar

I'm not, because I'm not actually right-wing, that was a rhetorical flourish intended to play a long period of pretty negative social interactions for laughs.

Expand full comment
Timothy M.'s avatar

When you say things like "half the country saw it coming and had been trying to get you to pay attention for literal years" it does tend to make me assume you're writing from a right-wing perspective, but I also brought it up as a challenge to your original claim that the news media was deliberately lying: if the standard is "gave wrong information about something that half the country believed" then I think a lot of reporting would be subsequently called "lies" if it turns out it's inaccurate, which strikes me as too strong.

Other than that, I would say the rest of this is pretty vibes-y to me. I certainly didn't feel a sense of reassurance from reading NYT/WaPo/etc. coverage that Biden was up to snuff. I would agree there were some obvious hacks who insisted that he DEFINITELY was great, but it didn't dominate the conversation in my experience. But your perspective may differ depending on what you saw, so I can't say that you're wrong. Or lying.

Expand full comment
Thegnskald's avatar

So the debate didn't come as a terrible surprise?

Expand full comment
ProfGerm's avatar

Nothing about it has to be planned, and while media isn't monolithic it is politically undiverse. "The media" is composed of people that work in a particular field (which suggests similar incentives across the board), and something like 90% are registered Democrats.

The flight of starlings is unplanned, and yet they move together.

There's a lot of selection effects to my next statement (based on my own media preferences/bubble), but I don't think it's entirely coincidental that the voices I remember pointing out his weaknesses were more new media/Substacky, like Klein, Yglesias, and Silver. Not sure if this is a difference in incentives between new/old media or something about differences in their broader networks (like editorial influences, backroom dealing regarding access, who knows).

Expand full comment
Thegnskald's avatar

Note that all the new voices suggesting concern were relentlessly attacked as being right-wing propagandists by - probably not all that many people, but enough to make them defensive and comment about it.

This is an ecosystem in which even the most tactfully truthful reporting (Silver suggesting that young people thought Biden was too old to be president, and that this is an important electoral group to have on your side, in articles full of apologies for being the bearer of bad news) resulted in the reporter regretting having reported on it.

Expand full comment
AngolaMaldives's avatar

I think that article about the ways in which the media lies is still more or less accurate, with the significant caveat that it applies only to factual 'news pieces' and not 'opinion pieces', which have always operated on the basis that the author can make contentious, not-strictly-disprovable assumptions. I think it's fair to say that some partisan mainstream media (like the NYT, which for some reason my supermarket in the UK gets) has been blurring the line between these types more and more lately, and in doing so has edged closer to telling direct lies, but it still currently stops short. Before the debate, denying Biden was demented wasn't strictly a lie, just motivated reasoning that partisan media was always going to follow.

Republican-aligned media operates on similar incentives, so even if they were right all along, it doesn't mean much, because in the parallel universe where Biden really was just stuttering or whatever (which was plausible earlier on, considering how examples can be cherrypicked), they would have said all the same stuff, so it has little signalling value. The lesson to take home is "Don't trust partisan media on matters of factual contention", not "Most Republican commentators have good epistemic norms", as evidence abounds that the latter is false.

Expand full comment
Thegnskald's avatar

If you know for a fact that Biden is not okay, "Biden is okay" is a lie even if nobody can prove you wrong right now.

Expand full comment
John Schilling's avatar

Right, but the media mostly didn't (outside of explicitly opinion pieces) say "Biden is okay". They mostly said, "Sources X, Y, and Z said that Biden is okay", and trusted the audience to round that to just the last three words. Which, by the pedantically literal standards the media uses to justify their self-image, is not a "lie" even if they know that Biden is not okay and that the sources are wrong. The sources did in fact say that, and the media is just reporting the facts. Well, some of the facts.

To make effective use of bounded distrust of the media, you need to know not only what the media is willing to lie about (for various definitions of lying), but also what the media's *sources* are willing to lie about, and when the media will turn a blind eye to lying sources.

Expand full comment
Paul Zrimsek's avatar

Or as the case may be, "Here are some doctors who say pay no attention to that guy from the Justice Department who interviewed Biden for hours and said he is not okay."

Expand full comment
John Schilling's avatar

Right. There's always someone whose statements will support your preferred narrative, and there's always someone whose statements will contradict your preferred narrative, and you can't quote *everyone*, so...

Expand full comment
Keller Scholl's avatar

The particular issue here with Russia is that states say things, and this is ordinarily believed. Russia said that various actions would result in certain escalations, and then did not follow through. Russian statements are now much less credible. It's also pretty unreasonable to expect that hitting military bases in Russia convinces Russia to escalate to nuclear war, since that has a substantial chance of Moscow being levelled. Russia using nuclear weapons against Ukraine also has pretty serious costs to Russia. So since their statements are no longer credible, we look to their incentives, and their benefit to nuclear escalation does not seem particularly high for anything short of "land invasion while Putin is actively offering tolerated NATO membership and full return of 2015 borders".

Expand full comment
BlaMario's avatar

Russia gave NATO an ultimatum in December 2021, that unless they withdrew they'd use "military-technical measures". Did they?

Expand full comment
Keller Scholl's avatar

Well, at no point has Ukraine been part of NATO, and the recent expansion of NATO was in response to the Russian invasion in 2022. And I'm specifically talking about nuclear credibility, where they've repeatedly postured and then backed down.

Expand full comment
AdamB's avatar

I don't think the Castro/Biden arguments fit with the Putin/pill/AI arguments. If I am >99.9% certain that the pills are placebos made of salt and sugar, the LD50 is immense, in an entirely different regime, where the patient is practically unable to reach it (let's say they can take at most 1 50mg pill per second for 16 hours a day or something). So saying "everything is toxic eventually", while technically true, does not meaningfully apply.

For mortality/dementia there is a fairly tight distribution on what "eventually" means in practice and we all basically agree what it is. Not so for AI and Putin. Your opponents in this argument aren't making the Castro/Biden mistake, they just have a very different distribution of "eventually".

Edit: "The Birds 'n' the Bayes" made this point much better than me, 4 comments ago.

Expand full comment
TGGP's avatar

> Third, this is obviously what’s going on with AI right now.

One of these things is not like the others. We have lots of experience with people being mortal and vulnerable to dementia as they age. We have the phrase "the poison is in the dosage" because there are so many examples of that being the case (even with water, as you point out). We also know there are a couple instances in which nuclear weapons have been used in war, so we know why the USSR built its own nukes and Putin has them now. We don't have any examples of AI reaching whatever threshold you're worried about. It's not like regulating pollution we're already familiar with https://www.grumpy-economist.com/p/ai-society-and-democracy-just-relax It's thus not "obvious" and the priority should be trying to understand it rather than assuming we already do.

> AI still can’t hack important systems, or help terrorists commit attacks or anything like that

People already use computers (including with pre-written scripts) to hack systems, and I assume terrorists already use computers (such as smart phones) to help them commit attacks.

> So we’re arguing about when we reach that threshold.

No, we've already reached the threshold at which a tool can help bad actors, because tools are useful in general for their users.

> Eventually at some level of technological advance, AI has to be powerful, and the chance gets higher the further into the future you go.

It's already "powerful": it has some powers. This isn't a useful threshold at all.

Expand full comment
Malcolm Storey's avatar

Why would Putin escalate anyway? He can take over a significant chunk of Ukraine, make it an "independent" state (Novaya Ukraina) under the protection of a Russian "NATO", then make it attack NATO with Russian-supplied conventional missiles and say it's nothing to do with him. And rightly claim he's done nothing that the West hasn't done, beyond doing in offense what we've done in defense. And if he gets away with that, it'll be nukes next.

Expand full comment
John Schilling's avatar

The type of army that Russia has been able to build in parts of Russian-occupied Ukraine has proven itself quite ineffective in the current conflict. Giving them shinier weapons wouldn't change that; it would just waste those weapons. To actually conquer any seriously defended place, he's going to need a proper Russian army, and he knows it.

Which, to be clear, is an army of mostly-not-ethnic-Russians at least at the sharp edge. But Not-Russians who have been trained from birth to accept that Russia Is In Charge, and then deployed far enough from home that desertion isn't really practical.

Expand full comment
Malcolm Storey's avatar

I think he just wants to get his own back a bit. Like Trump he's playing ego games.

Expand full comment
John Schilling's avatar

Right, but "his own" is defined from the POV of a late cold war KGB officer - a de facto Empire stretching from Berlin to Vladivostok, under the rule of a Strong Man.

He might accept the unification of Germany putting Berlin out of his reach. But I doubt there will ever be a day when Vladimir Putin isn't looking *somewhere* outside the borders of Russia saying "That's supposed to be mine! Why isn't it mine?"

I do expect that, being a former KGB officer and not a former Red Army general, his plan A is usually going to be political subversion, not military conquest. See e.g. the recent election in Georgia, or the history of Ukraine from 1992-2014. But that could still result in nastiness like e.g. Viktor Orban refusing to concede a lost election and calling on the Russian army to prop up his regime, while the lawfully-elected Hungarian government invokes Article 5. So we really want the outcome of the Ukrainian War to make it very, very clear that Russia won't be able to get away with things like that.

Expand full comment
Malcolm Storey's avatar

Can't disagree with that!

Shame we didn't do that after his previous non-European foreign escapades.

Expand full comment
bloom_unfiltered's avatar

Has the West escalated the war in Ukraine even once? IMO if Russia fires missiles into Ukraine and Ukraine responds by firing missiles back into Russia, that isn't an escalation.

Expand full comment
Reece's avatar

The word does have a negative connotation, but I think all that is meant here is the literal meaning, just an increase in what the West is doing for Ukraine.

Expand full comment
Rothwed's avatar

So you don't think there is any difference between Ukraine attacking Russia and a neutral third party supplying Ukraine with the weapons used to attack Russia? Further, let's suppose we switch the outgroup to mix things up. During the Iraq war, Russia gives the Iraqis a bunch of missiles that are then used against the US and kill American soldiers. Would you still shrug and say that's not an escalation because the Iraqis were already fighting the US anyway?

Expand full comment
Theo's avatar

A couple of weeks ago Gary Marcus wrote this:

"You are insanely intellectually dishonest. I was here long before you, coining the phrase 'deep learning is hitting a wall' in a March 2022 Nautilus essay"

(https://www.threads.net/@garymarcus/post/DCUEfIApo32)

Gary demands credit for staking out the position "deep learning is hitting a wall" just before the release of ChatGPT.

Maybe Gary is right and deep learning will inevitably hit a wall. Or maybe Gary is a charlatan who accidentally exposed his whole ass for everyone to see.

Expand full comment
Ashley Yakeley's avatar

The assumption is that AI is only ever good at what it's trained on, no matter how much computing power it has. More computing power simply means becoming increasingly sophisticated at what it's trained to do. In the case of LLMs, that's emulating the human corpus of writing.

A 2050 LLM will be able to write like the most intelligent humans have written. This will include synthesizing information in the corpus that may reveal how to hack important systems or other vulnerabilities. But the same information can be used to improve security as well.

An LLM is never going to be able to take over the world, however, because it simply hasn't been trained to do that. At most it might come up with a plausible-seeming plan to take over the world based on the understanding of the world represented in the human corpus.

Expand full comment
moonshadow's avatar

> AI is only ever good at what it's trained on

This is true, but you have to be very careful, because it does not mean quite what an English speaker’s intuition might suggest it means.

AIs that literally only deal with the exact situations that occurred in their training are not very useful, and that is not what anyone builds.

A (current gen) AI is a statistical model that can perform interpolation and extrapolation. It is able to generate statistically likely responses for situations that never occurred in the training set, or in the general volume of concept space the training set encloses. You can see trivial examples of this e.g. with Go or Chess - we can build AI capable of responding to game positions it was never explicitly trained on; it is not even very difficult.

The real issue is that we can’t sensibly reason about what AI will do when placed in a situation distant from its training data. The doom scenario isn’t “Terminator”, it’s “Wizard’s apprentice”. The thing keeping AI from doing harm isn’t that it isn’t capable of doing harm, it’s that no sane person will wire a random number generator to anything capable of causing harm. Right? …right?

https://www.forbes.com/sites/stevebanker/2023/12/18/tesla-has-the-highest-accident-rate-of-any-auto-brand/

https://github.com/randombk/llm2sh

Expand full comment
EngineOfCreation's avatar

"It would be surprising if AI never became dangerous - if, in 2500 AD, AI still can’t hack important systems, or help terrorists commit attacks or anything like that."

Is that the new definition of dangerous AI? I thought it was about literal world domination or a paperclip maximizer, not slightly more powerful tools for script kiddies or a better customer management system?

Expand full comment
Roko Maria's avatar

I think “how many times can you put a Fascist in the oval office before they successfully start a dictatorship” is closer to the dementia and Putin questions than you realize here. Trump is at the very least eroding our norms against things like blatant corruption, government overreach, coup attempts, threats against political enemies, etc. I don’t know how many election cycles of these things being more and more normalized we can go through before we stop being a democracy, but I don’t think “it didn’t happen that time before” is a good argument for the reasons the initial cases suggested.

It also feels weirdly placed in your piece, like you have to pause the whole article to pander to Trump supporters.

Expand full comment
awenonian's avatar

It feels like you've written Against Ignoring Boys Who Cry Wolf, from both the perspective of the Boy, and the Village. (Both "I should've listened to warnings of Biden's dementia" and "Others should listen to AI warnings")

But the moral of that story isn't for the Village. Yes, the Village is wrong, because the last time the Boy Cries, there really is a Wolf. But the lesson is to not Cry frivolously, or people won't listen. This may not be Bayesian-ly correct, but it is, evidently, human nature.

As always, in communication and in general, it's easier to change your own actions than those of everyone else. If Crying Wolf loses you credence, perhaps it's worth avoiding more than you currently think prudent. Or perhaps you need a different tactic (e.g. instead of Crying Wolf, you suggest it would be prudent to build a fence that stops Wolves) (e.g. stress that you don't think 10^25 flops is specifically dangerous, but you need to start testing at some point, and too early is better than too late? I'm not sure what's best here, I'm not well experienced in policy).

(I don't feel confident my specific suggestions for communication are good. But I am confident in "it's easier to change your own actions than those of everyone else." So you may draw different corollaries.)

Expand full comment
Ryan L's avatar

I think you're ignoring that there is often more information available to help guide your decision-making. Yes, we know that there is some red line that would cause Putin to go nuclear, but we have other reasons beyond the specific past actions taken in this war, like MAD theory and NATO's superiority, that give us confidence that the red line is above what has already been done (and what is still being contemplated).

Similarly, we had more than just three data points to judge Biden's cognitive abilities (2020 debate, State of the Union, 2024 debate). There were many, many public appearances, and some good reporting, that gave us good reason to think that he was already starting to slip all the way back in 2020, and that it was just getting worse.

Is AI similar? I'm not enough of an expert to have a strong opinion, but I am confident that we have more evidence than "10^23 FLOPS was safe and 10^24 FLOPS was safe, but...". You've already assumed the conclusion by stating that you think there is definitely (or almost definitely) a point at which AI becomes dangerous.

Expand full comment
FluffyBuffalo's avatar

Can I throw in two more examples that have bothered me in recent years?

Overpopulation: "Yeah, the Club of Rome warned that we're headed for overpopulation and famine 50 years ago, and where are we now? At 8 billion, and we've figured out better and better ways to feed everyone! Clearly, we don't have to worry about overpopulation!" ...right, but there are only so many clever tricks to increase calories-per-acre before you hit the limits of biology and physics. Not to mention that the distribution of goods and food has come to rely on a ridiculously complex, interdependent, increasingly fragile network of supply chains that could go down the drain with the first Chinese missile on Taiwan (just to name one scenario).

Fossil fuels: "People have predicted that we'd reach "Peak Oil" in 2000, and here we are, 25 years later, and thanks to fracking and oil sands and newly discovered deposits, oil is still cheap and abundant. No need to switch to renewables, we'll be fine!" ...right, but no matter how you slice and dice it, you won't find an infinite amount of new deposits, and if you consume a finite resource at a finite rate, it will run out in finite time. Whether you reach the maximum in 2000 or 2030 or 2050 doesn't change the fundamental problem.

Expand full comment
Rothwed's avatar

The overpopulation angle really was solved by the advent of synthetic nitrogen fixation. There's still a ceiling somewhere but even at the projected 10 billion we're headed for, the entire world could be fed by the available farmland in the US alone. (If all land suitable for food production was used for that purpose). We might run out of rare earth elements or fossil fuels, but we won't run out of food.

Expand full comment
Jeffrey Soreff's avatar

>Not to mention that the distribution of goods and food has come to rely on a ridiculously complex, interdependent, increasingly fragile network of supply chains that could go down the drain with the first Chinese missile on Taiwan (just to name one scenario).

True. Note also that the supply chains didn't do well with Covid, and couldn't even pivot enough to shift office toilet paper to be supplied to consumers through supermarkets when WFH became a major chunk of the workforce. The supply chains literally couldn't deal with shit moving a few miles.

Expand full comment
Reece's avatar

The Ukraine one is not as good an example as the others, because there is an escalatory chance in the opposite direction too: the West not responding to escalations from Russia encourages Russia to escalate.

For the extreme cases: If the West launches nukes at Russia, they are obviously going to respond with nukes. And if the West decided to be completely pacifist, Russia is obviously going to respond by taking Ukraine, Latvia, Lithuania, Estonia, Moldova, Poland, and whatever else they want.

In the most recent case, Russia brought in 12,000 or so North Korean soldiers. There was a report of Russia considering bringing in 100,000. If the West gave no response to this escalation, what would stop him from doing this?

I think tit-for-tat response would have been wiser. If the West had started with tit-for-tat (you use tanks, we send tanks to Ukraine, etc.) then Russia would know exactly the response from the West to their escalations. Russia, of course, has already been hitting Ukraine with missiles since the start; so this would have been allowed in response immediately. The use of North Korean soldiers would have been discouraged not by letting Ukraine do something Russia was already doing to them, but by sending 12,000 NATO troops to man the Belarus border or something.

Expand full comment
BlaMario's avatar

> And if the West decided to be completely pacifist, Russia is obviously going to respond by taking Ukraine, Latvia, Lithuania, Estonia, Moldova, Poland, and whatever else they want.

If this is obvious to you, can you explain how this would be good for Russia, or for Putin personally?

Expand full comment
Viliam's avatar

Putin could extract resources from the conquered countries, and the victories would reduce the political tensions in Russia by making Russian nationalists happy.

Also, russifying the Ukrainian children would improve the demographic curve of ethnic Russians.

Expand full comment
John Schilling's avatar

I suspect your idea of what is "good" for Russia and/or Putin, is rather different than Vladimir Putin's idea of what is good for him and his nation. But I'll let Londo Mollari explain it to you: https://youtu.be/YbckvO7VYxk

Russia, Ukraine, Latvia, Lithuania, Estonia, Moldova, Poland; all back, the way that it was.

Expand full comment
Chastity's avatar

The same way that the consequences of invading Ukraine would be good for Russia or Putin personally? Obviously, he felt that was a good idea, and we're assuming a completely pacifist West, much like he assumed going into Feb 2022.

The answer that speaks to Putin's psychology is that he views Russian imperialism and domination of its neighbors as good, natural, right, etc.

Expand full comment
aretae's avatar

This is a fair critique

ALSO, it's the motte for the motte-and-bailey of the precautionary principle and generalized catastrophism.

I prefer to lump it all into a single bucket of catastrophism and update on that method of thinking. How many "we're gonna run out", "the trend is dangerous", "we need to regulate this or else..." warnings do we have to hear be wrong before the entire method of thinking that leads to catastrophism gets kicked down to the 0.00001%?

Expand full comment
FluffyBuffalo's avatar

Regarding AI: is the episode recounted in https://www.youtube.com/watch?v=oUB7CKaVZqA (a deranged chatbot placed at the center of a crypto scam/ meme-cult) evidence that AI is starting to become dangerous already?

Expand full comment
JamesLeng's avatar

Did it accomplish anything that a human demagogue couldn't have, given the same pool of loyal cultists and non-AI technical resources?

Expand full comment
FluffyBuffalo's avatar

Not as far as I can tell. But it seems like an indication that in addition to free, highly capable artificial engineers, medical experts and analysts we'll also have to deal with free, highly capable scammers, demagogues, cult leaders and propagandists.

ETA: maybe, to use an analogy from the Ukraine war, we need to broaden our conception of what constitutes danger. The opponent doesn't have to use a nuclear bomb to be dangerous; if they manage to crank out a million 5-lb FPV drones a month, we're screwed as well.

Expand full comment
JamesLeng's avatar

My preferred solution to that would be sorting out various well-known problems with the economy, so that there are more prosocial career options available for people with a psychological profile usefully summarized as "potential cultist."

Expand full comment
Belobog's avatar

I'll just note that you can play this same game in the other direction. The speed of light and the amount of energy needed to store a bit of information place hard physical limits on how powerful an AI could possibly be, so we know that eventually AI performance must necessarily plateau. We can then say the increased performance in newer, bigger AIs doesn't really count as evidence that they won't hit a plateau soon.
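
For what it's worth, the two physical limits mentioned can be put in rough numbers (standard constants only; the 10 cm chip size below is an arbitrary illustration, not a claim about any real system):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
c = 299_792_458.0    # speed of light, m/s

# Landauer limit: minimum energy dissipated to erase one bit of information.
landauer_joules_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J

# Light-speed floor on a round trip across a hypothetical 10 cm processor.
round_trip_seconds = 2 * 0.10 / c                 # ~6.7e-10 s

print(f"Landauer limit at 300 K: {landauer_joules_per_bit:.2e} J per bit erased")
print(f"Light-speed round trip across 10 cm: {round_trip_seconds:.2e} s")
```

Both floors sit many orders of magnitude beyond current hardware, which cuts both ways: a plateau must exist somewhere, but its existence alone says nothing about whether it is near.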

Expand full comment
darwin's avatar

You talk about people giving early warnings due to appropriate caution, which is true.

But there's another way to model this: assume something will almost definitely happen at some point along a causal axis (time, drug dose, provocation level, etc.), and ask 1,000,000 people to guess precisely when it will happen along that axis. What will the distribution of their predictions be?

If people have some good information about the causal relationship but not perfect or uniform information, the distribution of predictions should usually be a normal distribution, centered on the correct point where it actually happens, but with plenty of predictions falling on either side, and long tails.

From this model, you can jump out of just asking 'how many times have people predicted X' and instead ask: 'how *many* people predicted X at each point, does the number of people predicting it correlate with the causal variable (time, dosage, etc.) increasing, and does that relationship follow a normal distribution?' And if so, can we compare the number predicting it *now* to the total number of people in the predicting-this game, and estimate where along the distribution we might be?

Of course, if as you say people are cautious and make premature predictions as warnings, the distribution could be skewed to the left and this method would still make too-early predictions. But it could be tweaked to use the rate of predictions at each point as some type of forecasting, which I think could be useful. It can also diagnose when the number of predictions is *not* correlated to the level of the causal factor, a clue that the repeated warnings may be rhetorically motivated instead of true best-guess predictions.
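
A rough sketch of that idea (purely illustrative; the true threshold, noise level, and population size below are all made up): simulate forecasters whose guesses are normally distributed around the true tipping point, then watch what fraction have already "warned and been wrong" as the causal variable climbs.

```python
import numpy as np

# Toy setup: the event actually happens when the causal variable (time, dose,
# escalation level...) reaches 100 units. A million forecasters guess the
# threshold with noisy but unbiased information (normal around the truth).
rng = np.random.default_rng(0)
true_threshold = 100.0
guesses = rng.normal(loc=true_threshold, scale=20.0, size=1_000_000)

def fraction_already_predicted(x):
    """Fraction of forecasters whose predicted threshold is <= x, i.e. who have
    already 'warned and been wrong' if the event hasn't happened by x."""
    return np.mean(guesses <= x)

# Roughly 2%, 16%, 50%, 84% for a normal distribution with sd 20 around 100.
for x in (60, 80, 100, 120):
    print(f"at x={x}: {fraction_already_predicted(x):.1%} have already predicted the event")
```

Under this toy model, the running fraction of forecasters who have already warned is itself a rough estimate of how far along the distribution you are, which is roughly the forecasting tweak suggested above.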

Expand full comment
Erusian's avatar

> Suppose something important will happen at a certain unknown point.

This is "assume I'm right, therefore I'm right." There's no reason to believe any given thing will happen. Even Fidel Castro dying is a different thing based on whether it happens in 1960 or 2000.

> If you’re being appropriately cautious, you’ll warn about it before it happens. Then your warning will be wrong.

No, you'll be wrong if and only if you say something wrong. "Eventually Fidel Castro will die" vs "Fidel Castro will die on January 1st, 1990." The issue your movement has is that it doesn't have any way to get to that specificity.

This is true in general. If you can tell me the exact day a stock will go up then you can make a bunch of money. The week less money. If you just tell me it will at some point go up then that's worthless.

> Then the thing will happen and they’ll be unprepared.

Or the thing will never happen. Or it will happen and not be as you predicted. You're assuming your prediction is right as a precondition. This is a mistake.

> The lesson is: “maybe this thing that will happen eventually will happen now” doesn’t count as a failed prediction.

Then what does? Is your brand of AI risk a falsifiable belief? Is it scientific? What would you need to see to disprove it? If it is not falsifiable then it is not science; it is faith. (Being hard to falsify is not an excuse. All faith-based beliefs are hard to falsify, and many set their believers off looking for scientific proof in the 19th century.)

I think there are risks to AI but I can name what would prove me right or wrong in them.

> Third, this is obviously what’s going on with AI right now.

No, it's not "obviously" what's going on with AI right now. Maybe it seems that way to you. But even if you're right it's not obvious. This distinction is important especially if you want the political remedies you've said you want.

> It would be surprising if AI never became dangerous

Dangerous, yes, everything is potentially dangerous. It would be surprising if cheese never became dangerous. (And cheese does in fact kill people every year. Far more than AI has. So we have cheese safety.) The objection is not that AI is never dangerous. The objection is that something like the precautionary principle is more damaging than helpful. Imagine if we responded to that by banning cheese or going through a "cheese pause" where no cheese was produced until we had infinitely shelf stable cheese that never made anyone sick.

That would be eliminating cheese, and not an effective way to make it safer anyway. It would also be politically impossible to get through or maintain. This is the point I've made a few times now: not only is this not going to make us safer, it's not going to work, period. And considering that both China and the US are rushing toward Manhattan-style AGI projects, I think I've got some evidence I'm right.

You cannot just be directionally right. You must be specifically right and operate within the material, political, and economic constraints you find yourself in. And your movement gives itself a lot of leeway to not be.

Expand full comment
Jeffrey Soreff's avatar

>Then what does? Is your brand of AI risk a falsifiable belief? Is it scientific? What would you need to see to disprove it? If it is not falsifiable then it is not science

Well said!

Expand full comment
Freddie deBoer's avatar

This logic is what leads people to sell prime Wyoming real estate at a cut-rate cost because they heard the Yellowstone caldera will eventually erupt. Scott, YOU DO NOT LIVE IN A SPECIAL TIME.

Expand full comment
Arbituram's avatar

If you've got any listings for cheap real estate in a beautiful area I'm all ears!

Expand full comment
Torches Together's avatar

As I'm sure you wouldn't dispute, the logic would hold up if the Caldera was inevitably going to erupt on actuarial timelines.

But I'd like to see you, as a public intellectual, bet on your beliefs here. If you genuinely believe that "we don't live in a special time" with regards to AI development, I think there are bets you could make regarding your beliefs of AI progress and its economic significance in the coming decade that could (from your perspective) make you a lot of money.

Expand full comment
Victor's avatar

"The lesson is: “maybe this thing that will happen eventually will happen now” doesn’t count as a failed prediction.'

I'm not sure. I think rather the problem is with the wording of the original prediction. It shouldn't be "This thing is coming" but "This thing has X chance of happening, and the chance goes up every time you risk it."

Because I think that's objectively the correct way to think about it. It's about probability. We all know people who smoked their whole lives and didn't get cancer. They got lucky; lots of other people got sick.

The duty of a professional is not to prevent people from inevitable disaster (that's a very rare situation). Far more often it's to help people make informed choices, given the odds of various outcomes.
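
That framing is just cumulative probability over repeated exposures; a minimal sketch, assuming a made-up 2% risk per exposure rather than any real drug's numbers:

```python
def cumulative_risk(per_exposure_risk: float, exposures: int) -> float:
    """Probability of at least one bad outcome after n independent exposures."""
    return 1 - (1 - per_exposure_risk) ** exposures

# With a hypothetical 2% risk per exposure, the cumulative risk climbs steadily
# even though any single exposure is still overwhelmingly likely to be fine.
for n in (1, 5, 10, 35):
    print(f"{n:>2} exposures: {cumulative_risk(0.02, n):.1%} chance of at least one bad outcome")
```

Each individual exposure that goes fine is only weak evidence about the per-exposure risk, even though the cumulative number keeps climbing, which is the communication problem the comment points at.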

Expand full comment
smopecakes's avatar

For me the reaction to Biden's debate revealed the existence of an epistemological black box in which actual truth is not prioritized, and is actually not visible to its users

I asked ChatGPT if there were any instances of liberal commentators saying it was morally wrong to be dishonest about Biden's state, of which there were obvious indicators, very memorably acknowledged and dismissed as cheapfakes

"There does not appear to be significant documentation of liberal commentators explicitly stating that it was morally wrong to be dishonest about President Biden's mental acuity. Most discussions around this topic focus on strategic, journalistic, or political consequences rather than framing it in moral terms."

It's a black box. The failure to report things accurately is not about truth, it's about consequences. This black box is so tightly bound that the president's mental acuity can remain fully obscured in public reportage. When this turned out not to be useful, the box was adjusted. We can expect that the actual truth is still largely obscured to the epistemic black-boxers, except to the extent that they believe it will be more strategically useful

The truly powerful thing about this is that black box users don't actually know how true something is. While it's difficult to actually and specifically lie on a large scale, it's much more possible to use a common black box in which actual truth is obscured to the strategic maximum. You're just reporting the results of the widely respected publicly acceptable version of reality gadget

If this gadget encounters a poll of economists where 30% say minimum wage hikes reduce employment and 10% say they do not, the gadget will return the headline, "most economists don't believe minimum wages reduce employment". The gadget user will not have thought carefully enough about the actual result to recognize that this is factually untrue by normal standards

Expand full comment
Jason Crawford's avatar

I am reminded of Eric Lippert's comment on the Space Shuttle program: “NASA management seems to have the curious and obviously false belief that the fact you didn't fall off a cliff yesterday is proof that it is safe to walk even closer to the edge tomorrow, even if you're not precisely clear on where the edge is. That is actually an algorithm for ensuring that you always walk off a cliff.”

https://web.archive.org/web/20130425080054/http://blogs.msdn.com/b/ericlippert/archive/2011/07/21/i-m-glad-and-sad-that-that-s-over.aspx

Expand full comment
Sam's avatar

I think it's useful to note how the examples differ. The doctor has the most solid position for caution: even if he has no data at all on the drug, that's a reason for caution because it would be easy and cheap to gather some basic safety data first. The war strategists are debating the magnitude of a small probability: if the chance of a given escalation leading to WWIII is small enough, then the benefits of the escalation may outweigh the risk. Republicans saying Biden would develop dementia were not actually interested in whether he developed dementia, only in whether it would help them win elections, but there was a separate good reason to believe he might develop dementia.

AI safety researchers are working from theoretical models that show (simplifying for brevity) that when a system becomes advanced enough, it can become dangerous. Is this similar to one of the examples? Unlike the doctor, they don't have empirical examples of AI catastrophe, so you have to trust that their untested models are correct. It's easy for AI optimists to see them as like the Republicans, since there are clearly some people using AI caution cynically, but there could still be valid reasons to worry about AI. Or are they just Penny Panic, with no sound reason at all to believe catastrophe is more likely today than yesterday? That would amount to dismissing the AI risk theories completely, which I don't feel qualified to do.

This leaves us with the war strategists, who like the AI safety researchers don't have real-world examples of a nuclear war breaking out to compare their theories against. They have also repeatedly predicted a chance of disaster, and so far the world has averted disaster. I'm not sure if these similarities imply anything interesting about AI, but maybe it at least points to a communication problem. Maybe acting more like a nuclear strategist in certain ways, for instance by taking the geopolitics into account instead of just dismissing the admittedly annoying "what about China" argument, would make AI safety people more helpful and credible participants in discussions on the topic.

Expand full comment
LGS's avatar

The implicit point of people saying "you were clearly wrong last time" is that they expect warning signs. They expect Putin to escalate in a non-nuclear way before jumping to nukes. They expect Castro to be bedridden or frail before death. They expect Biden to show some signs of frailty (this arguably happened) before full dementia. They expect AI to cause minor damage before major damage.

Most things worth warning about are not binary. They're not "did it happen yet yes/no". There are visible close calls. There are exceptions to this, of course, but I think this is where your disagreement lies, not in the question of how to behave in the idealized scenario you sketch.

Expand full comment
Antropofagi's avatar

Perhaps not escalating against Russia is the greater risk? After all, they launched the attack on Ukraine without any such provocation. Perhaps the "Peace in our time" strategy is the one that is flawed when dealing with authoritarian imperialists?

Expand full comment
Humphrey Appleby's avatar

...and therefore if anyone looks at us funny we should immediately nuke 'em.

WTF? Sometimes escalation is the right strategy, sometimes it's the wrong strategy. Similarly, sometimes `appeasement' of authoritarian imperialists works (see e.g. cold war, or the behavior of Spain in WW2, or Denmark in WW1, or Switzerland basically always), sometimes it fails. And sometimes escalation blows up in your face (see e.g. Imperial Japan escalating against the US blockade). You can't replace grand strategy by a rock that says `always escalate' or `never escalate.'

Expand full comment
Antropofagi's avatar

Ok. I wrote "perhaps not escalating against Russia is the greater risk" - does that equal a rock saying "always escalate"?

The appeasement leading up to WW2 was obviously bad as far as I can tell, as was the appeasement of Putinist Russia from the Munich speech onwards, including the occupation of Krym.

Switzerland might be a slightly off example, depending on one's perspective, since they are very entangled with Russian oligarchs and pose a problem with regards to European rearmament and military capacity, hindering the use of Swiss weapons against the Russians. So from an outside (of Switzerland) perspective, the Swiss "appeasement" is not necessarily good but further enables war, aggression, erosion of the rules-based world order, etc.

The Japanese example is also off, IMO, because of the aggressive colonialism and blatant breaches of rules-based IR and, for that matter (although anachronistically), human rights. In other words, escalation probably emanated from Japanese behaviour rather than from the US. Obviously the further Japanese escalation after the US embargoes was not beneficial to Japanese interests in the long run (unless you consider the subsequent defeat and demilitarization beneficial, which you well might - but that's more of a "we'll see, said the Zen master" unpredictable long-term outcome). More to the point, Japan adhered neither to democratic standards of the time nor to rules-based IR behaviour, but was blatantly resource-grabbing, violating state sovereignty and massacring civilian populations. The US did God's work ending all that, some might say up to and including the nukes.

A better example is that of Israel contra Hamas, where it clearly was a mistake not to act against Hamas much sooner, perhaps as early as 2007. Israeli actions against Hamas as well as Hezbollah seem warranted in light of their constant aggression. Yet vocal parts of world opinion are critical, to say the least, and seem to be calling for Israeli appeasement politics. That would be a mistake in my view.

Then we might end up with a model where democratic states should punish authoritarian states and other bad actors for violating state sovereignty and/or the rules-based world order (and/or, though this is more complicated since it poses a potential loophole in the rules-based world order, violations of human rights). We generally do not hold authoritarian states to any such standard, since being a bad actor excludes them from potential plus-sum interaction.

Norms are for peers, not for outsiders. The purpose of norms is to enable coordination between mostly benevolent parties. It is thus meaningless to complain about Russian escalation. Consider the (actual) dogs of war in the Roman army - the only way to deal with them is to kill them, since they cannot be reasoned with. Their unreasonableness is what's scary about them.

The norm for benevolent actors - which in a rules-based world order means non-aggressive and mostly democratically leaning states - should be not to escalate towards each other, but to cooperate to punish norm violators and kill off the rabid dogs.

Pardon my English, not my native tongue.

Expand full comment
Antropofagi's avatar

Actually, one could compare this to rule of law and the justice system "escalating" against a criminal. A criminal with nukes is perhaps another thing, but still.

However, my main point was not about the response to Russia's invasion specifically, but that "not acting" can pose a separate risk. One might as well turn the argument around and apply a cautionary principle to norm violations in a rules-based world order: act when Russia invades Krym, or else we risk a full-scale attack against Europe and/or NATO; act when Russia occupies South Ossetia, or else we risk further territorial grabs and hybrid warfare.

Caution to act and caution not to act. The Castro situation is another story, since it is a clean prediction without analogous policy implications.

Expand full comment
Tibor's avatar

I think the Putin example is not great, or rather is a different situation, because it is the only example where the event is triggered by an agent who responds to your "provocation". Castro couldn't have increased his lifespan in a meaningful way (well, cutting down on cigars perhaps :) ), the drug is bad for you at a fixed dosage which does not change, and AI research is not something decided by a single person who reacts to specific acts.

With Putin, the argument (I believe) goes like this:

1. Putin has been talking a lot about red lines, yes, but a lot of that seems to be bluffing and stalling as proven by nothing happening when those "red lines" are crossed. That is not evidence for there being no red lines because obviously they exist (like your example with Moscow being nuked), but it is an argument for Putin having a reason to bluff.

2. Putin doesn't achieve much by a real, dangerous escalation. If Putin decides to set off one nuke in Ukraine, he does not stop the war but provokes a massive response from the West, after which Russia almost certainly loses ... or he reacts to that by escalating further, which would bring WW3, which is good for neither Russia nor Putin.

So the best strategy for Putin is to keep people unsure about where the red lines are and escalate/threaten in mostly symbolic ways. Sending an ICBM (if it really was an ICBM) at Ukraine with a conventional payload does not really do much, but it is symbolically powerful and makes people unsure. It can still backfire and provoke more support from the West, but he probably believes it will actually make people scared and hesitant with support. Rewriting the nuclear doctrine in ways that sound scary (but in practice change little, and Russia cannot really be forced to follow its doctrine anyway) is another example, as is Putin saying he gave an order for the nuclear branch of the army to be at a "high readiness level" (after which nothing really happened).

Putin simultaneously needs to test what he can do without provoking an escalation from the West (because Western conventional escalation can hurt him more than any conventional escalation Russia can introduce) and ideally convince the West to de-escalate by posturing and nuclear sabre-rattling.

The real red lines exist, but both an "escalation cost-benefit analysis" for Russia and patterns of actual Russian behaviour suggest that they are about where one would actually expect them to be, which is very far from the stated red lines. Every time one of these stated red lines is crossed, it is an update toward this, because it confirms the hypothesis that Russia would actually only use nukes if genuinely directly threatened, as one would expect (or even only if its existence were threatened, as their original nuclear doctrine stated).

Expand full comment
Wesley Fenza's avatar

Counterpoint:

"Stop crying wolf. God forbid, one day we might have somebody who doesn’t give speeches about how diversity makes this country great and how he wants to fight for minorities, who doesn’t pose holding a rainbow flag and state that he proudly supports transgender people, who doesn’t outperform his party among minority voters, who wasn’t the leader of the Salute to Israel Parade, and who doesn’t offer minorities major cabinet positions. And we won’t be able to call that guy an “openly white supremacist Nazi homophobe”, because we already wasted all those terms this year."

https://slatestarcodex.com/2016/11/16/you-are-still-crying-wolf/

Maybe somewhere during his post on how wolf-crying shouldn't make us update against the existence of wolves, Scott could have spared a sentence or two about how people shouldn't cry wolf

Especially because in many of his examples, most people don't have the time, attention, or inclination to figure out the base rates themselves and mostly rely on experts, and those experts are the ones who keep crying wolf

"It would be surprising if AI never became dangerous - if, in 2500 AD, AI still can’t hack important systems, or help terrorists commit attacks or anything like that. So we’re arguing about when we reach that threshold"

No we're not! Everyone agrees that AI will enhance human capabilities. AI can help people hack important systems and help terrorists commit attacks right now. The AI wolf-criers are the ones who say that any day now, AI is going to go rogue, decide the existence of humanity is incompatible with its goals, and exterminate us.

AI doom is not like the Castro situation. There is no certain outcome that we're inexorably moving toward. AI might turn out to be hostile by default, but nobody has any way of knowing without trusting experts at the cutting edge of the field. If those experts prove themselves untrustworthy, it doesn't make sense to say that it can never happen, but it does put most of us in a state of epistemic nihilism where we aren't capable of understanding how these systems work and we can't trust the people who claim to understand.

The way out of this is for the experts to be trustworthy. Not for non-experts to "do you own research" and calculate the p(doom) ourselves.

Expand full comment
walruss's avatar

I think the last part was meant to address this - essentially he says it's different when people cry wolf over republicans becoming fascist dictators because there's no reason to assume that's inevitable, that the chance of republican dictatorship doesn't rise every time they win an election.

But of course, the argument these people consistently make is that Republicans move both the norms of government towards unilateral action and the public conversation towards fascist rhetoric. They do believe that it's inevitable that republicans will erode democracy until they can cancel elections, and that each time they win it becomes more likely.

Likewise, evangelical Christians believe in the inevitability of the rapture, despite multiple incorrect predictions. All these people can cite reams of evidence for their beliefs. The evidence isn't good but it's built a web of connections strong enough that removing any one link doesn't disrupt the web.

The more "rationalists" write about how this or that shouldn't cause you to update your prior beliefs much, and how we should treat low chances of world-ending stuff happening as significant even if predictions repeatedly fail, the more impervious they become to having their web dismantled.

This is essentially an argument for conserving bias. If an event you thought would happen not happening is always weak evidence, you can be as "Bayesian" as you want; you're never going to change your views on anything.

Expand full comment
sclmlw's avatar

Actuarial tables for when you die work in the opposite direction from the traditional Bayesian case. For anyone who doesn't die this year, their probability of dying next year goes UP, not down. Everyone dies, so your probability of death each year should rise after each year you fail to do so.

But we have biology and thousands of years of experience to help us calibrate those pretest and posttest probability estimates. For something that has never happened, we don't have the same accuracy, so plugging wild estimates into Bayesian calculations risks giving us a false sense of accuracy/knowledge. You can't do accurate Bayesian calculations on the probability of AGI if you have to guess most of the variables.
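
A quick illustration of the actuarial point above (a hypothetical sketch using the standard Gompertz mortality model with made-up parameters, not real actuarial data): conditional on having survived so far, the probability of dying within the next year keeps rising.

```python
import math

# Gompertz mortality model: hazard h(t) = a * exp(b * t).
# The parameters below are illustrative only, not fitted to real data.
a, b = 0.00008, 0.085

def survival(t: float) -> float:
    """Gompertz survival function S(t) = exp(-(a/b) * (exp(b*t) - 1))."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

def prob_die_next_year(age: int) -> float:
    """P(death before age+1 | alive at age)."""
    return 1 - survival(age + 1) / survival(age)

for age in (40, 60, 80, 90):
    print(f"age {age}: {prob_die_next_year(age):.1%} chance of dying within the year")
```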

Say a physicist is trying to accelerate a proton to the speed of light. He builds a small cyclotron and gets it going at an appreciable percentage of the speed of light. He'll get "closer" with each successive accelerator, but never get there, because it's an asymptote. We know that for the speed of light, but for other things we know 'just a little more' was enough to get dramatic results.

What about driverless cars using the Tesla approach of machine vision? Is it really close, or an asymptote? What about hyperinflation or loss of trust in the USD from moderate, cumulative government overspending?

Michelson and Morley thought they were approaching the precision in their measurement of the speed of light at which they'd finally be able to detect the luminiferous ether. Instead, what they had actually run into was relativity. Sometimes it's difficult to tell the difference between "really close" and "impossible".

I think where Scott gets this wrong is where there's a qualitative question of possible vs. impossible. Bayes doesn't help when you're forced to guess the terms you're plugging into your Bayesian equations. You end up fooling yourself into thinking you know something that's unknown.

Expand full comment
Victor's avatar

"I don’t actually know anything about Ukraine, but a warning about HIMARS causing WWIII seems less like “this will definitely be what does it” and more like “there’s a 2% chance this is the straw that breaks the camel’s back”. Suppose we have two theories, Escalatory-Putin and Non-Escalatory-Putin. EP says that for each new weapon we give, there’s a 2% chance Putin launches a tactical nuke. NEP says there’s a 0% chance. If we start out with even odds on both theories, after three new weapons with no nukes, our odds should only go down to 48.5% - 51.5%. "

I don't think it works like that. It's not that every escalation changes the odds that Putin will escalate. We assume, I think, that Putin knows when he will use nukes, and this doesn't change in response to escalations. Instead, each escalation provides us with new information on how likely it ever was that Putin would nuke anyone. If he were getting closer to using nuclear weapons, experts expect that he would reveal this with incremental escalations (because that is how the US and the USSR/then Russia have always done it). Since no such incremental escalations are in evidence, the conclusion is that we are not approaching Putin's criteria for going nuclear.

This makes theoretical sense, because going nuclear is such a high risk potentially catastrophic move for Russia to take that we all assume Putin won't do this unless he feels he has no other choice, which seems unlikely in response to anything happening in Ukraine.
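
For reference, the arithmetic in the quoted passage does check out under its own assumptions; this sketch just reproduces the post's toy numbers and is not an endorsement of the model:

```python
# Two hypotheses from the quoted toy example: Escalatory-Putin (2% chance of a
# tactical nuke per new weapon system sent) vs Non-Escalatory-Putin (0% chance).
prior_ep, prior_nep = 0.5, 0.5

# Likelihood of observing "three new weapon systems, no nukes" under each.
lik_ep = (1 - 0.02) ** 3    # 0.98^3 ~ 0.941
lik_nep = 1.0 ** 3

posterior_ep = prior_ep * lik_ep / (prior_ep * lik_ep + prior_nep * lik_nep)
print(f"P(Escalatory-Putin | 3 safe escalations) = {posterior_ep:.1%}")   # ~48.5%
```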

Expand full comment
Victor's avatar

"It would be surprising if AI never became dangerous - if, in 2500 AD, AI still can’t hack important systems, or help terrorists commit attacks or anything like that. So we’re arguing about when we reach that threshold."

Again, I don't think that this is how it works. Whether or not AI "becomes dangerous", whatever that is taken to mean, isn't a simple function of how powerful it is. It's a ratio of how powerful it is against how many safeguards and other barriers there are to prevent it from being used to cause damage. AI is already dangerous; it already has the capacity to facilitate crime and other potentially dangerous activities, but there are barriers in the way of anyone using it catastrophically. As AI continues to grow in its capabilities, we can also expect these barriers to grow along with it. For all we know, AI is destined to become relatively safer.

I think what we really need for all such scenarios isn't a time series of predictions, but an explanation of how the dangerous outcomes would work in practice, compared against the current context and projections of future situational changes. Is there any objective data that Putin is getting closer to using nukes? That we are losing control over AI? That a given medicine becomes more dangerous the more you use it, or the higher the dosage? That's what we need to know to answer the question.

Expand full comment
Roger R's avatar

The idea of AI developing to a point where it takes us out Skynet-style, or paperclip-maximizing style... fine, whatever. Maybe there's some truth to that idea. But it's not a 100% certainty.

Like you wrote, what *is* already certain is the way AI is already dangerous when misused/abused by bad human actors.

Imagine a world with a lot more air pollution than our own. Where there's more smog and many people getting sick from it. Where laws against basic pollution are more lax than they are in our world. So there are immediate serious issues caused by pollution. In such a world, would it make sense for the top environmentalists to be focused mostly on the long-term dangers of climate change? Or would it make more sense for the top environmentalists to focus on the already-present problem of widespread smog and the immediate health problems it's causing people?

This is similar to how I now feel about those who focus on AI X-risks while saying (almost) nothing on the already-existing problems caused by people misusing AI. There's this story from only a few days ago: https://apnews.com/article/artificial-intelligence-deepfake-lancaster-280a2959356c65e0c3d40e7e212b9db8

Expand full comment
Malcolm Storey's avatar

Skynet requires both awareness and motivation. Neither is part of current AI, which, as the acronym implies, only tries to emulate intelligence. The real dangers are AA and AM (artificial awareness and artificial motivation).

Expand full comment
Victor's avatar

I think it would make the most sense for future environmentalists in that scenario to examine what led to their world becoming so polluted in the first place, and address that mechanism.

The obvious parallel is climate change. The major reason I support efforts to reduce carbon emissions isn't the data showing temperatures climbing over the years, important as that is; the real game-changer for me is that I think I understand the underlying mechanism: how carbon gets into the air, and why it has the effects it has. This theoretical understanding of mine attaches meaning to the data. I find that far more persuasive than any prediction series by itself ever could be.

Expand full comment
warty dog's avatar

Another one is overpopulation. Many overpopulation debunkings would benefit from adding "though it will obviously be a problem at some point, I think it's unlikely before 10^[n] years".

Expand full comment
walruss's avatar

This strikes me as a problem with Bayesian reasoning. Not that thinking in this way isn't better than thinking in other ways, but that it's still extremely limited. This isn't because of the math, which is obviously correct; it's because we're splashing in waves on the beach of the evidence ocean, so updates have to be small.

Consider this: One of my strong priors is that as generative AI systems ingest more and more data, they will eventually reach a point of diminishing returns. This particular AI model, I suspect, will not grow monotonically better forever based on how much data it ingests. At 10^23 FLOPs, the model improved significantly. At 10^24 FLOPs the model improved less significantly, but still much more than I would have assumed. My basic assumption is the opposite of the one you describe - that models will eventually stop getting better (until there's some improvement in the underlying model, but that's beside the point).

Now, I want to update this assumption based on the fact that I was wrong. Are you saying I shouldn't?

Expand full comment
Hellbender's avatar

If you assumed that 10^24 flops wouldn’t be as large an improvement as it was, it sounds like you were making an assumption more specific than “they will eventually reach a point of diminishing returns.” You should update against that more specific assumption.

Regarding your broader assumption, I’m unsure. I guess it boils down to “at what point should you update away from believing a causal relationship is logarithmic if the curve stays linear for larger and larger values of X”
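
One way to make that concrete is a Bayes-factor comparison between a saturating curve and a straight line in log-compute; the sketch below uses entirely made-up data points and hand-picked candidate models, purely to show the mechanics:

```python
import math

# Hypothetical benchmark scores at increasing log10(FLOPs); made up for illustration
# and deliberately placed on a straight line in log-compute.
data = [(23, 50.0), (24, 60.0), (25, 70.0)]

def gaussian_loglik(pred: float, obs: float, sigma: float = 3.0) -> float:
    """Log-likelihood of an observation under Gaussian noise around the model's prediction."""
    return -0.5 * ((obs - pred) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def saturating_model(x: float) -> float:
    """Diminishing-returns curve flattening toward a ceiling of 75 (hand-picked)."""
    return 75 - 50 * math.exp(-(x - 22))

def linear_model(x: float) -> float:
    """Score keeps improving linearly with log-compute (hand-picked)."""
    return 30 + 10 * (x - 21)

loglik_sat = sum(gaussian_loglik(saturating_model(x), y) for x, y in data)
loglik_lin = sum(gaussian_loglik(linear_model(x), y) for x, y in data)
print(f"Bayes factor, linear vs saturating: {math.exp(loglik_lin - loglik_sat):.0f}")
```

With data deliberately placed on the line, the linear model is heavily favored; the point is only that each additional observation shifts the odds by a quantifiable factor rather than flipping a binary verdict.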

Expand full comment
RandomWalker's avatar

Enjoyed this! Here's a quick note. Unless I'm missing something:

Dementia in Year 1 and not in Year 2 3.84%

Not in Year 1 and Dementia in Year 2 3.84%

Dementia in both years 0.16%

No dementia in either year 92.16%

(Assuming the probabilities are independent each year.)

In this case dementia in at least one year happens to round up to 8% but it should say '[about] 8%'
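
The arithmetic, for anyone who wants to check it (this assumes the post's 4%-per-year figure and independence between years, as noted above):

```python
p = 0.04   # assumed per-year dementia probability from the post

year1_only   = p * (1 - p)       # 3.84%
year2_only   = (1 - p) * p       # 3.84%
both_years   = p * p             # 0.16%
neither_year = (1 - p) ** 2      # 92.16%
at_least_one = 1 - neither_year  # 7.84%, i.e. "about 8%"

print(f"{year1_only:.2%} {year2_only:.2%} {both_years:.2%} {neither_year:.2%} {at_least_one:.2%}")
```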

Expand full comment
Lee Dennis's avatar

WaPo this morning:

Russian President Vladimir Putin said Thursday that Russia hit Dnipro with a medium-range “nonnuclear hypersonic ballistic” missile dubbed “Oreshnik,” which means hazel in Russian. The Pentagon said the missile, which was armed with a conventional warhead, was an experimental variant of Russia’s RS-26 Rubezh intermediate-range ballistic missile.

Putin said the attack on Dnipro was a “test” of the weapon, in response to the Biden administration’s recent decision authorizing Ukraine to fire its U.S.-supplied Army Tactical Missile System (ATACMS), with a range of up to 190 miles, at targets inside Russia.

Expand full comment
Kevin Barry's avatar

That's why people should give estimated percentage chances with warnings, lest they be ignored.

Expand full comment
Alexej.Gerstmaier's avatar

Taleb calls this the Turkey 🦃 problem

Expand full comment
Shankar Sivarajan's avatar

Dates back at least to Bertrand Russell: https://en.wikipedia.org/wiki/Turkey_illusion

Expand full comment
tg56's avatar

How would this analysis approach something like Peak Oil (as a catastrophe)? Earth's oil supply is finite, but the argument largely failed by discounting technological progress (e.g. fracking adding supply, likely battery technology reducing demand in the near future, and more speculatively artificial fuels long term). I guess the failure mode is something like: does the eventual event actually imply the predicted consequences? E.g. Castro dies - does that actually lead to a thawing of US-Cuba relations or a freer Cuba? (No! It doesn't!)

Climate change is another potential lens. If you keep adding CO2 to the atmosphere you eventually, at the limit, get Venus, which is a catastrophe. But David Friedman makes a good argument that it's tricky to show that the level of global warming so far is even net negative at all (there are positive aspects to global warming and higher CO2 levels as well as negative ones, and pre-industrial levels of CO2 were very likely below what would be 'optimal' for human flourishing; see e.g. the Little Ice Age), significant technological changes in this space appear on the horizon (solar+battery, sequestration, geoengineering), and the economic impact estimates of climate change over the next 100 years by e.g. the IPCC are surprisingly low.

The peak-oil folks being wrong doesn't make the lukewarm folks right, but both seem to make a similar argument to the ones here, and I'm not sure this lens necessarily helps. Sure, Biden will eventually get dementia or cognitive decline (unless we cure aging!), but we have a vice president, so maybe that's not actually a bad outcome (sans hindsight; e.g. maybe Vance will be a fine Trump replacement for those who like Trump)?

Expand full comment
Paul Zrimsek's avatar

Part of what tripped up the Peak Oil crowd was a self-inflicted confusion between motte-finite ("not infinite") and bailey-finite ("on the point of running out"). I'm not sure how much their misadventures can be generalized to alarmists of other sorts.

Expand full comment
Mark Kumhyr's avatar

Great point that ties into recency bias. Government spending tends to focus on the last disaster, rather than proactively preparing across the spectrum of outcomes. We saw this after COVID-19, the GFC in 2008 and of course 9/11.

Expand full comment
L. Scott Urban's avatar

Is anyone else mildly confused by the recent trend of "don't change your mind" posts from ACX? If we aren't supposed to change our mind when something dramatically unexpected happens, and we aren't supposed to change our mind when repeatedly proven wrong, then when exactly are we supposed to switch viewpoints? I can understand some pushback to the viewpoint whiplash/extremism of the internet age, but at some point, surely this is just another convoluted method of stubbornly claiming to be right about everything.

There is merit in claiming that a 100 mg dose is harmless when a 100 mg dose is harmless, even if a 1000 mg dose is fatal. It is unreasonable to side with the doomsayers when they are overcorrecting, and an equally important strategy here is likely to discourage doomsayer overcorrection. Sometimes, people should substantially change their opinion, when a position is proven drastically, or repeatedly, wrong. After all, this discourages other people from taking positions that are drastically and repeatedly wrong.

Expand full comment
walruss's avatar

I'm not at all confused but I am concerned - Scott has built a strong enough web of assumptions that he feels confident rejecting empirical evidence if it does not confirm those assumptions.

That's not always incorrect! The history of humanity is largely a story of us seeing an empirical correlation and creating wrong assumptions based on that correlation. But it does mean that real-world evidence has become irrelevant to this blog, so unless I am convinced that his worldview is right (and I'm not), what was once a useful source of information and contextualization for me has become another admittedly well-written and entertaining pundit blog.

Expand full comment
L. Scott Urban's avatar

Well put. He's historically been quite even-handed; it is a bit troubling to see his opinions calcifying. Kind of enlightening to see his mental process for doing so, though. People aren't usually this verbose when claiming the timeless adage: "la la la I'm not listening to you".

Expand full comment
MicaiahC's avatar

This is overly reductive. It's talking about valid mind-changing moves, not about the general notion of changing your mind.

The claim isn't that you shouldn't change your mind, but that particular ways of changing your mind are invalid: those based on dramatics, or on "X will eventually happen, it didn't happen yet, therefore the model of it happening eventually has been falsified". He has given ways where changing your mind *is* justified: when a dramatic event happens repeatedly, or if specific predictions of high probability get made.

Of course when you reduce a position to something that it's not, it's absurd.

Expand full comment
L. Scott Urban's avatar

There's pretty good evidence that he is referring to general, rule of thumb methods for decision making here, I don't think I'm misrepresenting him. He cites several fairly disconnected examples, and the article is framed as countering a general genre of argumentation, not a niche sub-type.

He is using the idea that something will eventually definitely be true to justify a cautious approach, even under circumstances where caution is repeatedly proven unwarranted. Future hypotheticals argue against real world, present day evidence, and in his mind, bear more decision making weight. Don't get me wrong, future hypotheticals can be extremely useful, but it really rubs me the wrong way when they are being used over and above practical experience. Abstract models are far easier to bend towards personal preference than concrete events. Sometimes things are repeatedly proven wrong because our vision of a predetermined future is way off base, and we should change our minds when that occurs, not cling to our prior misconceptions.

Expand full comment
MicaiahC's avatar

> There's pretty good evidence that he is referring to general, rule of thumb methods for decision making here, I don't think I'm misrepresenting him.

You are confusing a *general* illustration of a *specific* thinking pattern with a *general* thinking pattern. The title of this post was "against anti-caution arguments" and not "against anti-caution".

Scott's not saying that not changing your mind is correct, but that there are valid and invalid ways of changing your mind, and that a particular anti-caution argument is invalid. Not that anti-caution itself is invalid. The fact that he's illustrating that he was wrong in *not* changing his mind when evidence for Biden's senility came in should be enough evidence that this is not about Scott thinking that not changing your mind is virtuous.

> He is using the idea that something will eventually definitely be true to justify a cautious approach, even under circumstances where caution is repeatedly proven unwarranted.

Once again, no. It's about how a *more general* prediction ("X will happen at some point") is much more difficult to falsify than "X will happen at this specific point", and reasoning that applies to the latter should not be applied to the former. People who make vaguer predictions should be penalized for their vagueness, but penalizing for both vagueness and "a single data point contradicted it" is double-counting your evidence.

He's not assuming his correctness, but stating the assumptions and consequences of his model. And it turns out that those assumptions and consequences are much more relaxed than an interpretation that rounds off "this may happen eventually" with "this will definitely happen at this time". It is indeed correct to heavily penalize the latter if it has been falsified, but I'm pretty sure it's probabilistically incoherent to pretend these two situations are equivalent.

You are talking as if they are equivalent. That "at least one instance of..." statements are equivalent to "all instances of..." statements. If someone claims that a coin is fair, and then it comes up with three heads, you can't say this has "discredited their theory multiple times", because it's not *that* unlikely for even fair coins to have that result, and the comment was about general behavior of the distribution and not about the specific sequence of coin flips. Treating statements about coin flip distributions as if they were specific predictions about specific instances of flipping is a category error. Note that people can *still be wrong* about the fairness of the coin, and yeah, in worlds where the coin ends up unfair you would also see runs of 3 towards the biased side. But you can't understand the other side's argument *unless* you understand which outcomes are ruled out or ruled in by their model, and thought around "this thing that hasn't happened yet may well happen" can still be consistent in worlds where it doesn't happen, even for a long while.

You misinterpreted "these events are consistent with my model" with "my model is correct, despite the evidence against it." And that's where you're wrong.

Expand full comment
L. Scott Urban's avatar

Once again, I'm not misinterpreting. If "these events are consistent with my model" is expanded to events which are not consistent with reality, then it becomes "my model is correct, despite evidence against it." Flat earthers have been repeatedly disproven, in multiple different ways, but because they view all major institutions as inherently biased against their ideology, they can claim that "these events are consistent with my model." Sometimes the model is what needs to be changed, especially if it is being repeatedly proven wrong in notable ways.

Alright, I kind of started with the end there, so moving backwards... I'm a bit lost on your intro, honestly. Against anti-caution arguments is pretty much the same as against anti-caution, unless the specific arguments being addressed are unrelated to core anti-caution claims. The points being addressed seemed pretty core to me. Could you articulate exactly which arguments are being addressed, and why they don't relate to anti-caution in general?

Moving on, "X will happen at some point" and "X will happen at this specific point" are tied together, they cannot exist on separate logical ground. The only reason we can claim that Castro will eventually die is because such a large body of people who are extremely similar to Castro are currently dying. The only reason we can say that Russia will eventually respond with nuclear force is because nations have historically been willing to respond with greater force in order to achieve their goals, when hard pressed. Even the idea that AI will drastically recontextualize the world through application of intelligence is explicitly based on widespread recognition of the fact that humans have drastically recontextualized the world through application of intelligence. It is unreasonable to split these apart. A coin flip which lands three times in your opponent's favor should be regarded skeptically, because this is a more likely result from a weighted coin than a fair one. Even when the coin is fair, this is good evidence that it is weighted, and should be regarded as such. ACX is right to push back against treating that evidence as gospel, disregarding any future event which might contradict it, but wrong to treat the statistical deviation as gospel, disregarding real world, present day events which contradict the theory. Specific predictions aside, each day which passes without catastrophe is a counterargument against catastrophe. Not an invincible one, but certainly one which we should pay attention to.

Expand full comment
MicaiahC's avatar

> Could you articulate exactly which arguments are being addressed, and why they don't relate to anti-caution in general?

The title of the post is exactly "Against The Generalized Anti-Caution Argument". The use of the word "the" refers to a specific anti-caution argument: that of rounding off "there is some unknown parameter value at which the catastrophic event happens" to "the catastrophic event will happen at the lowest value of the parameter that has not yet happened". The post ends with

> If you don’t do this, then “They said it would happen N years ago, they said it would happen N-1 years ago, they said it would happen N-2 years ago […] and it didn’t happen!” becomes a general argument against caution, one that you can always use to dismiss any warnings. Of course smart people who have your best interest in mind will warn you about a dangerous outcome before the moment when it is 100% guaranteed to happen! Don’t close off your ability to listen to them!

Clearly an injunction for people to *listen* to people warning about things, or, more accurately, to engage with object-level arguments instead of "haha I win" generic knockdown ones.

You will also note the paragraphs before that, which tell you when it is appropriate to go back toward your prior.

See:

> (if you thought the chance was 0.00001%, and Penny thought it was 90%, and you previously thought you and Penny were about equally likely to be right and Aumann updated to 45%, then after three safe elections, you should update from 45% to 0.09%. On the other hand, if Penny thought the chance was 2%, you thought it was 2%, and your carefree friend thought it was 0.0001%, then after the same three safe elections, then you’re still only at 49-51 between you and your friend)

Seriously, in what way are *any* of your statements consistent with the conclusion of the post? The flat earth example clearly gets updated to close to zero because the evidence coming in is overwhelmingly against a flat earth, and flat earthers put high probabilities on the flat-earth prediction. If flat earthers instead predicted "there is at least one locally flat area on earth" (which, yeah, is probably why they believe in a flat earth), not only is that less wrong than their previous belief, it is *actually* how general relativity models space. If you use the heuristic of "false ideas should get discredited maximally" you would *in fact* believe actually wrong things.

Vague models should get penalized for being true in a lot more worlds than the worlds we care about ("X political leader will die" is true in interesting worlds where they get assassinated as well as in uninteresting worlds where they die of old age after they are out of office). It is correct to downrate the informativeness of "safetyist" thinking because it is woolly or vague in this way, but you *cannot* also use the fact that a leader doesn't die literally right at this moment as evidence against the model.

> If "these events are consistent with my model" is expanded to events which are not consistent with reality, then it becomes "my model is correct, despite evidence against it."

I don't know if you mistyped, but a lot of correct models will just have predictions that do not match reality. "X political leader will win Y election" predicts voting leads from approximately 1 voter to the entire eligible voting population, so implicitly *most* of your model doesn't match reality, since there can only be one correct vote count. Your stance is much closer to seeing 5 votes in a row for a certain candidate and saying "haha! Your model has been repeatedly falsified" rather than, you know, actually doing the hard work of updating properly.

Anyway, I feel like the extended digressions in the post itself about accurate Bayesian updating essentially answer all of your objections. I am uninterested in responding more until you explain how your stance that Scott advocates not changing your mind is compatible with all of the statements about accurate updating, which I note *also* contain examples of *correctly changing your mind*.

Expand full comment
L. Scott Urban's avatar

Yeesh, maybe my replies have been coming off the wrong way; every word I hear from you is so combative. It feels like anything I say is immediately dismissed on your end. If it isn't clear, I actually agree with a lot of what Scott says in the post. The Bayesian stuff, as you point out, is super useful. I'm glad he posted this, it made for an interesting read that made me think. It also led me to think about the opposite side of things, an argument which allows a person to continually claim that the end is nigh, regardless of how many times they are proven wrong. Discussing that idea is why I posted the initial comment.

I agree, this reply chain has gone on longer than it should have. Reading back through the article, I'll agree that I've misstated ACX's position in a few key areas, but reading back through our comments, it's clear that you don't really understand my position either. In fact, you seem far too willing to condemn me based on speculation about what/how I am thinking. I hope you understand just how frustrating that is from my end. Even if I'm way off base, being called wrong for thoughts I never thought kind of sucks.

But oh well, such is the lot of those which dither about on the internet. Hope you had a nice Thanksgiving, stranger. Fare thee well, 'pon life's many travails!

Expand full comment
MicaiahC's avatar

> Feels like anything I say is immediately dismissed on your end.

Sorry, I'm really frustrated at comments in general (not yours) being point-missing, and I have maybe been angry at you instead of treating you like a person.

> In fact, you seem far too willing to condemn me based on speculation about what/how I am thinking.

To be clear, I *don't* think any of the dumb examples in my post are what you are actually thinking; they are what I imagined your thinking was like. So I don't think you are wrong or bad, and almost certainly I'm wrong on this. Which is to say:

I don't think you were trying to win an argument in a "haha I win" generic way, or that "false ideas get discarded automatically" is a stance you actually hold. Assuming that it was is a bad way to hold a discussion with strangers in comments sections.

I do think I'm in the wrong here, re: tone for what it's worth.

> It also led me think about the opposite side of things, an argument which allows a person to continually claim that the end is nigh, regardless of how many times they are proven wrong.

Now, if you want some advice about wording, think about how this comes off. Scott is discussing AI risk and saying "No, I am not saying that AI will end the world tomorrow, just that there is an unknown time". And then you make comments rounding off Scott's position to "lalalalala I'm not listening to you". Do you think this is respectful and not dismissive? That you have summarized Scott's position in a fair and balanced way, and that he'd agree if he read your comments? I don't think you "get" to say that and then cry foul when someone substantively and (from their point of view) correctly points out that you misread.

If you think it's frustrating that one person is dismissive and combative please read all the other top thread comments during AI risk discussions and tell me how fair and not frustrating the median comments section is.

It's not right for you to be frustrated because I am frustrated *more*. But by the same token *you do not get to be frustrating because you don't want to be respectful of people who believe in AI risk*.

And regardless of how frustrated I feel, you *did* make a top 1% comment in even admitting you got anything wrong. Which I non-ironically think is impressive and the mark of moral character. So happy holidays too.

Expand full comment
Oliver's avatar

My read of this is that everyone needs to express their views probabilistically, and that a non-probabilistic warning is basically worthless.

You can interpret "will do x" statements as 100% guesses, but that doesn't really work; the person making them just doesn't understand the world, and they aren't being useful.

Expand full comment
Bugmaster's avatar

By the same token, though, we should take failed predictions into account. If a very smart person says, "my model predicts that Castro will die in '95 with 70% probability", and Castro doesn't die, that's a hit against the model. If that same person says "I've updated my model, and now it's '00, 73% probability", and Castro still doesn't die, then we should trust this very smart guy and his model even less. This doesn't mean that Castro is immortal; all it means is that this guy is bad at computing actuarial tables.

Of course, building a model to predict when a person will die is relatively easy, since we have tons and tons of demographic data to rely upon, and we understand the mechanisms involved (i.e. aging, damage due to alcohol consumption etc.). Building a model to predict when the Singularity will happen is much more difficult (especially since no one knows how the mechanisms involved are even supposed to work), so our initial trust in such models should be a lot lower (as compared to the models of human aging). Failed predictions could push our confidence in these models even lower, to the point where they are not worth seriously considering.
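To make that concrete, here is a minimal sketch (toy numbers of my own, not anything from the post) of how trust in the forecaster's model might be updated after each miss:

```python
# Toy Bayesian update on "is this forecaster's model well calibrated?"
# All numbers are assumptions for illustration.
p_good = 0.5  # prior that the model is well calibrated

for year, p_death in [("'95", 0.70), ("'00", 0.73)]:
    # If the model is good, Castro survives with probability 1 - p_death.
    # If the model is bad, treat survival as uninformative (0.5 -- an assumption).
    like_good = 1 - p_death
    like_bad = 0.5
    p_good = (p_good * like_good) / (p_good * like_good + (1 - p_good) * like_bad)
    print(f"Castro survives {year}: P(model is good) is now about {p_good:.2f}")

# Trust falls from 0.50 to roughly 0.38 after the first miss and 0.24 after the
# second -- the model gets discounted without anyone concluding Castro is immortal.
```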

Expand full comment
Level 50 Lapras's avatar

The problem here is that you're assuming your conclusion! Your argument is that "if X is inevitable, then you shouldn't update on it failing to happen". But the whole question under dispute is whether X is actually inevitable or not!

This is most obvious with the AI thing, where "AI will inevitably take over the world and it's only a question of when" is very much not a universally held assumption. But your Russia example is just as bad.

It is not inevitable that Russia will escalate to WW3 because the US is not actually going to keep pushing it until that happens. In particular, your argument is "well if the US invades Moscow, obviously the nukes will fly, so it's just a question of degree". But the US isn't planning to invade Moscow!

With regards to Ukraine, the actual question under discussion is whether sending increased aid to Ukraine would cause Russia to start WW3, and there's no reason to think that is "inevitable". In fact, we have many historical examples of major power proxy wars in the nuclear era that did *not* trigger WW3.

Expand full comment
TotallyHuman's avatar

One of these generalized anti-caution arguments, one which I actually feel quite persuaded by, is that the youth are not being corrupted by new technology. Paper was going to destroy our ability to remember things. Pulp paperbacks, radio, television, video games -- all were going to rot our brains, kill social interaction, cause an epidemic of violence, etc.

So when I hear people talking about how short-form video content is going to ruin attention spans, I am very skeptical. Is this fair? I can think of some technologies which did in fact corrupt the youth, like leaded gasoline, so maybe I should take concerns about tiktok or whatever more seriously. But when the same arguments are made over and over again, and seem to be coming from an instinctive reaction to change rather than an honest appraisal of risks, I find it hard to see those arguments as anything but noise.

Expand full comment
Eremolalos's avatar

I don't think the worry about short-form video is far-fetched, or just another instance of the older generation thinking the younger one is screwed up. I personally find that after a week of having all my reading be online, I find it quite hard to read a book. I keep wanting to click on this or that to check it. I keep wanting to be able to zoom over to a few other possible books to see whether some of them are better. The book in my lap feels static and boring. It takes an hour or so for that to wear off, then I am back to reading and enjoying it the way I always have. I have seen many other people describing the same phenomenon. I have asked quite a few people whether they find it hard to return to long-form reading of print after a period of binging on online fare, and most say yes.

Expand full comment
The original Mr. X's avatar

If you could take the average person from, say, 1950, and show them the world today, they'd almost certainly consider us very corrupt. Granted it's hard to tease out how far this is due to technological advances specifically, and how far it's due to social trends that could plausibly have occurred even if technology had remained static, but in general, I think a lot of anti-corruptionist sentiment is just because corruption gradually gets normalised so we no longer notice it. It's like a social equivalent of the "X is just a right-wing straw man, of course nobody believes this" --> "X is just basic human decency, and I literally have no idea how anyone could oppose it" political treadmill.

Expand full comment
geist's avatar

>I worry that people aren’t starting with some kind of rapidly rising graph for Putin’s level of response to various provocations, for elderly politicians’ dementia risk per year (hey, isn’t Trump 78?)

Seems like an odd thing to say given that Trump is already, as of today, in worse cognitive shape than Biden.

Expand full comment
Robert Benkeser's avatar

Regarding the discussion of Putin using a nuclear weapon, I agree with your logic. However, we need to consider many risks beyond whether Putin himself decides to launch a nuke. If we take his threats seriously and self-deter, then that decision massively increases both the risk of worldwide nuclear proliferation and the risk that other countries will adopt a nuclear saber-rattling strategy. So it’s not at all clear to me that “kicking the can down the road” necessarily reduces the risk of nuclear war for humanity.

Expand full comment
Anton's avatar

I wonder why nobody has brought up Pascal's wager yet.

The argument about escalating against Putin is only valid if the initial probability of him going nuclear is non-zero. But this is what many people with principles and values don't understand about him. He is rational in being a selfish psychopath. But he isn't crazy; he values his own life, safety and wealth much more than his reputation or image. It's just that he is using the reputation and image to accumulate wealth and safety, but he won't go nuclear for anything but a direct nuclear strike against Russia. And again, not because of some moral grounds, but because it would turn away China and India and wouldn't bring him closer to whatever goal he might want to achieve. If nuclear bombing of Ukraine were an option, he would have done it from the very beginning.

Expand full comment
anomie's avatar

> But he isn't crazy, he values his own life, safety and wealth much more than his reputation or image.

Again, he's 72. Why is everyone acting like he has a long, fruitful life ahead of him?

Expand full comment
Jeffrey Soreff's avatar

I cheated and used ChatGPT:

Vladimir Putin's parents both lived into their late 80s:

Father: Vladimir Spiridonovich Putin was born on February 23, 1911, and passed away on August 2, 1999, making him 88 years old at the time of his death.

Mother: Maria Ivanovna Shelomova was born on October 17, 1911, and died on July 6, 1998, which means she was 86 years old when she passed away.

Regarding his grandparents, information is more limited:

Paternal Grandfather: Spiridon Ivanovich Putin was born on December 19, 1879, and died in 1965, reaching the age of 85. He was known to have worked as a cook for Lenin and Stalin.

Paternal Grandmother: Details about her are scarce, and her name and lifespan are not widely documented.

Maternal Grandparents: Similarly, there is limited public information about them, and their ages at death are not readily available.

Please note that precise details about his grandparents are not extensively covered in public records or official biographies.

So, figure on Putin having 15 more years or so (huge error bars, of course)

Expand full comment
anomie's avatar

Oh great, he has a decade and a half to watch his body rot away, knowing that the world will brand him a failure and a disgrace the moment he dies.

...You guys are seriously overvaluing living.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks for the response! It all depends on his state of health. My parents are about the age that Putin's parents lived to, and they still enjoy music, shows, walks, their apartment, and their cats. If his health is bad, you may be right. I know that I don't know how healthy or not Putin is. Do you have better information?

I also don't know whether or not he cares what the world will think of him after he dies. He will, of course, be past caring at that point. Personally, I expect to be forgotten instantly once I die, and I, likewise, will be past caring at that point.

Expand full comment
Humphrey Appleby's avatar

Arguably the threshold for an existential threat to Putin is far lower than that for an existential threat to Russia. A sufficiently embarrassing conventional defeat might be an existential threat to Putin (in the sense of likely to get him couped and killed). A bullet to the head will render him just as dead as a nuke to Moscow. How confident are you that Putin will accept probable death-by-bullet over rolling the nuclear dice? Remember you are betting not only your own life but that of everyone you know.

Expand full comment
JamesLeng's avatar

What's the scenario where actually pushing the big red button makes him less likely to be assassinated? When a column of Wagner Group tanks headed for Moscow, with no credible conventional force in position to stop them, Putin negotiated rather than escalating.

Expand full comment
Humphrey Appleby's avatar

The one where Putin drops a nuke on Ukraine and the West declines to make a nuclear response. In this scenario Ukraine either abruptly sues for peace, or turns into radioactive glass. Of course, if the West makes a nuclear response, we all die (including Putin). But if he's going to die anyway...

Escalating past the nuclear threshold is playing chicken, but are we really going to risk New York, Chicago, Houston etc over Kiev? I think we'd be crazy to do so. Especially since we have no treaty commitments to Ukraine. We've promised to die for London, and also for Tallinn (I think the latter was a mistake but its done now), but we've never promised to die for Kiev.

As for the Wagner scenario - what was Putin going to do? Nuke Moscow to save it? Wagner didn't exactly have a capital city (or any population centers) to hold hostage, now, did it? I think his not going nuclear over Wagner tells us very little. His not going nuclear over the recent small scale Ukrainian invasion does tell us something, but maybe only that he was confident said invasion could be conventionally defeated, and that his grip on power was for the moment secure.

Bottom line, it's dumb for us to unnecessarily play chicken.

Expand full comment
JamesLeng's avatar

> But if he's going to die anyway...

Nuking Ukraine only improves his survival odds in that scenario if it convinces whichever internal faction was planning to kill him for losing the war to back off, with such certainty that it outweighs the increased external risk of provoking everybody else's "second strike" reflexes.

I suspect he would be far more confident in his ability to fight off conventional assassination, talk his way out of a show trial, or flee the country before an angry mob caught up with him - just another day on the job at the KGB.

Expand full comment
Humphrey Appleby's avatar

By the same logic, the Kaiser would never authorize unrestricted submarine warfare, Imperial Japan would never respond to the US blockade with war, etc. If you put people in a corner where they feel their only choices are `death today, or death tomorrow,' they generally choose `tomorrow' and hope that maybe the horse will learn to sing. As for fleeing the country - where would he go, that he wouldn't end up facing trial for war crimes? North Korea? (And how does he know Kim won't hand him over to his successor in Moscow?). Nah, I don't think he has any plausible route to a peaceful retirement. It's win or die. Or at least avoid defeat long enough to die of natural causes.

Best avoid putting him in a position where he feels it's `death today or big red button and maybe death tomorrow.' Bleeding Russia by defending Ukraine to the last Ukrainian is fine. In fact, our current policy seems pretty well calibrated to maximize our geopolitical position while minimizing risk.

Whether the status quo is better for Ukraine than a negotiated solution is less clear to me, but we're not forcing them to fight. They have agency. Or at least Zelensky does. To what extent he represents the will of the Ukrainian people is not clear to me. And of course `the Ukrainian people' are not a hive mind - probably there is a distribution of opinions on whether to continue to fight. But Zelensky is the one whose opinion matters.

Expand full comment
JamesLeng's avatar

The Kaiser could be reasonably certain that his fleet of submarines still existed and was in good working order. If the officer in charge of some missile silo had replaced warheads with cleverly arranged contaminated garbage, spent half the maintenance budget on bribing inspectors to overlook any resultant oddities, then pocketed the other half, he certainly would not have reported that to Putin.

Seeing how poorly the conventional army performed after everyone assured him it was up to the task, Putin would naturally be wondering whether his elite cadre of professional thieves and liars had done the exact same thing to the nuclear arsenal. If they did, and he pushes the big red button but it's a dud... well, he'd better hope it's the sort of dud he can pass off as "just testing the missile with a conventional warhead" again. Visibly trying and failing to deploy nukes gets him all the costs of that desperation strategy, and more, with none of the reward.

Expand full comment
dmm's avatar

Isn’t the fact that dementia gradually worsens relevant, invalidating that example?

Expand full comment
Eremolalos's avatar

I actually do not think it is inevitable that AI will become far more powerful than it is now or, as Scott seems to imply, that AI itself will become dangerous, as opposed to just being a tool that can be used to cause harm. (It already is dangerous in that way.) Here are 3 paths I see to that not happening:

1. It is possible that the thing we are calling AI is mostly maxed out as of 2024, and will not become much smarter and more powerful. We will try various clever things to make AI able to do crucial things it can barely do right now. For example, we can tweak its training or add on modules to enable it to reason or brainstorm in a free-form wide-ranging way, not via an algorithm it’s been taught; and to have knowledge of the world, as experienced through the 5 senses, knowledge that is integrated with its knowledge about words and sentences; and to have plans and goals that are internally generated. But maybe those tweaks won’t work well, and won’t make much difference. Instead, it may happen that as technology advances, we will choose another route to building something that empowers us to be smarter than we are now. Seems to me that a promising route might be to link some of the capabilities of an AI (huge memory, much faster than us at some things, much better than us at abstracting patterns from huge data sets) to a biological brain, which would then supply the ability to do the things we can’t figure out how to get AI to do. So link it to a human brain — or conceivably the brain of, say, an octopus. But if that is the route that succeeds, I don’t think AI would be the natural name for what we’ve got. We’d see the thing we’d made as a hybrid, and call it something like CEBI (cyber-enhanced biological intelligence). So while the thing would probably be dangerous, and might kill us all, it would not be accurate to call that an instance of AI doing us in.

2. I am sure that as tech advances, we will create things that have greater capacity to harm and kill us than the things we have now. But I don’t think one can be sure that tech will keep advancing. Something might happen — a plague, a prolonged no-holds-barred war — that knocks our progress way back, so that we have to spend 100 years recovering. And the something might recur over and over, so that the technology winter lasts thousands of years. Or the something might just kill us all off by the 4th or so time we cycle through it. I don’t think that’s so unlikely that we should round the probability down to zero. Many species have been knocked way back and had to recover, many have gone extinct. Ours can too.

3. A third possibility is that by the time we develop superintelligent AI *we* will be so different that the people of 2024, predicting what will happen to the human race, would not consider these future beings to be members of the human race. In that case, too, we can say that AI never destroys us (instead, it destroys those weird non-human future descendants). Expanding on that a little: In 500 years our descendants may look pretty different, as a result of genetic manipulation. And they may live in a way that to us is so incomprehensibly weird that we feel no connection with them. Maybe none of them ever engages with any of the others, but only with some digital realm. Maybe we will all engage in science-based cannibalism, as a result of the discovery of the great benefits of eating human frontal lobes. Perhaps once this was discovered the possibilities were so wonderful that everyone’s ethics and everyone’s sense of what counted as a person just morphed into a shape that made the eating of human brains no big deal. While I’m sure the exact details I made up are unlikely, I don’t think the possibility is tiny that our species in 500 years will be so unlike us that we would not think of them as human.

Expand full comment
Donald's avatar

> It is possible that the thing we are calling AI is mostly maxed out as of 2024, and will not become much smarter and more powerful.

At most in the sense that current aircraft are mostly maxed out, despite nuclear having a Way higher energy density. So in the sense that there are fundamentally more powerful techniques out there, but you need a major redesign to find them.

> But I don’t think one can be sure that tech will keep advancing. Something might happen — a plague, a prolonged no-holds-barred war — that knocks our progress way back, so that we have to spend 100 years recovering.

I will totally grant that.

> Many species have been knocked way back and had to recover, many have gone extinct. Ours can too.

"It's possible something else kills everyone before AI has a chance to" is not exactly a reason not to worry about AI.

> I don’t think the possibility is tiny that our species in 500 years will be so unlike us that we would not think of them as human.

Arguably another version of 2.

Expand full comment
Richard Sprague's avatar

Some of these probabilities are distorted by ill-formed questions. "It would be surprising if AI never becomes dangerous" is presupposing the idea that "dangerous" means something like Skynet. Arguably AI is already dangerous to, say, that Air Canada guy whose AI Chatbot made bad promises. Often the future, especially the far future, doesn't so much answer questions as change their relevance. "When will the sharp rise in horse manure overtake New York's ability to dispose of it" became irrelevant, not because somebody heeded the warnings, but because of automobiles.

Expand full comment
Donald's avatar

It would be surprising if AI never becomes what you might call skynet-dangerous.

Let's be more specific. It would be surprising if human brains were anywhere near optimal at any kind of reality understanding and controlling activities of importance in high tech contexts.

Humans got in control of the earth because of our higher intelligence. And are largely held in that position by our higher intelligence. If humans mind-swapped with dogs, the humans with dog bodies would soon be in charge over the dogs with human bodies.

So, what does a world where this problem becomes "irrelevant" look like?

Steam engines were a thing when there was a "When will the sharp rise in horse manure overtake New York's ability to dispose of it" problem. So the solution was reasonably foreseeable. (Although electric motors were also a thing, so electric cars were also a possibility)

Still, foreseeable in the sense that what really happened was among a short list of plausible options.

Can you describe any future where AI risk becomes irrelevant?

Expand full comment
Jeffrey Soreff's avatar

>Can you describe any future where AI risk becomes irrelevant?

Suppose that both something else became the Big New Thing and scaling of LLMs really is plateauing. For concreteness, say the Big New Thing is that someone _does_ find a room temperature superconductor, and all of the excitable investors and newly-minted Ph.D.s swarm to that for the next 20 years, while GPT-o1 languishes as a still-hallucinates-too-much "dusty deck". LLMs in 2050 might have all of the interest and concern that analog electronic controllers have.

Expand full comment
Donald's avatar

This feels like a 100-year-old Castro.

In this world, sure the hype cycle has moved on. But the hype cycle isn't that important anyway.

And sure, there was a lull in LLM tech. But this is less a world where AI is irrelevant. More a world where nothing really smart has been invented yet.

Sooner or later it's likely someone finds a way to seriously improve on 2024 LLM levels.

In this world, ASI is still physically possible. And likely to happen eventually.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>In this world, sure the hype cycle has moved on. But the hype cycle isn't that important anyway.

Mild disagree. The hype cycle both reflects and, to a degree, directs where both financial investment and the attention of bright people are concentrated.

>Sooner or later it's likely someone finds a way to seriously improve on 2024 LLM levels.

>In this world, ASI is still physically possible. And likely to happen eventually.

Maybe yes, maybe no.

Re AGI : Well, _we_ are physically possible, and we are generally intelligent, so we are an existence proof for physical systems with human levels of intelligence. I'll also make the stronger claim that, since we are neural networks, there must be some way of building a neural network that has a human level of intelligence.

BUT, that doesn't mean that current LLMs are close enough to the right interconnection pattern to achieve AGI. Amongst other limits, if I understand correctly, they are generally feed_forward_ systems, while our brains also contain feedback connections.

A lot depends on whether we are currently barking up the right tree, or close to the right tree.

One limitation that specifically LLMs have, is that their training process optimizes for predicting the next token, _not_ for generating the right answer (other machine learning endeavors, e.g. AlphaFold, don't have this problem). Now, predict-the-next-token was a _BRILLIANT_ way to get enormous volumes of labelled training data. And it isn't _that_ different from what humans do in coughing up the desired answers to problems... But it still isn't fundamentally training for correctness, and, even with the very impressive things ChatGPT can do, it still generates lots of wrong answers.
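For concreteness, a minimal sketch of what that objective looks like (generic next-token cross-entropy over random stand-in tensors; not any real model's training code):

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
logits = torch.randn(seq_len, vocab_size)          # stand-in for model outputs
tokens = torch.randint(0, vocab_size, (seq_len,))  # stand-in training text

# Position t is scored only on how well it predicts token t+1;
# nothing in the loss asks whether the overall answer is "right".
loss = F.cross_entropy(logits[:-1], tokens[1:])
print(loss.item())
```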

We might have another AI winter. I hope not. I'd like to have a nice quiet chat with a real HAL9000. But if the reliability can't be pushed up in more-or-less the architecture and training process we have, we might be stuck with technology that can replace some jobs, but is not true AGI.

Re ASI: At least for AGI, we are the existence proof. For ASI, the best we can say is that structures analogous to human organizations are existence proofs of that level of capability - which is indeed better than individual humans' ability, but we just don't know how much further that can be pushed - or if the returns to intelligence itself saturate at some nearby level.

Expand full comment
Donald's avatar

> A lot depends on whether we are currently barking up the right tree, or close to the right tree.

Imagine LLMs are totally 100% the wrong tree. (Note there can be multiple right trees; the brain needn't be the only design that works.)

The right tree still exists. It's just not related to LLMs.

> if I understand correctly, they are generally feed_forward_ systems, while our brains also contain feedback connections.

Brain signals feed forward in time. So think of each slice of an LLM as an instant snapshot of a brain, feeding forward into the next instant.

> but we just don't know how much further that can be pushed - or if the returns to intelligence itself saturate at some nearby level.

We can be pretty sure that the returns to intelligence go at least somewhat beyond human. Looking at human minds there are various things that seem blatantly sub-optimal. Like neuron signals traveling at a millionth of light speed, or how we totally suck at arithmetic. Or brains needing to fit through birth canals and having only 20 watts.

Expand full comment
Jeffrey Soreff's avatar

>Imagine LLM's are totally 100% the wrong tree.

Ok.

> (Note there can be multiple right trees, the brain needn't be the only design that works)

True!

>The right tree still exists. It's just not related to LLM's.

Many Thanks, agreed! But, in that case, the successes of LLMs tell us nothing about where the right tree is, or how hard it is to climb. This is essentially restarting from scratch. I doubt that this is the case, but we can't really rule it out till we are within clear sight of AGI, in the sense that the remaining problems are _purely_ engineering ones with clearly bounded effort.

>Brain signals feed forward in time. So think of each slice of a LLM as a instant snapshot of a brain, feeding forward into the next instant.

But if we are reusing the same network for multiple passes, then backpropagation has to take into account the effect of repeated use of the _same_ neuron in all of the passes. E.g. one can't increase a weight for a pass 1 use of a neuron while also decreasing the same weight for a pass 2 use of the same neuron. I don't know the details, but I believe there are training methods for such architectures, and that they are more complex and expensive than plain backpropagation.
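A tiny sketch of that constraint (my own toy example; this gradient accumulation across passes is essentially what backpropagation-through-time does for recurrent networks):

```python
import torch

torch.manual_seed(0)
W = torch.nn.Parameter(torch.randn(4, 4))  # one weight matrix, reused twice
x = torch.randn(4)

h1 = torch.tanh(W @ x)    # pass 1 uses W
h2 = torch.tanh(W @ h1)   # pass 2 reuses the same W
h2.sum().backward()

# W.grad is the *sum* of the pass-1 and pass-2 contributions; there is no way
# to nudge W up for its pass-1 use while nudging it down for its pass-2 use.
print(W.grad)
```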

>We can be pretty sure that the returns to intelligence go at least somewhat beyond human.

Yes, and I think we have examples of this in human organizations which are able to do things that no individual human can do.

>Looking at human minds there are various things that seem blatantly sub-optimal. Like ..., or how we totally suck at arithmetic.

Agreed.

One other hypothetical case that I should have mentioned earlier in the discussion:

Another way that progress could go is if we achieve more and more of human capabilities, but the economic cost of using the resulting model winds up being _greater_ than the cost to use a human. For all our progress in semiconductors, the cost of propagating through a simulated neuron is greater than the cost of a firing in a biological neuron. Conceivably, we could wind up with full AGI, but where the cost renders it of only academic interest.

Expand full comment
None of the Above's avatar

One possibility would be a big war that wrecks the world's chip fabs and major datacenters and plunges the world into a massive decades-long recession, from which (thanks to depleted natural resources, declining population, or some such things) we never really recover. That would potentially stave off AI risk concerns for a very long time, maybe forever. Though it doesn't exactly sound like a *good* outcome....

Expand full comment
Joel Long's avatar

I think your AI version of this elides the possible argument that the current *training methodology* will not scale to AGI even with arbitrary compute, which is different from asserting no technology with arbitrary compute reaches AGI. Given the recent discussion of plateaus I think this is an important distinction.

Expand full comment
dogiv's avatar

Not that it makes much difference, but I think you misread the article on dementia rates. The numbers given are diagnosis rates per thousand person-years as a function of age, and the diagnosis rate increases linearly. For an 80 year old it would be about 70 per 1000, or 7% annually, if I'm doing the math right.
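For what it's worth, a quick sanity check of that conversion (my own arithmetic, treating the published rate as a constant hazard):

```python
import math

rate = 70 / 1000                    # ~70 diagnoses per 1000 person-years at age 80
annual_prob = 1 - math.exp(-rate)   # treating the rate as a constant Poisson hazard

print(f"naive reading: {rate:.1%}")        # 7.0% per year
print(f"as a hazard:   {annual_prob:.1%}") # about 6.8% per year
```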

Expand full comment
Yash Bharadwaj's avatar

Beautiful Blog 🫶🏻

Expand full comment
Matto's avatar

The phenomenon you're describing is well known in some branches of engineering. Here's an excellent Dan Luu post about "normalization of deviance": https://danluu.com/wat/

A more formal take is Rasmussen's model of safety (https://medium.com/10x-curiosity/boundaries-of-failure-rasmussens-model-of-how-accidents-happen-58dc61eb1cf), which points out that safety margins in systems are most often invisible, which makes it a slippery slope to borrow from those margins to improve other margins (e.g. productivity, profits) until you run out and your factory or exabyte-scale storage system explodes.

Interesting to see this applied to epistemology.

Expand full comment
Wolf Solent's avatar

Isn’t this just a negative version of the inductive fallacy? “X will continue NOT to happen because it’s always failed to happen in the past” vs the more standard “X will continue to happen because it’s always happened in the past.”

Expand full comment
RandomHandle's avatar

Regarding the probabilities with Putin: after three weapon escalations fail to cause nukes, the probability that Putin is non-escalatory is slightly higher, but the probability that the next escalation triggers a nuke is higher, since within the 48.5% chance that Putin is escalatory, the odds that we're getting close to the crossed line will have increased.
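A toy version of that update (the 48.5% prior is the figure quoted above from the post; the ten evenly spaced possible red lines are my own simplifying assumption, so the exact numbers are illustrative only):

```python
P_ESCALATORY = 0.485  # prior quoted above
N_STEPS = 10          # assumed number of possible escalation steps

def after(k):
    # If Putin is escalatory, assume his red line is uniform over the N steps.
    p_survive_given_esc = (N_STEPS - k) / N_STEPS
    post_esc = (P_ESCALATORY * p_survive_given_esc) / (
        P_ESCALATORY * p_survive_given_esc + (1 - P_ESCALATORY)
    )
    p_next = post_esc / (N_STEPS - k)  # chance the *next* step crosses the line
    return post_esc, p_next

for k in (0, 3):
    post, nxt = after(k)
    print(f"{k} safe escalations: P(escalatory) ~ {post:.2f}, P(next step nukes) ~ {nxt:.3f}")

# The posterior that Putin is escalatory drifts down (roughly 0.49 -> 0.40), while
# the chance that the *next* escalation is the one that crosses the line creeps up
# (roughly 0.05 -> 0.06) -- both directions at once, as noted above.
```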

Expand full comment
Dan Megill's avatar

I agree with the individual points, but Scott sidesteps the question of motive. If I were to tell someone to "take warnings of caution for what they are," I'd end that sentence with "a person advocating for their own interests."

The general isn't really saying "Beware of Putin", he's saying "give me resources." A politician isn't saying "Biden has Dementia", he's saying "vote for me." A doctor isn't saying "Don't take 100mg" he's saying "Please don't sue me."

Finding an informed individual who's indifferent to whether or not I vote for Biden, and is giving a purely disinterested opinion on his health, is vanishingly rare.

Expand full comment
le raz's avatar

Re Putin: why on earth would he nuke anyone? It wouldn't serve his interests, or the interests of the Russian oligarchs. It is in his interest to make the threat, but not to carry it through.

If there's one thing that the invasion of Ukraine has demonstrated, it is that Russia has a borderline incompetent military that is in no way ready for World War 3.

The west isn't served well by being cowards. It's always the temptation, but cowardice is what actually causes world wars.

Expand full comment
Humphrey Appleby's avatar

Right. Like World War 1, which happened because everyone was too cowardly to fight.

A little knowledge is a dangerous thing. Not every year is 1938. Sometimes it's 1914. Or 1962. The trick is knowing which category 2024 is in (or maybe it's in a different category altogether).

Expand full comment
Chastity's avatar

The invasion of Ukraine shows Russia is more like WW2-Germany (engaged in constant escalations and annexations until stopped) than WW1-Germany (concerned about its own national security and only taking military actions to protect itself). Ukraine in 2022 presented zero military threat to Russia. No, there was no threat of NATO expansion into Ukraine, because NATO membership requires no ongoing territorial disputes, and Ukraine had an ongoing territorial dispute with Russia (Crimea). If you modeled Russia as WW1-Germany, the logical conclusion was that Putin would not invade in 2022, because he had already stopped NATO expansion on his periphery via various frozen conflicts. But Putin did invade in 2022, because Russia is not WW1-Germany.

Expand full comment
Humphrey Appleby's avatar

Maybe, or maybe it's like 1870 Prussia (Franco Prussian war). Or like 1846 America (Mexican American war). Or like 1853 Russia (Crimean war). Or like 1979 Russia (Soviet invasion of Afghanistan).

Seriously, there is more than one war in history that one can make analogies to! Blindly mapping everything to 1938 is lazy, a form of Godwin's law, and imo anti-credible.

Expand full comment
Chastity's avatar

Okay, what analogous behavior on Putin's part suggests he *isn't* interested in territorial expansionism? Because he has literally invaded a sovereign country and annexed large chunks of territory by military force, which represented no plausible security threat.

Expand full comment
Humphrey Appleby's avatar

Hitler is hardly the only person in history to ever invade a sovereign country, nor is he the only person to annex large chunks of territory by military force, including from countries that represented no plausible security threat. To stick to the examples I gave above, 1870 Prussia annexed Alsace-Lorraine, 1846 America annexed Texas/New Mexico/Arizona/California, 1853 Russia tried to annex parts of Romania, 1979 Russia invaded and conquered Afghanistan... `Invading a sovereign country' and `annexing large chunks of territory by military force' is basically every war in history, prior to 1945.

The meaningful question is not `is Putin interested in territorial expansionism,' it is whether he can be deterred from crossing *our* red lines without direct military confrontation between NATO and Russia. Because direct military confrontation between NATO and Russia has a very high risk of ending up with all of us dead. And I don't think we have any red lines or critical interests in Ukraine.

Current Western administrations, by the way, seem to be modeling 2022 Russia as similar to 1979 Russia, and responding in much the same way (by supplying the guys they've invaded with weapons, but not by directly declaring war). I suspect this is roughly the right model, and `Germany 1938' is the wrong one. We can debate at the margin whether we should be sending more weapons or less, but if you think we ought to be waging war on Russia (because Putin is Hitler!) you are imo insane.

Expand full comment
Chastity's avatar

> `Invading a sovereign country' and `annexing large chunks of territory by military force' is basically every war in history, prior to 1945.

Hm, so just a (good!) norm that's lasted 80 years and been a major contributor to the long-term global peace.

> We can debate at the margin whether we should be sending more weapons or less, but if you think we ought to be waging war on Russia (because Putin is Hitler!) you are imo insane.

If Hitler had nukes we would also be foolish to go to direct war with him, so that doesn't make sense as a counterpoint.

There are in effect two ways to deal with potential conflict with an adversary: either we want to deter them (by increasing our military threat), or we want to avoid provoking them (by decreasing our military threat). Avoiding provocation works when the other party basically wants to sit around and not do much, such as the DPRK. It does not work when the other party continuously works to increase their power by constantly militarily intervening in their neighbors, like Russia has been doing for over a decade now.

Expand full comment
anomie's avatar

Everyone thought Ukraine would fall immediately as well, and look how that turned out. There's nothing to be lost by betting it all when you have nothing to lose.

Expand full comment
neco-arctic's avatar

While the general case seems sound, I don't think a single one of your specific points is valid. Nobody actually said Fidel Castro was invincible, for instance.

I don't think anyone would actually behave like you're describing in response to a new, untested drug. Maybe some people are very stupid but it is widely known that drugs have the potential to kill you if you take too much.

Russia will, of course, use nuclear weapons in response to some level of escalation. However, that level is actually clearly spelled out in the official Russian nuclear policy, which Ukraine is pretty much never going to be in a position to violate. Even Putin knows that the political ramifications of nuclear first-use would be suicidal - a limited use would likely cause his allies to abandon him and trigger direct NATO intervention in the war. Countervalue use would likely trigger direct counterforce nuclear intervention - probably from NATO, maybe from anyone else who doesn't want the nuclear taboo broken (Read: Everyone). It is not worth doing this to deter weapons supplies. What Russia is actually doing to deter weapons supplies from the west is engaging in covert operations against dual-use infrastructure in Europe, which is dickish but a lot more reasonable.

I do not think Joe Biden has dementia. The speech he gave after the election concluded was very eloquent and had a lot less stuttering than usual. It is not possible to cure dementia right now, so I think that his pre-existing speech impediments were probably just exacerbated by the stress of the campaign trail. Once that was taken away, a load was lifted off his chest. I don't know why that wouldn't have been the case in 2020. Maybe some sort of classified thing was also taking place that we don't know about which was stressing him out, or maybe he was putting the whole thing on to help Donald Trump for some reason.

With AI, I think that my main objection is with the implication that powerful AI will necessarily be malevolent. The orthogonality thesis says that intelligence and values are decoupled from one another, so I don't really see why we should automatically expect a more powerful version of existing AI systems to take a sharp left turn on us. I can go into depth but that is my main objection to what I think is being implied here. Also, we don't really know the limits of where this paradigm will get us, and it's very possible in my opinion that you can't get much further than capability analogous to a long-lived immortal human being, on account of the fact that an AI model wouldn't have any superintelligences to model itself off of, only us. I don't think this, but I think it's a fairly reasonable possibility.

Expand full comment
JamesLeng's avatar

> Russia will, of course, use nuclear weapons in response to some level of escalation.

Unless they haven't got any working nukes left, which seems like a realistic possibility given the lack of tests, and how much other soviet legacy tech was hollowed out by kleptocrats.

Expand full comment
neco-arctic's avatar

Do you want to bet your life on it? How about 50 million lives?

Thankfully, the situation in which the nukes may actually be used is basically impossible anyway, so it's really not something we should worry too much about.

Expand full comment
John Schilling's avatar

This is not a realistic possibility. You don't need tests involving nuclear explosions to maintain a reliable stockpile of legacy nuclear weapons. Lots of Soviet legacy tech was *not* completely hollowed out by kleptocrats, e.g. at least 90% of their spaceships and 40% of their tactical missiles work, and nuclear weapons would be at the very top of the "don't let the kleptocrats mess up this stuff" list.

If you want to say that only 60% of Russian nuclear weapons are likely to work in a crunch, then yeah, that might be a pretty good estimate. If you want to say "most...", then that's a bit far for my taste but not wholly implausible. If by "most" you mean "99.9%", then no.

Expand full comment
Korakys's avatar

If we still have hackable systems and terrorists in 2500 then something went quite wrong.

It's much more natural to speculate about possible new threats than possible new defences.

Expand full comment
Darkside007's avatar

The chance of Putin going nuclear over Ukraine is zero. Why? Because it would be the end of the Russian state and of Putin. And a Putin insane enough to commit suicide over Ukraine would necessarily be insane enough not to stop there, and would push until someone forced him nuclear anyway. So whether or not Putin would go nuclear, the most rational response is to assume he will not.

As for Biden, I'll note that you *didn't* update to "people saw something I didn't" after they were proved correct, but instead "They were wrong every other time and were right entirely by accident."

As someone who believed Biden was losing his cognitive ability in concrete ways, there were more than sufficient signs over the past Presidential term to make that a straightforward assumption. And, as a reminder, AFTER Biden collapsed onstage with "Anyway, we beat Medicare", the same people who said he was as sharp as ever continued to lie and say he was still razor-sharp in private, even as all public events and questions were shut down and he quit the campaign after winning the nomination. BUT HE IS STILL OFFICIALLY PRESIDENT. This administration is straight-up Weekend at Bernie's - who is even in charge right now? Harris? Jill Biden?

Expand full comment
Humphrey Appleby's avatar

>> a Putin insane enough to commit suicide over Ukraine would necessarily be insane not to stop there and to push until someone forced him nuclear anyway

He's an old man. Keep him bogged down in Ukraine for ten years, and he'll probably be dead of natural causes. But in the meantime, best to avoid putting him in a situation where he feels he has a choice between `dying today, or pressing the button and maybe dying tomorrow.'

Our current policy seems pretty well calibrated. Significant escalation seems like an unwarranted risk.

Expand full comment
Melvin's avatar

> He's an old man. Keep him bogged down in Ukraine for ten years, and he'll probably be dead of natural causes

"They predicted Putin would die soon in 2025, 2030, 2035, 2040, 2045 and 2050 and were wrong each time..."

If he lives as long as Castro he'll still be around to bug us in 2042, and if he lives as long as Jimmy Carter he'll be around in 2052.

Expand full comment
David Piepgrass's avatar

How would it end the Russian state?

If Putin nukes Ukraine, it might well be the beginning of the end of the Putin regime (as even China would likely shun it), and the end of nonproliferation as a viable policy. But nobody will be willing to start a nuclear exchange with Russia in response. You can expect a very strong conventional response from the West that would likely push Putin out of Ukraine, though.

Expand full comment
Stephen Pimentel's avatar

I agree with Scott that we should reject the Generalized Anti-Caution Argument. But it simply does not follow from this that we should accept any given Caution Argument.

The Caution Argument follows a particular form, of which Scott has given several examples. But we can construct an argument of this form for damn near anything. For example, we could construct a Caution Argument that maybe Mexico is going to invade the U.S. soon, and we should heavily invest in preventing this. It would be perfectly reasonable to reject this argument, based on particular knowledge about Mexico.

Likewise, whether to accept or reject a Caution Argument about AI should turn on our particular knowledge and beliefs about AI, not on our rejection of the Generalized Anti-Caution Argument. To do the latter would be bogus psychologizing, not anything like a sound argument.

Expand full comment
Jeffrey Soreff's avatar

>First, in discussion of the Ukraine War, some people have worried that Putin will escalate (to tactical nukes? to WWIII?) if the US gives Ukraine too many new weapons. Lately there’s a genre of commentary (1, 2, 3, 4, 5, 6, 7) that says “Well, Putin didn’t start WWIII when we gave Ukraine HIMARS. They didn’t start WWIII when we gave Ukraine ATACMS. He didn’t start WWIII when we gave Ukraine F-16s. So the people who believe Putin might start WWIII have been proven wrong, and we should escalate as much as possible.”

>There’s obviously some level of escalation that would start WWIII (example: nuking Moscow). So we’re just debating where the line is. Since nobody (except Putin?) knows where the line is, it’s always reasonable to be cautious.

Agreed. I take a rather fatalistic view of this particular risk, because there is one special aspect to it:

Putin _actively conceals_ where his actual red line is. Yeah, yeah, he is motivated to do this by the logic of brinksmanship, but since he _does_ claim that various escalations which have already been passed allegedly "were" his red line, we are _forced_ to do more guessing than in e.g. your medical example.

BTW, why no earthquake example? As in the mortal Castro example, the odds go up every year, and we know (with very good probability) that _eventually_ the San Andreas will slip once again.

Expand full comment
Ryan W.'s avatar

With Biden, I think we had two games going on. The first was 'How likely is Biden to be dementing.' And the second was: 'how likely are people "in the know" to honestly relate the early onset of symptoms?' When it comes to insider information, recognizing that the Republicans were throwing everything at the wall hoping it would stick really didn't inform us about whether or not Democratic insiders would run a candidate who showed early signs of dementia. (Which, in retrospect, they must have seen before we did.) It was a mistake, in retrospect, to trust Democratic insiders. But I can appreciate the choice. Given an event which is *gradual in onset* and given that event is one where insiders will see warning signs before the general public, I really do think we can contemplate the possibility that the event in question is not likely *within a given near-term timeframe* if the mechanism of failure is known and we trust the insiders.

No smoke in 2024 should translate into no fire in 2025.

i.e. This drug will be hepatotoxic eventually. But since liver enzyme tests show zero signs of toxicity at the current dose we're probably not getting close to a lethal hepatotoxic dose. So we can safely increase the dosage by 10mg till we see the early signs of liver failure. (I acknowledge that there are many toxins that have very sharp mortality cliffs or kill without being hepatotoxic or take long enough to act that once warning signs appear the course of the disease leads inevitably to catastrophe. But this is a simplified model.)

Regarding Putin, the use of tactical nukes is certainly a fairly binary event and one with relatively little advance warning. But with Ukraine there's the added complexity that the West is playing a game of chicken with Putin. In a game of chicken, broadcasting caution is a potentially poor strategy. A willingness to take your steering wheel, detach it, and throw it out the window in full view of the opposition is a potential strategy that merits its own impacts on your adversaries predictions.

With AI, then, there's the question of "What will the first signs of an AI apocalypse look like? How much warning will we get. And will we be in a position to heed that warning and change direction when we receive it. Or is some event, like the military use of AI guided drones, an irrevocable turning point."

Since I still don't know what an AI apocalypse would look like, I'm not sure what to avoid. "Terrorists manufacturing bioweapons" seems like a survivable event, and worth it if we get faster vaccine development in exchange and other medical advances. AI hacking certain systems implies that AI could also defend those systems and isolate vulnerabilities, bringing us back to the much hated "race to AGI."

Part of my frustration with this topic is that "AI apocalypse" is really a lot of different events. Just like "Humans" is a lot of different events. Each event type would need to be broken out and explored separately and their warning conditions weighed on their own individual scale.

Expand full comment
Melvin's avatar

I think that this is just a subspecies of the very popular general argument "People have been wrong in the past, therefore you are wrong now".

Other popular subspecies include "Nobody believed [famous genius] either".

I don't mean to denigrate the argument completely. The fact that people have been wrong about something vaguely analogous in the past should sometimes prompt you to consider whether there's an analogous mistake you're making or a systematic bias that might cause you to think in a certain way. If there are frequent rumours that so-and-so is in ill health and dying, and he never dies, then you should consider whether his enemies are constantly spreading bullshit rumours when you hear a rumour that he's sick and dying.

Expand full comment
GasStationManager's avatar

(disclaimer: self plug)

Broadly agree that AIs will eventually become dangerous; the only question is when. I think restrictions based on size are only part of the answer. Not saying we shouldn't do it.

I think it is likely that there are multiple, distinct ways to achieve AGI. (Here's a handwavy Bayesian argument. Humanity is an existence proof that general intelligence (GI) is possible. The next AGI would be the second instance of GI. Conditioned on AGI being possible, how likely is it that there are exactly two ways to achieve GI?)

I would say this is true even among the approaches that people are pursuing right now.

So it is a race, where winner takes all. Some approaches are safer than others. How do we make sure the safe approach gets there first? This is a mechanism design problem.

Here's a thought experiment: in the future where AIs become potentially dangerous, suppose we have not yet given them the authority to autonomously execute actions; they can only give advice. Does there exist a mechanism under which we can be sure the AI has done the work correctly according to our prompt, and therefore can safely follow its advice? If such a mechanism exists, we can apply it now, or start building the infrastructure so that it is ready when the time comes.

I have proposed an answer (https://gasstationmanager.github.io/ai/2024/11/04/a-proposal.html), for the restricted but important domain of code generation.

The general case will be harder, but people have done some research on it (https://arxiv.org/abs/2405.06624).

Even if you disagree with my specific approach, it can be useful to try to come up with your own answer to the above hypothetical.
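As a crude illustration of the shape of such a mechanism (a toy sketch of "verify before you trust," not the specific approach in the links above): don't act on AI-generated code until it has passed an independent, human-written check.

```python
import random

def untrusted_ai_sort(xs):
    # Pretend this implementation came back from a model.
    return sorted(xs)

def independent_check(fn, trials=100):
    # Human-written spec: output must be a non-decreasing permutation of the input.
    for _ in range(trials):
        xs = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
        out = fn(list(xs))
        is_permutation = sorted(out) == sorted(xs)
        is_ordered = all(a <= b for a, b in zip(out, out[1:]))
        if not (is_permutation and is_ordered):
            return False
    return True

print("accept advice" if independent_check(untrusted_ai_sort) else "reject advice")
```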

Expand full comment
BxM11's avatar

I think this is a little overly reductive, because this basically treats safety as an only-upside position. In the example of the doctor not wanting the patient to use a drug, that's a pretty clear safety failure. Presumably, there's benefit in some way to using said drug, and if the doctor is trying to set a barrier at 100mg when the lethal dose is 1000mg, that's a problem. The doctor has just denied whatever the benefits would be because of fears of an overdose an order of magnitude later. That's the sort of safetyism that correctly brands someone as "not worth listening to" on topics where there is even the remotest semblance of tradeoffs.

Expand full comment
Random Musings and History's avatar

With Putin, there's also an argument that he wouldn't want to risk nuclear war and the death of both himself and all of the people whom he loves and values for the sake of Ukraine. And this might very well be right, but there's also a risk of an accidental nuclear war, as there almost was in 1962 in real life.

Expand full comment
Hoopdawg's avatar

I think most of what's important here can be summed as (and subsumed into) the following general rule: "Don't judge predictions, judge the reasoning/model behind them."

"Castro will die in the near future because human lifespan is finite and he's getting older." - yeah.

"Putin has stage 4 cancer and will kick the bucket any day now!" - so far, obvious bullshit.

I also think more committed Bayesians may be unwilling to phrase it that way, because their goal of thinking in probabilities is more than to provide a model of how we reason; it's to provide a guideline for how to reason, and this demands that the probabilities be meaningful in and of themselves. I think it's a misleading way of thinking: bare probabilities are a basic, messy heuristic for when we lack anything else, and should be replaced by (or rather, evolved into) knowledge/inference/cause-and-effect-based reasoning as soon as possible.

Expand full comment
Shady Maples's avatar

A colleague of mine in military intelligence (not as cool as it sounds) remarked that his shop tabooed the word "escalation" in relation to Russia's invasion of Ukraine. The word "escalation" has been abused so much in both intelligence reporting and mainstream media that it entered the Penny Panic zone well over a year ago. It's absurd to wring our hands about whether ATACMS strikes in Russia are "escalation." Conventional deterrence failed and the conflict escalated past the point of missile strikes when Putin launched a full blown ground invasion in 2022. The tight constraints on Ukrainian retaliation appear to have forced them into a dominated strategy.

Expand full comment
Neil's avatar

Some of these are better handled by realising that just because someone lies/is wrong a lot doesn't mean the opposite of what they say is always true. While this sounds obvious, it is a mistake I keep falling for. One of the most confusing things about Trump is that he isn't always wrong. Which is a shame, because otherwise we could just ask him everything, then do the exact opposite of what he said, and usher in a new age of glory and plenty.

Applying to your examples: Republicans were always claiming that Biden had dementia, even when he didn't. The correct conclusion was "Republican claims about Biden have no informational content, and we should reach a conclusion independent of them"

Putin keeps threatening that the next escalation is going to bring about terrible retribution, when it hasn't. The correct reasoning is "Putin's threats have no informational content, and we should reach a conclusion about his actions independent of them."

I agree that your framework better applies to other examples where we're being warned that we're taking a risk and something might go wrong, such as the doctor on dosage.

Expand full comment
sfregaa's avatar

There seem to be some misapprehensions here about how nuclear conflicts work. Nuclear powers have doctrines that determine when to use their weapons, and they publish these doctrines to establish deterrence against existential threats - in order to deter you have to actually communicate that you will use your nukes, and also when (ignoring the concept of ambiguity for a moment). During the Cold War NATO expected massive Warsaw Pact tank armies attacking western Europe and planned to stop them with tactical nukes. Both sides expected this to escalate to strategic strikes within 48 hours. The rationale behind this was of course the expectation that the side that strikes first suffers less damage, because it would destroy at least some part of the other's strategic arsenal and hope it can intercept most of the rest (which is why some anti-air systems can be seen as first-strike enablers - you have an incentive to strike first because your Aegis batteries or whatever will cripple the retaliation strike). Therefore this other side would launch as quickly as possible after it had reliably determined an attack had started. Obviously, the shorter the warning time, the more likely a first strike is successful, which makes the decision to launch first yourself more likely. This is why we had a treaty banning medium-range nuclear missiles in Europe - the warning times were so short that the risk of an unintended escalation was simply too high. That treaty is gone now, together with a lot of other arms- and conflict-control treaties. Additionally, missile technology has improved so much that in Europe NATO and Russia can hit each other within less than 10 minutes, maybe within less than 5. The warning time was further reduced by moving NATO launchers closer to Moscow. The eastern border of Ukraine is not only an excellent starting position if you ever want to drive tanks over flat land to the Russian capital, but it also allows *really* short flight times for your missiles if you want to hit the Kremlin. So putting American missiles there would be a big red line for Russia, and in 2021/2022 the State Department basically said that they would not exclude putting them there.

That's the general context. Now, nobody would start a nuclear war simply because someone shot a conventional cruise missile at one of your installations and killed a few people, or even a few thousand people. Armed conflicts don't work like that, unless you're in a hair-trigger situation like during the Cold War, when everybody expected a real conflict to escalate to nukes within hours or days anyway. But this is not the situation we have anymore; we're in normal conflict territory (at least I hope we are, because a lot of the de-escalation mechanisms have been dismantled since the '80s). However, the really bad wars, and especially WW1, were far in the "we did not do it because it was easy, but because we thought it was easy" territory. This is typical. Escalation happens bit by bit, because at every step, even if you try to respond exactly proportionally to your enemy's actions, there are always good reasons to do *more*. Doing less than the enemy would signal weakness, which is something we just cannot allow. We must send a strong message to *really* deter the enemy so he will become scared and not take the next step. This is unacceptable, so we must hit back hard. The enemy is a barbaric devil, and any restraint is appeasement, and diplomacy is something only traitors want. This dynamic has been researched ad nauseam, and any history book will show multiple examples, again and again. Germany started with sending helmets and by now has politicians demanding to give the Ukrainians cruise missiles preprogrammed to strike the Kremlin.

When Putin reminds the West of Russia's nuclear doctrine, he is not threatening to nuke Washington over a few ATACMS striking Russian territory. He's talking about the final stage of a process that has a lot of escalatory steps in between. But these steps happen naturally and necessarily. Russia *must* respond to strikes that - irrespective of the domestic propaganda declaring them purely Ukrainian attacks - were planned, supported and programmed by US and UK specialists, just like any country that wants to keep some credibility would. There are many ways Russia can react, most probably in a similarly deniable way: by giving advanced weapons or intelligence to some group opposing the US/UK, by covert operations in some third country, or simply by striking NATO personnel in Ukraine (I suppose there are targets there that they have spared until now, e.g. embassies or similar installations). And this will trigger another reaction from the West. And Russia will react again, etc. etc.

I remember a time when this was common knowledge.

Jonathan Weil's avatar

This is by far the best explication of the “cautious” argument against military aid to Ukraine that I’ve seen, so thanks for that.

Regarding WW1, and “we thought it was easy”: I think the “over by Christmas” attitude was a lot less widespread, especially at the highest levels, than you say here. See eg Grey’s “lights going out all over Europe” remark; the strong desire by most British and German statesmen to avoid Britain entering the war (and their belief, right up to the last minute, that this was achievable); the feeling, on the part of leaders in Germany and Austria and France and Russia, that conflict was inevitable, they had no choice, and 1914 was the least bad time to do it.

sfregaa's avatar

I remember reading in several sources that Lord Kitchener (the guy on the "Britain wants YOU" poster, pointing at the viewer) was the only one in the British cabinet to actually expect a long and bloody war, due to his experience with modern weapons in the colonial wars. I know that the German military establishment was kind of looking forward to the fight. But it is clear that by 1919 absolutely nobody thought the war had been a good idea, and everybody would have turned back the clock if they had been able to - especially at the highest level, because these were the aristocrats whose sons were slaughtered, as they made up most of the officer corps. But at the same time, even as it became clear how bloody the war was turning out to be, it couldn't be stopped anymore. It's almost a one-way street.

Another thing worth pointing out: the media representation of the Ukraine war is complete bullshit, more a form of entertainment and "strategic communication" to keep the masses in line. All these "red lines" and threats of "nuclear retaliation" that Putin supposedly makes are simply inventions of the Western press. The one real red line that was crossed was "bringing Ukraine into NATO", which then resulted in Russia's military intervention. The other explicitly stated red line was "if NATO troops attack Russian territory, we will strike back at Western assets". That was the point of the quasi-ICBM strike a few days ago: we have non-nuclear (and therefore credible) means to reach out and touch you, wherever you are in Europe and no matter which bunker you're sitting in.

Any state must retaliate after being attacked,

- to show other states they are not a punching bag that anyone can hit as they please,

- to satisfy the demands of their population

- and for their government not to lose face

and therefore the strikes we are talking about at the moment are simply symbolic, a form of communication. Ukraine has about 100 ATACMS and maybe 50 SCALP/Storm Shadow missiles. Whether they use them or not is almost irrelevant to the trajectory of the war, and any retaliation strike would be the same: maybe painful, but ultimately pointless. It is, however, a good step further down the slippery slope to doom.

Jonathan Weil's avatar

There’s a big difference between “didn’t know how bad modern war would be” and “thought it would be easy.” Asquith wept when telling his wife that war was inevitable. Grey — by far the strongest proponent of war in the British cabinet — saw it and its consequences as a darkening that might last his entire lifetime. As for the German high command, I think they’ve been a little misrepresented over the years. Recent scholarship is more nuanced than “looking forward to the fight.” Eg the often-cited scene of the Kaiser cracking open the champagne: what they were actually celebrating was an assurance from their foreign service that Britain would not enter the war. The buzz was killed stone dead when news arrived that, actually, Britain was going to fight.

sfregaa's avatar

You are right - I suppose today's representation of these events is rather biased in its focus on the flag-waving and cheering parts of society back then. But still, I don't think anybody expected the full extent to which the war would develop.

John Schilling's avatar

The difference in flight time for a missile from Ukraine to Moscow, and one from Latvia (or now Helsinki) to Moscow, is negligible. And thus so is the probability that "Oh Noes! The Americans haven't absolutely ruled out putting missiles in Ukraine" was ever a serious part of Vladimir Putin's thinking.

Kirby's avatar

I’m not sure this is sound logic; I think “suppose something will eventually happen” is begging the question in all the scenarios where caution is actually warranted.

For example, Joe Biden will eventually get dementia or die, but people are specifically concerned about the probability that it happens in office - and those stats are similar for Trump's 4-year term.

Or consider AGI: there is some -level of intelligence- where AI is “dangerous” by some definition, but the relevant question is whether that is achievable outside of a controlled setting (for example, if something proposes a novel superbug but cannot make it, or the time/power required make it impractical to run for long enough to realize the danger, or the world is destroyed by some other route).

You’re privileging a particular hypothesis (X will/not happen) that is technically inevitable, but ignoring the branching paths where that inevitability is not realized.

David Piepgrass's avatar

> There’s obviously some level of escalation that would start WWIII (example: nuking Moscow). So we’re just debating where the line is. Since nobody (except Putin?) knows where the line is, it’s always reasonable to be cautious.

Although this is true, it's also true that the Kremlin always suggests that the next thing the U.S. hasn't done yet is a very dangerous red line, and that the last thing it just did has taken us recklessly onto the course to WWIII. The Kremlin doesn't want to communicate truthfully where its real red lines are, because that would negate the battlefield advantage (delayed Ukraine aid) that the lying produces; so if you're anti-Putin, you want to negate this advantage as much as possible, which is typically done by arguing that all the red lines are always pretend.

I've been following the war very closely, and my read is that Putin's personality is that of a gambler, yet a careful/risk-averse gambler who doesn't actually want WWIII; poker, or blackjack+card counting, rather than slot machine. Putin knows perfectly well that he's lying about his red lines, so I think that Putin isn't going to flip his lid and suddenly launch the nukes just because his most recent threat is taken exactly as seriously as all the others. I think Putin is smart enough to indicate where the real red lines are in a way that is hard to mistake, by sending a signal that is unusually costly. For example, perhaps resuming nuclear tests on Russian soil would be politically costly or risky, and thereby tell us that we're actually close to a real red line. Or, maybe they would blow up something just inside the border of Poland (which risks triggering Article V) and call it a "warning shot". The next *real* escalatory step is nuking Ukraine, or at least causing a meltdown in one of Ukraine's nuclear plants. But politically this would be not only very risky and very costly, but also ramps up the cognitive dissonance in Russians who nod along as State TV talks about how peaceful Russia is. So, I don't think he'll jump straight to that without some intermediate step (as for the recent ICBM hitting Dnipro―well, was it a costly signal? It doesn't seem so.)

I find it a bit remarkable that they made a formal alliance with North Korea in order to hire North Koreans to fight along the border with Ukraine. This sounds like crossing a major escalation line, but Russian propaganda AFAIK just treated it as no big deal[1] - just a logical step after buying artillery shells from their valued ally, rather than "we're preparing to fight WWIII". I think the significance of North Korean troops is actually just that Putin considers forming a new alliance to fight Ukraine to be safer (for his own regime) than just doing another partial mobilization (even though the latter would cost fewer rubles and fewer reputation points outside Russia).

[1] https://www.youtube.com/watch?v=-T1dcxZoIR0

Andy G's avatar

In the Biden dementia case, you describe Republicans as “chicken Little” but motivated.

What you missed is that Republicans paying attention were fed a regular diet of his gaffes by right-of-center media, while you - consuming primarily left-of-center media, and specifically no right-of-center television or radio - quite literally had a lot less information than the average politically interested right-of-center person.

Kaspars Melkis's avatar

I find the framing of these arguments misleading.

Castro eventually died, but it didn't change much about life in Cuba or its international relationships. Why would anyone even issue so many warnings about Castro dying? Such warnings are not meaningful.

Now take the doctor warning against exceeding the dosage. It seems that this doctor doesn't know much about the drug either. If the drug were vitamin C, then basically any dosage is fine; saying that even water is lethal in a certain amount misses the point. If the drug were insulin and the patient was getting better on a lower dosage, but not sufficiently so, then increasing the dosage up to a certain point would make him feel better - but at some point he could get hypoglycaemia and die. The real question is about finding the proper dose, not about constantly warning not to take more.

And I don't like the argument about Ukraine either. If you frame the question as "eventually Russia will start a nuclear war", then all you can do is let Russia gradually conquer the world - because any resistance has some chance of causing a nuclear war. We cannot agree to this, and this way of looking at things is lazy, the same way as a doctor warning not to increase the dose of insulin despite prescribing only a ridiculously low dose that only partially helps.

We have to think outside the box instead.

Peter Gerdes's avatar

I have two big issues with your Putin example. First, it assumes provocation is the right framing for this question, rather than something like game-theoretic advantage. Indeed, it seems like a good first approximation is to model Putin as a rational actor and then adjust as necessary to account for deviations from that. And what Putin seems to want is primarily to remain in power and alive, and secondarily to return Russia to great-power status (in his mind) by securing the territory and status lost in the breakup of the USSR.

Ok, given that background, what determines Putin's likely response isn't how much of a provocation it represents but what maximizes his interests. I agree that there are real escalation risks but they have nothing to do with provocations and everything to do with whether Putin feels backed into a corner.

Second, it seems to be temporally inconsistent, in that if you had asked someone to come up with a graph of provocations prior to the invasion of Ukraine, they'd likely have rated even a true commitment to defend Ukraine from Russian aggression as not representing much of a provocation at all. That doesn't mean such things can't have become provocations since, but it urges caution.

Navigator's avatar

Some observations:

a) A lot of your examples are so unlike each other that they even detract from the abstract point. For example, the eventual ageing and decline of human beings is absolutely proven. There are billions of examples and nothing approaching a counter-example.

b) "Toxic" really doesn't mean "people will die if you forcibly inject gallons or tonnes of this thing or cram it down their throat". It's hard to infer anything from this, as plenty of substances will be fine unless you consume so much that you're too full, and at that point an adverse reaction isn't a comment on the substance at all.

c) It's highly likely given Putin's makeup, but there are plenty of stupid or pacifist or cowardly people who actually wouldn't respond in time and might dither so much that they lose the ability to fight back; in this context, maybe they are so weak that the USA has time to literally destroy their response capability or command structure. I don't think that's likely, but there's a non-zero chance. At any rate, this is not the same kind of thing as "this old guy is going to die pretty soon".

d) Artificial intelligence is another example. To highly online people in your circles it seems completely obvious that AI will become AGI or ASI. That's not at all obvious. AI just leverages calculation too complex for humans. It's tautological that, given some amount of calculating power, AI really would be super-intelligent, but I think the chances this actually happens are minimal. This is more like Y2K. Also, most of what people like you believe about AI comes from listening to people with an enormous financial motivation to sell the incorrect premise that this level of calculation will become ubiquitous.

AI can help humans enormously, but autonomous killer-robot AI is just 100% nonsense. Ok, maybe I'm wrong, but look at what we now have: a proven natural fact (ageing), a very confused claim about toxic substances, speculation about how a specific individual would react under pressure, and timelines for technological advance. All these completely different things are being conflated into one category of "things that will eventually happen". I'd say one is a fact (ageing), one is very likely (Putin's reactions), one is a confused claim that's hard to reduce to a clear sentence, and the last is wild speculation about how far technology can advance.

Then finally, the analysis of how you would price "Republicans overthrow democracy" reveals the blind spots in your way of thinking. It's a ridiculous attempt to use numbers in an equal-seeming way, even though thinking from first principles will tell you how to price this. Here's how you price it - ask these questions:

1) What is the probability they would overthrow democracy if they could do so?

2) What is the likelihood that they think they can get away with it?

3) What is the likelihood they will be stopped if they attempt it?

Just multiply the three figures together (taking the third as its complement, i.e. the chance they are not stopped) and you'll get a number that actually makes sense, rather than using a 'Bayesian' approach. That approach is just not a good fit for situations where someone is waiting for a chance to do something that they can't easily do openly.
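
(A minimal sketch with made-up numbers, purely to illustrate the arithmetic - note that the third question enters as its complement, the chance of *not* being stopped, so the product reads as the chance of a successful overthrow:)

```python
# Three-question estimate sketched above, with hypothetical illustrative
# numbers - these are not anyone's actual estimates.
p_would_if_they_could = 0.5      # 1) would overthrow democracy if they could
p_think_they_can_get_away = 0.2  # 2) believe they could get away with it
p_stopped_if_attempted = 0.9     # 3) would be stopped if they attempted it

# Multiply the three factors, taking the third as its complement
# (the chance of NOT being stopped), to get the chance of a successful overthrow.
p_successful_overthrow = (
    p_would_if_they_could
    * p_think_they_can_get_away
    * (1 - p_stopped_if_attempted)
)

print(f"{p_successful_overthrow:.1%}")  # -> 1.0%
```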

In any case, I think all political factions would have answers close to 100% for installing themselves as dictators if they were 100% sure it would work, so this analysis needs to be about institutional guardrails and the mood of the population rather than the character of Democratic or Republican politicians. Anyone in that struggle wants power anyway.

Maynard Handley's avatar

This feels like everything else in the EA space -- an attempt to quantify the unquantifiable, which looks clever to EA folks, and absolutely insane to everyone outside the cult.

Whether it's trying to quantify the inner-life of shrimp, or the "probability" (whatever that means) of "AI" "taking over the world", I see very little of value in these exercises. You start off with numbers and generate the result you want. OK, I can start with numbers that differ from yours by a factor of 10^6 and get the results I want. So what?

That's "unfair". Well, is it? What proof of any useful sort do you have that shrimp pain is not a factor of 10^6 smaller than you claim? What proof of any useful sort do you have that Putin's fear of nuclear war (or associated matters, maybe hell, maybe destruction of the Russian Nation, maybe going down in history as worse than Hitler) is not a factor of 10^6 larger than you imagine?

I think normal people are correctly skeptical of these analyses because they can see the bullshit that goes into them. I DO NOT CARE if you are using Kolmogorov's Ultra-Complex Theorem for Extracting Conclusions if the input into the theorem is garbage. And IT IS in these sorts of cases.

The Tragedy of Rationalism (as David Chapman summarized) is that it assumed *everything* could be derived from the correct axioms -- and did not realize just how perverse those axioms might be (and in fact are, in math and physics, the two cases where we have at least a mild clue as to WTF we are doing). But without the correct axioms, you need to be a lot more skeptical of your conclusions.

Navigator's avatar

Well said. It's always the assumptions here. And people making these arguments are never going to engage with anyone who questions their ludicrous assumptions.

Socrateehees's avatar

Fwiw, I think the issue with Putin threatening nukes is a Schelling fence issue. We simply cannot allow someone to threaten to use nukes because they are losing the conventional war they chose to launch in a neighboring country. Giving any credence to such a threat opens a Pandora's box of future bad-faith negotiating ploys for anyone that has nuclear weapons.

Humphrey Appleby's avatar

"we cannot allow someone to threaten to use..."

Strange turn of phrase. What are we going to do about it?

John Schilling's avatar

Deny them the benefit they seek to obtain by making such threats, in this case the conquest of Ukraine. Or, more generally, ignore such threats as a matter of policy - a "threat" that nobody pays attention to is arguably no threat at all.

Humphrey Appleby's avatar

How do we deny them that benefit, without actually making war?

Seems to me that we've tried arming their enemies, and it hasn't been enough. It was worth trying, it's probably worth continuing to do (bleed the red army on the cheap), but do we really have any additional levers left to pull, short of starting a shooting war between NATO and Russia?

FluffyBuffalo's avatar

"Arming their enemies" is not a yes/no thing. The impression I get is that Ukraine can keep up with Russia as long as the supply of arms is generous, and starts losing ground when they run short on ammo and gear. Also, as we read in the news every day, the US still imposes restrictions on the use of the weapons they deliver, and Putin obviously doesn't like it at all when these restrictions are gradually removed.

Humphrey Appleby's avatar

So be specific. Some googling leads me to a press release saying the US has given $56 billion of military assistance to Ukraine. That sounds like a lot to me, but you say it's not enough. OK, how much should we be giving? $500 billion? That would be roughly the entirety of the DoD budget. $5 trillion? That would be roughly the entirety of the US FedGov budget. Are we allowed to limit just how many resources we pour into a conflict in some remote backwater?

FluffyBuffalo's avatar

I don't know what the sufficient number is, but the current level seems to be the bare minimum, so 50% more plus a lifting of restrictions should be a good step in the right direction. I also think that Europe should do its part along with the US.

And no, it's not a remote backwater. It's in Europe, right next door to the US' NATO allies, which the US has sworn to defend if they are attacked, and the war is a continuation of the conflict that dominated every aspect of US foreign policy in the decades after WWII.

Russia is obviously still a problem; the question is whether to contain and resolve it while the Ukrainians bear the brunt of the suffering, or to allow it to escalate to other countries.

John Schilling's avatar

I'm pretty sure the answer *was* less than fifty billion, in 2022, no strings attached. But that would have meant Russia losing the war, and we apparently couldn't have *that*.

Unfortunately, the way we did deliver that aid has allowed the Russians to adapt to everything we've thrown at them - well, given the Ukrainians to throw at them. To put it in immunological terms, we gave the Ukrainians an inadequate course of antibiotics, which produced a temporary remission of symptoms, and when that stopped working we gave them an inadequate dose of a different antibiotic; lather, rinse, repeat, and now Ukraine is infected by a strain of highly resistant Russians.

That was our mistake, so we're arguably on the hook for fixing it. I *think* we could get the job done for less than another $50 billion if we got smart about it, but I haven't put a lot of thought into it because it's kind of a moot point now. And there's room for Europe to help as well, possibly just by paying for American weapons to deliver to Ukraine (though that usually has domestic political problems).

AlanDee's avatar

One wonders if the entire purpose of this post is to pretend that the risk that Trump poses to democracy is (i) not real and (ii) not significantly greater than the risk was when he first took office in 2016?

The risk is real and increasing.

In 2016 he had no idea what he was doing, no idea of what the job was or how to perform it, and no idea of how the other institutions operated.

By Jan 6 2021 he did manage to get a large mob to disrupt the election certification. And no one knows what would have happened if things had gone differently on that day.

Now it is 4 years later: he knows the job, he knows the institutions, he has appointments in mind, he is prepared, he knows he can rally the masses, and he explicitly and implicitly indicated his intentions prior to being elected.

Starting the discussion by looking at the risk as it stood before is an irresponsible misdirection.

Krenn's avatar

"Eventually at some level of technological advance, AI has to be powerful, and the chance gets higher the further into the future you go."

I'm not convinced that's true. At least, I'm not convinced that AI is ever going to be meaningfully more powerful and more dangerous than, say, Elon Musk, and we don't run safety hearings on Elon Musk's fundamental right to exist as a living entity.

Any plausible version of AI has to live on a huge server farm, it has to draw huge amounts of electricity and cooling under human control, it has to act and communicate through human-provided institutions and resources, and it always has to be persuading humans NOT to simply press the off button for that AI's power source.

So, what's the worst thing AI could possibly do? Publish nuclear weapon design documents to small children? That's already legal, those documents already exist, and twitter is already a thing. And if an AI breaks, and refuses to stop spamming every child with every nuclear weapon design document, we have off buttons.

I'm not seeing how any safety council on AI use, or any AI testing mandate imaginable, could possibly have any work to do. What's it going to do, tell us that AI learned from humans and humans are jerks? We already knew that. We already decided to let super-powerful humans exist anyway. We already decided to let free investigation of dangerous research topics exist without prior restraint.

AI can't really do anything to us that we're not already doing to ourselves. So what on earth could a safety council possibly have within its power to do that would help more than it hurts? And how did we ever decide that giving a 'safety council' that kind of pre-emptive review power was a good idea, after repeatedly rejecting giving any other organization that kind of power in every prior similar context?

Joel's avatar

Biden had visible mental decline in 2020, and it progressed as the years went by. Just because there existed instances of him looking fine doesn't wipe away all the nonsense, rambling and wandering. Including it in this post tells us more about Scott's bias than it does about any other point he might want to make.
