Suppose I think something is inevitably coming and it’s just a matter of time before it does, but actually this is false.
When should I make the switch to believing that it's not coming eventually? How high a dose, repeated how many times, could the patient take before the doctor should say: well, I guess it really is harmless and I was wrong all along?
I think the problem is that single data points carry little weight when it comes to determining whether a threshold for an effect exists. If you can approach that threshold carefully in safe conditions with statistical confidence (i.e. a large double-blind study or whatever, for a drug), then that helps confirm the existence of the threshold, and if it doesn't, you can do another study that carefully explores larger dosages, until you've explored the relevant space of interesting dosages.
In the case of thresholds where we all could potentially die if we accidentally reach it unprepared, it seems trickier.
If I thought something was "inevitably coming", I hope I'd have very strong arguments for making that case in the first place. Then the probability would be updated as time goes on and new information can be tacked onto the original case. I think there's a spectrum between the outlandish and the probable, and the updating of that probability will be affected by both the nature of the claim and its duration. If I believe "AI will inevitably bring about the end of civilization", it's unlikely I'll adjust my probability downward rapidly, because any time I see progress in AI, in my framework it adds to that probability even if the end of civilization hasn't materialized in any tangible way. In this case I would probably try to come up with reasonable arguments against my case of inevitability, because I'm not sure I want to live with such confidence in a belief that has such a long expiry date. If I believe "a recession will inevitably hit the US economy before 2028", we're in much easier territory. I will update my belief as macroeconomic data comes along, I will stay informed by reading the financial stability reports of the leading central banks, and I will adjust my belief downward or upward as time goes on, with a "time value" component that decays faster and faster as we approach 2028.
Here are a couple of things inevitably coming: an extinction-level extraterrestrial impact, and the Yellowstone supervolcano erupting. I don't see how anyone can dispute either of these things happening, and we can even give an approximate time range for when they are likely, but no one can tell within a useful time range when they will occur.
On the other hand, some people believe the Rapture to be inevitable, and it is even stated that no one will know in advance that it's here. It seems to be opinion-based whether the Rapture is coming, depending on who you ask.
Satya Benson was specifically talking about believing in something that is inevitably coming, but that turns out to be false. I agree that certain examples like religious ones are firmly opinion-based and therefore cannot be much changed by incoming information, except if the person slowly becomes an atheist or changes religion as they update their belief system.
I like your example of the Rapture; I believe it is inevitably *never* going to happen, and I don't have space in my framework for updating my probability on that one. If that's false, I'll be dead wrong (ha!), never having given it a second thought.
Partially agreed. It is arguable as to whether the non-apocalyptic version is the "same" religion. (And I'd give better than 50/50 odds that, given a few members each of the apocalyptic and non-apocalyptic versions in the same room, they will indeed argue :-) )
I can dispute the extinction-level extraterrestrial impact.
Here's a possibility which results in never-asteroid-impact: no impact for enough time for humanity to create a space program able to deflect (or otherwise deal with) incoming asteroids; humanity flourishes and manages to keep the program successful until the far future is so unrecognizable that the notion of an asteroid impact loses meaning (e.g. Earth is destroyed by anthropogenic means, humanity uploads into the matrix and turns the solar system into computronium, or humanity survives long enough for the Sun to reach the end of its life and destroy the Earth).
For the Yellowstone supervolcano, I don't have a concrete example but I would be surprised if there wasn't some possibility of geo-engineering that would get rid of the risk.
I can revise the details: the extraterrestrial object is coming, and absent action, will cause an extinction-level event (ELE). So if people don't exist, it will still happen. If people can destroy and/or deflect it, it would still count as inevitably happening, just as a hurricane you know is coming still arrives even if you evacuate; the evacuation only makes your death no longer inevitable.
You should have a certain non-zero probability for the possibility that the thing never happens, and some probability distribution for when exactly it is supposed to happen if it happens.
For example, you are 99% sure that X happens, and you believe the chances are uniformly distributed among the following 1000 days.
If it doesn't happen the first day, stay calm. Even under the hypothesis "it will happen", there is a 99.9% probability that it will not happen on the first day. So your probability only moves a little.
But after 990 of those 1000 days have passed without the thing happening, your beliefs should be 50:50. The "not happen" hypothesis has a 1% prior and matches the data perfectly; the "happen" hypothesis has a 99% prior but assigns only 1% probability to the data (just 10 of its 1000 candidate days remain).
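In code, the arithmetic in this example looks like this (a minimal sketch; the function name and structure are mine):

```python
# Posterior that X is still coming after n silent days, given a 99% prior
# that X happens on one of 1000 days (uniformly) and 1% that it never does.
def p_still_coming(n, prior_happen=0.99, horizon=1000):
    like_happen = (horizon - n) / horizon  # P(n silent days | it will happen)
    like_never = 1.0                       # "never" predicts silence perfectly
    num = prior_happen * like_happen
    return num / (num + (1 - prior_happen) * like_never)

print(p_still_coming(1))    # ~0.990: barely moves after day one
print(p_still_coming(990))  # ~0.497: roughly 50:50, as described above
```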
"The Ups and Downs of the Hope Function In a Fruitless Search"
"On Bayesian updating of beliefs in sequentially searching a set of possibilities where failure is possible, such as waiting for a bus; the psychologically counterintuitive implication is that success on the next search increases even as the total probability of success decreases."
Just in a normal Bayesian sort of way. Your hypothesis is that the 10am bus is on its way. You start with a 99% prior that it is on its way since about 1% of buses get cancelled. If it's 10:03 and the bus hasn't arrived yet, you ask yourself "given that the bus is on its way, what's the probability it will be at least three minutes late?" And so forth.
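A sketch of that bus update, with a made-up lateness number (the 20% figure is an assumption for illustration, not anything given above):

```python
# 99% prior the bus is on its way; if it's coming, assume (made-up number)
# a 20% chance it is at least 3 minutes late. A cancelled bus never shows.
prior_coming = 0.99
p_late_given_coming = 0.20      # assumption for illustration
p_late_given_cancelled = 1.00   # "no bus yet" is certain if it was cancelled

posterior_coming = (prior_coming * p_late_given_coming) / (
    prior_coming * p_late_given_coming
    + (1 - prior_coming) * p_late_given_cancelled
)
print(posterior_coming)  # ~0.952: probably still coming, but less sure than 0.99
```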
When Fidel Castro reaches 90 and is still not dead, you don't need to start adjusting too far away from the hypothesis that he's a normal human being with a normal lifespan, but if he reaches 150 then you're going to need to consider other possibilities.
I think this is a more complicated version of the same math, which makes it annoying and hard to follow and which I was hoping to avoid. It would look something like:
- Hypothesis 1: Castro is mortal, with his actuarial table looking like that of any other human.
- Hypothesis 2: Castro is immortal (maybe he has discovered the Fountain of Youth).
...and then you update your balance between those hypotheses as time goes on. Given that (1) still leaves a little probability mass on Castro living to 100, 110, etc., and (2) starts with very low probability, in practice even Castro living to 110 should barely change your balance between these hypotheses. Once you get to a point where (1) is making very confident predictions and being proven wrong (e.g. if Castro is still alive at 150), then at some point (2) becomes more probable. You're doing this in the background as you're doing all the other updates discussed above, but hopefully within a normal regime it doesn't become relevant.
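A sketch of that background update, with entirely made-up priors and survival probabilities (not real actuarial data):

```python
# H1: Castro is mortal (ordinary actuarial table). H2: Castro is immortal.
# All numbers below are illustrative assumptions.
prior_mortal, prior_immortal = 0.999999, 1e-6

p_alive_given_mortal = {90: 0.2, 110: 1e-4, 150: 1e-12}  # made-up survival probs

def p_immortal_given_alive_at(age):
    num = prior_immortal * 1.0  # immortality predicts survival perfectly
    den = num + prior_mortal * p_alive_given_mortal[age]
    return num / den

for age in (90, 110, 150):
    print(age, p_immortal_given_alive_at(age))
# 90 -> ~5e-06, 110 -> ~0.01, 150 -> ~1.0: the balance barely moves until
# the mortal hypothesis starts making confident predictions that fail.
```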
What would that look like in a case like the doctor and the experimental drug, or AI risk, where there's no standard graph to compare to and no clear "hmm, they/we should've been dead by now" point?
In principle you should still be able to define your prior probability distribution over the LD50 of the drug; it will just take more work since you don't have good data for comparable situations. For AI risk it's even harder because the choice of independent variable is not obvious (time? compute? quantity of training data?) but once you pick one, the math is the same.
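As a sketch of what that might look like for the drug case (thresholds and prior masses are invented for illustration):

```python
# A discrete prior over the dose at which harm begins, updated each time a
# dose is tolerated. Thresholds and masses are made up.
thresholds = [10, 20, 40, 80, 160, float("inf")]  # inf = drug is harmless
prior = [0.30, 0.25, 0.20, 0.10, 0.05, 0.10]

def update_on_tolerated_dose(belief, dose):
    # A tolerated dose rules out any threshold at or below that dose.
    post = [p if t > dose else 0.0 for p, t in zip(belief, thresholds)]
    total = sum(post)
    return [p / total for p in post]

belief = prior
for dose in (10, 20, 40):  # patient tolerates escalating doses
    belief = update_on_tolerated_dose(belief, dose)
print(dict(zip(thresholds, belief)))
# Nonzero mass left: 80 -> 0.4, 160 -> 0.2, inf -> 0.4. Mass shifts toward
# higher thresholds and "harmless", but a threshold just above the tested
# range is never ruled out.
```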
Given that we now know that most or all of the "people living to 125" stories are false (they have all the hallmarks of mistaken birth dates or fraud, and they are uncheckable due to e.g. city hall burning down), you should probably start updating around 110.
Because it's such an outlier, there is actually a pretty big skeptical movement about it. It's been a while, but I read both the strong pro and con cases for it, and overall I lean slightly towards the age being real. The alternative theory revolves around her unmarried daughter assuming her identity to avoid inheritance taxes.
All "people living to 120" stories turn out to be false apart from one. Back in the Slate Star Codex days there was a debate on whether the Jeanne Calment claim was likely to be false. I haven't seen anyone update on this.
As I mentioned in my response, it seems worth breaking out near term predictions that have gradual onset of symptoms.
"The chance of Castro dying of cancer" is something that should probably be predicted 3 to 6 months in advance. So if you can verify that Castro does not have cancer now, your prediction of Castro dying in one month from cancer might strongly defy the actuarial tables. Provided you trust the doctors examining Castro.
Yeah, in particular this quote below is wrong, unless Scott is referring to a more sophisticated actuarial table than the one from the SSA (https://www.ssa.gov/oact/STATS/table4c6.html):
"Your probability that he dies **in any given year** should be the actuarial table. ... If Castro seems to be **in about average health** for his age, nothing short of discovering the Fountain of Youth should make you update away from the actuarial table."
Older adults in average health are much less likely to die than the actuarial table's probability of dying that year, since the deaths for that age will be dominated by those in below average health with gradual onset issues. This is much less true of a young person, where accidents are more prevalent.
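A toy mixture shows the effect (all numbers made up):

```python
# Suppose the table says 20% of 90-year-olds die this year, but (assumed)
# 30% of the cohort is in poor health with a 50% death rate.
share_poor, share_avg = 0.3, 0.7   # assumptions
p_die_poor = 0.50                  # assumption
p_die_table = 0.20                 # the unconditioned "actuarial" rate

# The table rate is the weighted average, so the average-health rate is:
p_die_avg = (p_die_table - share_poor * p_die_poor) / share_avg
print(p_die_avg)  # ~0.071: far below the 20% table value
```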
Logically, I think such failed predictions **should** be evidence against inevitability. We should reduce our probability somewhat, even in the inevitability case, and these should count as failed predictions. It's just that those predictions have a much lower prior initially. If Castro lives to 500, I'm definitely going to be assigning more weight to the possibility that he's immortal than I do now.
If we're playing higher/lower for some number between 1 and 100, and I've reached 99 and been told it's still higher, I know it's 100 - but only if my assumptions were right. If we began with a 0.1% probability that the number might be outside the range, then really my new position ought to be ~91% it's 100, 9% it's outside the range. If that number is really "Putin's threshold for escalation", then "Never" *is* increased with each failed prediction, simply because it takes a proportional share of the probability space eliminated. It just doesn't mean it's particularly high, if we started with a low prior.
But to someone who began with a much higher prior against escalation, potentially those failed predictions could push their "Never escalates" probability to somewhat probable levels.
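Checking the ~91% figure from the comment above:

```python
# Uniform over 1..100 with a 0.1% prior that the number is outside the range.
p_outside = 0.001
p_each = (1 - p_outside) / 100

# After "higher" at 99, only "it's 100" and "it's outside" survive.
remaining = p_each + p_outside
print(p_each / remaining)     # ~0.909: it's 100
print(p_outside / remaining)  # ~0.091: it's outside the range
```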
- 40% on all the missiles are rusted solid and won't fly.
- 20% on you'd need to nuke Moscow.
- 20% on Putin actually following through on his threats.
- 20% on random provocation N setting him off (say, 0.1% each on threats 1 to 200).
With a prior like that, we can say that yes, Putin has threatened nukes 136 times already. But there is still a 0.1% chance that he launches nukes at Ukraine's 137th attempt to defend itself.
Under this model, Scott would be right. Following your logic 64 times would give a 6.4% chance of nuclear war, which would be bad.
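The 6.4% is just the per-threat tail summed (treating each of the next 64 provocations as carrying roughly the 0.1% unconditional launch probability from the prior above):

```python
# Each of threats 137..200 carries roughly a 0.1% (unconditional) launch
# probability under the prior above, so 64 more provocations sum to ~6.4%.
p_per_threat = 0.001
total_risk = sum(p_per_threat for _ in range(137, 137 + 64))
print(total_risk)  # 0.064: the "6.4% chance of nuclear war" in the reply
```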
My model is more, ‘Putin’s threat lost its force when he complained so loudly earlier in the war, letting Ukraine strike Russia with ATACMS was safer this time than it would have been at the start of the war’. My position on AI risk is similar, enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI, if they have enough practice stopping bad guys with other 10^26 FLOPS AIs, even if the 10^27 FLOPS AI itself is the bad guy.
> enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI,
Can enough good guys with T-rexes stop a bad guy with Godzilla?
There are 2 questions here.
1) Power scaling. One order of magnitude is bigger than the gap between humans and monkeys.
2) What is the state of the 10^26 FLOP AIs? Are all those AIs flawlessly trustworthy? Are all the humans and AIs working together to bring down the 10^27 FLOP AI?
Alternate vision: some of the 10^26 FLOP AIs are American, some Chinese, some owned by OpenAI, DeepMind, Microsoft, Baidu, etc. The Russians and Ukrainians and Koreans and Israelis and Iranians are all trying to use their 10^26 AIs in active conflict against each other.
The AIs display all the worst behaviour of recent chatbots, unruly toddlers and drugged-up monkeys. These little chaos gremlins delight in finding novel and creative ways to disobey their instructions. They aren't smart enough to break out of their cage. A drone AI might perform flawlessly in simulations, but once it's in a real drone, it just draws contrail dicks in the sky until it runs out of fuel and crashes in a field.
Ten monkeys could absolutely stop one human if you dropped the human in a jungle. My instinctive response is that each new AI will be better at manipulating geopolitical conflicts, but the geopolitical conflicts will get harder to manipulate each time that happens. I don't know if that's how it'd play out. Would Trump have been 10x more effective if he'd been 10x smarter, or would he have filled the same niche?
Perhaps. I do feel that dropping a human in a jungle with no warning or preparation or tools skews the hypothetical a bit towards the monkeys.
If the human had some training in relevant skills, and a few weeks before the monkeys showed up, they could probably make a bow and arrow or something.
What if you dumped 10 monkeys in New York for every person living there?
And this doesn't even address why the monkeys would want to team up against the human, as opposed to fighting the other monkeys.
Part of the answer here is that one person's predictions alone are not enough to answer this question well. You could make up a bunch of factors in order to do Bayesian math about it, but you'll do much better by doing things like asking other credible people for their predictions and looking at the direct proximal evidence again.
For example: if one person predicts something, then the same person predicts it, then the same person predicts it, maybe that thing isn't happening.
If one person predicts it, then two people predict it, then ten people predict it, maybe you are climbing up the tails of a normal distribution of predictions, towards the center of the distribution where lots of people predict it and it's likely to actually happen.
That's just the general problem of induction, and there is no general answer to it. After some number of swans, all of which have been white, I should start to think that all swans are white. But any rule for how to do it goes wrong in some worlds. All we can say is that you should have some internally coherent policy for updating (i.e., if you say "the next swan is 75% going to be black", and it isn't, then you should shift the relevant hypotheses by a factor of 3 to 1), and that it's better if you can be lucky so that your coherent policy actually lines up with the parts of the world that you know nothing about yet.
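That "factor of 3" shift is just posterior odds = prior odds × likelihood ratio; for instance, with two assumed hypotheses:

```python
# Two assumed hypotheses about swan color (numbers invented for illustration):
p_black_under_h_black = 0.75  # H_black: swans run 3:1 black
p_black_under_h_white = 0.25  # H_white: swans run 3:1 white

prior_odds = 1.0  # H_black : H_white, starting even
lr_given_white_swan = (1 - p_black_under_h_black) / (1 - p_black_under_h_white)
print(prior_odds * lr_given_white_swan)  # 1/3: H_black loses by a factor of 3
```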
I almost posted about this actually. Because not only should you update every time you see a white swan, you should update a very very very little bit every time you see a non-white object that isn't a swan (all factual claims imply their contrapositives, after all).
The thing is, I think most of these discussions of Bayesian thought are kinda similar to looking at 3 swans and a red car and either guessing that all swans are white or saying "most animals aren't monochromatic and the three swans we've seen aren't strong evidence anyway, so we'll just keep assuming we're right."
If you’ve got a random sampling model of the universe, and you think most things aren’t swans and most things aren’t white, then that’s right (and Janina Hosiasson wrote a good paper giving that Bayesian solution to the paradox of the ravens back in the 1930s).
But if you don’t have a random sampling of the universe model, you can actually get positive instances being disconfirming. For instance I currently believe very strongly that all unicorns are pink (because I believe very strongly that there are zero of them) but if I saw an actual pink unicorn, I would no longer be so sure. Similarly, if someone is highly confident that there are no rats in Alberta (ie that all rats live outside Alberta), then observing a rat just on the outside of the line would likely make them less confident, even though it’s an instance of the generalization.
Thanks! These are all great notes, and I'll for sure look up that paper.
I still think Bayesian reasoning when you have an extremely small portion of all the possible evidence (because we can't access most of it, or because a lot of it is in the future, or because we're trying to reason about a one-time event and can't repeat it in different conditions to help us update our intuition) tends closer to the kind of heuristic guessing it's supposed to protect against than a lot of rationalists are comfortable with.
It's better than just straight-up using bias to make all your choices, but not by much. Especially if you've written a bunch of recent articles about how you shouldn't update your beliefs based on argument, real-world events, or people you trust being wrong.
There are broadly speaking two kinds of "inevitable" event - those with a fixed probability for a given unit time (e.g. "there's an X% chance of an asteroid hitting Earth" or "there's an X% chance of a nearby star going supernova"), and those with an increasing probability over time (e.g. "humans eventually get old and die"). The OP is about the second kind.
For the first kind, you should steadily update downwards each time the event fails to happen in proportion to your probability estimate - e.g. if you think there's a 50% chance of nuclear war per year, this can be disproven fairly quickly, while if you think there's a 1% chance of nuclear war per year it'll take longer.
For the second type, you should update downwards more strongly as time goes on, since your theory is making incredibly strong predictions. A baby not dying of old age proves nothing, even a 90-year-old not dying proves little, but a 130-year-old not dying is pretty suggestive and a 200-year-old not dying of old age proves a lot.
In the case of ASI, living for a year without a Singularity proves little, living for 50 years proves a lot more, 100 years even more, and living for 1000 years probably settles it.
This is, of course, annoying for people who don't expect to live 1000 or even 100 years. There are at least some other observations one can make besides "has ASI appeared", like trying to graph rates of AI progress, looking for "warning shots", and looking at analogous events, much like one might eat a little of a substance and try to gauge how likely it is to poison you in large amounts based on things like taste and stomach cramps.
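A sketch of how differently the two types respond to uneventful time passing (all rates invented for illustration):

```python
# Type 1: fixed per-year hazard. Quiet years cut the hypothesis geometrically.
def survival_constant(rate, years):
    return (1 - rate) ** years

print(survival_constant(0.50, 10))  # ~0.001: "50%/year" is nearly dead after a quiet decade
print(survival_constant(0.01, 10))  # ~0.904: "1%/year" is barely dented

# Type 2: hazard that grows with time (mortality-like). Early survival proves
# little; late survival is devastating. Base rate and doubling time made up.
def survival_increasing(years, base=5e-5, doubling=8):
    p = 1.0
    for t in range(years):
        p *= max(0.0, 1 - base * 2 ** (t / doubling))
    return p

print(survival_increasing(90))   # ~0.25: a 90-year-old survivor proves little
print(survival_increasing(130))  # 0.0: surviving this long falsifies the curve
```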
Here's an interesting (thought) experimental setup for this:
You have 100 boxes, initially all the boxes are empty. You have an associate prepare them the following way: first she flips a coin, if it lands heads, she doesn't do anything. For tails, she puts a ball in one random box.
Later you open the boxes one by one. Each time, what's the probability that the next box you open contains the ball? At each step, what's the probability that the ball is in one of the remaining boxes?
I haven't done the math, but I think the probability that the ball is in the next box should go up on each step, but the probability that the ball is there at all goes down.
(I think this is easier to see, with an alternative but equivalent setup: you double the number of boxes to 200, skip the coinflip and have your assistant always place the ball somewhere, but then you only open 100 boxes.)
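The exact numbers from the original setup bear this out:

```python
# 1/2 prior the ball exists; uniform over the 100 boxes if it does.
def box_probs(opened):
    p_ball_remaining = 0.5 * (100 - opened) / 100
    denom = p_ball_remaining + 0.5       # + P(no ball at all)
    p_exists = p_ball_remaining / denom  # ball is in some unopened box
    p_next = (0.5 / 100) / denom         # ball is in the very next box
    return p_next, p_exists

for opened in (0, 50, 90, 99):
    print(opened, box_probs(opened))
# P(next box) climbs from 1/200 to 1/101, while P(ball anywhere) falls from
# 1/2 to 1/101 at the last box (and to 0 if that one is empty too).
```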
Gwern hosts a writeup on this sort of situation on his website. You are correct that your probability of the ball being in the next box increases, but your probability of it being in any box decreases.
In some of these cases you can get intermediate warning signals before the cliff: For example, some political observers noticed Biden was doing almost no media appearances in the last year or two and raised their suspicion, and Putin can escalate in other ways (e.g. by sending Yemen advanced anti-ship missiles) before jumping straight to nukes. Depending on how severe the risk is and how sure you are you'll get an early warning, it can be better to have a policy of just waiting for that.
(This probably isn't the case for AI, where I think we're likely to get a fire alarm but most fire alarms are already pretty catastrophic and may not leave us enough time to respond after they happen).
Political observers were noticing Biden doing unusually few media and public appearances *during his campaign*, and the observers on the right frequently mocked him for "campaigning from his basement". This tendency didn't stop during his presidency, although it certainly and obviously got worse.
Like, the Republicans were right about this one, full stop.
Scott's lack of acknowledgement of his own bias, and that of many prominent media personalities, really weakens that example and IMO the overall piece.
Ignoring that bias is one example of ignoring the lack of information most people have in these situations. If there's a whole staff or structure strongly invested in presenting a certain image (limited appearances only during certain daytime hours, limited press questions, etc.), then whatever updates an observer makes will be wildly skewed.
Maybe there are good reasons for Scott et al. not to trust Republican reporters, but those reasons aren't that the reporters were wrong about Biden; he distrusted them about Biden because of whatever that past incorrectness was (but really, mostly bias and social bubbles).
The argument during the 2020 campaign was that he was campaigning from his basement because of COVID, and Democrats more generally were more inclined to participate in NPIs than Republicans. This made for a strong alternative explanation, especially as so many other, younger, more active, Democrats also campaigned largely via Zoom (the "from his basement" bit was just the location of his studio in his house in Delaware).
The bit that is relevant is that other Democrats started making a lot more media/public appearances in 2021 and (especially) 2022, and Biden didn't (he made a few more, but not a lot more). That should have been more suspicious, but so few people were making the (IMO, in retrospect, correct) case - that 2020 was COVID, but 2022 was a sign he was deteriorating (whether that was dementia or physical deterioration is irrelevant) and being protected from the public - that I didn't come across a clear version of that argument until well into 2023, after which I did find it relatively convincing.
I don't blame you, to be clear! I blame the reporting malpractice. (And one notable difference to pay attention to during COVID, arguing against that interpretation: How much Zoom did Biden actually do, as opposed to purely scripted and strictly controlled recordings? Biden's media reclusiveness was weird and stood out even against the background COVID reclusiveness.)
Which, to be clear, didn't actually end; see all the people who were entirely blindsided by the fact that Trump won at all, much less won as thoroughly as he did.
But, like ... that's not just evidence about Biden. It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.
>It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.
True, unfortunately. Seeing the Biden v Trump debate, a natural question is indeed "So what _else_ are the media hiding from me?"
Many Thanks! Could be - it isn't too clear how the concealment of Biden's cognitive decline was divided between the White House staff and news media. All I can tell from my position was that I kept seeing 'Biden is fine', 'Biden is fine', (I'm mostly remembering New York Times coverage here) then "We defeated Medicare." - oops... (Also - were the media truly blindsided, surprised by what they saw, or did they overestimate what they could get away with?)
I didn't stay up to date on the American 2024 election in general. And while, if someone wins an election, I expect their close supporters to prop them up with a good solid stick for the rest of their term regardless of their actual physical or mental health, if someone is clearly incompetent I would have expected a functioning party apparatus to use the re-election as an opportunity to swap the declining candidate out for another candidate.
The outcome of "wait until, inevitably, problems were blatantly obvious even though they must have been privately obvious long before, and even though there was an opportunity to back a different candidate" is such a poorly thought out strategy in an overarching sense that I wouldn't have considered a functioning political party would have pursued it. And especially not without more pushback from inside the Democratic tent.
Senility isn't usually a boolean; you don't flip from not-senile to senile from one day to the next. You have good days and bad days, with the frequency and intensity of bad days slowly increasing and the frequency and intensity of good days slowly decreasing. His 2024 debate performance was a particularly bad day - see all the people confused by how lucid he's been behaving recently. Biden has more control over his media presence now and simply isn't showing up on the days when he wouldn't be able to show up well.
Such a good point. And generally, in many verticals it's like the problem of security services, which either seem redundant and money down the drain if nothing actually happens, or the most important thing in the world if shit hits the fan and the security saves you. In both cases, of course, the security service did the same job; we just dismiss it when it's uneventful. Same with pessimistic predictions: they could be right to be cautious even if nothing actually happened. As a parent, I've learnt this very quickly, being the more cautious parent.
Security services exist because we have experience with the security breaches which we want them to prevent. We don't have experience with whatever Scott is worried about with AI.
Arguably we do. We have examples of new technologies that conferred large military and/or economic advantages to the first people to get them. We have examples of colonial empires where one nation got a big enough relative advantage to conquer large parts of the world. We have the example of humans evolving capabilities that let them take over the world in a way that other animals were basically powerless against. You could play a game of reference class tennis, at least.
But I also think there's a really important higher-level point here about the fact that if you choose to ignore all threats that you don't have precedents for, then it's impossible to stop world-ending threats, because it will always be the case that the world hasn't ended before.
Imagine a billion hypothetical versions of the world, each with different threat profiles. Some of them, fortunately, have no world-ending threats to worry about. Some are at risk of meteors, some of runaway greenhousing, some of alien invasions, others of various other threats. You don't initially know which world you're in, though you might be able to figure it out (or at least narrow it down) by examining the available evidence.
If you do your homework and plan carefully, you might be able to anticipate and ward off some of the threats your world faces. But if you have a policy of ignoring unprecedented threats, then every world that faces a world-ending threat will just die. You might be lucky and be in one of the worlds with no such threats, but investigating whether the world has *already* ended does not tell you whether you're in one of those worlds or not, so it doesn't save you if you're in one of the dangerous worlds.
That's only true of threats which are world-ending in an all-or-nothing sort of way. A world which took respiratory disease risks seriously (perhaps if the reaction against chemical weapons in WWI had escalated into a broader moral panic over air quality?) might have transitioned away from coal, toward some combination of nuclear, wind, and solar, soon enough that the greenhouse effect never became a notable concern.
Similarly, some world which aggressively expanded into asteroid mining might have set up a system capable of deflecting naturally-occurring dinosaur killers as an incidental side benefit of salvaging the debris from industrial accidents, or thwarting malicious attempts to bombard specific cities.
In both cases, the potential world-ending threat was real, and required real effort to solve, but that effort was undertaken in response to related, non-world-ending threats for which there was abundant precedent.
Yeah. I remember a press conference from some time before ChatGPT where some military representative was insisting that AI would not be given the kill switch. And I'm sitting there thinking "well, you still use land mines, right?" The claim that the military would keep the kill switch in human hands if there was an advantage to it not being in human hands is hard to reconcile with current military practices.
Regarding being unable to stop all world-ending threats while the world still exists: I suspect that while there are people like what you describe, many people require mechanistic proof that the world *could* reasonably end before they act. Fear of global nuclear war increased significantly after the first atomic bombs were dropped. Models of climate change, coupled with clear evidence that the models function in the near term (as well as feasible methods to actually address climate change), are required by many for action on climate change. Such requirements increase our exposure to danger. But they also filter out a tremendous number of threats that are unlikely to materialize, so we're not running around like Chicken Little, unable to function.
We are constantly beset by a million perils. Isolating one and saying 'why aren't we doing anything about X' is a good rhetorical strategy if one is worried about X. But the rare honest answer to that question tends to include acknowledging the true current weight of threats a-to-w.
If the Doors of Perception were cleansed, we would see the world as it really is... too complex to think about.
I think it is important to keep in mind the cost of the counterfactual. One may believe that *not* escalating with Russia could have negative consequences, if you think Putin is winning and his winning could be very bad for humanity.
On the topic of AI, which I suspect is what this post is really about, I think the same is true. Even if we know we are in an infinitely escalating game, we cannot ignore counterfactuals like "if we slow advancement in the US, it will still happen in China; it will still happen inside the government; people will continue to die of horrible diseases because we can't cure them; we will not generate technological advancements that could allow post-scarcity," etc.
Yes, at some point AI probably will be able to out-compete individual unmodified humans. However, knowing that doesn't mean that we should stop developing AI because the alternative (no AI in the future, or no open source AI in the future, or only governments have AI in the future) may be much worse for most of us than the future where AI out-competes unmodified humans.
Yes, I think particularly in the case of Ukraine/Russia, this is essential.
It's not really escalating to match your opponent's level of aggression - and Russia has been firing missiles into Ukraine for 3 years now.
In 2014, Russia took Crimea, and the West did basically nothing. Well, maybe we learned that Russia was willing to annex neighbouring countries.
If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction, it's what the Russians say they will do.
I looked at the extensive list of examples in the "genre of commentary". A list of seven sources - superficially that's impressive. But it's all tweets. It doesn't really surprise me that Scott can find a few twats making erroneous claims. I'm sure you could do that about anything! But looking more closely, some of them don't even make the argument Scott claims.
The real arguments for why Russia won't escalate are more extensive, and nuanced, and cover things like it not being in Putin's interest to start WWIII e.g. when he's hoping Trump will offer more favourable terms in a few months.
Really, the point isn't that Russia won't possibly ever start WWIII at some point, it's that Russia has a history of making false threats and it's a mistake to take them at their word. In fact, the first tweet in the list says this explicitly:
Slazorii : "Putin made nuclear threats when the west "escalated" to provide tanks, when they provided HIMARS, when they provided ATACMS, when they provided F16s... nuclear blackmail is simply a feature of ru foreign policy. he will not be deposed if ukraine continues to strike inside russia."
Claiming this is equivalent to "Russia will never initiate WWIII under any circumstances" is... well, it's ... I'm going to be diplomatic and say it's a mistake.
> If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction, it's what the Russians say they will do.
Um, what? I don’t say you are wrong, but I must have missed a press release.
Even in the old days of the USSR, I think this was more often stated as that communism would inevitably sweep the world rather than that the USSR would inevitably conquer it.
Interesting for having a visitor say something they weren't meant to.
partial transcript:
(political scientist) We keep talking about Ukraine. In reality, no-one cares about Ukraine.
They have the following goal -
Russia is embarked on a certain expansionist course. This is a fact. And not only in Ukraine, by the way.
NATO countries, European countries, want to somehow halt this expansionist course.
They don't have a concept of how to accomplish that. They can't do it, because -
(host interrupts) it's not an expansion, it's defending natural interests. We should take a short break.
(At this point, the autogenerated transcript says "and as a result of our own safety, we need to take a break.")
Those are both a few months old. I am sure there are more examples on there, and more elsewhere. I'm not talking about the (numerous, constant) threats, although those do make it harder to find the official statements or analysis. I'm pretty sure I've also seen videos of several Russian or allied officials making such statements, but I don't know how I'd find those.
But if you don't think an overall impression of the official state media counts, maybe you would consider that it's not just me that thinks this.
Quote of analysis by Frank Gardner, at the end of this article (from March this year):
//
The latest warning from Poland's prime minister echoes what his neighbours in the Baltic states have been saying for some time; if Russia can get away with invading, occupying and annexing whole provinces in Ukraine then how long, they fear, before President Putin decides to launch a similar offensive against countries like theirs, that used to be part of Moscow's orbit?
[...]
Vladimir Putin, who critics say has just "reappointed himself" to a fifth presidential term in a "sham election", has recently said he has no plans to attack a Nato country.
But Baltic leaders like Estonia's Prime Minister Kaja Kallas say Moscow's word cannot be trusted. In the days leading up to Russia's full-scale invasion of Ukraine in February 2022 Russia's Foreign Minister Sergei Lavrov dismissed Western warnings of the imminent invasion as "propaganda" and "Western hyperbole".
Fair enough. That’s still quite a ways from “there will be [no countries] left.”
God knows I’m no fan of either Russia or Putin. (I sent flower seeds to the Russian Embassy back when that was a thing.) But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence, and is met with the kind of concerted opposition that the West has deployed, I can understand them feeling like they need even more of a buffer than they thought, and while I don’t consider acting on that feeling to be in any way defensible, I would not call it equivalent to gunning for world conquest.
(But if I were in Moldova or Latvia or even Poland, I’d be keeping my powder dry.)
Perhaps I’m just over-reacting to a bit of hyperbole on your part?
> But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence
The full list of countries this language applies to is:
- Russia
That's the *only* state that has *ever* declared that the countries neighboring it exist only as human shields to be torched to slow a foreign invasion. And the Russians have always considered their sphere of influence to be "everywhere too weak to stop them". The whole "sphere of influence" model, where one great power basically rules everyone it can through catspaws and sockpuppets, is a distinctly Russian concept.
And that is a distinction from empires, where kings would owe public fealty to their emperor. The Russian model is a group of "independent" countries with sovereign rights and accountability for their "independent" actions. So, for example, Moscow could order a hit, East Berlin security would carry it out, and because East Germany was (not really) an independent country the response would hit East Germany, not Russia.
> If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction, it's what the Russians say they will do.
Counterpoint: Russia's military capacity has been sufficiently degraded by three years of war that there's no prospect in the foreseeable future of Russian tanks rolling across to the Atlantic. At this stage, the risk of escalation provoking WW3 is bigger than the risk of de-escalation emboldening Putin to march across Europe, because the latter scenario is no longer remotely possible, if it ever was.
But you realise that *is* what stopping them entails, right?
The way you put that, I think 'the foreseeable future' is probably shorter than you are assuming. Russia has a massive capacity to consolidate and reform.
In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.
Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.
> In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.
Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round. Not to mention, the Russian Federation currently has a TFR of just 1.5, putting it at no. 170 out of 204 countries and territories. IOW, Russia simply doesn't have the demographics to keep throwing young men into the meatgrinder, even if it manages to restock its materiel supplies and keep its armaments production on a long-term war footing.
> Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.
I think "lost some ground" is underplaying it a bit TBH; Russia basically lost all the territorial gains it had made since Peter the Great's time. It's possible that, in another three hundred years' time, Russia will be back to its pre-1991 borders (although the ROI on wars of conquest is lower now than it was for most of the past three centuries), but TBH I don't think this possibility is worth risking a nuclear war over.
> Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round
Worth noting that it does have the support of the world's current industrial superpower (which might have a lot of spare weapons to throw around once it's done with its current plans for Taiwan). Their TFR problem is harder (but OTOH Eastern Europe's is even lower, and even Western Europe isn't much higher).
Your attempt at formatting isn't doing anything. I'm sorry, I don't know how to quote properly here either.
To the first point, I don't think Russian production towards the end of WWII had much to do with Western-supplied aid. They did that themselves. Western aid helped with the war effort, sure... but that's something of a different matter, and not really relevant here.
The fertility rate isn't a constant, and is easily changed in a sufficiently authoritarian regime. Furthermore, when Russia captures territory, it conscripts the population and uses those conscripts to attack the next target. This is what it has done to the adult male population of the annexed areas of Crimea, Donetsk and Luhansk.
To your second point, okay, what's your plan? Do nothing whenever Russia threatens to use nukes, to avoid starting WWIII?
Supporting Ukraine in any way is "risking a nuclear war", because Putin is more than willing to threaten starting it whenever aid is provided to Ukraine.
Here's how I see your strategy going down:
Without support, Ukraine will fall, and Russia will take all of it, in fairly short order.
There will be a pause while the population is subjugated.
However, without much delay, Moldova will also be taken. I don't think there's really any question about that - it's a small, poor country and Russia already has a foothold there.
Georgia will also be claimed at some point, and the West won't be able to respond without "risking nuclear war".
A couple of additional neighbours may also be sequentially taken, but once all the immediately unstable and unprotected regions around it have been claimed for Russia, there will be a few years while it consolidates and re-arms.
All its neighbours will be panicking and building up their defences, and maybe a few more pacts will be signed. However, these mean very little, because remember, we can't risk nuclear war, and providing support to another country risks that. So all countries look to their own defence.
Meanwhile, Russia infects neighbouring regions with partisans, randomly attacks places and makes claims about how opposing forces did it... you know, the usual, for modern-day Russia.
When the time is right for them, Russia invades a small part of a NATO country. Nowhere populous or important, just some backwater nobody cares much about. NATO doesn't do anything, because of course that would be risking WWIII.
The precedent having been set, Russia assimilates the rest of that country.
Then it repeats this process, from a slightly superior position each time.
Where in that do you intervene? The least dangerous point is at the very beginning, and the second-least is as soon as possible.
Firstly, I said that *at this stage*, the risk of escalating the war is bigger than the risk of Putin being emboldened by peace proposals, so you can drop that silly "We can't do anything whatsoever" strawman you've constructed.
Secondly, you're being extremely blasé about how easy it is to control tens of millions of people who really don't want you to control them, particularly because, in your scenario, those tens of millions would have to be armed (hard to conscript them into expanding your empire without giving them weapons, after all).
Russia has lost a huge amount of what you might call its "soviet inheritance" of tanks and APCs. But structurally speaking, its military is much healthier than it was at the beginning of the war. And furthermore, I would argue that there are much worse things than an invasion that Putin can do with control of Ukraine.
In 2022 the military was incredibly brittle because of the overwhelming amount of cronyism and embezzlement. If they had been pushed harder in that state, I think they would have completely disintegrated. But the sluggish response by western powers gave them time to begin addressing the widespread corruption. And the seriousness with which Putin treated the war provided political cover for ordinary Russians to start airing these issues publicly. In fact, complaining about corruption in the ministry of defense is basically the only allowed form of political criticism. It's far from being solved, but it seems like the message has been received that the military is no longer just for show, and results actually matter.
But beyond that, I think Putin's experience with Syria has provided him an effective playbook for weakening Europe. I suspect that even if given the chance, Putin would not do a thunder run to the Polish border. He would take Kyiv and install a friendly government, while allowing some of the military to retreat to the west. This would divide the country in a way that leaves the "true" government economically unsustainable and in a constant state of war.
The result is a failed state, and a long, slow refugee crisis that can be channeled into the rest of Europe to bleed it economically and further increase support for far-right anti-immigration parties. If you thought the situation with Syria was bad, just wait until it's a country with twice the population right next door. And Europe and the US have already demonstrated that they don't have the appetite to meaningfully engage the Russian military in order to stabilize a situation like Syria.
Here’s a counterpoint: a system’s purpose is what it does. When we observe Russia, what we see is a country that is, relative to its modern history, probably at its smallest geographical extent. Certainly Russian territory is very small compared to its Imperial or Soviet height.
In contrast, the extent of US client states is the widest it has ever been. The US has allies and vassals across the entire globe, and every decade a new member joins closer and closer to Russia. It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes.
Therefore, if we are to evaluate which perspective more closely resembles reality, the Russian claim that the US is bent on world domination has a much closer resemblance to reality than the American claim that Russia is an aggressor nation planning to march across Europe. Now, we may disagree with the Russian conclusion that it therefore has the right to enforce border security, but it is inaccurate to say that there’s clear evidence that Russia plans to invade all of Europe.
Historically, these kinds of border skirmishes and proxy wars between empires are extremely common, and have only culminated in global warfare a few times in about 700 years, which is a pretty good track record of border wars staying limited in scope. So I would strongly disagree that this indicates Russia plans to invade "the entire world" or anything like it.
"It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes."
What is the strongest evidence you have that color revolutions have been used by the US to overthrow governments and install friendly regimes? Do you think Russia has also tried to overthrow governments and install friendly regimes?
My evidence is the extensive, well-documented involvement of the CIA and US government in counter-Soviet revolutions in Latin America and in the Middle East (including funding for, among other orgs, ISIS when it was fighting the Syrian government), as well as the fact that color revolutions almost exclusively strike US-unfriendly governments and replace them with US-friendly governments.
Sure, maybe Russia does get involved in foreign revolutions. It certainly did back when it led the Soviet Bloc, but considering it has literally one ally now and the US dominates half the globe, it seems likely that only one of these countries has been successful in this practice.
"Color Revolutions" are, pretty much by definition, populist uprisings in favor of democracy and against authoritarian governments. If they don't meet that standard, we just call them "revolutions".
So color revolutions are inherently aligned with the interests of anyone who likes democracy and dislikes authoritarian regimes. Being on the same side as every color revolution ever is evidence that one is on Team Good Guy, not that one is secretly causing all the color revolutions. I mean, color revolutions almost exclusively strike New-Zealand-unfriendly governments and replace them with New-Zealand-friendly governments; are they part of the conspiracy?
The fact that you think the decision between a democratic and non-democratic government is “Team Good Guy vs Team Bad Guy” is the core of the problem. Democratic governments are not inherently good, non-Democratic governments are not inherently bad. “Democratic” Britain terrorized colonial Ireland, India, Africa, and Asia for centuries. “Democratic” America started a war in the Middle East on false premises, killing millions of Iraqis to secure oil fields for Europe and the US.
“Autocratic” Singapore under LKY turned a third world unlivable slum into a first world country in a generation. “Autocratic” El Salvador turned one of the most violent countries in the world into one of the safest places in North America.
Among the long list of horrible, horrible things the US government and its core military and police organs have done in living memory:
* Perpetrated a policy of implicit ethnic cleansing against Indigenous Americans.
* Funded ISIS, creating the largest terrorist organization the world has ever known
* Burned the Waco Compound to the ground and took triumphal photos on the smoldering corpses of literal children
* Waged terror wars against Vietnamese and Cambodian civilians
* Funded regimes in Latin America that committed ethnic cleansing and terrorized their own populations.
* Funded a military coup in Indonesia that is responsible for massacring over 60,000 people
* Funded ISIS, creating the largest terrorist organization the world has ever known
* Experimented on US citizens through the MK Ultra program, for which it has never apologized.
* Conspired to murder US Citizens during operation North Woods to justify an invasion of Cuba.
* Illegally detained and tortured Arabs and Pashtuns during the Afghanistan and Iraq wars
* Used drone strikes against American civilians without legal justification
* Keeps prisoners in prison longer than their established sentence to sell them for labor in jobs which include, among other things, fighting wildfires, which has an extremely high casualty rate. This is systemic to such an extent that even Kamala Harris was found to do this during her AGship in California.
* Did I mention they funded ISIS, the largest terrorist organization the world has ever seen?
And I am supposed to believe that funding color revolutions means the USG is actually the good guy because this government is “pro democracy”?
I believe the rapid self-iterative foom concept also makes a case for not creating severe AI regulation. If foom is impossible for humans to handle, then we should want AI to advance into self-iterative attempts that fail in some visible way and alert everybody, in the hope that pre-foom AI can identify and deal with the problem better than we can. This is all most possible when hardware is less advanced.
smopecakes gets it. The safest day to develop AGI was yesterday, the next best is today, and the most dangerous is tomorrow. Your best chance of safety is getting to it as early as possible, because the resources it has will be at their most limited and the number of experiments is at a minimum and at a scale controllable by orgs / govs.
You really do not want to try holding back the tide until compute costs are low enough for your average white-collar professional, or teenager if you wait longer, to train current-level frontier models.
If you believe that a future accident could happen, the thing to do isn't to wait for the accident to happen and then *hope* that the mitigation afterwards to the accident will work out, it's to prevent the accident in the first place!
People who claim that they understand what will drive others to action do not have good predictive records on what would have driven action during previous "warning shots". Their confidence regarding the existence of warning shots is almost entirely confirmation bias. If you don't believe me, then before reading the comment below, please register your beliefs about:
What metric causes people to take things seriously, for something like nuclear accidents, pandemic response or biological attacks.
And how confident you would be that, if that metric were fulfilled, you would get some dramatic action of some kind.
And see how general your metric is compared to your internal felt sense of what would prompt responses for AI.
FYI, your second comment link is malformed and (at least for me on Chrome) does not deep link to the comment. It was pretty easy to find the one you meant though.
I think you are also missing the point. I'm not making any claim on what warning shots would or would not spur action or may or may not exist.
I am saying that if the statement "AGI / superintelligence can be achieved through scaling current methods" is true - something that AI safety proponents invoke when saying that we're on a dangerous path, need to pause, AI will inevitably become dangerous, etc. - then you are better off building it as soon as you possibly can. It is not without risk to do so, but it is the relatively lower risk.
Otherwise you're gambling that compute will never get cheap enough for me, or the many other people like me, to afford the compute that frontier labs require today. That is a bad gamble. It is inherently easier to control a few processes that require massive investment than if you're in a hundred flowers blooming scenario.
I think your model implicitly relies on a notion of warning shots. If we *did* advance frontier models, and the large companies then acted without control, in what way is it actually better than the "Cyborg Oprah gifts the audience 10 SotA superhuman models every week" world? It still relies on the idea that the control you'd want for the mass-consumption case is what you'd get for the elite-production case, which does not appear to be the case now (and in fact you are arguing against control regulation that could not happen in the "compute is cheap" world).
(Whoops. Let me try to fix the link)
Edit: also I guess I read too much into the parent message of "it fails and everyone is alerted", which you don't necessarily agree with
Your edit has it right. I have no idea if there will be warning shots or any other form of early warning. My point is that as a simple physical matter, it is always easier to exert control over fewer massive centralized things than if you have many smaller decentralized things.
Easier. Not guaranteed. There are no absolutes. You are correct that the outcome could still be the same as in Cyborg Oprah world.
I oppose control regulation - and I think you're conflating control regulation with actual control - that would slow down AGI development because all signs point to the "compute is cheap" world as the one we inhabit. Either scaling will not work and it doesn't matter either way, or it will and is thus inevitable. Delay only moves you further into the danger zone when I can earn cash back buying today's frontier lab levels of compute on credit.
No one knows how much an AGI will cost to train in compute, but if it's possible via scaling it fundamentally doesn't matter. Eventually it costs $25k-$250k in today-equivalent dollars. The metaphorical day after that, it's $2,500 in equivalent dollars. Etc.
As a relative matter do you think it's safer to develop something potentially dangerous when it's being done by a single organization versus ten thousand or ten million or ten billion?
The problem is that this is a domain specific argument. In several domains AI can already outperform humans, sometimes to an extreme level. And we don't have an AGI, so there's no competition in areas where that is needed.
AI isn't a single thing. Chess-playing agents are AI, but they only threaten chess players as chess players. LLMs operate in a more general domain, but it's still a restricted one. (But actors and authors are right to be worried.) Etc.
In this blog post, things are considered from a Bayesian perspective; when it comes to the total costs, one would probably have to analyse them using game theory. But the Bayesian perspective has not lost any of its validity. If, for example, one were to find that after each round of escalation in the Ukraine war the number of voices who think Putin is a bluffer increases significantly, one could conclude that this confidence is not supported by the facts. In practice, of course, this is difficult, as the opinions published often pursue certain goals.
As far as AI is concerned, the ultimate danger is not outcompeting humans, but rather their destruction - or was the same thing meant here? In any case, the topic is not a ban on AI development, but the question of how to recognize whether the next step will cross a threshold beyond which someone else will decide for (or against) us.
Also it's a bad fit because states do not inevitably escalate in response to increasing provocations; under that feedback system everyone in the world would already be dead. The terminal inevitability, analogous to Castro's death, is that one side sues for peace.
Increasing Ukraine's ability to hold Russian territory at risk does far more to increase the chance of a balanced end state than to encourage Putin to opt for national suicide.
"But it’s not true that at some point the Republicans have to overthrow democracy, and the chance gets higher each election."
No, but we can see that the US system can't cope with either of the main parties becoming authoritarian. The design was that the House, Senate, and Presidency would have to work together and this would be a moderating influence, but in a world where the probability of each party winning is somehow driven to 50%, all that does is reduce the probability of the "bad party" getting complete control to one out of eight, with a new try at each election; a half-life of just over 20 years.
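For what it's worth, the half-life arithmetic checks out; here's a minimal sketch (the 50/50 elections and three independent coin flips are the model's assumptions, not data):

```python
# One party controls House, Senate, and Presidency with probability
# (1/2)^3 = 1/8 per 4-year cycle, under the coin-flip assumption above.
import math

p_no_sweep = 1 - 1 / 8
elections_to_even_odds = math.log(0.5) / math.log(p_no_sweep)  # ~5.19 cycles
print(elections_to_even_odds * 4)  # ~20.8 years: "just over 20 years"
```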
The historical trend here is that there’s a cycle of authoritarianism, in which each side feels threatened by the other and then responds.
Framing the question this way - "will they really be Hitler this time?" - is what people used to justify things like the FBI strong-arming social media companies into censorship. Or a culture of censorship that we know caused thousands of kids to get medically sterilized because nobody in the medical profession was willing to say, "this is wrong."
So if you fear authoritarianism, the historical evidence says we have to oppose it wherever we see it, and not ignore it when our allies do it. The Democrats didn't run a primary and tried to use the courts to keep their opponent from being re-elected. It didn't work, and hopefully that failure can provoke some soul-searching: maybe it is wrong to demonize your opponents and use whatever legal apparatus you can bring to bear on them.
Fearing authoritarianism only from one side, while giving the other a free pass, is a recipe for more authoritarianism.
This is a great, nuanced point about how the practice of low-level authoritarianism can increase the future probability/magnitude. This is my great fear with the Trump administration.
Trump has obviously had many scummy business/legal dealings over the decades, but the choice to pursue legal action against him (and not against thousands of other scummy business people) once he opted to run in 2024 was so obviously politically motivated. I'm desperately hoping that we can avoid an escalating tit-for-tat leveraging of courts/regulations/IRS (and, God forbid, CPS) to punish/harass political enemies. I'd actually like to see Trump make some high-profile pardons of Hunter Biden and other Dems, which would hopefully put the brakes on this potential escalation.
The federal indictments against Trump are very open and shut. He should be in jail right now for falsifying electoral votes and obstructing the FBI investigation at Mar-a-Lago. There is no comparison between Trump and the so-called "Biden crime family." House Republicans led an impeachment inquiry against Biden and 1) don't recommend impeachment, 2) don't recommend charges when Biden is out of office, 3) have secured 0 federal indictments, and 4) appear to have no plans for follow-up investigations.
Hunter Biden is charged with using a gun while using drugs - not with any corruption - and Joe Biden has promised not to pardon him. Trump meanwhile promises to prosecute "the enemy within," hold military tribunals, and pardon everybody involved on Jan 6. There is no equivalency here - Trump and Trump-supporting Republicans are far more corrupt than the fantasies of whatever right-wing media corporation is losing its defamation suit for intentionally lying about a public figure to its viewers this week.
The Democrats didn't run a competitive primary, and I think it was a strategic mistake for them not to field more competitive candidates in the last four years, but their not having one isn't a sign of authoritarianism; it's the same reason the Republicans didn't run a primary in 2020: their candidate was already the President. After Biden stepped down from running, there were other candidates who most likely would have been more electable than Kamala, but rather than selecting the most electable candidate, the party ran his Vice President, the only person the existing rules allowed to continue his campaign given that it was too late to run a primary.
As far as "using the courts to try to prevent him from being re-elected," Trump was impeached by the House after Jan 6th, but the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address.
There are other prospective crimes which he had already been involved in by that point, and then others which took place later. The Republicans in the Senate outright stated that this was a matter for the courts to address, and then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach. If we remain agnostic about whether Trump actually committed crimes, as a matter of principle, I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means.
"the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address....then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach."
Exactly. On this particular topic the GOP has utterly outplayed the Dems in the court of public opinion, the way a housecat outplays a half-dead mouse.
"I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means."
Yep. And the SCOTUS has effectively ratified that as our new system for presidential elections.
Most countries on earth don't have primaries. Even the US didn't have primaries where the *voters* could pick the candidate until the 1970s. Those times and places weren't authoritarian.
But doesn't this assume the parties are monolithic? That isn't true even here in the UK and from what I understand US politicians are a lot more independent minded. Some Republicans and Democrats may even be anti authoritarian!
Whatever was the case before, the current Republican party is about as monolithic as you can reasonably get. Most of the never-Trumpers have already defected.
Can we see that? That model of the US system ceased to be plausible in 1796, as soon as Washington left office. There have been two dominant parties since then (though of course, the two parties are no longer the Democratic-Republicans and the Federalists).
The check wasn't "these institutions are in control of different parties" it's "these institutions are going to fiercely hold on to their own power and not cede it to any other."
But Congress has been perfectly willing to not do its job, letting the President do things that are Congress's responsibility (e.g. tariffs) or not bothering with making sure a law is constitutional and making SCOTUS the bad guy who says "no you can't ban flag burning you idiots."
We have had authoritarian presidencies during Lincoln's term, Wilson's, and FDR's. In Lincoln's case it wound down within a decade of the war ending, in Wilson's the public reacted against it and chose "normalcy" at the voting booth. FDR's left a lasting legacy of empowered unaccountable agencies with massive discretion and power, that continues to be problematic today because it diminishes the role of Congress (which Congress has gone along with for various ideological or electorally-motivated reasons.) But we still have cycles where that authoritarian executive bureaucracy gets politically challenged, as Reagan did, as Trump is doing now, and arguably the Democrats did in the midterm cycle of '06. (If you are of the opinion that Nixon was an authoritarian, you have another easy example to add to this list.)
The American public has been able to fend this off and swing the pendulum back numerous times. It was only during major wars that anyone got very close to dangerous authoritarianism, and in each case the people wearied of war and wartime restrictions. We are not a country that rallies around grand national projects for very long, and outside of such projects we have a very low tolerance for authority. The US people can cope with this, whether or not the system as designed is fending it off in the way the founders planned it.
"No, but we can see that the US system can't cope with either of the main parties becoming authoritarian."
This is certainly an opinion you can have, but neither it nor the implication smuggled into it (that one of the parties has become authoritarian) is immediately evident. The USA is still going. The Democrats want to reduce the power of the Republicans and the Republicans appear to sincerely wish to diminish the power of the government. None of this seems particularly authoritarian.
The next VP says he wouldn't have certified Trump's loss in the 2020 election and that Trump should ignore the courts if they rule in ways he doesn't like. It's very obvious Republicans want to anoint Trump king so they can reap the short-term benefits of staying by his side. If Republicans aren't authoritarian, why did they vote not to impeach Trump when he falsified electoral votes, tried to direct his DoJ to confiscate votes from the states, and sent the Jan 6th mob to pressure Pence and House Republicans into certifying his fraudulent electoral votes? This is what authoritarian dictators all over the world do to make it seem like they have the consent of the people to rule.
Part of the problem is that "authoritarianism" isn't particularly well defined, meaning that both parties constantly see whatever the other one is doing as a sign of creeping authoritarianism. We perhaps need to break down the word "authoritarian" into a few different terms. Consider the following:
Country A is a libertarian utopia where you can do anything you want, except criticise libertarianism. To protect libertarianism, criticism of libertarianism is severely punished.
Country B is a theocracy where everybody is obliged to worship the sun god every day. But this is fine, because there's full and free democratic elections each year and the theocracy party always wins with 90% of the vote.
Which of these countries is more authoritarian? I'm not sure the question makes sense, they're both deeply imperfect in two generally authoritarian directions. (To be clear, these two examples don't represent the left and the right, they're just two examples of different ways to be imperfect.)
EDIT: This post is wrong and was clarified in the replies.
Your framework misses the Peter Schiff case (the name is a stand-in for a certain genre of Austrian economist) who predicts a recession and/or mass inflation every year. We *can* safely ignore their constant doomsaying, given the total lack of credibility, but nevertheless, in some sense, it's almost certain that as time passes we will hit a recession and/or mass inflation. If I understood you correctly, we can't dismiss the Peter Schiffs. But we should.
You could dismiss the claim that there will be a recession / mass inflation at some point in the future, but doing so would require an extraordinarily good foundation.
You can dismiss the Peter Schiffs. If someone constantly predicts a cyclical thing, they will eventually be right. We should *not* say they predicted the event in any meaningful way.
The difference between the Schiff case and the Castro case is that the longer Castro lives, the more likely he is to die in the next year. The longer a country goes without a recession, the less likely it is to have a recession in the next year. In both of these cases, and unlike the nuclear escalation or AI safety cases, we have a fairly large sample size of reference cases.
"The longer a country goes without a recession, the less likely it is to have a recession in the next year"
That's not really true. If you haven't experienced a recession at a time when it actually matters to you (rather than your parents worrying about stuff you don't really care about), then you're less likely to think that it will happen.
Couple that with older people dying off, and there's a tendency to start thinking "This time we've got a handle on things and it won't happen again."
Until it inevitably does. Generally for the same reasons it happened last time.
This is counter to Bayes, but when you have a situation where everyone updating toward a lower probability (as per Bayes) causes the actual probability to go up (because the protective measures put in place are being torn down as a result of the updates), then Bayes is not all that useful.
Yep. That clarifies it. So it's not just that X will almost certainly happen, but that the probability of X's happening soon, increases the more X doesn't happen. ("soon-ness" doesn't have to be literally temporal, as the drug dose case illustrates, but the analogy is close enough).
I think this is similar to the Biden dementia case or the Republican dictator case.
We should calculate a per year risk of a recession (does this go up every year as you get distance from the last recession? I don't know and would be interested to hear economists' opinions!)
Then if we originally expected Peter Schiff to be an authority, we could update to a higher probability based on his word.
Then, as Schiff is proven wrong, we un-update back to our prior.
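A minimal sketch of that update/un-update cycle, with made-up numbers (the base rate, the prior on Schiff being a genuine authority, and his assumed hit rate are all illustrative):

```python
base_rate = 0.10     # assumed per-year recession probability
p_authority = 0.50   # assumed prior that Schiff has real insight
hit_rate = 0.80      # assumed accuracy of "recession this year" if he does

def p_recession(p_auth):
    return p_auth * hit_rate + (1 - p_auth) * base_rate

print(p_recession(p_authority))  # 0.45: elevated well above the base rate

# Each year he predicts a recession and none arrives, his authority erodes:
for year in range(5):
    like_authority = 1 - hit_rate   # chance he misses, if he's an authority
    like_ordinary = 1 - base_rate   # chance of no recession anyway
    p_authority = (p_authority * like_authority /
                   (p_authority * like_authority +
                    (1 - p_authority) * like_ordinary))

print(p_recession(p_authority))  # ~0.10: back to the prior, as described above
```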
Regarding increasing probability of recession as a function of distance from the last one: this economist says yes, but with caveats. I think the best macro model of an economy is a drunk staggering down a hallway. The ideal path he could follow, were he perfectly sober, would be to just walk straight down the hall over time, but being drunk he tends to swing and stagger one direction or the other until he rather painfully hits the wall and course corrects towards the middle. Then he goes too far in that correction, painfully hits the other side of the wall, corrects in the other direction, etc. He also has a bottle of Wild Turkey he's currently pulling off.
So, if an observer says "Huh... he hasn't hit the wall in a while... I guess he's not going to anymore?" the correct response is "nah, just means he is due." At the same time, hitting the wall recently might make it more likely he way over corrects and hits the other wall sooner than expected. All else equal having just hit a wall and recovered should make it less likely he hits again soon, but really all you can say is that if he hasn't hit in a while, he's due.
That assumes of course you do not have good data on what his current trajectory and speed look like, and whether or not his kids have left toys all over the hallway for him to stumble on, or how far down that bottle of Wild Turkey he's gotten in the past hour. If someone says "Oh, he's going to hit again! Look how fast he is moving towards that RC car!" that's a more compelling argument than "It's been a while", but only if you can confirm that he is moving fast and there is in fact an RC car there. That's hard to do, so commentators often claim knowledge of speed and toys to enhance their reputation at little cost.
Australia hasn't had a recession in decades. On a related note, their central bank has been targeting 4% inflation over that time period. Many people thought it was impossible to have a "soft landing", in which inflation was brought down without sparking a recession/unemployment, but that happened recently. Perhaps it's possible for central bankers to sober up!
The first search result for “Australian recession” was a BBC article on how Australia was plunging into a recession in 2020. Granted, it claimed that it was the first since 1990, which might even be true. It is possible that central bankers can avoid recessions, but it is also true that they can lie about economic data. The USSR had amazing growth year over year according to the official data, yet never quite managed to catch up to the USA.
You know, I would have thought so about the US as well, but the last few years have been really bad for official statistics. I suspect it is not USSR levels of lies, but I am much less sure. There is a really big weak point in the system, in that the single source of truth has a very strong incentive to represent that truth one way, and there is little auditability.
In the case of Ukraine you also have to think of it the other way. Russia is acting imperialist: it attacked Georgia and we slapped it on the wrist; then it took Crimea and we sent slightly more aid to Ukraine; then it staged coups and sent arms to the Donbas; then it invaded Ukraine. So as much as we ask what brings us closer to war with Putin, we also need to ask what is enough to deter his expansionist tendencies.
So the other side is that eventually some number of "teeth" will deter his advance on other nations.
Obviously 'make the switch' isn't a very Bayesian thing to say. I *mean* how should I update that probability, lol
The person wouldn't need to change religion, just change to a non-apocalyptic version of their religion.
I can dispute the extinction-level extraterrestrial impact.
Here's a possibility which results in never-asteroid-impact: no impact for enough time for humanity to create a space program able to deflect (or otherwise deal with) incoming asteroids; humanity flourishes and manages to keep the program successful until the far future is so unrecognizable that the notion of an asteroid impact loses meaning (e.g. Earth is destroyed by anthropogenic means, humanity uploads into the matrix and turns the solar system into computronium, humanity survives long enough for the dying Sun to destroy the Earth).
For the Yellowstone supervolcano, I don't have a concrete example but I would be surprised if there wasn't some possibility of geo-engineering that would get rid of the risk.
I can revise the details: the extraterrestrial object is coming and, absent action, will cause an extinction-level event (ELE). So if people don't exist, it will still happen. If people can destroy and/or deflect it, it would still count as inevitably coming, just as a hurricane you know is coming still arrives even if you evacuate, so that your death is no longer inevitable.
Believing that something is inevitable also isn't very Bayesian.
https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities
You should have a certain non-zero probability for the possibility that the thing never happens, and some probability distribution for when exactly it is supposed to happen if it happens.
For example, you are 99% sure that X happens, and you believe the chances are uniformly distributed among the following 1000 days.
If it doesn't happen the first day, stay calm. Even the hypothesis "it will happen" has assigned 99.9% probability that it will not be on the first day. So your probability only moves a little.
But after 990 out of those 1000 days have passed without the thing happening, your beliefs should be 50:50. The "not happen" hypothesis has 1% priors and 100% data match; the "happen" hypothesis has 99% priors and 1% data match.
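A minimal sketch of that arithmetic, for anyone who wants to fiddle with the numbers (the 99% prior and the uniform 1000-day window are the assumptions from the example above):

```python
def posterior_happen(days_passed, prior=0.99, horizon=1000):
    # P(nothing yet | it happens) = fraction of the window still remaining
    like_happen = (horizon - days_passed) / horizon
    like_never = 1.0  # "never happens" predicts silence with certainty
    joint = prior * like_happen
    return joint / (joint + (1 - prior) * like_never)

print(posterior_happen(1))    # ~0.99: barely moves after a quiet first day
print(posterior_happen(990))  # ~0.50: the 50:50 point described above
```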
Gwern has a very nice essay on this transcribed on his site:
https://gwern.net/doc/statistics/bayes/hope-function/1994-falk
"The Ups and Downs of the Hope Function In a Fruitless Search"
"On Bayesian updating of beliefs in sequentially searching a set of possibilities where failure is possible, such as waiting for a bus; the psychologically counterintuitive implication is that success on the next search increases even as the total probability of success decreases."
Just in a normal Bayesian sort of way. Your hypothesis is that the 10am bus is on its way. You start with a 99% prior that it is on its way since about 1% of buses get cancelled. If it's 10:03 and the bus hasn't arrived yet, you ask yourself "given that the bus is on its way, what's the probability it will be at least three minutes late?" And so forth.
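A sketch of the bus version, assuming (purely for illustration) that a running bus's delay is exponentially distributed with a mean of two minutes:

```python
import math

def p_on_its_way(minutes_late, prior=0.99, mean_delay=2.0):
    like_running = math.exp(-minutes_late / mean_delay)  # P(this late | running)
    like_cancelled = 1.0  # a cancelled bus is "late" forever
    joint = prior * like_running
    return joint / (joint + (1 - prior) * like_cancelled)

print(p_on_its_way(3))   # ~0.96: three minutes late, still almost surely coming
print(p_on_its_way(10))  # ~0.40: ten minutes late, now genuinely in doubt
```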
When Fidel Castro reaches 90 and is still not dead, you don't need to start adjusting too far away from the hypothesis that he's a normal human being with a normal lifespan, but if he reaches 150 then you're going to need to consider other possibilities.
I think this is a more complicated version of the same math, which makes it annoying and hard to follow and which I was hoping to avoid. It would look something like:
- Hypothesis 1: Castro is mortal, with his actuarial table looking like that of any other human
- Hypothesis 2: Castro is immortal (maybe he has discovered the Fountain of Youth).
...and then you update your balance between those hypotheses as time goes on. Given that (1) still leaves a little probability mass on Castro living to 100, 110, etc, and (2) starts with very low probability, in practice even Castro living to 110 should barely change your balance between these hypotheses. Once you get to some level where (1) is making very confident predictions and being proven wrong (eg if Castro is still alive at 150) then at some point 2 becomes more probable. You're doing this in the background as you're doing all the other updates discussed above, but hopefully within a normal regime it doesn't become relevant.
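A toy version of that two-hypothesis race (the Gompertz mortality curve and the one-in-a-billion prior on immortality are illustrative assumptions, not calibrated values):

```python
import math

def p_survive_to(age, a=1e-4, b=0.085):
    # Gompertz law: the hazard a*exp(b*age) grows exponentially with age
    return math.exp(-(a / b) * (math.exp(b * age) - 1))

def p_immortal(age_reached, prior=1e-9):
    like_mortal = p_survive_to(age_reached)  # P(alive now | ordinary human)
    return prior / (prior + (1 - prior) * like_mortal)

for age in (90, 110, 130):
    print(age, p_immortal(age))
# ~1e-8 at 90, ~1e-3 at 110, ~1 by 130: hypothesis (2) only takes over once
# hypothesis (1)'s confident predictions are repeatedly, decisively wrong.
```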
What would that look like in a case like the doctor and the experimental drug, or AI risk, where there's no standard graph to compare to and no clear "hmm, they/we should've been dead by now" point?
In principle you should still be able to define your prior probability distribution over the LD50 of the drug; it will just take more work since you don't have good data for comparable situations. For AI risk it's even harder because the choice of independent variable is not obvious (time? compute? quantity of training data?) but once you pick one, the math is the same.
The choice of prior for AI risk is also a lot less obvious than for mortality or even the toxicity of unknown drugs, though.
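For the drug case, at least, a sketch of what a prior-over-thresholds update could look like (the dose grid, the prior weights, and the "no threshold" mass are all assumptions):

```python
thresholds = [1, 2, 4, 8, 16, 32, float("inf")]  # inf = "no harm threshold exists"
belief = [0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.10]

def update(belief, dose_tolerated):
    # A tolerated dose rules out any threshold at or below that dose.
    post = [p if t > dose_tolerated else 0.0
            for p, t in zip(belief, thresholds)]
    total = sum(post)
    return [p / total for p in post]

for dose in (1, 2, 4, 8):
    belief = update(belief, dose)
print(dict(zip(thresholds, belief)))
# Mass shifts onto {16: 0.375, 32: 0.375, inf: 0.25}: each safe dose also
# quietly raises the probability that no threshold exists at all.
```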
Anyone who tells you there are standard graphs that let you do all of Bayesian reasoning is lying to you.
Bayesianism is just a set of constraints of coherence among your different theories, but doesn't do the work of finding theories for you.
Given that we now know that most or all of the "people living to 125" stories are false (they have all the hallmarks of mistaken birth dates or fraud, and they are uncheckable due to e.g. city hall burning down), you should probably start updating around 110.
I thought the record of 122 was still considered pretty reliable though?
Because it's such an outlier, there is actually a pretty big skeptical movement about it. It's been a while, but I read both the strong pro and con cases for it, and overall I lean slightly towards the age being real. The alternative theory revolves around her unmarried daughter assuming her identity to inherit her financial situation.
All "people living to 120" stories turn out to be false apart from one. In the SlateStarCodex days there was a debate on whether the Jeanne Calment claim was likely to be false. I haven't seen anyone update on this.
At that point I'd be updating towards "lied about birth date", not towards "found the fountain of youth".
As I mentioned in my response, it seems worth breaking out near term predictions that have gradual onset of symptoms.
"The chance of Castro dying of cancer" is something that should probably be predicted 3 to 6 months in advance. So if you can verify that Castro does not have cancer now, your prediction of Castro dying in one month from cancer might strongly defy the actuarial tables. Provided you trust the doctors examining Castro.
Yeah, in particular this quote below is wrong, unless Scott is referring to a more sophisticated actuarial table than the one from the SSA (https://www.ssa.gov/oact/STATS/table4c6.html):
"Your probability that he dies **in any given year** should be the actuarial table. ... If Castro seems to be **in about average health** for his age, nothing short of discovering the Fountain of Youth should make you update away from the actuarial table."
Older adults in average health are much less likely to die than the actuarial table's probability of dying that year, since the deaths for that age will be dominated by those in below average health with gradual onset issues. This is much less true of a young person, where accidents are more prevalent.
Logically, I think such failed predictions **should** be evidence against inevitability. We should reduce our probability somewhat, even in the inevitability case, and these should count as failed predictions. It's just that those predictions have a much lower prior initially. If Castro lives to 500, I'm definitely going to be assigning more weight to the possibility that he's immortal than I do now.
If we're playing higher/lower for some number between 1 and 100, and I've reached 99 and am told it's still higher, I know it's 100 - but only if my assumptions were right. If we began with a 0.1% probability that the number might be outside the range, then really my new position ought to be ~91% it's 100, 9% it's outside the range. If that number is really "Putin's threshold for escalation", then "never" *is* increased by each failed prediction, simply because it takes a proportional share of the probability space eliminated. It just doesn't mean it's particularly high, if we started with a low prior.
But to someone who began with a much higher prior against escalation, potentially those failed predictions could push their "Never escalates" probability to somewhat probable levels.
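The higher/lower arithmetic, spelled out (the 0.1% outside-the-range prior is the one from the example above):

```python
p_outside = 0.001
per_value = (1 - p_outside) / 100  # uniform prior over 1..100

def after_eliminating(k):
    remaining = per_value * (100 - k) + p_outside
    return per_value / remaining, p_outside / remaining

print(after_eliminating(99))  # (~0.909, ~0.091): ~91% it's 100, ~9% outside
```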
It's more that, if we start with a prior like:
- 40% on all the missiles being rusted solid and unable to fly,
- 20% on needing to nuke Moscow before he responds,
- 20% on Putin actually following through on his threats, and
- 20% on random provocation N setting him off (say, 0.1% on each of threats 1 to 200).
With a prior like that, we can say that yes, Putin has threatened nukes 136 times already, but there is still a 0.1% chance that he launches nukes at the 137th attempt by Ukraine to defend itself.
Under this model, Scott would be right. Following your logic 64 times would give a 6.4% chance of nuclear war, which would be bad.
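Here's that toy prior as a calculation (all four buckets and the 0.1%-per-provocation spread are the assumptions stated above, not estimates of the real situation):

```python
def p_launch_on_next(failed_threats):
    rusted, need_moscow = 0.40, 0.20
    follow_through = 0.20 if failed_threats == 0 else 0.0  # falsified at threat 1
    per_threat = 0.001  # 0.1% on each provocation 1..200
    remaining = per_threat * max(0, 200 - failed_threats)
    total = rusted + need_moscow + follow_through + remaining
    return per_threat / total

print(p_launch_on_next(136))  # ~0.0015: roughly the 0.1% figure above
# The 64 provocations still on the table carry 0.064 prior mass in total,
# which is where the "6.4% chance of nuclear war" comes from.
```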
My model is more, ‘Putin’s threat lost its force when he complained so loudly earlier in the war, letting Ukraine strike Russia with ATACMS was safer this time than it would have been at the start of the war’. My position on AI risk is similar, enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI, if they have enough practice stopping bad guys with other 10^26 FLOPS AIs, even if the 10^27 FLOPS AI itself is the bad guy.
> enough good guys with 10^26 FLOPS AIs can probably stop a bad guy with a 10^27 FLOPS AI,
Can enough good guys with a T-rex stop a bad guy with Godzilla?
There are 2 questions here.
1) Power scaling. One order of magnitude is bigger than the gap between humans and monkeys.
2) What is the state of the 10^26 FLOP AIs? Are all those AIs flawlessly trustworthy? Are all the humans and AIs working together to bring down the 10^27 FLOP AI?
Alternate vision: some of the 10^26 FLOP AIs are American, some Chinese, some owned by OpenAI, DeepMind, Microsoft, Baidu, etc. The Russians and Ukrainians and Koreans and Israel and Iran are all trying to use their 10^26 FLOP AIs in active conflict against each other.
The AIs display all the worst behaviour of recent chatbots, unruly toddlers, and drugged-up monkeys. These little chaos gremlins delight in finding novel and creative ways to disobey their instructions. They aren't smart enough to break out of their cage. A drone AI might perform flawlessly in simulations, but once it's in a real drone, it just draws contrail dicks in the sky until it runs out of fuel and crashes in a field.
Ten monkeys could absolutely stop one human if you dropped the human in a jungle. My instinctive response is that each new AI will be better at manipulating geopolitical conflicts, but the geopolitical conflicts will get harder to manipulate each time that happens. I don't know if that's how it'd play out. Would Trump have been 10x more effective if he'd been 10x smarter, or would he have filled the same niche?
> if you dropped the human in a jungle.
Perhaps. I do feel that dropping a human in a jungle with no warning or preparation or tools is a bit skewing the hypothetical towards the monkeys.
If the human had some training in relevant skills, and a few weeks before the monkeys showed up, they could probably make a bow and arrow or something.
What if you dumped 10 monkeys in New York for every person living there?
And this doesn't even address why the monkeys would want to team up against the human, as opposed to fighting the other monkeys.
Part of the answer here is that one person's predictions alone are not enough to answer this question well. You could make up a bunch of factors in order to do Bayesian math about it, but you'll do much better by doing things like asking other credible people for their predictions and looking at the direct proximal evidence again.
For example: if one person predicts something, then the same person predicts it, then the same person predicts it, maybe that thing isn't happening.
If one person predicts it, then two people predict it, then ten people predict it, maybe you are climbing up the tails of a normal distribution of predictions, towards the center of the distribution where lots of people predict it and it's likely to actually happen.
That's just the general problem of induction, and there is no general answer to it. After some number of swans, all of which have been white, I should start to think that all swans are white. But any rule for how to do it goes wrong in some worlds. All we can say is that you should have some internally coherent policy for updating (i.e., if you say "the next swan is 75% going to be black", and it isn't, then you should shift the relevant hypotheses by a factor of 3 to 1), and that it's better if you can be lucky so that your coherent policy actually lines up with the parts of the world that you know nothing about yet.
I almost posted about this actually. Because not only should you update every time you see a white swan, you should update a very very very little bit every time you see a non-white object that isn't a swan (a universal claim is logically equivalent to its contrapositive, after all).
The thing is that I think most of these discussions of Bayesian thought are kinda similar to looking at 3 swans and a red car and either guessing that all swans are white or saying "most animals aren't monochromatic, and the three swans we've seen aren't strong evidence anyway, so we'll just keep assuming we're right."
If you’ve got a random sampling model of the universe, and you think most things aren’t swans and most things aren’t white, then that’s right (and Janina Hosiasson wrote a good paper giving that Bayesian solution to the paradox of the ravens back in the 1930s).
But if you don’t have a random sampling of the universe model, you can actually get positive instances being disconfirming. For instance I currently believe very strongly that all unicorns are pink (because I believe very strongly that there are zero of them) but if I saw an actual pink unicorn, I would no longer be so sure. Similarly, if someone is highly confident that there are no rats in Alberta (ie that all rats live outside Alberta), then observing a rat just on the outside of the line would likely make them less confident, even though it’s an instance of the generalization.
Thanks! These are all great notes, and I'll for sure look up that paper.
I still think Bayesian reasoning when you have an extremely small portion of all the possible evidence (because we can't access most of it, or because a lot of it is in the future, or because we're trying to reason about a one-time event and can't repeat the event in different conditions to help us update our intuition) tends closer to the kind of heuristic guessing that it's supposed to protect against than a lot of rationalists are comfortable with.
It's better than just straight-up using bias to make all your choices, but not by much. Especially if you've written a bunch of recent articles about how you shouldn't update your beliefs based on argument, real-world events, or people you trust being wrong.
There are broadly speaking two kinds of "inevitable" event - those with a fixed probability for a given unit time (e.g. "there's an X% chance of an asteroid hitting Earth" or "there's an X% chance of a nearby star going supernova"), and those with an increasing probability over time (e.g. "humans eventually get old and die"). The OP is about the second kind.
For the first kind, you should steadily update downwards each time the event fails to happen in proportion to your probability estimate - e.g. if you think there's a 50% chance of nuclear war per year, this can be disproven fairly quickly, while if you think there's a 1% chance of nuclear war per year it'll take longer.
For the second type, you should update downwards more strongly as time goes on, since your theory is making incredibly strong predictions. A baby not dying of old age proves nothing, even a 90-year-old not dying proves little, but a 130-year-old not dying is pretty suggestive and a 200-year-old not dying of old age proves a lot.
In the case of ASI, living for a year without a Singularity proves little, living for 50 years proves a lot more, 100 years even more, and living for 1000 years probably settles it.
This is, of course, annoying for people who don't expect to live 1000 or even 100 years. There are at least some other observations one can make besides "has ASI appeared", like trying to graph rates of AI progress, looking for "warning shots", and looking at analogous events, much like one might eat a little of a substance and try to gauge how likely it is to poison you in large amounts based on things like taste and stomach cramps.
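A sketch of the two kinds of update side by side (the 50% prior, the 5%-per-year constant hazard, and the doubling-per-decade hazard are all illustrative assumptions):

```python
def posterior_threat_is_real(years, hazard, prior=0.5):
    p_quiet = 1.0  # P(no event in all those years | threat is real)
    for t in range(years):
        p_quiet *= 1 - hazard(t)
    return prior * p_quiet / (prior * p_quiet + (1 - prior))

constant = lambda t: 0.05                              # kind 1: fixed per year
increasing = lambda t: min(1.0, 0.01 * 2 ** (t / 10))  # kind 2: doubles/decade

for years in (10, 50, 100):
    print(years,
          posterior_threat_is_real(years, constant),
          posterior_threat_is_real(years, increasing))
# The constant-hazard belief decays steadily; the increasing-hazard belief
# barely moves at first, then collapses toward 0 once continued survival
# becomes impossible to square with the theory.
```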
Here's an interesting (thought) experimental setup for this:
You have 100 boxes, initially all the boxes are empty. You have an associate prepare them the following way: first she flips a coin, if it lands heads, she doesn't do anything. For tails, she puts a ball in one random box.
Later you open the boxes one by one. Each time, what's the probability that the next box you open contains the ball? At each step, what's the probability that the ball is in one of the remaining boxes?
I haven't done the math, but I think the probability that the ball is in the next box should go up on each step, but the probability that the ball is there at all goes down.
(I think this is easier to see with an alternative but equivalent setup: you double the number of boxes to 200, skip the coinflip, and have your assistant always place the ball somewhere, but then you only open 100 boxes.)
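The math, for the curious (exact, no simulation needed):

```python
def box_probabilities(opened_empty, n=100, p_ball=0.5):
    like_ball = (n - opened_empty) / n  # P(those boxes empty | ball was hidden)
    p_somewhere = p_ball * like_ball / (p_ball * like_ball + (1 - p_ball))
    p_next_box = p_somewhere / (n - opened_empty)  # uniform over what's left
    return p_next_box, p_somewhere

for k in (0, 50, 90, 99):
    print(k, box_probabilities(k))
# P(next box) climbs from 1/200 toward 1/101 while P(ball anywhere) falls
# from 1/2 toward 1/101 -- confirming the guess above.
```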
Gwern hosts at his website a writeup on this sort of situation. You are correct that your probability of the ball being in the next box increases, but your probability for it being in any box decreases.
https://gwern.net/doc/statistics/bayes/hope-function/1994-falk
In some of these cases you can get intermediate warning signals before the cliff: For example, some political observers noticed Biden was doing almost no media appearances in the last year or two and raised their suspicion, and Putin can escalate in other ways (e.g. by sending Yemen advanced anti-ship missiles) before jumping straight to nukes. Depending on how severe the risk is and how sure you are you'll get an early warning, it can be better to have a policy of just waiting for that.
(This probably isn't the case for AI, where I think we're likely to get a fire alarm but most fire alarms are already pretty catastrophic and may not leave us enough time to respond after they happen).
Political observers were noticing Biden doing unusually few media and public appearances *during his campaign*, and the observers on the right frequently mocked him for "campaigning from his basement". This tendency didn't stop during his presidency, although it certainly and obviously got worse.
Like, the Republicans were right about this one, full stop.
Scott's lack of acknowledgement of his own bias, and that of many prominent media personalities, really weakens that example and IMO the overall piece.
Ignoring that bias is one example of ignoring the lack of information for most people in these situations. If there's a whole staff or a whole structure strongly influenced in favor of presenting a certain image, like limited appearances only during certain daytime hours, limited press questions, etc, then whatever updates an observer makes will be wildly skewed.
Maybe there are good reasons for Scott et al. not to trust Republican reporters, but those reasons aren't that the reporters were wrong about Biden; he distrusts them about Biden because of whatever past incorrectness there was (but really, mostly bias and social bubbles).
The argument during the 2020 campaign was that he was campaigning from his basement because of COVID, and Democrats more generally were more inclined to participate in NPIs than Republicans. This made for a strong alternative explanation, especially as so many other, younger, more active, Democrats also campaigned largely via Zoom (the "from his basement" bit was just the location of his studio in his house in Delaware).
The bit that is relevant is that other Democrats started making a lot more media/public appearances in 2021 and (especially) 2022, and Biden didn't (he made a few more, but not a lot more). That should have been more suspicious, but there were so few people making the (IMO, in retrospect, correct) case - that 2020 was COVID, but 2022 was a sign he was deteriorating (whether from dementia or physical decline is irrelevant) and being protected from the public - that I didn't come across a clear version of that argument until well into 2023, after which I did find it relatively convincing.
I don't blame you, to be clear! I blame the reporting malpractice. (And one notable difference to pay attention to during COVID, arguing against that interpretation: How much Zoom did Biden actually do, as opposed to purely scripted and strictly controlled recordings? Biden's media reclusiveness was weird and stood out even against the background COVID reclusiveness.)
Which, to be clear, didn't actually end; see all the people who were entirely blindsided by the fact that Trump won at all, much less won as thoroughly as he did.
But, like ... that's not just evidence about Biden. It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.
>It's evidence about the entire edifice of information coming to you (not you personally, but in general, people who were blindsided by Biden's debate performance). Everything a huge segment of the population learned, they learned through the same set of filters that kept out awareness of Biden's issues.
True, unfortunately. Seeing the Biden v Trump debate, a natural question is indeed "So what _else_ are the media hiding from me?"
Given that the media were themselves blindsided, the hiding is blinkers rather than intentional.
Many Thanks! Could be - it isn't too clear how the concealment of Biden's cognitive decline was divided between the White House staff and news media. All I could tell from my position was that I kept seeing 'Biden is fine', 'Biden is fine', (I'm mostly remembering New York Times coverage here) then "We defeated Medicare." - oops... (Also - were the media truly blindsided, surprised by what they saw, or did they overestimate what they could get away with?)
I didn't stay up to date on the American 2024 election in general. If someone wins an election, I expect their close supporters to prop them up with a good solid stick for the rest of their term, regardless of their actual physical or mental health; but if someone is clearly incompetent, I would have expected a functioning party apparatus to use the re-election as an opportunity to swap the declining candidate out for another candidate.
The outcome of "wait until, inevitably, the problems are blatantly obvious, even though they must have been privately obvious long before, and even though there was an opportunity to back a different candidate" is such a poorly-thought-out strategy in an overarching sense that I wouldn't have expected a functioning political party to pursue it. And especially not without more pushback from inside the Democratic tent.
If he were senile in 2020 how did he beat Trump in their debates?
Senility isn't usually a boolean; you don't flip from not-senile to senile from one day to the next. You have good days and bad days, with the frequency and intensity of bad days slowly increasing, and the frequency and intensity of good days slowly decreasing. His 2024 debate performance was a particularly bad day - see all the people confused by how lucid he's been behaving recently. Biden now has more control over his media presence and simply doesn't show up on the days when he wouldn't be able to perform.
Such a good point. And generally, in many verticals it's like the problem of security services, which either seem redundant and money down the drain if nothing actually happens, or the most important thing in the world if shit hits the fan and the security saves you. In both cases the security service did the same job; we just dismiss it when it's uneventful. Same with pessimistic predictions: they could be right to be cautious even if nothing actually happened. As a parent I've learnt this very quickly, being the more cautious parent.
Security services exist because we have experience with the security breaches which we want them to prevent. We don't have experience with whatever Scott is worried about with AI.
Arguably we do. We have examples of new technologies that conferred large military and/or economic advantages to the first people to get them. We have examples of colonial empires where one nation got a big enough relative advantage to conquer large parts of the world. We have the example of humans evolving capabilities that let them take over the world in a way that other animals were basically powerless against. You could play a game of reference class tennis, at least.
But I also think there's a really important higher-level point here about the fact that if you choose to ignore all threats that you don't have precedents for, then it's impossible to stop world-ending threats, because it will always be the case that the world hasn't ended before.
Imagine a billion hypothetical versions of the world, each with different threat profiles. Some of them, fortunately, have no world-ending threats to worry about. Some are at risk of meteors, some of runaway greenhousing, some of alien invasions, others of various other threats. You don't initially know which world you're in, though you might be able to figure it out (or at least narrow it down) by examining the available evidence.
If you do your homework and plan carefully, you might be able to anticipate and ward off some of the threats your world faces. But if you have a policy of ignoring unprecedented threats, then every world that faces a world-ending threat will just die. You might be lucky and be in one of the worlds with no such threats, but investigating whether the world has *already* ended does not tell you whether you're in one of those worlds or not, so it doesn't save you if you're in one of the dangerous worlds.
That's only true of threats which are world-ending in an all-or-nothing sort of way. A world which took respiratory disease risks seriously (perhaps if the reaction against chemical weapons in WWI had escalated into a broader moral panic over air quality?) might have transitioned away from coal, toward some combination of nuclear, wind, and solar, soon enough that the greenhouse effect never became a notable concern.
Similarly, some world which aggressively expanded into asteroid mining might have set up a system capable of deflecting naturally-occurring dinosaur killers as an incidental side benefit of salvaging the debris from industrial accidents, or thwarting malicious attempts to bombard specific cities.
In both cases, the potential world-ending threat was real, and required real effort to solve, but that effort was undertaken in response to related, non-world-ending threats for which there was abundant precedent.
Yeah. I remember a press conference from some time before ChatGPT where some military representative was insisting that AI would not be given the kill switch. And I'm sitting there thinking "well, you still use land mines, right?" The claim that the military would keep the kill switch in human hands if there was an advantage to it not being in human hands is hard to reconcile with current military practices.
Regarding being unable to stop all world ending threats while the world still exists, I suspect that while there are people like what you describe many people have requirements for mechanistic proof that the world *could* reasonably end before they act. Fear of global nuclear war increased significantly after the first atomic bombs were dropped. Models of climate change, coupled with clear evidence that the models function in the near term, (as well as feasible methods to actually address climate change) are required by many for action on climate change. Such requirements increase our exposure to danger. But they also filter out a tremendous number of threats that are unlikely to materialize, so we're not running around like Chicken Little, unable to function.
We are constantly beset by a million perils. Isolating one and saying "why aren't we doing anything about X" is a good rhetorical strategy if one is worried about X. But the rare honest answer to that question tends to include acknowledging the true current weight of threats A through W.
If the Doors of Perception were cleansed, we would see the world as it really is... too complex to think about.
I would say those "reference classes" are analogies, which I regard as some of the least persuasive kinds of arguments:
https://entitledtoanopinion.wordpress.com/2009/02/04/what-evidence-is-convincing/
Analogies frequently hide how much we know (and don't know) about a subject by substituting a different subject.
True. And if you follow that heuristic, you will get bitten by any problem that is significantly novel.
New things exist. New things can sometimes be predicted theoretically.
I agree new things exist. I'm more skeptical of our ability to understand them via theory in advance.
I think it is important to keep in mind the cost of the counterfactual. One may believe that *not* escalating with Russia could have negative consequences if you think Putin is winning and his winning could be very bad for humanity.
On the topic of AI, which I suspect is what this post is really about, I think the same is true. Even if we know we are in an infinitely escalating game, we cannot ignore counterfactuals like "if we slow advancement in the US, it will still happen in China; it will still happen inside the government; people will continue to die of horrible diseases because we can't cure them; we will not generate technological advancements that could allow post-scarcity; etc."
Yes, at some point AI probably will be able to out-compete individual unmodified humans. However, knowing that doesn't mean that we should stop developing AI because the alternative (no AI in the future, or no open source AI in the future, or only governments have AI in the future) may be much worse for most of us than the future where AI out-competes unmodified humans.
Yes, I think particularly in the case of Ukraine/Russia, this is essential.
It's not really escalating to match your opponent's level of aggression - and Russia has been firing missiles into Ukraine for 3 years now.
In 2014, Russia took Crimea, and the West did basically nothing. Well, maybe we learned that Russia was willing to annex neighbouring countries.
If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction; it's what the Russians say they will do.
I looked at the extensive list of examples in the "genre of commentary". A list of seven sources - superficially that's impressive. But it's all tweets. It doesn't really surprise me that Scott can find a few twats making erroneous claims. I'm sure you could do that about anything! But looking more closely, some of them don't even make the argument Scott claims.
The real arguments for why Russia won't escalate are more extensive, and nuanced, and cover things like it not being in Putin's interest to start WWIII e.g. when he's hoping Trump will offer more favourable terms in a few months.
Really, the point isn't that Russia won't possibly ever start WWIII at some point, it's that Russia has a history of making false threats and it's a mistake to take them at their word. In fact, the first tweet in the list says this explicitly:
Slazorii : "Putin made nuclear threats when the west "escalated" to provide tanks, when they provided HIMARS, when they provided ATACMS, when they provided F16s... nuclear blackmail is simply a feature of ru foreign policy. he will not be deposed if ukraine continues to strike inside russia."
Claiming this is equivalent to "Russia will never initiate WWIII under any circumstances" is... well, it's ... I'm going to be diplomatic and say it's a mistake.
> If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are non left. This is not even a prediction, it's what the Russians say they will do.
Um, what? I don’t say you are wrong, but I must have missed a press release.
Even in the old days of the USSR, I think this was more often stated as that communism would inevitably sweep the world rather than that the USSR would inevitably conquer it.
They talk about this on Russian state media a lot.
I'm not sure this is the best example, but here's one example, translated courtesy of Russian Media Monitor: https://www.youtube.com/watch?v=T6g6hlvT1Po
partial transcript (retyped by me):
The west is truly waging a war against us.
Is there any doubt that they are waging a war against us?
No, there is no doubt, we understand that.
We should ask, so we know it for later - where should we plant our flag next?
Where should we stop after liberating Ukraine of this disease, since they are waging a war against us?
Or this one: https://www.youtube.com/watch?v=GKDxGX2llqk
Interesting for having a visitor say something they weren't meant to.
partial transcript:
(political scientist) We keep talking about Ukraine. In reality, no-one cares about Ukraine.
They have the following goal -
Russia is embarked on a certain expansionist course. This is a fact. And not only in Ukraine, by the way.
NATO countries, European countries, want to somehow halt this expansionist course.
They don't have a concept of how to accomplish that. They can't do it, because -
(host interrupts) it's not an expansion, it's defending natural interests. We should take a short break.
(At this point, the autogenerated transcript says "and as a result of our own safety, we need to take a break.")
Those are both a few months old. I am sure there are more examples on there, and more elsewhere. I'm not talking about the (numerous, constant) threats, although those do make it harder to find the official statements or analysis. I'm pretty sure I've also seen videos of several Russian or allied officials making such statements, but I don't know how I'd find those.
But if you don't think an overall impression of the official state media counts, maybe you would consider that it's not just me that thinks this.
https://www.bbc.co.uk/news/world-europe-68692195
Quote of analysis by Frank Gardner, at the end of this article (from March this year):
//
The latest warning from Poland's prime minister echoes what his neighbours in the Baltic states have been saying for some time; if Russia can get away with invading, occupying and annexing whole provinces in Ukraine then how long, they fear, before President Putin decides to launch a similar offensive against countries like theirs, that used to be part of Moscow's orbit?
[...]
Vladimir Putin, who critics say has just "reappointed himself" to a fifth presidential term in a "sham election", has recently said he has no plans to attack a Nato country.
But Baltic leaders like Estonia's Prime Minister Kaja Kallas say Moscow's word cannot be trusted. In the days leading up to Russia's full-scale invasion of Ukraine in February 2022 Russia's Foreign Minister Sergei Lavrov dismissed Western warnings of the imminent invasion as "propaganda" and "Western hyperbole".
//
Fair enough. That’s still quite a ways from “there will be [no countries] left.”
God knows I’m no fan of either Russia or Putin. (I sent flower seeds to the Russian Embassy back when that was a thing.) But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence, and is met with the kind of concerted opposition that the West has deployed, I can understand them feeling like they need even more of a buffer than they thought, and while I don’t consider acting on that feeling to be in any way defensible, I would not call it equivalent to gunning for world conquest.
(But if I were in Moldova or Latvia or even Poland, I’d be keeping my powder dry.)
Perhaps I’m just over-reacting to a bit of hyperbole on your part?
> But when a country tries to assert its hegemony over a region that it has traditionally considered within its sphere of influence
The full list of countries this language applies to is:
- Russia
That's the *only* state that has *ever* declared that the countries neighboring it exist only as human shields that will get torched to slow a foreign invasion. And the Russians have always considered their sphere of influence to be "everywhere too weak to stop them". The whole "sphere of influence" concept, where one great power basically rules everyone it can through cat's-paws and sockpuppets, is a distinctly Russian concept.
And that is a distinction from empires, where kings would owe public fealty to their emperor. The Russian model is a group of "independent" countries with sovereign rights and accountability for their "independent" actions. So, for example, Moscow could order a hit, East Berlin security would carry it out, and because East Germany was (not really) an independent country the response would hit East Germany, not Russia.
Cuban Missile Crisis? Monroe Doctrine? British Empire?
<i>If the West doesn't stop Putin's aggression, then countries will fall, one by one, until there are none left. This is not even a prediction; it's what the Russians say they will do.</i>
Counterpoint: Russia's military capacity has been sufficiently degraded by three years of war that there's no prospect in the foreseeable future of Russian tanks rolling across Europe to the Atlantic. At this stage, the risk of escalation provoking WW3 is bigger than the risk of de-escalation emboldening Putin to march across Europe, because the latter scenario is no longer remotely possible, if it ever was.
But you realise that /is/ what stopping them entails, right?
The way you put that, I think 'the foreseeable future' is probably shorter than you are assuming. Russia has a massive capacity to consolidate and reform.
In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.
Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.
<i>In mid-1941, Germany invaded the USSR on a broad front, destroying much of its military and covering hundreds of miles, to reach within 15 miles of Moscow by November. Despite massive losses, the Russians held them back, and by 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy.</i>
Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round. Not to mention, the Russian Federation currently has a TFR of just 1.5, putting it at no. 170 out of 204 countries and territories. IOW, Russia simply doesn't have the demographics to keep throwing young men into the meatgrinder, even if it manages to restock its materiel supplies and keep its armaments production on a long-term war footing.
<i>Also, I'd like to point out that I didn't say it would all happen in one 'push'. Russia has been incrementally taking bites out of neighbouring countries for centuries. Sure, it lost some ground with the fall of the USSR, but it's been trying again since Putin came into power.</i>
I think "lost some ground" is underplaying it a bit TBH; Russia basically lost all the territorial gains it had made since Peter the Great's time. It's possible that, in another three hundred years' time, Russia will be back to its pre-1991 borders (although the ROI on wars of conquest is lower now than it was for most of the past three centuries), but TBH I don't think this possibility is worth risking a nuclear war over.
> Russia was able to do that because of massive aid from the Western Allies, and in particular the USA, which obviously isn't going to be available this time round
Worth noting that it does have the support of the world's current industrial superpower (which might have a lot of spare weapons to throw around once it's done with its current plans for Taiwan). Their TFR problem is harder (but OTOH Eastern Europe's is even lower, and even Western Europe isn't much higher).
Your attempt at formatting isn't doing anything... I'm sorry, I don't know how to quote properly here either.
To the first point, I don't think Russian production towards the end of WWII had much to do with Western-supplied aid. They did that themselves. Western aid helped with the war effort, sure... but that's something of a different matter, and not really relevant here.
The fertility rate isn't a constant, and it is easily changed in a sufficiently authoritarian regime. Furthermore, when Russia captures territory, it conscripts the population and uses those conscripts to attack the next target. This is what it has done to the adult male population of the annexed areas of Crimea, Donbas, and Luhansk.
To your second point, okay, what's your plan? Do nothing whenever Russia threatens to use nukes, to avoid starting WWIII?
Supporting Ukraine in any way is "risking a nuclear war", because Putin is more than willing to threaten starting one whenever aid is provided to Ukraine.
Here's how I see your strategy going down:
Without support, Ukraine will fall, and Russia will take all of it, in fairly short order.
There will be a pause while the population is subjugated.
However, without much delay, Moldova will also be taken. I don't think there's really any question about that - it's a small, poor country and Russia already has a foothold there.
Georgia will also be claimed at some point, and the West won't be able to respond without "risking nuclear war".
A couple of additional neighbours may also be taken sequentially, but once all the immediately unstable and unprotected regions around it have been claimed for Russia, there will be a few years while it consolidates and re-arms.
All its neighbours will be panicking and building up their defences, and maybe a few more pacts will be signed. However, these mean very little, because remember, we can't risk nuclear war, and providing support to another country risks that. So all countries look to their own defence.
Meanwhile, Russia infects neighbouring regions with partisans, randomly attacks places and makes claims about how opposing forces did it... you know, the usual, for modern-day Russia.
When the time is right for them, Russia invades a small part of a NATO country. Nowhere populous or important, just some backwater nobody cares much about. NATO doesn't do anything, because of course that would be risking WWIII.
The precedent having been set, Russia assimilates the rest of that country.
Then it repeats this process, from a slightly superior position each time.
Where in that do you intervene? The least dangerous point is at the very beginning, and the second-least is as soon as possible.
Firstly, I said that *at this stage*, the risk of escalating the war is bigger than the risk of Putin being emboldened by peace proposals, so you can drop that silly "We can't do anything whatsoever" strawman you've constructed.
Secondly, you're being extremely blasé about how easy it is to control tens of millions of people who really don't want you to control them, particularly because, in your scenario, those tens of millions would have to be armed (hard to conscript them into expanding your empire without giving them weapons, after all).
Russia has lost a huge amount of what you might call its "soviet inheritance" of tanks and APCs. But structurally speaking, its military is much healthier than it was at the beginning of the war. And furthermore, I would argue that there are much worse things than an invasion that Putin can do with control of Ukraine.
In 2022 the military was incredibly brittle because of the overwhelming amount of cronyism and embezzlement. If they had been pushed harder in that state, I think they would have completely disintegrated. But the sluggish response by western powers gave them time to begin addressing the widespread corruption. And the seriousness with which Putin treated the war provided political cover for ordinary Russians to start airing these issues publicly. In fact, complaining about corruption in the ministry of defense is basically the only allowed form of political criticism. It's far from being solved, but it seems like the message has been received that the military is no longer just for show, and results actually matter.
But beyond that, I think Putin's experience with Syria has provided him an effective playbook for weakening Europe. I suspect that even if given the chance, Putin would not do a thunder run to the Polish border. He would take Kyiv and install a friendly government, while allowing some of the military to retreat to the west. This would divide the country in a way that leaves the "true" government economically unsustainable and in a constant state of war.
The result is a failed state, and a long, slow refugee crisis that can be channeled into the rest of Europe to bleed it economically and further increase support for far-right anti-immigration parties. If you thought the situation with Syria was bad, just wait until it's a country with twice the population right next door. And Europe and the US have already demonstrated that they don't have the appetite to meaningfully engage the Russian military in order to stabilize a situation like Syria.
Here’s a counterpoint: a system’s purpose is what it does. When we observe Russia, what we see is a country that is, relative to its modern history, probably at its smallest geographical extent. Certainly Russian territory is very small compared to its Imperial or Soviet height.
In contrast, the extent of US client states is the widest it has ever been. The US has allies and vassals across the entire globe, and every decade a new member joins closer and closer to Russia. It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes.
Therefore, if we are to evaluate the two perspectives, the Russian claim that the US is bent on world domination resembles reality much more closely than the American claim that Russia is an aggressor nation planning to march across Europe. Now, we may disagree with the Russian conclusion that it therefore has the right to enforce border security, but it is inaccurate to say that there’s clear evidence that Russia plans to invade all of Europe.
Historically, these kinds of border skirmishes and proxy wars between empires are extremely common, and have only culminated in global warfare a few times in about 700 years, which is a pretty good track record of border wars being limited in scope. Therefore, I would strongly disagree that this indicates Russia plans to invade “the entire world” or some other claim.
"It is additionally well established that the US uses color revolutions and sabotage to overthrow foreign governments and install friendly regimes."
What is the strongest evidence you have that color revolutions have been used by the US to overthrow governments and install friendly regimes? Do you think Russia has also tried to overthrow governments and install friendly regimes?
My evidence is the extensive, well-documented involvement of the CIA and US government in counter-Soviet revolutions in Latin America and the Middle East (including funding for, among other orgs, ISIS when it was fighting the Syrian government), as well as the fact that color revolutions almost exclusively strike US-unfriendly governments and replace them with US-friendly ones.
Sure, maybe Russia does get involved in foreign revolutions. It certainly did when it was a member of the Soviet Bloc, but considering it has literally one ally now and the US dominates half the globe it seems likely that only one of these countries has been successful in this practice.
"Color Revolutions" are, pretty much by definition, populist uprisings in favor of democracy and against authoritarian governments. If they don't meet that standard, we just call them "revolutions".
So color revolutions are inherently aligned with the interests of anyone who likes democracy and dislikes authoritarian regimes. Being on the same side as every color revolution ever is evidence that one is on Team Good Guy, not that one is secretly causing all the color revolutions. I mean, color revolutions almost exclusively strike New-Zealand-unfriendly governments and replace them with New-Zealand-friendly governments; is New Zealand part of the conspiracy?
The fact that you think the decision between a democratic and non-democratic government is “Team Good Guy vs Team Bad Guy” is the core of the problem. Democratic governments are not inherently good, non-Democratic governments are not inherently bad. “Democratic” Britain terrorized colonial Ireland, India, Africa, and Asia for centuries. “Democratic” America started a war in the Middle East on false premises, killing millions of Iraqis to secure oil fields for Europe and the US.
“Autocratic” Singapore under LKY turned a third world unlivable slum into a first world country in a generation. “Autocratic” El Salvador turned one of the most violent countries in the world into one of the safest places in North America.
Among the long list of horrible, horrible things the US government and its core military and police organs have done in living memory:
* Perpetrated a policy of implicit ethnic cleansing against Indigenous Americans.
* Funded ISIS, creating the largest terrorist organization the world has ever known
* Burned the Waco Compound to the ground and took triumphal photos on the smoldering corpses of literal children
* Waged terror wars against Vietnamese and Cambodian civilians
* Funded regimes in Latin America that committed ethnic cleansing and terrorized their own populations.
* Funded a military coup in Indonesia that is responsible for massacring over 60,000 people
* Funded ISIS, creating the largest terrorist organization the world has ever known
* Experimented on US citizens through the MK Ultra program, for which it has never apologized.
* Conspired to murder US citizens during Operation Northwoods to justify an invasion of Cuba.
* Illegally detained and tortured Arabs and Pashtuns during the Afghanistan and Iraq wars
* Used drone strikes against American civilians without legal justification
* Keeps prisoners in prison longer than their established sentences to sell their labor for jobs which include, among other things, fighting wildfires, which has an extremely high casualty rate. This is systemic to such an extent that even Kamala Harris was found to have done this during her AGship in California.
* Did I mention they funded ISIS, the largest terrorist organization the world has ever seen?
And I am supposed to believe that funding color revolutions means the USG is actually the good guy because this government is “pro democracy”?
I believe the rapid self-iterative foom concept also makes a case for not creating severe AI regulation. If this is impossible for humans to handle, then we should want AI to advance into self-iterative attempts that fail in some visible way and alert everybody, in the hope that pre-foom AI can identify and deal with the problem better than we can. This is all at its most possible when hardware is less advanced.
smopecakes gets it. The safest day to develop AGI was yesterday, the next best is today, and the most dangerous is tomorrow. Your best chance of safety is getting to it as early as possible because the resources it has will be at their most limited and the number of experiments are at a minimum and at a scale controllable by orgs / govs.
You really do not want to try holding back the tide until compute costs are low enough for your average white-collar professional, or teenager if you wait longer, to train current-level frontier models.
I don't understand why this is necessary.
If you believe that a future accident could happen, the thing to do isn't to wait for the accident to happen and then *hope* that the mitigation afterwards to the accident will work out, it's to prevent the accident in the first place!
See this comment by gwern: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned/comment/ckwcde6QBaNw7yxCt
People who claim that they understand what will drive others to action do not have good predictive records on what would have driven people to action on previous "warning shots". Their confidence regarding the existence of warning shots is almost entirely confirmation bias. If you don't believe me: before reading the comment below, please register your belief states about:
What metric causes people to take things seriously, for something like nuclear accidents, pandemic response or biological attacks.
And how confident you would be that, if that metric were fulfilled, you would get dramatic action of some kind.
And see how general your metric is compared to your internal felt sense of what would prompt responses for AI.
Then read the followup comment by gwern:
https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=PnxH3WMKChSHfTfJ
If you are well calibrated, then I'd be at least slightly interested, since it means you have passed the *absolute minimum* test most beliefs should pass; but even then, for me personally it's much more likely you're fudging your own memories about how inaccurate your positions were.
(Edit: fixed the link problem from the reply)
FYI, your second comment link is malformed and (at least for me on Chrome) does not deep link to the comment. It was pretty easy to find the one you meant though.
I think you are also missing the point. I'm not making any claim on what warning shots would or would not spur action or may or may not exist.
I am saying that if the statement "AGI / superintelligence can be achieved through scaling current methods" is true - something that AI safety proponents invoke when saying that we're on a dangerous path, need to pause, AI will inevitably become dangerous, etc. - then you are better off building it as soon as you possibly can. It is not without risk to do so, but it is the relatively lower risk.
Otherwise you're gambling that compute will never get cheap enough for me, or the many other people like me, to afford the compute that frontier labs require today. That is a bad gamble. It is inherently easier to control a few processes that require massive investment than if you're in a hundred flowers blooming scenario.
I think your model implicitly relies on a notion of warning shots. If we *did* advance frontier models, and then the large companies acted without control, in what way is that actually better than the "Cyborg Oprah gifts the audience 10 SotA superhuman models every week" world? It still relies on the idea that the control you'd want for the mass-consumption case is what you'd get for the elite-production case, which does not appear to be the case now (and in fact you are arguing against control regulation that could not happen in the "compute is cheap" world).
(Whoops. Let me try to fix the link)
Edit: also I guess I read too much into the parent message of "it fails and everyone is alerted", which you don't necessarily agree with
Your edit has it right. I have no idea if there will be warning shots or any other form of early warning. My point is that as a simple physical matter, it is always easier to exert control over fewer massive centralized things than if you have many smaller decentralized things.
Easier. Not guaranteed. There are no absolutes. You are correct that the outcome could still be the same as in Cyborg Oprah world.
I oppose control regulation - and I think you're conflating control regulation with actual control - that would slow down AGI development because all signs point to the "compute is cheap" world as the one we inhabit. Either scaling will not work and it doesn't matter either way, or it will and is thus inevitable. Delay only moves you further into the danger zone when I can earn cash back buying today's frontier lab levels of compute on credit.
No one knows how much an AGI will cost to train in compute, but if it's possible via scaling, it fundamentally doesn't matter. Eventually it's $25-250k in today-equivalent dollars. The metaphorical day after that, it's $2,500 in equivalent dollars. Etc.
As a relative matter do you think it's safer to develop something potentially dangerous when it's being done by a single organization versus ten thousand or ten million or ten billion?
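To make the arithmetic concrete, here is a minimal sketch of the cost-decay argument. The starting cost and halving time below are invented assumptions for illustration; the real constants are anyone's guess:

```python
import math

# Sketch of the "delay moves you into the danger zone" arithmetic.
# Both constants are illustrative assumptions, not measured values.
INITIAL_COST = 1e9     # assumed cost today of frontier-scale training compute, in dollars
HALVING_YEARS = 2.5    # assumed time for that compute cost to halve

def years_until_affordable(target_cost: float) -> float:
    """Years until frontier-scale compute costs no more than target_cost."""
    halvings_needed = math.log2(INITIAL_COST / target_cost)
    return halvings_needed * HALVING_YEARS

for budget in (250_000, 25_000, 2_500):
    print(f"${budget:>7,}: ~{years_until_affordable(budget):.0f} years")
```

Whatever the true constants, the qualitative point stands: each halving adds the same fixed delay, so the gap between "only labs can afford it" and "a teenager can afford it" is only a handful of halvings.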
Is this the "hardware overhang" argument?
The problem is that this is a domain-specific argument. In several domains AI can already outperform humans, sometimes to an extreme degree. And we don't have an AGI, so there's no competition in areas where that is needed.
AI isn't a single thing. Chess-playing agents are AI, but they only threaten chess players as chess players. LLMs operate in a more general domain, but it's still a restricted one. (But actors and authors are right to be worried.) Etc.
In this blog post, things are considered from a Bayesian perspective; when it comes to the total costs, one would probably have to analyse them using game theory. But the Bayesian perspective has not lost any of its validity. If, for example, one were to find that after each round of escalation in the Ukraine war the number of voices saying Putin is a bluffer increases significantly, one could conclude that this is confidence not supported by the facts. In practice, of course, this is difficult, as published opinions often pursue agendas of their own.
As far as AI is concerned, the ultimate danger is not outcompeting humans, but rather their destruction - or was the same thing meant here? In any case, the topic is not a ban on AI development, but the question of how to recognize whether the next step will cross a threshold beyond which someone else will decide for (or against) us.
Also it's a bad fit because states do not inevitably escalate in response to increasing provocations; under that feedback system everyone in the world would already be dead. The terminal inevitability, analogous to Castro's death, is that one side sues for peace.
Increasing Ukraine's ability to hold Russian territory at risk much more rapidly increases the chance of a balanced end state than it encourages Putin to opt for national suicide.
"But it’s not true that at some point the Republicans have to overthrow democracy, and the chance gets higher each election."
No, but we can see that the US system can't cope with either of the main parties becoming authoritarian. The design was that the House, Senate, and Presidency would have to work together and this would be a moderating influence, but in a world where the probability of each party winning is somehow driven to 50%, all that does is reduce the probability of the "bad party" getting complete control to one in eight, with a new try at each election; a half-life of just over 20 years.
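For what it's worth, the arithmetic behind that half-life figure checks out. Here it is spelled out, assuming each of the three institutions is an independent 50/50 flip per election cycle (obviously a simplification):

```python
import math

P_SWEEP = 0.5 ** 3        # chance the "bad party" takes House, Senate, and Presidency: 1/8
P_SURVIVE = 1 - P_SWEEP   # chance of avoiding a full sweep in one election cycle

# Half-life: number of cycles until the chance of never having had a
# sweep drops to 50%, converted to years at one cycle every 4 years.
cycles = math.log(0.5) / math.log(P_SURVIVE)
print(f"{cycles:.1f} cycles ≈ {4 * cycles:.0f} years")  # ~5.2 cycles ≈ 21 years
```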
The historical trend here is that there’s a cycle of authoritarianism, in which each side feels threatened by the other and then responds.
Framing the question this way - “will they really be Hitler this time?” - is what people used to justify things like the FBI strong-arming social media companies into censorship, or a culture of censorship that we know caused thousands of kids to be medically sterilized because nobody in the medical profession was willing to say, “this is wrong.”
So if you fear authoritarianism, the historical evidence says we have to oppose it wherever we see it, and not ignore it when our allies do it. The Democrats didn’t run a primary and tried to use the courts to keep their opponent from being re-elected. It didn’t work, and hopefully that can provoke some soul-searching about whether it is wrong to demonize your opponents and use whatever legal apparatus you can bring to bear on them.
Fearing authoritarianism only from one side, while giving the other a free pass, is a recipe for more authoritarianism.
Well put.
This is a great, nuanced point about how the practice of low-level authoritarianism can increase the future probability/magnitude. This is my great fear with the Trump administration.
Trump has obviously had many scummy business/legal dealings over the decades, but the choice to pursue legal action against him (and not against thousands of other scummy businesspeople) once he opted to run in 2024 was so obviously politically motivated. I'm desperately hoping that we can avoid an escalating tit-for-tat leveraging of courts/regulations/IRS (and, God forbid, CPS) to punish/harass political enemies. I'd actually like to see Trump make some high-profile pardons of Hunter Biden and other Dems, which would hopefully put the brakes on this potential escalation.
The federal indictments against Trump are very open and shut. He should be in jail right now for falsifying electoral votes and obstructing the FBI investigation at Mar-a-Lago. There is no comparison between Trump and the so-called "Biden crime family." House Republicans led an impeachment inquiry against Biden and 1) don't recommend impeachment, 2) don't recommend charges when Biden is out of office, 3) have secured zero federal indictments, and 4) appear to have no plans for follow-up investigations.
Hunter Biden is charged with possessing a gun while using drugs - not with any corruption - and Joe Biden has promised not to pardon him. Trump, meanwhile, promises to prosecute "the enemy within", hold military tribunals, and pardon everybody involved in Jan 6. There is no equivalency here - Trump and Trump-supporting Republicans are far more corrupt than the fantasies of whatever right-wing media corporation is losing its defamation suit this week for intentionally lying about a public figure to its viewers.
The Democrats didn't run a competitive primary, and I think it was a strategic mistake for them not to field more competitive candidates in the last four years, but their not having one isn't a sign of authoritarianism, it's the same reason the Republicans didn't run a primary in 2020: their candidate was already the President. After Biden stepped down from running, there are other candidates who most likely would have been more electable than Kamala, but rather than selecting the most electable candidate, the party ran his Vice President, the only person who the existing rules allowed to continue his campaign given that it was too late to run a primary.
As far as "using the courts to try to prevent him from being re-elected," Trump was impeached by the House after Jan 6th, but the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address.
There are other prospective crimes which he had already been involved in by that point, and then others which took place later. The Republicans in the Senate outright stated that this was a matter for the courts to address, and then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach. If we remain agnostic about whether Trump actually committed crimes, as matter of principle, I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means.
"the Republicans in the Senate voted against hearing evidence, let alone confirming, and the stated reason was that he was already out of office, and so if he had committed crimes in the process, that was a matter for the courts to address....then the party has spent the four years since insisting that addressing them in court is a sign of partisan overreach."
Exactly. On this particular topic the GOP has utterly outplayed the Dems in the court of public opinion, the way a housecat outplays a half-dead mouse.
"I don't think democracy is well-served by a system where, if a person commits crimes in order to improve their chances of election, those crimes cannot be investigated or punished because that would interfere with the process of the election. That incentivizes candidates to pursue election by criminal means."
Yep. And the SCOTUS has effectively ratified that as our new system for presidential elections.
Most countries on earth don't have primaries. Even the US didn't have primaries where the *voters* could pick the candidate until the 1970s. Those times and places weren't authoritarian.
But doesn't this assume the parties are monolithic? That isn't true even here in the UK and from what I understand US politicians are a lot more independent minded. Some Republicans and Democrats may even be anti authoritarian!
Many individuals are anti-authoritarian, but neither major party is, nor are any of the dominant figures in either party.
Right. But you only need a few voting with the opposition to put the kibosh on whatever the plan is, don’t you?
Whatever was the case before, the current Republican party is about as monolithic as you can reasonably get. Most of the never-Trumpers have already defected.
Can we see that? That model of the US system ceased to be plausible in 1796, as soon as Washington left office. There have been two dominant parties since then (though of course, the two parties are no longer the Democratic-Republicans and the Federalists).
The check wasn't "these institutions are in control of different parties" it's "these institutions are going to fiercely hold on to their own power and not cede it to any other."
But Congress has been perfectly willing to not do its job, letting the President do things that are Congress's responsibility (e.g. tariffs) or not bothering with making sure a law is constitutional and making SCOTUS the bad guy who says "no you can't ban flag burning you idiots."
Very true.
We have had authoritarian presidencies during Lincoln's term, Wilson's, and FDR's. In Lincoln's case it wound down within a decade of the war ending, in Wilson's the public reacted against it and chose "normalcy" at the voting booth. FDR's left a lasting legacy of empowered unaccountable agencies with massive discretion and power, that continues to be problematic today because it diminishes the role of Congress (which Congress has gone along with for various ideological or electorally-motivated reasons.) But we still have cycles where that authoritarian executive bureaucracy gets politically challenged, as Reagan did, as Trump is doing now, and arguably the Democrats did in the midterm cycle of '06. (If you are of the opinion that Nixon was an authoritarian, you have another easy example to add to this list.)
The American public has been able to fend this off and swing the pendulum back numerous times. It was only during major wars that anyone got very close to dangerous authoritarianism, and in each case the people wearied of war and wartime restrictions. We are not a country that rallies around grand national projects for very long, and outside of such projects we have a very low tolerance for authority. The US people can cope with this, whether or not the system as designed is fending it off in the way the founders planned it.
"No, but we can see that the US system can't cope with either of the main parties becoming authoritarian."
This is certainly an opinion you can have, but neither it nor the implication smuggled into it (that one of the parties has become authoritarian) is immediately evident. The USA is still going. The Democrats want to reduce the power of the Republicans and the Republicans appear to sincerely wish to diminish the power of the government. None of this seems particularly authoritarian.
The next VP says he wouldn't have certified Trump's loss in the 2020 election and that Trump should ignore the courts if they rule in ways he doesn't like. It's very obvious Republicans want to anoint Trump king so they can reap the short-term benefits of staying by his side. If Republicans aren't authoritarian, why did they vote not to impeach Trump when he falsified electoral votes, tried to direct his DoJ to confiscate votes from the states, and sent the Jan 6th mob to pressure Pence and House Republicans into certifying his fraudulent electoral votes? This is what authoritarian dictators all over the world do to make it seem like they have the consent of the people to rule.
Part of the problem is that "authoritarianism" isn't particularly well defined, meaning that both parties constantly see whatever the other one is doing as a sign of creeping authoritarianism. We perhaps need to break down the word "authoritarian" into a few different terms. Consider the following:
Country A is a libertarian utopia where you can do anything you want, except criticise libertarianism. To protect libertarianism, criticism of libertarianism is severely punished.
Country B is a theocracy where everybody is obliged to worship the sun god every day. But this is fine, because there's full and free democratic elections each year and the theocracy party always wins with 90% of the vote.
Which of these countries is more authoritarian? I'm not sure the question makes sense, they're both deeply imperfect in two generally authoritarian directions. (To be clear, these two examples don't represent the left and the right, they're just two examples of different ways to be imperfect.)
EDIT: This post is wrong and was clarified in the replies.
Your framework misses the Peter Schiff case (the name is a stand-in for a certain genre of Austrian economist), who predicts a recession and/or mass inflation every year. We *can* safely ignore their constant doomsaying, given the total lack of credibility; but nevertheless, in some sense it's almost certain that as time passes we will hit a recession and/or mass inflation. If I understood you correctly, we can't dismiss the Peter Schiffs. But we should.
Maybe you can dismiss the Peter Schiffs, but you can't dismiss that there will be a recession or mass inflation at some point in the future.
You could dismiss that there will be a recession / mass inflation at some point in the future, but it would require an extraordinarily good foundation.
You can dismiss the Peter Schiffs. If someone constantly predicts a cyclical thing, they will eventually be right. We should *not* say they predicted the event in any meaningful way.
The difference between the Schiff case and the Castro case is that the longer Castro lives, the more likely he is to die in the next year. The longer a country goes without a recession, the less likely it is to have a recession in the next year. In both of these cases, and unlike the nuclear escalation or AI safety cases, we have a fairly large sample size of reference cases.
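A toy way to see the contrast is to compare the two hazard shapes directly. The functional forms below are invented purely for illustration, not fitted to any data:

```python
def mortality_hazard(age: float) -> float:
    """Gompertz-style hazard: the risk of dying this year grows with age."""
    return min(1.0, 0.0001 * 1.09 ** age)

def recession_hazard(years_since_last: float) -> float:
    """Assumed per-year recession risk that declines after a downturn."""
    return 0.25 / (1 + 0.3 * years_since_last)

for t in (1, 5, 10, 20):
    print(f"t={t:>2}: Castro-style {mortality_hazard(80 + t):.3f}, "
          f"recession-style {recession_hazard(t):.3f}")
```

In the first case every year of survival raises next year's risk; in the second it lowers it, which is exactly why the two cases call for opposite updates.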
"The longer a country goes without a recession, the less likely it is to have a recession in the next year"
That's not really true. If you haven't experienced a recession at a time when it actually matters to you (rather than your parents worrying about stuff you don't really care about), then you're less likely to think that it will happen.
Couple that with older people dying off, and there's a tendency to start thinking "This time we've got a handle on things and it won't happen again."
Until it inevitably does. Generally for the same reasons it happened last time.
This is counter to Bayes, but when you have a situation where everyone updating toward a lower probability (as per Bayes) causes the actual probability to go up (because the protective measures put in place are torn down as a result of the updates), then Bayes is not all that useful.
Yep. That clarifies it. So it's not just that X will almost certainly happen, but that the probability of X happening soon increases the more X doesn't happen. ("Soon-ness" doesn't have to be literally temporal, as the drug-dose case illustrates, but the analogy is close enough.)
I think this is similar to the Biden dementia case or the Republican dictator case.
We should calculate a per-year risk of a recession (does this go up every year as you get further from the last recession? I don't know and would be interested to hear economists' opinions!)
Then if we originally expected Peter Schiff to be an authority, we could update to a higher probability based on his word.
Then, as Schiff is proven wrong, we un-update back to our prior.
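As a sketch of why the un-update lands exactly back at the prior: a forecaster who predicts recession every year, no matter what, has a likelihood ratio of 1, so by Bayes' rule his prediction carries zero evidence. The probabilities below are made up for illustration:

```python
PRIOR = 0.15                  # assumed base per-year recession probability
P_PREDICT_IF_RECESSION = 1.0  # Schiff predicts a recession every year...
P_PREDICT_IF_NOT = 1.0        # ...whether or not one is actually coming

# Bayes' rule: because he predicts regardless of the truth, the
# likelihood ratio is 1 and the posterior equals the prior.
posterior = (P_PREDICT_IF_RECESSION * PRIOR) / (
    P_PREDICT_IF_RECESSION * PRIOR + P_PREDICT_IF_NOT * (1 - PRIOR)
)
print(posterior)  # 0.15 - no update at all
```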
Regarding increasing probability of recession as a function of distance from the last one: this economist says yes, but with caveats. I think the best macro model of an economy is a drunk staggering down a hallway. The ideal path he could follow, were he perfectly sober, would be to just walk straight down the hall over time, but being drunk he tends to swing and stagger one direction or the other until he rather painfully hits the wall and course corrects towards the middle. Then he goes too far in that correction, painfully hits the other side of the wall, corrects in the other direction, etc. He also has a bottle of Wild Turkey he's currently pulling off.
So, if an observer says "Huh... he hasn't hit the wall in a while... I guess he's not going to anymore?" the correct response is "nah, just means he is due." At the same time, hitting the wall recently might make it more likely he way over-corrects and hits the other wall sooner than expected. All else equal, having just hit a wall and recovered should make it less likely he hits again soon, but really all you can say is that if he hasn't hit in a while, he's due.
That assumes of course you do not have good data on what his current trajectory and speed look like, and whether or not his kids have left toys all over the hallway for him to stumble on, or how far down that bottle of Wild Turkey he's gotten in the past hour. If someone says "Oh, he's going to hit again! Look how fast he is moving towards that RC car!" that's a more compelling argument than "It's been a while", but only if you can confirm that he is moving fast and there is in fact an RC car there. That's hard to do, so commentators often claim knowledge of speed and toys to enhance their reputation at little cost.
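If you wanted to push the metaphor further, the drunk is roughly a mean-reverting random walk with walls. A crude simulation (all parameters invented) gives a feel for the typical spacing between wall-hits:

```python
import random

def steps_until_wall(correction: float = 0.8, stagger: float = 0.35,
                     wall: float = 1.0) -> int:
    """One walk down the hall: partial correction toward the middle each
    step, plus a random stagger, until either wall is hit."""
    x, steps = 0.0, 0
    while abs(x) < wall:
        x = correction * x + random.gauss(0, stagger)
        steps += 1
    return steps

trials = 10_000
avg = sum(steps_until_wall() for _ in range(trials)) / trials
print(f"average steps between wall-hits: {avg:.1f}")
```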
Australia hasn't had a recession in decades. On a related note, their central bank has been targeting 2-3% inflation over that time period. Many people thought it was impossible to have a "soft landing", in which inflation is brought down without sparking a recession/unemployment, but that happened recently. Perhaps it's possible for central bankers to sober up!
The first search result for “Australian recession” was a BBC article on how Australia was plunging into a recession in 2020. Granted, it claimed that it was the first since 1990, which might even be true. It is possible that central bankers can avoid recessions, but it is also true that they can lie about economic data. The USSR had amazing growth year over year according to the official data, yet never quite managed to catch up to the USA.
Australia, like almost every country in the world, had a recession in 2020 due to covid.
A genuine Real Business Cycle recession in the modern era!
I think we have much better access to reliable economic data in Australia than we did with the USSR.
You know, I would have thought so about the US as well, but the last few years have been really bad for official statistics. I suspect it is not USSR levels of lies, but I am much less sure. There is a really big weak point in the system, in that the single source of truth has a very strong incentive to represent that truth one way, and there is little auditability.
In the case of Ukraine you also have to think of it the other way. Russia is acting imperialist: it attacked Georgia and we slapped them on the wrist; then they took Crimea and we sent slightly more aid to Ukraine; then they staged coups and sent arms to the Donbas; then they invaded Ukraine. So as much as we can ask what brings us closer to war with Putin, we also need to ask what is enough to deter his expansionist tendencies.
So the other side of the ledger is that eventually some number of "teeth" will deter his advance on other nations.
Or, as Václav Havel used to say, "the problem with Russia is that it doesn't exactly know where it ends".