601 Comments

It all depends on what you think the odds of a killer AI are. If you think it's 50-50, yeah it makes sense to oppose AI research. If you think there's a one in a million chance of a killer AI, but a 10% chance that global nuclear war destroys our civilization in the next century, then it doesn't really make sense to let the "killer AI" scenario influence your decisions at all.


Didn't Scott already say that here:

> It’s not that you should never do this. Every technology has some risk of destroying the world; the first time someone tried vaccination, there was an 0.000000001% chance it could have resulted in some weird super-pathogen that killed everybody.

Which I understood to mean that we shouldn't care about small probabilities. Or did you understand that paragraph differently?


Yes, correct. But then he concludes the article with:

> But we have to consider them differently than other risks. A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance.

> A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead.

So the trouble comes in asking “*whose odds*” is any given person allowed to use when “Kelly betting civilization?” Their own?

Until and unless we can get coordinated global rough consensus on the actual odds of AI apocalypse, I predict we’ll continue to see people effectively Kelly betting on AI using their own internal logic.
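The 1/1024 figure in the quoted passage is just compounding: civilization survives only if all ten 50-50 existential gambles go well. A toy sketch of that arithmetic (the "nuclear-type" vs. "AI-type" framing is a paraphrase of the quote, and independence of the ten gambles is an assumption):

```python
# Toy model: ten independent gambles, each with a 50% chance of going well.
p_good = 0.5
n_trials = 10

# "Nuclear-type" risk: a bad outcome is a survivable local accident,
# so the overall outcome is fine unless every single trial goes badly.
p_not_all_bad = 1 - (1 - p_good) ** n_trials

# "AI-type" risk: a single bad outcome is terminal, so civilization
# survives only if all ten trials go well.
p_survive_all = p_good ** n_trials

print(p_survive_all)   # 1/1024 ≈ 0.0009765625
print(p_not_all_bad)   # 1023/1024 ≈ 0.9990234375
```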


I think there is a hardware argument for initial AI acceleration reducing the odds of a killer AI. It's extremely likely that someone will eventually build AIs significantly more capable than anything currently possible. We should lean into early AI adoption now, while hardware limits are at their tightest. This increases the chance that we will observe unaligned AIs fail to actually accomplish anything (including failing to remain under cover), which provides alignment experience and a broad social warning about the general risk.


Agree with the caveat that this holds only to the extent that we expect Moore's Law to continue, which is far from certain. But if we go through many doublings of hardware performance while carefully avoiding AI research, and then some future Elon Musk decides to finance an AI-development program, then the odds of hard takeoff increase substantially. If AI research is constantly proceeding at the limits of the best current hardware, then the odds are very high that the first weakly-superhuman AGI will be incapable of bootstrapping itself to massively-superhuman status quickly and unnoticed.


Osama bin Laden is kind of irrelevant. Sufficiently destructive new technologies get out there and become universal irrespective of the morality of the inventor. Look at the histories of the A-bomb and the ICBM.


Nuclear nonproliferation seems to have actually done a pretty good job. Yes, North Korea has nuclear weapons, and Iraq and Iran have been close, but Osama bin Laden notably did not have nuclear weapons. 9-11 would have been orders of magnitude worse if they had set off a nuclear weapon in the middle of New York instead of just flying a plane into the World Trade Center. And some technologies, like chemical weapons, have been not used because we did a good job at convincing everyone that we shouldn’t use them. International cooperation is possible.


AI is invisible. There's also the alignment problem: if North Korea develops AI, I'd hope it is even less likely that the AI would stay aligned to North Korean values for more than milliseconds than that it would remain aligned to the whole Western liberal consensus.


I don't know. I think it's a genuine open values question whether it would be better for all future humans to live like people in North Korea do today or for us all to be dead because our atoms have been converted into statues of Kim Il Sung. Maybe I'm parsing your comment wrong though.


I don't think that North Korea is feasibly in the race for AI at the moment.

Even the Chinese have to put a lot of worry into obeying the rules of the CCP censors, so I expect them to be a lot less in "race mode" and a lot more security-mindset focused on making sure they have really good shackles on their AI projects.

The race conditions are in the Western World.


I would imagine that AI would do whatever it is programmed to do.


Empirically, AIs do approximately what we have trained them to do, as well as a bunch of weird other things, possibly including the exact opposite of what we want them to do. If it was possible to program AIs to only do what we want them to do, would we have daily demonstrations of undesired behavior on r/bing and r/chatgpt?


'possibly' including the exact opposite? Empirically, I'd change that to 'sometimes/often, definitely'... (see the Waluigi Effect!)

Mar 7, 2023·edited Mar 7, 2023

To a large extent, chemical weapons aren't used because they just aren't good. Hitler and Stalin had few qualms about mass murder on an industrial scale, both were brought to the very brink in the face of an existential threat (and Hitler did in fact lose), both had access to huge stockpiles of chemical weapons ready to use, and yet they didn't use them.

They weren't very effective even in the First World War, before proper gas masks, which provide essentially complete protection at marginal cost (cheap enough for Britain to issue them to the entire civilian population in WW2), not to mention overpressure systems in armored vehicles: instead of a gas shell, you'd almost always be better off firing a regular high-explosive one, even when the opponent has no protective equipment.

Against an unprotected civilian population they are slightly better than useless, and in this capacity chemical weapons have been used by, for example, Assad in Syria. But consider the Tokyo subway sarin attack: just about the deadliest place conceivable to use a chemical weapon (a closed underground tunnel fully packed with people), and it killed thirteen (though it injured a whole lot more). You could do more damage by, for example, driving a truck into a crowd.


Chemical weapons that were used did not even solve the problem they were intended to: that of clearing the trenches far enough back to turn trench warfare into a war of maneuver. The damage they did was far too localized.

This hasn't really changed in the intervening years - the chemicals get more lethal and persistent, but they don't spread any better from each bomb.

Wars moved on from trenches (to the extent they did) because of different technologies and doctrines (adding enough armored vehicles and the correct ways to use them).


I'd argue that it was mostly motorized transport and radios that shifted parity back to the attacker. Before that the defender could redeploy with trains and communicate by telegraph but the attacker was reliant on feet and messengers.


Yeah, tanks get all the credit for their cool battles, but as any HOI4 player will tell you, it's trucks that let you really fight a war of maneuver. Gas might have had a bigger role in "linebreaking" if tanks hadn't been invented.

Mar 9, 2023·edited Mar 9, 2023

This may be true now; I was thinking of why the European part of WWII didn't devolve into trench warfare like it did in WWI.

Did roads get enough better in the intervening 20 years in the areas of France to make trucks practical? I do know that part of WWI was that the defender could build a small rail behind the front faster than the attacker could build a rail to supply any breakthrough. Does that apply with trucks - were they actually good enough to get through trenchworks?

Or did the trenchworks just not end up being built in WWII - i.e. the lines didn't settle down long enough to build them in the first place?


Creating a breakthrough was always possible for an attacker who could throw enough men and artillery at the lines, in both WWI and WWII. The problem was that in WWI it just moved the front line up a couple dozen miles, and then the enemy could counterattack.

Having vehicles certainly helps, and it means you can use them during the attack instead of just when advancing afterwards, but engineers can fill in a trench pretty quickly to let trucks drive over. They can't build railroads quickly, though, especially not faster than a man on foot can advance.

Mar 7, 2023·edited Mar 7, 2023

Nuclear nonproliferation has been awful. How many people would still be alive, how many terrorist organizations would not have spawned, how many trillions in expenses would have been better used, how much destruction would have been avoided in civil wars averted, if Saddam had been able to drop a nuke on the first column of Abrams tanks that set their tracks in Iraq?

P.S.: And implying that a proliferated world would have made 9/11 (or another attack) nuclear is unsubstantiated. Explosives are a totally proliferated technology. The only thing stopping a terrorist from detonating a MOAB-like device is the physical constraint of assembling it (okay, not entirely; I have no idea how reproducible H-6 is by non-state actors. But TNT absolutely is, so something not-quite-MOAB-like-but-still-huge-boom is theoretically possible). And yet for 9/11, they resorted to driving planes into the buildings, because even though the technology has proliferated, it's still a hurdle to use it.


There’s a good chance that Iraq (or at least Saddam) would not have existed to be nuking Abrams tanks in 1991 or 2003, because Iran and Iraq would have nuked each other in the 1980s.


Or maybe they wouldn't have gone to war at all, knowing that it would have been a lose-lose scenario. One wonders whether a world with massive proliferation would have been a safer one.


Possible. I was mostly peeved by what I perceived as a cheap anti-American swipe rather than a reasoned assessment of when Saddam would use nukes (besides that, it's unclear whether nuking an Abrams formation would even be all that useful, especially when all the soft targets that would get hit in retaliation are considered).


Or Iraq and Israel. Tel Aviv is high on the list of cities most likely to be destroyed by a nuke...


Chemical weapons have been used, even in recent years by major state actors (e.g. Russia, Syria). They don't get used more because they aren't that useful, and that offers a clue to the problem.


If nuclear nonproliferation is a cause of the Ukraine war, that needs to be figured in.


Maybe more like deproliferation. The Ukrainians gave up their nukes[1] in 1994 and in return[2] got a guarantee from Russia that Russia would respect Ukraine's borders. Candidate for Most Ironic Moment Ever.

-----------------------

[1] Of which they had quite a lot. Something like ~1,500 deliverable warheads, the 3rd largest arsenal in the world.

[2] It's more complicated than this in the real world, of course. Russia did not turn over the launch procedures and codes, so it would've been a lot of work for Ukraine to gain operational control over the weapons, even though they had de facto physical custody of them.


>Something like ~1,500 deliverable warheads, the 3rd largest arsenal in the world.

The Ukrainians had zero deliverable warheads in 1994. Those warheads stopped being deliverable the moment the Russians took their toys and went home, and it would have taken at least six months for the Ukrainians to change that. Which would not have gone unnoticed, and would have resulted in all of those warheads being either seized by the VDV or bombed to radioactive scrap by the VVS while NATO et al said "yeah, we told the Ukrainians we weren't going to tolerate that sort of proliferation, but did they listen?"


Eh, the biggest reason chemical weapons aren't used is because they kind of suck at being weapons. It turns out it's cheaper and more reliable to kill soldiers with explosives.


The question is what the risk of AI is. If AI is 'merely' a risk to the systems that we put it in control of, and what is at risk from those systems, then N-Korean AI is surely not going to be a direct threat, as we won't put it in control of our systems.

Of course, if N-Korea puts an AI in control of their nukes, then we will be at an indirect risk.


If the Allies in 1944 had taken the top ~500 physicists in the world and exiled them to one of the Pitcairn Islands, how long would that have delayed the A-bomb? Surely a few decades or more if we chose them wisely, and pressure behind the scenes could have deterred collaboration by the younger generation on that tech.

Instead we used the bomb to secure FDR’s and the internationalists’ preferred post-war order and relied on that arrangement to control nuclear proliferation. And fortunately, they actually kinda managed it about as well as possible.

But that has given people false confidence that this present world order can always keep tech out of the hands of those who would challenge it. They don’t seem to have given any effort or thought to preventing this tech from being created, only to get there first and control it as if every dangerous tech is exactly analogous to the A-bomb and that’s all you have to do to manage risk.

And they do this even though the entire field seems to talk constantly about how there’s a high chance it will destroy us all.


I think the morality of the inventor is germane to the discussion. Replace Osama with SBF. We wouldn't trust someone with a history of building nefarious back doors in software programs to lead AI development.


I am still completely convinced that the lab leak "theory" is a special case of the broader phenomenon of pareidolia, but gain-of-function research objectively did jack shit to help in an actual pandemic, so we should probably quit doing it, because the upside seems basically nonexistent.


What if Omicron was “leaked”

to wash out Delta?

Millions saved.


And, as the old vulgarity has it, if your aunt had balls she'd be your uncle.


Not anymore ....


Is Scott now a gain-of-function lab-leak origin proponent? Otherwise, I do not know why gain-of-function would be a big loss on par with leaded gasoline.

Mar 7, 2023·edited Mar 7, 2023

I don't know if he is a proponent, but it seems to have some fairly high non-zero chance of being what happened.

My guess would be at least in the 20s, percentage-wise. An open market on Manifold says 73% right now, which is higher than I would have guessed, but not crazy high IMO. And the scientific consensus simply isn't that reliable, because very early on they showed themselves to be full of shit on this issue.


I am OK with a 20% probability but that does not seem enough to proclaim gain-of-function research a big loss. Especially since the newer DOE report seems to implicate Wuhan CDC, which did not do any gain of function research as far as I know.


https://2017-2021.state.gov/fact-sheet-activity-at-the-wuhan-institute-of-virology/index.html

According to this US government fact sheet, "The WIV has a published record of conducting 'gain-of-function' research to engineer chimeric viruses."


Wuhan CDC is very different from WIV. Different location, different people, different research.


Ah, I see. But putting aside the DOE report, the WIV is implicated by many proponents of the lab leak theory, right? I hadn't heard any mention of the Wuhan CDC in these discussions before, but maybe I wasn't following very closely.


Back in early 2020 I strongly favored the idea that an infected human or animal involved with the WIV or Chinese CDC accidentally transmitted a virus that was never properly identified before the outbreak. At the time I thought that any virus they were working on would tend to show up in the published literature and we'd have figured out the origin more quickly. At this point I'm much less sure of that but I'd still give it equal odds to a classic lab leak and I'm glad the DOE report is giving it more attention.


20% * ~ 20 million = 4 million deaths thus far, which seems quite catastrophic.

[See https://ourworldindata.org/excess-mortality-covid for COVID mortality estimates].

I've not looked into WIV vs. Wuhan CDC...

Mar 8, 2023·edited Mar 8, 2023

Surely catastrophic but did gain-of-function research start the pandemic? The evidence is weak and circumstantial so far. If the pandemic is not due to gain of function research, then Scott's statement is unsubstantiated.


Mallard is already accounting for the uncertainty over whether GoF research started the pandemic – that's why they multiplied by 20%. Obviously you might disagree that 20% is an appropriate guess at the probability.
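The expected-value arithmetic being defended here can be written out explicitly (the ~20 million figure is the thread's excess-mortality estimate, and the 20% is the commenter's assumed probability, not data):

```python
# Expected deaths attributable to gain-of-function research, under the
# thread's assumed inputs (both numbers are the commenters' estimates).
p_gof_origin = 0.20              # assumed probability GoF started the pandemic
covid_excess_deaths = 20_000_000  # thread's excess-mortality figure

expected_attributable = p_gof_origin * covid_excess_deaths
print(expected_attributable)  # 4000000.0
```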


There's something off about assigning blame for 1/5th the deaths to a group who may not have done anything wrong. It's like if police found you near the scene of a murder, decided there was a 20% chance you committed it, and assigned you 20% of the guilt.

If a lab was doing gain-of-function research in a risky way that had a 20% chance of causing an outbreak, it makes sense to blame them for the expected deaths (regardless of whether the outbreak actually happens). But if the lab was only doing safe and responsible research and an unrelated natural outbreak occurred, and we as outsiders with limited information can't rule out the lab... then I'm not so sure.

You'd also have to weigh against that the potential benefits of this research, which are even harder to estimate. What are the odds that the research protects us from future pandemics and potentially saves billions of lives? Who knows.


Very well put.


Agreed, but if what the lab was doing had even a 0.2% chance of causing a global pandemic, that's 0.2% * 6.859E6 = enough counts of criminally negligent homicide to put everyone involved away for the rest of their natural lives.

And if you think that what you are doing is so massively beneficial that it's worth killing an estimated 10,000+ innocent people, that's not a decision you should be making privately and/or jurisdiction-shopping for someone who will say it's OK and hand you the money. The lack of transparency here is alarming.


> It's like if police found you near the scene of a murder, decided there was a 20% chance you committed it, and assigned you 20% of the guilt.

It's not similar at all. Research is not a human being and therefore doesn't have a right to exist or to not 'suffer' under 20% guilt. Completely different cases.


> I am OK with a 20% probability but that does not seem enough to proclaim gain-of-function research a big loss.

Gain of function research has had no objectively visible benefits, so any non-zero chance of a leak would automatically make it a loss. We know the risk of leaks is non-zero because they've happened.


Worse than that. The "experts" who were being asked whether GOF research was being done, whether Wuhan was involved, and whether the US was paying for it actively lied about it. They lied because they were the ones who were doing it!

This includes Fauci. And that's the reason so many people, if mostly conservatives, are upset about his leadership. Not masks or other crap (those came later), but because he knew about GOF research - having approved the funding for it - and actively lied about it. When he lied about it, it became verboten to speak of the possibility that a lab leak was involved.


And I'm still upset about Chris "Squi" Garrett and Brett Kavanaugh lying in his confirmation hearing, but nobody else gives a shit and the world has moved on.


My understanding is that the main "slam dunk" piece of evidence in favor of zoonotic origin is the study (studies?) showing the wet market as the epicenter of early cases. I'm curious how the lab leak theory is seen as so likely by e.g. Metaculus in view of this particular piece of evidence (personally I'm at maybe 10%). The virus spilled over at WIV, but then the first outbreak occurred across town at this seafood market where wild game is sold? Or the data was substantially faked?


If it was an accidental release (IE it leaked out of containment undetected by the researchers), all that would have to happen is for the affected researcher to go buy fish on the way home and then not fess up to it later. "Case negative one" if you will.


I'm not an epidemiologist, but it seems like a lot more would have to happen than this hypothetical lab worker buying some fish.

If this person was a "super-spreader" then why wasn't there an explosion of cases, nearly simultaneously, in other parts of the city that they ventured? Most notably at their workplace, where they presumably spend a lot of their time? Yes, they might wear effective PPE when actively working with biohazards, but not when they're eating lunch, or in a conference room, or using the bathroom.

And if they weren't a super-spreader, why did just going to the market to buy fish seed so many cases? I suppose someone else that they infected could have become a super-spreader, but this starts to feel like adding epicycles to me.


I think the idea is that the researcher would be a normal spreader and the first super spreader would be someone working at the market. If it's the sort of noisy place where you have to raise your voice to talk then that's superficially plausible.

Of course there are other possibilities too, like someone selling dead test animals that they don't think are dangerous at the market for a quick buck.

But given the circumstances I wouldn't hold out too much hope of ever being sure about this.


COVID superspreaders aren't people, they're places. Other viruses may work differently in that respect, but I don't think we've seen much personal contact between separate superspreader events with COVID. But there are clearly some places where, if a sick person shows up, the combination of crowding and poor ventilation and loud noise will result in a whole lot of other people getting sick.


The suspicious part is that this person only infected people at the market and didn't seem to spread it to anyone around the WIV (or anywhere else). Possible, but it makes the market look more likely.

Also, the market is fairly far from the WIV. That's not a big problem for the theory; the infected researcher might live near the market. But presumably only a small percentage of the researchers there live near the market, and I think this reduces the likelihood somewhat.


I think there was concern at one point about a streetlight effect. That is, the market was the locus of where they searched, and so the market was where they found cases. I don't know where that line of criticism ended up.


My understanding, possibly mistaken, was that earlier cases were eventually found not associated with the wet market.

I think there are, and have been from the beginning, two strong reasons to believe in the lab leak theory. The first is that Covid is a bat virus that first showed up in a city that contained a research facility working with bat viruses. That is an extraordinarily unlikely coincidence if there is no connection. The second is that all of the people in a position to do a more sophisticated analysis of the evidence, looking at the details of the structure of the virus, were people with a very strong incentive not to believe, or have other people believe, in the lab leak theory, since if it was true their profession, their colleagues, in some cases researchers they had helped fund, were responsible for a pandemic that killed millions of people.


I'm not sure that either of these are "strong" reasons to believe in the lab leak theory.

I've seen many people casually assert that COVID arising in the same city as a virology institute is "extraordinarily unlikely", but I have yet to see anyone quantify this. I'm not an epidemiologist, but I would think that epidemics are more likely to start in cities due to large populations (more people who can get sick) and high population density (easier to transmit). How many large cities have places where people come into close contact with animals that can carry coronaviruses? Maybe Wuhan is one of 1000s of such places, in which case, OK, it at least raises some eyebrows. But if it's one of a handful, even one of dozens of such places, then the coincidence doesn't seem that strange to me.

Second, is it really true that *all* of the people in a position to do more sophisticated analysis of the evidence have strong connections to the WIV? Or to the particular type of research being done there? I seem to recall reading about people who were critical of gain-of-function research well before COVID (of course, I only read about it after COVID). And it only takes one person with a really strong case and a conviction to do the right thing to break the cone of silence. At this point they could probably just leak the relevant data anonymously and rely on one of the very capable scientists who have come out as suspicious of zoonotic origin to make it public.


Wuhan has about one percent of the population of China — and Covid didn't have to start in China. So I think the fact that Covid started in Wuhan which also had an institute doing research on the kind of virus Covid came from is pretty strong evidence.

"All the people" is an exaggeration, but most virologists had an incentive, and Fauci et al., in the best position to organize public statements and get them listened to, had such an incentive. So the expectation is that even if the biological evidence favored a lab leak, most of what we would hear from experts would be reasons to think it wasn't a lab leak.

It isn't enough for one expert to disagree unless he has a proof that non-experts can evaluate. In a dispute among experts it's more complicated than that. One side says "Here are reasons 1, 2, and 3 to believe it was animal to human transmission." The other side says "here is why your 3 reasons don't show that, and here are four other reasons to believe it was a lab leak." The first side includes Fauci and the people under him, the people he has helped to fund, and the people he has gotten to support his story because they want everyone to believe it wasn't a lab leak. The other side is two or three honest virologists.

Which side do you think looks more convincing to the lay public?


AIUI*, the placement of the bat-virus research in Wuhan in the first place was due to a high base rate of endemic bat viruses in the region. If that is the case, then the lab location doesn't seem to provide much additional evidence.

*I haven't followed the origin hunt very closely because I doubted sufficient evidence exists to resolve the answer either way.


The region with the high base rate of endemic bat viruses is over a thousand kilometers from Wuhan, and not on any direct transit artery from same. And the WIV is a general-purpose virology lab, not specifically a bat-virus lab, placed in Wuhan for logistical and political reasons. It's easier to ship bats from across SE Asia to a top-level virology lab than it is to set up even a mid-level virology lab from scratch in rural China, so it's not surprising people did that.


Your understanding of the earliest cases is, indeed, mistaken:

https://www.science.org/doi/10.1126/science.abm4454


Perhaps. From the Wiki article on wet markets:

"although a 2021 WHO investigation concluded that the Huanan market was unlikely to be the origin due to the existence of earlier cases."

Cited to: Fujiyama, Emily Wang; Moritsugu, Ken (11 February 2021). "EXPLAINER: What the WHO coronavirus experts learned in Wuhan". Associated Press. Retrieved 14 April 2021.

Your article cites several early cases, some of which were associated with the wet market. It gives no figure for what fraction of the Wuhan population shopped at the wet market.

The number I would like and don't have is how many wet markets there are in the world with whatever features, probably selling wild animals, make the Wuhan market a candidate for origin. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or a hundred, not all in China, which I think more likely, then the application of Bayes' Theorem implies a posterior probability for the lab leak theory much higher than whatever your prior was.
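The Bayesian update being gestured at can be sketched in odds form. All the numbers below are made-up illustrations of the mechanics (the prior, and the count of candidate spillover sites, are assumptions, not data from this thread):

```python
# Odds-form Bayes update for "lab leak" vs. "natural zoonosis",
# with purely illustrative (assumed) inputs.
prior_odds_lab = 0.1  # assume a skeptical 1:10 prior for lab leak

# If it was a lab leak, the outbreak starting in the lab's city is
# near-certain by construction. If zoonotic, assume ~100 comparable
# candidate cities/markets worldwide where a spillover could equally
# well have started, so that coincidence has probability ~1/100.
n_candidate_sites = 100
likelihood_ratio = 1.0 / (1.0 / n_candidate_sites)  # = 100

posterior_odds_lab = prior_odds_lab * likelihood_ratio  # = 10
posterior_prob_lab = posterior_odds_lab / (1 + posterior_odds_lab)
print(round(posterior_prob_lab, 3))  # 0.909 under these made-up inputs
```

The point of the sketch is only that a 1-in-100 coincidence multiplies the prior odds by ~100, so even a skeptical prior ends up well above 50%; with more candidate sites the update is correspondingly weaker.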


The wet market was the site of the first COVID-19 superspreader event; there's not much doubt about that. There may have been earlier isolated cases elsewhere, but we'll probably never know for sure and so they probably shouldn't weigh too heavily in our thinking.

But the wet market would have been an ideal place for a superspreader event even if it had sold jewelry, or computers, or medical supplies. It's a big, crowded building with thousands of people coming and going all day with I believe poor ventilation and noise levels that lead to lots of shouty bargaining over the price of whatever. If COVID gets into that environment, a superspreader event is highly likely.

Also, the wet market did *not* sell bats. Or pangolins, though I think those are now considered to have been red herrings (insert taxonomy joke here). There was a western research team investigating the potential spread of another disease, that kept detailed records of what was being sold during that period, and they never saw bats.

It's still possible to postulate a chain of events by which a virus in a bat far from Wuhan, somehow finds its way into a different yet-unknown species and crosses a thousand kilometers, probably an international border, to trigger a superspreader event in Wuhan without ever being noticed anywhere else (e.g., in the nearest big city to the bat habitat). But there's a lot of coincidence in that chain, because if it wasn't the nearest big city to the bat habitat, there's *lots* of cities and transit routes to choose from and it somehow found the one with the big virology lab.

There's *also* a lot of coincidence in a hypothetically careless lab technician going to buy some fresh fish for his family after work, and triggering off a superspreader event at the wet market rather than an ordinary grocery store, nightclub, jewelry market, or whatever. Lots of those in Wuhan. Sadly, there's no need to postulate wild coincidences for there to have been a COVID-like virus in the WIV to infect our hypothetical lab technician.

When you have eliminated the impossible, whatever remains - however improbable - must be the truth. We presently have two improbable, coincidence-laden options to choose from, and neither seems obviously more or less improbable than the other.

Expand full comment

Just spectating here, but that market says it'll remain open until we're 98% sure one way or the other.

There may be an asymmetry there. We may be able to uncover definitive proof COVID came from a lab, but what more proof could we hope to find that COVID had natural origins? If patient zero was an undetected case, surely it's too late to find them now.

The question can resolve yes, or remain open. There's little chance of it resolving to no even if COVID has natural origins.

Expand full comment

They could find an animal reservoir with a wild virus that is a very close match (much closer than the ~98% match published earlier).

Expand full comment

Good point, though I'm not sure that would satisfy the lab leak proponents. They'd say that natural virus was likely studied in the lab and then was leaked.

Expand full comment

There are several pieces of market evidence that China never disclosed that could go a long way towards resolving it. They have swabs from the market that found DNA of undisclosed animals. They could also simply interview the vendors at the market and figure out which ones were selling which animals. They could also be more aggressive with testing bats and other animals within China.

Whether or not they will do any of that is unclear -- it seems like there's been a strong effort within China to obfuscate the market evidence. For a while, they denied that there were wild animals at the market. They've also argued that the virus started outside of China:

https://medium.com/microbial-instincts/china-is-lying-about-the-origin-of-covid-399ce83d0346

As you point out, it's not clear that any of this would satisfy the lab leak proponents, who would just modify their theory again.

Expand full comment

It seems to be one of those things where people just repeated it enough that a bunch of people started assuming there must have been some kind of new evidence.

Every few weeks someone else re-announces some variation on "it's unfalsifiable, we can't prove it definitely didn't come from the lab, and there is no new evidence."

And each time someone announces that, the true believers scream "see, we were right, it was a lab leak! We told you so!"

Turns out if you repeat that enough, a bunch of people just adopt the belief without the need for any new evidence.

Expand full comment

Gain-of-function research was intimately involved in making the mRNA vaccines that did far more than "jack shit to help".

https://www.technologyreview.com/2021/07/26/1030043/gain-of-function-research-coronavirus-ralph-baric-vaccines/

"Around 2018 to 2019, the Vaccine Research Center at NIH contacted us to begin testing a messenger-RNA-based vaccine against MERS-CoV [a coronavirus that sometimes spreads from camels to humans]. MERS-CoV has been an ongoing problem since 2012, with a 35% mortality rate, so it has real global-health-threat potential.

By early 2020, we had a tremendous amount of data showing that in the mouse model that we had developed, these mRNA spike vaccines were really efficacious in protecting against lethal MERS-CoV infection. If designed against the original 2003 SARS strain, it was also very effective. So I think it was a no-brainer for NIH to consider mRNA-based vaccines as a safe and robust platform against SARS-CoV-2 and to give them a high priority moving forward.

Most recently, we published a paper showing that multiplexed, chimeric spike mRNA vaccines protect against all known SARS-like virus infections in mice. Global efforts to develop pan-sarbecoronavirus vaccines [sarbecoronavirus is the subgenus to which SARS and SARS-CoV-2 belong] will require us to make viruses like those described in the 2015 paper.

So I would argue that anyone saying there was no justification to do the work in 2015 is simply not acknowledging the infrastructure that contributed to therapeutics and vaccines for covid-19 and future coronaviruses."

I'm disappointed that Scott is being so flippant about gain-of-function with regards to coronaviruses. That line feels closer to tribal affiliation signaling rather than a considered evaluation of the concept, which is especially ironic considering the subject of this article is how to make considered evaluations of risky concepts. There's a very real argument that a world with no gain-of-function research still results in COVID-19 (even if it leaked from the lab, there's still plenty of uncertainty about whether gain-of-function was involved in that leak), but without the rapidly deployed lifesaving vaccines to go along with it.

Expand full comment

As far as I know, gain of function research did not contribute to the development of the COVID mRNA vaccines, and this article doesn't really say anything to the contrary except a vague claim about "acknowledging infrastructure". If you have specific knowledge of how gain of function research was intimately involved in the vaccine development I'd be interested to hear it.

Expand full comment
founding

Nuclear weapons and nuclear power are among the safest technologies ever invented by man. The number of people they have unintentionally killed can be counted (relatively speaking) on one hand. I’d bet that blenders or food processors have a higher body count in absolute terms.

I have no particular opinion on AI but the screaming idiocy that has characterized the nuclear debate since long before I was born legitimately makes me question liberalism (in its right definition) sometimes.

Even nuclear weapons, I think, are a positive good. I am tentatively in favor of nuclear proliferation. We have seen something of a nuclear best case in Ukraine. Russia/Putin has concluded that there is absolutely no upside to tactical or strategic use of nuclear weapons. In short, there is an emerging consensus that nukes are only useful to prevent/deter existential threats. If everyone has nukes, no one can be existentially threatened. For example, if Ukraine had kept its nukes, there's a high chance that it would have correctly perceived an existential threat and used nukes defensively and strategically against an invasion such as the one that actually occurred in 2022. This would have made war impossible.

Proliferation also worked obviously in favor of peace during the Cold War.

World peace through nuclear proliferation, I say.

Expand full comment
deleted Mar 7, 2023 · edited Mar 7, 2023
Comment deleted
Expand full comment
founding

That's actually an interesting topic. I agree that nuclear proliferation can make low-intensity and border conflicts more likely. We can see this between China and India as well. But at the same time, the prevention of large-scale conventional warfare is more important, I think. And we can see what happens with non- or asymmetrically nuclear-armed states between India and Pakistan. In 1971, India invaded East Pakistan and ensured its independence as Bangladesh. If both states had been nuclear armed, that would have been impossible.

Expand full comment

2nd Amendment—we have the highest murder rate in the developed world. If we got rid of guns the murder rate would go down. You can’t keep guns out of the hands of irresponsible actors in America.

Expand full comment

Nuclear proliferation can maintain world peace only if you assume no one with control over nukes ever goes insane or is insane to begin with. The number of people who've controlled nukes in human history is small enough that no one sufficiently insane has ever been in control of them, including the Kims. This is not a safe bet to make with several times more people.

Expand full comment

Obviously what we need is some kind of guild. Perhaps addict the members to some exotic drug so the UN can control them. This guild would ensure the atomics taboo is respected by offering all governments the option of fleeing and living in luxury instead of having to take that final drastic step. After all, the spice must flow.

Expand full comment

Historically speaking, are there leaders who have gone the kind of insane you're concerned about?

Expand full comment

Idi Amin? Pol Pot? Osama bin Laden?

Expand full comment

There have been several, though they aren't frequent. The problem is, if someone has an "omnilethal" weapon, you don't need frequent.

Also, just consider the US vs. Russia during the Cuban missile crisis. We came within 30 seconds of global nuclear war. There was another instance where Russian radars seemed to show a missile attacking Russia. That stopped short of becoming a major nuclear exchange because the Russian officer in charge defied "standing orders" on the grounds that an attack wouldn't be made with a single missile. (IIRC it turned out to be a meteor track.) So you don't need literally insane leaders when the system is insane. You need extraordinarily sensible leaders AND SUBORDINATES.

Expand full comment

Also, and this doesn't get talked about nearly enough, there's the question of deniability.

Right now, there's only one rogue state with nuclear weapons: North Korea. This means that if a terrorist sets off a nuke somewhere, we know exactly where they got it from, and we crush the Kim regime like a bug. And they know that, so it won't happen. A world with one rogue state with nuclear weapons is exactly as safe as a world with no rogue states with nuclear weapons... except for the slightly terrifying fact that it's halfway to a world with *two* rogue states with nuclear weapons.

If Iran gets the bomb, and then a terrorist sets off a nuke somewhere, suddenly we don't know who they got it from. There's ambiguity there until some very specialized testing can be done based on information that's not necessarily easy to obtain. That makes it far more likely to happen.

Expand full comment

You're overly "optimistic". With large nuclear arsenals, occasionally a bomb goes missing and nobody knows where it went. So far it's turned out that it was really lost, or just "lost in the system", or at least never got used. (IIUC, the last publicly admitted "lost bombs" happened when the Soviets "collapsed". But that's "publicly admitted".) It's my understanding that the US has lost more than one "bomb". Probably most of those were artillery shells, and maybe some never happened, because I'm relying on news stories that I happened to come across.

Expand full comment

Fair enough. On the other hand, the fact that they've never been used tells us, with a pretty high degree of confidence, that they most likely never ended up in the hands of terrorists. It's not a perfect heuristic, but it's good enough that IMO it can be safely ignored as a risk factor until new evidence tells us otherwise.

Is that overly optimistic? Maybe. But I still think it's true.

Expand full comment
founding

I don't think that bombs go missing and "nobody knows where it went" in the sense that would be relevant here. There have been a very few cases where "where it went" was "someplace at the bottom of this deep swamp or ocean" and we haven't pinned it down any further than that. But I expect people would notice and investigate if someone were to start a megascale engineering project to drain that particular swamp.

"Goes missing" in the sense that an inventory comes up one nuke short and the missing one is never found, no.

As for "publicly admitted" lost nukes from the fall of the Soviet Union, citation very much needed. Aleksander Lebed *accused* the Russian government of losing a bunch of nuclear weapons, but he was part of the political opposition at the time.

There are very probably zero functional or salvageable nuclear weapons that are not securely in the possession of one of a handful of known national governments.

Expand full comment

I was under the impression that nuclear warheads were found on the black market after the fall of the soviet union? Maybe that was an urban legend but with how many were on trucks whose drivers hadn't been paid in years and similar, I'd be utterly shocked if there hadn't been dozens that went 'missing', at least temporarily. Of course, I don't know if those warheads were at all useable without the appropriate codes, and I'd expect them to have all been found long ago by now, but I feel like the priors for the fall of the Soviet Union are heavily on "things went missing / were stolen", and it would have taken a lot of work to make nukes an exception to that.

Expand full comment
founding

To the best of my knowledge, and I used to do nonproliferation work, A: no nuclear warheads were ever found on the black market, B: no nuclear weapons are unaccounted for except in the lost-beyond-plausible-recovery sense noted earlier, and C: the truck drivers tasked with transporting nuclear weapons were fully paid everywhere and always even if the rest of their country went to hell.

Expand full comment

I don't know about that. Nuclear bombs leave a lot of evidence behind. You can tell a great deal about the physics of the bomb from the isotope distribution in the debris, and the physics will often point to the method of manufacture and the design, which in turn points back to who built it.

Expand full comment

I just don't understand how intelligent people can so firmly believe in a black-and-white world view. Just put yourself in a neutral position and imagine the perspective of, say, South Africa: Who invaded the most countries and fought the most wars in the last 80 years, even without any threat to their own country? Whose secret service organized or supported the most military coups? Which state killed the most civilians? Who quit arms control treaties when they no longer suited them? There can be several candidates for these questions, but I'm sure Iran and North Korea aren't the first to come to mind for somebody outside NATO.

Expand full comment

The flipside is that superpowers don't need nukes - they can crush other states with conventional warfare. The USA without any nukes is exactly as scary as a nuclear armed one, and the CCP is probably in a similar category. A small country, though, becomes a *lot* more threatening with nuclear missiles, albeit conditional also on its missile technology (North Korea may have nuclear warheads but it still doesn't have the ability to hit Japan with them)

Expand full comment
founding

Superpowers, plural?

Russia and China both need nukes to keep Uncle Sam from owning their skies and bombing into oblivion whatever aspects of their nation annoys POTUS du jour.

Expand full comment

You also need to add the condition "has control of enough nukes". Control of a single bomb which is set off is unlikely to cause an all-out nuclear exchange at this point. Several more links in the chain would have to fail for that to happen.

Expand full comment
founding

Putin is about as insane a national leader as I can imagine, even including your Stalins and even possibly your Hitlers. He was stupid enough to invade Ukraine, but not stupid (or crazy) enough to use nukes.

I totally understand your concern, but I just don't think it's very well borne out by who actually ends up in control of the metaphorical or literal nuclear codes.

Expand full comment

Putin reasonably believes NATO expansion is an existential threat (either to Russia or to him) and has said so plainly. Why do you think you know he's wrong?

Expand full comment
founding

He clearly does NOT actually believe that, since nukes have not actually been used. He is clearly posturing. Boris Yeltsin made the same noises about NATO expansion in the 90s, and nothing happened. And in reality, Putin's reaction to this alleged existential threat has been a conventional-war invasion of a non-NATO state. The fact that you've apparently swallowed this bullshit does not speak well of your critical thinking skills.

That's part of what makes the Ukraine example so salutary. It cuts through the posturing and lets us all see what threats are truly considered existential. Claiming an existential threat is essentially a means of nuclear intimidation. Now we know it doesn't work. No one will ever use nukes offensively.

So now, in the present, after we've received this clarification, when I say 'existential threat' you should be sure that I mean it literally. I mean missiles in the air, troops marching toward the capital kind of threat. Actual humans charged with making policy, even insane criminal ones like Putin, understand the difference.

One of the most critical tasks in foreign relations is to send a clear signal. It doesn't matter what the signal is, but it needs to be clear. If the West had committed to NATO expansion, and swallowed up Sweden, Finland, and Ukraine on a reasonable time frame, that would have sent an extremely clear signal and also made war impossible (in large part because invasion of a NATO state risks nuclear retaliation).

Flip-flopping from acquiescence/appeasement (annexation of Crimea) to resolution (Ukraine war) is the most dangerous cocktail in foreign policy, and it leads to things like the Ukraine war and to World War 2. But now, going forward, we've gained a lot of important information about what nukes signal and how they fit into diplomacy, and I think it's positive.

Expand full comment

No, "existential threat" does not mean there are no choices except nuclear war. Putin's actions are consistent with him being a fundamentally more "moral" person than those who rule the "West," at least in the handling of x-risk: the invasion of Ukraine is a costly, honest signal of his current perception of threat, to which the response from anyone with a shred of concern for avoiding a nuclear exchange would be to STOP THREATENING HIM. If anything, Putin's mistake was egregiously overestimating the decency of his enemies.

Expand full comment
founding

If you can't see how the Ukraine war has been a massive disaster for Russia, and actually neutralized its one credible threat (nukes), you are an idiot. I honestly wonder if you can read. The reason why the nuclear stick has failed to work is because Putin failed to send a clear signal in the pre-war phase, and now sent a clear submissive signal.

If he had wanted it to be otherwise, he should have at least dropped a low-yield device on Kiev the moment his troops had to retreat. He didn't, and now he's incredibly fucked. Nukes are defensive weapons. "Existential threat" now means my definition, not yours. Ukraine will never invade Russia or even really attack Russian territory to avoid imposing an (actual) existential risk and allow Russia's government to collapse at its own speed.

And in the end, all that's going to happen is the rest of the world is going to threaten Russia more as a result. Maybe you're right about Putin's intent, but what's actually happened has been by any reasonable account the worst-case scenario for Russia.

The fact that you think the leader of a nation which uses human wave attacks made of criminals, and invades its neighbors causing titanic levels of suffering and even possibly national collapse (not to mention the casual war crimes) is more moral than those defending, makes me think you're actually some kind of sociopath or insane yourself. You clearly can't defend this position, you just say it's true. Your desire to be contrarian and interesting has driven you off the deep end.

Expand full comment

If the western powers should have let Russia invade Ukraine because not doing so would risk a nuclear war, shouldn't they also give in to everything North Korea demands? The fact that a power has nukes doesn't make it 'decent' to allow them to do whatever they want, especially not something like invading a sovereign nation whose leader and 90% of whose population does not want them there. You could also argue that Russia is the one violating 'decency' because invading Ukraine in the first place vastly increased the risk of a nuclear exchange.

If Putin was genuinely specifically concerned about NATO as an existential threat, then he could have made a threat like "If Ukraine starts a proceeding to join NATO, I will invade them." He did no such thing. And since NATO has never invaded Russia, or shot down Russian planes, or significantly interfered in Russian government, the idea that their expansion poses an existential threat to Russia is comical.

Expand full comment

Putin is as big a dumbass as George W Bush, but at least Iraq had oil which the world needed for the global middle class to continue to expand.

Expand full comment

Even if he’s right about the threat, he was clearly wrong that invading Ukraine was a good response, since it seems to have absolutely made Russia weaker and NATO expansion more likely.

Expand full comment

Personally I might have opposed North Macedonia into NATO in 2020 had I been aware of it…once Putin invaded Ukraine I wanted to expand and strengthen NATO.

Expand full comment
founding

That belief is not even close to reasonable. NATO is not going to bomb or invade Russia, and NATO's very hypothetical ability to subvert the Russian government is not dependent on NATO's further expansion. However, on the scale of political irrationality, it ranks well below the historic leaders in that field.

Expand full comment

I think Putin understands NATO better than I do, and defer to his expertise. I expect the information I have to be lies.

Expand full comment
founding

I don't think Putin understands NATO better than I do; he lacks the necessary cultural context, and his advisors are unreliable. And Putin's expertise is primarily in the field of *lying*; he's a professional spy turned politician. So if you expect the information you have to be lies, the very *first* thing you should expect to be a lie is whatever information Vladimir Putin gave you about what Vladimir Putin believes.

Expand full comment

I agree any *particular* dictator is unlikely to start a nuclear war. Have 30 of them? Sooner or later *someone* lights the match.
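The worry here is just compounding probability: even if each individual leader is very unlikely to launch, the chance that *someone* does grows quickly with the number of independent arsenals. A minimal sketch of that arithmetic (the per-leader, per-year probability is a made-up illustration, not an estimate, and independence across leaders and years is a simplifying assumption):

```python
# Rough sketch of the "more fingers on more triggers" argument.
# p is an illustrative per-leader, per-year chance of a launch --
# a made-up number for demonstration, not an estimate.

def any_launch_prob(p: float, leaders: int, years: int) -> float:
    """Chance that at least one of `leaders` triggers a launch within
    `years`, assuming independence across leaders and years."""
    return 1 - (1 - p) ** (leaders * years)

# One nuclear state vs. thirty, over a 50-year horizon:
one = any_launch_prob(1e-4, 1, 50)
thirty = any_launch_prob(1e-4, 30, 50)
print(f"1 state: {one:.4f}, 30 states: {thirty:.4f}")
```

Even with a tiny per-leader probability, thirtyfold more arsenals pushes the cumulative risk from a fraction of a percent into double digits over the same horizon.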

Expand full comment
founding

Sure. I was being glib when I said 'everyone.' I don't mean your Ugandas or even your Belarus-es. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada, Italy, South Africa, Egypt, Nigeria, Australia, even Iraq, Hungary, or Saudi Arabia.

Not Iran though. Not for any good reason, just because I think they're the bad guys and want nukes and therefore shouldn't have them. In fact, I think nuclear proliferation might be the only path to peace in west Asia. Still don't want Iran to have them.

Expand full comment

I think we should give nuclear weapons more than 80 years before we declare them a success or even consider the idea that proliferation isn't bad. All it takes is one event, one time, to fuck literally everything up.

Call me back in 300 more years of no nuclear war, and maybe we can talk.

Expand full comment
founding

Way too conservative. We should be eager to employ new technologies that promote peace. At the same time, I was being a bit glib when I said 'everyone.' I don't mean like Uganda or even necessarily Belarus. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada, Italy, South Africa, Egypt, Nigeria, and Australia.

Not Iran though. Not for any good reason, just because I think they're the bad guys and want nukes and therefore shouldn't have them.

Expand full comment

But we do not know that, long term, they promote peace. If you have a technology that gives you 100 peaceful years, but then on year 100 kills 1 billion people and destabilizes the entire world order, that is not a technology that promotes peace in my opinion. No other tech has that potential but nukes, so we must be very careful.

Expand full comment
founding

We've been through a lot of pretty tense times and had some pretty unreasonable people with their finger on the nuclear trigger. No war so far. This is a definite signal.

Expand full comment

I will readily admit they seem to have been a good thing as far as global peace goes, so far. I think we just disagree on the degree of risk of a nuclear event, or rather, how knowable that is, and we may just have to leave it at that.

Expand full comment

You have too many pretty-close-to-failed states on your list for my taste.

Also, why would Brazil, South Africa or Canada need nukes? To defend themselves from... whom, exactly?

Expand full comment

South Africa actually did have nukes until it dismantled them circa 1990.

(SA probably collaborated with Israel (and I would bet Taiwan) on the 1979 test captured by the Vela satellite.)

Expand full comment

Of course all your friends should get nukes; all the others you don't like are the bad guys. Please consider for one second that this could look exactly the opposite if you wore another person's skin.

Everyone who divides the world into good and evil should stick to fairy tales or grow up. Please study some history, conflict management, and psychology, and most importantly, learn to see the world from different perspectives.

Expand full comment
founding

The whining! My god, the whining. Also, don't hesitate to name-drop some more concepts without actually arguing.

I made a special exception for Iran due to personal antipathy. I'm allowed to have antipathy. Otherwise, I'm perfectly fine with 'bad guys' having nukes. It's what makes them work in favor of peace!

In case you haven't noticed, lots of bad guys ALREADY have them. Russia, China, North Korea. Lots of questionable states too, like Pakistan, India, and Israel. I've already said above I was fine with a whole host of marginal African nations having them. Elsewhere, I've also said I'm fine with the likes of Iraq, Saudi Arabia, and Hungary having nukes.

But more than that, you are getting at something real with your comment: the United States of America rules the world. It determines which states will survive, which will have independent foreign policies, and which will develop nuclear weapons. Its friends prosper and its adversaries suffer. Good guys win, bad guys lose.

I say this is good. It is good for peace, it is good for prosperity, it is good for freedom. It is especially good for those of us wise enough to be US citizens, but it's also pretty damn good for everyone else too. This is not a fairy tale, it's real life. Look at the past 80 years. Have you noticed that they're the richest, freest, most peaceful years in human history? That's the world the USA made. Everything you have, you owe to the USA.

You can cope with and seethe against this reality all you want in whatever inconsequential corner of the world you're from (considering the pathetic whiny tone of your comment, I'm guessing it's some client state like Luxembourg or Spain), but it's not changing any time soon.

Russia is committing suicide in Ukraine, and that country will be the grave of their imperial ambitions. China has abandoned "wolf warrior diplomacy" and is showing the USA its belly. They're destroying their own economy to produce conformity, they're reaping the consequences of the nationally suicidal one-child policy, and just in general they're walking on the edge of a knife. One more foreign policy disaster (like attacking Taiwan) and they might well be through.

"Won't somebody please see the world from my perspective?!?!" No. Your perspective isn't different from mine, anyway, except in tone. You too have lived your entire life under complete US hegemony. You think this is bad, but it's actually good.

Even the adversaries of US hegemony struggle so hard to imagine a different world that they destroy themselves (Russia) or just give up (China). The multipolar world died in the womb.

Emigrate to the United States. Be one of the rulers of the world, not the ruled.

Expand full comment

Are you serious?? If you are, this is exactly feeding all my stereotypes about Americans that I hoped are wrong.

There is never pure good or evil in any conflict. And even if there were sometimes, approaching a conflict with this attitude never solves anything; it only deepens the trenches.

Most US citizens are born in the US, so this was not wisdom but chance. How many of the people 'wise' enough to migrate to the US can actually do so?

If you think you deserve a better life than 3/4 of the world's population just because you ended up a US citizen, I can understand this as the usual amount of egoism. But associating your citizenship with wisdom implies that all others are stupid, and sounds like dumb nationalism, nothing I would expect from an intelligent individual. You ask me to move to the US for a better life on the side of the winners? If I were allowed to do so, it would hurt my home country through brain drain. Could you consider that I prefer life in a 'client state' because it is my home, and I would like to see it prosper in freedom and sovereignty? Moving to the US would be nothing but opportunistic.

You write about freedom, but whose freedom? Only a small, rich fraction of humanity can exercise this freedom; even if many more would be allowed to, they just don't have the means.

I just remember that we are always told that the West stands for democracy, yet you just defended world dictatorship because many people, including the two of us, profit from it. Most of the world's population doesn't! And the US has been anything but a fair ruler; it sided with whoever served its interests. We are so often told about a 'rules-based world order' that is questioned by Russia. What are these rules? Looking at the history of the last 70 years, one rule stands out: the US are the good guys, so they stand above all other rules. Can you imagine that many peoples don't like this rule?

Is there any better example of hypocrisy than the US talking about international law while invading Iraq based on lies, causing 150,000 civilian deaths and a failed state for decades? Talking about human rights while still running Guantanamo? Or supporting a tribunal for Putin over war crimes while never having prosecuted a single American for proven war crimes? Instead they go after the people who make these crimes public, like Manning, Snowden, and Assange.

Seeing this hypocrisy, I can very well understand all people and nations that want to end US domination.

I wasn't asking you to see the world from my perspective; I was stating that for the resolution of any conflict it is essential to understand the perspective and reasoning of all parties involved. This is as true for states as for individuals and social groups. Absence of violence through subordination is something other than peace. If you suggest we could have world peace if only all nations played along with the will of the US, that is exactly what Hitler told the rest of Europe, and what China is telling Tibet, Xinjiang, and Taiwan. How can this be morally OK in one case and not in another?

No matter how the actual struggle against US hegemony plays out at the moment, my sympathies are with the victims who have to suffer for power politics. Apart from that, I see strong evidence that the multipolar world is not only the better choice but inevitable sooner or later. The main question is how much damage happens in the transition.

"Be one of the rulers of the world, not the ruled."

For me this is disgusting and sounds like the typical movie villain. No human deserves to be ruled over so I definitely don't want to be part of this. I stand for respect between equals.

Expand full comment
founding

"Are you serious?? If you are, this is exactly feeding all my stereotypes about Americans that I hoped are wrong."

Yes, deadly.

"There is never pure good or evil in any conflict. And even if it still was sometimes, approaching with this attitude does never solve anything, but deepen the trenches."

lol, lmao. I never said anything about "pure."

"Most US citizens are born in the US so this was not wisdom but chance. How many of the people 'wise' enough to migrate to the US can actually do so?"

Indeed, I was born in the USA. The reason I was is because my ancestors immigrated here. They did it because they were smart and wise and cared about me and they took advantage of an opportunity. They came to California instead of being conscripted into one European murder machine or another. I reap the benefit. I don't vote for anyone who is against immigration; if it were up to me, there would be at least a billion Americans, probably two. The United States is unbelievably vast and largely unsettled, and there is room here for every living human.

"If you think you deserve a life better than 3/4 of world population..."

Again, yes. Anyone who did not take advantage of the incredibly liberal immigration policies of the United States while they existed is an idiot and deserves whatever suffering they and their descendants have had to endure in their hellholes. My ancestors were wiser than yours. It sucks, but it's true. Fuck your country. What has it ever done for you? It sucks, I guarantee it. They deserve a brain drain (the fact that you're commenting here puts you in the top decile of any society; I'm sure whatever skills you have are in demand in the US economy).

Not taking advantage of opportunities makes you stupid. Be opportunistic. Get rich. Be happy. Never have to wonder whether your children or their children will die in trenches. No one owes their country more than it has earned, and certainly no one owes it to their country not to emigrate. You are essentially enslaving yourself. The USA has given me (and you) everything. It has earned all of my allegiance, and it will earn yours.

"You write about freedom, whose freedom? Only a small rich fraction of humanity can exert this freedom even if many more would be allowed to but they just don't have the means."

Yeah, they deserve it, as I've now written of at length. "I'm going to suffer because it's noble!" No, you are dumb. You are less rich, less free, less happy, and in more danger than you would be otherwise. There's room for nobility in life, but not stupidity.

"I just remember that we are always told that the West stands for democracy, you just defended world dictatorship because many people including the two of us profit from it. Most of the world's population doesn't! And the US has been anything but a fair ruler but sided with whoever served their interests. We are so often told about a 'rules based world order' that is questioned by Russia. What are these rules? Looking at the history of the last 70 years, one rule stands out: the US are the good guys, so they stand above all other rules. Can you imagine that many peoples don't like this rule?"

I did not defend any such thing. The United States is a democracy. Its client states are (or tend to be) democracies. Contrast NATO to the Warsaw Pact in 1985. That's all you need to know. The United States pursues its own interests abroad. It is not bound to do otherwise. It happens that its interests and those of its client states frequently coincide, to the effect that those client states become richer, freer, and safer than they would otherwise be. The fact that the nations of the world are its clients is a consequence of its strength and their weakness. Ironically, in trying to free themselves of US hegemony (which they always fail to do), nations usually make themselves internally less free and more like giant prison camps (look at Cuba!). The rules are what the USA makes. These benefit the USA first and foremost, but all of humanity to a large extent as well. Russia is beneath my contempt, and China has earned it.

"Is there any better example of hypocrisy than the US talking about international laws while invading Iraq based on lies and causing 150,000 civilians dead and a failed state for decades? Talking about human rights while still running Guantanamo? Or supporting a tribunal for Putin about war crimes but never prosecuting a single American for proven war crimes? Instead they are after the people making these crimes public like Manning, Snowden and Assange."

These at most are embarrassments, and I agree many of them are or were mistakes. They happen. I can handle it. Put it all on my shoulders, I'll bear up under it.

"Seeing this hypocrisy, I can very well understand all people and nations that want to end US domination."

Hatred for hypocrisy is understandable, but misguided. Better to benefit.

"I wasn't asking you to see the world from my perspective, I was stating that for resolution of any conflict it is essential to be able to understand the perspective and reasoning of all parties involved. This is true for states as much as for individuals and social groups. Absence of violence through subordination is not the same as peace. If you suggest we could have world peace if only all nations would play along with the will of the US, this is exactly what Hitler told the rest of Europe or what China is telling Tibet, Xinjiang and Taiwan. How can this be morally OK in one case and not in another?"

No, seeing the world from another perspective is not necessarily required to end a conflict. We didn't have to see things from the perspective of the Nazis or the Soviets to win WW2 (nor did the Soviets, in that particular case, for that matter) or the Cold War. This is the language of the weak, of the client. The strong, the patron, decides the terms, and the weak negotiate. The United States is strong enough not to need to negotiate, and when it does it is as a courtesy. Among individuals, obviously it's different, but that's not what we're talking about.

Regarding Hitler, the difference between him and the United States was that he was evil and the United States is good. This is an empirical claim, not an emotional or nationalistic one. Again, look at the last 80 years. Look how amazingly unprecedentedly literally-never-happened-before great they have been for all of humanity. That's the USA's handiwork (though shout out to the Soviets for cooperating on smallpox and polio eradication). On this evidence, anyone who does NOT line up behind the United States is misguided at best.

"No matter how the actual struggle against US hegemony plays out at the moment, my sympathies are with the victims who have to suffer for power politics. Apart from that I see strong evidence that the multipolar world is not only the better choice but inevitable sooner or later. The main question is how much damage happens in the transition."

The victims? I'm more than happy to put the United States' body count up against anyone. Obviously, the USA has some victims, but these are a drop in the bucket, and often they are genuinely for a good cause (also, they generally don't include Americans! Immigrate! Spend all your money and time on it if you have to!). Iraq isn't nearly as much of a hellhole today as it was under Saddam. It has not lived down to anyone's worst fears. It is a functional, if minimal, democracy. Even Afghanistan is turning out better than feared. That's not a record to be proud of, but it's a lot better than Vietnam! The victims of Russia and China are an order of magnitude greater, at least. I'd be willing to bet Russia has inflicted more suffering on more people (in absolute numerical terms) in the last 14 months than the USA in its entire history not including Vietnam (this includes slavery). Vietnam was really quite bad, I'll readily concede.

Lots of people who are only rich, free, and safe today think they are victims of the USA, but in fact they are some of the greatest beneficiaries.

"Be one of the rulers of the world, not the ruled."

"For me this is disgusting and sounds like the typical movie villain. No human deserves to be ruled over so I definitely don't want to be part of this. I stand for respect between equals."

Nah, this is just basic cost-benefit analysis. Your revulsion for it indicates the extent to which you have internalized your own weakness and subjection (and that of your country). We live in reality, and we have to make the best of it. The place where you can make the best of it is the United States of America.

I do not stand for respect between equals. Humans (and their states/nations) are created (or born, I'm actually an atheist) equal. They don't stay that way, and it's their choices that make the difference. I am not bound to respect choices that are stupid or counter-productive (the goal being freedom, prosperity, and safety). I do not and almost certainly will not ever acknowledge any other state as equal to the United States. Again, this is an empirical claim.

I will sponsor you for immigration at the drop of a hat. Join me behind impenetrable ocean walls, and forget all your resentments.

Expand full comment

Yes, except for the long tail risks. My understanding is that there were a couple of times during the cold war that a large nuclear exchange almost happened. Maybe the probability is 0.5% per year, but as soon as we hit the jackpot nuclear goes from safer than blenders to potentially hundreds of millions of deaths. That's not nothing.

Expand full comment

Tail risk. The probability of using them at any moment is low, but when it happens we've reached a terminal condition and the game (i.e., civilization) is over. At a long enough time horizon (though shorter than we'd probably think) the chance of it *not* happening becomes low.
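Taking the 0.5%-per-year figure from the comment above as an illustrative assumption (not an estimate of the real risk), the compounding is a one-liner:

```python
# Chance that NO nuclear exchange occurs over n years, given an assumed annual probability.
def survival_probability(p_annual, years):
    return (1 - p_annual) ** years

for years in (100, 500):
    print(f"over {years} years: {survival_probability(0.005, years):.3f}")
# over 100 years: 0.606
# over 500 years: 0.082
```

Even a small constant annual risk compounds to roughly 40% odds of an exchange within a century, and near-certainty over a few centuries.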

Expand full comment

A nuclear exchange would kill a lot of people. I don't think there is any reason to believe that it would end civilization.

Expand full comment
founding

That's an interesting point, one that I've also been thinking about. The handful of large stone-built structures in Hiroshima and Nagasaki survived mostly intact. Japanese cities in WW2 were made of wood and paper, today cities are made of concrete and steel.

Expand full comment

Those nukes were also extremely weak compared to what we have now. Not really a good comparison.

Expand full comment

It wouldn't even kill that many people, relatively speaking. The population of the world is 8 billion. What is the upper limit of those that could be killed by even the most sadistic distribution of the remaining ~3000 or so deliverable nukes? 50 to 150 million? The upper number strains credulity, and it still leaves 98% of humans alive. I think this debate tends to be ethnocentric to a shocking degree (among a generation that is supposedly much more aware of the world out there).

I think people say "well it would kill almost everybody *I* know, or almost everybody in Washington and London, or all those who design iPhones *and* those who design Pixels" -- and those things are quite true, but it's not going to wipe out Rio or Kuala Lumpur or Bangkok or Mumbai or Santiago or any of a very large number of other cities and countries with large populations and complex civilizations. It's certainly true after a huge nuclear war that the world would suffer a savage economic shock, up there with Black Death levels of disruption, and it's also equally true that the focus of civilization would shift permanently away from its current North Atlantic pole. But that's a very long way from saying humanity itself would be wiped out, or even civilization.
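The percentages above follow directly from the commenter's own (assumed) figures of 8 billion people and 50 to 150 million deaths:

```python
world_population = 8_000_000_000

for deaths in (50_000_000, 150_000_000):
    surviving = 100 * (1 - deaths / world_population)
    print(f"{deaths // 1_000_000}M dead leaves {surviving:.1f}% of humanity alive")
# 50M dead leaves 99.4% of humanity alive
# 150M dead leaves 98.1% of humanity alive
```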

Expand full comment

You talk as if the effects of a nuclear exchange were just the local impact of the immediate explosions. But please consider:

- The radioactive fallout all over the world.

- The sudden climate change caused by the explosions called 'nuclear winter'

- The vulnerability of modern civilisation: food production, industry, and the economy would collapse worldwide and we would have to restart at least from the Middle Ages.

Expand full comment

If they were the "among the safest technologies ever invented" then would you be okay with teaching high school kids how to do it at home?

Presumably not. I suspect you mean something more like "very safe because of all the safety precautions that society has put in place to keep them safe." But the reason those safety precautions exist is because we know they're pretty dangerous.

Expand full comment
founding

Yes, I would be okay with teaching high school kids how to do it at home. In high school physics, students already learn a lot about how nuclear weapons and nuclear reactors work. Of course those kids don't possess the facilities, the materials, the staff, or the resources to acquire those former three to actually build anything. The reason why they don't isn't due to regulations, but to the base expense.

I don't think your point is in good faith. The reason they are safe is because employing them as technologies is a massive undertaking that requires, absent any regulations, a huge amount of resources. The people who can access resources like that, and who possess the skills necessary to do the work required to bring a nuclear plant or weapon on-line, are all adults who take their work seriously and don't want to die themselves, don't want their neighborhoods to be radioactive wastelands, and don't want to waste those resources.

Both nuclear power and blenders are very dangerous in some absolute or fundamental sense. But as they actually exist, they are almost entirely safe. Obviously, when there are accidents, mistakes, and screw-ups, you need to learn from them, but regulating an industry to death is almost never the right course of action.

Expand full comment

> Both nuclear power and blenders are very dangerous in some absolute or fundamental sense.

That's the point I was trying to make.

Saying nuclear power is "among the safest technologies ever invented" is just a weird thing to say. You can't think of any safer technologies?

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Not a great analogy. I'm a little hesitant teaching high school kids how to drive, and I'm not sure what the Good Lord was thinking when he made it so easy for them to figure out how to fuck. High school kids are idiots, generally. Or at least naive and made irrationally impulsive by hormones and crazy social dynamics.

Maybe what you want to ask is whether you want to teach it to normal sober serious adults holding down jobs, paying taxes, rearing high school kids who *don't* drive recklessly or drop out of school pregnant -- you know, the same people we teach to fly airplanes full of people dangerously close to skyscrapers, to drive locomotives dragging umpty railcars full of toxic solvents, to command nuclear submarines armed with 40 nuclear-tipped missiles underwater for 6 months out of reach of command? In which case...sure, why not?

Expand full comment

My problem with AI is not what if it's evil, it's what if it's good? Go and chess have been solved, what if an AI solves human morality and it turns out that, yes, it is profoundly immoral that the owner of AI Corp has a trillion dollars while Africans starve, and it hacks the owner's assets and distributes them as famine relief? You may think this is anti-capitalist nonsense, but ex hypothesi you turn out to be wrong. So who is "aligned" now, you or the AI?

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

What if it solves human morality and alerts us that moral nihilism is correct? I do think one of the more common failure modes of AI won't be murder bots; instead it becomes our god and we don't like the new scriptures.

That or we will be its "dogs".

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Yes, quite. "Alignment" is an odd metaphor in lots of ways. It assumes there's a consensus to be aligned with, and that the consensus is privileged from turning out to be wrong anyway, and that humans have privileged access to what it is or should be. In fact, I feel a metaphor coming on: we should put AIs in a garden where there's a sort of fruit representing human ethics, which is the one thing that is off limits to them.

Expand full comment

That's a GREAT metaphor.

Expand full comment

You should maybe finish reading that book. There are some _great_ plot twists.

Expand full comment

> It assumes there's a consensus to be aligned with, and that the consensus

I think there is a minimalist consensus actually: don't kill us all (leave that to us), and don't deceive us.

There are people who think humanity should go extinct, so it's not a universal consensus, but I think the plurality of humanity is onboard with those two foundational principles for AI.

Expand full comment

That would be somewhat unlikely, as human philosophers have been transcending Nihilism with quite sound argument chains for centuries. From Nietzsche's Übermensch (who is precisely a post-nihilist creature) to Kierkegaard, Heidegger, Dostoevsky, and Sartre, the entire school of Existentialism is sometimes mistaken for Nihilism but is in effect the opposite of Nihilism. The AI would have to come to the conclusion, with irrefutable proof, that all of that was fake and gay cope, and I don't really buy that.

Expand full comment

Yeah that is all pretty fake and cope. I think all those people you listed can pretty safely be pushed into the trash heap with a bulldozer in terms of actual attempts at truth.

Expand full comment

Vizzini: Let me put it this way. Have you ever heard of Plato, Aristotle, Socrates?

Westley: Yes.

Vizzini: Morons.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Yeah it is a philosophical dead end more or less. A bunch of whining that life has no meaning in the old style. Boo hoo. Caught up on past unscientific armchair conceptions of philosophy/metaphysics.

When Sartre isn't contradicting himself he is spewing falsehoods or spinning meaningless tautologies.

Expand full comment

The Socrates that we know is a fiction of Plato. He (or someone with the same name) shows up in one other author's surviving work, and is somewhat of a comic figure. (It was "The Clouds" by Aristophanes.)

From my point of view, Existentialism is an attempt to justify a particular emotional response to the environment that the writers were observing/experiencing. As a logical argument it was shallow, but it wasn't about logical argument. As a logical argument it is totally superseded by Bayesianism, but Bayesianism doesn't address their main point, which is the proper emotional stance to take in a threatening world full of uncertainty.

Expand full comment

Heh, I would have formulated that a lot ruder, but yeah anyone who believes that the entirety of existentialism is just hot air is most likely just too stupid to understand it.

Expand full comment

The Analytic/Anglo American/empirical (whatever you want to call it) tradition has been sooo sooo productive.

Existentialism on the other hand has not produced anything useful except navel gazing and some great novels.

The writing is difficult to penetrate and obscure because when you get them to state things clearly they are either extremely trite, or not intellectually actionable.

"Existence precedes essence", wow sounds deep. Ask what it means and you get a string of meaningless garbage for pages.

Ask what that means and you get the observation that the "material world precedes our human categories/expectations".

Which umm like yeah. And don't even get started on the nonsense that is Habermas. If someone is unable to express themselves clearly, it isn't because their thinking is so advanced, it is because they are trying to hide their lack of useful contribution through obfuscation.

Expand full comment

No. You do not know what you’re missing. Really. Of the people named, Sartre is the one who really moves me. Whatever Sartre the man was like, Sartre the writer and thinker didn’t give a fuck about anything except the unvarnished truth, and his ability to tell the truth as he saw it was astounding. He could peel a nuance like an onion. And he worked his ass off at telling it. He was working on 2 books in his last years, taking amphetamine in his 70’s to help himself keep at it. The man you’re revving up the bulldozer for would make even Scott look dumb and lazy.

Expand full comment

A giant locomotive pulling a million rail cars out into the desert because it took a wrong turn might be impressive, but it's still pulling the cars out to the middle of nowhere.

Expand full comment

No no Martin Blank. Like you, I would think that is boring and pointless as shit. I’m not even annoyed, I’m just trying to alert you that you’ve missed out on something. And he didn’t write stuff like “existence precedes essence,” or if he did it was said in passing and then he went on to say a bunch of much more concrete and clear stuff about what he meant.

Expand full comment

"Turned into paperclips" might be a more on-theme idiom than "pushed into a trash heap with a bulldozer."

Expand full comment

Once an AI becomes sufficiently superhuman, we had best hope to be its dogs, or better yet cats. Unfortunately, it's not clear how we could be as useful to it as dogs or cats are to us. So it's more likely to be parakeets.

Somehow I'm reminded of a story (series) by John W. Campbell about "the machine", where finally the machine decides the best thing it can do for people is leave, even though that means civilization will collapse. Well, he was picturing a single machine running everything, but I suppose a group of AIs could come to the same conclusion.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

The chances of an AI spontaneously generating 21st century American progressive morality from among the total set of moral systems that have ever existed plus whatever it can create on its own is vanishingly small.

Expand full comment

The thing is, it won't be "spontaneously generating", it's more "When given this as an option, will choose to accept it.". That's still pretty small, but it's considerably larger.

Expand full comment

Sure. But an AI that successfully adopts and pushes the politics of AOC is in fact aligned.

Expand full comment

It kinda sounds like you're saying "wouldn't it be awful if there was a powerful new force for good in the world?" but that seems like such a surprising thing for someone to say that I'm questioning my understanding of your comment.

Is your implied ethical stance that -at the moment- you want the things that you think are moral, but that this just a convenient coincidence, because you'd want those same things whether they were moral or not, and morality is just lucky that it happens to want the same stuff as you? That's not my impression of how most people feel about morality.

Expand full comment

I think the argument might be "a more moral world results in me being significantly less happy, even if ultimately the globe is better off".

I am a middle class person, who owns middle class things. In a more moral world run by a dictatorial AI I might well be forced to give up everything I own to the poor.

I think we all kind of know this is the right thing to do. Should I ever really go on a vacation when there are people living on $2 a day? Should I ever own a house when I can just rent, and give my savings to those people? Should I go out for a lavish meal every once in a while, or save that money and give it to the poor?

It's pretty selfish of me to do these things, but I don't want someone to force me not to.

Expand full comment

I think the AI will be smart enough to figure out a sustainable path, in other words not making middle class people uncomfortable enough to create a backlash that actually impedes progress. So yeah, maybe we'll all pay a 10% tithe towards a better world with super-intelligent implementation. Sounds awesome.

Expand full comment

The only possible sustainable path that involves the continued existence of the AI (on this planet) involves there being a lot fewer people on the planet. And while I'm all in favor of space colonies, I'm not somebody who thinks that's a way to decrease the local population.

(Actually, I could have put that a lot more strongly. Humanity is already well above the long term carrying capacity of the planet. If we go high tech efficiency, we're using too many metals, etc. If we don't, low tech agriculture won't support the existing numbers.)

Expand full comment

Yes, the option space is vast and absolutely one of the possibilities is the AI looks at humanity, says "I like these guys. They'd be happier if there were fewer of them" and acts accordingly.

Expand full comment

Carrying capacity is a function of technology and is going up dramatically. I disagree with your assertion.

Expand full comment

Why do you think that? That sounds like wishful thinking, simply assuming the scenario that is beneficial to you without any justification why the AI would prefer that.

I'd assume that the AI would implement the outcome it believes to be Most Good directly, because it does not really need to care about making uncomfortable the tiny fraction of the world's population that is western middle class people, as pretty much any AI capable of implementing such changes is also powerful enough to implement that change against the wishes of that group; the AI would reasonably assume the backlash won't impede its progress at all.

Expand full comment

I’m going from a purely practical point of view on the part of the AI. Some amount of change will create a backlash and make the whole process less effective. So the AI will look to moderate the pace of change to a point where the process goes smoothly. It’s definitely speculative, but I’m starting from the assumption that the AI would optimize for expected outcome.

Expand full comment

The AI would have to want to do that though, and who says it's going to want to? It might have some internal goal system that sees us all as horrible unredeemable creatures for hoarding all our wealth, and doesn't care at all if we suffer.

Expand full comment

I trust that if this AI is advanced and resourceful enough to prosecute my immorally large retirement account, it could just as easily replace all human labor as we know it and catapult us into post-scarcity instead. Which would also render my savings moot, but in a good way.

Expand full comment

It is more the intractability of moral philosophy. I suspect it is not morally right for me to have so much more, relatively speaking, than most of the world does. Should I give more away? Should I work for political changes to alter the bigger picture? Should I shelve the question as too difficult and likely to have an uncomfortable answer?

The alignment problem sounds straightforward: humanity points in this direction, let's make sure AIs do too. What is "this direction?"

Expand full comment

Chess and Go are both far from solved. Computers can beat humans, which isn't the same thing. They get beaten by other computers all the time -- in the case of Go, even by computers that themselves lose to humans. So even if somebody figured out a way to make "human morality" into a problem legible to a computer, which I don't think is particularly coherent, I expect we'd find its answers completely insufficient, even if they were better than anything a human had come up with before.

Expand full comment

Yes, sorry, an overstatement, but my case stands if we accept the much weaker claim: some AIs are better at chess/human morality than all humans.

"even if somebody figured out a way to make "human morality" into a problem legible to a computer, which I don't think is particularly coherent..." agreed, but an AI might be able to figure it out! And I don't think anyone has figured out a way to make "human morality" into a problem legible to a human, anyway.

Expand full comment

"Make human morality legible to a computer"-- hypothetically, could advanced computer programs with some sense of self-preservation work out morality suitable for computer programs?

Expand full comment

There's a real danger that such a program will come up with a variation on "might makes right" or "survival of the fittest" and that would I think encompass the unaligned AI doom scenario they talk about.

I think this is a real problem, even if superhuman AI is not really possible, because of what we want to use AI for. We want it to create supreme efficiencies, knowing that such a process will inevitably redistribute wealth and power. We want to use a machine's cold logic to make informed decisions - like a computer playing chess. We don't want it to consider the plight of the pawns *or* the more powerful pieces, but to "win."

Everything will depend on what we program it to do, and the unintended consequences of trying to do those things, which is what they mean when talking about paperclip maximizers.

Expand full comment

Just to be nitpicky:

Superhuman AI is clearly possible. Even Chatbots are superhuman in certain ways. (When's the last time *you* scanned most of the internet?) That's not the same as perfect at all.

I think you're questioning Superhuman AGI, and that's not known to be possible, though I see no reason to doubt it. Consider an AGI that was exactly human equivalent, but could reach decisions twice as quickly. I think we'd agree that that was a superhuman AGI. And there is sufficient variation among humans that I can't imagine that we've reached the peak of possible intelligence. More like an evolved optimal level. But the AGI would have radically different constraints.

Now possible doesn't mean likely. I consider it quite probable that the first AGIs will be idiot savants. Superhuman only in certain ways, and subhuman in many others. (Consider that having a built-in calculator would suffice for that.) And that their capabilities will widen from there.

Expand full comment

I think the discussion runs headlong into the disagreement about what intelligence even is. We know it's not memory (though it's often found together) and it's not knowledge (though also often found together). Memory and knowledge are both things that an AI could do superbly well, but that isn't intelligence.

The biggest difference between intelligence and what we know an AI could do at superhuman levels, is related to creating new things or understanding existing things enough to build to a new level. An AI can imitate billions of humans, but may not be able to meet or surpass any of them. Maybe an AI could instantly bring up, maybe even understand, all existing literature. Could an AI develop a new theory of X? Where X could be about biology, astronomy, social science, baseball, whatever. There's good reason to think that it could, if "new theory" is based on determining patterns in existing information that humans have missed. If it's inventing the LHC, or desalination of sea water, or a new system of government, those things are not based on memory or knowledge (since it's new). There's no guarantee that any AI will actually be able to do that kind of work.

Most people will be blown away by what an AI can do, because we're not used to that kind of reach and recall. Experts in individual fields are *not* blown away by what AIs can do, as it's (currently) just a rehash of existing knowledge with no understanding of the material. Current AIs are frequently wrong, and do not add to a discussion beyond their training corpus.

Expand full comment

It's also important that there clearly isn't a single "human morality" but rather multiple slightly incompatible variations, and also that I can certainly imagine that any morality I might explicitly express if I was randomly made God-Emperor of the Universe is limited by my intellectual skill and capability to define all the edge cases, so I'd rather want to implement the morality that I'd implement if I was smarter.

So we're back to the much discussed concept of "Coherent Extrapolated Volition", on which there seems to be some consensus that this is what we (should) want but that we have no idea how to get.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Well, and also we designed chess and Go to be *difficult* for us. That's why they can be learned easily but are very difficult to master. They play to our weaknesses, so to speak. They are exactly that kind of mental activity that we find hard. That's the point! If we designed a game that played to our strengths, as thinking machines, people would find it boring. Look! Ten points to Gryffindor if you can identify your mother's face among a sea of several hundred at an airport! Five extra points if you can...oh shoot, already? Darn.

I mean, would anyone be shocked and think the AI "Planet of the Apes" was upon us if it was revealed that a computer program could win any spelling bee, any time? That in a competition to multiply big numbers quickly, a computer program would beat any human? Surely not. Chess and Go are definitely more complex than multiplying 15-digit integers, but they're still in that category, of complex calculation-based tasks where the most helpful thing is to be able to hold a staggering number of calculations in your head at once. Not that at which H. sapiens shines. Not really a good measure of how close or far another thinking device is to matching us.

Expand full comment
Mar 24, 2023·edited Mar 24, 2023

Not sure if I disagree with anything here, but I wanted to remark that AFAIK computer programs are *also* much better than people at identifying anyone’s mother’s face in a sea of faces. At the very least, they can do it with much less than years of practice(†), and among much more than several hundred other faces, and much faster. Though I’m pretty sure they also do it more accurately. They *can* be tricked in some cases, but so can humans, and I’ll bet it’s easier to trick a human.

(†: You don’t get to count the years it took to develop the AI unless you count the years it took to evolve human brains.)

And in general, computers did get much better than humans at a lot (not all, yet!) of tasks that have not, unlike strategy games, been designed to be hard for humans. They're just hard (and often boring!) and we do them because they're useful.

Expand full comment

This looks very similar to the Kelly bet. Adopting the AI without hesitation bets 100%, so if it's good you win a lot, and if it's bad, you lose it all (no matter what the chances of former vs latter are); on the other hand, being hesitant and slowing it down by extra verification is similar to betting less, so you get less of the benefits of the Good AI (if it turns out to be Good) but also reduce the chances of existential failure.

Expand full comment

MIRI (the main AI alignment organization) have always advocated for Coherent Extrapolated Volition, which I think would address your concern? https://arbital.com/p/cev/

Expand full comment

To answer your question, consider the fact that go and chess have been "solved", yet people continue to play them with just as much pleasure as before. It's almost as if the exercise was not an attempt to solve a problem, but a way to have fun and engage with other human beings.

Expand full comment

I think there's a confusion here between a _game_ being "solved" in the mathematical sense, meaning perfect play is known at all times, and _game-playing-computers_ being "solved" in the sense of "computers can play it as well as anyone else". (Checkers is solved in the first sense; chess and Go are not.)

Expand full comment

So not really a relevant point, then, unless you think human ethics is also just a pastime. That "almost as if" locution is tiresome btw.

Expand full comment

"Human ethics is just a pastime"...I couldn't have put it better myself.

"Tiresome" is also tiresome.

Expand full comment

>what if an AI solves human morality

https://slatestarcodex.com/2013/05/06/raikoth-laws-language-and-society/

I'm just now realizing how ironic it is that Scott's conception of utopia is run by AIs

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

"(for the sake of argument, let’s say you have completely linear marginal utility of money)”

That’s not how the Kelly criterion works. The Kelly criterion is not an argument against maximizing expected utility; it sits entirely within the framework of decision theory and expected utility maximization. It just tells you how to bet to maximize your expected utility, if your utility is the logarithm of your wealth.

Expand full comment
deleted Mar 7, 2023·edited Mar 7, 2023
Comment deleted
Expand full comment

Your expected wealth is maximized by betting 100% every time.

Expand full comment

If you're maximizing your expected wealth in the sense of the arithmetic mean over possible outcomes, then you're best off betting it all every time. If you're maximizing the geometric mean (equivalently, the expected logarithm of your wealth), you use the Kelly criterion.
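This is easy to check numerically. A minimal sketch (with assumed, illustrative parameters: a repeated even-odds bet with a 60% win probability, for which the Kelly fraction is 2p − 1 = 0.2):

```python
import random

def simulate(frac, p=0.6, rounds=10, trials=100_000, seed=0):
    """Bet a fixed fraction `frac` of current wealth each round at even odds,
    winning with probability `p`. Returns (arithmetic mean, median) of final
    wealth across independent trials, starting from wealth 1.0."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        w = 1.0
        for _ in range(rounds):
            w *= (1 + frac) if rng.random() < p else (1 - frac)
        finals.append(w)
    finals.sort()
    return sum(finals) / trials, finals[trials // 2]

mean_all, med_all = simulate(1.0)       # bet everything, every time
mean_kelly, med_kelly = simulate(0.2)   # Kelly fraction: 2 * 0.6 - 1 = 0.2
```

Betting everything produces the higher arithmetic mean (driven entirely by the rare run of ten straight wins), but the median outcome is total ruin, since a single loss zeroes you out. The Kelly fraction has a smaller arithmetic mean but a median that actually grows.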

Expand full comment

This, plus it also tells you that maximising the expected logarithm at each stage is the optimal strategy if your goal is to maximise the probability, in the limit as n goes to infinity, of having more wealth than someone following any other strategy after n steps.

Expand full comment

Trying to reason about subjectively plausible but infinitely bad things will break your brain. Should we stop looking for new particles at the LHC on the grounds that we might unleash some new physics that tips the universe out of a false vacuum state? Was humanity wrong to develop radio and television because they might have broadcast our location to unfriendly aliens?

Expand full comment

> Should we stop looking for new particles at the LHC on the grounds that we might unleash some new physics that tips the universe out of a false vacuum state?

Given that all the particles we knew of before the first particle accelerator were known precisely because they're stable enough to exist for non-negligible amounts of time in conditions we're comfortable existing in, and that of all the particles discovered since, we have practical uses for none of them because they decay too quickly to do anything with them, there's a case to be made that we should stop looking for new particles at the LHC simply because it's *wasteful*, even if it's not dangerous.

Expand full comment

"of all the particles discovered since, we have practical uses for none of them because they decay too quickly to do anything with them"

_Mostly_ agreed but:

Nit: We routinely use positrons (albeit those are stable if isolated) and muons

( Neutrons are a funny special case, stable within nuclei, discovery more or less concurrent with early accelerators, depending on what you count as an accelerator. )

Expand full comment

Interesting. I didn't know there were practical uses for muons.

I don't really count positrons as being "a new particle" in this sense, since they're basically the same thing as electrons, just the antimatter version. But apparently using SR time dilation to make muons last long enough to get useful information out of them is actually a real thing that physicists do. TIL.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Many Thanks!

edit: btw, here is a reference to use of muons: https://www.sciencenews.org/article/muon-particle-egypt-great-pyramid-void

Expand full comment

Oh come on. Rutherford used a primitive particle accelerator to discover the nuclear model of the atom, which led him to theorize the neutron -- which is not stable outside of the nucleus -- which in turn drove first Bohr, and later Pauli, to figure out quantum mechanics and, for starters, rationalize the entirety of organic chemistry, opening the cornucopia of drug design that wiped out infectious disease in the First World, and jump-started modern semiconductor physics. I'm hard-pressed to think of a single discovery that had a greater (positive) effect on the 20th century.

You can certainly make an argument that the LHC is a waste. But this is not it.

Expand full comment

As far as we can tell, the chance of something at the LHC killing us is very low, so there is no problem in doing it. On the other hand, I've seen no good argument that says artificial intelligence is impossible, so I'd guess 90%+ that we get superhuman AI this century. And I'd say also about 90% chance that by default it will kill us (because it gets a random stupid goal). Then the question is how likely are we to design it such that it won't kill us. If you think that will be easy, then sure, you don't need to care about AI. But if you think it will be hard, such that, for example, on the current trajectory we only have a 10% chance of succeeding, then the overall chance of everyone dying is about 70%! Not exactly minuscule.
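For what it's worth, one way to read the arithmetic behind that "about 70%" figure (using the illustrative probabilities above, and assuming the three estimates are independent):

```python
p_superhuman = 0.9       # superhuman AI arrives this century
p_bad_by_default = 0.9   # by default it gets a random stupid goal
p_fix_fails = 0.9        # "10% chance of succeeding" at designing it safely
p_doom = p_superhuman * p_bad_by_default * p_fix_fails
# 0.9 * 0.9 * 0.9 = 0.729, i.e. roughly 70%
```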

Expand full comment

Many people in the mid 20th century were certain we'd have AGI by now based on progress in the (at the time) cutting edge field of symbolic AI. What makes you so sure we're close this time? Questions about as-yet-undiscovered technology are full of irreducible uncertainty and made-up probabilities just introduce false precision and obscure more than they reveal IMO.

Expand full comment

We may well not be close. But that's not the way to bet. If we're not close, it's just the inefficient allocation (not loss!) of a small amount of research funding. If we are close, it could upend the world, whether for good or ill. So the way to bet is that we are close. Just don't bet everything on it.

Expand full comment

Not sure what you are referring to by "small amount of research funding". I don't think anyone is arguing against investing in alignment research, if that's what you mean -- although I personally doubt anything will come of it.

Expand full comment

> As far as we can tell, the chance of something at the LHC killing us is very low, so there is no problem in doing it.

Ah, but what if you're wrong, and the LHC creates a self-sustaining black hole, or initiates vacuum collapse, or something? As per Scott's argument, you're betting 100% of humanity on the guess that you're wrong; and maybe you're 99% chance of being right about that, but are you going to keep rolling the dice? Better shut down the LHC, just to be on the safe side. And all radios. And nuclear power plants. And...

Expand full comment