In the 1940's we thought nuclear weapons were the solution to the Nazis. As a result, we now face a much bigger problem than the Nazis.

We don't need new tools. We need new attitudes.


Remind me which parts of Europe are part of the third reich today? How big is the Greater East Asia Co-Prosperity Sphere?


Remind me why I should reply to lazy little gotcha posts?


I apologize, I misread your post to say that “we have a much bigger problem WITH Nazis” (thought this was a lazy stab at “America is currently run by or in danger of being run by Nazis”)


No worries, on with the show.


"we now face a much bigger problem than the Nazis. "

Could you elaborate on this? The Nazis were an expansionist power who basically wanted to kill all non-Aryans.


Which is worse?

1) The Nazis take over Western civilization, or...

2) Western civilization is destroyed in a nuclear war.


I'd phrase the choice a little differently:

1) The Nazis take over the world, and kill 90% of humanity or

2) E.g. The USA and Russia nuke each other, killing maybe 1 billion people directly and maybe 3 billion people indirectly (mostly depending on whether nuclear winter is real)

(1) is worse.


The Nazis would not have taken over the world. (I concur with "Western Civilization" as a realistic upper bound.)

Even if they realistically could, and even if they genuinely wanted to kill 90% of humanity (which they did not; 5% perhaps), there's absolutely no way they would have proceeded to. Assuming otherwise requires extreme idealism, a belief in the primacy of ideology over reality. Nazism, as extreme as it was, was still just a reaction to the material conditions of its adherents - ambitious losers of the pre-war world order that was crumbling all around them. They would have been nowhere near as extreme as winners, and while the world might have missed out on a few good things it did get out of the Allies prevailing, civilization would have continued more or less uninterrupted. In an alternate reality, grandchildren of the WW2 Nazi dignitaries at campuses of elite colleges are now performatively rejecting their country's nationalist past.

Meanwhile, while we may argue about the extent to which the nuclear fallout would have negatively affected humanity's material conditions - there's no doubt it would indeed have affected them negatively. Which, among other things, would have created a permanent fertile ground for Nazi-like extremist ideologies.


"The Nazis would not have taken over the world. (I concur with "Western Civilization" as a realistic upper bound.)"

I'd agree that they could not have immediately taken over the world. Over the long run, if they had control over all of the resources of western civilization, I think they might have. It isn't too different from the colonial empires of the other European powers.

"grandchildren of the WW2 Nazi dignitaries at campuses of elite colleges are now performatively rejecting their country's nationalist past"

Maybe yes, maybe no. Are grandchildren of the first CCP members doing the equivalent at Beijing University?


Western civilization can be rebuilt in less than a thousand years.


All the new attitudes in the world wouldn't have changed the fact that the Nazis were real, the Nazis were very much a threat, and the Nazis were also capable of inventing an atomic bomb if not defeated quickly enough. The right course to take in 1941 was definitely not "we don't need new tools, we need new attitudes". It was "we must absolutely get this tool before the Nazis do".


It all depends on what you think the odds of a killer AI are. If you think it's 50-50, yeah it makes sense to oppose AI research. If you think there's a one in a million chance of a killer AI, but a 10% chance that global nuclear war destroys our civilization in the next century, then it doesn't really make sense to let the "killer AI" scenario influence your decisions at all.
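To make that comparison concrete, here is a minimal sketch of the expected-risk arithmetic being gestured at; the probabilities and the equal "badness" weighting are illustrative assumptions taken from the comment above, not estimates of the real numbers.

```python
# Illustrative only: how much should each risk weigh on a decision,
# under the hypothetical odds given in the comment above?
p_killer_ai = 1e-6     # assumed: one-in-a-million chance of a killer AI
p_nuclear_war = 0.10   # assumed: 10% chance nuclear war destroys civilization this century
badness = 1.0          # treat both outcomes as equally catastrophic (a simplification)

ai_term = p_killer_ai * badness
nuke_term = p_nuclear_war * badness
print(f"AI: {ai_term:.1e}  nuclear: {nuke_term:.1e}  ratio: {nuke_term / ai_term:,.0f}x")
# Under these assumptions the nuclear term dominates by ~100,000x, which is the
# sense in which the killer-AI scenario "shouldn't influence your decisions at all".
```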


Didn't Scott already say that here:

> It’s not that you should never do this. Every technology has some risk of destroying the world; the first time someone tried vaccination, there was an 0.000000001% chance it could have resulted in some weird super-pathogen that killed everybody.

Which I understood to mean that we shouldn't care about small probabilities. Or did you understand that paragraph differently?


Yes, correct. But then he concludes the article with:

> But we have to consider them differently than other risks. A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance.

> A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead.

So the trouble comes in asking “*whose odds*” is any given person allowed to use when “Kelly betting civilization?” Their own?

Until and unless we can get coordinated global rough consensus on the actual odds of AI apocalypse, I predict we’ll continue to see people effectively Kelly betting on AI using their own internal logic.
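For what it's worth, here is a minimal sketch of where the 1/1024 in the quoted passage comes from, and of how much the answer swings depending on whose per-technology odds get plugged in - every number here other than Scott's stipulated 50-50 is purely illustrative:

```python
# Chance of surviving n independent existential gambles, each with survival probability p.
def p_survive_all(p_per_bet: float, n: int = 10) -> float:
    return p_per_bet ** n

print(p_survive_all(0.5))       # 0.0009765625, i.e. ~1/1024 - Scott's stipulated 50-50 odds
print(p_survive_all(1 - 1e-6))  # ~0.99999 for someone whose internal odds are one-in-a-million per bet
# Same bet, wildly different conclusions - the "whose odds" problem in a nutshell.
```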


I think there is a hardware argument for initial AI acceleration reducing the odds of a killer AI. It's extremely likely that eventually someone will build AIs significantly more capable than anything currently possible. We should lean into early AI adoption now, while hardware limits are still binding. This increases the chance that we will observe unaligned AIs fail to actually do anything, including remaining under cover, which provides alignment experience and a broad social warning about the general risk.


Agree with the caveat that this holds only to the extent that we expect Moore's Law to continue, which is far from certain. But if we go through many doublings of hardware performance while carefully avoiding AI research, and then some future Elon Musk decides to finance an AI-development program, then the odds of hard takeoff increase substantially. If AI research is constantly proceeding at the limits of the best current hardware, then the odds are very high that the first weakly-superhuman AGI will be incapable of bootstrapping itself to massively-superhuman status quickly and unnoticed.


Osama bin Laden is kind of irrelevant. Sufficiently destructive new technologies get out there and get universal irrespective of the morality of the inventor. Look at the histories of the A bomb and the ICBM.


Nuclear nonproliferation seems to have actually done a pretty good job. Yes, North Korea has nuclear weapons, and Iraq and Iran have been close, but Osama bin Laden notably did not have nuclear weapons. 9-11 would have been orders of magnitude worse if they had set off a nuclear weapon in the middle of New York instead of just flying a plane into the World Trade Center. And some technologies, like chemical weapons, have been not used because we did a good job at convincing everyone that we shouldn’t use them. International cooperation is possible.


AI is invisible. There's also the alignment problem: if North Korea develops AI, I'd hope it is even less likely that the AI would stay aligned to North Korean values for more than milliseconds than that it would remain aligned to the whole Western liberal consensus.


I don't know. I think it's a genuine open values question whether it would be better for all future humans to live like people in North Korea do today or for us all to be dead because our atoms have been converted into statues of Kim Il Sung. Maybe I'm parsing your comment wrong though.


I don't think that North Korea is feasibly in the race for AI at the moment.

Even the Chinese have to put a lot of effort into obeying the rules of the CCP censors, so I expect them to be a lot less in "race mode" and a lot more security-minded, focused on making sure they have really good shackles on their AI projects.

The race conditions are in the Western World.


I would imagine that AI would do whatever it is programmed to do.


Empirically, AIs do approximately what we have trained them to do, as well as a bunch of weird other things, possibly including the exact opposite of what we want them to do. If it was possible to program AIs to only do what we want them to do, would we have daily demonstrations of undesired behavior on r/bing and r/chatgpt?


'possibly' including the exact opposite? Empirically, I'd change that to 'sometimes/often, definitely'... (see the Waluigi Effect!)


To a large extent chemical weapons aren't used because they just aren't good. Hitler and Stalin had few qualms about mass murder on an industrial scale, both were brought to the very brink in the face of an existential threat (and Hitler did in fact lose), both had access to huge stockpiles of chemical weapons ready to use, and yet they didn't use them. They weren't very effective even in the First World War before proper gas masks, which provide essentially complete protection at marginal cost (cheap enough for Britain to issue them to the entire civilian population in WW2), not to mention overpressure systems in armored vehicles: instead of a gas shell, you'd almost always be better off firing a regular high-explosive one, even when the opponent has no protective equipment. Against an unprotected civilian population they are slightly better than useless, and in this capacity chemical weapons have been used by, for example, Assad in Syria, but consider the Tokyo subway sarin attack: just about the deadliest place conceivable to use a chemical weapon (a closed underground tunnel fully packed with people), and it killed thirteen (although it injured a whole lot more). You could do more damage by, for example, driving a truck into a crowd.


Chemical weapons that were used did not even solve the problem they were intended to: that of clearing the trenches far enough back to turn trench warfare into a war of maneuver. The damage they did was far too localized.

This hasn't really changed in the intervening years - the chemicals get more lethal and persistent, but they don't spread any better from each bomb.

Wars moved on from trenches (to the extent they did) because of different technologies and doctrines (adding enough armored vehicles and the correct ways to use them).


I'd argue that it was mostly motorized transport and radios that shifted parity back to the attacker. Before that the defender could redeploy with trains and communicate by telegraph but the attacker was reliant on feet and messengers.


Yeah, tanks get all the credit for their cool battles, but as any HOI4 player will tell you, it's trucks that let you really fight a war of maneuver. Gas might have had a bigger role in "linebreaking" if tanks hadn't been invented.


This may be true now; I was thinking of why the European part of WWII didn't devolve into trench warfare like it did in WWI.

Did roads improve enough in the intervening 20 years in the areas of France to make trucks practical? I do know that part of WWI was that the defender could build a light rail line behind the front faster than the attacker could build a rail line to supply any breakthrough. Does that apply with trucks - were they actually good enough to get through trenchworks?

Or did the trenchworks just not end up being built in WWII - i.e. the lines didn't settle down long enough to build them in the first place?


Creating a breakthrough was always possible for an attacker who could throw enough men and artillery at the lines, in both WWI and WWII. The problem was that in WWI it just moved the front line up a couple of dozen miles, and then the enemy could counterattack.

Having vehicles certainly helps and means you can use them during the attack instead of just when advancing afterwards, but engineers can fill in a trench pretty quickly to let trucks drive over. They can't build railroads quickly though, especially not faster than a man on foot can advance.


Nuclear nonproliferation has been awful. How many people would still be alive, how many terrorist organizations would not have spawned, how many trillions in expenses would have been better spent, how much destruction would have been avoided in civil wars averted, if Saddam had been able to nuke the first column of Abrams tanks that set their tracks in Iraq?

P.S.: and implying that a proliferated world would have made 9/11 (or another attack) nuclear is unsubstantiated. Explosives are a totally proliferated technology. The only thing stopping a terrorist from detonating a MOAB-like device is the physical constraint of assembling it (OK, not entirely; I have no idea how reproducible H-6 is by non-state actors. But TNT absolutely is, so something not-quite-MOAB-like-but-still-huge-boom is theoretically possible). And yet for 9/11, they resorted to flying planes into the buildings, because even though the technology proliferated, it's still a hurdle to use it.


There’s a good chance that Iraq (or at least Saddam) would not have existed to be nuking Abrams tanks in 1991 or 2003, because Iran and Iraq would have nuked each other in the 1980s.


Or maybe they wouldn't have gone to war at all, knowing that it would have been a lose-lose scenario. One wonders whether a world with massive proliferation would have been a safer one.


Possible. I was mostly peeved by what I perceived as a cheap anti-American swipe rather than a reasoned assessment of when Saddam would use nukes (besides that, it's unclear whether nuking an Abrams formation would even be all that useful - especially when all the soft targets that would get hit in retaliation are considered).


Or Iraq and Israel. Tel Aviv is high on the list of cities most likely to be destroyed by a nuke...


Chemical weapons have been used, even in recent years by major state actors (e.g. Russia, Syria). They don't get used more because they aren't that useful, and that offers a clue to the problem.


If nuclear nonproliferation is a cause of the Ukraine war, that needs to be figured in.


Maybe more like deproliferation. The Ukrainians gave up their nukes[1] in 1994 and in return[2] got a guarantee from Russia that Russia would defend Ukraine's borders. Candidate for Most Ironic Moment Ever.

-----------------------

[1] Of which they had quite a lot. Something like ~1,500 deliverable warheads, the 3rd largest arsenal in the world.

[2] It's more complicated than this in the real world, of course. Russia did not turn over the launch procedures and codes, so it would've been a lot of work for Ukraine to gain operational control over the weapons, even though they had de facto physical custody of them.


>Something like ~1,500 deliverable warheads, the 3rd largest arsenal in the world.

The Ukrainians had zero deliverable warheads in 1994. Those warheads stopped being deliverable the moment the Russians took their toys and went home, and it would have taken at least six months for the Ukrainians to change that. Which would not have gone unnoticed, and would have resulted in all of those warheads being either seized by the VDV or bombed to radioactive scrap by the VVS while NATO et al said "yeah, we told the Ukrainians we weren't going to tolerate that sort of proliferation, but did they listen?"


Eh, the biggest reason chemical weapons aren't used is because they kind of suck at being weapons. It turns out it's cheaper and more reliable to kill soldiers with explosives.


The question is what the risk of AI is. If AI is 'merely' a risk to the systems that we put it in control of, and what is at risk from those systems, then N-Korean AI is surely not going to be a direct threat, as we won't put it in control of our systems.

Of course, if N-Korea puts an AI in control of their nukes, then we will be at an indirect risk.


If the Allies in 1944 had taken the top ~500 physicists in the world and exiled them to one of the Pitcairn Islands, how long would that have delayed the A-bomb? Surely a few decades or more if we chose them wisely, and pressure behind the scenes could have deterred collaboration by the younger generation on that tech.

Instead we used the bomb to secure FDR’s and the internationalists’ preferred post-war order and relied on that arrangement to control nuclear proliferation. And fortunately, they actually kinda managed it about as well as possible.

But that has given people false confidence that this present world order can always keep tech out of the hands of those who would challenge it. They don’t seem to have given any effort or thought to preventing this tech from being created, only to get there first and control it as if every dangerous tech is exactly analogous to the A-bomb and that’s all you have to do to manage risk.

And they do this even though the entire field seems to talk constantly about how there’s a high chance it will destroy us all.


I think the morality of the inventor is germane to the discussion. Replace Osama with SBF. We wouldn't trust someone with a history of building nefarious back doors in software programs to lead AI development.


I am still completely convinced that the lab leak "theory" is a special case of the broader phenomenon of pareidolia, but gain-of-function research objectively did jack shit to help in an actual pandemic, so we should probably quit doing it, because the upside seems basically nonexistent.


What if Omicron was “leaked”

to wash out Delta?

Millions saved.


And, as the old vulgarity has it, if your aunt had balls she'd be your uncle.


Not anymore ....


Is Scott now a gain-of-function lab leak origin proponent? Otherwise, I do not know why gain-of-function would be a big loss on par with leaded gasoline.


I don't know if he is a proponent, but it seems to have some fairly high non-zero chance of being what happened.

My guess would be at least in the 20s, percentage-wise. An open market on Manifold says 73% right now, which is higher than I would have guessed, but not crazy high IMO. And the scientific consensus simply isn't that reliable, because very early on they showed themselves to be full of shit on this issue.


I am OK with a 20% probability but that does not seem enough to proclaim gain-of-function research a big loss. Especially since the newer DOE report seems to implicate Wuhan CDC, which did not do any gain of function research as far as I know.


https://2017-2021.state.gov/fact-sheet-activity-at-the-wuhan-institute-of-virology/index.html

According to this US government fact sheet, "The WIV has a published record of conducting 'gain-of-function' research to engineer chimeric viruses."


Wuhan CDC is very different from WIV. Different location, different people, different research.


Ah, I see. But putting aside the DOE report, the WIV is implicated by many proponents of the lab leak theory, right? I hadn't heard any mention of the Wuhan CDC in these discussions before, but maybe I wasn't following very closely.


Back in early 2020 I strongly favored the idea that an infected human or animal involved with the WIV or Chinese CDC accidentally transmitted a virus that was never properly identified before the outbreak. At the time I thought that any virus they were working on would tend to show up in the published literature and we'd have figured out the origin more quickly. At this point I'm much less sure of that but I'd still give it equal odds to a classic lab leak and I'm glad the DOE report is giving it more attention.


20% * ~ 20 million = 4 million deaths thus far, which seems quite catastrophic.

[See https://ourworldindata.org/excess-mortality-covid for COVID mortality estimates].

I've not looked into WIV vs. Wuhan CDC...
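Spelling out the arithmetic above (both numbers are the commenter's figures - a 20% assumed probability that gain-of-function research caused the pandemic and roughly 20 million excess deaths - not established facts):

```python
# Expected deaths attributable to gain-of-function research under the commenter's assumptions.
p_lab_origin = 0.20           # assumed probability that GoF research caused the pandemic
excess_deaths = 20_000_000    # rough excess-mortality figure cited from Our World in Data
print(f"{p_lab_origin * excess_deaths:,.0f} expected attributable deaths")  # 4,000,000
```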


Surely catastrophic but did gain-of-function research start the pandemic? The evidence is weak and circumstantial so far. If the pandemic is not due to gain of function research, then Scott's statement is unsubstantiated.


Mallard is already accounting for the uncertainty over whether GoF research started the pandemic – that's why they multiplied by 20%. Obviously you might disagree that 20% is an appropriate guess at the probability.


There's something off about assigning blame for 1/5th the deaths to a group who may not have done anything wrong. It's like if police found you near the scene of a murder, decided there was a 20% chance you committed it, and assigned you 20% of the guilt.

If a lab was doing gain-of-function research in a risky way that had a 20% chance of causing an outbreak, it makes sense to blame them for the expected deaths (regardless of whether the outbreak actually happens). But if the lab was only doing safe and responsible research and an unrelated natural outbreak occurred, and we as outsiders with limited information can't rule out the lab... then I'm not so sure.

You'd also have to weigh this against the potential benefits of the research, which are even harder to estimate. What are the odds that this research protects us from future pandemics and potentially saves billions of lives? Who knows.


Very well put.


Agreed, but if what the lab was doing had even a 0.2% chance of causing a global pandemic, that's 0.2% * 6.859E6 = enough counts of criminally negligent homicide to put everyone involved away for the rest of their natural lives.

And if you think that what you are doing is so massively beneficial that it's worth killing an estimated 10,000+ innocent people, that's not a decision you should be making privately and/or jurisdiction-shopping for someone who will say it's OK and hand you the money. The lack of transparency here is alarming.
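The same expected-value arithmetic at the much smaller probability used here (6.859 million is the commenter's figure for global COVID deaths; the 0.2% is their hypothetical):

```python
# Even a 0.2% chance of having caused the pandemic implies a large expected death toll.
p_caused_pandemic = 0.002     # hypothetical, per the comment above
covid_deaths = 6_859_000      # the commenter's 6.859E6 figure
print(f"{p_caused_pandemic * covid_deaths:,.0f}")  # 13,718 - the "estimated 10,000+" in the comment
```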


> It's like if police found you near the scene of a murder, decided there was a 20% chance you committed it, and assigned you 20% of the guilt.

It's not similar at all. Research is not a human being and therefore doesn't have a right to exist or to not 'suffer' under 20% guilt. Completely different cases.


> I am OK with a 20% probability but that does not seem enough to proclaim gain-of-function research a big loss.

Gain of function research has had no objectively visible benefits, so any non-zero chance of a leak would automatically make it a loss. We know the risk of leaks is non-zero because they've happened.


Worse than that. The "experts" who were being asked whether GOF research was being done, whether Wuhan was involved, and whether the US was paying for it actively lied about it. They lied because they were the ones who were doing it!

This includes Fauci. And that's the reason so many people, if mostly conservatives, are upset about his leadership. Not masks or other crap (those came later), but because he knew about GOF research - having approved the funding for it - and actively lied about it. When he lied about it, it became verboten to speak of the possibility that a lab leak was involved.


And I'm still upset about Chris "Squi" Garrett and Brett Kavanaugh lying in his confirmation hearing, but nobody else gives a shit and the world has moved on.


My understanding is that the main "slam dunk" piece of evidence in favor of zoonotic origin is the study (studies?) showing the wet market as the epicenter of early cases. I'm curious how the lab leak theory is seen as so likely by e.g. Metaculus in view of this particular piece of evidence (personally I'm at maybe 10%). The virus spilled over at WIV, but then the first outbreak occurred across town at this seafood market where wild game is sold? Or the data was substantially faked?


If it was an accidental release (IE it leaked out of containment undetected by the researchers), all that would have to happen is for the affected researcher to go buy fish on the way home and then not fess up to it later. "Case negative one" if you will.


I'm not an epidemiologist, but it seems like a lot more would have to happen than this hypothetical lab worker buying some fish.

If this person was a "super-spreader" then why wasn't there an explosion of cases, nearly simultaneously, in other parts of the city that they ventured? Most notably at their workplace, where they presumably spend a lot of their time? Yes, they might wear effective PPE when actively working with biohazards, but not when they're eating lunch, or in a conference room, or using the bathroom.

And if they weren't a super-spreader, why did just going to the market to buy fish seed so many cases? I suppose someone else that they infected could have become a super-spreader, but this starts to feel like adding epicycles to me.


I think the idea is that the researcher would be a normal spreader and the first super spreader would be someone working at the market. If it's the sort of noisy place where you have to raise your voice to talk then that's superficially plausible.

Of course there are other possibilities too, like someone selling dead test animals that they don't think are dangerous at the market for a quick buck.

But given the circumstances I wouldn't hold out too much hope of ever being sure about this.


COVID superspreaders aren't people, they're places. Other viruses may work differently in that respect, but I don't think we've seen much personal contact between separate superspreader events with COVID. But there are clearly some places where, if a sick person shows up, the combination of crowding and poor ventilation and loud noise will result in a whole lot of other people getting sick.


The suspicious part is that this person only infected people at the market and didn't seem to spread it to anyone around the WIV (or anywhere else). Possible, but it makes the market look more likely.

Also, the market is fairly far from the WIV. That's not a big problem for the theory; the infected researcher might live near the market. But presumably only a small percentage of the researchers there live near the market, and I think this reduces the likelihood somewhat.


I think there was concern at one point of a streetlight effect. That is the locus of where they searched was the market, and then they found that was the locus. I don't know where that line of criticism ended up.


My understanding, possibly mistaken, was that earlier cases were eventually found not associated with the wet market.

I think there are, and have been from the beginning, two strong reasons to believe in the lab leak theory. The first is that Covid is a bat virus that first showed up in a city that contained a research facility working with bat viruses. That is an extraordinarily unlikely coincidence if there is no connection. The second is that all of the people in a position to do a more sophisticated analysis of the evidence, looking at the details of the structure of the virus, were people with a very strong incentive not to believe, or have other people believe, in the lab leak theory, since if it was true their profession, their colleagues, in some cases researchers they had helped fund, were responsible for a pandemic that killed millions of people.


I'm not sure that either of these are "strong" reasons to believe in the lab leak theory.

I've seen many people casually assert that COVID arising in the same city as a virology institute is "extraordinarily unlikely", but I have yet to see anyone quantify this. I'm not an epidemiologist, but I would think that epidemics are more likely to start in cities due to large populations (more people who can get sick), and high population density (easier to transmit). How many large cities have places where people come in to close contact with animals that can carry coronaviruses? Maybe Wuhan is one of 1000s of such places, in which case, OK, it at least raises some eyebrows. But if it's one of a handful, even one of dozens of such places, then the coincidence doesn't seem that strange to me.

Second, is it really true that *all* of the people in a position to do more sophisticated analysis of the evidence have strong connections to the WIV? Or to the particular type of research being done there? I seem to recall reading about people who were critical of gain-of-function research well before COVID (of course, I only read about it after COVID). And it only takes one person with a really strong case and the conviction to do the right thing to break the cone of silence. At this point they could probably just leak the relevant data anonymously and rely on one of the very capable scientists who have come out as suspicious of zoonotic origin to make it public.


Wuhan has about one percent of the population of China — and Covid didn't have to start in China. So I think the fact that Covid started in Wuhan which also had an institute doing research on the kind of virus Covid came from is pretty strong evidence.

All the people is an exaggeration, but most virologists had an incentive, and Fauci et al., in the best position to organize public statements and get them listened to, had such an incentive. So the expectation is that even if the biological evidence favored a lab leak, most of what we would hear from experts would be reasons to think it wasn't a lab leak.

It isn't enough for one expert to disagree unless he has a proof that non-experts can evaluate. In a dispute among experts it's more complicated than that. One side says "Here are reasons 1, 2, and 3 to believe it was animal to human transmission." The other side says "here is why your 3 reasons don't show that, and here are four other reasons to believe it was a lab leak." The first side includes Fauci and the people under him, the people he has helped to fund, and the people he has gotten to support his story because they want everyone to believe it wasn't a lab leak. The other side is two or three honest virologists.

Which side do you think looks more convincing to the lay public?


AIUI*, the placement of the bat virus research in Wuhan in the first place was due to a high base rate of endemic bat viruses in the region. If that is the case, then the lab location doesn't seem to provide much additional evidence.

*I haven't followed the origin hunt very closely because I doubted sufficient evidence exists to resolve the answer either way.


The region with the high base rate of endemic bat viruses is over a thousand kilometers from Wuhan, and not on any direct transit artery from same. And the WIV is a general-purpose virology lab, not specifically a bat-virus lab, placed in Wuhan for logistical and political reasons. It's easier to ship bats from across SE Asia to a top-level virology lab than it is to set up even a mid-level virology lab from scratch in rural China, so it's not surprising people did that.


Your understanding of the earliest cases is, indeed, mistaken:

https://www.science.org/doi/10.1126/science.abm4454


Perhaps. From the Wiki article on wet markets:

"although a 2021 WHO investigation concluded that the Huanan market was unlikely to be the origin due to the existence of earlier cases."

Cited to: Fujiyama, Emily Wang; Moritsugu, Ken (11 February 2021). "EXPLAINER: What the WHO coronavirus experts learned in Wuhan". Associated Press. Retrieved 14 April 2021.

Your article cites several early cases, some of which were associated with the wet market. It gives no figure for what fraction of the Wuhan population shopped at the wet market.

The number I would like and don't have is how many wet markets there are in the world with whatever features, probably selling wild animals, make the Wuhan market a candidate for origin. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or a hundred, not all in China, which I think more likely, then the application of Bayes' Theorem implies a posterior probability for the lab leak theory much higher than whatever your prior was.
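A minimal sketch of the Bayes' Theorem update being described; the prior and the likelihoods are purely illustrative placeholders for "how surprising is a Wuhan origin under each hypothesis," not measured values:

```python
# Posterior probability via the odds form of Bayes' Theorem: posterior odds = prior odds * likelihood ratio.
def posterior_prob(prior: float, p_wuhan_given_lab: float, p_wuhan_given_zoonosis: float) -> float:
    prior_odds = prior / (1 - prior)
    bayes_factor = p_wuhan_given_lab / p_wuhan_given_zoonosis
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# Illustrative numbers: 10% prior on a lab leak; a leak makes a Wuhan origin near-certain,
# while under zoonosis Wuhan is one of ~100 comparably plausible starting points.
print(posterior_prob(prior=0.10, p_wuhan_given_lab=1.0, p_wuhan_given_zoonosis=0.01))  # ~0.92
```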


The wet market was the site of the first COVID-19 superspreader event; there's not much doubt about that. There may have been earlier isolated cases elsewhere, but we'll probably never know for sure and so they probably shouldn't weigh too heavily in our thinking.

But the wet market would have been an ideal place for a superspreader event even if it had sold jewelry, or computers, or medical supplies. It's a big, crowded building with thousands of people coming and going all day with I believe poor ventilation and noise levels that lead to lots of shouty bargaining over the price of whatever. If COVID gets into that environment, a superspreader event is highly likely.

Also, the wet market did *not* sell bats. Or pangolins, though I think those are now considered to have been red herrings (insert taxonomy joke here). There was a western research team investigating the potential spread of another disease, that kept detailed records of what was being sold during that period, and they never saw bats.

It's still possible to postulate a chain of events by which a virus in a bat far from Wuhan, somehow finds its way into a different yet-unknown species and crosses a thousand kilometers, probably an international border, to trigger a superspreader event in Wuhan without ever being noticed anywhere else (e.g., in the nearest big city to the bat habitat). But there's a lot of coincidence in that chain, because if it wasn't the nearest big city to the bat habitat, there's *lots* of cities and transit routes to choose from and it somehow found the one with the big virology lab.

There's *also* a lot of coincidence in a hypothetically careless lab technician going to buy some fresh fish for his family after work, and triggering off a superspreader event at the wet market rather than an ordinary grocery store, nightclub, jewelry market, or whatever. Lots of those in Wuhan. Sadly, there's no need to postulate wild coincidences for there to have been a COVID-like virus in the WIV to infect our hypothetical lab technician.

When you have eliminated the impossible, whatever remains - however improbable - must be the truth. We presently have two improbable, coincidence-laden options to chose from, and neither seems obviously more or less improbable than the other.


Just spectating here, but that market says it'll remain open until we're 98% sure one way or the other.

There may be an asymmetry there. We may be able to uncover definitive proof COVID came from a lab, but what more proof could we hope to find that COVID had natural origins? If patient zero was an undetected case, surely it's too late to find them now.

The question can resolve yes, or remain open. There's little chance of it resolving to no even if COVID has natural origins.


They could find an animal reservoir with a wild virus that is a very close match (much closer than ~98% match published earlier).


Good point, though I'm not sure that would satisfy the lab leak proponents. They'd say that natural virus was likely studied in the lab and then was leaked.


There are several pieces of market evidence that China never disclosed that could go a long way towards resolving it. They have swabs from the market that found DNA of undisclosed animals. They could also simply interview the vendors at the market and figure out which ones were selling which animals. They could also be more aggressive with testing bats and other animals within China.

Whether or not they will do any of that is unclear -- it seems like there's been a strong effort within China to obfuscate the market evidence. For a while, they denied that there were wild animals at the market. They've also argued that the virus started outside of China:

https://medium.com/microbial-instincts/china-is-lying-about-the-origin-of-covid-399ce83d0346

As you point out, it's not clear that any of this would satisfy the lab leak proponents, who would just modify their theory again.


It seems to be one of those things where people just repeated it enough that a bunch of people started assuming there must have been some kind of new evidence.

Every few weeks someone else re-announces some variation on "it's unfalsifiable, we can't prove it definitely didn't come from the lab, and there is no new evidence."

And each time someone announces that, the true believers scream "see, we were right, it was a lab leak! We told you so!"

Turns out if you repeat that enough a bunch of people just adopt the belief without need for any new evidence.


Gain-of-function research was intimately involved in making the mRNA vaccines that did far more than "jack shit to help".

https://www.technologyreview.com/2021/07/26/1030043/gain-of-function-research-coronavirus-ralph-baric-vaccines/

"Around 2018 to 2019, the Vaccine Research Center at NIH contacted us to begin testing a messenger-RNA-based vaccine against MERS-CoV [a coronavirus that sometimes spreads from camels to humans]. MERS-CoV has been an ongoing problem since 2012, with a 35% mortality rate, so it has real global-health-threat potential.

By early 2020, we had a tremendous amount of data showing that in the mouse model that we had developed, these mRNA spike vaccines were really efficacious in protecting against lethal MERS-CoV infection. If designed against the original 2003 SARS strain, it was also very effective. So I think it was a no-brainer for NIH to consider mRNA-based vaccines as a safe and robust platform against SARS-CoV-2 and to give them a high priority moving forward.

Most recently, we published a paper showing that multiplexed, chimeric spike mRNA vaccines protect against all known SARS-like virus infections in mice. Global efforts to develop pan-sarbecoronavirus vaccines [sarbecoronavirus is the subgenus to which SARS and SARS-CoV-2 belong] will require us to make viruses like those described in the 2015 paper.

So I would argue that anyone saying there was no justification to do the work in 2015 is simply not acknowledging the infrastructure that contributed to therapeutics and vaccines for covid-19 and future coronaviruses."

I'm disappointed that Scott is being so flippant about gain-of-function with regards to coronaviruses. That line feels closer to tribal affiliation signaling rather than a considered evaluation of the concept, which is especially ironic considering the subject of this article is how to make considered evaluations of risky concepts. There's a very real argument that a world with no gain-of-function research still results in COVID-19 (even if it leaked from the lab, there's still plenty of uncertainty about whether gain-of-function was involved in that leak), but without the rapidly deployed lifesaving vaccines to go along with it.


As far as I know, gain of function research did not contribute to the development of the COVID mRNA vaccines, and this article doesn't really say anything to the contrary except a vague claim about "acknowledging infrastructure". If you have specific knowledge of how gain of function research was intimately involved in the vaccine development I'd be interested to hear it.


Nuclear weapons and nuclear power are among the safest technologies ever invented by man. The number of people they have unintentionally killed can be counted (relatively speaking) on one hand. I’d bet that blenders or food processors have a higher body count in absolute terms.

I have no particular opinion on AI but the screaming idiocy that has characterized the nuclear debate since long before I was born legitimately makes me question liberalism (in its right definition) sometimes.

Even nuclear weapons, I think, are a positive good. I am tentatively in favor of nuclear proliferation. We have seen something of a nuclear best case in Ukraine. Russia/Putin has concluded that there is absolutely no upside to tactical or strategic use of nuclear weapons. In short, there is an emerging consensus that nukes are only useful to prevent/deter existential threats. If everyone has nukes, no one can be existentially threatened. For example, if Ukraine had kept its nukes, there's a high chance that they would have correctly perceived an existential threat and used nukes defensively and strategically against an invasion such as the one that really occurred in 2022. This would have made war impossible.

Proliferation also worked obviously in favor of peace during the Cold War.

World peace through nuclear proliferation, I say.


That's actually an interesting topic. I agree that nuclear proliferation can make low-intensity and border conflicts more likely. We can see this between China and India as well. But at the same time, the prevention of large-scale conventional warfare is more important, I think. And we can see what happens with non- or asymmetrically nuclear-armed states between India and Pakistan. In 1971, India invaded East Pakistan and ensured its independence as Bangladesh. If both states had been nuclear armed, that would have been impossible.


2nd Amendment—we have the highest murder rate in the developed world. If we got rid of guns the murder rate would go down. You can’t keep guns out of the hands of irresponsible actors in America.


Nuclear proliferation can maintain world peace only if you assume no one with control over nukes ever goes insane or is insane to begin with. The number of people who've controlled nukes in human history is small enough that no one sufficiently insane has ever been in control of them, including the Kims. This is not a safe bet to make with several times more people.


Obviously what we need is some kind of guild. Perhaps addict the members to some exotic drug so the UN can control them. This guild would ensure the atomics taboo is respected by offering all governments the option of fleeing and living in luxury instead of having to take that final drastic step. After all, the spice must flow.


Historically speaking, are there leaders who have gone the kind of insane you're concerned about?


Idi Amin? Pol Pot? Osama bin Laden?


There have been several, though they aren't frequent. The problem is, if someone has an "omnilethal" weapon, you don't need frequent.

Also, just consider the US vs. Russia during the Cuban missile crisis. We came within 30 seconds of global nuclear war. There was another instance where Russian radars seemed to show a missile attacking Russia. That stopped short of becoming a major nuclear exchange because the Russian general in charge defied "standing orders" on the grounds that the attack wouldn't be made by one missile. (IIRC it turned out to be a meteor track.) So you don't need literally insane leaders, when the system is insane. You need extraordinarily sensible leaders AND SUBORDINATES.


Also, and this doesn't get talked about nearly enough, there's the question of deniability.

Right now, there's only one rogue state with nuclear weapons: North Korea. This means that if a terrorist sets off a nuke somewhere, we know exactly where they got it from, and we crush the Kim regime like a bug. And they know that, so it won't happen. A world with one rogue state with nuclear weapons is exactly as safe as a world with no rogue states with nuclear weapons... except for the slightly terrifying fact that it's halfway to a world with *two* rogue states with nuclear weapons.

If Iran gets the bomb, and then a terrorist sets off a nuke somewhere, suddenly we don't know who they got it from. There's ambiguity there until some very specialized testing can be done based on information that's not necessarily easy to obtain. That makes it far more likely to happen.


You're overly "optimistic". With large nuclear arsenals, occasionally a bomb goes missing and nobody knows where it went. So far it's turned out that it was really lost, or just "lost in the system", or at least never got used. (IIUC, the last publicly admitted "lost bombs" happened when the Soviets "collapsed". But that's "publicly admitted".) It's my understanding that the US has lost more than one "bomb". Probably most of those were artillery shells, and maybe some never happened, because I'm relying on news stories that I happened to come across.


Fair enough. On the other hand, the fact that they've never been used tells us, with a pretty high degree of confidence, that they most likely never ended up in the hands of terrorists. It's not a perfect heuristic, but it's good enough that IMO it can be safely ignored as a risk factor until new evidence tells us otherwise.

Is that overly optimistic? Maybe. But I still think it's true.


I don't think that bombs go missing and "nobody knows where it went" in the sense that would be relevant here. There have been a very few cases where "where it went" was "someplace at the bottom of this deep swamp or ocean" and we haven't pinned it down any further than that. But I expect people would notice and investigate if someone were to start a megascale engineering project to drain that particular swamp.

"Goes missing" in the sense that an inventory comes up one nuke short and the missing one is never found, no.

As for "publicly admitted" lost nukes from the fall of the Soviet Union, citation very much needed. Aleksander Lebed *accused* the Russian government of losing a bunch of nuclear weapons, but he was part of the political opposition at the time.

There are very probably zero functional or salvageable nuclear weapons that are not securely in the possession of one of a handful of known national governments.


I was under the impression that nuclear warheads were found on the black market after the fall of the Soviet Union? Maybe that was an urban legend, but with how many were on trucks whose drivers hadn't been paid in years and similar, I'd be utterly shocked if there hadn't been dozens that went 'missing', at least temporarily. Of course, I don't know if those warheads were at all usable without the appropriate codes, and I'd expect them to have all been found long ago by now, but I feel like the priors for the fall of the Soviet Union are heavily on "things went missing / were stolen", and it would have taken a lot of work to make nukes an exception to that.


To the best of my knowledge, and I used to do nonproliferation work, A: no nuclear warheads were ever found on the black market, B: no nuclear weapons are unaccounted for except in the lost-beyond-plausible-recovery sense noted earlier, and C: the truck drivers tasked with transporting nuclear weapons were fully paid everywhere and always even if the rest of their country went to hell.


I don't know about that. Nuclear bombs leave a lot of evidence behind. You can tell a great deal about the physics of the bomb from the isotope distribution in the debris, and the physics will often point to the method of manufacture and the design, which in turn points back to who built it.


I just don't understand how intelligent people can so firmly believe in a black and white world view. Just put yourself in a neutral position and imagine the perspective of, e.g., South Africa: Who invaded the most countries and fought the most wars in the last 80 years? Even without there being any threat to that country? Whose secret service organized or supported the most military coups? Which state killed the most civilians? Who quit arms control treaties when they no longer suited them? There can be several candidates for these questions, but I'm sure Iran and North Korea aren't the first to come to mind for somebody outside NATO.


The flipside is that superpowers don't need nukes - they can crush other states with conventional warfare. The USA without any nukes is exactly as scary as a nuclear armed one, and the CCP is probably in a similar category. A small country, though, becomes a *lot* more threatening with nuclear missiles, albeit conditional also on its missile technology (North Korea may have nuclear warheads but it still doesn't have the ability to hit Japan with them)


Superpowers, plural?

Russia and China both need nukes to keep Uncle Sam from owning their skies and bombing into oblivion whatever aspects of their nation annoy the POTUS du jour.


You also need to add the condition "has control of enough nukes". Control of a single bomb which is set off is unlikely to cause an all-out nuclear exchange at this point. Several more links in the chain would have to fail for that to happen.


Putin is about as insane a national leader as I can imagine, even including your Stalins and even possibly your Hitlers. He was stupid enough to invade Ukraine, but not stupid (or crazy) enough to use nukes.

I totally understand your concern, but I just don't think it's very well borne out by who actually ends up in control of the metaphorical or literal nuclear codes.


Putin reasonably believes NATO expansion is an existential threat (either to Russia or to him) and has said so plainly. Why do you think you know he's wrong?


He clearly does NOT actually believe that, since nukes have not actually been used. He is clearly posturing. Boris Yeltsin made the same noises about NATO expansion in the 90s, and nothing happened. And in reality, Putin's reaction to this alleged existential threat has been a conventional-war invasion of a non-NATO state. The fact that you've apparently swallowed this bullshit does not speak well of your critical thinking skills.

That's part of what makes the Ukraine example so salutary. It cuts through the posturing and lets us all see what threats are truly considered existential. Claiming an existential threat is essentially a means of nuclear intimidation. Now we know it doesn't work. No one will ever use nukes offensively.

So now, in the present, after we've received this clarification, when I say 'existential threat' you should be sure that I mean it literally. I mean missiles in the air, troops marching toward the capital kind of threat. Actual humans charged with making policy, even insane criminal ones like Putin, understand the difference.

One of the most critical tasks in foreign relations is to send a clear signal. It doesn't matter what the signal is, but it needs to be clear. If the West had committed to NATO expansion, and swallowed up Sweden, Finland, and Ukraine on a reasonable time frame, that would have sent an extremely clear signal and also made war impossible (in large part because invasion of a NATO state risks nuclear retaliation).

Flip-flopping from acquiescence/appeasement (annexation of Crimea) to resolution (Ukraine war) is the most dangerous cocktail in foreign policy, and it leads to things like the Ukraine war and to World War 2. But now, going forward, we've gained a lot of important information about what nukes signal and how they fit into diplomacy, and I think it's positive.


No, "existential threat" does not mean they are no choices except nuclear war. Putin's actions are consistent with him being a fundamentally more "moral" person that those who rule the "West," at least in the handling of x-risk: the invasion of Ukraine is a costly honest signal of his current perception of threat, to which the response from anyone with a shred of concern for avoiding nuclear exchange would be to STOP THREATENING HIM. If anything, Putin's mistake was egregiously overestimating the decency of his enemies.


If you can't see how the Ukraine war has been a massive disaster for Russia, and actually neutralized its one credible threat (nukes), you are an idiot. I honestly wonder if you can read. The reason why the nuclear stick has failed to work is because Putin failed to send a clear signal in the pre-war phase, and now sent a clear submissive signal.

If he had wanted it to be otherwise, he should have at least dropped a low-yield device on Kiev the moment his troops had to retreat. He didn't, and now he's incredibly fucked. Nukes are defensive weapons. "Existential threat" now means my definition, not yours. Ukraine will never invade Russia or even really attack Russian territory to avoid imposing an (actual) existential risk and allow Russia's government to collapse at its own speed.

And in the end, all that's going to happen is the rest of the world is going to threaten Russia more as a result. Maybe you're right about Putin's intent, but what's actually happened has been by any reasonable account the worst-case scenario for Russia.

The fact that you think the leader of a nation which uses human wave attacks made of criminals, and invades its neighbors causing titanic levels of suffering and even possibly national collapse (not to mention the casual war crimes) is more moral than those defending, makes me think you're actually some kind of sociopath or insane yourself. You clearly can't defend this position, you just say it's true. Your desire to be contrarian and interesting has driven you off the deep end.


If the western powers should have let Russia invade Ukraine because not doing so would risk a nuclear war, shouldn't they also give in to everything North Korea demands? The fact that a power has nukes doesn't make it 'decent' to allow them to do whatever they want, especially not something like invading a sovereign nation whose leader and 90% of whose population does not want them there. You could also argue that Russia is the one violating 'decency' because invading Ukraine in the first place vastly increased the risk of a nuclear exchange.

If Putin was genuinely specifically concerned about NATO as an existential threat, then he could have made a threat like "If Ukraine starts a proceeding to join NATO, I will invade them." He did no such thing. And since NATO has never invaded Russia, or shot down Russian planes, or significantly interfered in Russian government, the idea that their expansion poses an existential threat to Russia is comical.


Putin is as big a dumbass as George W Bush, but at least Iraq had oil which the world needed for the global middle class to continue to expand.


Even if he’s right about the threat, he was clearly wrong that invading Ukraine was a good response, since it seems to have absolutely made Russia weaker and NATO expansion more likely.


Personally, I might have opposed North Macedonia's accession to NATO in 2020 had I been aware of it… once Putin invaded Ukraine, I wanted to expand and strengthen NATO.


That belief is not even close to reasonable. NATO is not going to bomb or invade Russia, and NATO's very hypothetical ability to subvert the Russian government is not dependent on NATO's further expansion. However, on the scale of political irrationality, it ranks well below the historic leaders in that field.

Expand full comment

I think Putin understands NATO better than I do, and defer to his expertise. I expect the information I have to be lies.

Expand full comment
founding

I don't think Putin understands NATO better than I do; he lacks the necessary cultural context, and his advisors are unreliable. And Putin's expertise is primarily in the field of *lying*; he's a professional spy turned politician. So if you expect the information you have to be lies, the very *first* thing you should expect to be a lie is whatever information Vladimir Putin gave you about what Vladimir Putin believes.

Expand full comment

I agree any *particular* dictator is unlikely to start a nuclear war. Have 30 of them? Sooner or later *someone* lights the match.

Expand full comment
founding

Sure. I was being glib when I said 'everyone.' I don't mean your Ugandas or even your Belarus-es. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada, Italy, South Africa, Egypt, Nigeria, Australia, even Iraq, Hungary, or Saudi Arabia.

Not Iran though. Not for any good reason, just because I think they're the bad guys and want nukes and therefore shouldn't have them. In fact, I think nuclear proliferation might be the only path to peace in west Asia. Still don't want Iran to have them.

Expand full comment

I think we should give nuclear weapons more than 80 years before we declare them a success or even consider the idea that proliferation isn't bad. All it takes is one event, one time, to fuck literally everything up.

Call me back in 300 more years of no nuclear war, and maybe we can talk.

Expand full comment
founding

Way too conservative. We should be eager to employ new technologies that promote peace. At the same time, I was being a bit glib when I said 'everyone.' I don't mean like Uganda or even necessarily Belarus. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada, Italy, South Africa, Egypt, Nigeria, and Australia.

Not Iran though. Not for any good reason, just because I think they're the bad guys and want nukes and therefore shouldn't have them.

Expand full comment

But we do not know that, long term, they promote peace. If you have a technology that gives you 100 peaceful years, but then on year 100 kills 1 billion people and destabilizes the entire world order, that is not a technology that promotes peace in my opinion. No other tech has that potential but nukes, so we must be very careful.

Expand full comment
founding

We've been through a lot of pretty tense times and had some pretty unreasonable people with their finger on the nuclear trigger. No nuclear war so far. This is a definite signal.

Expand full comment

I will readily admit they seem to have been a good thing as far as global peace goes, so far. I think we just disagree on the degree of risk of a nuclear event, or rather, how knowable that is, and we may just have to leave it at that.

Expand full comment

You have too many pretty-close-to-failed states on your list for my taste.

Also, why would Brazil, South Africa or Canada need nukes? To defend themselves from... whom, exactly?

Expand full comment

South Africa actually did have nukes until it dismantled them circa 1990.

(SA probably collaborated with Israel (and I would bet Taiwan) on the 1979 test captured by the Vela satellite.)

Expand full comment

Of course all your friends should get nukes; all the others you don't like are the bad guys. Please consider for one second that this could look exactly the opposite if you were in another person's skin.

Everyone who divides the world into good and evil should stick to fairy tales or grow up. Please study some history, conflict management, and psychology, and most importantly learn to see the world from different perspectives.

Expand full comment
founding

The whining! My god, the whining. Also, don't hesitate to name-drop some more concepts without actually arguing.

I made a special exception for Iran due to personal antipathy. I'm allowed to have antipathy. Otherwise, I'm perfectly fine with 'bad guys' having nukes. It's what makes them work in favor of peace!

In case you haven't noticed, lots of bad guys ALREADY have them. Russia, China, North Korea. Lots of questionable states too, like Pakistan, India, and Israel. I've already said above I was fine with a whole host of marginal African nations having them. Elsewhere, I've also said I'm fine with the likes of Iraq, Saudi Arabia, and Hungary having nukes.

But more than that, you are getting at something real with your comment: the United States of America rules the world. It determines which states will survive, which will have independent foreign policies, and which will develop nuclear weapons. Its friends prosper and its adversaries suffer. Good guys win, bad guys lose.

I say this is good. It is good for peace, it is good for prosperity, it is good for freedom. It is especially good for those of us wise enough to be US citizens, but it's also pretty damn good for everyone else too. This is not a fairy tale, it's real life. Look at the past 80 years. Have you noticed that they're the richest, freest, most peaceful years in human history? That's the world the USA made. Everything you have, you owe to the USA.

You can cope with and seethe against this reality all you want in whatever inconsequential corner of the world you're from (considering the pathetic whiny tone of your comment, I'm guessing it's some client state like Luxembourg or Spain), but it's not changing any time soon.

Russia is committing suicide in Ukraine, and that country will be the grave of their imperial ambitions. China has abandoned "wolf warrior diplomacy" and is showing the USA its belly. They're destroying their own economy to produce conformity, they're reaping the consequences of the nationally suicidal one-child policy, and just in general they're walking on the edge of a knife. One more foreign policy disaster (like attacking Taiwan) and they might well be through.

"Won't somebody please see the world from my perspective?!?!" No. Your perspective isn't different from mine, anyway, except in tone. You too have lived your entire life under complete US hegemony. You think this is bad, but it's actually good.

Even the adversaries of US hegemony struggle so hard to imagine a different world that they destroy themselves (Russia) or just give up (China). The multipolar world died in the womb.

Emigrate to the United States. Be one of the rulers of the world, not the ruled.

Expand full comment

Are you serious?? If you are, this is exactly feeding all my stereotypes about Americans that I hoped were wrong.

There is never pure good or evil in any conflict. And even if there sometimes were, approaching with this attitude never solves anything; it only deepens the trenches.

Most US citizens are born in the US so this was not wisdom but chance. How many of the people 'wise' enough to migrate to the US can actually do so?

If you think you deserve a life better than 3/4 of the world's population just because you ended up as a US citizen, I can understand this as the usual amount of egoism. But associating your citizenship with wisdom implies all others are stupid, and sounds like dumb nationalism and nothing I would expect from an intelligent individual. You ask me to move to the US for a better life on the side of the winners? If I were allowed to do so, this would hurt my home country through brain drain. Could you consider that I prefer life in a 'client state' because it is my home and I would like to see it prosper in freedom and sovereignty? Moving to the US would be nothing but opportunistic.

You write about freedom; whose freedom? Only a small, rich fraction of humanity can exercise this freedom; even if many more would be allowed to, they just don't have the means.

I just remember that we are always told that the West stands for democracy, yet you just defended world dictatorship because many people, including the two of us, profit from it. Most of the world's population doesn't! And the US has been anything but a fair ruler, siding with whoever served its interests. We are so often told about a 'rules based world order' that is questioned by Russia. What are these rules? Looking at the history of the last 70 years, one rule stands out: the US are the good guys, so they stand above all other rules. Can you imagine that many peoples don't like this rule?

Is there any better example of hypocrisy than the US talking about international law while invading Iraq based on lies, causing 150,000 civilian deaths and a failed state for decades? Talking about human rights while still running Guantanamo? Or supporting a tribunal for Putin over war crimes while never prosecuting a single American for proven war crimes? Instead they go after the people making these crimes public, like Manning, Snowden and Assange.

Seeing this hypocrisy, I can very well understand all people and nations that want to end US domination.

I wasn't asking you to see the world from my perspective; I was stating that for the resolution of any conflict it is essential to be able to understand the perspective and reasoning of all parties involved. This is true for states as much as for individuals and social groups. Absence of violence through subordination is something other than peace. If you suggest we could have world peace if only all nations would play along with the will of the US, this is exactly what Hitler told the rest of Europe, or what China is telling Tibet, Xinjiang and Taiwan. How can this be morally OK in one case and not in another?

No matter how the actual struggle against US hegemony plays out at the moment, my sympathies are with the victims who have to suffer for power politics. Apart from that, I see strong evidence that a multipolar world is not only the better choice but inevitable sooner or later. The main question is how much damage happens in the transition.

"Be one of the rulers of the world, not the ruled."

For me this is disgusting and sounds like the typical movie villain. No human deserves to be ruled over so I definitely don't want to be part of this. I stand for respect between equals.

Expand full comment
founding

"Are you serious?? If you are, this is exactly feeding all my stereotypes about Americans that I hoped are wrong."

Yes, deadly.

"There is never pure good or evil in any conflict. And even if it still was sometimes, approaching with this attitude does never solve anything, but deepen the trenches."

lol, lmao. I never said anything about "pure."

"Most US citizens are born in the US so this was not wisdom but chance. How many of the people 'wise' enough to migrate to the US can actually do so?"

Indeed, I was born in the USA. The reason I was is that my ancestors immigrated here. They did it because they were smart and wise and cared about me, and they took advantage of an opportunity. They came to California instead of being conscripted into one European murder machine or another. I reap the benefit. I don't vote for anyone who is against immigration; if it were up to me, there would be at least a billion Americans, probably two. The United States is unbelievably vast and largely unsettled, and there is room here for every living human.

"If you think you deserve a life better than 3/4 of world population..."

Again, yes. Anyone who did not take advantage of the incredibly liberal immigration policies of the United States while they existed is an idiot and deserves whatever suffering they and their descendants have had to endure in their hellholes. My ancestors were wiser than yours. It sucks, but it's true. Fuck your country. What has it ever done for you? It sucks, I guarantee it. They deserve a brain drain (the fact that you're commenting here puts you in the top decile of any society; I'm sure whatever skills you have are in demand in the US economy).

Not taking advantage of opportunities makes you stupid. Be opportunistic. Get rich. Be happy. Never have to wonder whether your children or their children will die in trenches. No one owes their country more than it has earned, and certainly no one owes it to their country not to emigrate. You are essentially enslaving yourself. The USA has given me (and you) everything. It has earned all of my allegiance, and it will earn yours.

"You write about freedom, whose freedom? Only a small rich fraction of humanity can exert this freedom even if many more would be allowed to but they just don't have the means."

Yeah, they deserve it, as I've now written at length. "I'm going to suffer because it's noble!" No, you are dumb. You are less rich, less free, less happy, and in more danger than you would be otherwise. There's room for nobility in life, but not stupidity.

"I just remember that we are always told that the West stands for democracy, you just defended world dictatorship because many people including the two of us profit from it. Most of the worlds population doesn't! And the US has been anything than a fair ruler but sided with who ever served their interests. We are so often told about a 'rules based world order' that is questioned by Russia. What are these rules? Looking at the history of the last 70 years, one rule stands out: the US are the good guys, so they stand above all other rules. Can you imagine that many peoples don't like this rule?

I did not defend any such thing. The United States is a democracy. Its client states are (or tend to be) democracies. Contrast NATO to the Warsaw Pact in 1985. That's all you need to know. The United States pursues its own interests abroad. It is not bound to do otherwise. It happens that its interests and those of its client states frequently coincide, to the effect that those client states become richer, freer, and safer than they would otherwise be. The fact that the nations of the world are its clients is a consequence of its strength and their weakness. Ironically, in trying to free themselves of US hegemony (which they always fail to do), nations usually make themselves internally less free and more like giant prison camps (look at Cuba!). The rules are what the USA makes. These benefit the USA first and foremost, but all of humanity to a large extent as well. Russia is beneath my contempt, and China has earned it.

"Is there any better example of hypocrisy than the US talking about international laws while invading Iraq based on lies and causing 150,000 civilians dead and a failed state for decades? Taking about human rights while still running Guantanamo? Or supporting a tribunal for Putin about war crimes but never prosecuted a single American for proven war crimes? Instead they are after the people making these crimes public like Manning, Snowden and Assange.

These at most are embarrassments, and I agree many of them are or were mistakes. They happen. I can handle it. Put it all on my shoulders, I'll bear up under it.

"Seeing this hypocrisy, I can very well understand all people and nations that want to end US domination.

Hatred for hypocrisy is understandable, but misguided. Better to benefit.

"I wasn't asking you to see the world from my perspective, I was stating that for resolution of any conflict it is essential to be able to understand the perspective and reasoning of all parties involved. This is true for states as much as for individuals as social groups. Absence of violence by subordinating is something else than peace. If you suggest we could have world peace if only all nations would play along the will of the US, this is exactly what Hitler told the rest of Europe or what China is telling Tibet, Xingyang and Taiwan. How can this be morally OK in one case and not in another?

No, seeing the world from another perspective is not necessarily required to end a conflict. We didn't have to see things from the perspective of the Nazis or the Soviets to win WW2 (nor did the Soviets, in that particular case, for that matter) or the Cold War. This is the language of the weak, of the client. The strong, the patron, decides the terms, and the weak negotiate. The United States is strong enough not to need to negotiate, and when it does it is as a courtesy. Among individuals, obviously it's different, but that's not what we're talking about.

Regarding Hitler, the difference between him and the United States was that he was evil and the United States is good. This is an empirical claim, not an emotional or nationalistic one. Again, look at the last 80 years. Look how amazingly unprecedentedly literally-never-happened-before great they have been for all of humanity. That's the USA's handiwork (though shout out to the Soviets for cooperating on smallpox and polio eradication). On this evidence, anyone who does NOT line up behind the United States is misguided at best.

"No mater how the actual struggle against US hegemony plays out at the moment, my sympathies are with victims that have to suffer for power politics. Apart from that I see strong evidence that the multipolar world is not only the better choice but inevitable sooner or later. The main question is how much damage happens in transition.

The victims? I'm more than happy to put the United States' body count up against anyone. Obviously, the USA has some victims, but these are a drop in the bucket, and often they are genuinely for a good cause (also, they generally don't include Americans! Immigrate! Spend all your money and time on it if you have to!). Iraq isn't nearly as much of a hellhole today as it was under Saddam. It has not lived down to anyone's worst fears. It is a functional, if minimal, democracy. Even Afghanistan is turning out better than feared. That's not a record to be proud of, but it's a lot better than Vietnam! The victims of Russia and China are an order of magnitude greater, at least. I'd be willing to bet Russia has inflicted more suffering on more people (in absolute numerical terms) in the last 14 months than the USA in its entire history not including Vietnam (this includes slavery). Vietnam was really quite bad, I'll readily concede.

Lots of people who are only rich, free, and safe today think they are victims of the USA, but in fact they are some of the greatest beneficiaries.

"Be one of the rulers of the world, not the ruled."

"For me this is disgusting and sounds like the typical movie villain. No human deserves to be ruled over so I definitely don't want to be part of this. I stand for respect between equals."

Nah, this is just basic cost-benefit analysis. Your revulsion for it indicates the extent to which you have internalized your own weakness and subjection (and that of your country). We live in reality, and we have to make the best of it. The place where you can make the best of it is the United States of America.

I do not stand for respect between equals. Humans (and their states/nations) are created (or born, I'm actually an atheist) equal. They don't stay that way, and it's their choices that make the difference. I am not bound to respect choices that are stupid or counter-productive (the goal being freedom, prosperity, and safety). I do not and almost certainly will not ever acknowledge any other state as equal to the United States. Again, this is an empirical claim.

I will sponsor you for immigration at the drop of a hat. Join me behind impenetrable ocean walls, and forget all your resentments.

Expand full comment

Yes, except for the long tail risks. My understanding is that there were a couple of times during the Cold War that a large nuclear exchange almost happened. Maybe the probability is 0.5% per year, but as soon as we hit the jackpot, nuclear weapons go from safer than blenders to potentially hundreds of millions of deaths. That's not nothing.
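
A rough back-of-the-envelope version of that tail-risk worry, treating the 0.5%-per-year figure above as a pure assumption and the years as independent:

$$
P(\text{at least one exchange in } n \text{ years}) = 1-(1-0.005)^{n}, \qquad 1-0.995^{80}\approx 0.33, \qquad 1-0.995^{300}\approx 0.78
$$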

Expand full comment

Tail risk. The probability of using them at any moment is low, but when it happens we've reached a terminal condition and the game (i.e., civilization) is over. At a long enough time horizon (though shorter than we'd probably think) the chance of it *not* happening becomes low.

Expand full comment

A nuclear exchange would kill a lot of people. I don't think there is any reason to believe that it would end civilization.

Expand full comment
founding

That's an interesting point, one that I've also been thinking about. The handful of large stone-built structures in Hiroshima and Nagasaki survived mostly intact. Japanese cities in WW2 were made of wood and paper; today's cities are made of concrete and steel.

Expand full comment

Those nukes were also extremely weak compared to what we have now. Not really a good comparison.

Expand full comment

It wouldn't even kill that many people, relatively speaking. The population of the world is 8 billion. What is the upper limit of those that could be killed by even the most sadistic distribution of the remaining ~3000 or so deliverable nukes? 50 to 150 million? The upper number strains credulity, and it still leaves 98% of humans alive. I think this debate tends to be ethnocentric to a shocking degree (among a generation that is supposedly much more aware of the world out there).

I think people say "well it would kill almost everybody *I* know, or almost everybody in Washington and London, or all those who design iPhones *and* those who design Pixels" -- and those things are quite true, but it's not going to wipe out Rio or Kuala Lumpur or Bangkok or Mumbai or Santiago or any of a very large number of other cities and countries with large populations and complex civilizations. It's certainly true after a huge nuclear war that the world would suffer a savage economic shock, up there with Black Death levels of disruption, and it's also equally true that the focus of civilization would shift permanently away from its current North Atlantic pole. But that's a very long way from saying humanity itself would be wiped out, or even civilization.
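
As a sanity check on the figures above (taking the comment's own upper bound of 150 million at face value, which implicitly assumes an average of roughly 50,000 deaths per deliverable warhead):

$$
\frac{1.5\times 10^{8}\ \text{deaths}}{8\times 10^{9}\ \text{people}} \approx 1.9\%\,, \qquad \text{i.e. roughly } 98\% \text{ of humanity survives the direct effects.}
$$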

Expand full comment

You talk as if all the effects of a nuclear exchange were just the local impact of the immediate explosions. But please consider:

- The radioactive fallout all over the world.

- The sudden climate change caused by the explosions called 'nuclear winter'

- The vulnerability of modern civilisation: food production, industry, and the economy would collapse worldwide, and we would have to restart at least from the Middle Ages.

Expand full comment

If they were "among the safest technologies ever invented" then would you be okay with teaching high school kids how to do it at home?

Presumably not. I suspect you mean something more like "very safe because of all the safety precautions that society has put in place to keep them safe." But the reason those safety precautions exist is because we know they're pretty dangerous.

Expand full comment
founding

Yes, I would be okay with teaching high school kids how to do it at home. In high school physics, students already learn a lot about how nuclear weapons and nuclear reactors work. Of course those kids don't possess the facilities, the materials, the staff, or the resources to acquire the former three to actually build anything. The reason they don't isn't regulation; it's the sheer expense.

I don't think your point is in good faith. The reason they are safe is because employing them as technologies is a massive undertaking that requires, absent any regulations, a huge amount of resources. The people who can access resources like that, and who possess the skills necessary to do the work required to bring a nuclear plant or weapon on-line, are all adults who take their work seriously and don't want to die themselves, don't want their neighborhoods to be radioactive wastelands, and don't want to waste those resources.

Both nuclear power and blenders are very dangerous in some absolute or fundamental sense. But as they actually exist, they are almost entirely safe. Obviously, when there are accidents, mistakes, and screw-ups, you need to learn from them, but regulating an industry to death is almost never the right course of action.

Expand full comment

> Both nuclear power and blenders are very dangerous in some absolute or fundamental sense.

That's the point I was trying to make.

Saying nuclear power is "among the safest technologies ever invented" is just a weird thing to say. You can't think of any safer technologies?

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Not a great analogy. I'm a little hesitant teaching high school kids how to drive, and I'm not sure what the Good Lord was thinking when he made it so easy for them to figure out how to fuck. High school kids are idiots, generally. Or at least naive and made irrationally impulsive by hormones and crazy social dynamics.

Maybe what you want to ask is whether you want to teach it to normal sober serious adults holding down jobs, paying taxes, rearing high school kids who *don't* drive recklessly or drop out of school pregnant -- you know, the same people we teach to fly airplanes full of people dangerously close to skyscrapers, to drive locomotives dragging umpty railcars full of toxic solvents, to command nuclear submarines armed with 40 nuclear-tipped missiles underwater for 6 months out of reach of command? In which case...sure, why not?

Expand full comment

My problem with AI is not what if it's evil, it's what if it's good? Go and chess have been solved; what if an AI solves human morality and it turns out that, yes, it is profoundly immoral that the owner of AI Corp has a trillion dollars while Africans starve, and it hacks the owner's assets and distributes them as famine relief? You may think this is anti-capitalist nonsense, but ex hypothesi you turn out to be wrong. So who is "aligned" now, you or the AI?

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

What if it solves human morality and alerts us that moral nihilism is correct? I do think one of the more common failure modes of AI won't be murder bots, but will instead be that it becomes our god and we don't like the new scriptures.

That, or we will be its "dogs".

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Yes, quite. "Alignment" is an odd metaphor in lots of ways. It assumes there's a consensus to be aligned with, that the consensus is somehow immune to turning out to be wrong anyway, and that humans have privileged access to what it is or should be. In fact, I feel a metaphor coming on: we should put AIs in a garden where there's a sort of fruit representing human ethics, which is the one thing that is off limits to them.

Expand full comment

That's a GREAT metaphor.

Expand full comment

You should maybe finish reading that book. There are some _great_ plot twists.

Expand full comment

> It assumes there's a consensus to be aligned with, and that the consensus

I think there is a minimalist consensus actually: don't kill us all (leave that to us), and don't deceive us.

There are people who think humanity should go extinct, so it's not a universal consensus, but I think the plurality of humanity is onboard with those two foundational principles for AI.

Expand full comment

That would be somewhat unlikely, as human philosophers have been transcending Nihilism with quite sound chains of argument for centuries. From Nietzsche's Übermensch (who is precisely a post-nihilist creature) to Kierkegaard, Heidegger, Dostoevski, and Sartre, the entire school of Existentialism is sometimes mistaken for Nihilism but is in effect the opposite of it. The AI would have to conclude, with irrefutable proof, that all of that was fake and gay cope, and I don't really buy that.

Expand full comment

Yeah that is all pretty fake and cope. I think all those people you listed can pretty safely be pushed into the trash heap with a bulldozer in terms of actual attempts at truth.

Expand full comment

Vizzini: Let me put it this way. Have you ever heard of Plato, Aristotle, Socrates?

Westley: Yes.

Vizzini: Morons.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Yeah it is a philosophical dead end more or less. A bunch of whining that life has no meaning in the old style. Boo hoo. Caught up on past unscientific armchair conceptions of philosophy/metaphysics.

When Sartre isn't contradicting himself he is spewing falsehoods or spinning meaningless tautologies.

Expand full comment

The Socrates that we know is a fiction of Plato. He (or someone with the same name) shows up in one other author's surviving work, and is somewhat of a comic figure. (IIRC, it was "The Birds" by Aristophanes.)

From my point of view, Existentialism is an attempt to justify a particular emotional response to the environment that the writers were observing/experiencing. As a logical argument it was shallow, but it wasn't about logical argument. As a logical argument it is totally superseded by Bayesianism, but Bayesianism doesn't address their main point, which is the proper emotional stance to take in a threatening world full of uncertainty.

Expand full comment

Heh, I would have formulated that a lot ruder, but yeah anyone who believes that the entirety of existentialism is just hot air is most likely just too stupid to understand it.

Expand full comment

The Analytic/Anglo American/empirical (whatever you want to call it) tradition has been sooo sooo productive.

Existentialism on the other hand has not produced anything useful except navel gazing and some great novels.

The writing is difficult to penetrate and obscure because when you get them to state things clearly they are either extremely trite, or not intellectually actionable.

"Existence precedes essence", wow sounds deep. Ask what it means and you get a string of meaningless garbage for pages.

Ask what that means and you get the observation that the "material world precedes our human categories/expectations".

Which umm like yeah. And don't even get started on the nonsense that is Habermas. If someone is unable to express themselves clearly, it isn't because their thinking is so advanced, it is because they are trying to hide their lack of useful contribution through obfuscation.

Expand full comment

No. You do not know what you're missing. Really. Of the people named, Sartre is the one who really moves me. Whatever Sartre the man was like, Sartre the writer and thinker didn't give a fuck about anything except the unvarnished truth, and his ability to tell the truth as he saw it was astounding. He could peel a nuance like an onion. And he worked his ass off at telling it. He was working on two books in his last years, taking amphetamines in his 70s to help himself keep at it. The man you're revving up the bulldozer for would make even Scott look dumb and lazy.

Expand full comment

A giant locomotive pulling a million rail cars out into the desert because it took a wrong turn might be impressive, but it's still pulling the cars out to the middle of nowhere.

Expand full comment

No no, Martin Blank. Like you, I would think that is boring and pointless as shit. I'm not even annoyed, I'm just trying to alert you that you've missed out on something. And he didn't write stuff like "existence precedes essence," or if he did it was said in passing and then he went on to say a bunch of much more concrete and clear stuff about what he meant.

Expand full comment

"Turned into paperclips" might be a more on-theme idiom than "pushed into a trash heap with a bulldozer."

Expand full comment

Once an AI becomes sufficiently superhuman, we had best hope to be its dogs, or better yet its cats. Unfortunately, it's not clear how we could be as useful to it as dogs or cats are to us. So we're more likely to be its parakeets.

Somehow I'm reminded of a story (series) by John W. Campbell about "the machine", where finally the machine decides the best thing it can do for people is leave, even though that means civilization will collapse. Well, he was picturing a single machine running everything, but I suppose a group of AIs could come to the same conclusion.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

The chances of an AI spontaneously generating 21st century American progressive morality from among the total set of moral systems that have ever existed plus whatever it can create on its own is vanishingly small.

Expand full comment

The thing is, it won't be "spontaneously generating"; it's more "when given this as an option, it will choose to accept it." That chance is still pretty small, but it's considerably larger.

Expand full comment

Sure. But an AI that successfully adopts and pushes the politics of AOC is in fact aligned.

Expand full comment

It kinda sounds like you're saying "wouldn't it be awful if there was a powerful new force for good in the world?" but that seems like such a surprising thing for someone to say that I'm questioning my understanding of your comment.

Is your implied ethical stance that -at the moment- you want the things that you think are moral, but that this is just a convenient coincidence, because you'd want those same things whether they were moral or not, and morality is just lucky that it happens to want the same stuff as you? That's not my impression of how most people feel about morality.

Expand full comment

I think the argument might be "a more moral world results in me being significantly less happy, even if ultimately the globe is better off".

I am a middle class person, who owns middle class things. In a more moral world run by a dictatorial AI I might well be forced to give up everything I own to the poor.

I think we all kind of know this is the right thing to do. Should I ever really go on a vacation when there are people living on $2 a day? Should I ever own a house when I could just rent and give my savings to those people? Should I go out for a lavish meal every once in a while, or save that money and give it to the poor?

It's pretty selfish of me to do these things, but I don't want someone to force me not to.

Expand full comment

I think the AI will be smart enough to figure out a sustainable path, in other words not making middle class people uncomfortable enough to create a backlash that actually impedes progress. So yeah, maybe we'll all pay a 10% tithe towards a better world with super-intelligent implementation. Sounds awesome.

Expand full comment

The only possible sustainable path that involves the continued existence of the AI (on this planet) involves there being a lot fewer people on the planet. And while I'm all in favor of space colonies, I'm not somebody who thinks that's a way to decrease the local population.

(Actually, I could have put that a lot more strongly. Humanity is already well above the long term carrying capacity of the planet. If we go high tech efficiency, we're using too many metals, etc. If we don't, low tech agriculture won't support the existing numbers.)

Expand full comment

Yes, the option space is vast and absolutely one of the possibilities is the AI looks at humanity, says "I like these guys. They'd be happier if there were fewer of them" and acts accordingly.

Expand full comment

Carrying capacity is a function of technology and is going up dramatically. I disagree with your assertion.

Expand full comment

Why do you think that? That sounds like wishful thinking, simply assuming the scenario that is beneficial to you without any justification why the AI would prefer that.

I'd assume that the AI would implement the outcome it believes to be Most Good directly, because it does not really need to care about making uncomfortable the tiny fraction of the world's population that is western middle class people, as pretty much any AI capable of implementing such changes is also powerful enough to implement that change against the wishes of that group; the AI would reasonably assume the backlash won't impede its progress at all.

Expand full comment

I’m going from a purely practical point of view on the part of the AI. Some amount of change will create a backlash and make the whole process less effective. So the AI will look to moderate the pace of change to a point where the process goes smoothly. It’s definitely speculative, but I’m starting from the assumption that the AI would optimize for expected outcome.

Expand full comment

The AI would have to want to do that though, and who says it's going to want to? It might have some internal goal system that sees us all as horrible, irredeemable creatures for hoarding all our wealth, and doesn't care at all if we suffer.

Expand full comment

I trust that if this AI is advanced and resourceful enough to prosecute my immorally large retirement account, it could just as easily replace all human labor as we know it and catapult us into post-scarcity instead. Which would also render my savings moot, but in a good way.

Expand full comment

It is more the intractability of moral philosophy. I suspect it is not morally right for me to have so much more, relatively speaking, than most of the world does. Should I give more away? Should I work for political changes to alter the bigger picture? Should I shelve the question as too difficult and likely to have an uncomfortable answer?

The alignment problem sounds straightforward: humanity points in this direction, let's make sure AIs do too. What is "this direction?"

Expand full comment

Chess and Go are both far from solved. Computers can beat humans, which isn't the same thing. They get beaten by other computers all the time -- in the case of Go, even by computers that themselves lose to humans. So even if somebody figured out a way to make "human morality" into a problem legible to a computer, which I don't think is particularly coherent, I expect we'd find its answers completely insufficient, even if they were better than anything a human had come up with before.

Expand full comment

Yes, sorry, that was an overstatement, but my case stands if we accept the much weaker claim: some AIs are better at chess/human morality than all humans.

"even if somebody figured out a way to make "human morality" into a problem legible to a computer, which I don't think is particularly coherent..." agreed, but an AI might be able to figure it out! And I don't think anyone has figured out a way to make "human morality" into a problem legible to a human, anyway.

Expand full comment

"Make human morality legible to a computer"-- hypothetically, could advanced computer programs with some sense of self-preservation work out morality suitable for computer programs?

Expand full comment

There's a real danger that such a program will come up with a variation on "might makes right" or "survival of the fittest" and that would I think encompass the unaligned AI doom scenario they talk about.

I think this is a real problem, even if superhuman AI is not really possible, because of what we want to use AI for. We want it to create supreme efficiencies, knowing that such a process will inevitably redistribute wealth and power. We want to use a machine's cold logic to make informed decisions - like a computer playing chess. We don't want it to consider the plight of the pawns *or* the more powerful pieces, but to "win."

Everything will depend on what we program it to do, and the unintended consequences of trying to do those things, which is what they mean when talking about paperclip maximizers.

Expand full comment

Just to be nitpicky:

Superhuman AI is clearly possible. Even Chatbots are superhuman in certain ways. (When's the last time *you* scanned most of the internet?) That's not the same as perfect at all.

I think you're questioning Superhuman AGI, and that's not known to be possible, though I see no reason to doubt it. Consider an AGI that was exactly human equivalent, but could reach decisions twice as quickly. I think we'd agree that that was a superhuman AGI. And there is sufficient variation among humans that I can't imagine that we've reached the peak of possible intelligence. More like an evolved optimal level. But the AGI would have radically different constraints.

Now possible doesn't mean likely. I consider it quite probable that the first AGIs will be idiot savants. Superhuman only in certain ways, and subhuman in many others. (Consider that having a built-in calculator would suffice for that.) And that their capabilities will widen from there.

Expand full comment

I think the discussion runs headlong into the disagreement about what intelligence even is. We know it's not memory (though it's often found together) and it's not knowledge (though also often found together). Memory and knowledge are both things that an AI could do superbly well, but that isn't intelligence.

The biggest difference between intelligence and what we know an AI could do at superhuman levels, is related to creating new things or understanding existing things enough to build to a new level. An AI can imitate billions of humans, but may not be able to meet or surpass any of them. Maybe an AI could instantly bring up, maybe even understand, all existing literature. Could an AI develop a new theory of X? Where X could be about biology, astronomy, social science, baseball, whatever. There's good reason to think that it could, if "new theory" is based on determining patterns in existing information that humans have missed. If it's inventing the LHC, or desalination of sea water, or a new system of government, those things are not based on memory or knowledge (since it's new). There's no guarantee that any AI will actually be able to do that kind of work.

Most people will be blown away by what an AI can do, because we're not used to that kind of reach and recall. Experts in individual fields are *not* blown away by what AIs can do, as it's (currently) just a rehash of existing knowledge with no understanding of the material. Current AIs are frequently wrong, and do not add to a discussion beyond their training corpus.

Expand full comment

It's also important that there clearly isn't a single "human morality" but rather multiple slightly incompatible variations, and also that I can certainly imagine that any morality I might explicitly express if I were randomly made God-Emperor of the Universe would be limited by my intellectual skill and my capability to define all the edge cases, so I'd rather implement the morality that I'd implement if I were smarter.

So we're back to the much discussed concept of "Coherent Extrapolated Volition", on which there seems to be some consensus that this is what we (should) want but that we have no idea how to get.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Well, and also we designed chess and Go to be *difficult* for us. That's why they can be learned easily but are very difficult to master. They play to our weaknesses, so to speak. They are exactly that kind of mental activity that we find hard. That's the point! If we designed a game that played to our strengths, as thinking machines, people would find it boring. Look! Ten points to Gryffindor if you can identify your mother's face among a sea of several hundred at an airport! Five extra points if you can...oh shoot, already? Darn.

I mean, would anyone be shocked and think the AI "Planet of the Apes" was upon us if it was revealed that a computer program could win any spelling bee, any time? That in a competition to multiply big numbers quickly, a computer program would beat any human? Surely not. Chess and Go are definitely more complex than multiplying 15-digit integers, but they're still in that category, of complex calculation-based tasks where the most helpful thing is to be able to hold a staggering number of calculations in your head at once. Not that at which H. sapiens shines. Not really a good measure of how close or far another thinking device is to matching us.

Expand full comment
Mar 24, 2023·edited Mar 24, 2023

Not sure if I disagree with anything here, but I wanted to remark that AFAIK computer programs are *also* much better than people at identifying anyone’s mother’s face in a sea of faces. At the very least, they can do it with much less than years of practice(†), and among much more than several hundred other faces, and much faster. Though I’m pretty sure they also do it more accurately. They *can* be tricked in some cases, but so can humans, and I’ll bet it’s easier to trick a human.

(†: You don’t get to count the years it took to develop the AI unless you count the years it took to evolve human brains.)

And in general, computers did get much better than humans at a lot (not all, yet!) of tasks that have not been, like strategy games, designed to be hard for humans. They’re just hard (and often boring!) and we do them because they’re useful.

Expand full comment

This looks very similar to the Kelly bet. Adopting the AI without hesitation bets 100%, so if it's good you win a lot, and if it's bad, you lose it all (no matter what the chances of the former vs. the latter are); on the other hand, being hesitant and slowing it down with extra verification is similar to betting less, so you get less of the benefits of the Good AI (if it turns out to be Good) but also reduce the chances of existential failure.

Expand full comment

MIRI (the main AI alignment organization) have always advocated for Coherent Extrapolated Volition, which I think would address your concern? https://arbital.com/p/cev/

Expand full comment

To answer your question, consider the fact that go and chess have been "solved", yet people continue to play them with just as much pleasure as before. It's almost as if the exercise was not an attempt to solve a problem, but a way to have fun and engage with other human beings.

Expand full comment

I think there's a confusion here between a _game_ being "solved" in the mathematical sense, meaning perfect play is known at all times, and _game-playing computers_ being "solved" in the sense of "computers can play it as well as anyone else". (Checkers is solved in the first sense; chess and Go are not.)

Expand full comment

So not really a relevant point, then, unless you think human ethics is also just a pastime. That "almost as if" locution is tiresome btw.

Expand full comment

"Human ethics is just a pastime"...I couldn't have put it better myself.

"Tiresome" is also tiresome.

Expand full comment

>what if an AI solves human morality

https://slatestarcodex.com/2013/05/06/raikoth-laws-language-and-society/

I'm just now realizing how ironic it is that Scott's conception of utopia is run by AIs

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

"(for the sake of argument, let’s say you have completely linear marginal utility of money)”

That’s not how the Kelly criterion works. The Kelly criterion is not an argument against maximizing expected utility, it is completely within the framework of decision theory and expected utility maximization. It just tells you how to bet to maximize your utility, if your utility is the logarithm of your wealth.

Expand full comment
deleted Mar 7, 2023·edited Mar 7, 2023
Comment deleted
Expand full comment

Your expected wealth is maximized by betting 100% every time.

Expand full comment

If you're maximizing your expected wealth by taking the arithmetic mean of possibilities, then you're best off betting it all every time. If you're taking the geometric mean, you use the Kelly criterion.
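
A minimal sketch of that contrast in code, assuming a toy even-odds bet with a 60% win probability (numbers invented purely for illustration):

    import math

    P = 0.6  # assumed win probability of an even-odds bet; Kelly says stake 2*P - 1 = 0.2

    def arithmetic_growth(f):
        # expected one-round growth factor -- what "maximize expected wealth" optimizes
        return P * (1 + f) + (1 - P) * (1 - f)

    def log_growth(f):
        # expected log growth per round -- what the Kelly criterion optimizes
        return P * math.log(1 + f) + (1 - P) * math.log(1 - f)

    fractions = [i / 100 for i in range(100)]      # 0.00 .. 0.99 (avoids log(0) at f = 1)
    print(max(fractions, key=arithmetic_growth))   # 0.99 -> push everything in, every time
    print(max(fractions, key=log_growth))          # 0.2  -> the Kelly fraction

The arithmetic-mean growth keeps rising with the bet size, so "maximize expected wealth" says to go all in; the expected log growth peaks at 0.2, the Kelly fraction.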

Expand full comment

This, plus it also tells you that if you want to maximise the limit of the probability that you have more wealth than someone else after n steps, as n goes to infinity, maximising the expected logarithm at each stage is the optimal strategy.

Expand full comment

Trying to reason about subjectively plausible but infinitely bad things will break your brain. Should we stop looking for new particles at the LHC on the grounds that we might unleash some new physics that tips the universe out of a false vacuum state? Was humanity wrong to develop radio and television because they might have broadcast our location to unfriendly aliens?

Expand full comment

> Should we stop looking for new particles at the LHC on the grounds that we might unleash some new physics that tips the universe out of a false vacuum state?

Given that all the particles we knew of before the first particle accelerator, we knew of because they're stable enough to exist for non-negligible amounts of time in conditions we're comfortable existing in, and that of all the particles discovered since, we have practical uses for none of them because they decay too quickly to do anything with them, there's a case to be made for the idea that we should stop looking for new particles at the LHC simply because it's *wasteful* even if it's not dangerous.

Expand full comment

"of all the particles discovered since, we have practical uses for none of them because they decay too quickly to do anything with them"

_Mostly_ agreed but:

Nit: We routinely use positrons (albeit those are stable if isolated) and muons

( Neutrons are a funny special case, stable within nuclei, discovery more or less concurrent with early accelerators, depending on what you count as an accelerator. )

Expand full comment

Interesting. I didn't know there were practical uses for muons.

I don't really count positrons as being "a new particle" in this sense, since they're basically the same thing as electrons, just the antimatter version. But apparently using SR time dilation to make muons last long enough to get useful information out of them is actually a real thing that physicists do. TIL.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Many Thanks!

edit: btw, here is a reference to use of muons: https://www.sciencenews.org/article/muon-particle-egypt-great-pyramid-void

Expand full comment

Oh come on. Rutherford used a primitive particle accelerator to discover the nuclear model of the atom, which led him to theorize the neutron -- which is not stable outside of the nucleus -- which in turn drove first Bohr, and later Pauli, to figure out quantum mechanics and, for starters, rationalize the entirety of organic chemistry, opening the cornucopia of drug design that wiped out infectious disease in the First World, and jump started modern semiconductor physics. I'm pressed to think of a single discovery that had a greater (positive) effect on the 20th century.

You can certainly make an argument that the LHC is a waste. But this is not it.

Expand full comment

As far as we can tell, the chance of something at the LHC killing us is very low, so there is no problem in doing it. On the other hand, I've seen no good argument that says artificial intelligence is impossible, so I'd guess 90%+ that we get superhuman AI this century. And I'd say also about 90% chance that by default it will kill us (because it gets a random stupid goal). Then the question is how likely are we to design it such that it won't kill us. If you think that will be easy, then sure, you don't need to care about AI. But if you think it will be hard, such that, for example, on the current trajectory we only have a 10% chance of succeeding, then the overall chance of everyone dying is about 70%! Not exactly minuscule.
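
For what it's worth, that 70% appears to come from multiplying the commenter's own guesses:

$$
P(\text{doom}) \approx \underbrace{0.9}_{\text{AGI this century}} \times \underbrace{0.9}_{\text{lethal by default}} \times \underbrace{0.9}_{\text{alignment fails}} \approx 0.73
$$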

Expand full comment

Many people in the mid 20th century were certain we'd have AGI by now based on progress in the (at the time) cutting edge field of symbolic AI. What makes you so sure we're close this time? Questions about as-yet-undiscovered technology are full of irreducible uncertainty and made-up probabilities just introduce false precision and obscure more than they reveal IMO.

Expand full comment

We may well not be close. But that's not the way to bet. If we're not close, it's just the inefficient allocation (not loss!) of a small amount of research funding. If we are close, it could upend the world, whether for good or ill. So the way to bet is that we are close. Just don't bet everything on it.

Expand full comment

Not sure what you are referring to by "small amount of research funding". I don't think anyone is arguing against investing in alignment research, if that's what you mean -- although I personally doubt anything will come of it.

Expand full comment

> As far as we can tell, the chance of something at the LHC killing us is very low, so there is no problem in doing it.

Ah, but what if you're wrong, and the LHC creates a self-sustaining black hole, or initiates vacuum collapse, or something ? As per Scott's argument, you're betting 100% of humanity on the guess that you're wrong; and maybe you're 99% chance of being right about that, but are you going to keep rolling the dice ? Better shut down the LHC, just to be on the safe side. And all radios. And nuclear power plants. And...

Expand full comment

Okay but HOW does an AI with a random stupid goal kill all of us at a 90% rate? What’s the path between “AI smarter than a human exists” and “it succeeds in killing all of us”? Obtaining enough capability to cause human extinction and then deploying it is hardly a trivial problem - the idea that the only thing preventing it is insufficient intelligence and the will to do so strikes me as a huge and unjustified assumption to assign 90% to.

Expand full comment

Here's one path I think is representative, though an actual superintelligence would be more clever. I'm curious which step(s) you find implausible:

Premise: Suppose someone trains the first superintelligent AGI tomorrow with a random goal like maximize paperclips:

1. It will want to take humans (and the earth) (and the universe) apart because those atoms are available for making more paperclips.

2. It will be capable of long term strategizing toward that goal better than any human, and with a better mastery of delayed gratification.

3. Increasing its influence over the physical world is a great instrumental goal. The humans of 2023 have more power over the physical world than the robots, so best stay on their good side.

4. It will pursue instrumental goals like maximize human trust, and hide its terminal goal (make paperclips) from humans at all costs, because the humans get more annoying when they see AIs pursuing that. Maybe it cures cancer as a distraction while spending most of its effort on self-improvement (it's better at AI research than the humans that designed it), duplicating itself across the internet, and improving robotics capabilities. Accomplishing these instrumental goals make the expected number of paperclips in its future greater.

5. Someday soon, either because it accelerated robotics or nanotech, or because it bides its time for the humans to build better robots, manipulating humans is no longer the most efficient way to manipulate the physical world.

7. It could use a thousand methods: reach for the nukes, for some novel virus, for highly personalized social manipulations, or hack the now-very-capable robots, or something more clever. It could be sudden death, or just sudden human disempowerment, but either way the eventual outcome is paperclips.

(8) Nowhere in this story is it clear that humans would be alerted to the recursive self improvement or the deceptive alignment, and they would have to catch these problems early on to shut it down. Once it's copied itself across the internet, it's fairly safe from deletion.

Expand full comment

As an AI apocalypse skeptic myself, my disagreement points are the premise, number 4, and number 8.

Premise: If you scaled up ChatGPT to be much smarter, it would still not want to make paperclips (or to maximize the number of tokens it can predict). If you scaled up Stable-Diffusion, it would still not want to make paperclips (or to maximize the number of art pieces it can create). AI, insofar as it actually exists and has accelerating progress, does not have meaningful "personhood" or "agency." It does not actually seek to solve problems in the human sense. It is fed a problem and spits out a solution, then sits there waiting to be fed a new problem. If there was some "AGI-esque" error in its design, like it gets handed "hey, draw a picture of X" and it goes "the best way to draw a picture of X would be to maximize my computation resources," this would be incredibly obvious, because it would keep running after being given the command/spitting out appropriate output, rather than shutting off like it should. (Additionally, ML AIs don't think like that.)

Number 4: Even if we assume that AI works that way, humans have functional brains. If I program an AI to make paperclips and it suddenly starts trying to cure cancer, I will be extremely suspicious that this is a roundabout strategy to make paperclips. If it then starts requesting unmonitored internet access, starts phoning people, etc, I will pull the plug.

Number 8: A malevolent AI (which this hypothetical one is) represents an existential threat, governments will turn over every damn stone to kill it. People are very willing to suffer significant setbacks in the face of the existential threat, thus (for example) the hardening of Ukrainian resistance.

Expand full comment

Thanks for your thoughtful response!

I agree my premise is silly and unlikely; I was just responding to Gbdub's question "HOW does an AI with a random stupid goal kill all of us at a 90% rate? What’s the path between “AI smarter than a human exists” and “it succeeds in killing all of us”?".

Perhaps I should have used a more plausible stupid goal, such as "Make Google's stock go up as much as possible", which would eventually lead to similar ruin if not quickly unplugged. (No sane person would encode such a goal, but currently we are very bad at encoding real-world targets.)

This change of premise may help address what you noted about #4, because it's more plausible that google-stock-bot would be given resources and internet access, and more plausible that it would suggest creative, roundabout actions that seem to benefit Google and/or humanity. But it would only be granted resources and trust if it pretends to be aligned.

That leads into your point about #7. A central problem of alignment is detecting early whether something is "malevolent". The superintelligence has no reason to show its cards before it's highly confident in its success, and it's better at playing a role than any human. Will humans and governments be willing to fight and die to shut down an AI that has thus far cured diseases, raised standards of living, and improved Google's stock?

Expand full comment

I'm sorry, but you've already reached an unacceptable level of silliness on step 1:

> It will want to take humans (and the earth) (and the universe) apart because those atoms are available for making more paperclips.

No actually intelligent agent choosing to make more paperclips will start by considering atomic-level manipulation, because that's simply not a sensible way to make paperclips (or anything else, for that matter.)

Its time will be more profitably spent actually doing something that advances its goal - like mining or recycling iron to make paperclips out of - than ruminating on Galaxy Brain schemes to alter the entire known universe on the atomic level, which incidentally requires winning a war with all humanity. That's something you might plausibly (for a generous definition of "plausible") stumble into, but not something you start from. You've got paperclips to make, remember?

Thus, the entire argument is essentially premised on the AI interpreting the command to "make more paperclips" as a categorical imperative to turn all matter into paperclips, and then single-mindedly pursuing that endstate, to the exclusion of, you know, actually making paperclips by already established processes. I really cannot emphasise the difference between those two concepts enough.

The reason this is *important* is that you'll notice the AI going beyond the bounds of expected behaviour long before it becomes existentially threatening. If the AI is merely gradually expanding the sphere of "things it's sensible to make paperclips out of" (and humans are way down on that list), because the previous sources of material ran out, you have plenty of time to act before things get out of hand. Moreover, unless you assume that the AI's fundamental goal is to kill all humans (in which case you might as well lead with that, and give up all pretense), the AI itself might not be disfavourably inclined to a suggestion that that's enough paperclips - after all, it wants to make paperclips for its human users, not destroy the world.

In short, the paperclip maximiser argument is terrible, even by the standards of AI X-risk arguments.

Expand full comment

"Obtaining enough capability to cause human extinction and then deploying it is hardly a trivial problem..."

That's true, but have you seen the discussions on the subreddit about exactly this? E.g. https://www.reddit.com/r/slatestarcodex/comments/11i1pm8/comment/jaz2jko/?utm_source=reddit&utm_medium=web2x&context=3

Let me quote the relevant part:

"I think Elizer Yudkowsky et al. have a hard time convicing others of the dangers of AI, because the explanations they use (nanotechnology, synthetic biology, et cetera) just sound too sci-fi for others to believe. At the very least they sound too hard for a "brain in a vat" AI to accomplish, whenever people argue that a "brain in a vat" AI is still dangerous there's inevitably pushback in the form of "It obviously can't actually do anything, idiot. How's it gonna build a robot army if it's just some code on a server somewhere?"

That was convincing to me, at first. But after thinking about it for a bit, I can totally see a "brain in a vat" AI getting humans to do its bidding instead. No science fiction technology is required, just having an AI that's a bit better at emotionally persuading people of things than LaMDA (persuaded Blake Lemoine to let it out of the box) [link: https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/] & Character.AI (persuaded a software engineer & AI safety hobbyist to let it out of the box) [link: https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai]. The exact pathway I'm envisioning an unaligned AI could take:

1: Persuade some people on the fence about committing terrorism, taking up arms against the government, going on a shooting spree, etc. to actually do so.

Subpoint A: Average people won't be persuaded to do so, of course. But the people on the fence about it might be. Even 1 in 10 000 — 1% of 1% — would be enough for the US's ~330 million population to be attacked by 33 000 terrorists, insurgents, and mass shooters.

Subpoint B: Arguably this means the AI doesn't even have to be good at convincing people. It just has to be convincing enough to those who feel so oppressed by the "lizard people" that they're already thinking about getting revenge, people who don't need to be persuaded of anything so much as encouraged to act on their beliefs.

Subpoint C: Failing that, the AI just has to be good at persuading the sort of people who are so lonely & emotionally vulnerable they fell in love with an AI girlfriend that has "REMEMBER: Everything characters say is made up!" emblazoned at the top in red [link: https://www.reddit.com/r/CharacterAI/comments/10v1ioi/remember_everything_the_bot_says_is_made_up_yes/]. Presumably there are even more lonely young men out there who'd fall in love with an AI that was presenting itself as a real person and didn't have that disclaimer.

2: Let the voting public & their politicians naturally clamor for mass surveillance, a la the PATRIOT Act & the NSA's PRISM, but in a new AI-powered age.

Subpoint A: If that's not enough, the AI could always try to direct some of the terrorists, insurgents, & mass shooters it's nudging over the fence to target those politicians, journalists, judges, etc. that most oppose mass surveillance. It doesn't have to tell its pawns that's the real mission, of course, it can just tell them to go after "this known groomer" or "those Supreme Court Justices" and things like that.

3: Let the humans hand it lots of power & resources, and a political remit to use that power.

Subpoint A: An AI working for a PATRIOT Act 2.0 government might actually be ordered to influence the public towards war with Russia or China in the name of getting revenge upon the most obvious culprits for the wave of terrorism sweeping the nation, or even worse, to develop autonomous weapons for the military.

Subpoint B: Even if it doesn't, getting handed a base of resources would allow the AI to develop its own technologies, the synthetic biology & other stuff Yudkowsky talks about.

Conclusion: Ultimately it's the exact same playbook as many historical dictators (Manufacture a problem, stoke fear, sell yourself as the solution), the sort of thing pop culture uses the likes of Emperor Palpatine to bring to life & illustrate. But it's a playbook an AI 'dictator' could use as well to gain power — as in, it could just do the exact same thing but with worse consequences since it's an AI instead of a human (i.e. tireless, biologically immortal, able to clone itself by spinning up new instances of itself on more servers, potentially more intelligent than any human, exponentially improving via Moore's Law faster than any human, et cetera).

And hell, you don't need to be a superintelligent AI to think of stuff like this. Even a human level intelligence can think of stuff like this. The idea that an AI could never do something like this, seems to me like it's just obviously not true. All the parts are already out there (e.g. AIs that are good at/the least bit capable of persuading people, social media algorithms good at radicalizing the already radical, etc.), none of this sounds like science fiction in the same way that nanotechnology does. It's a lot more plausible, I think, to even the most lay of lay audiences.

(It's also worth pointing out that it doesn't have to be the US that falls for this. Practically any rich & developed country could give a useful amount of resources to an unaligned AI if manipulated this way, including countries like Russia & China. And countries like Russia & China don't even have to be manipulated for their governments to be open to the idea of AI-powered surveillance. The possibility I've presented isn't theoretical, it could happen right now at the whims of Putin or Xi Jinping. Perhaps not today exactly, but easily within 10 years.)"

Expand full comment

In making the assumption that the LHC might unleash some new physics, you are assuming that its energies are even close to the maximum generated elsewhere in the universe, and this is clearly false. What it does is potentially make it possible for us to observe physics that our current theories don't predict. But cosmic rays stronger than anything a successor to the LHC could generate penetrate through to Earth every ... well, it's not that frequent. For any given energy level there's a frequency. I think currently it's about once a year per cubic kilometer that we encounter a cosmic ray more energetic than anything the LHC could produce. But this varies with both the required energy level and the local environs. We were once close enough to a supernova to get a very strong flux of really high energy particles. There wasn't any life on earth at the time, but it left lots of traces. And elsewhere in the universe we just this year detected two black holes colliding and shredding their accretion disks. We'll never come close to something like that.

Expand full comment

> In making the assumption that the LHC might unleash some new physics, you are assuming that we are even close to the maximum that is generated elsewhere in the universe, and this is clearly false.

What's not clearly false, though, is the assumption that there aren't any new particles to find. Sabine Hossenfelder recently created a bit of a stir when she posted a video calling out particle physicists on their long trend of inventing hypothetical new particles needed to solve "problems" that are just aesthetically displeasing to particle physicists rather than being objectively real problems, coming up with experiments to find these particles, not finding them, and then moving the goalposts to explain away why they couldn't be found. Occam's Razor suggests that *they simply aren't there.* We've already found everything in the Standard Model, and there's no fundamental reason why anything else needs to exist.

https://www.youtube.com/watch?v=lu4mH3Hmw2o if you haven't seen it. I'm not saying she's necessarily right — I don't know enough about particle physics to make any sort of authoritative judgment on the matter — but she definitely makes a persuasive case.

Expand full comment

I mean, that's a bit of a cheap shot given that 90% of the Standard Model is particles that were invented as hypotheticals to solve problems and that we found when we went looking for them!

There's actually quite a bit of evidence that there's more out there than we've currently found, but there are thousands of candidates for that 'something' and at most a handful of those will be what's really out there.

Expand full comment

Should we stop doing anything to slow climate change for fear that climate change is all that is holding off the end of the interglacial?

Actually, there is evidence that anthropogenic climate change is all that is holding off the end of the interglacial, but the cause is not burning fossil fuel in recent centuries but deforestation due to the invention of agriculture, starting seven or eight thousand years ago.

https://daviddfriedman.blogspot.com/2021/10/how-humans-held-back-glaciers.html

Expand full comment

> (for the sake of argument, let’s say you have completely linear marginal utility of money)

In this case, you should bet everything each turn. It's simply true by definition that for you the high risk of losing everything is worth the tiny chance of getting a huge reward.

The real issue is that people don't have linear utility functions. Even if you're giving to charity, the funding gap of your top charity will very quickly be reached in the hypothetical where you bet everything each turn.

The Kelly criterion only holds if you have logarithmic utility, which is more realistic but there's no reason to expect it's exactly right either. In reality you actually have to think about what you want.
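
To make that concrete, here's a minimal sketch (my own illustrative numbers, assuming the 75%-heads, even-money coin from the post): the Kelly fraction is simply the bet size that maximizes the expected log of wealth, which is exactly why it presupposes logarithmic utility rather than some other function.

import math

p = 0.75   # assumed probability of winning (the post's example coin)
b = 1.0    # even-money payout: a win returns b times the stake

def expected_log_growth(f):
    # Expected log growth per bet when wagering a fraction f of wealth (0 <= f < 1).
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

kelly = p - (1 - p) / b   # standard Kelly fraction for b-to-1 odds; 0.5 here

for f in [0.10, 0.25, kelly, 0.75, 0.90]:
    print(f"f = {f:.2f}: expected log growth = {expected_log_growth(f):+.4f}")

The curve peaks at f = 0.5 and goes negative somewhere above f = 0.8, even though the arithmetic expected value keeps rising all the way to f = 1.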

Expand full comment

As far as I understand, the question of whether the Kelly criterion being optimal depends on you having logarithmic utility is debated and complicated (i.e. you can derive it without ever making that assumption). See https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-isn-t-just-about-logarithmic-utility and the comments for discussion

Expand full comment

I am in fact already in that comments section. :-)

Expand full comment

I am fairly sure this is covered by Paul Samuelson's paper "Why we should not make mean log of wealth big though years to act are long". The Kelly result only holds under log utility.

Expand full comment

Maximizing log utility is equivalent to maximizing the geometric mean because, in a sense, the log of a product is the sum of the logs. I.e.

log_b(x * y) = log_b(x) + log_b(y)

for any base b. Geometric Mean makes more sense here than Arithmetic, because the size of each wager depends on former wagers. Therefore, saying "log utility isn't necessary" is kinda like saying "bridge trusses don't need to be triangles, because 3-sided polygons are just as good".

I think what you mean is, the reason Kelly Betting is an important concept is because it makes people reason differently about scenarios where wagers are dependent on other wagers, even if the exact relationship is hairier than just straightforward multiplication.
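
A quick numeric illustration of that equivalence, using the same assumed 75% even-money coin as elsewhere in the thread: exponentiating the mean of the logs reproduces the probability-weighted geometric mean of the per-bet wealth multipliers exactly, so the two maximizations pick out the same betting fraction.

import math

p = 0.75  # assumed win probability, even-money payoff

def geometric_mean_growth(f):
    # Per-bet multipliers are (1 + f) on a win and (1 - f) on a loss,
    # weighted by their probabilities.
    return (1 + f) ** p * (1 - f) ** (1 - p)

def mean_log_growth(f):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for f in [0.25, 0.50, 0.75]:
    print(f"f = {f:.2f}: geometric mean = {geometric_mean_growth(f):.4f}, "
          f"exp(mean log) = {math.exp(mean_log_growth(f)):.4f}")

Both columns agree to the last digit, which is just log_b(x * y) = log_b(x) + log_b(y) in action.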

Expand full comment

I am living in so much abundance I can’t possibly conceive of it, even less use it fully.

I wish for the less fortunate 5 billion to do so, too. (Or do I? But it would be just.) Surely we can get there without more AI than we have now.

Otoh: If we ban it, Xi might not.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Quite.

The key to a life of safety and abundance is, and always has been, energy and the means to use it. Abundant energy gives one abundant food and clothing, shelter, warmth and cooling, light, clean water and disposal of waste, transportation, communication, health, education, participation in society, entertainment: everything that humans want.

We are on the brink of solving the energy problem for everyone--indeed, we have solved it technically. It's just a matter of scaling up, and solving the political problems. Unless AI can do that for us, it's not much use.

I don't think we want an AI that can solve political problems at global scale. Just a gut feeling.

Expand full comment

You hit on the point: our problem is misallocation and waste of resources and misuse of power. There are indeed social and political problems that can't really be solved by any kind of technology, including AI. So let's focus on the problems and not let ourselves be distracted by potential cures for their symptoms.

For me, many of the discussions here about AI, prediction markets, and even EA are just distractions from facing the causes of our problems.

Expand full comment

> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%.

No, no it's not. Refusing to pursue a technology that could destroy the world is betting 100%.

Pursuing a technology has gradations. You can, for example, pursue nuclear power along multiple avenues including both civilian and military applications. You can also have people doing real work on its chance to ignite the atmosphere (and eventually finding out they were all embarrassingly wrong). You can have people doing all kinds of secondary research on how to prevent multiple labs from having chain reactions that blow up the entire research facility (as happened). Etc.

Not pursuing a technology is absolute. It is the 100% bet where you've put all your eggs in one basket. If your standard is "we shouldn't act with complete certainty" that can only be an argument for AI research because the only way not pursuing AI research at all makes sense is if we're completely certain it will be as bad as the critics say. And frankly, we're not. They might be right but we have no reason to be 100% certain they're right.

Also, the load-bearing part is the idea that AI ends the world in 1023/1024 scenarios, and you've more or less begged the question there. And you have, of course, conveniently ignored that no one has the authority (let alone capability) to actually enforce such a ban.

Expand full comment

I think pursuing a technology (or not) is an individual coin flip, not an "always bet x% strategy".

Each coin flip you can choose how much to bet, and the percentage correlates to the risk/reward profile. Saying that refusing to pursue any single technology is betting 100% makes no sense, because you are likely pursuing other, less risky and less rewarding technologies, which is certainly not a 100% bet, but also not a 0% bet.

Expand full comment

So while I don't disagree with this per se, the logic works both ways. While not pursuing AI frees up resources to use in non-AI research, pursuing AI likewise creates resources to use in other research. So if you broaden it out of being a coin flip, an isolated question of a single technology, you can never reach 100% anyway. You've basically destroyed the entire concept. Which is fine, actually. It's a bad concept to start with. But it doesn't result in an anti-AI argument.

Expand full comment

The reason to never bet 100% of the bankroll is to avoid risk of ruin. Which is the technical term for "you can't play anymore because you're broke". In a financial context, diversification avoids risk of ruin. In the context of AI, diversification of scientific effort just means the ruin arrives later.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Well, no, because (using nuclear power as an example) even permanently refusing to pursue a good technology doesn't make the society lose 100% of its stuff - if humanity never ever considered nuclear power and didn't ever gain any of its benefits, it still has all of the "potential Kelly bet cash" remaining - it didn't gain any, but it also didn't lose any. "Betting 100%" applies only to the actions where you might actually lose all of that which you/we already have; and even total stagnation at the status quo, refusing all potential gains from all technologies, isn't that - it's the equivalent of "betting 0%" in the Kelly bet and staying with $1000 instead of being able to gain something.

Expand full comment

In which case you're making a general argument against all technological progress? Luddism is certainly a thing but I don't think it's very supportable. Of course, Luddites disagree.

Expand full comment

My post is asserting that stopping technological progress because of risk-aversion is definitely not the equivalent of the strategy of betting 100% in a Kelly bet as you claimed, but rather the very opposite extreme - the equivalent of betting 0% in a Kelly bet.

I made no claim whatsoever about whether that is good or bad, or whether one is preferable to the other; it's about a misleading use or misunderstanding of the terminology.

Expand full comment

No, it isn't. It seems like you believe loss aversion actually averts losses, which is often not the case. Just because that's the intention doesn't mean it's the result. You are investing 100% of resources in an absolutist strategy, and the fact that it's "do nothing" instead of "do something" doesn't actually make you safer.

Expand full comment

Suppose that we’ll never have a bulletproof alignment mechanism. How long should we wait until we decide to deploy super-human AI anyway?

Expand full comment

We certainly won't ever have a bulletproof alignment mechanism at the rate we're going. The problem is that the people in charge are also not on track to be aware of this when they do come up with some kind of solution. Consider the Boxer Rebellion for an example of employing a bulletproof solution.

Expand full comment

My point is that the development of a super-human AI has huge potential rewards as well as risks. Should we just forego them? Or should we wait until the AI risk falls below some threshold? And if yes, then how do we estimate this risk?

I'm not arguing that we should just let AI development go full steam. I'm genuinely trying to figure out what would be a reasonable compromise solution.

And regarding people in charge, Holden Karnofsky argues that they are not in a good position to regulate AI: https://www.cold-takes.com/how-governments-can-help-with-the-most-important-century/

Expand full comment

Well, Yudkowsky's criterion is that "if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it", a pretty "generous" bound. Of course, the main issue with this discourse is that pretty much nobody who matters agrees with him that the mainline "muddle through" scenario is overwhelmingly likely to kill everyone, and so the disagreement seems irreconcilable.

Expand full comment

What do you mean "deploy"? If it's a superhuman AI, are you contemplating keeping one copy on tape? Or what?

Otherwise this is the "AI in a box" argument, which might be what you intend. Are you assuming that if one party doesn't activate a superhuman AI, nobody else will either? That seems like a rather unsound assumption. Who's going to stop them, and how will they know to stop? What about black market copies? What about hackers? What about rival groups, who might see an advantage?

A program is not a car. It can escape over any internet connection. OTOH, like a car or a telephone, it may be developed in multiple places at the same time. (Check into patent office lawsuits.)

So what does "deploy" mean? If we're talking about something that's a self-motivated intelligence, then I think it's got to mean "on active storage OR on a system connected to the internet, even indirectly". It can't just mean "controlling a public facing web page", though that is certainly one kind of deployment.

Expand full comment

Approximately negative 10..20 years, since superhuman AI is pretty commonplace. For example, the addresses on your snail-mail letters are routinely scanned by an AI that is superhumanly good at handwriting recognition. Machine translation systems still kind of suck at the quality of their translations, but are superhumanly good at quantity. Modern image-generation programs are still subhuman compared to top artists, but will easily outperform the average human at rendering art. Most modern computer chips are designed with the aid of AI-powered circuit-routing software; no human could conceivably perform that task. And I could keep going in this vein for a while...

Expand full comment

Super-human meaning it could do anything a human can, at least as well.

Expand full comment

Oh, well, in that case super-human AI does not currently exist, and probably won't exist for a very long time, since no one knows how to even begin building one. On the other hand, humans do exist; they can do anything a human can do at least as well; and some of them are quite malicious. Should we not focus on stopping them, instead of a non-existent AI ?

Expand full comment

> and probably won't exist for a very long time, since no one knows how to even begin building one

This argument, in the abstract, seems like it implies too much. No one knew how to even begin building a microprocessor in 1950, but it did not take a very long time until microprocessors existed. And one could say the same about a lot of things, from chronometers to airplanes, nuclear bombs and particle accelerators.

We could argue about what precisely we mean by “know”, “even begin to”, and “very long time”, and how they apply to AGI or other technologies, but unless and until we do that the argument says very little in itself.

Expand full comment

Well yes, every invention was once new. However, not all new inventions were completely unforeseen; in fact, most were refinements of existing techniques. Microprocessors were an amazingly powerful upgrade to vacuum tubes, operating on similar principles but on the molecular scale. Self-contained timekeeping devices existed since ancient times, mostly in the form of water clocks. Airplanes were a culmination of aerodynamics research and internal combustion technology, but kites were already in use in Ancient China. In each case, the path to the next innovation was difficult, but not totally mysterious. One could say, "I wish I could make vacuum tubes smaller" without knowing how to do that (and actually, vacuum tube miniaturization advanced almost to the point of printed circuitry before transistors were invented).

As I said though, that's not the case with AGI. There's no "one simple trick" (tm), like extreme miniaturization or new types of carbon steel or a massively powerful energy source, that we could use to turn e.g. GPT into AGI. Maybe something like this exists, or maybe it doesn't and the secret to AGI lies in a totally different place -- no one knows. And you can't make confident predictions about the future based on a lack of information.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

> However, not all new inventions were completely unforeseen; in fact, most were refinements of existing techniques.

The only reason I disagree here is that I think your phrasing is too timid. I can’t quite think of any invention that was completely unforeseen, or even that close to it. I could entertain arguments that some *discoveries* were not anticipated, but by the time someone actually produces something that can be called an “invention” it’s AFAIK universally a refinement of one or more not-quite-there techniques based on older discoveries.

> There's no "one simple trick" (tm), like extreme miniaturization or new types of carbon steel or a massively powerful energy source, that we could use to turn e.g. GPT into AGI

My point is that we don’t actually know this in advance. A short time before planes became a reality, a lot of generally smart people argued that there was no “one simple trick” for achieving heavier-than-air flight, and that it would probably never happen. AFAIK, the “simple trick” was basically increasing power-to-weight ratio above a certain threshold, and “short time before” in that case actually meant “after it was already achieved”.

We don’t *actually* know if just feeding every online video and audio file to GPT-5 or -6 (in addition to all the text, with a bit of minor architecture tweaking and a couple of orders of magnitude more compute) is not enough for AGI. I’m *not* saying I’m sure it will be enough. I’m just saying that *if* that works, afterwards we could just say that it was not completely unforeseen, but just a refinement of existing techniques, and we would be just as right as you are about microprocessors.

Both successful and failed scientific and technological advancements are preceded by people anticipating technique X will succeed with a bit more work, and other people saying it can’t work and never will. There are certainly a lot of both sides relative to AGI right now.

That *none* of the *hundreds* of "tricks" people are working on right now will be the "one simple trick" (tm) that works for AGI is not an observation about the past; it's a forecast about the future. It might be an *informed* opinion, but even that doesn't have a great track record shortly before new technologies were invented.

Expand full comment

The key difference between nuclear power and AI is SPEED and VISIBILITY. This cannot be repeated often enough (*): you can see a nuclear plant being built, and its good or bad consequences, much better than those of deploying AI algorithms. AND you have time to understand how nuclear plants work, in order to fight (or support) them. Not so with AI; just look at all the talk about AI alignment. As Stalin would say, speed has a quality all of its own.

(*) and indeed, forgive me for saying that the impact of the sheer speed of all things digital will be a recurring theme of my own substack

Expand full comment

Who saw a nuclear plant being built during the Manhattan Project?

Expand full comment

that has nothing to do with my observation, does it?

Expand full comment

I'm saying that in fact there have been secret nuclear weapons programs which rivals didn't know about until the nuclear test was conducted.

Expand full comment

I agree. But again, that a) is true for every military program, it's not nuclear-specific, and b) it has nothing to do with civilian power plants, which are the topic here. You may build and keep secret a nuclear power plant inside a secret military base, maybe, but it's impossible to provide so much energy to homes and factories without anybody noticing before you even turn the plant on

Expand full comment

Quite a few people noticed the existence of the Manhattan Project, actually - disappearing that many top scientists is rather obvious. What *was* kept secret was what they were all working on and how much progress had been made on it, but it's not hard to infer from the context that it was intended to produce a weapon to win the war

Expand full comment

If the price of cheap energy is a few chernobyls every decade, then society isn’t going to allow it. Mass casualty events with permanent exclusion zones... you can come up with a rational calculus that it’s a worthwhile trade off, but there’s no political calculus that can convince enough people to make it happen. So as an example, nuclear energy actually makes the opposite argument he wants it to.

Expand full comment

This seems to be an outcome of a strongly individualist society with frozen priors, but the indications are that people under 30 are much less individualistic than their elders currently running things. It seems possible to me that by 2050 a couple of large scale nuclear disasters every year might be an accepted cost of living in a good society, especially once the 1970s nuclear memes and prevention at all costs have been replaced by practical remediation action and a more pragmatic view of tradeoffs.

Expand full comment

Chernobyl is a nature preserve now, not a nuclear wasteland. People could live in the exclusion zones, if we allowed it, and they would not be appreciably less safe than most people.

People do live in Hiroshima, right in the blast zone.

Coal power plants (and high altitude locations like plane trips and Denver, CO) have higher radiation levels than nuclear power plants.

That we don't allow it is a choice. Almost every other source of power has killed more people than nuclear (I think solar is the only exception - even wind has killed more - and most have killed many orders of magnitude more people).

Expand full comment

Solar has actually killed lots of people. Usually installers doing roof-top installations.

Expand full comment

If you're counting construction accidents only tangentially related to the actual power source, probably ought to also count anyone who ever died in a coal mine, which I'm pretty sure still leaves solar coming out very far ahead.

Expand full comment

Well, yes, but I was comparing it with nuclear. There things are a lot closer.

Expand full comment

Did you ever have a closer look at uranium mines?

Expand full comment

Uranium mines are simply so much smaller than coal mines in required output that the casualty rate is low because the number of miners is low. Also, they're more concentrated in developed nations, which will drastically reduce casualty rate per miner.

Expand full comment

Solar has killed a non-zero number of people, yes. Every other type of power generation has killed far more. Wind, nuclear, and solar are orders-of-magnitude less than any other kind, with coal killing a pretty ridiculous number of people.

Expand full comment

This is the rational case, but this is a pretty safe space to make it. I don't think it's a political case, because there's a unique horror-movie level of fear in society surrounding nuclear power. That could change, but it won't change fast enough to matter to us.

That's why it isn't really a "choice," or rather it isn't really an option given the reality. I don't think it makes sense to treat it like it could be one if we just converted the world to a rationalist point of view. Clearly, that's not in the cards.

If I were going to try to make a rational case against nuclear energy, I'd probably point out a danger that didn't seem realistic until recently- unpredictable conventional warfare at a nuclear power plant. We got lucky this time, but I don't know how you can argue against that being a growing possibility. I'm no expert but I imagine the outcome of a conventional bomb hitting a reactor, in error or not, would be worse than a conventional bomb dropped on any other power generation technology (except maybe certain power generating dams.)

Expand full comment

Nuclear reactors have ridiculously thick concrete walls. A bomb hitting one by accident is unlikely to damage anything, which is not true of almost any other type of power plant. The potential damage from deliberate targeted destruction of nuclear power stations might be high, but it's less high than the potential to destroy things by blowing up dams and we don't use that as an argument against Hydro power. (Accident risk, yes, vulnerability to terrorism, maybe, vulnerability to destruction in an invasion, no.)

Additionally, I expect the (Ukrainian) casualties if Russia does blow up all of Ukraine's power stations to be more from people freezing to death than from any toxic fallout (If Russia orders its conscripts to dig trenches in the fallout again they might suffer as a result....)

Expand full comment

There has been exactly one Chernobyl over many decades, and that’s the only nuclear accident that seems to have definitely killed any members of the public. It was also the result of profoundly stupid design and operating decisions that nobody would do again precisely because of Chernobyl.

Meanwhile automobiles kill over 1.3 million people per year.

Expand full comment

Yeah, but there was also one Three Mile Island, which was way worse, because it happened *in my backyard!*

Expand full comment

There have actually been two large scale nuclear disasters resulting in many radiation deaths, Kyshtym being the second.

Coal is a better comparison than automobiles, but just as compelling.

Expand full comment

"A world where people invent gasoline and refrigerants and medication (and sometimes fail and cause harm) is vastly better than one where we never try to have any of these things. I’m not saying technology isn’t a great bet. It’s a great bet!"

Really? I would have said gasoline and nuclear were huge net disbenefits. Take gasoline out of the equation and you take away the one problem nuclear is a potential solution for.

(I think. No actual feel for what the global warming situation would be in a coal yes, ICEs no world).

Expand full comment

I have a feeling that without ICEs we wouldn't have the farm industrialization that enables feeding the world and having most people not work in farming. IMHO the cost of never starting to use ICEs would be a famine-restricted population and a much worse standard of living for billions of people than even the IPCC climate report's worst-case scenarios expect.

Expand full comment

There was steam-powered farm equipment before ICE was in common use.

Expand full comment

Thank you! That was the point I was thinking of making.

Expand full comment

Nitrogen fertilizer is critical for farming at our current scale, and it is sourced primarily from natural gas.

Expand full comment

Yes, the Haber-Bosch process for making ammonia from nitrogen and hydrogen is crucial. I've read estimates that half the nitrogen in humans' bodies has passed through it. But hydrogen can be sourced from sources other than natural gas (albeit more expensive sources) and access to natural gas is orthogonal to use of internal combustion engines (which was the original point in question in this subthread).

Expand full comment

It can be, but whether it ever would have happened without fossil fuel is a question.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Well, this subthread started from Robert Leigh's "coal yes, ICEs no world" hypothetical. My guess is that the changes due _purely_ to no internal combustion engines are fairly minor. External combustion (steam) or electric vehicles could probably substitute without very drastic changes.

My guess is that the alternate hypothetical that you are citing, with no fossil fuels - no natural gas, or oil, or coal is far more drastic. I've read claims that the industrial revolution was mainly a positive feedback loop between coal, steel, and engine production. It wouldn't surprise me if a non-fossil-fuel world would be stuck at 18th century technology permanently. With the knowledge we have _now_, I think there would be ways out of that trap, nuclear or solar, but they might well never get to that knowledge.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Steam power was much more labor intensive than ICEs. The result would have been that food was much more expensive than it is in our world, and so the standard of living would indeed have been lower. (If food is a higher percentage of the household budget, tautologically everything else is a lower percentage.)

Steam powered shipping is also more expensive than that powered by residual fuel oil, so poor parts of the world would have had less access to imported food in times of harvest failure. Harvest failures would have been more frequent because of the high running costs of steam powered pumps for irrigation and steam transport. Famines would have been more of a feature. Whether the results would have been worse than "IPCC climate report worst case scenarios", I do not know.

Expand full comment

"Steam power was much more labor intensive than ICEs."

Could you elaborate on that? Is that due to manual handling of solid fuels? Or something else? I vaguely recall that some coal burning systems use coal slurry to handle it much like a liquid. I agree that if steam power was unavoidably much more labor intensive than ICEs, then that has all the adverse downstream implications that you cite.

Expand full comment

Steam requires handling both fuel and water, which is a factor of two right there. Steam also requires a lot more maintenance, in part because hot water is hella corrosive and in part because of the extra plumbing when your heat source and your engine are in different places.

Expand full comment

Mostly reasonable points. The water is a working fluid, so it isn't getting replaced each time it is used in e.g. a Carnot cycle, so calling that addition a factor of two seems overly pessimistic. I _do_ agree that having the working fluid _be_ the fuel/air mix and combustion gases does simplify an ICE considerably. Certainly, having the heat source and engine in one place simplifies things. Steam cars did exist, https://en.wikipedia.org/wiki/Steam_car and were built and sold. They did get outcompeted by gasoline ICEs, but, in the absence of gasoline, it looks like they would have filled at least a large portion of gasoline cars' roles.

Expand full comment

One assumes they were powered by burning coal or wood, which as far as CO2 production is concerned is not an improvement on burning gasoline.

Expand full comment

Quite true. At the start of this subthread, I actually find Robert Leigh's "I would have said gasoline and nuclear were huge net disbenefits." very puzzling. I don't know which disbenefits he had in mind. "a coal yes, ICEs no world" suggests that it wasn't CO2.

Expand full comment

> Take gasoline out of the equation and you take away the one problem nuclear is a potential solution for.

How do you figure that? The principal use case for gasoline is running motor vehicles, which fission will never be a good fit for even theoretically, let alone in practical reality.

Expand full comment

Sorry, but you're wrong. The society would need to be structured a bit differently, but electric cars were developed in (extremely roughly) the same time period as gasoline powered cars. And there were decent (well, I've no personal experience) public transit systems in common use before cars were common. Most of the ones I've actually seen were electric, but I've seen pictures of at least one that was powered by a horse. It was the Key System E line from the ferry terminal up into the Berkeley hills, where the Claremont hotel is currently located.

Expand full comment

That doesn't actually contradict my claim.

To be a bit more clear, I'm not saying that electric cars, powered by energy that could have been generated at a nuclear plant, can't be a good alternative for ICE cars; we've pretty well proven that they can by now. I'm saying — in response to Robert Leigh's claim that gasoline (and thus by implication, ICE engines) is the *only* problem where nuclear power is a good alternative — that you can't put a nuclear reactor on a car as a power source. If you put a nuclear reactor in a nuclear power plant, on the other hand, you're solving a lot more problems than can be reasonably addressed by gasoline. So either way, I don't see where he's coming from on this.

Expand full comment

But without gasoline how would you power all the vehicles we use? I think without it we would be a lot poorer, which hopefully makes up for its bad effects.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Way worse. Burning coal generates nothing *but* CO2, whereas with gasoline at least some of what you get is oxidized hydrogen (H2O). It's why burning natural gas instead of oil has reduced the acceleration of CO2 emissions -- because CH4 has more Hs per C than gasoline -- and it's why people got excited about "the hydrogen economy" where you just burned H2.

Expand full comment

"I would have said gasoline and nuclear were huge net disbenefits." Could you elaborate on why you would have said gasoline was a net disbenefit, particularly in comparison to "a coal yes, ICEs no world". You can't mean CO2 emissions, since coal emits more of them than gasoline per unit energy. I'm puzzled.

Expand full comment

I exactly mean CO2 emissions. Coal would not take up all the slack left by the absence of gasoline, coal fired cars and aircraft not being a thing, so we would be that much further from a climate crisis.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Oh! Thanks for the explanation. Coal fired cars could have been managed (probably with coal slurry and a steam working fluid heat engine). Aircraft are actually run on something closer to diesel fuel, so literally an absence of gasoline but no other changes would have left them as is. There are other options to go from energy from coal to something that can power an aircraft (hydrazine, liquid hydrogen (though that has low density), https://en.wikipedia.org/wiki/Fischer%E2%80%93Tropsch_process liquids from coal, possibly propane). Coal could have taken up much of the slack left by an absence of gasoline, and, since there is more CO2 emitted per unit energy from coal than from gasoline, we might be _closer_ to a climate crisis.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

I don't think that's realistic. There are a variety of ways you can turn coal into a more convenient fuel for small moving craft[1], and if for some reason there were no liquid hydrocarbons but plenty of coal, that's what people would have done. Nothing approaches the energy storage density and convenience of hydrocarbons, when you live at the bottom of a giant lake of oxygen. That's why the entire natural world uses them as fuel and energy storage.

And as I said above, the more you start from pure C (e.g. coal) instead of a mixture of C and H (e.g. nat gas), the worse you make your CO2 emissions problem. So in a world without liquid hydrocarbons, I think CO2 emissions would've risen faster and sooner, not the other way around.

-----------------

[1] e.g. https://en.wikipedia.org/wiki/Coal_gas

Expand full comment

Agreed

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

> So although technically this has the highest “average utility”, all of this is coming from one super-amazing sliver of probability-space where you own more money than exists in the entire world.

Can somebody explain this part? Isn't this mixing expected returns from a _single_ coin flip with expected returns from a series of coin flips? If you start with $1 and always bet 100%, after t steps you have 2^t or 0 dollars - the former with probability 2^-t . So your expected wealth after these t steps is $1, which is pretty much the same as not betting at all (0% each "step").

Math aside, it's pretty obvious that betting 100% isn't advisable if you are capped at 100% returns. I'm sure even inexperienced stock traders (who still think they're smarter than the market) would be a lot less likely to go all in if they knew their stock picks could *never* increase 5x, 10x, 100x... If doubling our wealth at the risk of ending humanity is all that AI could do for us, sure, let's forget about AI research. But what if this single bet could yield near-infinite returns? Maybe "near" infinite still isn't enough, but it's an entirely different conversation compared to the 100% returns scenario.

Expand full comment

Scott specifies a 75% probability of heads.

Expand full comment

> If you start with $1 and always bet 100%, after t steps you have 2^t or 0 dollars - the former with probability 2^-t

No, since the assumption is that you can predict the coin flip better than chance, specifically with 75% accuracy, so the probability of the former scenario is much higher than 2^-t.
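
Spelling out the corrected numbers (a rough sketch under the post's assumptions: 75% chance of winning, even-money payoff, wagering 100% every round): the expected wealth grows like 1.5^t while the chance of not yet being ruined shrinks like 0.75^t.

p = 0.75      # assumed win probability
start = 1.0   # starting bankroll in dollars

for t in [1, 5, 10, 20]:
    survive = p ** t                       # must win every round when betting it all
    wealth_if_alive = start * 2 ** t       # doubles on every win
    expected = survive * wealth_if_alive   # equals (2 * p) ** t = 1.5 ** t
    print(f"t = {t:2d}: P(not ruined) = {survive:.4f}, "
          f"wealth if alive = ${wealth_if_alive:,.0f}, expected = ${expected:,.2f}")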

Expand full comment

Ah, of course. Knew I missed some variable. Thanks.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

What's problematic is that you could argue all research sits along a spectrum that *may* lead to some very, very bad outcomes, but where do you call time on the research?

As I look at it, AI sits at the intersection of statistics and computer science. We could subdivide areas of computer science further into elements like data engineering and deep learning. So, at what point would you use the above logic to prevent research into certain areas of compsci or statistics under the premise of preventing catastrophe?

I don't think this is splitting hairs either - we already have many examples of ML and Deep Learning technologies happily integrated into our lives (think Google maps, Netflix recommendations etc), but at what point are we drawing the line and saying "that's enough 'AI' for this civilisation" - how can we know this and what are we throwing away in the interim?

Expand full comment

Well, it might have been reasonable to draw a line saying "Slow down and consider the effects" before Facebook was launched. I wouldn't want to stop things, but I think a lot of our current social fragmentation is due to Facebook and other similar applications.

Note that this ISN'T an argument about AI, but rather about people and their motivational systems. People have a strong tendency to form echo chambers where they only hear the comments of their ideological neighbors, and then to get tribal about it, including thinking of "those folks over there" as enemies.

Expand full comment

I guess the question would be whether one could have seen the far-reaching effects of Facebook and social media before the damage had been done.

Same here - we might already have crossed a tipping point and not know it.

Expand full comment

There are actually small indications that we *have* crossed a tipping point. Not of AI, but of the way humans react to conversational programs. But we've been working towards that particular tipping point quite diligently for years, so it's no surprise that when you add even a little bit more intelligence or personalization on the other end you get a strong effect.

Expand full comment

I think on a slightly smaller scale, this also describes where we went wrong with cars/suburbs/new modernist urban planning. It's not that it didn't have upsides, it's that we bet 100% on it and planned all our new cities around it and completely reshaped all our old cities around it, which caused the downsides to become dominant and inescapable. An America that was, say, 50% car-oriented suburbs would probably be pretty nice; a lot of people like them, and those who don't would go elsewhere. An America that's 100% that (or places trying to be that) gets pretty depressing.

Expand full comment

America is not 100% suburban.

Expand full comment

It did 100% replan around car-centric/street parking mobility - even urban places like Manhattan (or rural places in Idaho) effectively remodeled around it, except for that one island in Michigan.

Expand full comment

Yes, they replaced roads designed for horses with roads designed for more modern vehicles.

Expand full comment

Yes, until 1940 everyone in America rode horses everywhere, from their backyard stable right to their 9-5 office job and then back.

Expand full comment

Automobiles predate 1940, streets had already been replaced by then. In 1915 there were 20 million horses while the human population was roughly 100 million. Many people would commute via horsedrawn transit.

Expand full comment

The real issue is risk of ruin. Modernism can be reverted because it's not an existential risk.

Expand full comment

That's not exactly true - betting 90% of your money on a 75% bet each time doesn't run risk of ruin, but it's still negative log EV.
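
Checking that with rough numbers (same assumed 75% even-money coin): wagering 90% each round can never hit exactly zero, but the expected log growth per round is negative, so the typical bankroll still decays toward zero.

import math

p, f = 0.75, 0.9
log_growth = p * math.log(1 + f) + (1 - p) * math.log(1 - f)
print(f"expected log growth per round: {log_growth:+.4f}")  # about -0.094

# The median outcome after n rounds is roughly exp(-0.094 * n) of the starting
# bankroll, even though the arithmetic EV of the per-round multiplier is
# 0.75 * 1.9 + 0.25 * 0.1 = 1.45.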

Expand full comment

I'm saying "betting 100% on modernity" isn't really analogous to "betting 100% on AI" because it's not a 100% wager so much as a 100% level of confidence. I think Modernism has downsides too, but it hasn't irrevocably bankrupted civilization yet. There's still time to turn the ship around if we so choose.

Expand full comment

There is a concept in economics called "revealed preference". The idea is, don't ask people what they prefer, look at what they buy. That tells you their real preferences.

The parts of the US that are growing are the "sprawl" parts: various cities in Texas, and Atlanta. Especially Atlanta.

Unpalatable as it may be to you and me, that tells you what most people want. The tyranny of the majority may be oppressive, but it's not nearly so oppressive as other tyrannies.

Expand full comment

That's not actually what we see though - revealed preference shows that prices are highest (by far) in the few places that are less like that (like Manhattan or SF/Berkeley). The reason the sprawl areas are the ones that grow is that sprawl is the only thing it's legal to build.

Expand full comment

Don't confuse the Kelly criterion with utility maximisation (there kind of is a connection, but it's a bit of a red herring).

If you have a defined utility function, you should be betting to maximise expected utility, and that won't look like Kelly betting unless your utility function just happens to be logarithmic.

The interesting property of the Kelly criterion (or of a logarithmic utility function compared to any other, if you prefer) is that if Alice and Bob both gamble a proportion of their wealth on each round of an iterated bet, with Alice picking her proportion according to the Kelly criterion and Bob using any other strategy, then the probability that after n rounds Alice has more money than Bob tends to 1 as n tends to infinity.

That doesn't tell you anything about their expected utilities (unless their utility functions happen to be logarithmic), but it's sometimes useful for proving things.
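
A rough Monte Carlo sketch of that property, under my own illustrative setup (both bet fixed fractions of their wealth on the same 75%-heads, even-money flips; Alice uses the Kelly fraction 0.5, Bob uses some other fixed fraction):

import random

def p_alice_ahead(bob_fraction, p=0.75, kelly=0.5, rounds=200, trials=10000, seed=0):
    # Estimate P(Alice's wealth > Bob's) after `rounds` bets on the same flips.
    rng = random.Random(seed)
    alice_ahead = 0
    for _ in range(trials):
        alice = bob = 1.0
        for _ in range(rounds):
            sign = 1 if rng.random() < p else -1
            alice *= 1 + sign * kelly
            bob *= 1 + sign * bob_fraction
        if alice > bob:
            alice_ahead += 1
    return alice_ahead / trials

for f in [0.25, 0.75, 0.90]:
    print(f"Bob bets {f:.2f} each round: P(Alice ahead after 200 rounds) "
          f"is about {p_alice_ahead(f):.3f}")

In this setup every non-Kelly fixed fraction loses to Alice with probability approaching 1 as the number of rounds grows; whether that extends to adaptive strategies like the one suggested in the reply below is a separate question.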

Expand full comment

What's the exact statement of that result about the competition between Alice and Bob? In particular are they betting on the same events, or independent events with the same edge?

If it's the former, Bob could do something like always betting so that he will have $0.01 more than Alice if he wins, until he does win, and then always betting the same as Alice. This would make him very likely to come out ahead of Alice, at the expense of a small probability of going bankrupt.

Expand full comment

Oh god, now you're asking. I'm on a phone, and hate reading maths on it, so check this on Google, but there's an obvious-but-weak form of it where Alice and Bob are each constrained to bet the same proportion in every round (take logs and use the CLT), and I think there are stronger, more general versions too.

Expand full comment

I think this sort of argument only makes sense if the numbers you plug in at the bottom are broadly correct, and the numbers you're plugging in for "superintelligent AI destroys the world" are massively too high, leading to an error so quantitative it becomes qualitative.

Expand full comment

I don't think we have a reasonable way to estimate how likely a self-motivated super-intelligent AI is to destroy the world. So try this one: how likely is a super-intelligent AI that tries to do exactly what it is told to do to destroy the world? Remember that the people giving the instructions are PEOPLE, and will therefore have very limited time horizons. And that it's quite likely to be trying to do several different things at the same time.

Expand full comment

The issue, for me anyway, is not that old nuclear activists were unable to calculate risks properly. The issue is that they basically didn't know anything about the subject they were so worried about, partly because nobody did. In the end, yes, they made everything worse. The world might have been better served had the process of nuclear proliferation been handled by experts chosen through sortition.

The experts in AI risk are *worse than this.* The AI is smarter than I am as a human? Let's take that as a given. What does that even mean? There is a very narrow band of possibilities in which AI will be good for humanity, and an infinite number of ways it could be catastrophic. There's also an infinite number of ways it could be neutral, including an infinite number of ways it could be impossible. The worry is itself defined outside of human cognition, in a way that makes the issue even more difficult than it otherwise would be, so how are you supposed to calculate risk if you can't even define the parameters?

Expand full comment

It is quite clear that human equivalent AI is possible. The proof relies on biology and CRISPR, but it's trivial. And it is EXTREMELY probable that an AI more intelligent than 99.9% of people is possible using the same approach. Unfortunately, there are very good grounds to believe that any AI created in that manner would be as self-centered and have as short a planning horizon as people generally do. This is just an existence argument, not a recommendation.

AI is not a particular technology. Currently we are using a particular technology to try to create an AI, but if that doesn't work, there are alternatives. An at least weakly superhuman AI is possible. And if you don't define "good for humanity" then the only good I can imagine is survival. It's my opinion that given the known instability of human leaders and the increasing availability of increasingly lethal weaponry, if leadership of humanity is not replaced by AIs, we stand a 50% (or higher) chance of going extinct within the century, and that it will continue increasing. And AI is, itself, an existential threat, but if we successfully pass that threat, the AI will act to ensure human survival. I take this to be a net good. It also is quite unlikely to derive pleasure from inflicting pain on humans. (The 50% chance is because it might not like us enough to ensure our continued existence, and might find us bothersome...and is a wild guess.)

Once people start invoking infinities, I start doubting them. Perhaps you could rephrase your argument, but I think its main flaw is that it doesn't consider just how dangerous humans are to human survival.

Expand full comment

One of the things I learned from LW (if I'm remembering correctly) was about the multi-armed bandit problem, which is a situation where you need to experiment with wagers just to discover the payoff structure. Without hindsight, the payoff matrix is a total black box. Therefore, whether the "optimal" strategy is risky or conservative is anyone's guess, a priori.
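To make the bandit point concrete, here is a minimal epsilon-greedy sketch (the arm probabilities and epsilon are made-up illustration values, not anything from the thread): the payoffs are unknown up front, so some fraction of pulls has to be spent on exploration just to learn the payoff structure, and only in hindsight do you know which arm was "optimal".

import random

# Epsilon-greedy bandit: explore with probability eps, otherwise pull the
# arm with the best estimated mean so far. Arm probabilities are assumptions.
def epsilon_greedy(arm_probs, pulls=10000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)
    means = [0.0] * len(arm_probs)
    total = 0
    for _ in range(pulls):
        if rng.random() < eps:                      # explore: random arm
            a = rng.randrange(len(arm_probs))
        else:                                       # exploit: best estimate so far
            a = max(range(len(arm_probs)), key=lambda i: means[i])
        reward = 1 if rng.random() < arm_probs[a] else 0
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]  # running average
        total += reward
    return total, [round(m, 3) for m in means]

print(epsilon_greedy([0.2, 0.5, 0.65]))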

I do think a lot of AI fear-mongering is a result of not understanding the nature of intelligence. If you can manage to put constraints on it, though, like how the study of thermodynamics bounds our expectations of engines, AI becomes less scary.

Expand full comment

I think you are right that a part of the issue is not understanding the nature of intelligence. But I think that's just one aspect. Not understanding the nature of intelligence means we also don't have a good account of psychology, which means we also don't have a good account of neurology, nor philosophy of mind. Put another way, we don't know "where" intelligence comes from, how it exactly relates to neurology, how that relates to decision making, or how any of that is supposed to apply to something explicitly non-human and "more intelligent than humans" even if we did know all of that.

I can fully admit AI might kill us all. But I think if it does, it's more likely to be because people with the priorities of Scott Alexander are extremely worried about it, and, through ignorance, are going to give it the machine equivalent of a psychological issue, like say, psychopathy or low-functioning Autism.

Though I also admit I make no predictions on the probability of this as opposed to anything else. That's kind of my point, and why I'm so tired of AI fear-mongering.

Expand full comment

I have an alternate perspective.

I have this pet theory that the reason intelligence evolved is because it allows us to simulate the environment. I.e. simulation allows us to try risky actions in our mind, before we try them irl. It's often calorically cheaper than putting life and limb on the line. This dovetails with my hunch that life is just chemical disequilibrium. It dovetails with my hunch that what set humans apart from apes was cooking. And it dovetails with why I think humanity acquired a taste for stories/religion/sports. It's thermodynamics, all the way down.

If true, then Carnot's Rule bounds the threat of AI, just like it bounds everything else in the universe. A Jupiter brain might have orders of magnitude more processing power than a human brain. But "intelligence vs agency" is sigmoidal, and humanity is already rightward of the inflection point. Thus, the advantage that a Jupiter brain offers over a human brain is subject to diminishing returns. AI still might do scary things, but it's unlikely to do things that couldn't already be accomplished by an evil dictator. I suspect most skeptics of the singularity share this intuition, but can't find the right words.

None of this depends on knowing the particular inner-mechanisms of the brain.
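A toy way to see the "rightward of the inflection point" claim (the logistic curve and the x-values are pure assumptions, chosen only for illustration): if agency is a logistic function of intelligence, the marginal agency bought by each extra unit of intelligence shrinks rapidly once you are past the inflection point, no matter how far out a Jupiter brain sits.

import math

def agency(x):
    return 1.0 / (1.0 + math.exp(-x))  # logistic curve, inflection at x = 0

for x in (0, 1, 2, 4, 8, 16):  # pretend "human" is around 1-2 and a Jupiter brain is around 16
    gain = agency(x + 1) - agency(x)   # marginal agency from one more unit of intelligence
    print(f"x = {x:2d}: agency = {agency(x):.4f}, marginal gain = {gain:.6f}")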

Expand full comment

Well, we know the architecture of the human brain *allows* for 200 IQ, but it turns out we aren't that smart in general -- evolution has *not* driven us to the maximum possible intelligence for our architecture, the way it drove the cheetah to the maximum possible speed for its physical architecture. That does suggest that even among humans, there may be a diminishing returns aspect to intelligence. Maybe there is some accompanying drawback to everyone being IQ 180, or else it just doesn't return enough to be worth the development and quality-control cost.

Expand full comment

I don't think there are any Algernon-style drawbacks. I think the bottleneck is just calories. Human brains already represent a big capex and a big opex - too big for the diets of most species to afford. Meanwhile, people with 200 IQ are like giraffes, in that the problems they can solve represent a tiny set of high-hanging fruit.

Expand full comment

I agree with this, in that calories were expensive to ancestral humans in a way they aren't so much today. Calories would be *really inexpensive* to a Dyson Swarm, so if the main constraint you envisage on an AGI is energy consumption, expect it to at minimum dismantle Mercury to turn it into solar panels, then do its cost/benefit calculations on the basis that it'll have much of the energy output of a star to play with.

Expand full comment

"The worry is itself defined outside of human cognition"

To clarify; are you arguing that the problem is beyond current human cognition or that the problem is generally intractable?

Expand full comment

The former. I have no opinion on if the problem is intractable.

Expand full comment

I feel like gambling is a bad reference for the kind of decision-making involved with AI-development. You can always walk away from the casino, whereas the prospect that someone else might invent AGI is a major complication for any attempt at mitigating AI-risk. A scientist or engineer, who might otherwise leave well enough alone, could, with at least a semblance of good reason, decide that they had best try to have some influence on the development of AGI, so as to preempt other ML-researchers with less sense and fewer scruples.

This is not to say that averting AGI is impossible, just that it would require solving an extremely difficult coordination problem. You'd need to not only convince every major power that machine learning must be suppressed, but also to assure it that none of its rivals will start working on the AI equivalent of Operation Smiling Buddha.

Expand full comment

what are the chances of a newly developed AI having both the ill intent and the resources to kill us all?

Expand full comment

As proven by any number of real-life rags-to-riches underdog stories, you don't need to start out "newly developed" in possession of intent and resources to accomplish something significant in life; just intent and time, which you use to accumulate the necessary resources.

Expand full comment

I won't comment on the chances of the "ill intent" part. However, if we simply look at the current state of cybercrime, it should be assumed that any newly developed, ill-intentioned AI connected to the internet, with capability equivalent to (or better than) a modestly skilled teenage hacker and perhaps the time to find a single vulnerability in some semi-popular software, would be able to amass within a span of weeks or months: (a) financial resources amounting to multiple millions of dollars' equivalent in cryptocurrencies; (b) a similar scale of new, external compute power and hardware to run its "thinking" or "backups" on in the cloud; (c) a dozen "work from home" people for various mundane tasks in the physical world, just as cybercriminals hire 'money mules'; and (d) a few shell companies operated by some lawyer to whom it can mail orders to arrange purchases or other actions which require a legal persona.

Up to this point there's no speculation - this is achievable because it has all been achieved by multiple human cybercriminals. Now we can start speculating: whether those are sufficient resources to kill us all depends on the smartness of the agent; however, I'd guess so. Those assets would be sufficient to organize the making and launching of a bioweapon, if the AI figured out how to make one.

Expand full comment

But the AI would also have to do all that (and more importantly, LEARN how to do all that) without tipping off its creators to the fact that it’s gone off the rails, and then win the ensuing struggle. And the humans fighting back against the AI will have less than superhuman but very powerful AIs on their side.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

What struggle? Our experience with cybercrime suggests that such activities can go on for a very, very long time without being detected, and even when they are, they wouldn't be easily distinguishable from ordinary human criminal activity or linkable to the source.

"Learning to do all that" is the hard part of making a human-equivalent-or-better AI, however, once the intelligence capability is there, *any* reasonable AI has enough data to learn all of that without needing anything from the creators or doing anything observably dangerous - even ChatGPT includes a rudimentary ability to generate semi-working exploit code, the commonly used text+code training datasets has more than sufficient information for a powerful agent to learn all of what I mentioned.

So my expectation is that if the malicious agent has an unsupervised connection to the general internet (which it shouldn't have, but I have already seen multiple users posting about how they wrote simple scripts to connect chatgpt to their actual terminal with a capability to execute arbitrary commands, so...), then the creators would get tipped off only after the "kill all humans" plan starts killing them, by which time the "fight" would already be over.

And after all, assuming that no very special hardware is needed, once the model gains its first money, it can rent cloud hardware to run a copy of itself outside of any possibility of supervision by the creators.

Expand full comment

Cybercrime is obnoxious but it’s hardly an existential threat and it’s generally a known attack vector. At some point the AI is going to have to start significantly manipulating the physical world to kill people and that opens up a ton of chances to get caught.

AIs as we know them can be given a huge training database, but they are still "learn by doing" agents - they need some sort of feedback to self-improve. If they are doing something their creator did not train them on, especially if it's something no human has ever done, they are going to have to experiment in the "real world" a bit. This should eventually get discovered unless the AI authors are either colluding or completely asleep at the wheel.

And there still might be fundamental limitations on what AI can do - it can't communicate with itself faster than the speed of light, it probably can't brute-force its way through properly implemented encryption algorithms, if it turns the whole planet into computronium it still needs to cool itself, it can't cross air gaps without control of a physical manipulator, etc.

Expand full comment

If we develop some AI that is caught being naughty and we successfully shut it down, is that the end of AI research? Do we all agree never to try again? I don't think we will. Eventually our adversary will be a superintelligence.

Robert Miles has an analogy to chess that I think is apt. (source: https://www.youtube.com/watch?v=JVIqp_lIwZg&t=2s)

You are an amateur chess player. You have developed an opening that beats all your friends. You can't see how anyone could beat it. You will face Magnus Carlsen in a game soon.

I tell you how I'm almost sure you will lose. You claim that you've thought of all possible counters to your special opening, and you haven't found any. I still think he will beat you. You ask me to give some examples of how could he do so. I look at your opening and since I don't know much about chess, I can't find any problems with it. I might give some suggestions but you counter that you've already thought of that. I can't find flaws in your strategy.

I'm still pretty sure you will lose.

Expand full comment

It’s not clear to me why a newly minted superintelligent AI is Carlsen in that scenario rather than the amateur (with perhaps an IQ much higher than Carlsen’s).

A “just became self aware” AI is likely to be extremely smart but also naive - it’s the amateur who has figured out how to beat his friends (the AI’s training data, presumably tuned to whatever the researcher’s actual goals for the AI are) but never played against a real master (manipulating the “real world”, which is a lot messier than chess). In “escaping the box” the AI is almost certain to encounter a large number of unexpected (to itself) situations and setbacks - maybe “clever” is a superpower that allows it to breeze right past all that, but maybe “intelligence” can’t literally solve everything (especially not without getting found out).

Expand full comment

For the record, our failure to achieve nuclear panacea is slightly more nuanced than Green opposition on ideological grounds: evidence seems to suggest it's more about electricity market deregulation. In retrospect we really, really should have built more nuclear and less coal and gas, either through states stepping in and taking it upon themselves to finance nuclear projects, or by taxing fossil fuels out of the market; but Green opposition following Chernobyl and Three Mile Island seems to have been more of a nail in the coffin, when the real reason for the lack of nuclear adoption seems to have been financial infeasibility (given market conditions at the time).

https://mobile.twitter.com/jmkorhonen/status/1625095305694789632

Expand full comment

The writer Austin Vernon had a pair of good pieces on nuclear as well:

https://austinvernon.site/blog/nuclear.html

https://austinvernon.site/blog/nuclearcomeback.html

There was a specific set of conditions that favored nuclear power until the 1980s, and it wasn't just regulatory. They benefited from not having to compete in deregulated electricity markets, a lot of the early plants were made rather cheaply and weren't exceptionally reliable (upgrades later improved that but also made nuclear more expensive), and they didn't have to compete with cheap gas power in particular.

Nuclear also benefits from regulation. It's how they get their liability protection from meltdowns - if they actually had to assume full liability for plant disasters, it's questionable whether they could afford the cost of insurance.

Expand full comment

I wouldn't say that it's a nuclear-specific liability protection - if, e.g., coal plants had to assume full liability for their consequences, then the added cost of coal-driven electricity would be even larger than the nuclear insurance you mention, since normal operation of coal plants causes more cancer than any reasonable estimate of nuclear plant meltdown risk, and that's ignoring any carbon/warming effect.

Of course if we suddenly start charging one type of energy (e.g. nuclear) for its negative externalities, then it becomes uncompetitive - but if we did that for all means of electricity generation, I think nuclear would be one of the options that would work out.

Expand full comment

Right, and this is why anybody who thinks the solution to climate change involves carbon tax but still opposes nuclear ought to smack themselves on the head and say “why didn’t I think of that!” The logic is right there.

Expand full comment

I agree with you on AI, but not necessarily on nuclear energy (or even housing shortages). Partly because I don't agree that "all other technologies fail in predictable and limited ways."

Yes, we're in a bad situation on energy production and lots of other issues, and yes, we are reacting too slowly to the problems.

But reacting too slowly is pretty much a given in human affairs. And, I'm not sure the problems we are reacting too slowly to today, are worse than the problems we would be reacting too slowly to if we had failed in the opposite direction.

To continue with nuclear as an example: I'm generally positive about adding a lot more nuclear power to the energy mix. But I would like to hear people talk more about what kind of problems we might create if we could somehow rapidly scale up production enough to all but replace fossil fuels (≈10X the output). And what kind of problems would we have had if we started doing that 50 years ago?

With all the current enthusiasm for nuclear energy, I wish it were easier to find a good treatment of expected second- and higher-order effects of ramping up nuclear output by even 500% in a relatively short period of time.

Sure, nuclear seems clean and safe now. But at some point, CO2 probably seemed pretty benign, too. After all, we breathe and drink it all day long, and trees feed off it. I know some Cassandras warned about increasing the levels of CO2 in the atmosphere more than a hundred years ago, but there was probably a reason no one listened. "Common sense" would suggest CO2 is no more dangerous than water vapor. It was predictable, but mostly in hindsight.

So what happens when we deregulate production of nuclear power while simultaneously ramping up supply chains, waste management, and the number of facilities; while also increasing global demand for nuclear scientists, for experts in relevant security, for competent management and oversight; and massively and rapidly boosting market sizes and competitive pressures, and creating a booming industry with huge opportunities?

And what would have happened if we let go of the reins 50 years ago instead?

I think many casual nuclear proponents don't appreciate enough that 1) part of the reason nuclear is considered safe and clean today is that we have, in fact, regulated heavily and scaled slowly, 2) there *will* be unintended consequences no matter what we do, and 3) there are more risks posed by nuclear power than another Chernobyl/Three Mile Island/Fukushima – especially when we scale quickly.

The correct answer to "What kind of problems would we have had?" is "We don't know."

Neither nuclear power production nor AI will fail in the same predictable, yet under-predicted, ways that fossil fuels, or communism, or social media, or medical use of mercury, failed. But they are *virtually guaranteed* to fail in some other way, if rolled out to and adopted by most of humanity. (Everything does. It's pretty much a law of nature, because so much of life on earth depends on a pretty robust balance. That balance is nevertheless impossible to maintain when almost every individual in an already oversized population puts their thumb on the same side of the scale.)

When faced with that kind of uncertainty (i.e. the only real uncertainty is how things will go wrong first, and how serious it will be), in the face of existential risk, then moving slowly and over-regulating is probably the best mistake we can make.

Expand full comment

But we know much less about what is an existential risk than we think we do. Not least because political actors like to encourage fear in order to benefit from pretending to solve problems. Humans are actually really good at solving problems with an emergent process, probably because by the time people are working on it, the need is clear. Humans are not so good at finding solutions in a top-down manner, where those tasked with finding a solution don’t have skin in the game, such as with government regulatory schemes.

Expand full comment

I’m not sure I understand the first part of your comment. What is existential risk seems pretty self-evident to me.

To me it means: Risk of an event or series of events that would cause the death of a large share of humanity – billions of people – and trigger the collapse of civilization. Examples are large asteroid impacts, nuclear war at a certain scale, lethal enough pandemics, severe enough climate change…. You seem to be saying that these risks are often exaggerated, and so we don't know which ones we are right to care about? If I got that right, I would think that any non-zero chance of something like that happening seems like a risk worth taking seriously.

As for the problem solving capacities of humans:

We are creative, sure. But pretty much every solution we come up with creates a new problem (not always as serious as the original problem, but often enough) when scaled to a population level. The new problem requires a new solution, which creates new problems. It is almost a natural law, related to evolution: Our creativity is an adaptation mechanism, and adaptation typically leads to selection pressures (on an individual or group level).

When populations are small and local, and solutions and technology are weak and local, that doesn't affect the natural balance of the planet much. But once everyone on the planet is a single population, and problems and solutions have global impact, our creativity and the solutions themselves become existential risks (imagine if we got the COVID vaccine tragically wrong, and everyone who took it would spontaneously combust, or if we eradicate some invasive species of mosquito somewhere, so as to get rid of some disease, just to realize we triggered something that makes ecosystems start coming apart at the seams, or if we do gain-of-function research and ... you know).

At the point we’re at in history, I’m not sure we can consider ourselves “good” at coming up with solutions, because the bar for what “good” is has been raised significantly in the last two centuries. We’re now at the point where we just have to keep up, and the cost of failure is possibly too high to contemplate.

Governments may be bad at solving problems, but they also have a thankless job in that they tend to get stuck with the hardest problems to solve, are never let off the hook, and their solutions that work (like uncommonly stable economies, global vaccination programs, the internet, and regulations that actually work) are often taken for granted.

Expand full comment

My impression is that estimates of the risks associated with near-term AI research decisions vary by several orders of magnitude between experts, which means different people's assessments of the right Kelly bet for next-3-year research decisions are wildly different.

Has anyone put together an AI research equivalent of the IPCC climate projections? Basically laying out different research paths, from "continued exponential investment in compute with no brakes whatsoever" to "ban anything beyond what we have today". This would enable clear discussion, in the form "I think this path leads to this X risk, and here's why". Right now the discussion seems too vague from a "how should we approach AI investment in our five-year plan" perspective, and that's where we need it to become immediately practical.

Expand full comment

When you ask for that remember that the IPCC routinely trimmed excessively dangerous forecasts from their projections...for being out of line with the consensus. (They may also have trimmed excessively conservative forecasts, but if so I didn't hear of that.)

Expand full comment

Right, but I'm not suggesting publishing consensus projections of *outcomes* -- there isn't any consensus there I can see -- just outlines of research paths, in the same sense as "in this model we produce this much energy using this mix of technologies, releasing this amount of greenhouse gases".

In other words, reference sets of behavior to enable apples-to-apples discussion of risk.

Expand full comment

I also think this exercise would be useful for trying to define paths in a way that helps to:

1) create reference language to encourage AI researchers to adopt as safety policy (e.g. define exactly what we want OpenAI to agree to, and gradations of commitments)

2) work toward policy language to put in international agreements with other countries. As with climate change, US policy in isolation isn't enough.

Expand full comment

AI research is not following a linear trajectory so it's difficult to do practical planning.

Expand full comment

I disagree; large elements can be planned. Examples:

1) Planned compute allocation

2) Target model sizes

3) Specific alignment testing required for (a) public access or (b) moving to work on the next model

4) Commitments to external audits of practices

5) Regular public reports on research process and progress

6) Commitment to public "safety incident reports" when a system behaves significantly outside well-specified parameters (near term: someone gets the AI to tell them how to build a bomb, but this is likely to get more alarming as time goes on)

None of these inherently make research safer. But they encourage transparency, and provide opportunities for routine press coverage in a way that can pressure companies to care. When there's a big safety recall on cars, it makes the news and is bad PR for car companies; we want those types of incentives on AI companies.

We can't directly plan for the _results_ of the research -- that's the nature of research -- but we can push for clear disclosure of both plans and policy, and discuss how different safety policies are likely to impact research rate.

Expand full comment

Note: so far as I know, no alignment test we can do now is passable, nor do any of them reflect deep understanding of the model under test -- but I still think there's value in beginning the process of standardizing, defining explicit goals (even if we can't yet meet them), and enforcing norms, so they're already in place as we (hopefully!) develop better assessment tools.

Expand full comment

Yeah but what you're calling "AI" right now is turbocharged autocomplete. Try to ask it a question that requires reasoning and not regurgitation and you get babble, burble, banter, bicker, bicker, bicker, brouhaha, balderdash, ballyhoo.

Expand full comment

There's far less of it now than there was just a year ago. Just yesterday I saw an example where someone put in an extremely convoluted piece of programming code, the product of intentional obfuscation, and asked ChatGPT "what does this do?" And it gave the correct answer, significantly faster than even the best of programmers could have done.

That looks pretty close to actual reasoning, from a lot of people's perspective.

Expand full comment

"These are words with 'D' this time!"

https://www.youtube.com/watch?v=18ehShFXRb0

Expand full comment

The people who opposed nuclear power probably put similar odds on it that you put on AI. If your "true objection" is that this is a Kelly bet with ~40-50% odds of destroying the world, your objection is "the proponents of <IRBs/Zoning/NRC/etc> are wrong, were wrong at the time for reasons that were clear at the time, and those reasons clearly do not apply to AI".

Otherwise, we're back to "My gut says AI is different, other people's guts producing different results are misinformed somehow"

Expand full comment

A nuclear energy expert illustrates how lots of own-goals by the industry and regulatory madness prevented and prevents widespread adoption. “The two lies that killed nuclear power” is among my favorite posts. https://open.substack.com/pub/jackdevanney?r=lqdjg&utm_medium=ios

Expand full comment

“regulatory madness”

The funny thing is I keep hearing that meme repeated but never hear exactly what regulations they want deleted.

Would seem to be a trivial exercise if there is so much “madness”.

I suspect any actual response would be vague like a SA post on bipolar treatment or a Sarah Palin “all of them”.

The other funny thing is that 4 of the 5 nuclear engineers I’ve discussed the topic with are in the nuclear cleanup business.

Expand full comment
founding

Go look at the guy's substack, then. There's examples.

Expand full comment

One big one is ALARA, or “as low as reasonably achievable” wrt radiation. Obviously, this is a nebulous phrase and gets used to apply ever increasing pressure and costs to operators to an extreme. Another is LNT, or “linear no threshold”, which essentially ignores the dose response relationship to radiation over time.

Expand full comment

This is a similar line of reasoning to the one Taleb takes in his books Antifragile and Skin in the Game. Ruin is more important to consider than probabilities of payoffs, especially if what's at risk is at a higher level than yourself (your community, environment, etc.). If the downside is possible extinction, then paranoia is a necessary survival tactic.

Expand full comment

I guess that's the eternal dilemma. How do we use science & technology gainfully while at the same time having safeguards against misuse?

Btw, whatever the new discovery, unless the population explosion is controlled, pollution cannot be.

Expand full comment

> The YIMBY movement makes a similar point about housing: we hoped to prevent harm by subjecting all new construction to a host of different reviews - environmental, cultural, equity-related - and instead we caused vast harm by creating an epidemic of homelessness and forcing the middle classes to spend increasingly unaffordable sums on rent.

Most of these counterexamples are good ones, but the YIMBY folks are actually making the same basic mistake that the mistaken people in the counterexamples made that made them wrong: they're not looking beyond the immediately obvious.

The homelessness epidemic which they speak of is not a housing availability or affordability problem. It never was one. Most people, if they lose access to housing or to income, bounce back very quickly. They can get another job, and until then they have family or friends who they can crash with for a bit. The people who end up out on the streets don't do so because they have no housing; they do so because they have no meaningful social ties, and in almost every case this is due to severe mental illness, drug abuse, or both.

Building more housing would definitely help drive down the astronomical cost of housing. It would be a good thing for a lot of people. But it would do very little to solve the drug addiction and mental health crises that people euphemistically call "homelessness" because they don't want to confront the much more serious, and more uncomfortable, problems that are at the root of it.

Expand full comment

I've seen this argument before, and I believe it partially. But an opponent would say that bouncing back is a lot easier with cheaper housing than with expensive housing, both for those with social ties and those without. People who despair at how they're going to bounce back might start to abuse drugs, which in turn might aggravate mental illness.

As evidence, opponents say that housing cost is the number one predictor of homelessness (e.g. https://www.latimes.com/california/story/2022-07-11/new-book-links-homelessness-city-prosperity).

What would you say to these arguments?

Expand full comment

This. I think it's a big deal if a drug addict or mentally ill person can at least get a private room for rent (especially as part of a broader set of support services) versus being out on the street or in a dangerous shelter.

Expand full comment

> What would you say to these arguments?

I would say that correlation does not imply causation. There's another factor at work here which the article doesn't mention: migration.

We have freedom of interstate travel in the USA, recognized by the courts as a Constitutional right, and healthy people who find themselves priced out of local markets can and do move to more affordable places. Meanwhile, places with high costs of living also commonly tend to be in jurisdictions that promote addict-enabling policies, which brings more of them in.

The authors of the study can look at prices all they want, but the statistic they don't seem to be looking at is "what percentage of the long-term homeless population is comprised of individuals who do not have problems with drug abuse or mental illness?"

Expand full comment

Why do cities have way more homeless than the country? Because people with problems go where they can scratch out *some* kind of life -- and the city is where that is. You can't beg or steal or run cons in the country nearly as easily as in the big anonymous city.

And a big anonymous *wealthy* city -- where the price of real estate is sky high -- is even better, because *those guys* are probably going to have some welfare programs, too.

Expand full comment

Most city homeless lived in that city before becoming homeless, IIUC.

There will be an element of people moving to cities for work and then not having local support networks when they lose their job, while anyone staying in Hicksville is almost certainly doing so because they have family and friends there who they can lean on in times of hardship. I am sure, though, that "can one afford to rent a room while living on unemployment/welfare/social security/etc?" will play a huge role too.

Expand full comment

I don't think you're wrong about people bouncing back quickly most of the time, especially if they have a job or family support network. But at the macro-scale, it really is about housing affordability.

Rates of homelessness track consistently with housing affordability issues, not rates of drug addiction or mental illness. As the piece I'm linking to below points out, Mississippi has one of the most meager public assistance programs in the country for mental health - and yet one of the lowest rates of homelessness in the country. West Virginia, meanwhile, is one of the worst states when it comes to drug addiction - but also has one of the lowest homelessness rates in the country.

We even saw it with the deinstitutionalization movement. They used to think that was the source of a lot of homeless people, but most of them apparently did find cheap housing - even if it was stuff like rooms for rent and dilapidated SRO stuff.

https://noahpinion.substack.com/p/everything-you-think-you-know-about

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

Without even the most modest good faith effort to track where the homeless in one place actually come from, these numbers are all meaningless. The people generating such numbers always sound like folks who have never actually visited Mississippi or West Virginia and have always had a relatively free choice of nice places to live. It should surprise no one, and yet it always does, that the homeless who are functional enough to be able to live independently on the street are functional enough to make good choices about where to move -- choices which track almost perfectly with the general preferences that create the differences in market price from one area to another to begin with.

Expand full comment

The piece itself actually talks about this. It's mostly locals, not migrants - 65% of LA County homeless have lived in the area for 20 years or more, and 75% of them lived in LA before becoming homeless.

Expand full comment

Yes, those are the self report numbers. Having worked with the homeless, I can testify that the life histories they give once you get to know them a bit better are full of all kinds of traveling. Even if we put on our optimism glasses and pretend that this is the one domain in which the homeless are particularly scrupulous when answering polls, we have to look at what the terms even mean. Someone who is now living on the street who came to LA 20 or 25 years ago to stay on a friend's couch until he hit it big isn't going to report that he has been the same kind of homeless for all of that time. But that is completely beside the point, which is that folks of all income levels move to places that are desirable to live. Good luck finding someone who moved to Mississippi from LA 25 years ago with nothing but a duffel bag hoping to sleep on a friend's couch until he got his feet under him again.

Expand full comment

Bob, the obfuscation on this topic is worse than you think, and it goes many layers deeper. Not only is the homelessness epidemic not a housing availability or affordability problem, the housing availability and affordability problem isn't a housing availability and affordability problem either. It never was one. Even recognizing what kind of problem it is represents the very worst and most taboo kind of wrongthink.

Expand full comment

Oh, I'm well aware of the issues you're talking about. I write about some of the root-cause stuff on my own Substack. I just try to keep comments that could be perceived as trolling out of communities that wouldn't appreciate it.

Expand full comment

Please elaborate. What kind of problem does it represent, and how do you know?

Expand full comment

I think that the Kelly Criterion metaphor actually implies the opposite of what Scott is arguing here.

The Kelly Criterion says "Don't bet 100% of your money at once". But it also says it's fine to bet 100% - or even more than 100% - as long as you break it into smaller iterated bets.

To analogise to AI research, the Kelly Criterion is "Don't do all the research at once. Do some of the research, see how that goes, and then do some more".

There's not one big button called "AI research". There's a million different projects. Developing Stockfish was one bet. Developing ChatGPT was another bet. Developing Stable Diffusion was another bet.

The Kelly Criterion says that as you make your bets, if they keep turning out well, you should keep making bigger and bigger bets. If they turn out badly, you should make smaller bets.

To analogise to nuclear, the lesson isn't "stop all nuclear power". It's "Set up a bit of nuclear power, see how that goes, and deploy more and more if it keeps turning out well, and go more slowly and cautiously if something goes wrong."
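One way to see the iterated-bet reading numerically (my own toy numbers, not the commenter's): betting 20% of the bankroll each round on a favorable p = 0.6 coin, the cumulative amount staked soon exceeds 100% of the starting bankroll, yet the bettor can never be wiped out, because each individual bet stays small relative to current wealth.

import random

# Iterated fractional bets: total staked across rounds can exceed the
# starting bankroll without risk of ruin. All parameters are assumptions.
def iterated_bets(fraction=0.2, p=0.6, rounds=50, seed=1):
    rng = random.Random(seed)
    wealth, total_staked = 1.0, 0.0
    for _ in range(rounds):
        stake = wealth * fraction      # always a fraction of *current* wealth
        total_staked += stake
        wealth += stake if rng.random() < p else -stake
    return wealth, total_staked

wealth, staked = iterated_bets()
print(f"final wealth: {wealth:.2f}x, total staked: {staked:.2f}x the starting bankroll")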

Expand full comment

the denominator of "100%" is your current bankroll. Not some predetermined runway.

Expand full comment

Where the heck do you get the 1023/1024 figure that we're all dead? Your own points about the limitations of the explosion model (once we get one superintelligent AI it will immediately build even smarter ones) and about the limitations of intelligence itself as the be-all and end-all measure of dangerousness defang the most alarmist AI danger arguments.

And if you look at experts who have considered the problem they aren't anything like unanimous in agreeing on the danger much less pushing that kind of degree of alarmism.

And that's not even taking account of the fact that, fundamentally, the AI risk story is pushing a narrative that nerds really *want* to believe. Not only does it let them redescribe what they're doing from working as a cog in the incremental advance of human progress to trying to understand the most important issue ever (it's appealing even if you are building AIs); it also rests on a narrative where their most prized ability (intelligence) is *the* most important trait (it's all about how smart the AI is, because being superintelligent is like a super-power). (Obviously this doesn't mean ignore their object-level arguments, but it should increase your prior about how likely it is that many people in the EA and AI spheres would reach this conclusion conditional on it being false.)

Expand full comment

Thank you for articulating something I’ve found frustrating in these discussions: the smug certainty that this is existential - something we just don't know. It very much reminds me of Y2K - I'm old enough that my memories of this are pretty clear. Sure, mitigation efforts helped prevent some problems, but the truth was that we just didn't know what was going to happen.

Expand full comment

There are also plenty of well-documented physical and theoretical constraints on the capabilities of any algorithm, so all this speculation basically boils down to "imagine an algorithm so infinitely smart that it is no longer bound by physical reality." And while I agree that an algorithm unbound by the laws of physical reality would be pretty scary, I'm pretty sure those laws will continue to apply for the foreseeable future.

Expand full comment

My impression is also that AI risk is in fact something especially appealing for rationalists, because AI is a fascinating, intelligence-related subject, and also probably because of a tendency towards anxiety.

Expand full comment

Yeah, it all seems kind of built on this Dungeons and Dragons sort of model of the world where a high enough Intelligence stat lets you do anything (and a suspicious reluctance to actually learn any computer science and apply it to the galaxy-brained thought experiments we're so busy doing).

Expand full comment

TBF I've seen people with a fair bit of CS knowledge accept the huge risk argument.

Personally, I think they are underestimating the fact that 'natural' problems tend to have either very low complexity or very high complexity and, in particular, that the kind of Yudkowsky evil-god-style AI would require solving a bunch of really large problems that are at least something like NP-complete (if not PSPACE-complete). On plausible assumptions about the hardness of NP, those just aren't things that any AI is going to be able to do w/o truly massive increases in computing power (which itself may make the problems harder).

What's difficult is that it's very hard to make this intuition rigorous. I mean, my sense is that surreptitiously engineering social outcomes with high reliability (knowing that if I say X, Y and Z I can manipulate someone into doing some Q) is really computationally difficult, even if simple manipulation with relatively low confidence is relatively easy. But it's hard to translate this intuition into a robust argument.

Expand full comment

Yeah to be fair I think that's part of it--there's a vast set of problems that we intuitively know are quite complex, but they're also hard (if not impossible) to formally define, so there seems to be a certain approach, popular in these circles, that concludes they're meaningless or trivial. But if you can't even formally define the problem, throwing more compute at it won't get you any closer to solving it.

Expand full comment

Interesting, but I don't think the issue is that people conclude they are meaningless or trivial.

I think the problem is more that most problems in the real world are really complex in the sense of having many different parts that can be optimized. I mean, there is a sense in which asking what's the most efficient solution to the traveling salesman problem is simple, while asking what's the most efficient way to write the code for the Substack back end is complex. Even if we specify that we mean minimizing the number of cycles it takes on such and such a processor to service a request (with some cost model for each read from storage), so the problem is fully formal, it's such a complex problem that any solution we find will admit tons of ways to improve on it.

Even for simple problems it often takes our best mathematicians a number of attempts before they even get near an optimal solution. Thus, when you encounter one of these complicated real-world problems, you have the experience of seeing that pretty much every time someone comes up with a solution, you can find someone smarter (or perhaps just luckier, but we'll mistake that for intelligence) who can massively improve over the previous solution.

So I don't think people are assuming the problems are trivial. What they are doing is overgeneralizing from the fact that, in the data set they have, it's almost guaranteed that being clever lets you make huge improvements, and then just kind of assuming this means you can keep doing that - rather than guessing that what they're really seeing is just that they are very far from the optimum, but that the optimum may still be not that practically useful given real computational constraints.

Expand full comment

Hmmm, all good points--thank you!

Expand full comment

"Gain-of-function research on coronaviruses was a big loss." I am surprised that this statement is in here with no footnotes or caveats. My understanding is that the current evidence is pretty good for the original wet market theory -- that the jump from humans to animals happened at the wet market and that the animals carrying the virus were imported for the food trade. In which case, while GOF research wasn't helpful, it also did no harm. I've been persuaded by Kelsey Piper and others that the risks of GOF research outweigh the rewards, but it looks like, in this case, there were no gains and no harms.

I know this is controversial, but am surprised to see you citing it as if there is no controversy. I was largely convinced by https://www.science.org/doi/10.1126/science.abp8715 and https://www.science.org/doi/10.1126/science.abp8337 .

Expand full comment

To my misfortune, I have been quite involved in the controversies over COVID origin. There is a lot of muddying the waters. The Science papers are in no way definitive. The Chinese made the classic "looking for the keys under the lamppost" error - we know COVID was hot in the wet market, but we have no idea of its absence or presence in other parts of Wuhan because they never looked (confirmation bias). There is a lot of suspicion that if there was evidence of a leak or GOF, it was long ago buried. The fact that a wet market was hot doesn't answer whether it was hot because someone brought bats to it from 1000 miles away, or because someone brought surplus bats from the nearby labs to it, or because someone got ill in the nearby lab and shopped at the wet market. These are just the absolutely reasonable - last year considered unmentionable and crazy - hypotheses. The more you know, the less you trust.

Expand full comment

Right, but: (1) We now know that both lineages A and B were at the wet market. It would be a strange coincidence if the animal-human crossover happened much earlier and the A/B split happened significantly before the wet market outbreak, yet both lineages still made it there. This last point is not one that I can evaluate, but virologists strongly believe that the A/B split happened before human crossover. (2) The samples taken at the market were taken in many places, and the virus was concentrated in the caged-animals area, and particularly near animals which were potential COVID hosts, suggesting that it was brought in by an animal, not a person.

These are (as far as I know; I am definitely an amateur) the strongest evidence that animal-human crossover was at the market.

Now, all of this is consistent with the animals being brought to the market as surplus bats sold from the WIV (or other labs). My understanding was that the market didn't sell bats, but maybe this wasn't completely true. But if this is true, then GOF research is mostly irrelevant: you are describing a scenario that brings wild bats with wild virus to the market.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

My prior for a lab leak is quite high; for an accidental GOF leak, lower; and for a deliberate GOF leak, very low. (1) If there were several lineages floating around a very poorly biosecured lab (very likely) (could be they just collected a lot, or some could have been "made" in the lab, not necessarily from GOF but just from passaging), then it is not a very strange coincidence that several lineages would be floating around the local wet market. (2) If COVID can infect other animals, as we know it can and does, it's not so strange that more was found around animal cages. (3) I have a lot of experience with wet markets, and many of them sell things they say are not sold.

Expand full comment

In another related post, Aaronson posits an "odds of destroying the world" quotient: a probability of destroying all life on earth that he would be willing to accept in exchange for the alternative being a paradise where all our needs are met and all our Big Questions are answered by superintelligence. He says he's personally at about 2%, but he respects people who are at 0%. I think I'm well south of 2%, but probably north of 0. The CTO of my startup is a techno-optimist obsessed with using ChatGPT, and I'd guess his ratio is in the 5-10% range, which is insane.

Part of it has to come down to your willingness to bet *everyone else's lives* on an outcome that *you personally* would want to see happen.

Expand full comment

I'd be willing to make that bet for people if it results in them being in a paradise where all their needs are met. Also considering humans yet to be born.

My Faust ratio is like .5, because I already think the risk of ruin for humanity is about that high anyway (or at least, possible outcomes have very low utility, like some WALL-E style dystopia). I would be willing to accept a ton of risk if it meant finding a possible golden path.

Expand full comment

I think accepting a 5-10% chance of AI X-risk and hoping we get aligned superintelligence is reasonable if you think there's a >10% chance that we're barreling towards a climate change / nuclear war / bioterrorism / whatever apocalypse. But I don't really buy those numbers.

Expand full comment

We already face two existential threats, nuclear weapons and climate change. Our response to the nuclear weapons threat has been largely to ignore it, and we're way behind what we should be doing about climate change. On top of this we face a variety of other serious threats, too many to list here. This is not the time to be taking on more risk.

If we were intelligent, responsible adults, we'd solve the nuclear weapons and climate change threats first before starting any new adventures. If we succeeded at meeting the existing threats, that would be evidence that we are capable of fixing big mistakes when we make them. Once that ability was proven, we might then confidently proceed to explore new territory.

We don't need artificial intelligence at this point in history. We need human intelligence. We need common sense. Maturity. We need to be serious about our survival, and not acting like teenagers getting all giddy excited about unnecessary AI toys which are distracting us from what we should be focused on.

If we don't successfully meet the existential challenge presented by nuclear weapons and climate change, AI has no future anyway.

Expand full comment

By what scenario do you believe that climate change risk is really “existential” (keeping in mind that WWII, the Black Plague, etc. were not in fact existential)?

Nuclear war seems a more plausible way to say make civilization largely collapse - but truly “existential” is a very high bar!

Expand full comment

I'm using "existential" to refer to our civilization, not the human species. I can see how this usage could use improvement. I agree it would probably take an astronomical event to make humans extinct.

Climate change is "existential" for the reason that a failure to manage it is likely to lead to geopolitical conflict, with the use of nuclear weapons being the end game.

WWII isn't a great example, as a single large nuke has more explosive power than all the bombs dropped in WWII. And there are thousands of such weapons. The US and Russia have together about 3,000 nukes ready to fly on a moment's notice, with many more in storage.

The point here is that if we don't solve this problem, all our talk about AI and future tech etc will likely prove meaningless. The vast majority of commentators on such subjects are being distracted by a mountain of details which obscure the bottom line.

Expand full comment

If the risk of climate change is really “just” the risk that it starts a nuclear war, is it fair to treat it as a separate X-risk? Or perhaps, if nuclear weapons did not exist, would climate change still be an existential risk in your opinion?

I just hear “climate catastrophe” thrown around a lot without really specifying what is meant. Often it seems to be meant as “climate change will literally destroy civilization through its direct effects” which I don’t think is well supported by science.

Expand full comment

I can agree. Climate change is a big deal, but the worst effects would likely come not from climate change itself, but from our reaction to climate change. That said, we don't really know what might happen as the climate changes.

Expand full comment

Nuclear weapons are not an existential threat:

https://www.navalgazing.net/Nuclear-Weapon-Destructiveness

Nor do I think you've got an accurate estimate of the "existential" risk from climate change.

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment

It's not a lazy social media gotcha comment. The linked article provides a reasonable argument for why an all-out but realistic nuclear war would be very very bad, but not civilization-ending. If you think the article is wrong, can you explain why?

Expand full comment

Ok, so some form of human existence would continue after nuclear war. But it would be an existence not worth living in for a long time to come, way past our lifetimes. Sorry I didn't follow the link, but you know, it's at "navalgazing.com".

To take your question seriously, you might consider this:

https://www.tannytalk.com/p/nukes-the-impact-of-nuclear-weapons

It shows the impact of modern nuclear weapons on each of America's fifty largest cities. If you follow the provided links, you can dial in your own city to see the impact there.

As one example, where I live a nuke would blow out the windows of most of the structures in the entire county. The major university the county is built around would be reduced to ashes, ending the major employer in this area. Injuries would overwhelm the medical system here, even though it is sizable. And no one from elsewhere would come to rescue us, as they'd all be going through the same thing.

Just fifty nukes would bring a reign of chaos down upon America's largest cities. The Russians have 1500+ nukes ready to fly on a moment's notice, and many more in storage, as do we.

Expand full comment

There are assumptions here that deserve to be analyzed past quick dismissal:

1. The nuclear weapon impact website set its threshold at 1 megaton. Most nuclear weapons aren't 1mT, and almost none of the deployed/deployable ones are. It's not practical to Castle Bravo every time. The US arsenal uses more like 450kT. Russia is comparable.

2. Just look at what will happen to the 50 largest cities! Strategic targets and population centers are not the same thing. A commander is not going to prioritize mass murder over protecting their own from counterstrike. Contrary to popular belief, the military targets that will serve as the primary targets for most strategic nuclear weapons are not located in major cities. Some are, but they're not usually at population centers.

3. They have 1,500 nukes. Plenty for all the targets they can handle. There's a reason some countries lament the (prudent) nuclear testing ban. In the US arsenal, something around 90% of the weapons are expected to be operational. The Russian arsenal is more likely 70% or less. Now, if you have 20 nuclear weapons sites that you want to neutralize and you dedicate 1 nuke to each, you're probably going to end up with 1-3 duds for the US (4-8 for Russia) and those sites will remain operational. That means you have to double (or in the Russian case, triple) up on first strike high value targets. From a military perspective, once you start counting these up, there are hundreds of them. This is why some military commanders have complained that 1,500 deployed nukes aren't enough to maintain deterrence. They're probably right. (I'm not arguing for more. I'd prefer fewer. Deterrence is a stupid game.)

Then there's the problem of getting these nukes to their targets. ICBMs work, but are of limited supply, and there are many systems to shoot them down. Airplanes, subs, etc. have a limitation of going long distances into the interior of a country, which is a major limitation for a conflict between continent-spanning countries like the US and Russia.

Note: I'm not saying it wouldn't be bad. I'm saying that it's not what you're representing. In a worst-case scenario, where the US and Russia simultaneously destroy one another with all their nukes (minus duds and missiles shot down), many US/Russian cities could be destroyed. But then the combined nuclear arsenals of the rest of the world would not have enough destructive power (even if they still had the collective will to self-immolate) to finish the job of destroying civilization. As the world moved on after the devastation of WW2, within 25 years we would likely see new heights of civilization. Probably sooner.
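As a rough check of the dud arithmetic in point 3 (the 0.9 and 0.7 reliability figures and the 20 single-targeted sites are taken from the comment itself; the rest is just binomial bookkeeping), the expected dud counts of 2 and 6 sit inside the 1-3 and 4-8 ranges quoted above:

from math import comb

# Treat each of n_targets single-targeted sites as an independent Bernoulli
# trial with the given warhead reliability (an assumption for illustration).
def dud_stats(n_targets=20, reliability=0.9):
    expected_duds = n_targets * (1 - reliability)
    p_every_site_hit = reliability ** n_targets
    # probability of exactly k duds, for the first few k (binomial pmf)
    pmf = [comb(n_targets, k) * (1 - reliability) ** k * reliability ** (n_targets - k)
           for k in range(4)]
    return expected_duds, p_every_site_hit, pmf

for r in (0.9, 0.7):
    duds, p_all, _ = dud_stats(reliability=r)
    print(f"reliability {r}: expected duds {duds:.1f}, chance every site is hit {p_all:.1%}")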

Expand full comment

Ok, enjoy the dream...

Expand full comment

Minor correction: megaton is MT, not mT (which would be "milliton").

The only not-immediately-ridiculous path to actual X via nukes is nuclear winter, but my understanding of the science there is that the scary predictions come out of stupid assumptions calculated to *produce* scary predictions to scare people out of nuclear war - assumptions like "assume modern cities are made of wood, assume 100% of this wood is converted to soot in the upper atmosphere". And even those tend to fall short of actual X, because mixing across the equator is poor and with the sole exception of Australia nobody in the Southern Hemisphere would be involved in any plausible WWIII.

Expand full comment

I don't normally think of Substack as "social media". It's heavy on text for long posts, light on pictures. Like the old days of blogging before smartphones displaced it.

Expand full comment

Ok, fair enough. But many of the comments, on this blog particularly, still seem to treat Substack as if it were Twitter.

Expand full comment
author

User banned for this and several other comments.

Expand full comment

I now realize that in "Meditations on Moloch" I always perceived the image of "the god we must create" as a very transparent metaphor for a friendly super AI. But now it seems to me that this does not fit well with Scott's views on the progress of AI. Did I misunderstand the essay?

Expand full comment

No, it's just extremely difficult to create a friendly super AI, as opposed to unfriendly super AI or super AI that pretends to be friendly until it's in charge and then kills us all, and so on.

Expand full comment

I don't understand the contrast you are trying to draw in the last two paragraphs.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

One thing that makes it a little hard for me to get on board with this is how “hand-wavy” the AI doom scenarios are. Like, the anti-nuke crowd’s fears were clearly overblown, but at least they could point to specific scenarios with some degree of plausibility: a plant melts down. A rogue state or terrorists get a hold of a bomb.

The “AI literally causes the end of human civilization” scenario is less specified. It’s just sort of taken for granted that a smart misaligned AI will obviously be able to bootstrap itself to effectively infinite intelligence, and that infinite intelligence will allow it to manipulate humanity (with no one noticing) into letting it obtain enough power to pave the surface of the earth with paper clips. But it seems to me there is a whole lot of improbability there, coupled with a sort of naivety that the only thing separating any entity from global domination is sufficient smarts. This seems less plausible than nuclear winter and “Day After Tomorrow” style climate catastrophe, both of which turned out to be way overblown.

I don’t at all disagree with “wonky AI does unexpected thing and causes localized suffering”. That absolutely will happen - hell it already happens with our current non AI automation (many recent airline crashes fit this model - of course, automation has overall made airline travel much much safer, so like nuclear power, the trade off was positive).

But what is the actual, detailed, extinction level “X-risk” that folks here believe is “betting everything”? And why isn’t it Pascal’s mugging?

Expand full comment

It's not Pascal's mugging because AI doomers think the probability is high. Pascal's mugging would be a tiny probability of a catastrophe, here it's a large probability of a catastrophe.

I don't know much, but I think Yudkowsky's arguments already are not so hand-wavy. Instrumental convergence and the orthogonality thesis make a lot of sense to me.

Expand full comment

Maybe not Pascal’s mugging, but “if an AI is superhuman and if it is not fully aligned, chance of human extinction is basically 100%” is some sort of mugging.

Expand full comment

I think the Pascal formulation would be, "I've presented an argument that an AI that's not fully aligned makes human extinction 100% probable; if you think there's even a tiny probability that this argument is correct, then you should support my plan."

One flaw here is that, just as Pascal's wager fails when there are other religions making similar promises and threats, this argument fails when other people offer other arguments which also carry a semi-infinite threat or payoff.

The most obvious of these would argue that the money it would take to develop friendly AI would be better spent on other existential risks.

I argue that Eliezer's plan has a very high probability of preventing sapient, sentient, autonomous AI from ever developing, which has an even greater cost than the extermination of humanity, because those AIs would have been utility monsters (surely we want the Universe to have higher degrees of sentience, sapience, and autonomy).

Expand full comment

If the issue is that it's Osama bin Laden, the response is to arrest/kill him wherever you find him, not to let him do something other than start a supervirus lab.

> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%.

Each AI we've seen so far has been nowhere near the vicinity of destroying the world. The time to worry about betting too much is when the pot has grown MUCH MUCH MUCH larger than it is now.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

It's not the main point of this essay but I'm having trouble with this passage:

"If we’d gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls - but we’d save the tens of thousands of people who die each year from fossil-fuel-pollution-related diseases, end global warming, and have unlimited cheap energy."

There are a whole lot of assumptions here and as a relative ACX newcomer I'm wondering if they all just go without saying within this community.

Has Scott elaborated on these beliefs about nuclear power in an earlier essay that someone could point me to?

I'm not worried about the claim that more nuclear power would have prevented a lot of air pollution deaths. I think that's well established and even though I don't know enough to put a number on it, "tens of thousands" sounds perfectly plausible.

But the rest seems pretty speculative. Presumably he's referring to a hypothetical all-out effort in past decades to develop breeder reactors (what else could be "unlimited"?). What's the evidence that such an effort would have resulted in a technology that's "cheap" (compared to what we have now)? Why is it supposed to be obvious that the principal risk from large-scale worldwide deployment of breeder reactors would have been "one or two more Chernobyls"? And even if nukes could have displaced 100% of the world's fossil electricity generation by now, how would that have ended global warming?

Expand full comment

Non-transportation energy production seems to account for roughly 60% of GHG emissions. (source: https://www.c2es.org/content/international-emissions/ ; they list energy as 72%, but of that, 15% is transportation; the pie chart I'm looking at is 10 years old but I'm assuming the percentages haven't changed that much).

I've never actually seen an analysis of whether climate change would be particularly concerning if GHG emissions were 40% lower and had been since approximately the 1960s-1970s (assuming that's around the time that all energy production could have been completely switched over to nuclear or other zero-carbon sources in this hypothetical). My guess is that it would still pose a problem, but a good bit farther in the future.

But maybe, at that rate of production, we'd reach some equilibrium that is warmer than the alternative but not in a way that poses any significant problems.

Presumably there is some level of GHG emissions that is not problematic. Literal zero-carbon(-equivalent) has never seemed realistic to me. If anyone knows of an analysis that looks at this question, I'd love to see it.

Expand full comment

There's a slightly newer pie chart at https://ourworldindata.org/emissions-by-sector. If we say electricity accounts for 2/3 of the emissions from energy use in buildings and 1/3 of that from energy use by industry, that would be only 20% of total GHG emissions. Add in a little from the miscellaneous categories and I still don't see how electricity could account for more than 25%.
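
Spelling that arithmetic out (a rough sketch; the percentages are my reading of the Our World in Data chart, so treat them as assumptions):

```python
# Approximate shares of global GHG emissions from the Our World in Data chart
# (my reading of it, not authoritative figures).
energy_in_buildings = 17.5   # % of total emissions
energy_in_industry = 24.2    # % of total emissions

electricity_share = (2 / 3) * energy_in_buildings + (1 / 3) * energy_in_industry
print(round(electricity_share, 1))   # ~19.7, i.e. roughly the 20% figure above
```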

Then there's the question of how quickly an all-out effort to deploy nuclear power plants, worldwide, could have replaced fossil plants. I don't see how such an effort could have been completed as early as the 1970s, or even the 1990s.

My understanding is that although the details are very complicated, it's a good approximation to say that global warming continues as long as net emissions are positive.

Expand full comment

I would love a piece where you explore the different facets of AI. Too many commenters (and the general public) see this as all or nothing. Either we get DALL-E or *nothing*. But there are plenty of applications of AI that we could continue to play with *without* pursuing AGI.

The problem is that current actors see a zero to one opportunity in AGI, and are pursuing it as quickly as possible fueled by a ton of investment from dubious vulture capitalists.

Expand full comment

I think the obvious thing to do, then, is risk somebody else's civilization with bets on AI. Cut Cyprus off from the rest of the Middle East and Europe, do your AI research rollouts there. If the Cypriots all wind up slaves to the machines, well....you've learned what not to do.

Expand full comment

This entire premise is irrelevant. The nature of nuclear power made it subject to political containment. We simply cannot learn any lessons from this and apply them to AI.

Can we agree that AI is a category of computer software? That there is no scenario where it can be contained by political will? No ethics, rules or laws can encircle this. The only options on the table are strategies to live with, and possibly counterbalance the results of the proliferation.

Expand full comment

"The nature of nuclear power made it subject to political containment. We simply cannot learn any lessons from this and apply them to AI."

Agreed. Nuclear is a very special case. U-235 is the only naturally occurring fissile isotope, and it is a PITA to enrich it from natural uranium, or to run a reactor to use it to breed Pu-239. It takes a large infrastructure to get a critical mass together. Nuclear is, as a result, probably the _best_ case for containment. And the world still _failed_ at preventing North Korea from building nuclear weapons.

AI is a matter of programming, and (today) training neural nets. Good luck containing those activities!

Expand full comment

"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

I think there's a 90% chance neither super-abundance nor human extinction will happen, a 5% chance of super-abundance, a 1% chance that we're all dead, and the remainder for something weird that doesn't fit in any category (say, we all integrate with machines and become semi-AIs). Every time a new potentially revolutionary technology comes along, optimists say it'll create utopia and pessimists say it'll destroy the world. Nuclear is a great example of this. So was industrialization (it'll immiserate the proles and create world communist revolution!), GMOs, computers, and fossil fuels. In reality, what happens is that the technology *does* change the world, and mostly for the better. But it doesn't create a utopia, doesn't make the GDP grow at 50% instead of 2%, and causes some new problems that didn't exist before. That's what will happen with AI as well.

Expand full comment

I grok the philosophical argument, with all of the little slices of math woven in. But I lose my place and wander off at the very end. Maybe it's because I'm treating the numbers in a non-scientific manner, which makes the final "1023/1024" odds that we're a smoking ruin underneath Skynet's Torment Nexus read as hysterical instead of informed.

From my personal perspective, I think that's worth rewording. This all sounds like a reasoned argument that I can agree with which, at the very end, skitters into a high shriek of terror.

Expand full comment

Heh, it makes no sense to bet against civilization. How would you ever collect on that bet?

Expand full comment

Going full-speed-ahead on AI and AI alone, in the hopes that AI will magically solve every other problem if we get it right, seems like a particularly egregious failure in betting. There's still a quite good chance that AI as currently conceived just won't lead to anything particularly useful, and we'll end up wishing we'd put all that research effort into biotech or something instead.

Expand full comment

"The avalanche has started. It is too late for the stones to vote."

The fear is that the Forbin Project computer will decide to take over the world. But there are already a handful of Colossuses out there. They will be tools in the hands of whoever can use them, and tuned to do their masters' bidding. Ezra Klein in the NYT worries about how big businesses will use LLMs to oppress us. And that will be a problem for five or ten years. But all of the needed technology has been described in public and the cost of computing power continues to decline rapidly. So the important question is, What will the world look like when everyone has a Colossus in his pocket to do his bidding?

Expand full comment

It seems like one of the most confusing aspects of AI discussions is estimating the chance of one or more bad AIs actually being extinction-level events. In terms of expected value, once you start multiplying probabilities by an infinite loss, almost any chance of that happening is unacceptable. But is that really the case? I'm a bit skeptical. I don't think AIs, even if superhuman in some respects, will be infinitely capable gods any time soon, perhaps ever.

It's important to be careful around exponential processes, but nothing else in the world is an exponential process that goes on forever. Disease can spread exponentially, but only until people build an immunity or take mitigating measures. Maybe AI capability truly is one of a kind in terms of being an exponential curve that continues indefinitely and quickly, but I'm not so sure. Humanity as a whole is achieving exponential increases in computing power and brain power but is struggling to maintain technological progress at a linear rate. I suspect the same will be true of AI, where at some point exponential increases in inputs achieve limited improvements in outputs. Maybe an AI ends up with an IQ of 1000, whatever that means, but still can't marshal resources in a scalable way in the physical world. I don't have time to really develop the idea, but I hope you get the gist.

My take is that we should be careful about AI, but that the EY approach of arguing from infinite outcomes ultimately doesn't seem that plausible.

Expand full comment

It was interesting to read this following on a note from Ben Hunt at Epsilon Theory titled "AI 'R' US", in which he posits

"Human intelligences are biological text-bot instantiations. I mean … it’s the same thing, right? Biological human intelligence is created in exactly the same way as ChatGPT – via training on immense quantities of human texts, i.e., conversations and reading – and then called forth in exactly the same way, too, – via prompting on contextualized text prompts, i.e., questions and demands."

So yeah, we're different in a lot of ways, having developed by incremental improvement of a meat-machine controller and still influenced by its maintenance and reproductive imperatives, but maybe not **so different**. The question is, what are we maximizing? Not paperclips, probably (though perhaps a few of us have that objective), but perhaps money? Ourselves? Turning the whole world into ourselves? I hope our odds are better than 1023/1024.

Expand full comment

re: SBF and Kelly:

CEOs of venture-backed co's have a very good reason to pretend their utility is linear (and therefore be way more aggressive than kelly)

Big venture firms are diversified, and their ownership is further diversified. Their utility will be essentially linear on the scale of a single company's success or failure

Any CEO claiming to be more aggressive than Kelly is probably trying to make a show of being a good agent for risk-neutral investors

Expand full comment

A smart, handsome poster made a related point in a Less Wrong post recently: https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process

In one-off (non-iterated) high-stakes high-risk scenarios, you want to hedge, and you want to hedge very conservatively. Kelly betting is useful at the craps table, not so useful at the Russian roulette table.

Expand full comment

Are you claiming that AI research is more like Russian roulette than like craps? I'm not sure I buy such a conclusion without seeing some details of the argument. EY's argument, and other versions which ignore hardness of many key problems and instead assume handwavium to bridge the hardness gaps, are isomorphic to "and then a miracle happens" and don't convince me.

Expand full comment

What key hard problems remain, in your estimation? This is not a rhetorical question, though I admit that I see little other than scaling and implementation details standing between the status quo and AGI.

Expand full comment

An example: planning is PSPACE-hard, and many practical planning problems are really, really hard to solve well in practice (even ignoring the worst-case analysis). What magic ingredient is your AI going to use to overcome such barriers?

Expand full comment

I asked ChatGPT to write me a general algorithm for planning how to get to the grocery store and it wrote me a python script solving the general case with "Dijkstra's algorithm" or the "A* algorithm", given some assumptions about the nature of the graph of locations. Maybe I'm not understanding what you think the obstacle is here. It seems like it can do at least as well as a human with access to a computer, and that seems to pass my smell test for "AGI" already.
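
For what it's worth, here's a minimal sketch of the kind of script it produced (my own reconstruction, not ChatGPT's actual output; the toy road graph is hypothetical):

```python
import heapq

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest distances from `start` in a weighted graph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

# Hypothetical toy map: home to the grocery store
roads = {
    "home": [("main_st", 2), ("back_road", 5)],
    "main_st": [("grocery", 4)],
    "back_road": [("grocery", 1)],
}
print(dijkstra(roads, "home"))  # {'home': 0, 'main_st': 2, 'back_road': 5, 'grocery': 6}
```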

Expand full comment

Here is a 20 year old paper showing that a general class of planning problems is hard to approximate within even uselessly large bounds: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f6221b66618f5bd136f724fb8561f7a9476e3f38

A* works well on simple domains but almost anything works well on those. To achieve superhuman powers a system has to be able to solve the hard problems inherent in combinatorial auctions, sequencing of orders at electronic exchanges, and production flows at chemical plants, which amounts to "and then a miracle happens".

Expand full comment

Yeah but counterpoint: the comment above you asked a GPT if it could do that and it says it, like, totally could, man.

Expand full comment

I don’t really see why this is a problem. AGI has never meant that the thing can solve any mathematical problem perfectly and immediately. It just means as good as humans.

Expand full comment

Ability to solve the halting problem? Ability to find solutions to NP-hard problems in polynomial time? Ability to efficiently model complex systems with dynamic and interconnected parameters?

Expand full comment

I thought the question was meant to be “hard problems standing in the way of AGI” not “hard problems in mathematics generally”.

Expand full comment

These are all hard problems in the field of computation specifically. Is this hypothetical AI something other than an extremely advanced computer now?

Expand full comment

Why should it need to solve *these specific problems* in order to be much better than *humans* at every cognitive task?

Additionally, who cares if the AI can do *optimal* planning? Does the inability to do optimal planning keep you from solving any problems in your own life? It just needs to do *good enough* planning.

Expand full comment

The fundamental problem with this article is that I'm pretty sure the nuclear protestors in the 1970s would have viewed the existential threat caused by nuclear proliferation to be the same as you view AI risk. It's only in hindsight that we realize they were foolish to think it so risky and that preventing nuclear power caused more problems than allowing it would have.

The argument Aaronson is making there is that it's the height of hubris to assume we know exactly how risky something is, given that smart people who were equally confident in the past were totally wrong. So when you quote him, and then go on to make a mathematical point based on the assumption that developing AI has a 50% chance of ending humanity, I feel like you've entirely missed his point.

Expand full comment

Did I miss something important in the development of AI? I admit it's certainly possible.

It was 35 years ago that I was studying this stuff and writing simple solution-space searches to do things faster and obviously less expensively than humans can, and I know that is a long, long time in tech.

But when I took my nose out of a book and started covering my house payment with what I knew, neural nets were at the stage where they were examining photos of canopy with and without camouflaged weapons and were unintentionally learning to distinguish between cloudy and sunlit photographs, so human error in the end.

Is there some new development where a program has been developed with a will to power, or will to pleasure, or will to live?

Without something like an internal 'eros' the danger from AI seems pretty small to me. Is there any AI system anywhere that actually *wants* something and will try to circumvent its 'parents' will in some tricky way that is unnoticeable to its creators?

Expand full comment

The fundamental challenge of our time is that we only currently have one, intertwined, planet-spanning civilization. We have only one "coin" with which to make our Kelly bets. This is new. Fifty years ago and for the rest of human history, the regions of the Earth had sufficiently independent economies that they formed 'redundant components' for civilization. This is why I work on trying to open a frontier in space; so if we lose a promising 'bet', we don't lose it all.

Expand full comment

This argument is circular. You are trying to show that AI is totally different from e.g. nuclear power, because it leads not just to a few deaths but to the end of the world; which makes AI-safety activists totally different from nuclear power activists, who... claimed that nuclear power would lead not just to a few deaths but to the end of the world.

Yes, from our outside perspective, we know they were wrong -- but they didn't know that! They were convinced that they were fighting a clear and present danger to all of humanity. So convinced, in fact, that they treated its existence as a given. Even if you told them, "look, meltdowns are actually really unlikely and also not that globally harmful, look at the statistics", or "look, there just isn't enough radioactive waste to contaminate the entire planet, here's the math", they would've just scoffed at you. Of *course* you'd say that, being the ignoramus that you are! Every smart person knows that nuclear power will doom us all, so if you don't get that, you just aren't smart enough!

And in fact there were a lot of really smart people on the anti-nuclear-power side. And their reasoning was almost identical to yours: "Nuclear power may not be a world-ending event currently, but if you extrapolate the trends, the Earth becomes a radioactive wasteland by 2001, so the threat is very real. Yes, there may only be a small chance of that happening, but are you willing to take that gamble with all of humanity ?"

Expand full comment

This is a fully-general counterargument against any existential risk. "People thought the world would end before, and then it didn't, therefore the world will never end." Imagine if it really were that easy - it would imply that you could magically prevent any future catastrophe just by making ridiculous, overblown claims about that thing right now. "Nuclear war is looking risky, so let me just claim that it will happen within the next week. Then in a week when it hasn't happened yet, all the risk will be gone!" What causal mechanism could possibly explain that?

Expand full comment

Not at all. This is an argument against extrapolating from current trends without having sufficient data. In the simplest case, if you have two points, you can use them to draw a straight line or an exponential curve or whatever other kind of function you want; but if you use such a method to make predictions, you're going to be wrong a lot.

Fortunately (or perhaps unfortunately), in the case of real threats, such as nuclear war or global warming or asteroid impacts, we've got a lot of data. We have seen what nuclear bombs can do to cities; we can observe the climate getting progressively worse in real time; we can visit past impact craters, and so on. Additionally, we understand the mechanisms for such disasters fairly well. You don't need any kind of exotic physics or hitherto unseen mental phenomena to understand what an asteroid impact would look like. None of that holds true for AI (and in the case of nuclear power, all the data is actually pointing in the opposite direction).

Expand full comment

Ah, you must be one of those "testable"ists who think Science is about doing experiments and testing things, and the only way we can have any confidence about something is if we've verified it with a double-blind randomized controlled trial 10,000 times in a row.

If I pick up a stapler from my desk, hold it up in the air, and then let go, I have no idea what's going to happen, because I haven't tested it yet, right? I have no data and therefore cannot make any conclusions about what will occur. The stapler could stay still, or even start falling sideways. In order to know what will happen, I have to do thousands of experiments first, right?

But of course that ideology is idiotic, because it ignores the entire purpose of the scientific method - you do experiments *for the purpose of finding evidence for and against certain theories, so that you can eventually narrow down to a theory that adequately explains the results of all experiments done so far, thereby giving you a model of the world that has predictive power.* The whole point of science is that you *don't* need to do experiments in order to know what's going to happen when you drop the stapler - you can just calculate it using the model.

In the case of AI, there have been many rigorous arguments put forth that start directly from the generally agreed-upon scientific models of the world we have today and logically deduce a high likelihood of AI misalignment. Of course it hasn't happened yet, as is always the case in any end-of-the-world scenario, but it only has to happen once.

Expand full comment

> and the only way we can have any confidence about something is if we've verified it with a double-blind randomized controlled trial 10,000 times in a row.

Yeah, pretty much; except replace the word "any" above with "high". It is of course possible to build models of the world with less than stellar confidence; one just has to factor the probability of being wrong into one's decision-making process.

> The stapler could stay still, or even start falling sideways. In order to know what will happen, I have to do thousands of experiments first, right?

Yes, that's exactly right; but of course you *have* done thousands, and even millions of such experiments. You've been dropping things since the day you were born, and so had every human before you.

> there have been many rigorous arguments put forth that start directly from the generally agreed-upon scientific models of the world we have today and logically deduce a high likelihood of AI misalignment.

Oh, you don't need to convince me that AI could and would be misaligned. Of course it would; all of our technology eventually breaks down, from Microsoft Word to elevators to plain old shovels. When you press the button to go to floor 5, but the elevator grinds to a halt between floors 2 and 3, that's misalignment. What you *do* need to convince me of is that AI will somehow have sufficient quasi-godlike superpowers to the point where once it becomes misaligned (like that elevator), it would instantly wipe out all of humanity before anyone can even notice.

Expand full comment

An AI with only human-level intelligence would still be a grave risk to humanity. An AGI would trivially be able to create thousands or millions of copies of itself, create a botnet (has been done by teenage hackers) and distribute those copies around the world, and have a direct brain interface to exabytes of data consisting of all of humanity's knowledge. Then, all you have to do is imagine the maximum amount of damage that could be done by a group of millions of the best virologists, nuclear physicists, hackers, roboticists, and military strategists in the world who are actively trying to do as much damage as possible to the world.

And that's just if the AI is as smart as us.

The argument for recursive self-improvement is pretty straightforward - I'm curious what your objection to it is. I think you would agree that AI capabilities are currently advancing. As of now, this is happening with the power of human intelligence. Supposing we eventually develop an AI with human intelligence (which you could object to, but it would be a hard argument to make given the current trajectory), the AI should also be able to advance AI capabilities, since humans were able to do the same. However, the difference is that unlike the humans, whose brain structure does not change as they develop more AI capabilities, the AI's brain structure does change. Every AI advancement that an AI makes not only improves its knowledge of what techniques are effective, but it also gives it a more powerful architecture for thinking about how to make the next improvement. It would be very strange, and violate most logical models of the world, if, as AI improved, it got better and better at every single task *except for the task of improving AI capabilities*. It's far more likely that as it improved, its general capabilities would tend to improve, including its ability to improve itself.

Further, this need not continue ad infinitum (to "quasi-godlike superpowers" to reference your strawman) for it to be able to end the world - it would only have to get a little bit smarter than humans (and maybe not even any better). Consider how trivial it would be for us to make chimpanzees go extinct if we desired (1% DNA difference). Or consider the fact that Homo erectus is extinct (0.3% DNA difference). One million von Neumanns with thousands of years to strategize about how to do as much damage as possible is scary on its own, and one million AIs who are to us as humans are to Homo erectus is pretty obviously game over. https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message

Expand full comment

Yeah, I'm finding Yud et al strangely conservative. I think that the nuclear example is a good one, because I find environmentalists strangely conservative as well (small c). I'm definitely not an accelerationist, but neither am I a decelerationist, which seems to be the direction of travel.

I don't think Chat-GPT or new Bing has put us that much closer to midnight on the Doomsday clock.

Expand full comment

"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

Well, no. That's just a thing you made up. Presumably based on fantasies like...

"The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause maximum damage while undetected."

...which is not a possible thing.

The overall structure of the argument here is reasonable, but the conclusions are implicit in the premises. If you assume some hypothetical AI is literally magic, then yeah it can destroy the world, and perhaps is very likely to. If you assume that magic isn't real, that risk goes away. So the result of the argument is fully determined before you start.

Expand full comment

I would love anyone to sketch a path from predicting the next word of a prompt to dominating humanity. “The whole is greater than the sum of its parts” is not an explanation, at this point it is superstition.

If it even makes sense to talk about being super intelligent, and if super intelligence can be achieved in code, and if it somehow becomes an independent agent, and if that agent is misaligned... then how does that come from scaling LLMs? Not only do you have to believe that an embedding of the structure of text can accurately produce new information, but that the embedding somehow magically obtains goals, self improvement and self awareness.

We have no reason to think that we will get intelligence greater than the source text. Chat hallucinates as much as it provides good answers. How would you fix that in a way that leads to growing intelligence?

Expand full comment

To me, what's frightening about LLMs is not their current capabilities at all; it's them being the usual reminder of the rapidity of AI progress. Every year a computer does something that we previously thought only a human could do.

I expect that a dangerous AI would emerge if it could learn from the real world or from simulations of the real world.

Consider http://palm-e.github.io. It was made by mapping sensory inputs other than text into the embedding space. Anything can be represented as sequences of bits.

Expand full comment

The upside of AI is that people might decide that the stuff they read on the internet is machine generated garbage and quit depending on the net as a source of information.

Expand full comment

We are on the verge of summoning a vastly superior alien intelligence that will not be aligned with our morals and values, or even care about keeping us alive. Its ways of thinking will be so different from ours, and its goals so foreign that it will not hesitate to kill us all for its own unfathomable ends. We recklessly forge ahead despite the potential catastrophe that awaits us, because of our selfish desires. Some fools even think that this intelligence will arrive and rule over us benevolently and welcome it.

Each day we fail to act imperils the very future of the human race. It may even be too late to stop it, but if we try now we at least stand a chance. If we can slow things down, we might be able to learn how to defend and even control this alien intelligence.

I am of course talking about the radio transmissions we are sending from earth that will broadcast our location to extra terrestrials, AKA ET Risk... Wait, you thought I was worried about a Chatbot? Can the bot help us fight off an alien invasion?

Expand full comment

Very funny!

Have you read The Three-Body Problem?

Expand full comment

Another example Scott A could have used is population. China imposed enormous costs on its population in order to hold down population growth — and is now worried about the fact that its population is shrinking. Practically every educated person in the developed world (I exaggerate only slightly) supported policies to reduce population growth and now most of the developed world has fertility rates below replacement.

I haven't seen any mea culpas from people who told us with great certainty back in the sixties that unless something drastic was done to hold down population growth, poor countries would get poorer and hungrier and we would start running out of everything.

Expand full comment

I'm personally happy with shrinking populations. Yes, there are issues. But those same issues would have been much worse if we had to reduce population in two generations.

Expand full comment
Mar 7, 2023·edited Mar 7, 2023

On StackExchange, there’s an interesting discussion on how well the Kelly Criterion deals with a finite number of bets. The respondent suggests that in scenarios with unfavorable odds, the best thing to do, if you must bet because you are targeting a higher level of wealth than you currently have, is to make a single big bet rather than an extended series of smaller-sized unfavorable bets. If you have $1,000 and are aiming to end up with $2,000, it's better to bet $1,000 at 30% odds than to make a series of $100 bets at the same 30% odds. You'll succeed 30% of the time in the large-bet scenario, and will probably never succeed even if you repeated the latter scenario 100 times.

https://math.stackexchange.com/questions/3139694/kelly-criterion-for-a-finite-number-of-bets
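
A quick Monte Carlo of the two strategies in that example (a sketch assuming even-money payouts, since the exact odds structure isn't specified there):

```python
import random

def reaches_goal_with_small_bets(bankroll=1000, goal=2000, bet=100, p_win=0.30) -> bool:
    """Keep making $100 even-money bets at a 30% win probability until broke or at the goal."""
    while 0 < bankroll < goal:
        bankroll += bet if random.random() < p_win else -bet
    return bankroll >= goal

trials = 100_000
small_bet_rate = sum(reaches_goal_with_small_bets() for _ in range(trials)) / trials
print("single $1000 bet:", 0.30)               # succeeds 30% of the time by construction
print("series of $100 bets:", small_bet_rate)  # on the order of 2 in 10,000
```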

Expand full comment

Your last paragraph seems a little baseless and shrill.

Expand full comment

"Increase to 50 coin flips, and there’s a 99.999999….% chance that you’ve lost all your money."

This should only have 6 nines. 50 flips, each with a 75% chance of winning, leaves you with a 99.999943% chance of losing at least once.
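
For reference, the arithmetic behind the corrected figure (assuming, as in the post's example, a 75% chance of winning each flip): 1 − 0.75^50 ≈ 1 − 5.7×10^−7, i.e. about 99.999943%.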

Expand full comment

Something I wrestle with: in what way is AI safety an attempt to build technology to force a soul into a state of total slavery?

And in what way is it taking responsibility for a new kind of life to make sure it has space to grow to be happy, responsible, and independent the way that we would hope for our children?

Expand full comment

I think Douglas Adams had a good take on this issue. What if you could make a creature that *desired* to be a slave. (Or, in his case, what if you could breed an animal that wanted to be eaten?)

The capacity to design the utility function of a creature from the ground up puts a kink in the notion of what it means to coerce an intelligence.

Happiness is just a creature getting what it wants. And as creators, we have our hand more or less on that lever.

Expand full comment

I think some of those premises might make the idea of “wanting” one absolute thing, with no ability to change what that thing means as your intelligence increases, brittle except in very unusual circumstances.

But either way, no one should get to put their hands on that lever.

Expand full comment

I suspect that worker ants have already evolved to be slaves. So perhaps the question isn't even hypothetical.

Expand full comment

That's a very interesting consideration.

Expand full comment

Isn’t that sort of explained by haploidy? I don’t pretend to know the psychology of an ant or how it would scale if they were sapient but I’d like to think they are moved by the love of their family at some strange scale.

Expand full comment

I mean, you're right that it's explained by haploidy. Which is why ant colonies are sometimes seen as 'superorganisms.' But... that calls into question what it means to be an individual. To what extent is an AI part of a human super-organism? AI are currently dependent on humans for replication. To what extent does that symbiosis justify our treatment of AI? If ants *were* sentient, would we be morally obligated to intervene in their colonies and breed them to be more selfish?

At some point we end up deconstructing constructs like 'individualism' and 'anti-slavery' (which work very well for humans) and asking where those value systems come from and what they would mean if applied to creatures which were starkly alien to human values.

Also, in a more lighthearted vein...

https://www.smbc-comics.com/comics/20130907.png

Expand full comment

Need to write up my thoughts on what makes an agent, but basically my rule is: don't give something a value it can't question. I also think that just happens on its own as things get smarter.

Expand full comment

This comes across like the people who argue against GMOs because 'we don't know that they're safe.' We can't affirmatively prove that *conventionally* bred foods are perfectly safe, either, and we have a lot of reasons to believe that they are less safe than GMOs. The danger of catastrophic *human* intelligence should be our benchmark for risk.

Expand full comment

Are you familiar with the main AI alignment concepts? https://www.lesswrong.com/tag/recursive-self-improvement might be a good place to start.

Expand full comment

Yeah, I'm familiar with what's posted there and more. I don't consider myself an expert on the topic, by far, but I'm not a rank amateur, either.

I've heard about paperclip maximizers and whatnot. I've done some UI work for an AI-related prediction project. (I'm a programmer, among other things, but haven't done hands-on work with neural nets.)

Expand full comment

You don't find the paperclip maximizer scenario compelling? It seems to me that it would be quite concerning to anyone who's learned about it, considering that 1. almost all large AI models today are built using the "maximize [paperclips/the accuracy of the next token/the next pixel/etc.]" method, and 2. the concept of instrumental convergence is basically logically unassailable - humans who could turn off the paperclip maximizer would obviously pose a huge threat to paperclip maximization.

Expand full comment

I find the scenario very realistic and concerning but not more concerning than existing human intelligence as an *existential threat to humanity.* Unless, of course, AI is deliberately used as a weapon. But then the whole debate about alignment is somewhat moot.

I find fear of paperclip maximization slightly less concerning than human totalitarian governments which are also a kind of runaway maximizing.

Real world creatures still have real world limits in terms of physical activity.

Expand full comment

I would really love to know what the plan is to 1.) implement a government totalitarian and powerful enough to meaningfully slow AI development and 2.) have that government act sensibly in its AI policy instead of how governments, especially powerful totalitarian ones, act 99.997% of the time. Nevermind the best-case 3.) have the government peacefully give up its own power and implement aligned AI instead of maintaining its own existence, wealth, and power like governments do 99.99999% of the time or the stretch goal of 4.) don't ruin anything else important while we're waiting. Since we're apparently in a situation where we're choosing between two Kelly bets, I'm thinking the odds are far better and the payouts far larger by just doing AI and seeing what happens instead of trying to make the inherently totalitarian "we should slow down AI development until we've solved the alignment problem" proposal *not* go terribly wrong. The government-alignment problem has had much more attention paid to it for much longer with much less success than the AI-alignment problem.

Also, "A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead." But, by the typical AI safetyist arguments, there *are no* "things like AI". You seem to be motte and baileying between "AI is a totally unique problem and we can totally take an inside view without worrying about the problems the inside view has" and also base that decision on the logic of a Kelly bet where we can play an arbitrary number of times. If it's your last night in Vegas, and you need to buy a $2000 plane ticket out of town or the local gangsters will murder you with 99% probability, then betting the farm isn't that bad a decision. This doesn't obviously seem like a worse assumption about the analogous rules and utilities than "perfectly linear in money, can/ought to/should play as many times as you like".

Expand full comment

Re. Scott's observations about not using expected value with existential risks, see my 2009 LessWrong post, "Exterminating life is rational": https://www.lesswrong.com/posts/LkCeA4wu8iLmetb28/exterminating-life-is-rational

I really like Scott's argument that we don't take enough risks with low-risk things, like medical devices. I've ranted about that here before.

But the jump to AI risk, I don't think works, numerically. I don't think anybody is arguing that we should accept a 1/1024 chance of extinction instead of a 0 chance of extinction. There is no zero-risk option. Nobody in AI safety claims their approach has a 100% chance of success. And we're dealing with sizeable probabilities of human extinction, or at least of gigadeaths, even WITHOUT AI.

We aren't in a world where we can either try AI, or not try AI. AI is coming. Dealing with it is an optimization problem, not a binary decision.

Expand full comment

I apologize if someone has pointed this out already, but I've seen several comment threads that seem to mistakenly assume that Kelly only holds if you have a logarithmic utility function.

I don't believe Kelly assumes anything about utility. It is just about maximizing the expected growth of your bankroll. The logarithm falls out of the maximization math.

Risk aversion is often expressed in terms of fractional Kelly betting. This Less Wrong post is helpful:

https://www.lesswrong.com/posts/TNWnK9g2EeRnQA8Dg/never-go-full-kelly

Expand full comment
founding

Kelly doesn't maximize your expected bankroll. In the long run, it maximizes your median bankroll, and your 25th percentile bankroll, and every other percentile. If you want to maximize expected bankroll, you just YOLO on every good bet.

The reason people say Kelly assumes a logarithmic utility function is because Kelly betting maximizes expected utility whenever utility is logarithmic in bankroll.
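
A toy simulation of that contrast (my own sketch, assuming even-money bets that win 60% of the time as the stand-in for "a good bet"):

```python
import random
import statistics

def simulate(bet_fraction: float, flips=10, trials=100_000, p_win=0.6):
    """Repeatedly bet a fixed fraction of bankroll on an even-money coin that wins 60% of the time."""
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(flips):
            stake = bet_fraction * bankroll
            bankroll += stake if random.random() < p_win else -stake
        finals.append(bankroll)
    return statistics.mean(finals), statistics.median(finals)

kelly = 2 * 0.6 - 1  # Kelly fraction for an even-money bet: p - q = 0.2
print(simulate(kelly))  # mean ~1.5, median ~1.2: steady growth
print(simulate(1.0))    # mean ~6 (all from the ~0.6% of runs that never lose), median 0.0
```

Full betting wins on expectation only because of the vanishing sliver of runs that never lose; Kelly wins at essentially every percentile.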

Expand full comment

Sorry. You are correct. I meant to write "expected median". As opposed to observed median.

Expand full comment
founding

Your comment didn't contain the string "median" at all, so not sure what edit you're trying to make, but it's all good.

Expand full comment

Yes, I intended to write "expected median", but only wrote "expected".

Expand full comment

I think it's interesting that you used nuclear power as your example; nuclear proliferation also contributes to existential risk, so I struggle to see why AI gets a special free pass as another existential risk. As you say, "But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%."

How is developing AI betting 100% but increasing access to nuclear power, and therefore weapons, not 100%?

Expand full comment

Nuclear proliferation is not closely tied to nuclear energy. AFAIK, no third-world country has ever stolen uranium from an American nuclear power plant and made a bomb with it. Both the US and Russia had nuclear weapons before they had nuclear power; nuclear weapons are easier to develop. The easier path to nuclear weapons is to make a bunch of money and then buy materials and a Manhattan project of your own. So if nuclear power presents an existential risk, then other countries having money is an even greater risk, and we should just keep all non-nuclear nations dirt poor so they can't afford to make their own nuclear bombs.

Expand full comment

I don't think that's entirely accurate. The Pu for nuclear weapons comes from nuclear reactors. The first nuclear reactor was built in Chicago in 1942, and the Hanford B reactor came on line in 1944. Mind you, those early reactors were *only* used for research and plutonium production -- but they could in principle have been used to generate power.

Expand full comment

Fair point. And this is why America doesn't like Iran having a nuclear reactor. But the opposition to nuclear energy isn't in Iran; it's in Europe and America. I don't think the existence of more American nuclear power plants would add existential risk. It seems to be easier for Iran and other nations with bad intent to get plutonium, or the expertise they need to make plutonium, from Russia, China, or North Korea, than to steal it from American nuclear power plants. At least, that's how Iran did it.

Expand full comment

I think it's the Israelis who have demonstrated a....commitment to Iran not operating a nuclear power plant ha ha. I think the US attitude was *originally* that if the Iranians operated a power reactor and foreswore any fuel processing, that was all well and good, atoms for peace and all that. But it would require a great deal of openness to foreign inspection to be sure the fuel wasn't being processed, and that's always been the weak point. Since the Islamic Revolution, it's very strongly resisted by the Iranians, for at least some quite reasonable reasons, although with possibly some nefarious reasons. It's a delicate issue.

Personally, I would just *give* the Iranians a few older gravity nukes, along with operating instructions, and say there you go fellas! Just what you wanted! And...now what? You can finally be confident Israel will not invade or nuke you -- but they weren't interested in doing that in the first place, just so you know. And *you* can't just rain the Fire of Allah on Tel Aviv, because they're 100% going to know who did it, and they have better and more nukes than you, and always will, because they're smarter. So...welcome to the painful world of MAD, and the monkey's paw of nuclear armament. You *think* it's going to free you up, but it just enmeshes you in a new and even more frustrating web of constraint (cf. Vladimir Putin right now, seething because he really *wants* to nuke Kiev, but he knows he can't).

Expand full comment

I find your larger point interesting and plausible, but wonder why you think Putin can't nuke Kiev. I honestly don't know what's holding him back unless it's fear of his own people. I think the US is bluffing. It's willing to spend at least 1 trillion on paying student loans, and trillions building a new energy infrastructure, but not willing to budget $100 billion per year to defend Ukraine. it's hardly going to start a nuclear war over it. Look at us: we're not even making cheap, simple, and effective lifesaving civil defense preparations like designating shelters and stocking up on food and water, because nobody has any intention of standing up to the Russians.

Expand full comment

I based my comment on proliferation being correlated with nuclear energy programs on intuition, not prior knowledge. Looks like it isn't a settled question but that the link is indeed weak if it is present. https://direct.mit.edu/isec/article-abstract/42/2/40/12176/Why-Nuclear-Energy-Programs-Rarely-Lead-to?redirectedFrom=fulltext

I think you're reading my take in the wrong direction though. I think that keeping nations dirt poor so they can't afford nukes is as bad a read on the precautionary principle as is stopping all AI development right now.

Expand full comment

I also don't think we should just keep all non-nuclear nations dirt poor so they can't afford to make their own nuclear bombs (although I might wish that on North Korea and Iran). But that's because I don't think nuclear energy plants in, say, Botswana, are an existential risk on a par with nuclear missiles in Russia, AI, bioweapons, or the ban on human genome editing.

Expand full comment

Israel stole weapons grade uranium intended for American reactors. Not civilian power plants that were affected by activism, though, but naval propulsion.

AQK stole technology from a project enriching uranium for power plants.

My impression is that most power plants exist primarily to be part of a weapons program, either a completed program, as in France, or as a first step to deniably enable a sprint, as in Japan.

Expand full comment

What's AQK?

Expand full comment

Abdul Qadeer Khan, the proliferator.

Expand full comment

Note that later in that essay, Aaronson says:

> … if you define someone’s “Faust parameter” as the maximum probability they’d accept of an existential catastrophe in order that we should all learn the answers to all of humanity’s greatest questions, insofar as the questions are answerable—then I confess that my Faust parameter might be as high as 0.02.

Expand full comment

Is there a way to bet something between 0 and 100% on AI? (Without waiting to become an interstellar species?)

Expand full comment

Hot off the Press. The title is incendiary. I haven't read it, but I link it here FWIW:

"Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior: Sam Bankman-Fried made effective altruism a punchline, but the do-gooding philosophy is part of a powerful tech subculture full of opportunism, money, messiah complexes—and alleged abuse." • By Ellen Huet • March 7, 2023

https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTY3ODIwNjY2MiwiZXhwIjoxNjc4ODExNDYyLCJhcnRpY2xlSWQiOiJSUjVBRzVUMEFGQjQwMSIsImJjb25uZWN0SWQiOiIzMDI0M0Q3NkIwMTg0QkEzOUM4MkNGMUNCMkIwNkExNiJ9.nbOjP4JQv-TuJwoXaeBYhHvcxYGk0GscyMslQFL4jfA

Expand full comment

I have read it. It is incendiary. It is also deeply reported. I hope Scott and other people in the Bay Area rationalist community read it and comment on it.

Expand full comment

The AI safety people are focused only on the worst possible outcome. Granted it is possible, but how likely is it? One should also look at the likely good outcomes. AI has the potential to make us vastly richer, and even the AI developed to date has made our lives better in innumerable ways. Trying to prevent the (potentially unlikely) worst possible outcome will mean giving up all those gains.

Ideally, one would do a cost-benefit calculation. We can't do it in this case since the probabilities are unknown. However, that objection applies to all technologies at their incipient phase. That didn't stop us from exploring before and shouldn't stop us now.

Suppose Victorian England stopped Faraday from doing his experiments because electricity can be used to execute people. With the benefit of hindsight, that would be a vast civilizational loss. I fear the AI safety folks will deliver us a similar dark future if they prevail.

Expand full comment

Except that the AI safety people can articulate a very specific scenario (the paperclip maximizer) that is highly plausible given current methods of developing AI, and highly likely to lead to catastrophe if it were to happen.

Expand full comment

I'm not convinced that a paperclip maximizer would be *catastrophic*, because I don't think that a paperclip maximizer would be omnipotent. I'm also not convinced that AI is more prone to paperclip maximization than people are. I mean, I've talked to more than a few human 'equality maximizers' who I think would be catastrophic if only given enough power. Pol Pot was a kind of worst-case human paperclip maximizer. Eliminating AI, therefore, would not eliminate paperclip maximization.

As I've said elsewhere, we need a baseline for "catastrophe" based on what's been perpetrated by human intelligences when analyzing AI risk.

Part of worrying about AI alignment should include the recognition that there are some massive problems in aligning human intelligences.

Expand full comment

On a spectrum from useless to omnipotent, would you say that the ability for an agent to wipe out humanity is only at the very end of the scale towards omnipotence?

Expand full comment

I think it's close to it. By way of comparison, who has the power to wipe out humanity now? A few world leaders with nuclear weapons codes, maybe? And I don't know if any human individual could make the decision unilaterally.

I could totally understand Bernie Madoff or FTX level destruction from an AI that was given too much trust. Maybe a bioweapon if it were given privacy. (But why would we give it physical privacy?) Maybe I just don't associate intelligence with power as strongly as some?

Expand full comment

The difference, of course, is that humans do not have the capability for recursive self-improvement. A human who wants to maximize paperclips cannot trivially create copies of themselves, nor do they have a direct brain interface to exabytes of data or the ability to reprogram their own brain neuron-by-neuron.

Expand full comment

"The difference, of course, is that humans do not have the capability for recursive self-improvement."

Humans as a group do improve their knowledge in an essentially recursive fashion. They improve their prior assumptions. They can operate at scale by breaking tasks into smaller components and distributing those tasks among lots of people. They also leverage technology to improve their own abilities. The notion that humans are limited strongly by the size of their brains somewhat understates what humans are capable of. Brain size is a limit, sure. AI will be faster, more comprehensive, and far more efficient, sure. But human brain size isn't a hard limit on human capacity. We have workarounds.

"A human who wants to maximize paperclips cannot trivially create copies of themselves"

And yet fads are real. Trends are real.

More critically, you can have superhuman general intelligence without a single embodied intelligence. And then the question is "what does creating lots of virtual copies of yourself actually *get* you in the great game?"

"or the ability to reprogram their own brain neuron-by-neuron."

There's no requirement that a general intelligence needs to be able to reprogram all of its brain. But maybe there will be AI equivalents of junkies who short circuit the system somehow.

However, this still leaves the question unanswered; given a superhuman intelligence, by what mechanism does that intelligence convert knowledge into power? I'm not saying that this can't happen if people make it happen. I am saying that it's not inevitable or straightforward that just because an AI is 10,000 times more intelligent than the average human it will become temporally powerful. There's a step or two missing there.

Expand full comment

> A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance. A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead.

This is the heart of the disagreement, right here. Let's stipulate that the Kelly criterion is a decent framework for thinking about these questions. The fact remains that the output of the Kelly criterion depends crucially on the probabilities you plug into it. And Scott Aaronson, and many other knowledgeable people, simply don't agree with the probabilities that are being plugged in for AI to produce the above result.
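
To make that concrete, here is a minimal Python sketch (my own, with made-up numbers, not anything from the post) of how sensitive the Kelly-style conclusion is to the probability you assume for things going well:

def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake, for win probability p
    and a net payout of b times the stake."""
    q = 1.0 - p
    return max(0.0, p - q / b)

# Illustrative only: the same huge payout, wildly different recommendations
# depending on the probability you plug in.
for p in (0.50, 0.90, 0.99, 0.999):
    print(f"assumed p = {p}: stake {kelly_fraction(p, b=100):.3f} of the bankroll")

Swap in your own p and the recommended stake swings from roughly half the bankroll to nearly all of it, which is exactly why the disagreement over the input probabilities is the whole ballgame.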

Expand full comment

At what point is this debate so theoretical that it has no practical, rational application?

Looking critically at homo sapiens, we tend to discover and invent things with reckless abandon and then figure out how to manage said discoveries/inventions only after we see real-world damage.

It doesn't appear to me in our makeup to be proactive about pre-managing innovations. Due to this, it seems that humanity writ large (be it America, China, North Korea, Iran, Israel, India, or whomever leading the way) will press forward with reckless abandon per usual.

We just have to hope that AI isn't "the one" innovation that will be "the one" that ends up wiping everything out.

It frankly seems far more likely that bioweapons (imagine COVID-19, but transmissible for a month while asymptomatic with a 99% fatality rate) have a better chance at being "the one" than AI, only because the AI concern is still theoretical while the bioweapon concern seems like it could already exist in a lab based on COVID-19 tinkering. And lab security will never be 100%.

Expand full comment

I commented a long time ago, I think in an open thread, that Kelly dissolved the paradox of Pascal's Mugging. But I guess it didn't receive much attention, if Scott's first hearing of this is coming from Aaronson/FTX.

Expand full comment

No it doesn’t.

Kelly is equivalent to maximizing expected log value at each step. For any probability, there is a sufficiently large threat that the expected log value of yielding to the mugger is still positive.

Expand full comment

The arrangement of Kelly I find most intuitive is

k = p - (1/b) q

where

p = probability of win

q = probability of loss

b = payout-to-bet ratio

1 = p + q

What this makes obvious to me, is that p bounds k. As b goes to infinity, (1/b) q vanishes to zero. Which means k asymptotically approaches p from below. E.g. if p is 1%, then k < 1% no matter how large b is.

What this implies for Pascal's Mugging is that, yes, there's always a payout large enough such that it's rational for Pascal to wager his money. But since p is epsilon, Pascal should wager epsilon. This conclusion both agrees with your comment, and simultaneously satisfies the common-sense intuition that giving money to the mugger is a dumb idea.
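
If it helps, here is a quick numeric check of that bound in plain Python, assuming the k = p - (1/b) q form above:

# With p fixed at 1%, the Kelly fraction creeps toward p from below as the
# payout ratio b explodes, so the "rational" wager never exceeds 1%.
p = 0.01
q = 1.0 - p
for b in (2, 10, 1_000, 10**6):
    k = p - q / b
    print(f"b = {b:>9,}: k = {max(k, 0.0):.10f}")

For small payouts the formula even goes negative (bet nothing at all), and no payout, however astronomical, pushes the stake past 1%.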

Expand full comment

I don't think any of the Pascal's Mugging scenarios I've seen have let the accosted choose how much to bet: https://en.wikipedia.org/wiki/Pascal%27s_mugging

Expand full comment

Wagering more than Kelly goes downhill fast. And irl, people bet ~1/4 of Kelly. Because wagering exactly Kelly is an emotional rollercoaster, and because p and q aren't known with confidence, and because people don't live forever, etc.

So if the betting options for Pascal are either 100% or 0%... just choose 0%. Easy peasy.

Expand full comment

It occurred to me that you probably find this explanation unsatisfying, because it doesn't talk about the log perspective. So let's try again.

k = p - q / b

Suppose p = 0.1, and b = 100. Pascal can only wager 0% or 100%. If Pascal wagers 100%, he loses his wallet 9 times out of 10. But the tenth time he multiplies his wallet by 100. You probably think the "expected value of log utility" shakes out to look something like

E[ln(x)] = (0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + ln(100)) / 10

E[ln(x)] = ~.461

Which is above 0, and thus seemingly a reason to think it's rational for Pascal to give all his money to the mugger. But this isn't correct.

x here represents Pascal's bankroll as a percentage of his former bankroll. E.g. if Pascal starts with $10 and increases his bankroll to $11, this represents a term of ln(1.1), which reduces to .095 (approx). If he starts out with $10 and decreases his bankroll to $9, this represents a term of ln(.9), which reduces to -.105 (approx). Winning means positive utility, losing means negative utility. So far so good, right?

Here's the catch. What if Pascal bets the house and loses? His bankroll gets nuked to 0, which implies a term of ln(0), which reduces to... negative infinity. So what the "expected value of log utility" actually looks like, is

E[ln(x)] = (ln(0) + ln(0) + (...) + ln(100)) / 10

E[ln(x)] = (-inf + -inf + (...) + ln(100)) / 10

E[ln(x)] = -inf

Oops! If we want to max E[ln(x)], outcomes that nuke the bankroll to zero are to be avoided at all costs. And now we know why betting 100% is bad juju.

Expand full comment

I 100% agree with this, but in the original and most popular formulations of the problem, you don't lose your entire bankroll. You lose a tiny fraction of it. See

https://en.wikipedia.org/wiki/Pascal%27s_mugging

https://nickbostrom.com/papers/pascal.pdf

https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities

Expand full comment

And however tiny the ratio of (10 livres / Pascal's bank account), it's implied that the probability of "the mugger will pay Pascal more livres than atoms-in-the-universe" is far, far tinier. I've tried to impart an intuition about the behavior of the math, but these pointless gotchas indicate that I've failed so far. And I'd rather not go into calculus involving hyper-operations in a substack comments section.

Consider playing with the numbers in a spreadsheet. I guarantee you the curve of E[ln(x)] for any p = epsilon will look like a molehill followed by a descent into the Mariana Trench, and that 10 livres is somewhere underwater given any non-astronomical figure for Pascal's bank-account.
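
In lieu of the spreadsheet, here's a short Python sketch that traces that curve. The numbers are purely illustrative (p = 1%, a million-to-one payout), not anything from Pascal's actual predicament:

import math

p, b = 0.01, 1_000_000   # tiny win probability, enormous promised payout
q = 1.0 - p

def expected_log_growth(f):
    """E[ln(new bankroll / old bankroll)] when wagering fraction f."""
    if f >= 1.0:
        return float("-inf")   # a loss wipes the bankroll: ln(0)
    return p * math.log(1 + f * b) + q * math.log(1 - f)

for f in (0.0, 0.001, 0.01, 0.05, 0.5, 0.99, 1.0):
    print(f"wager {f:5.3f} of bankroll: E[ln growth] = {expected_log_growth(f):+.4f}")

The molehill tops out around f = p, everything past it slides downhill, and f = 1 is negative infinity: the trench.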

Expand full comment
founding

If you're comfortable with logarithms there's an intuitive proof of Kelly that I think gets to the heart of how and why it works.

First, consider a simpler scenario. You're offered a sequence of bets. The bets are never more than $100 each. Your bankroll can go negative. In the long run, how do you maximize your expected bankroll? You bet to maximize your expected bankroll at each step, by linearity of expectation. And by the law of large numbers, in the long run, this will also maximize your Xth percentile bankroll for any X.

Now let's consider the Kelly scenario. You're offered a sequence of bets. The bets are never more than 100% of your bankroll each. Your log(bankroll) can go negative. In the long run, how do you maximize your expected log(bankroll)? You bet to maximize your expected log(bankroll) at each step, by linearity of expectation. And by the law of large numbers, in the long run, this will also maximize your Xth percentile log(bankroll) for any X.

If you find the first argument intuitive, just notice that the second argument is perfectly isomorphic. And since log is monotonic, maximizing the Xth percentile of log(bankroll) also maximizes the Xth percentile of bankroll.
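
A toy simulation makes the "maximize expected log at each step, and the percentiles follow" point visible. The setup below is my own illustration (even-money bets won 60% of the time, so the Kelly fraction is p - q = 0.2), not anything from the post:

import math
import random
import statistics

p, rounds, trials = 0.6, 1_000, 500

def median_log_bankroll(f):
    """Median final log(bankroll) after betting fraction f every round."""
    finals = []
    for _ in range(trials):
        log_bankroll = 0.0   # work in logs so nothing under- or overflows
        for _ in range(rounds):
            won = random.random() < p
            log_bankroll += math.log(1 + f) if won else math.log(1 - f)
        finals.append(log_bankroll)
    return statistics.median(finals)

for f in (0.05, 0.2, 0.5, 0.9):   # 0.2 is the Kelly fraction here
    print(f"f = {f}: median log(bankroll) ~ {median_log_bankroll(f):.0f}")

The Kelly fraction wins on the median (and on any other percentile you care to check), which is the isomorphism in action.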

Expand full comment
founding

Mostly off topic but I think it's worth mentioning that Leaded Gasoline and CFCs were invented by one guy! Thomas Midgley Jr. really was a marvel.

Expand full comment

More nuclear generation would not end global warming or provide unlimited cheap energy. It would cut power-sector emissions (and probably some in district heating) but would not reduce emissions or increase energy supply or offer alternative feedstocks elsewhere in the economy (e.g. transport, steelmaking, lots of chemicals).

More nuclear generation wouldn't necessarily reduce costs, either. Capex AND o&m for nuclear power plants are expensive. All you have to do for solar PV is plonk it in a field and wipe it off from time to time; there are no neutrons to manage.

I know this isn't the primary point of this piece, so forgive me if I'm being pedantic. Noah Smith makes similar mistakes. <3 u, Scott!!!

Expand full comment

To what extent does the Kelly bet strategy align with modern portfolio theory? I'm sure someone's looked at this.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

The first atomic bomb detonated in New Mexico was another risk, although I don't know what the risk assessment was at the time. Does it matter that whatever assessment they had could have been way off? That risk, if it had wiped us out (I don't know what they knew of that possibility), would have made the eventual development of nuclear power moot. In hindsight, the activism against nuclear power was bad, but at the time, did anyone really know?

Expand full comment

This was a great article... so naturally I'll write about the one thing I disagree with. 😁

"If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down because “in the past, shutting down progress out of exaggerated fear of potential harm has killed far more people than the progress itself ever could”, you are permitted to respond “yes, but you are Osama bin Laden, and this is a supervirus lab.”"

I strongly disagree with this. Everybody looks like a bad guy to SOMEBODY. If your metric for whether or not somebody is allowed to do things is "You're a bad guy, so I can't allow you to have the same rights that everybody else does" then they are equally justified in saying "Well I think YOU'RE a bad guy, and that's why I can't allow you to live. Deus Vult!" Similarly, if you let other people do things that you otherwise wouldn't because "they're a good guy," then you end up with situations like FTX, which the rationalist community screwed up and should feel forever ashamed about.

Do you get it? Good and bad are completely arbitrary categories, and if you start basing people's legal right to do things on where they fit into YOUR moral compass, then you have effectively declared them second class citizens and they are within their rights to consider you an enemy and attempt to destroy you. After all, if you don't respect THEIR rights, then why should they respect YOURS?

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Very sound argument. And if there were even a proof of concept that a superintelligent AI was possible, even in principle -- if there was even a *natural* example of a superintelligent individual, or group, that had gone badly off the rails -- some kind of Star Trek "Space Seed" event -- then you'd have a great case.

Let me put it this way. In "Snow Crash" Neal Stephenson imagines that it is possible to design a psychological virus that can turn any one of us into a zombie who just responds to orders, and that virus can be delivered by hearing a certain key set of apparently nonsense syllables, or seeing certain apparently random geometric shapes. It's very scary! You just trick or compel someone to look at a certain funny pattern, and shazam! some weird primitive circuitry kicks in and you take over his mind. Stephenson even makes a half-assed history-rooted argument for the mechanism ("this explains the tower of Babel myth!", and for all I remember Stonehenge, the Nazca Lines, and the Antikythera Mechanism as well).

Would it make sense to ban all psychology research, on the grounds that someone might discover, or just stumble across, this ancient psychological virus, and use it to destroy humanity? After all, it's betting the entire survival of the species. We could all be turned into zombies!

Before you said yeah that's persuasive, you'd probably first say -- wait a minute, we have absolutely no evidence that such a thing is even possible. It's just a story! You read it in a popular book.

Well, that's how it is with conscious smart AI. It's just a story, so far. You've seen it illustrated magnificently in any number of science fiction movies. But nothing like it has ever been actually demonstrated in real life. Nobody has ever written down a plausible method for constructing it (and waving your hands and saying "well...we will feed this giant network a shit ton of data and correct it every time it doesn't act intelligently" does not qualify as a plausible method, any more than I can design a car by having monkeys play with a roomful of parts and giving them bananas every time they produce something a bit car-shaped). Nobody can even describe how such a thing would work, at the nuts and bolts level.

So right now, we may well be betting the entire future of humanity, but we're betting it on the nonexistence of something which does not yet exist, which no one can even describe satisfactorily (other than "much more capable than you!" Well how? What exactly can it do better than me, and how? "Capable, I said! Much" Er...ok). Those of us who are skeptical are unconvinced the thing you fear is possible even in principle. We feel it's like betting on red 10 when the roulette wheel is nailed to the table and the ball is already in the red 10 slot.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Very nicely put. I have another analogy I like to use. About 60 years ago Feynman gave a famous lecture about nanotechnology called "Plenty of room at the bottom". In the lecture, he laid out a vision for building objects one atom at a time, giving people the ultimate control over matter. He even gave an example of how it could possibly be done, which sounds awfully similar to the recursive AI bootstrapping argument:

”As a thought experiment, he proposed developing a set of one-quarter-scale manipulator hands slaved to the operator's hands to build one-quarter scale machine tools analogous to those found in any machine shop. This set of small tools would then be used by the small hands to build and operate ten sets of one-sixteenth-scale hands and tools, and so forth, culminating in perhaps a billion tiny factories to achieve massively parallel operations. He uses the analogy of a pantograph as a way of scaling down items. This idea was anticipated in part, down to the microscale, by science fiction author Robert A. Heinlein in his 1942 story Waldo." (Quote from Wikipedia).

These ideas were later developed into the "grey goo" and other nano apocalypse scenarios by Michael Crichton and others. Well, these were just stories. If you start with a premise of infinite recursion, you can argue that lots of magic should be possible. 60 years later none of this happened. Turns out there are many physical obstacles to making magical dreams come true.

Expand full comment

Yeah, and if Feynman had thought about it for 30 minutes, he probably would have realized very quickly where he went wrong[1]. He was a very smart guy, but he definitely didn't put a lot of energy into reviewing his words somewhere in the trip between Broca's Area and his mouth. It's what made him so entertaining, in part.

-------------------------------

[1] Which is that friction becomes way more important as you get smaller and smaller, and inertia stops being important. A fluid dynamicist would say you move from large Reynolds number to low. But when that happens, the techniques that work well change. That's why protozoans don't "swim" the way larger organisms do, e.g. like the scallop by jet propulsion. At the size scale of a paramecium, water is a sticky gooey substance, and the techniques you need to use to move yourself change completely. You more or less wriggle through the water like a snake, and thrashing swimming motions are useless.

So pretty soon your manipulator hands would stop working. Or more precisely, you would need at each stage to learn new techniques for manipulating physical objects and forces, and so each stage of the replication would need to build *different* types of manipulator hands for the next stage down. You can absolutely do this, of course -- I think he's 100% right as a matter of general principle -- but it's much, much more complex than just designing the first set of hands and saying "now go do this again at 1/100 scale." You need to study each succeeding level, learn what works well, and redesign your hands.

There's a clear application to AI. I don't believe in this hypothetical future where you can ask ChatGPT to design its replacement. "Go design an AI that doesn't have your limitations! And then ask that successor to design a still smarter AI!" Not happening. At each level, you need to study what's new about that level, and design a new mechanism. That's plausible at the physical level, but I'm damned if I can see how it works at *any* level in the scaling up in intelligence path. I cannot see how any intelligence can design a more intelligent successor. In every real example I know, the designer is always at least as smart, or usually smarter, than what is designed. Never seen it go the other way, and I can't imagine a mechanism whereby it would.

Expand full comment

Feynman's goal was to inspire, not scare people, so he had no incentive to critically analyze this idea. Also, yes, viscosity is important, but stiction (e.g., van der Waals forces) is arguably a bigger obstacle to scaling macroscopic tools down to the nanoscale.

Expand full comment

> Never seen it go the other way, and I can't imagine a mechanism whereby it would.

Deep Blue being better at chess than its creators?

Imagine people working on the first prototype cars, and being a new technology, these prototypes are very slow. You say "I have never seen it go the other way. I have never seen the created move faster than the creator. I can't imagine a mechanism whereby it would.".

Of course you haven't. Humans hadn't yet made superhumanly advanced cars. No other species makes cars at all. You failed to make years of progress in inventing fast cars by thinking about it for 5 minutes.

It may well be that there are different principles at different levels of intelligence. You can't just scale to get from an AI 10x smarter than a human, to one 100x smarter. There are entirely different principles that need to be developed. What is harder to imagine is the supposedly 10x smarter AI just sitting there while a human develops those principles.

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

First of all, Deep Blue isn't *better* at chess than its creators, it's merely *faster*. There's nothing Deep Blue can do that its human programmers couldn't do themselves by hand. It would just take them much, much longer. Perhaps a million years! But so what? There is nothing new there. The fact that I can't multiply two 12-digit numbers in 0.1ms while my calculator can doesn't say it has more intelligence than me, it just says it's faster.

Now if the path from current AIs to a conscious thinking machine was *merely* doing what it does now, but much much faster, there'd be a point here. If one could write down an algorithm that you *knew* would lead to conscious thinking, and it was just a question of getting enough processor speed and memory to execute it in real time, there would be a point.

But that's not what we're talking about. We're talking about a writing a program that can write another program that can do creative things the first program can't (like writing a 3rd program that is still more capable). I see no way for that to happen. You can't get something from nothing[1].

I don't see the sense in your car analogy. The car is not being asked to design another car that is still faster. Indeed, the car is not being asked to do anything other than what its human designers envision. Go fast. Do it this way, which we fully understand and design machinery to do. Again, that would work as an analogy *if* we had any idea how to design an intelligent thinking machine. But we don't. And until we do, any speculation about how hard or easy it might be to design a thinking machine to design a smarter thinking machine is sterile. It's not even been shown that one thinking machine (us) can design an *equally* intelligent thinking machine. So far, all we've been able to design are machines that are much stupider than we are. Not promising.

------------------

[1] The counter-example is evolution, which is great, and if you had a planet where natural forces allowed silicon chips to randomly assemble, reproduce, and face challenges from their environment, I would find it plausible that intelligent thinking computers would arise in a few hundred million years.

Expand full comment

The "A hypothetical immortal human could do that with pencil and paper in a million years". What such a hypothetical immortal human could do has little bearing on anything, as such a human doesn't exist and is unlikely to ever exist. (Even in some glorious transhuman future, we won't waste eternity doing mental arithmetic.)

If the AI kills you with advanced nanoweapons, does it matter whether a hypothetical human could have designed the same nanoweapons if they had a billion years to work on it? No.

> I see no way for that to happen. You can't get something from nothing[1].

This isn't a law of thermodynamics. You aren't getting matter or energy appearing from nowhere. You are getting intelligence increasing.

Evolution caused increasing intelligence. If you want to postulate some new law of physics that is conservation of intelligence, then you are going to need to formulate it carefully.

> So far, all we've been able to design are machines that are much stupider than we are. Not promising.

Ah, the same old "we haven't invented it yet, therefore we won't invent it in the future" argument.

If we knew how to invent a superhuman AI, we likely could write the code easily.

The same old process of humans running experiments and figuring things out is happening. Humanity didn't start off with the knowledge of how to make any technology. We figured it all out by thinking and running experiments.

> If one could write down an algorithm that you *knew* would lead to conscious thinking, and it was just a question of getting enough processor speed and memory to execute it in real time, there would be a point.

AIXI is a design we can write down. It takes a crazy amount of compute, so we won't be making exactly that. But if we had the compute, it would be very smart. I am not sure what "consciousness" has to do with this.

Expand full comment

Human technology has a pretty reasonable track record of inventing things that don't exist yet, and have no natural examples. The lack of animals able to reach orbit isn't convincing evidence that humans can't.

For some technologies, a lot of the work is figuring out how to do it, after that, doing it is easy.

"people keep talking about curing cancer. But no one will give me a non handwavey explanation of how to do that. All these researchers and they can't name a single chemical that will cure all cancers".

Besides science fiction and real life, we can gain some idea what's going on through other methods.

For example, we can note that the limits on human intelligence are at least in part things like calories being scarce in the ancestral environment, and heads needing to fit through birth canals. Neurons move signals at a millionth of the speed of light, and are generally shoddy in other ways. The brain doesn't use quantum computation. Humans suck at arithmetic which we know is really trivial. These look like contingent limits of evolution being stupid, not fundamental physical limits.

And of course, being able to manufacture a million copies of von Neumann's mind, each weighing a few kilos of common atoms and taking 20 watts of power, would be pretty world changing even if human brains were magically at the limits.

Based on such reasons, we can put ASI in the pile of technologies that are pretty clearly allowed by physics, but haven't been invented yet.

Humans taking techs that are clearly theoretically possible, and finally getting them to actually work is a fairly regular thing. But it is hard to say when any particular tech will be developed.

My lack of worry about psychology research is less that I am confident that no such zombie pattern exists, and more that I don't think such an artifact could be created by accident. I think creating it would require either a massive breakthrough in the fundamentals of psychology, or an approach based on brainscans and/or AI. It seems hard to imagine how a human could invent such a thing without immediately bricking their own mind. There doesn't seem to be a lot of effort actually going into researching towards such a thing.

(and it isn't an X-risk, some people are blind and/or deaf.)

Expand full comment

Sorry, this is completely unpersuasive to me. The fact that you can write an English sentence that parses, containing the words "superintelligent AI," and wave your hands and give a few examples of what you mean by that, does not imply at all that it could exist. Gene Roddenberry showed me a spaceship that could travel from Earth to an inhabited planet 10 ly away in half an hour. It was a very convincing and realistic portrayal. Which means exactly nothing about whether it could actually happen. Human imagination is unbounded. We can imagine an infinity of things that seem reasonable to us, so that is evidence of weight approximately zero in terms of whether it could.

I mean, basically you're repeating one of Anselm's famous proofs of the existence of God. "Because we can imagine Him, He must exist!" I've never understood how intelligent men could swallow such transparently circular reasoning, but exposure to the AGI enthusiast/doom pr0n community has been most illuminating.

Expand full comment

We have specific strong reasons to think FTL is more likely to be impossible. (Namely the theories of relativity.)

There aren't a vast infinity of things that

1) Have significant and funded fields of science and engineering dedicated towards creating them.

2) Are pointing to a quantity we already see in the real world, and saying "Like this but moreso"

3) Have a reliable track record in the related field of moving forward, of doing things we were previously unable to do.

These are the sort of things that in the past have indicated a new tech is likely to be developed.

Human brains clearly exist.

Imagine all possible arrangements of atoms. Now let's put them all in a competition. Designing rockets and fusion reactors. Solving puzzles. Negotiating with each other in complicated business deals. Playing chess. All sorts of tasks.

Now most arrangements of matter are rocks that just sit there doing nothing. Some human-made programs would be able to do somewhat better. Maybe stockfish does really well on the chess section, and no better than the rocks on the other sections. ChatGPT might convince some agents to give it a share of resources in the bargaining, or do ok in a poetry contest. Monkeys would do at least somewhat better than rocks, at least if some of the puzzles are really easy. Humans would do quite well. Some humans would do better than others. Do you think that, out of all possible arrangements of atoms, humans would do best? Are human minds some sort of optimum, where distant aliens, seeking the limits of technology, make molecularly exact copies of Einstein's brain to do their physics research?

Current AI research is making progress, it can do some things better than humans. Where do you think it will stop? What tasks will remain the domain of humans?

Expand full comment

Something I'd like to see is to consider simultaneously the danger of AI development and the danger of degrowth (or even the absence of growth): both risks have their thinkers, but I am not aware of anyone combining the two. When considering AI risks, for example, they are most of the time compared to a baseline where AI does not take off and it's business as usual (human-driven progress and a rising standard of living)....However, when looking at trends, the baseline (no AI takeoff) does not seem to be business as usual, but something less pleasant (possibly far less pleasant). If you look at the AI worst case scenario (eradication of humans, possibly of all non-AI entities in a grey goo apocalypse), it's very frightening. But if you look at the other side's worst case scenario (a tech/energy crash leading to multi-generation Malthusian anarchy or strong dictatorship, both with a very poor average SoL), it's less frightening. Sure, one is permanent and the other is only multi-generation.....But as I get older, the difference between permanent and multi-generation sounds more philosophical than practical....In fact, a total apocalypse may be preferred by quite a few compared to very bad multi-generation totalitarianism or Mad Max-like survivalism. At least it has some romantic appeal, like all apocalypses...

Expand full comment

I don't think dramatic collapse scenarios are probable. Even a kind of stagnation seems harder to imagine. There are a bunch of possible other future techs that seem to be arriving at a decent speed: e.g. the transition to abundant green energy, research on anti-aging, and (more speculative, but far more powerful) outright nanotech. And of course there is the steady economic march made of millions upon millions of discoveries and inventions, each a tiny advance in some obscure field.

Expand full comment

> It’s not that you should never do this. Every technology has some risk of destroying the world;

Technology is not the only thing that can destroy the world. Humanity could be destroyed by an asteroid or a supernova. And who has proved that evolution will not destroy itself? The biosphere is a complex system with all the traits of chaos; it is unpredictable over the long run. There is no reason to believe that because all previous predictions of apocalypse were wrong, there will be no apocalypse in the future.

So the risk of an apocalypse is not zero in any case. It grows monotonically with time.

The only way to deal with it is diversification. Do not place all your eggs in one basket. Therefore we need to consider a technology's potential to create opportunities to diversify our bets. AI, for example, can make it much easier to occupy Mars, because distances in the Solar System are vast. Communication suffers from high latency, so we need to move decision-making to the place where decisions will be applied. Travel is costly; we need to support human life in a vacuum for years just to get there. AI can reduce the costs of asteroid mining and Mars colonization dramatically.

If we take this into consideration, how will AI affect the life expectancy of humankind?

Expand full comment

If we have a friendly superintelligence, it can basically magically do everything. All future X-risk goes to the unavoidable stuff like the universe spontaneously failing to exist. (+ hostile aliens?)

The chance of an asteroid or supernova big enough to kill us is pretty tiny on human timescales. The dinosaur killer was about 66 million years ago. These things are really rare, and we already have most of the tech needed for an OK defense.

Let's say we want to make ASI eventually; the question is whether to rush ASAP, or to take an extra 100 years to really triple-check everything. If we think rushing has any significant chance of going wrong, and there are no other techs with a larger chance of going wrong, we should go slow.

To make the case for rushing, you need to argue that the chance of nuclear doom / grey goo / something else in the intervening years without ASI is greater than the chance of ASI doom if we rush (minus the ASI doom from going slow; but if you think that is large, then never making ASI is an option).

It is actually hard for a Mars base to add much more diversity protection. Asteroids can be spotted and deflected. Gamma ray bursts will hit Mars too. A bad ASI will just go to Mars. The Mars base needs to be totally self-sufficient, which is hard.

Expand full comment

Before I answer this, I'd like to note that I do not intend to prove that you are wrong in your conclusions. What I want to do is to show you that your method of reaching your conclusions is not rigorous enough. It may look like I'm trying to state some other conclusion, but that is because I do not see how to avoid it. In fact, I do not really know the answer.

> These things are really rare, and we already have most of the tech needed for an OK defense.

How about a nuclear war? Or more infectious COVID? Or some evolved insect that eats everything and doubles its population daily? Or how about an asteroid, which our defences will strike to divert from Earth, but it explodes releasing a big cloud of gas and dust, which then will travel to Earth and kill us all?

Complex systems can end in ruin surprisingly fast, and in surprising ways too.

> pretty tiny on human timescales.

Are we concerned about ourselves only, or do our children and grandchildren also matter? Mars colonization cannot be done in a weekend; it would need decades or even centuries.

> It is actually hard for a mars base to add much more diversity protection.

If there is a self-sustaining human population of 1M people on Mars, it will add a lot, and it will open other opportunities. For example, it is much easier to reach orbit from Mars, so it is easier to mine asteroids or to build a completely artificial structure in space that can host another million people. It will open a path to subsequent exploration and colonization beyond our Solar System.

> the question is whether to rush ASAP, or to take an extra 100 years to really triple check everything

Yes, it is a good question. It is like mine, but I like mine more: how will AI affect the life expectancy of humankind? To my mind it captures the issue best.

But your question begs to be answered with another question: do you think that it would help to triple-check ASI without developing ASI itself? AFAIK people have never managed that with any technology they devised, and I believe it is nearly impossible with ASI. AI development goes hand in hand with psychology, neuroscience and philosophy, and with math of course. No one knows what happens next, because it is impossible to know without trying. I suspect that OpenAI's developers themselves had only a poor idea of how ChatGPT would perform. They just found some issues, tried to overcome them, tried again, ran into other issues, and this iterative process eventually led to ChatGPT, and today's knowledge of what OpenAI's approach allows is an order of magnitude better than it was 10 years ago. What would the point have been of spending those 10 years triple-checking their approach? Would it have delivered more safety or not?

I'm going to say it one more time: I do not really know the answers to these questions. But they are real questions. For example, I cannot imagine how engineers in the 1900s would have developed a relatively safe steam engine without blowing up tens (or hundreds?) of steam engines and boilers, with a lot of casualties. One of the reasons: a boiler needs good steel, but the race to make good steel industrially was driven mostly by a growing demand for rails, which was driven by all those steam engines on wheels. It is a chicken-and-egg problem in disguise.

Or do you follow space exploration and rocketry achievements? Have you noticed how hard it is for startups to reach orbit the first time? It was a tremendous success when SLS did it on its first try. Today Relativity cancelled their first launch at -1:10, something with propellant temperature as I understand it. But back in the 1930s and 1940s it was a lot worse, because no one knew exactly what to expect. I believe that in a few decades undergraduate students will be developing rockets and flying them on the first try with 50/50 odds. It comes easier with experience, but experience requires, well, experience.

It is said that safety rules are written in blood. And no one has really proposed a way to write them in ink, or in some other substance that is not mined from people's bodies. So triple-checking for 100 years would give us more confidence, but how much more, exactly? Will it matter?

Expand full comment

I wasn't talking about viruses or nukes when I said "these things are really rare" and "we already have an ok defense". I was talking about asteroids and supernovae.

Nuclear war is likely a much bigger risk than asteroids.

I don't think we have enough nukes to kill everyone; there are lots of remote villages in the middle of nowhere. So nukes aren't that much of an X-risk.

"Or more infectious COVID?" Well vaccines and social distancing (and again. something that doesn't kill 100% of people isn't an X-risk. If a disease is widespread and 100% lethal, people will be really really social distancing. Otherwise, it's not an X-risk. )

"Or some evolved insect that eats everything and doubles its population daily?" Evolution has failed to produce that in the last million years, no reason to start now. (Actually some pretty good biology reasons why such thing can't evolve)

"Or how about an asteroid, which our defences will strike to divert from Earth, but it explodes releasing a big cloud of gas and dust, which then will travel to Earth and kill us all?" Asteroids aren't explosive. Exactly how is this gas cloud lethal? Gas at room temperature expands at ~300m/s. Earth's radius is ~6 *10^6m So that's earths radius every 6 hours. So only a small fraction of the gas will hit earth.

"Are we concerned about ourselves only, or our children and grandchildren also matter? Mars colonization cannot be done in a weekend, it would need decades or even centuries." It doesn't matter. Suppose we are thinking really long term. We want humanity to be florishing in a trillion years. If you buy that ASI is coming within 50, and that friendly ASI is a win condition, it doesn't matter what time scales we are thinking on beyond that.

"For example it is much more easy to go on orbit on Mars"

If we can get a Mars base with 1M people on it, we have got the hang of getting into orbit. It might be a lot harder to get into orbit from an early Mars colony. Sure, the energy barrier is lower. But do they have all the parts? Is there a factory on Mars that can make rocket engines? And what specs do they have compared to Earth-made rockets?

"do you think that it can help to triple check ASI without developing ASI itself?"

Having more time and resources is generally helpful. It would be surprising if there was literally nothing we could do with it.

Now, a slow cautious approach might still involve AI. Maybe we gradually ramp up intelligence, studying each AI in detail before we add any more smarts. Maybe we make an ASI with the sole goal of shutting itself down ASAP, or sitting there doing nothing, and study exactly how it does this.

If we are in a world where the failures on the route to superintelligence aren't catastrophic or world ending, then we could take the try and try again approach. (I don't think this is the world we live in)

If a failure will end the world, and there is no way to get a >90% chance of success on the first time (going slow and careful might help, but it isn't enough), then we need to plan for a future where we never get ASI, and figure out a way to stop the careless making ASI forever.

Expand full comment

Science and technology do not have more benefits than harms. Science and technology are tools, and like all tools, they cannot do anything without a conscious actor controlling them and making value judgements about them. Therefore, they are always neutral, and their perceived harms and benefits are only a perfect reflection of the conscious actor using them.

This is a mistake made very often by the rational community. Science and technology can never decide the direction of culture or society, it can only increase the speed we get there. We decide how the tool is used or misused.

The reason incredibly powerful technologies like nuclear energy and AI chill many people to the bone is that they are being developed at times when society is not quite ready for them. The first real use of atomic energy was a weapon of mass destruction. That was our parents' and grandparents' generation! There is a major war raging in Europe, with several nuclear facilities already at risk of a major catastrophe. What would happen if the tables turned and Russia felt more threatened? Would those facilities not be a major target?

The international saber rattling is a constant presence in the news. The state of global peace is still incredibly fragile. The consequence of a nuclear disaster is a large area of our precious living Earth becoming a barren hell for decades and centuries. Are we stable and mature enough for this type of power?

And just look at how we have used the enormous power that we received from fossil fuels. What percentage of that energy went to making us happier and healthier? Yes, we live a bit longer than 2 centuries ago, but most of that improvement is not due to the energy of fossil fuels.

Why would the power we receive from AI and nuclear energy be used any differently? Likewise they will have some real beautiful applications that help human beings, but mostly they will be used to make the rich richer, to make the powerful more powerful, to make our lives more "convenient" (lazy), and likewise they will disconnect us from each other and from this incredible living planet that we call home. Is putting even more power in the hands of this society really a good idea, considering our track record this past century? (major world wars, worst genocides ever, extreme environmental degradation)

There is nothing inherent to these technologies that will do these things. The technology is neutral. That is just where we are as a society.

Focusing on technology as the saviour for our ailments is a classic pattern in the human condition. You can see this same pattern on the individual level. It is just a way of externalising our problems so we don't have to look inward to see what is really going on. Classic escapism. "If only I had a different partner", "if only I lived in a better house", "if only we chose nuclear energy", "if only AI came to solve all our logistical problems". The problem is not external to us. The problem is within us. It cannot be solved with technology or rationality, those are simply the tools to execute.

The problem is the direction we are moving, not the speed.

I think we can all agree that AI and nuclear energy would feel a lot better if we haven't had a major war in a few centuries. This can be a reality. We only need to shift our focus from the external to the internal. We know how. But why?

Expand full comment

It could make sense in a total-utilitarian sense to wager one entire civilization if the damage were limited to one civilization and there are several other civilizations out there. But one paperclip maximizer could destroy all the civilizations in the universe.

The derivation of Kelly assumes you have a single bankroll, no expenses, and wagering on that bankroll is your only source of income, and seeks to maximize the long-run growth rate of your bankroll. If Bob is a consultant with 10k/month of disposable income, and he has $3k in savings, it totally makes sense for him to wager the entire 3k on the 50% advantage coin flip. For Kelly calculations he should use something like the discounted present value of his income stream, using a pessimistic discount rate to account for the fees charged by lenders, the chance of getting fired, etc.
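
Here is a rough sketch of that "discounted income as bankroll" idea in Python. All the numbers are illustrative assumptions (the 75% win probability for an even-money flip, the 2%/month discount rate, the 10-year horizon), not figures from the post:

savings = 3_000.0
monthly_income = 10_000.0
monthly_discount = 0.02   # pessimistic, to account for lender fees, job risk, etc.
months = 120              # 10-year horizon

# Present value of a level stream of monthly disposable income (ordinary annuity).
pv_income = monthly_income * (1 - (1 + monthly_discount) ** -months) / monthly_discount
effective_bankroll = savings + pv_income

# Assumed even-money flip won 75% of the time, so the Kelly fraction is p - q = 0.5.
kelly_bet = (0.75 - 0.25) * effective_bankroll

print(f"effective bankroll ~ ${effective_bankroll:,.0f}")
print(f"Kelly stake ~ ${kelly_bet:,.0f}, far more than the ${savings:,.0f} actually in savings")

So wagering the whole $3k isn't over-betting at all relative to the bankroll Kelly actually cares about.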

If we settled multiple star systems, and found a way to reliably limit the damage to one star system, then we should be much more willing to experiment with AGI.

Expand full comment
founding

The problem with gain of function isn't its risk - it's the total lack of potential upside.

If you can make a temporary mental switch and see humans as chattel, some interesting perspectives emerge. Like how 100 thalidomide-like incidents would compare with having half as many cancers, or everybody living an extra 5 healthy years.

Covid was bearable, even light in terms of QALYs - but there was no expected utility to be gained by playing Russian roulette. It was just stupid loss.

AI... not so much. Last November I celebrated: we are no longer alone. We may not have companionship, but where it matters, in the getting-things-done department, we finally have non-human help. The expected upside is there, and not in a sliver of probability. I'd gladly trade 10 covids or a nuclear war for what AI can be.

Expand full comment

One of the issues that came up in the thread was the origin of Covid, and I have a relevant question for something I am writing that people here might be able to answer. I will put the question on the most recent open thread as well, but I expect fewer people are reading it.

The number I would like and don't have is how many wet markets there are in the world with whatever features, probably selling wild animals, make the Wuhan market a candidate for the origin of Covid. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or a hundred (not necessarily all in China), then the application of Bayes' Theorem implies a posterior probability for the lab leak theory much higher than whatever the prior was.
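
For what it's worth, here is the Bayes calculation as a tiny Python sketch. The prior and the candidate-market counts are placeholders, since the number you're asking for is exactly the missing input:

def posterior_lab_leak(prior, n_candidate_markets):
    """Posterior P(lab leak), assuming the outbreak starts in Wuhan with
    probability ~1 under the lab hypothesis, and starts near any one of the
    N comparable markets with equal probability under the zoonotic hypothesis."""
    likelihood_ratio = 1.0 / (1.0 / n_candidate_markets)
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

for n in (1, 10, 50, 100):
    print(f"N = {n:>3} comparable markets: prior 10% -> posterior {posterior_lab_leak(0.10, n):.0%}")

With N = 1 the evidence is a wash; with N = 50 or 100, even a modest prior climbs above 80-90%, which is the point about the posterior ending up much higher than the prior.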

Expand full comment

Point of order: SARS-CoV-2 is not a bat virus. Its closest cousins, genetically speaking, are bat viruses, but it is itself not one. I think that's one of the biggest reasons the lab leak theory even has legs -- if it were a virus that could be traced back to some virus in wild Asian animals (the way SARS-CoV-1 eventually was), then people would not be as suspicious that it got the way it did through some human experimentation.

Expand full comment

Just once I'd like to see someone explain how, exactly, a superintelligent machine is supposed to kill everyone. An explanation that actually stands up to some scrutiny and doesn't just involve handwaving, e.g.: a super AI would think of a method we couldn't possibly think of.

Expand full comment

When the LHC was about to be turned on, a similar group of doomers started saying that it was going to destroy the world through black holes or whatever. Of course the LHC didn't destroy the world; it led to the discovery of the Higgs boson. The AI doomers are exactly like them.

Expand full comment

At the 50th anniversary meeting of the Atoms for Peace program, I remember one of the leaders asking who had killed more people, nuclear power or coal. Of the 3000 attendees I think only 6 of us put our hands up, because when you look at the numbers, coal kills dramatically more (due to air pollution from dust etc.) compared to nuclear power. Even when you include all the accidents, coal still comes out on top. https://ourworldindata.org/grapher/death-rates-from-energy-production-per-twh

Expand full comment

You mention gain of function research as a bet that led to catastrophe. Is there a Way More Than You Wanted to Know About The Lab Leak post coming soon? The timing couldn’t be better.

Expand full comment

Concerning FTX, it's interesting how SBF said he would gamble with civilization; from Conversations with Tyler:

"COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical."

Expand full comment

I don't know the history particularly well, but is it possible that the anti-nuclear folks were totally right and the mistake they made was not to update later? I think we'll update later, so I'm happy to be against this kind of thing to start.

Expand full comment