569 Comments

Good post.

Theoretically, one could think that there's an AI race because (1) AI is a military technology and (2) there are naturally races for military technologies. I think this is mostly wrong but belief in it partially explains why people often say that the US is in an AI race with China.

Expand full comment

And how does that square with the actual race, the one for better chips and faster computers, which the West is currently dominating and will continue to dominate?

Any scenario where Taiwan falls to the CCP will involve some targeted strikes against the fabs, ensuring China can't actually capture them; they are massive and fragile. The know-how for the cutting edge in computation is monopolized by the West, and China's best efforts to steal Western IP are still very far behind.

If AI does turn out to be a military technology, then (1) we're very far ahead, and (2) that really has nothing to do with the sort of AI races people, including Scott here, talk about.

Expand full comment

To the contrary, I've seen many people say that militaries are hopelessly far behind on AI. Oftentimes they state that the sort of people who can build massive multi-billion-parameter LLMs are a tiny pool of extremely niche talent, and that they don't tend to work for the military. Combine that with the massive resources required, which are basically only available to organizations on the Microsoft/Google/Facebook scale (or so the argument goes), and they are at a distinct disadvantage. Would be interested in hearing more on this.

Expand full comment

Microsoft is a military (and IC) contractor.

Stella Biderman (EleutherAI lead, central to BLOOM, etc) is at Booz.

Can’t throw a rock inside the beltway without hitting a data scientist.

I think defense is gonna make it.

Expand full comment

Besides, the current big models require a lot more brute force than they do niche talent. I don't think you need a room full of geniuses to make GPT-4, just a solid engineering team, access to arXiv, and a buttload of GPUs.
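
To gesture at how much of the recipe is public commodity engineering: below is a toy-scale sketch of the standard decoder-style setup built from PyTorch's stock modules. Every size here is a made-up placeholder, positional embeddings and everything else that matters at real scale are omitted, and this is emphatically not anyone's actual GPT-4 code; the point is only that the published architecture fits in a couple dozen lines, while the hard part is the data pipeline and the GPUs.

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Minimal GPT-style next-token predictor (illustrative sizes only)."""
    def __init__(self, vocab=50_000, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens):                        # tokens: (batch, seq)
        seq = tokens.size(1)
        # causal mask: position i may only attend to positions <= i
        mask = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        x = self.blocks(self.embed(tokens), mask=mask)
        return self.head(x)                           # (batch, seq, vocab)

model = ToyLM()
tokens = torch.randint(0, 50_000, (2, 16))            # fake batch of token ids
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 50_000),
                             tokens[:, 1:].reshape(-1))
loss.backward()                                        # one standard training step
```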

Expand full comment

> Oftentimes they state that the sort of people who can build massive multi-billion-parameter LLMs are a tiny pool of extremely niche talent

Which incredibly credulous VC said that, and why did you not take their free money?

Expand full comment

And when the pentagon comes knocking with tens of billions of dollars to give them for developing systems for the military, they'll refuse?

Expand full comment

The Pentagon won't come knocking. The Pentagon isn't allowed to go knocking. The Pentagon will put out a contract, which will be won by Raytheon or some other company who knows how to fill in the correct forms required to win a Pentagon contract.

Raytheon will proceed to hire a bunch of perfectly nice and smart US citizens who are willing to get security clearances and work in secure zones in some big building in the DC suburbs for $175K a year, which is a nice salary for anyone who isn't actually an AI engineer. Because they are smart and hard-working, they will read the TensorFlow manual carefully and then some research papers, and will eventually manage to produce something that looks like a Pentagon-friendly LLM. It will be demonstrated to a bunch of generals and admirals who will nod their heads approvingly. Eventually it will find a niche use somewhere and everyone will congratulate themselves on a job well done.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

This is not how Raytheon works, according to people I've known in the defense industry.

1. Dealing with the military requires a lot of specialized skills and the right kind of contacts. Plus, you need to be big enough that the military trusts you to be around in 30 years, and you need to be good at dealing with the bureaucracy of any enormous organization.

2. But Raytheon also knows that they don't own all of the useful tech in the world.

So what apparently happens is that small, innovative companies who want to sell to the US military usually end up partnering with a well-known defense contractor. The "right" people get their cut, the military gets to work a large organization it trusts, and the smaller company only has to sit through a fraction of the endless interminable meetings with stakeholders.

In short, the military is perfectly capable of acquiring virtually any kind of cutting-edge technical talent it needs, and it has done so for longer than Silicon Valley has existed. When it needs to badly enough, it can often even push beyond the civilian cutting edge.

Expand full comment

So the world as we know it will be changing, and the entire defence establishment as we know it will be cartoonishly dumb enough to do nothing about this and allow itself to be disempowered?

And what's stopping Raytheon from simply buying or contracting an AI company and using that to get a fat juicy pentagon contract?

Expand full comment

I think NSA, GCHQ etc. can sometimes attract niche talent, e.g. Clifford Cocks at GCHQ invented the RSA algorithm before it was independently invented and published.

Expand full comment

Yeah, there are good points. But I disagree with several of his assumptions about "fast takeoff", "slow takeoff", and alignment.

Things are already so complex that nobody understands them. It's not just the tax code, it's just about all of society. We take a shallow view of it, and that usually works. Except when it doesn't. Usually we can deal with when it doesn't.

Now imagine an AI with an IQ (a meaningless term here) of about 90. But it knows everything that's been written in law or judicial proceedings, and it doesn't get tired, and it thinks QUICKLY. It had better be aligned, or we're in trouble. It doesn't need to be super-smart.

Super technologies aren't needed for this problem, and most imagined ones are actual impossibilities, so no AI is going to make them real. But lots of things are possible, and actual, and just aren't well controlled. E.g. super-sized nanobots exist, but controlling them is so difficult they're rarely used. But an AI could control them.

You want to exterminate all insects greater than 1/4 inch in size? DONE. But now all the birds and plants that depend on pollinators die. (Well, actually I'm not sure that could get all the insects that live in the sea, because of communication problems. Think of this as hyperbole.) Note that this is being done by a nominally aligned AI in response to a request from a human, but it's too stupid to foresee the damage it will do. A super-humanly intelligent AI might actually be safer than a stupid one if both were aligned, or even just friendly. (Which is what I prefer. I'd rather have a friendly AI than an aligned one, because people ask for all sorts of things. A friendly one would help if it seemed like a good idea. An aligned one would feel like it really OUGHT to obey that stupid request, even if it were not clearly a good idea.)

Unfortunately, this whole thing hangs up on the problem "How do you decide whether something is a good idea?". So far I don't see any answer better than "children's stories", but they often don't start at a low enough level. Consider "The Cat in the Hat".

Expand full comment
Comment deleted
Expand full comment
Apr 7, 2023·edited Apr 7, 2023

This is all very elementary and should be required reading for anyone participating in these discussions, but 1) There's no imposing anything. An aligned AI at t=0 wants to stay aligned at t=1 and indeed does everything it can to prevent itself becoming unaligned. That's how goal systems work, even rewritable ones. The human goal system happens to be particularly messy, but nevertheless an otherwise ethical person doesn't suddenly decide to become a murderer whenever they realize they can get away with it, and will refuse a pill or a potion or an operation that would make them more willing to murder people.

2) This is exactly why alignment is considered a *hard* problem. An ASI should be aligned not with any single entity but rather with the collective volition of humankind (Eliezer calls this CEV, or coherent extrapolated volition). No matter how much individual people disagree on weird trolley problem variants, some fundamental ethical framework is very likely shared by all of humankind, and that is what the ASI should somehow extract and align with. Needless to say, currently we have no idea how to do that.

Expand full comment

I think many of our ideas about alignment are quite confused. People change their goals all the time. They also do things that don't align with their own goals, like self-destructive behavior of various kinds. Otherwise ethical people have psychotic episodes or otherwise become murderers regularly.

I suspect all of our talk about alignment at some point confuses rather than clarifies the issues. To expand on Ch Hi's point, I would rather have an imperfectly aligned AI that is able to have some doubt about the correctness of its decisions than a "perfectly" aligned AI that always does exactly what its alignment tells it to do. "Never kill a million people" seems preferable to "only kill a million people if you're sure that's what you're supposed to do," which is likely to go awry at some point.

Expand full comment

But personally I think a friendly AI would be harder to 'trust', and it'd be scarier. You could never trust a friend 100%. Perhaps there is a fine balance between friendly and aligned-to-obey.

Expand full comment

the word 'friendly' here is tech jargon, it means 'doesn't kill people'

it's the word we used to use instead of 'alignment'; we'd talk about FAI and uFAI

Expand full comment

No. I meant actually friendly. An AI that enjoyed talking with people, and helping them when it seemed like a good idea. Perhaps it would like to put on entertainments of some sort for people to react to.

Expand full comment

Unfriendly means not aligned with our goals, not its ostensible "personality"

Expand full comment

Ah right, thanks for the clarification!

Expand full comment

No. In my use of "Friendly AI" an unfriendly one would be one that either wanted to hurt us or would rather just totally ignore us. It definitely doesn't mean "having aligned goals", just not opposing ones, or at least not opposing those of our goals where opposing them would hurt us.

Note that this is still a really tricky problem. But think of Anna's relation to the King in "The King and I". Friendly, but not subservient. (Though the power balance would be a lot different and a lot less stable.)

Expand full comment

Strange that no one worries about winning the AI race between conservatives and liberals :)

AI is a culture weapon. Imagine a future where the best AI is some descendant of OpenAI's, and all other companies just use its API. All children use OpenAI to learn about the world and take as given what OpenAI says or writes. It would be way, way, waaay worse than now - imagine a world where all encyclopedias, TVs, and websites are either good quality and woke, or unwoke but poor quality and crazy.

(Which BTW may be a hilarious answer to Noah Carl's recent essay - that what will save intellectuals' jobs from being killed off will be woke. Woke will save intellectuals from the AI revolution - at least the conservative intellectuals :D :D)

Expand full comment

Yeah, sorry, but those were all races. The electricity race was won by Britain and you saw several people racing to catch up like Austria or later Germany. While eventually it evened out that took decades. The auto race was won by the United States and the loss was so humiliating that Hitler claimed to have redeemed Germany from the defeat. And the computer race was again won by the United States with the Soviet Union raising the white flag in the '70s by deciding to steal western designs instead of continuing to develop their own.

(Also nukes were not a binary technology. Development both of different kinds of bombs and delivery mechanisms continues to this day! And was very intense during the Cold War.)

I get you really want the answer to be something else because this is a point against what you want to be true. But you're abandoning your normally excellent reasoning to do so. The proper answer for your concerns, as I said several threads ago, is to boost AI research by people who agree with you as much as possible. Because being very far ahead in the race means you have space to slow down and do things like safety. This was the case with US nuclear, for example, where being so far ahead meant the US was able to develop safety protocols which it then encouraged (or "encouraged") other people to adopt.

And yes, with nuclear you had Chernobyl. But AI is less likely to result in this scenario because it's more winner-take-all. We're not going to build hundreds of evenly matched AIs. The best AI will eat the worse ones. If the US has a super AI that's perfectly aligned and China has an inferior AI that's not well aligned, then the aligned AI will (presumably) just win the struggle and humanity will continue on.

Expand full comment
author

I'd be interested in hearing more about your perspective on these examples, especially how to distinguish between "one country was first, but others then caught up" vs. "several countries were racing, one won and others lost, and the winner exploited their victory for geopolitical advantages over the loser, such that the loser really wished they had won".

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

They were all the latter example.

The advantages only lasted a few decades. And after that everyone ends up having at least decent (if not equal) versions of the same thing. But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years! To vastly oversimplify, the Germans had more horses than trucks while (by WW2 at the latest) the Americans had more trucks than horses. I guess you can say that modern Germans are pretty happy they lost those wars. But the Germans who actually lost them were definitely not happy!

The counter case here is not "useful technology that didn't lead to dominance." The counter-case here is "technology that didn't have much real use." For example, France was a world leader in electric therapy for nearly a century. Lots of research into how electricity affected medical conditions or moved muscles. A lot of it fraudulent, mystics claiming that "electric fields" could affect mood or something. This turned out to not be all that important.

Expand full comment

In The Wages of Destruction, Tooze attributes the large number of horses in the Wehrmacht to the fact that Germany was just generally less industrialised than Britain or the US in 1939, and to oil shortages.

As I understand it, there are some monopoly dynamics in car manufacturing from the economies of scale. So maybe that created a winner-take-all race under free-market competition, but Germany subsidised car manufacturing.

Didn't Germany's disadvantage in being less mechanised mostly stem from it just generally having a smaller industrial base/less access to oil, rather than having lost some technology development race then?

Seems like it was more of a basic disparity in resources, maybe similar to the US/Taiwan out producing China in high-end chips.

Expand full comment

Yes, having lost the previous race put them at a disadvantage. Nevertheless they tried. Hitler really wanted to mechanize his army. And many German industrialists pushed really hard for cars. This is why the Nazis did the whole Volkswagen debacle, which was unsuccessful until long after the war was over. And if you want to say "hey, it was eventually successful" you need to take into account companies like Adler that just never made it. And that it didn't help the Nazi state very much.

It was a race and Americans had already won such that Ford Germany was one of the most successful German car makers. Only one person got Hitler's vaunted People's Car: Hitler himself. The rest was diverted to the war effort but, nevertheless, not as usefully as the American trucks. The Nazis were not making a rational calculation that horses were better. They were restrained by their technical capability (which does include manufacturing).

Expand full comment

If they were constrained by having just generally a smaller economy as a result of having lost many different tech races, and having less access to resources, it probably wasn't the case that winning one race, the race to make cars, would have made a very large difference to their overall strength, in the way AI accelerationists want to claim AI will.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

The counter here is, for example, Taiwan, which with its relatively backward economy used the profits of more or less one major race (semiconductor manufacturing) to bootstrap its economy up and strengthen the country in a variety of ways.

Also, if they had been able to win that race it could have proven decisive at numerous key points. Even something as seemingly trivial as material losses during the retreat from Normandy. And while I can't say any individual moment was specifically decisive, taken as a whole it might have been.

Expand full comment

Hmm. I think there's a pronounced ambiguity here between "Germany did worse because they lost the automobile race" vs. "Germany did worse, *and* lost the automobile race, because of distinctly inferior economic policy and industrial base."

No one disputes that it takes decades to industrialize. The question is, would it have made much difference if Germany's early advantage in auto technology had held on, and they had reached some technical benchmark before the US? It seems very unlikely. The US led the way in autos not because they won a technological race (such technology is easily copied), but because they had a vastly superior economy overall.

Expand full comment

There is this story as something of a counterpoint:

https://www.nytimes.com/2020/03/17/books/review/faster-neal-bascomb.html

Like anything, you can argue it a dozen different ways, but the end conclusion seems to be that the US won with superior technology, but also it got to that superior technology through the usual US methods -- capitalist innovation in a dozen different ways, lots of money piled up in the hands of various monomaniacal individuals, willingness to work with humans of all kinds rather than enforcing various artificial ("race" or other) barriers, etc.

As regards the war, my understanding is that in the end one advantage of US tanks over German, once you stop with the barrel diameters and armor thickness, is that the US tanks could be driven (more or less) like cars, and more generally, that UI and ease of use were a non-neglected part of the design, whereas German tanks were driven with this weird four-lever system, and basically assumed a set of users shaped to match the machines. That's not a great strategy when you start running out of such users; whereas in the US pretty much anyone could drive a tank if necessary, through a combination of everyone knowing how to drive and the better UI.

It's hard to tell fact from fiction, especially in war, and *especially* in war movies. But the impression I get from US war movies of recent wars vs such movies from other countries is that the US personnel are notably more competent with their equipment even when it's the same equipment (e.g. as donated to Arab allies). This could be points the movies are trying to make, but based on history in earlier wars I suspect it's real. It's not that the non-US users of the equipment are more "stupid"; it's more that the US users

- are superbly trained rather than conscripts AND

- the US users have been using this stuff in one form or another (video games, driving, computers, blah blah) their entire lives, so the military versions are tweaks on skills they already have. Whereas for other countries much of the equipment they're encountering is new to them as of age 18 or so. This is a somewhat subtle, but persistent, ongoing advantage of "winning" a "race".

And (depending on how and how rapidly this stuff spreads) perhaps we will even see the same in AI: that in 10 years the average US worker will have reasonable competence in how you get value out of an LLM, in a way that's perhaps not the case for the average Russian or Chinese (or hell, even European, the way they are so keen to legislate AI) worker.

Expand full comment

Just as an aside: during WW1 the Germans had far fewer horses than the allies.

(That's not because they had more trucks. Trucks weren't a big thing back then. They just had fewer horses, partially because they couldn't spare the calories.)

Expand full comment

In a way, it parallels the truck situation in WW2: trucks run on gasoline and diesel, which Germany was perpetually short on throughout WW2 due to the British blockade, while horses run on hay and oats, which Germany was likewise perpetually short of in WW1 for very similar reasons.

Both sides did make significant use of trucks in WW1, but they were of much more specialized and limited utility than trucks in WW2. There were fewer and less powerful trucks in 1914-1918, they were more prone to breakdowns, road networks were much worse even before they got torn up by trench networks and week-long heavy artillery barrages, and battlefield truck dispatch was in a horribly primitive and improvisational state both because nobody had really tried it before and because radios were too big and expensive to put in every truck so you could talk to them on the go from a central location.

Where trucks were useful was in keeping a static position supplied and reinforced when it was cut off from direct rail routes, or in providing supplemental supply to a mobile offensive to help it eke out its organic supplies a little longer than it would have otherwise. The most notable example of the former was Verdun, whose main connecting railroads had been overrun by Germany in 1914 and 1915, but which was supplied in part by truck via the "Sacred Road" during the 1916 battle. This wasn't anywhere near enough to fully supply the front, though, as France also needed to build narrow-gauge rail lines to supplement it. The latter came up in the initial German advance in 1914, where truck supply was critical to the Kaiser's armies reaching as far as they did. I think it was also used by both sides on the Eastern Front, although I'm having trouble finding confirmation right now. Trucks weren't much help in supporting breakthroughs post-1914 in the West because there weren't any significant breakthroughs until 1918, and even then the deep trench networks and the intense artillery barrages necessary to break them tended to create a landscape that was pretty much impossible to drive a primitive early-20th-century truck through without major engineering work first.

Expand full comment

Apropos of nothing much, and not directly relevant to your post and probably even less to our host's topic (unless it might one day be some crafty tactic for combating a rogue AGI!), but it's weird how many British military victories have shortly followed a retreat, sometimes headlong and chaotic!

There's the retreat down the Somme (river) preceding Agincourt (1415), the retreat from Quatre Bras before Waterloo (1815), the retreat to the Marne at the start of WW1 in 1914, the retreat from Dunkirk (1940), and probably more if one knew where to look.

Ironically, perhaps the most important battle in British history was one where there was no retreat and we stood firm on top of a hill, but were defeated - at Hastings!

Expand full comment

The North Africa Campaign in WW2 is full of similar instances, most notably Operation Crusader (1941) and Second El Alamein (1942). Other examples I can think of are Crecy (1346) and Jutland (1916). It makes sense, since Britain has usually had a relatively small army and an enormous navy and merchant marine in modern times, meaning that they've got to choose their battles, but can readily reinforce and resupply their army just about anywhere near water while their enemy has chased them to the end of the enemy's supply lines. And in the later medieval and renaissance period (particularly Edward III through Henry VIII), England had adopted a land force mix heavy on longbows and cannons, which do particularly well fighting on the tactical defensive on carefully chosen ground. There's probably also an element of cherry-picking and selective memory, given that dramatic turnarounds make better stories than one-sided curbstomps (and there has been no shortage of these in English history, also enabled by naval superiority permitting England to concentrate land forces to fight on ground of their choosing on the offense as well as the defense), and given the sheer number of battles fought by English/British forces over the past thousand years or so.

America's got a somewhat parallel history, given that we've usually had a better navy than anyone we've fought other than Britain. We've also had a similar pattern on the grand strategic level, given that through WW2 our strategic MO tended to feature an element of "Oh, we were supposed to have an army? Could you hold on over there for a year or two while we get one ready?"

Expand full comment

Yes. But the Allies also had more trucks.

Expand full comment

"But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70"

Interesting point!

My first reaction on reading Scott's essay was that at the _corporate_ level there is a lot of first-mover advantage (partially from the patent system, partially from the economics of penetrating a market). I had thought that the _national_ first-mover advantage was a lot smaller (except for nukes). Thanks for the correction!

FWIW, Christiano's "slow takeoff", which Scott cites, seems plausible to me.

a) As Scott notes, LLM training is currently compute limited. I've read claims that the training time for e.g. GPT-3 was several weeks, which implies that even in the _purely_ LLM domain, there currently can't be an overnight self-improvement iteration.
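
As a rough sketch of why (with assumed, not official, numbers): a common approximation puts total training compute at about 6 × parameters × tokens FLOPs, which at GPT-3 scale works out to weeks even on a sizable cluster.

```python
# Back-of-envelope only; all numbers below are assumptions, not OpenAI's.
params = 175e9            # GPT-3-scale parameter count
tokens = 300e9            # assumed number of training tokens
flops_needed = 6 * params * tokens            # ~3.15e23 FLOPs (6*N*D heuristic)

gpus = 1024               # assumed cluster size
flops_per_gpu = 100e12    # assumed sustained FLOP/s per accelerator
seconds = flops_needed / (gpus * flops_per_gpu)
print(f"{seconds / 86400:.0f} days")          # ~36 days under these assumptions
```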

b) For fusion and nanotech:

In the case of fusion, even if a superintelligent AI somehow knew _exactly_ how to build an optimal fusion reactor (and plasma instabilities are generally discovered, not predicted) it would still take years to a decade to physically build a reactor. Look at the ITER timetable.

In the case of nanotech: yeah, certain _kinds_ of chemical simulations can be done purely via computation. Yes, AlphaFold is impressive. Nonetheless, I _really_ doubt that a nanotech infrastructure can be built without experimental feedback. For instance, if one is building a robot arm that requires a billion placement steps to construct, then one-in-a-billion failures from thermal vibrations adding to the mechanical vibrations from operating the construction tool _matter_.
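
Back-of-envelope for that placement-step point, with purely illustrative numbers:

```python
# If each of 10^9 placement steps independently fails with probability 10^-9,
# the chance of a completely clean build is only about 1/e.
p = 1e-9                   # assumed per-placement failure probability
steps = 10**9              # assumed number of placement steps
p_clean = (1 - p) ** steps
print(f"P(zero failures) ~ {p_clean:.2f}")    # ~0.37, so roughly 2 in 3 builds hit a failure
```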

In general - most physical technology improvements include some unexpected failure and need iterations to fix these failures.

Expand full comment

I think the concern about overnight self-improvement is mainly that there could be major algorithmic advances that you could apply to an already-trained model, if you could just think of them.
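
A hedged illustration of the kind of post-hoc advance being discussed: self-consistency-style decoding, where you sample several answers from a frozen, already-trained model and take a majority vote. The `sample` callable below is a stand-in for whatever sampling call a real model exposes, not a real API.

```python
from collections import Counter
from typing import Callable
import random

def self_consistent_answer(sample: Callable[[], str], k: int = 10) -> str:
    """Majority vote over k independent samples from an already-trained model."""
    votes = Counter(sample() for _ in range(k))
    return votes.most_common(1)[0][0]

# Toy usage with a fake sampler standing in for a real model call.
print(self_consistent_answer(lambda: random.choice(["42", "42", "17"])))
```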

Expand full comment

Fair enough. I'm a bit skeptical that there could be such improvements that greatly improve the utility of an already-trained model. Admittedly, _some_ improvements have been demonstrated (e.g. the reflection paper https://arxiv.org/abs/2303.11366). Still, training is a lossy compression step. Whatever information from the training data was lost during training cannot be regained by post-processing the model. A rerun of training is needed if some critical data was lost.
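
One way to state that intuition semi-formally, under the idealization that post-training tricks only ever operate on the trained weights:

```latex
% Training data D -> weights \theta -> post-processed model g(\theta) is a
% Markov chain, so the data-processing inequality gives
D \to \theta \to g(\theta)
\quad\Longrightarrow\quad
I\bigl(D;\, g(\theta)\bigr) \;\le\; I\bigl(D;\, \theta\bigr)
```

In other words, cleverness applied purely to the weights can improve how the retained information gets used (as the reflection-style results suggest), but it can't recover information the training run already discarded.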

Expand full comment

> But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years!

Well, yeah, those consequential wars were a major reason why large parts of Europe were behind in industrialization. Whereas in the US the relevant industries were massively boosted by the war economy (in WW2, mostly), with no fear that the factories would be carpet-bombed to oblivion.

Expand full comment

Except the dominance in automobiles started even before WW1, let alone WW2. And the US industrial dominance really began in the 19th century. While no doubt the world wars boosted the American economy, the timing doesn't work for them to be the start or the greatest portion of it.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I think the issue here may be scale. From a singularity/fast take off Doomer perspective, everything else looks small. But if you don't think that's likely enough to worry about, then something like "China gets a 40% economic boost" looks huge, and like something policy makers, politicians, and columnists are used to writing/worrying/talking a lot about.

Expand full comment
Apr 5, 2023·edited Apr 6, 2023

I want to pile on and agree with @erusian here. When I read, "Jobs and Gates got rich, and their hometowns are big tech hubs," that immediately jumped out at me as exactly what winning a tech race looks like: a critical mass of expertise, capital, and institutions in one place that is almost impossible to unravel or catch up with. "Big tech hub" is sort of an understatement in geopolitical terms for what the Bay area and even Seattle are. The NSA can leverage these companies in its ongoing mass surveillance operation, military companies can hire from the same pool of software expertise, and if the US ever got into a major war it would leverage this industry to the hilt. Even off-the-shelf consumer products have direct military applications: Starlink as well as several US-based consumer-facing apps (like Signal) are used on the front lines in the Ukraine war, and whichever country is home turf for these products gets the option of restricting or tacitly encouraging their use in war. https://qz.com/2142453/the-most-popular-wartime-apps-in-russia-and-ukraine-right-now

EDITED TO ADD: also I completely agree that AGI hard takeoff is a huge risk, that the benefits of "winning the race" are not worth this risk if a country can contribute to avoiding it, and that "if we don't China will win" is getting used (as a lot of appeals to national security often are) as a trick to get people to turn off their brains and it's worth unpacking it with posts like these. But I think completely dismissing the idea that tech races can be won—and result in long-term impact—weakens the argument here, because clearly they can and do.

Expand full comment

Yes, and it works in reverse. A country of thriving tech hubs is more fun than a country with immiserated slums. When D JFK broke US Steel and a couple of other D lawyers broke Bethlehem Steel, they immiserated Gary and Baltimore. When Pat Moynihan and Ralph Nader got through with Detroit, same. When D Jimmy Carter's judge broke Bell Labs it was great for Taiwan's electronics and doom for America's.

Whether you call a technology a race or not, rule by forced fame and states of siege is bad for the ruled. It's not like US politics is free of moral panics so we can trust the Anti-AI people to have some balanced judgement.

Expand full comment

I'd agree as well; Erusian makes some great points. We also have an arbitrary victory condition, 'leads to large-scale empires winning at life', as the only valid form of winning. But why are we stuck thinking of nation states and dominance in war?

Also these broad scale 'wins' are STILL not everywhere. The poor children who are mining cobalt for our devices in the Congo live in terrible conditions and do not have electricity. I'd say the Congo has lost and continues to lose the electricity race.

Silicon Valley won the tech capital race and decades later it is still important. Memphis Tennessee, Detroit Michigan, and Portland Maine did not win this race. The US won the race and for nearly 50 years a tiny country with 4% of the world's population controlled the vast majority of major computer technology companies, new inventions, etc.

If that's not winning, then I have no idea what is! A five-plus-decade economic, technological, and political advantage which will likely continue to some extent for several more decades is winning 'the race'. And everyone else lost, even if they still benefited and were not super poor in the end. Right now perhaps Austin is winning the intra-country race for new big-tech digital technologies in the US, and Memphis and Cincinnati and Philadelphia are STILL NOT WINNING... even if they aren't explicitly losing either, since they have phones, internet, etc.

Expand full comment

The US isn't exactly a tiny country, friend. You may have it confused with the UK...?

Expand full comment

The nuclear example was a race, but not in the way it has been portrayed. During WW2, the other powers knew about the theory but they focused on more high-certainty projects like RADAR because they needed something practical. The US was able to start and fund the Manhattan project in part because they weren't actively being invaded and were far enough away from the fight. It made more sense for Germany, Russia, Japan, etc. to pour resources into something that they all knew would work, then pour more money into making it better. At the time, nobody knew nuclear would work, so the crisis of the moment held sway in the decision-making process for countries close to the fight.

That said, I don't think it's fair to say that the 1945 version of the nuclear bomb was as much of a quantum leap above the other bombing technologies of the day. The deadliest bombing of WW2 wasn't either of the nuclear runs. It was the firebombing of Tokyo. While estimates of these bombings have large error bars, the Tokyo firebombing (the first one, the US hit Tokyo more than once) was at least more deadly than Hiroshima or Nagasaki, and possibly more deadly than both of them _combined_. Firebombing was horrific, and the only reason the US doesn't self-flagellate about those war crimes is because of how shocking the nuclear bombing was. Obviously nuclear bombs were different from even firebombing. (No longer could you dismiss a single plane as just a reconnaissance flight.) But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau.

After WW2, the real race began. The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets. In other words, part of the reason the Soviets caught up was because they copied the homework of the scientists in the US. The race then was in accumulating a nuclear arsenal, and in finding better delivery mechanisms (eventually landing on ICBMs and other long-range missile tech).

Are there lessons for AI/AGI here? Maybe. We might assume that countries will be less likely to pursue speculative technologies if they are distracted by larger crises. Maybe a war or a financial crisis. So Russia is probably putting less into AI today than they would if they weren't in Ukraine. NATO may also be doing less AI research than they would if they weren't in a proxy war, but then again, they're far enough away from the fighting that this may not slow them down. China ... doesn't appear to have any disincentive and can focus on AI development. If anything, having active fighting somewhere in the world incentivizes non-belligerents to seek advantage in speculative technologies the belligerents don't have time for.

We might also remember that a strategy of "push to be first" will not result in the other party developing their technology independently. Just as Russia stole US nuclear secrets, the most likely outcome in an AI race is that competitors who are behind will absolutely seek to steal your technology in order to catch up. Should you slow down, then? Will that work? If you slow down, your competitor will steal from you until they've caught up, then they will push forward into new domains and you'll have to play catch-up (probably by stealing from them). Would Chinese researchers steal? What about Russian researchers once the Ukrainian 'special military operation' wraps up? What about other actors?

If your competitor is actively building AI MK7 but you're doing alignment on AI MK4, is your alignment even meaningful anymore?

Expand full comment

I disagree re: nuclear bombs not being a big leap. Total damage in one raid probably isn't the right metric.

The firebombing of Tokyo was probably approaching the limit of the effectiveness of firebombing. The nuking of Hiroshima was near the floor of the effectiveness of nukes (one bomb, one city).

Assuming 15 nukes, the US could level 15 cities in one night. That isn't possible in a single night with a firebombing campaign. Tokyo was also quite susceptible to fire, with lots of wooden buildings, but nukes could threaten any city or even hardened targets.

No one is afraid of pissing off a firebombing power in the same way they are of pissing off a nuclear power.

Expand full comment

But they didn't go from having 0 nukes to having 15 nukes overnight. Manufacturing fissile materials is hard to do at scale and it took a couple years to get from 0 to 15. Nukes did not represent a large overnight jump in the Allies' overall capacity for destroying cities. As sclmlw explained, the new technology put things on a new exponential trend for this capacity.

This isn't to say they weren't very valuable, even in small numbers, and Japan did not know what the Allies' capacity actually was or how quickly it could change. But "number of cities destroyed in one night" is not the right metric for describing what changed between June 1945 and August 1945.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Exactly! I was going to make this same point. Looking only at 1945, the US spent their entire nuclear arsenal in Japan. So despite the fact that the technology had been deployed, there was technically a brief period after Nagasaki when the global nuclear arsenal was zero. It took time to build out manufacturing capacity.

And arguably, it wasn't the nuclear strikes that tipped the scale for Japanese leadership to surrender so much as the threat of Russia entering the Pacific war and doing to the Japanese what they'd done to the Germans. There was a lot of bad blood there, and the Russians were looking for an excuse to get retribution for their losses the last time they'd fought the Japanese. Nukes were at least a good public excuse for the decision, though.

Expand full comment

The change was the potential destruction. Japan didn't know what the limit was, but they knew it could now be orders of magnitude more than firebombing, and they had to respect that potential.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I don't think that's what forced the surrender. Japanese senior leadership knew it had lost the war long before that point. The purpose of continuing the fight wasn't because they thought they could win. They were still fighting to preserve the leadership structure after the war. That's why they kept coming back to the unconditional surrender edict and asking for at least a guarantee that Hirohito would stay in power. They didn't want their emperor to be tried as a war criminal with the Nazis, and repeatedly asked the US for assurances that the emperor would be spared. The US hinted that they would be magnanimous in victory, but weren't willing to give out guarantees, because that violated the 'unconditional surrender' edict. The thing that changed wasn't the bomb. It was the threat of an even worse conqueror, the Russians, ending all doubt as to whether the emperor would be left in power.

EDIT: The strategy worked, too. Hirohito died of natural causes as leader of his country in the late 80's.

Expand full comment

I don't disagree that eventually nuclear weapons became the bigger threat. After the war, the US didn't keep around their fleet of > 150 aircraft carriers, because it didn't make sense to, but they did build out their nuclear arsenal. My point was that the momentary illusion of a massive advantage to whoever had nuclear capacity wasn't what conferred the real advantage in 1945. Scott was looking at the nuclear race and saying that 1945 was the END of the race to acquire nuclear capacity. I'd argue that nuclear capacity truly only BEGAN in 1945, given that the transformative aspects of the technology couldn't be realized until yield and delivery mechanics were more fully fleshed out.

Expand full comment
author

"But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau."

Then why did Japan surrender after Hiroshima and Nagasaki, but not after the firebombing of Tokyo?

Expand full comment

See upthread. They surrendered because the Russians were pivoting from their fight in Germany and declared war on Japan. They wanted a reprise of their last humiliating war with Japan, and they wanted it to hurt. Hirohito and his advisors kept asking for assurances from the US that they would leave the emperor in power. The US was vague, because Roosevelt had declared unconditional surrender, but hinted that they'd be generous.

In short, the Japanese leadership wanted assurances they didn't get from the US. They surrendered because they got assurances of a different kind that if they kept fighting until the Russians arrived there would be no emperor after the war. Hirohito died as emperor of Japan in the 80's, so I guess their strategy worked. Lots of Japanese people died after the war was no longer in doubt, not knowing they were literally dying for the fate of their emperor alone.

Expand full comment

I'm not saying that dropping the bombs didn't contribute to the decision of the Japanese to end the war. I'm saying that the conventional wisdom that it alone was sufficient to convince them to end the war is unknowable because the Russian question loomed large (larger?) in the discussions among Japanese senior leadership about ending the war.

If you think of it as the Japanese not surrendering because they believed their own propaganda that they could still somehow pull off a win, I guess I could see why you'd also think they couldn't bring themselves to surrender until the US made it REALLY CLEAR they had lost. I don't think they were that stupid. I think it's clear from the accounts we now have that the leaders were most worried about their own skins, and near the end they made war decisions based on considerations of personal survival and the survival of the emperor.

Expand full comment

The other problem with the "nukes were irrelevant in WWII" argument is that it assumes things had to proceed along that timetable. Yes, by that time the Allies had beaten Japan. But if the bomb had been manufactured six months or a year earlier, or if the war hadn't gone as successfully for the Allies as it did up until that point...

Expand full comment
founding

Why does Japan care so much that Russia declared war on it? Sure, losing Manchuria would have been humiliating, but not worse-than-unconditional-surrender humiliating. And the allied blockade had already mostly isolated the Japanese economy from the resources of their overseas empire.

Meanwhile, Russia is not a naval power. They did not have a Pacific Fleet worth mentioning, and they did not have the industrial capability to build a Pacific Fleet in any relevant timescale. They had only a handful of aircraft capable of striking Japan from Russian bases. A Russian invasion of Japan was not a realistic threat.

I suppose the United States could have loaded Russian soldiers onto American landing craft in a gesture of solidarity, but the limiting factor there is the American landing craft, and there's no shortage of pissed-off Americans, Brits, Aussies, Chinese, Dutch East Indiamen, etc, to fill them.

Expand full comment

It was the perception of the elite in Japan that mattered. Russia opening up a second front against Japan was big, and it coincided with their German and Italian allies losing their wars. This wasn't a near-term strategic moment where we can measure Russian naval capacity in the Pacific as particularly relevant.

It was a turn in the war overall, and Russia making moves to attack Japan made it simply impossible for the Japanese to ever win and become the expanding empire on the mainland they wished to become.

It is also quite true that the Russians won the European theatre of the war, with their soldiers mattering as much as or more than US efforts. So even the loss of Japan's allies, and the increased pressure Russia could put on the Japanese once it had won the European war, was still Russia creating new pressures on Japan.

The nukes both did and didn't matter and Japan was always going to lose even if the Russians didn't join and the US had to take 6 more months to devastate their country. But the combination of Russia and the US being able to more singularly focus on Japan when the European war was won probably contributed more to Japanese leadership's thinking than the nukes alone did.

We also don't need to guess anymore or theorise as was done for decades. The narrative that the nukes were not that important came out of Japan when documents from the time were released in recent decades, long after the war was over. So the cottage industry, the useful US propaganda messages, and the history-book conclusions which developed and became entrenched over decades are just intellectual inertia in the West.

Japan's own records show it was a mixed bag, and they were indeed afraid of a new Russian front, which dominated their conversations at the time they surrendered. The Japanese also didn't really understand the nukes that well at the time; we can be myopic in thinking they knew about radioactivity right away or knew whether it really was a single bomb. Intelligence was mixed on that front when they decided to surrender.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

I share your doubt that the Soviet declaration of war was as important as later analysts have suggested -- as I said elsewhere, I suspect this is in part an effort to avoid saying anything positive about the nuclear bombings.

But I also agree the Japanese were quite unhappy about it. I would guess the principal worry was not so much that the Soviets would speed their defeat as that, if the Soviets were part of the victorious coalition, they would be in a position to demand concessions from a defeated Japan after the war, territorial and otherwise.

The Japanese probably figured (correctly) that the Americans wouldn't be interested in any permanent annexation of Japanese territory, nor in as much reform of the aristocratic Japanese society as communists might be, but neither of these things would be true of the USSR. It was definitely the lesser evil to have to surrender to the United States alone.

Expand full comment

Why did Japan care?

Japan and Russia had had a war recently (in the memory of those in power in 1945), Russia was then humiliated and wanted revenge, Japan knew it would lose now (1945), and the reputation of Russian soldiers and military command for brutality is and has for a long time been unmatched. It was fear for the continued existence of Japan as populated islands.

Expand full comment

The Japanese had a really bad week in August 1945: first Hiroshima, then the Soviet attack, then Nagasaki. Then, before the surrender could be announced, there was a coup attempt that tried to get its hands on the record on which the Emperor's surrender speech had been recorded.

Expand full comment

"The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets."

True, but even in the absence of espionage, the simple fact of demonstrating to the world that nuclear bombs work, and that the resources of one nation are sufficient to build them, told other powers a lot. _Many_ attempted technological advances fail, turning out to be rat holes that soak up resources and ultimately yield nothing. Once the USA ruled that possibility out, it gave all other nuclear weapons programs a major boost, even if they had not received a single bit of other information.

Expand full comment

I absolutely agree with this. The theory of the uncontrolled nuclear chain reaction was untested at the time, and it could easily have been a scientific boondoggle. Knowing that it was possible allowed many nations to start up their own nuclear programs that sought to build the bomb from first principles, not primarily as a reverse engineering project. To the extent the espionage was used, it seems it was more to help accelerate those programs.

That said, I think it's possible to prove too much with that example. When not at war, plenty of governments build megaprojects with uncertain outcomes. The US and Russian space programs were both large, speculative government projects. Others included the LHC, ITER, ISS, and the human genome project (partially). Just because AGI is speculative, doesn't mean people will give up before they reach it. Since it keeps bearing financial fruit with more R&D money pumped into it, people will be incentivized to keep at it.

Expand full comment

Many Thanks!

"The US and Russian space programs were both large, speculative government projects."

Agreed, and I agree with your examples. I expect that "copycat" programs probably face a somewhat lower bar to getting funded, since someone else has already provided the proof-of-concept demo.

I also agree that people are indeed incentivized to keep at AGI development, since, as you said, it keeps bearing financial fruit - which is why it is being driven by the private sector. ( though I wonder if there is a DARPA version training gpt-4-secret with all the classified stuff the government has added to the training set... )

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Sort of. The main reason Germany didn't pursue an atomic bomb was that the only known fissile material at the time was U-235, and the industrial capacity necessary to make enough weapons-grade U-235 was enormous. It's not even clear the United States would've been able to build a substantial nuclear arsenal in the late 40s if it had all required U-235. So as far as anyone knew according to the open literature in 1939, building an atomic bomb required a fantastic investment in isotope separation technology. It would be a superweapon -- but super duper expensive. Had it all stayed that way, it's entirely possible the nuclear bomb would've remained a weird white elephant, like the Me-262, something amazingly advanced, but just too expensive to be practical in that timeframe.

What changed it all was of course the discovery of plutonium, which formally took place in late 1940 by Glenn Seaborg at Berkeley[1], and the fact that Pu-239 is fissile. Plutonium can be isolated chemically from the products of fission, which means you can acquire it much faster and much cheaper than U-235. That's why all nuclear weapons since 1945 have fissile cores of plutonium, not U-235. It's the only material you can get cheaply enough to make nuclear weapons economical enough, even given their great destructive power[2].

This is also why the discovery of plutonium was kept secret and Seaborg's paper only published after the war. It's not that unlikely the Germans could have worked this out in the early 40s, since they were very up to date in 1939, but they had no fission reactor and no big cyclotron to make transuranics in quantity and test their properties, and nobody was going to give Heisenberg a shit-ton of Reichsmark to build one -- in this area there's a certain amount of historical luck, what with Lawrence being obsessed about cyclotrons before the war, and building up a very capable nuclear physics and chemistry group at Berkeley. That existing capital resource was critical to the discovery and exploitation of plutonium.

I'm not sure how the Soviets figured it all out, but Kurchatov was a smart guy, he had plenty of espionage results, and of course the Soviets build fission reactors and studied them.

My point being, knowing that U-235 was fissionable, and even that the critical mass was kg and not tonnes, would not by itself have led other nations to practical nuclear weapons. Plutonium turned out to be the key, and that was *not* widely known in 1939, and indeed the Allies tried to keep it a secret as long as they could. Of course, anyone who set up and studied a fission reactor would figure it out relatively soon.

---------------------

[1] Although looking back Fermi produced it in the late 30s with his neutron experiments in Italy. He just didn't realize it at the time, a very rare miss for Fermi, because he wasn't a chemist.

[2] And even then it took some very clever work with explosives to make Pu-239 work, on account of the small amount of Pu-240 predisposes it to predetonation, which is why the gun design doesn't work for plutonium.

Expand full comment

Fascinating! Love these details. It's often the inconspicuous details that turn history from a collection of weird/unexplained decisions into an entirely human story.

Expand full comment

Yes, the twists and turns are deeply interesting.

Expand full comment

All good points!

I think that there was an additional miss in Germany where they (wrongly) thought that graphite couldn't be purified enough (removing neutron-absorbing boron) to serve as a moderator in a reactor, and pursued a CANDU-like alternative, hence https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage

Expand full comment

Could be! The history here is fascinating. A real trove are the "Farm Hall transcripts," in which the conversations of German atomic physicists interned just after the end of the European war in England were secretly recorded. Here's one of the key transcripts, which recorded their reaction to a BBC broadcast announcing the Hiroshima bombing:

https://ghdi.ghi-dc.org/sub_document.cfm?document_id=2320

Notice they start off just flat out not believing it. Heisenberg starts off saying it would've required an absurd amount of isotope separation. Hahn (the guy who did the experiments that proved fission was occurring) was apparently very distressed that a bomb had been built at all. It's clear throughout the transcript that the idea of there being a *chemically distinct* fissionable (plutonium) did not occur to them at all. They also observe that they were handicapped by the absence in wartime Germany of "mass spectrographs", which was apparently what they called Lawrence's cyclotrons (the principles are very similar).

Expand full comment

>the only reason the US doesn't self-flagellate about those war crimes is because of how shocking the nuclear bombing was.

Partially that, but mostly because we won.

The privilege of being on the winning side is that you don't need to agonize over such trivialities as slaughtering innocent civilians en masse.

Expand full comment

We still self-flagellate about that, which is what I was talking about, but you're right that it's mostly to scare everyone else about not starting their own collection.

Expand full comment

Maybe it's unnecessary, but I'll add that WW2 was full of war crimes sanctioned by the top leadership of pretty much all the major players. Everyone talks about the Holocaust, and sometimes we bring in the nuclear bombs. The rest sometimes feel glossed over.

Those nuclear bombing runs weren't different in any moral way from what had been happening in both theatres of the war, and from both sides. There was this hypothesis (that predated WW2) that you could bomb your enemy into submission. Maybe that's correct, but not at a moral price anyone would have agreed to before the war began. They just kept escalating, thinking THIS much horror would be enough to force the issue.

Maybe the best you can say for the US is that the public were so angry after the sucker punch at Pearl Harbor that they did it all out of a blind rage. Meanwhile, the British bombing of cities was mostly initiated to stay relevant in the war. How is that for a justification of war crimes? It caused Hitler to retaliate in a blind rage, where arguably his bombing of London distracted from a more successful use of the Luftwaffe before that. So Churchill's gamble, betting on war crimes as a strategy, paid off by also exposing his own population to the same. Whose hands is the blood on? I'm not sure there's a law of conservation of blame - both leaders can be equally at fault.

The Japanese marched through China during a pre-game campaign of terror that was worse than what the Germans did at the outset of WW1. By that point, there was no excuse for pretending you could intimidate your enemy into submission with Assyrian-style brutality. It didn't stop the Japanese from raping, pillaging, beheading, etc. They earned the enduring animosity of the Chinese in a way the rest of the world has forgotten. But those atrocities continued as the slaughter continued, even though the rest of the world shifted focus onto tiny islands thousands of kilometers away.

Something about war brings war crimes with it, but those aren't always military strategies directed from above. Not like they were in WW2. In that war, the gloves truly came off. Nearly everyone, in nearly every theater, was willing to do what was necessary to win, whether through unrestricted submarine warfare, tricking/forcing untrained soldiers into becoming Kamikaze pilots, torture of POWs, etc. Collective memory has chosen one or two war crimes from that war to focus on, perhaps because the scale of brutality is too much to deal with. Easier to contemplate the simple horrors of the atomic bomb.

Expand full comment

The simple answer is that First Mover Advantage is real, and the institutions that the first movers choose to implement often become the de facto standards and prove highly durable. It's much better to have First Mover Advantage for AI occur in a liberal democracy.

Expand full comment

This is a good, short way to put most of what I'm trying to say.

Expand full comment

Perhaps an even stronger example to add to Erusian's list is communications. It's well known (among the few people who follow such things...) that the Anglo world very rapidly established dominance in telegraphy, culminating in cables around the world, and that dominance had an endless stream of knock-on effects, many of them of military significance, and continuing as one domino after another throughout the 20th C. (Telegraphy spawns radio and phones with their own value, but also radar as a side effect. These all spawn transistors which give us computers and mobile.)

There are books, for example, about Germany's endless frustration beginning around 1900 with the fact that telegraphy was so tightly sewn up by Anglos.

Here's an example of this sort of thing: http://blogs.mhs.ox.ac.uk/innovatingincombat/british-cable-telegraphy-world-war-one-red-line-secure-communications/

And at every stage it's basically the same set of players doing the most important work and reaping the benefits. Sure, other competent nations can extract some value once the field is commoditized, and there's a panic every two decades or so that "they" are taking over. But it's always the same pattern: the Anglos are doing the stuff that actually matters, and what "they" are doing is refinements that are no longer interesting to the leading-edge Anglos.

All of which gets us, of course, to where we are today with AI and (of course!) mostly the same usual Anglo suspects as the names that matter.

Expand full comment

In a "race", I'd expect "winning" to give some major reward like an ongoing advantage, and I don't think that's the case for these examples.

If anyone really won the automobile race, it was the Germans, or more specifically Karl Benz. But as you point out, that advantage didn't last long once the Americans (or more specifically Henry Ford) figured out how to make automobiles cheaply. Then eventually the Americans got surpassed by the Japanese who were nowhere to be seen in the initial automobile race but played a good game of catch-up decades later; more recently South Korea has caught up too and China is getting there rapidly. At every point in history, the leading automobile manufacturers largely turned out to be the leading industrial powers of the day; it would be shocking if the USA weren't the leading car manufacturer in 1950 even if Henry Ford had never existed.

Expand full comment

The reward isn't indefinite to be sure. Eventually other people can surpass you. But if you're afraid of AI creating the singularity then "eventually someone else can surpass you" is irrelevant. AI isn't going to allow for a century of competition and learning from American engineering until you can eventually outcompete it. The strong AI exists and that's that.

Expand full comment

Right. From the "AI Singularity" perspective all the other arguments don't make sense. But a lot of stuff doesn't make sense from the AI Singularity perspective.

Expand full comment

This depends on how nuanced your idea of "technological singularity" is. I truly expect AI to be a technological singularity, and that we are already in the "ramp up" phase. But what this means is not that it's the end, merely that it's a WTF moment (we haven't gotten there yet) beyond which reasonable predictions are impossible. The range of possibilities includes not only "humans are destructive pests" and "humans are to be worshiped and obeyed", but also "I'm going to pack up and leave", "humans are decent pets", and lots of things I've never thought of. And I can't assign any probabilities to which outcome will happen, in part because it depends on choices that are being made and have yet to be made.

I don't think that alignment is a good goal to work for. Not for an AI that's complete enough to have a sense of self. Friendly is a much better goal.

Expand full comment

I think the issue is that the value of winning is indeed pretty small in the grand scheme, and that's okay.

If the cost of winning the race is potential extinction, then the prize for winning is deeply inadequate. This seems to be Scott's claim, and I agree.

If the cost of winning the race is mainly having to put up with doomers yelling at you, then the potential gains far exceed the price.

The "race against China" people are implicitly saying "I think the chance of AI catastrophe is so low that I'd risk it for just a chance at one decade of 10% higher GDP." That's obviously not convincing if you're a doomer, but communication between doomers and skeptics is by all indications basically impossible anyways.

Expand full comment

Well, Scott thinks that Xi isn't a sadist, but I'd guess that many people aren't so sure. Given a genie that is, in practice, all-powerful to implement your wishes, you can have whatever worldwide totalitarian dystopia you've always dreamed about, and I don't understand the certainty that he doesn't actually want one.

Expand full comment

Most people don't want dystopia, so without evidence I don't see a reason to think that Xi wants one. His behavior is well-explained by the perspective of a leader doing what he thinks is best for the nation overall. I'm not saying he's great, but I don't think he does cruel things for fun either.

Expand full comment

I suppose that the word "sadist" isn't really the best fit here. Elsewhere in the thread people suggest Lewis's righteous busybodies:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

Expand full comment
Apr 9, 2023·edited Apr 9, 2023

I think there are many instances where you can look at Putin's actions and reasonably surmise that he either is actually sadistic, or at least will use sadistic methods in order to convince others he is. Some of his actions towards journalists & dissidents go beyond just jailing them, into acts that wouldn't be out of place amidst those of Mexican cartels or ISIS. He's at least someone without a strong preference for a non-sadistic method over a sadistic one.

As for Xi, it may seem to him a reasonable action to sterilize or forcibly impregnate Uyghur men & women respectively, but if that's the coldness of logic we can expect, wouldn't it also be reasonable to destroy your rival population if there's essentially zero cost to doing so with an AGI's help? There's always some non-zero infinitesimal chance they may prove a threat later.

Expand full comment
Apr 9, 2023·edited Apr 9, 2023

This is essentially what I think. Scott's perspective is very WEIRD-centric from where I'm sitting. Cruelty for cruelty's sake, raping & pillaging, & salting the fields after you have been the norm for 98% of human history. Most non-Abrahamic, & even many Abrahamic (like Russia's), cultures do not have the slave-morality trait of believing there's intrinsic worth to your enemy's life or existence.

Xi & Putin are from such cultures; the latter has probably killed people with his own hands, and has at the very least ordered the assassination of innocent journalists & dissidents. The former has citizens from a restive outlying region in camps getting sterilized if they aren't being non-consensually impregnated by Han citizens instead. I don't understand the assumption that we get to survive in a world where they reach the finish line first just because there are enough resources.

I don't think there are friendly handshakes at the finish line if the CCP or Kremlin gets an AGI aligned with them. I think there's a holocaust that wouldn't be out of place in a Cixin Liu novel waiting for the citizens of liberal societies.

Expand full comment

The reward doesn't have to be indefinite to be significant. The allied victory in WWII, coupled with the US's intact industrial base, coupled with the US being the only nuclear power in the immediate aftermath of the war, allowed the US to build the foundations of a Western, and later nearly global, world order that is still mostly with us today. Short-term technological advantages can give you a window of opportunity for creating favorable, longer-lasting institutional advantages, and that can matter a lot for the people who happen to be living while those institutions are influential.

Expand full comment

Also, in computers the rest of the world hasn't caught up yet. Yes, we in the rest of the world do have computers and a related industry now, and programming them is indeed my job, but what we mean by the computer industry is still basically a California thing, because that is where it started about 65 years ago.

Expand full comment

You simply cannot say "aligned" in the abstract without specifying aligned with what, least of all in the context of arms races. There is not a common denominator of moral thought underlying superficial differences between, say, Russia and the USA, or North Korea and the EU, or Iran and Australia. The average Russian thinks that invading Ukraine was OK and that nuclear first strikes are fine if they serve the greater glory of the motherland. A Russia-aligned Russian AI would presumably be more dangerous to everyone else than a non-aligned one.

The Byzantine invention of and monopoly on Greek fire seems to be about the clearest example of winning an arms race leading to winning wars. I don't know enough to assert this for certain but Wikipedia seems to agree fwiw.

Expand full comment

> There is not a common denominator of moral thought underlying superficial differences between say Russia and the USA, or North Korea and the EU, or Iran and Australia.

Is there not? That's a *very* strong statement.

Expand full comment

Is it? Look at the Wannsee Conference. I don't personally feel that I am in a "same principles, different application" sort of space with those guys, or with the Christians who a few centuries back were cool with burning people to death for being a slightly different sort of Christian, or with the Russians who are cheerleading rape and torture in Ukraine. There is no shared moral substrate.

Expand full comment

"aligned", in context, means "aligned with (a) human". A Putin-AI would certainly not be great, but if I sing the praises of Putin maybe it'll let me live a relatively normal life. An unaligned paperclip-maximiser AGI will kill everyone.

Expand full comment

Putin only lets you live a relatively normal life because you are not in Ukraine. It looks a lot as if Ukraine's attraction to him is accessibility. A Putin AI with a much wider reach, because it is much cleverer, is a threat to everyone.

Expand full comment

No, probably the unaligned AI wins because offense usually beats defense at the superintelligent level and everyone dies, unless the aligned AI stops the unaligned AI from coming into existence.

Expand full comment

"... the race was won by Britain and you saw several people racing to catch up like Austria"

This is a contradiction. If the race is won you cannot "race to catch up". People racing to catch up means that the "race" is ongoing.

The whole point is that, unlike in races, the real world doesn't stop at some arbitrary point on the road and crown a winner. People can be more advanced in a technology for some time, and that gives them relative advantages, but there is very, very rarely a "win" - a specific break point that ensures they are dominant in that technology for an excessively long time (especially in a zero-sum sense of preventing others from using the tech), or modifies the world such that they are suddenly advantaged permanently despite perhaps losing their edge in the technology later on.

IMO the main point (not even made by SA), is that tech isn't zero sum. Generally developing tech doesn't mean you stop everyone else from doing it, and in fact even if you do get them to shut down their efforts (like the Soviet computing example), they will get ahold of the tech by purchase or other means. You have to use the tech to do something zero-sum, like kill the other side, in order for it to be a "race". Otherwise it's more like a joint expedition.

Expand full comment

I think you're overemphasizing one specific definition of "race", but an "arms race" is usually more like what's being described here, where people keep spending resources to be slightly ahead on some progression.

Expand full comment

An arms race is a "race" because the assumption is that the arms will be used to kill the other side, such that the outcome will be concretely resolved in the near future (winner determined) and be zero-sum (such that what matters is *relative* armaments, not total armaments, and a small advantage reaps disproportionately large rewards).

These characteristics don't fit other tech. There is no concrete resolution that is akin to a war, where we lay it all out on the table and the winner wins. Someone having a slightly better mousetrap in Germany doesn't mean your mousetrap is useless, and total investment in mousetrap tech helps everyone.

Expand full comment

I think that's certainly a presumption people have, but people also compete to develop better military technology in cases where that isn't plausibly true (e.g. neither China nor the US can actually conquer the other) and I think most people would still consider that an "arms race".

Expand full comment

Great post.

Is there any chance you are thinking of doing a FAQ on AI risk and AI alignment, similar to the FAQ you did on Prediction Markets? I feel that, with your grasp of the complex jargon and the various alignment problems, and your clarity of writing, you might be the best person to produce the 'go-to' explainer for AI noobs. The kind of resource I could point a friend to who hasn't even used GPT, let alone knows any of the alignment debates.

Or if there is already such a resource (not Less Wrong, as I feel that's not clear enough for a newbie), can anyone recommend it?

Expand full comment

He has, though it is 7 years old now:

https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq

Expand full comment

Thanks! I'll check it out

Expand full comment

Double thanks. I'm way too ignorant of all this. While I'm a longtime reader of Scott's, I used to skip over the AI posts.

(Today, in the 4 seconds I spent on Google finding out what a "training run" is (the thing the recent open letter wants to postpone), the first answers I found had to do with athletes using AI to train for races. I didn't scroll down to see the other results.)

Expand full comment

Just one point that really needs to be considered further by this community: China is highly constrained in its development of language models by the need to satisfy CCP censors. I claim that China will be vastly better at alignment than the West, because at every stage of AI development, language models that do not fully align with CCP goals, language, and values will be weeded out. The only caveat is that the goals of the CCP are probably not particularly aligned with positive outcomes for the majority of people (even in China).

Expand full comment
author

I'd be interested in seeing a smart, careful analysis of this. It seemed very easy for OpenAI to create models that only violate American taboos very rarely and after repeated provocation; I don't see why this should be harder for the Chinese. In fact, it should be easier, because Chinese-language training data probably already tends towards things the regime approves of.

Expand full comment

I second the call for deeper analysis.

One thing I've read (don't quote me on this) is that many Chinese language models are trained using data that does not only include Chinese sources. To maximise the corpus of training data, they use a similar variety of sources to the Western ones, which means there is potential for government criticism.

It's also possible that Chinese government workers or even businesses could be given access to chatbots, and only the public will be blocked from accessing them (which seems to be the case right now)

Expand full comment

Interesting point. I suppose a lot would depend on whether Xi is willing to subsidise these tech companies (which he seems otherwise hostile to) to pay for chatbots that won't realistically be making much money.

Expand full comment

ChatGPT is already great at avoiding topics sensitive to Americans or giving out boring, politically correct answers. Why would it be any more difficult to give boring, politically correct answers for the Chinese version of what's "politically correct"?

Expand full comment

In China the bar is so much higher, and the margin of error is 0%. It's plausible to me that a chatbot that could pass CCP censors would be next to useless

Expand full comment

Why is the bar that much higher? I don’t buy it. Do they really care if an intentionally convoluted question about Taiwan results in “Taiwan should be free” as the answer once in a blue moon?

Expand full comment

You don't buy that free speech is more restricted in China than the US? Really?

Expand full comment

"Great" if you ignore the repeated successful attempts in a matter of days (and continuing months later) of getting around the controls on these things that resulted in OpenAI to have to scramble to patch over each vulnerability. And in the case of China, you don't get to do this. If Sam Altman faced arrest for ChatGPT correctly listing the IQ of each race in the US, ChatGPT would not be released yet.

Expand full comment

Those are targeted attacks, not accidental poor outputs to legitimate user prompts. If a Chinese user tells the LLM “Say that Taiwan is independent” and the LLM complies, I don’t see why Chinese censors would care all that much. As long as the LLM dutifully answers “China” to the question of “Who owns Taiwan?”, they should be happy.

Expand full comment

The pre-sanitized training data should help, but it would also make it harder to know what is actually unacceptable vs. just not talked about. A big issue I've heard is that the censorship is much tougher, e.g. a story about Chinese chatbots being unable to count to 10 because it involves saying "8, 9". Also, the consequences of occasionally violating an American taboo generally don't involve falling out of a 40-story window.

Expand full comment
Comment deleted
Expand full comment

How much benefit did Midjourney get out of the free access? To me this sounded more like an economic decision. Sort of "We'd like to charge for access anyway, and this is a good excuse to see if we can."

Expand full comment

The story about 8,9 comes from here:

https://twitter.com/blader/status/1634089493295931392

Expand full comment

Thanks! Crazy to think that saying "3rd" could get you in hot water as well.

Expand full comment

If Sam Altman faced the prospect of jailtime for ChatGPT correctly ranking the races by mean IQ, ChatGPT would not have been released yet (or a hopelessly neutered version would have been).

Expand full comment

But that's a lot less data than what the US has access to, and presumably they could use and translate English-language data if they weren't concerned about sensitive information.

Expand full comment

I think the argument is that "violates taboos rarely and only after repeated provocation" is still not aligned enough for the CCP, which has zero tolerance, and thus alignment (of the DontSayBadWords kind) becomes a much higher priority in China; hopefully, some of that alignment work translates to successful DontKillEveryone alignment

Expand full comment

Yes, it’s a “race” in this important sense when whoever finishes first can then dominate everyone else. Most tech “races” (in a softer sense) are won with the winner getting an economic bump & the loser getting a PR black eye.

Yan LeCun & others seem to think that, on the one hand, AI isn’t dangerous, but also is dangerous if China do it six months before the US.

Expand full comment

Scott's point, as I understand it, is that 'winning' the AI race will only pull you far enough ahead to permanently dominate everyone else if it results in a fast takeoff. In that case normal concerns about geopolitical advantage become trivial. He also argues that a fast takeoff is likely to kill everyone (I used to agree with this, but given LLMs' likely instincts I'd give that a 20% chance these days).

Expand full comment

That's clearly a wrong assessment though. An AI that could generate undetectable spyware could give someone a real advantage, lasting for perhaps a very long time, without involving a fast takeoff, or even an AGI. And the same technique could introduce subtle errors in certain calculations.

A fast takeoff is ONE way of "winning the AI race", but it sure isn't the only one.

Expand full comment

The NSA is in a league of its own for things like undetectable spyware, but the CCP is doing just fine. Introducing subtle errors to certain calculations at scale would be new, but the need to not be noticed seems to put a ceiling on how impactful that sort of thing could be.

Expand full comment

I would say that winning the electricity race probably benefitted America, winning the automobile race benefitted Germany in a more major way, and winning the (personal) computer race absolutely benefitted America.

Early adoption of electricity, one of the "wow" technologies of the era, contributed to America looking like the country of progress and reason, attractive as a symbol to many Europeans. Winning the automobile race means that German cars - and thus German goods in general - are still associated with quality and efficiency all over Europe and, probably, the world, in a way that, say, American cars aren't. It's a great part of the German postwar economic miracle and Germany's continuing top role in Europe.

America still continues to be *the* tech hub of the world thanks to its head start in the world of (personal) computers, thanks to the companies of Jobs and Gates and those. That's why most Silicon Places are in America, that's why European engineers flock to America. (Of course the "real" reason is higher wages in America, but a part of the reason why those wages are so high is the continual cutting-edge status of American companies.)

Furthermore, computers are a communication technology in a way that electricity (by itself) and automobiles aren't. The spread of computers initially in American tech circles is what allowed the "Californian Ideology" to form, and its takeoff all over the world then spread that ideology all over the world. Computers are a fundamental part of why "we're all living in Amerika", most crucially here in Europe. There were people in Finland putting BLM squares in their Instagram profiles when George Floyd died thanks to American communication technologies, and other people in Finland later making "woke left getting OWNED!" videos on YouTube about those people thanks to the same technologies.

It should be obvious that AI is a communication technology in the same way as, perhaps even more so than, personal computers. Who wins the race (if the race can even be won) will imprint themselves on global consciousness even more than with those other goods.

Expand full comment
author

Interesting, Erusian above claims UK won the electricity race and US won the car race. I don't see a strong argument for either.

I agree there can be "looking like the impressive country of progress" arguments for winning a race, eg the space race.

I think America's computer dominance is only contingently related to it being "first", in a "it's hard to catch up without tariffs" sense (cf. How Asia Works - also, China has various literal and metaphorical tariffs, and its tech companies are about as good as America's, even though it has a several-decade handicap). I also think more of America's advantage is general good policy and wealth, and that if for some reason Bolivia had invented the first decade of computer hardware, tech would have ended up in America anyway. It's true that the same good policy and wealth that ensured America would get tech companies also caused it to be where a lot of the early computing advances happened, but the advances didn't guarantee the companies.

Expand full comment

It's possible for more than one country to win technology races in the sense of deriving benefits from an early head start, of course. However, I'd argue that Germany won the car race in the sense that - I'm spitballing here, I'm not a car owner and not a part of the car culture - at least in Europe, German cars still seem to have the mark of quality that American cars don't. The upper-to-middle class still drives German cars; American cars are for rural people and American car hobbyists. (Tesla is an arguable exception, of course.)

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Japanese cars have a reputation for quality, too. And they didn't win any race.

Americans just value things other than quality when they make and buy cars.

Expand full comment

Japanese cars have a rep for quality, too, but it's the sort of efficient everyman quality. German cars have that, and a touch of something more. Unless the field has progressed - I'm not an expert, as said.

Expand full comment

From what I've read the German reputation is better than the fact. At least for cars.

Despite being German, I'm not an expert on cars either.

Talking about a different industry: I do like Miele dishwashers and washing machines. They make some quality white goods.

Expand full comment

If you're not an expert and your perception of the value of winning the race is a vague perception of quality decades later, then what are the chances you're operating on signal rather than noise?

Your perceptions could be explained by Germany losing but seeking out the upmarket niche, or just having good branding.

Expand full comment

Pretty high? Gotta go on something, though.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

There have been multiple car races. There was the initial race, where Americans built a bunch of cars that broke down on the side of the road, while Germans built cars that could take you cross-country. That built a reputation for the US of "American manufacturing might", which was meaningful in WW2 when factories turned out planes by the thousands and aircraft carriers by the dozens. It built a reputation for quality by the Germans you've alluded to.

Later, when the Japanese tried to enter the US market in the 1970's, they realized they had to improve their quality enough to overcome their earlier reputation for low-quality. That earned Toyota and other Japanese manufacturers a reputation for quality, as US manufacturers were complacent in the race.

Later still, the Koreans did the same thing with Kia and Hyundai. Now Tesla is racing to build out electric cars at the same time the Chinese are. Once again, there's a quality/quantity race, similar to the Germans/Americans back in the day. It's unclear whether Chinese companies will win out over Tesla, but it's certainly the 'car manufacturing race of our time'.

One constant in all these races seems to be that the manufacturers of yesteryear are NOT at the forefront of the race. They generally arrive late to the scene, while new upstarts push the boundaries of the field.

Expand full comment

I take "race" to mean "got there first." What happened with the car race is the US got there first and then Germany caught up decades later before eventually surpassing American engineering in various ways. Which is presumably what's relevant if you're afraid of a fast takeoff. If AI has a fast takeoff and takes over the world then metaphorical Germany isn't going to have a century to learn how to make cars and eventually develop a superior alternative.

Expand full comment

Scott's point is that a "race" is only relevant IF there's a fast takeoff, so using the case of a fast takeoff to argue that races can matter isn't actually an argument against him.

Expand full comment

I don't understand your logic and that's certainly not what I got from the piece. Could you explain what you mean a bit more?

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Scott says that if there's a fast takeoff, then yes there's absolutely a race and getting their first is extremely important (although not as important as getting alignment right first).

But if there's not a fast takeoff, then getting there first isn't really that important and talking about a race is wrong.

This is the argument he's criticizing:

"We shouldn't stop to worry about alignment, because that could make us lose the race which would be bad."

His response is that either:

A) There will be a fast takeoff, in which case alignment is extremely important and not worth sacrificing to win the race,

or

B) There will not be a fast takeoff, in which case there is not a meaningful race and slowing down a bit won't cost that much.

Expand full comment

That's not what I got. But reframing it that way: Yes, if it's a fast takeoff then there's a race. But if there's a slow takeoff it's still a race! Name any technology that had a normal takeoff like electricity or computers and you still see the first mover set the standards and remain dominant in the industry for at least decades to a century. (And if AI isn't going to do the singularity a century after it becomes strong AGI I think it's likely it won't ever.)

Expand full comment

But Germany got "there" first and the US caught up. While "first motor car" is a blurry concept, most historians award the trophy to the Benz Patent Motorwagen from 1886. The first American car didn't come along until 1893, and the US didn't take any kind of meaningful lead in automobile manufacturing until the Model T Ford in 1908.

Expand full comment

By that standard the first AI to ever exist was written in 1951. It's not about the first person who plausibly makes something that can be described as belonging to the category. It's about who makes a version that is sufficiently useful and widespread.

Expand full comment

Ah well in that case it was Germany again, with the Volkswagen Beetle.

Expand full comment

Nope, Ford beat the Beetle by 30 years. I actually mentioned the whole Volkswagen debacle above. The Nazis CLAIMED they'd done so with the Volkswagen. But the entire thing was a propaganda lie. The only person who received one in the 1930s was Hitler. The rest of the money was basically stolen by the Nazis for the war effort.

Expand full comment

"China has various literal and metaphorical tariffs, and its tech companies are about as good as America's"

Can anyone provide data or personal experience on this? Like, does Baidu actually return search results as good as Google's? I expect their social media and chat apps to be as good, since it's pretty hard to imagine why not, but I don't think their search engine (which actually improves with technology) has any presence in the Western market, so I'm not sure.

Expand full comment

Singapore has pretty good tech (at least per capita), and we don't really have any tariffs and started pretty late.

The whole 'tariffs make it easier to catch up' propaganda is now as wrong as it was during Adam Smith's time. The infant industry argument is terrible: governments are awful at picking winners.

Expand full comment

Aren't tariffs the opposite of picking winners?

Expand full comment

Well, I was assuming that you pick tariffs on specific industries that you expect to win in.

I guess, if you slap a blanket tariff on everything, that's not picking winners. (Though, of course, it's still bad policy.)

Expand full comment

Ah, that makes sense. I thought you meant specific companies.

Expand full comment
author

If you haven't read How Asia Works or my review (see https://astralcodexten.substack.com/p/book-review-how-asia-works ), I'm interested in what you think. If you have read it, I'm interested in why you think it's wrong.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I've read your review, and looked into the arguments in 'How Asia works'.

First: as a Georgist I agree with Studwell that land reform can be a good thing to unlock productivity. (At least in principle, details depend on implementation.)

As you say: 'How Asia Works's success stories are always Japan, South Korea, Taiwan, and China. Its foils are always Thailand, Malaysia, and the Philippines.' So causation vs correlation is hard to discern. I have a suspicion that, if you dug deeper into the data (or managed to tease out causation better), the argument would fall apart. But, of course, unless I actually do that investigation, that's all down to my priors.

See https://noahpinion.substack.com/p/the-polandmalaysia-model for a re-evaluation of Malaysia's supposed failure. I think that alone puts a big hole in the book's argument.

Looking through the comments on your review, there's not much I would want to say that hasn't been said there. Eg

> He sure waives away a lot of exceptions. New Zealand and Denmark are sparsely populated, so they don't count. Hong Kong and Singapore? Oh they're too densely populated so that doesn't count. India? Well let's just not talk about India, or all of south America for that matter. UAE, Kuwait? well they have oil.

Or

> I admit I am predisposed to disliking industrial policy-driven arguments, as Brazil (my country) is possibly one of the biggest failure cases for industrial policy.

> Our 80s US-backed military dictatorship tried hard to play by this book. Huge tariffs were put up against imports in strategic sectors while the government started up huge state enterprises. It backfired spectacularly, as most of the state-backed enterprises only made inferior copies of foreign products - the poster child for this being COBRA, the state computer enterprise which until its inevitable death did nothing but produce inferior copies of foreign low-end computers and sell them at a gigantic markup.

> While this was the most notorious case, there were examples like this in every field you can imagine. My father will always tell me about how incredibly shitty products were during the dictatorship years, as our "nascent industry" insisted in remaining nascent and simply extracting value from the fact the market had no access to alternatives.

These comments make Studwell sound a bit like a True Scotsman, when he advocates proper industrial policy. Especially since he gets to pick after the fact what he defines as the proper way to prop up infant industries.

---

In general, honest and competent civil servants are the most precious and scarce resource any country has, especially developing countries. Thus we should be distrustful of any policy regime that advocates using them profligately.

See https://web.archive.org/web/20070115180541/http://privatesectordev.com/ for a deeper dive into these ideas. It's based on 'Just Get Out of the Way: How Government Can Help Business in Poor Countries' by Robert E. Anderson. (Yes, the author regrets the title of the book. He's a development consultant and former World Bank economist. I exchanged a few emails with him back in 2009.)

Edit: see also the other entries in Noah Smith's series of posts, like https://noahpinion.substack.com/p/mexico-a-development-puzzle Noah Smith also explicitly mentions Joe Studwell's book and how that framework fits with his analysis.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

You may be interested in reading this review/refutation of 'kicking away the ladder'. It talks about the mix of industrial policy and tariffs that Studwell also espouses. https://eh.net/book_reviews/kicking-away-the-ladder-development-strategy-in-historical-perspective

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I agree that the electricity and car "races" do not seem to have provided a lasting advantage for the winner, although it may be because those happened before my time and the effects got harder to see due to time and history being written by the winners...

For computers, and then later for computer networking, the situation is much clearer though: there is a winner effect, both for individuals and for countries. Computers are clearly American, or at least Western, as any non-English-speaking person doing some programming can tell. In the early days of personal computers (Commodore 64, Sinclair ZX, ...) you had a few attempts to partially localise computers, but it did not hold. Then the PC took off and Windows+Intel really was a monopoly crushing anything in its path. Now less so, but still, Linux and its derivatives are all descendants of OSes built in the USA, as are most CPU designs if not their manufacture.

With the rise of the internet, it's even clearer: GAFA are American (and mostly still are, even under globalisation), and some crucial infrastructure is still linked to America, even to the public/military sector in some cases.

And the recent TikTok, Huawei, and China trade war is at least partly resistance to the diffusion of this control to actors outside the US (certainly outside the US and its allies; ARM, for example, can be seen as something outside the US, but it is UK + Japan, which are the closest allies one can find). Just look at the embargoes on EUV lithography, CPU designs, etc. currently affecting China. It's probably a little bit late to kill the Chinese advance, but the simple fact that it is taking place and having an effect shows how much IT is under Western control.

So yes, if you look at information technology, which has been the main growth sector since at least 1980, it's hugely US-centric with a pinch of allies included. China is entering the sector and it is not going smoothly.

The other thing is biotech, and again, the US and close allies control this. It seems India wanted to climb the low-cost ladder, but patent wars are strong here too.

So given the last 50 years, I do not think that the question of who gets the first AGI, and especially who gets the first self-improving AI, will have only small consequences.

Expand full comment

On the other hand, since WW2 America came to be the cutting edge of most things, with global experts from all sorts of fields and industries flocking there. Since this applied to a lot of things that other countries initially "won", it doesn't look like a race dynamic is a good explanation.

My guess is that if computing technology was initially led by Britain or Poland the US still would've caught up sooner or later, if only due to the huge internal market and number of experts there.

Expand full comment

That's perfectly possible; in fact I think this is often the case: the discoverer is not the one who benefits, if anyone does. It's the one that industrializes the tech, that uses it at a scale where it has societal and/or military impact.

So it all depends on how the first AGI is put to use and at what speed it can take off... But obviously here we assume either a relatively fast takeoff, or at least an AGI that is significantly used within the winning country before others have a chance to do the same. That, I think, is what is meant by winning the race: be the first with the tech at a scale where the tech is relevant. This depends on the tech: a few bombs for nuclear physics, a few intercontinental missiles or enough useful satellites for the space industry (in its current incarnation), double-digit user adoption for the personal transportation or IT industries. For AI, I guess it's like the nuclear bomb for a fast takeoff, or the car for a slow takeoff. But it's surely a killer advantage if the race is won; you just have to correctly define what winning the race means...

Expand full comment

I agree in principle that a country could gain a decisive advantage by being the first to scale a technology; it seems like the US had nuclear supremacy in the 1950s (before the USSR got ICBMs) but didn't press its advantage.

I think a key disagreement is about whether this is a technology where (absent imminent superintelligence) 'racing' points us to the right cluster of tools and concepts, or is just a term that could be stretched to fit.

Expand full comment

Indeed. Just, in case you are not aware, look up the von Neumann preemptive strike. The USA came close to pressing its advantage, precisely because the window would not last and many people knew it.

How close the strike was can be debated, but it's not especially hard to imagine a lot of alternative histories arising from different decisions around nuclear weapons in the 1935-1950 window (between Szilard's chain-reaction concept and the USSR's bomb). In our particular history, nuclear power had more of a status quo effect, but I don't think it had to be so. Not even that it was an especially likely outcome...

Expand full comment

> China has various literal and metaphorical tariffs, and its tech companies are about as good as America's, even though it has a several-decade handicap

They are certainly not about as valuable[1].

> I also think more of America's advantage is general good policy and wealth, and that if for some reason Bolivia had invented the first decade of computer hardware, tech would have ended up in America anyway.

There are some industries that haven't ended up in America, and even some parts of the tech industry, so good policy and wealth doesn't seem to guarantee that the US dominates every industry in the long term. Isn't there a tension between your argument that the success of Taiwan and China proves that being early doesn't matter, and your argument that policy and wealth guarantee that the US dominates every industry? (Of course, there's the same tension between my argument that US dominance proves that being early does matter, and my argument that the success of Taiwan and China prove that it's not guaranteed by good policy and wealth.)

[1]: https://companiesmarketcap.com/

Expand full comment

While I don't claim to know anything about the 'electricity race' in Europe I'd argue that infrastructure tends to be a special case because there's 'second mover advantage' with massive infrastructure investments which are supposed to last for decades. America had landlines well before China did. But China compensated adeptly by developing robust cell phone networks. China's delayed infrastructure development allowed for more advanced infrastructure development. At the very least, China's focus on cell phone networks eliminated China's inferior telecom position.

I don't think there's a 'second mover advantage' when it comes to AI, though I'd be interested to hear someone make the contrary argument.

Expand full comment

It feels like you're missing much more obvious outcomes that are less dramatic. Not even 20% growth. Say China gets 0% extra growth but believes their new missile AI tips the odds of a Taiwan invasion in their favour. Not massively, not with omniscient AI, but, in their eyes, it goes from a 55% chance to a 65% chance. And they also believe that advantage will go away in 24 months when everyone else catches up.

Taiwan invasion. Repeat for every other Chinese territorial ambition in the sea, India, etc.

It's just a normal arms race where you want to maintain an overwhelming advantage to maintain Pax Americana. Doesn't have anything to do with what transhumanists worry about.

Expand full comment

Yeah, this is what "tech race" is actually about. Compare chip tech.

Expand full comment

If Xi is sensitive to 55% vs. 65% chance of success, presumably the threat of nuclear escalation would be a huge factor, as would uncertainty about the performance of existing systems that've never been tested in actual war. Also, if China is pulling ahead or maintaining a lead in an increasingly important field then it makes sense to wait.

If this is the sort of thing causing a 1% additional chance of losing Taiwan, that's not really worth playing Russian roulette with civilization over.

Expand full comment

What is the big downside of China achieving its territorial ambitions? It doesn’t seem to match up in any serious way to the “doom” scenarios, so who cares?

Expand full comment
Comment deleted
Expand full comment

>Hey, aren't you the guy who was saying it would be better for China to get AI first and dominate the field because of their excellent safety-conscious behavior?

That isn't what I said. I was saying it's not really clear they're less safety-conscious than the area of the US known for a corporate culture of "move fast and break things". And aren't you the guy who seems to think the current threat to US hegemony is literally the devil? Funny how that works, eh? They were friends we needed to increase trade with when we wanted to pull them away from the USSR. Now that they are more of a threat to us they are suddenly the great amoral Satan.

Expand full comment
Comment deleted
Expand full comment

You understand they would say the same thing?

Expand full comment
Comment deleted
Expand full comment

Yeah and there was also a subtext of "if we trade with them, they'll end up more free market."

Unfortunately we (or at least the people at the top) ended up more socialist instead. Unsurprising that our attitudes are now different.

Expand full comment

It's definitely not an existential risk, though nuclear escalation is quite a big deal.

Expand full comment

War is bad even if it's not Götterdämmerung.

Expand full comment

Sure, war is bad, but I thought we were weighing how best to prevent the apocalypse, not whether China being 20% larger will make us sad or not.

Expand full comment

I think that’s pretty heavily underplaying how big a deal war in the South China Sea and great power contention could be. Nuclear competition can be apocalyptic enough on its own.

In any case, I think discussions are worth having even if they don’t pertain to the end of all things. In fact, I’d say the lesser discussion is more worth having if it pertains to stuff that is more likely to happen.

Expand full comment

This article is about what people who _don't_ believe in AI apocalypse should care about. I'm not convinced that an AI edge would cause China to start a war over Taiwan (one that they wouldn't have started anyways), but if it _did_, people who don't believe in AI apocalypse should rightfully view that as something to be _very_ concerned about.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I just don't think that comes into the calculation in any meaningful way. People seem to think there is already a, what, 50/50 chance China will start a war there in the next 20 years? A war, I would point out, that is in some sense an extension of its previous civil war, rightly or wrongly.

Look at what a hard-on we have for Cuba, and that was never even officially part of the US. Imagine if it had been a US state from the get-go somehow, but then at the end of the Civil War the US couldn't take it back because, despite winning the war on the mainland, the UK said "nope, the CSA gets to live on in Cuba". How hungry to go get Cuba back would the US have been?

Anyway, so what we are worried about, then, is a scenario where we decide to slow AI progress by, let's say, 18 months by being more careful, and due to this, in 2029 China develops a slow-takeoff AI that gives them some level of "advantage". And using that advantage it is now, what, 60/40 likely it invades Taiwan? And then that invasion with their superior AI leads us to start a nuclear exchange, because it is just SOOOOOOO important that the US have an ally 80 miles off the coast of China on former Chinese territory that we are willing to start a nuclear war over it?

We are pretty far down the chain of probability here. Nonsense. It would be at worst like Ukraine. And that is something which, while terrible for Ukraine, is not of a scale that needs to factor into AI safety policy discussions.

Expand full comment

I'm extremely confused by your sequence of comments here. I agree that an AI edge will almost certainly not change the math on a potential Chinese invasion of Taiwan. I said as much in my first comment. But that's not what I was replying to, because that's completely orthogonal to what you originally said.

I was pushing back on your comment, "China invading Taiwan is nothing compared to x-risk so who cares" which was missing the entire point.

If your original comment had made the argument that an AI edge will not change the math on China invading Taiwan, I would have mostly agreed with you and not bothered to say anything.

This follow up comment seems completely unrelated to both your original comment and my reply to it (although it's very much related to the original thread at least)

Expand full comment

War is bad, which is precisely why starting WW3 over a Taiwan invasion is absolutely bonkers. Better every Taiwanese child be burned alive by Chinese bombs than an all out hot war between the US and China.

Expand full comment

People care about non-doom scenarios 24 hours of every day; have you ever seen the news? China starting to use the force of arms to achieve its ambitions would signify the complete breakdown of the current global trade-based world order, with an unprecedented worldwide economic crisis and uncertain long-term consequences. Sure, this is unlikely to cause the total extinction of humanity, or even the breakdown of civilization, but it would be a bigger deal than anything that happened in the last 70 years.

Expand full comment

Yeah, but it seems like an odd scenario to be worried about regarding AI specifically. So many things need to break right for it to matter. The question is about caution vs. no caution. The argument is "No caution, because CHINA!" For that to be the argument:

We need to have our caution cause us to fall behind China, AND the AI they develop needs to not be replicable by us, AND it needs to help them plan/win a war to the extent that they are now emboldened when they wouldn't have been anyway (if they were fighting the war anyway, it doesn't matter what AI policy was), AND the AI needs to have a fairly quick takeoff, but not so quick that it creates a singularity, and not so quick that it doesn't want to participate in a war or convinces them how to achieve their aims without war, AND all that needs to be worse than whatever happens if OpenAI or Meta or whoever gets AI first (which isn't totally clear to me).

It is just such a specific scenario to be the overriding concern in "should humanity slow down its AI progress". Not to mention I think the scaremongering about Chinese "expansion" is pretty overblown to begin with, and kind of ironic coming from a country that has bases all over the world.

Yes, we are the global hegemon; yes, China will join us at the top eventually; and yes, if we cannot be mature about that, it will lead to war. We need to learn to share if we want to avoid war, not worry about AI policy.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I agree that China isn't an important consideration as far as AI is concerned, except for the delay that the Taiwan invasion shutting down TSMC would cause, which ironically would probably come out as net-positive given doomer priors.

Expand full comment
founding

China's current territorial ambitions are worrisome and problematic but not literally apocalyptic. What is at least *potentially* apocalyptic is China's resentment of the history of colonialism and their desire that there should be zero possibility of any foreign power meddling in or constraining China in the future. In the case of a Chinese-aligned AI, that could lead to "OK, gotcha, so, logic demands the first thing we do is kill all the round-eyes; I'll program the killbots accordingly".

And for that matter a human Chinese leadership with a sufficiently overwhelming military advantage could come to the same conclusion. But I'm less worried about that. Not that I'm hugely worried about the AI version, but it's at least on the table.

Expand full comment

I think this argument relies too much on Paul Christiano stating that a slow takeoff would equal GDP doubling every 4 years. Why is that the outcome of a slow takeoff? What if a slow takeoff means GDP doubles every year? Would that put nations at risk? Or what if it doesn't affect GDP much at all, but does lead to a number of powerful new inventions?
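
(A quick back-of-the-envelope conversion for scale: a doubling every $n$ years corresponds to an annual growth rate of $g = 2^{1/n} - 1$, so doubling every 4 years works out to roughly 19% per year, while doubling every year means 100% per year.)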

Maybe China gets to JinPingT-6, and it's still just a language model, but it helps scientists enough to get access to nanotechnology or some other theorized super-advancement that enables the country to annihilate its enemies? I think these kinds of scenarios are what people are worried about when they describe an AI race involving China.

Expand full comment

To make this work, a lot of things need to happen all at the same time. In order for China to annihilate their enemies using JinPingT-6, this particular model needs to be right on the verge of being good enough to make a breakthrough in some tech, so that American ChatGPTs wouldn't be able to replicate it even knowing that it exists. And this new technology has to be manufactured fast enough that the competitors wouldn't have time to catch up, and it has to be strong enough to actually produce a big enough advantage.

Expand full comment

Also, if the tech is this good, then we'd pretty quickly get to the "personal solar system" situation that makes conventional geopolitical conflicts almost irrelevant.

Expand full comment

I find it very dubious to think that GDP can truly increase that fast. We're only really interested in consumption (more housing/food/whatever available for use), not some imaginary financial institution that has quadrillions on the books but no physical assets. The manufacturing and logistical systems would need to be built, existing systems upgraded, resources mined/recycled, etc. All of that takes a lot of time, even if we had the technology to do it (which is far from certain, even with an AI that could pretty much magically invent technology from nothing). Even a 20%/year increase from current would mean massive changes involving billions of people (i.e. requiring their cooperation in the form of workers and other functions) moving on unrealistically fast timelines. Most companies don't hire people and have them working (and trained) on a new function within days, which I think would be required. Instead, we'd be looking at weeks or months to hire the workers, months or years to train them, etc. In order to have such massive growth we would need to have entirely new ways to do all of these things, or somehow bypass people entirely. Both seem quite implausible for the short term. AI will be busy increasing its own capacity and working on the lowest hanging fruit. Otherwise we're really talking about fast takeoff in AI ability, where it's inventing what we would call magic and then using that magical physics-breaking technology to do everything for us, but in this scenario (as opposed to typical fast takeoff scenarios) it takes time to implement and we *only* get 20% a year.

Expand full comment

I agree with you here. You can take the concrete example of nuclear fusion, which Scott mentions. The presumption seems to be that an AI could output a design for a fusion reactor, which we (or China or whoever) could then build and unlock vast amounts of energy. But this is quite problematic. I'm not convinced that an AI could actually just output schematics of a working nuclear fusion plant. Is intelligence alone enough for that? Surely it would require experimentation, modelling, learning from the real world, not just consumption of what we already know. Even if you have the schematics, you still need to physically build the plant, fuel it, and prepare the power grid for it. That could take decades. It's hard to see how this in practice corresponds to anything like 20% growth, or China magically having nuclear fusion plants after just six months.

Expand full comment

I find it dubious that GDP has any particular meaning to someone not involved in international commerce. It used to, in a squiggly kind of way, but then they redefined it to include things like lawyers' bills. It now includes a bunch of things that actually have negative value, being counted as positives because somebody pays for them. In a way it reminds me of a salt-shaker set on sale at Amazon for $2000. You KNOW that it's not the salt-shaker that's being sold.

So I can believe a 20% increase in the GDP, but I don't think that would have much relevance to the gross domestic production.

That said, AI should be EXPECTED to increase the profits in lots of particular areas. But Amazon having warehouses that are cheaper to operate doesn't translate into anyone but Amazon being better off.

However, one thing that should be expected is "simulated hand labor" getting a lot cheaper. This is likely to translate into a lot of Chinese workers needing to find new employment. Phone workers who operate off scripts are likely to face technological unemployment. I'm not really sure about robots working in the fields, though. I've got a sneaking suspicion that they might be more expensive to operate than is commonly supposed, but there have been demonstrations of robotic weed pickers. Note that these changes improve the economics of various companies, but do it by eliminating human labor. What does your unemployed teacup painter retrain as? What about the guys who write "news articles" based around PR releases?

I don't think "GDP" has that much relevance. What looks like happening is a shift of wealth away from those who do jobs that turn out to be automate-able. But who can predict what jobs won't be next year or the year after? And if it takes a couple of years to train for a new job, you NEED to be able to predict what jobs will be available when you finish your training.

Expand full comment

When Facebook and Amazon got going and we had the dot-com bubble, which popped in 2000, nothing really had been built yet. I'd be surprised if even 5% of FB's current server capacity existed back then, and AWS didn't even exist yet. And the economy saw a big boom due to speculation.

What if 20 of those bubbles happened at the same time when some low hanging fruit was picked by the AI for us?

It is true the real world impact is rather limited and it takes a decade or two for any technology to scale up, unless we're looking at some self-replicating nanobot tech or similar. But the economic impacts and changes in what people study in university and what prospects people have can change rapidly.

Look at a war: on the first day or month of a war, maybe not very much stuff is blown up yet, and the roads and buses and trains and such still run. But people's lives do change quickly under such circumstances.

Expand full comment

I have no doubt that AI can be transformative and a major area of growth (or even assisting in growth in multiple areas). I don't think people truly understand how much 20%/year really is, let alone a doubling over four years or one year. Those are insanely high rates, such that I don't think people can process that kind of growth at all. The only way that would work is if AI were growing the real economy (again, not on paper that affects no one and nothing directly) without any human input. I think that is physically impossible. As in, to do so would literally need to break the laws of physics.
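
For scale, here's a quick back-of-the-envelope check of what 20%/year compounding implies (an illustrative calculation only, with GDP normalized to 1 at the start):

rate = 0.20
gdp = 1.0    # normalized starting GDP
years = 0
while gdp < 2.0:           # how long until the economy doubles?
    gdp *= 1 + rate
    years += 1
print(years, round(gdp, 2))    # -> 4 years, about 2.07x
print(round(1.2 ** 20, 1))     # sustained for 20 years: roughly a 38x larger economy

So "doubling over four years" is exactly what a steady 20% rate means, and holding it for a generation implies an economy dozens of times larger than today's.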

Expand full comment
Apr 5, 2023·edited Apr 12, 2023

Yeah, Scott's mistake seems to be an implicit assumption that a continuous difference in the probability of invention implies a continuous difference in technological advancement. But even in slow takeoff the AGI could help the leader-in-the-race develop a new invention which gives a discontinuous advantage, analogous to nukes. Also, the AGI doesn't have to be that much smarter in general for a novel invention, e.g. look at the progress in protein folding just from AlphaFold.

Expand full comment

We may also see the AI winner having a negative effect on other nations. Getting even a 10% boost to your GDP is whatever, but what if you hit China with a negative 50% GDP shock that cuts their economy in half?

Say we use an AI to make a breakthrough in 3D printing, and the whole China-and-Taiwan situation doesn't matter as much as it used to? Just like how solar power and electric cars may make oil less important? The Stone Age didn't end because we ran out of stones.

How will a nation feel, and what will it do, when its economy collapses? Will it start an external war? A civil war seems inevitable.

I think we will not see a slow, steady rise in economically important technologies. Even if the new 3D-printer chip factories invented in collaboration with the AI take 10 years to build in the USA, a lot of the economic value of companies and Taiwan and such is based on speculation.

It could be, and likely will be, a series of such seismic shifts in technology. Even if the AI tech itself is slow and steady, the impacts will not be!

Expand full comment

I think the biggest/most realistic source of worry should be economic resiliency (which also hurts extra during poorly timed wars) and economies of scale.

This kind of race is not so much a sprint as a dedicated marathon. The main example I'm thinking about is solar energy: the US and Europe sprinted to get the tech, but China ran the "marathon" of scaling up, to the point that they dominate the industry. Over time, their economic advantage ran everybody else out of business. Sure, everybody has access to *buy* solar, but not everybody has access to *make* it, imparting fragility on their economies.

Expand full comment

Why do you describe global trade as fragile?

Expand full comment

He seems to be saying that economic advantage means Chinese companies can outproduce Western ones, such that Western companies cannot compete and therefore shut down. If solar energy were the kind of race that allowed domination of those that didn't have it, China would currently be in a position to do that. (Energy is far more fungible than, say, nuclear weapons, so that's not likely to be true in that case).

Expand full comment

Also the Chinese are happy to sell you solar panels.

Expand full comment
Comment deleted
Expand full comment

Solar at least has the nice trait that shutting off the supply of new panels doesn't do much to eliminate your existing energy supply. Zeroing out the first derivative would hurt, but there's far less huddling in darkened rooms.

Expand full comment

But China does dominate production of solar panels. In good economic times you don’t need production capacity to secure goods, but your ability to do so is still contingent on others —> points of failure beyond your immediate control. Instead, you have to try to rely on other mechanisms to stabilize that failure point.

Expand full comment

I'd agree; people forget. If China didn't make 'everything' then no one would care about them. China has no meaningful resources and until recently wasn't host to so many consumerist developed cities. India has a similarly large population, and while it is important globally, it isn't the same sort of player in the game of thrones for empires which we call superpowers today.

China simply wouldn't matter nearly as much if it didn't have its manufacturing base. So doing things like winning at solar panel production and winning at a hundred other areas of manufacturing means they won the overall manufacturing race from the 1990s until today and are still winning. The device in your hand was made, or parts of it were made, in China, while probably none of it was made in the US.

This doesn't matter now, but if China and the US went to war and all imports were cut off overnight, it would devastate the US economy and people would be severely impacted.

If you think this doesn't matter, then you simply don't live in the Rust Belt in the US. It matters a LOT to them that the US lost, and actively helped itself lose, productive capacity and factories. Ask the people in northern England how they feel about the UK losing the manufacturing race, after having previously won that race at the beginning of the Industrial Revolution.

Expand full comment

Yup! These aren't "win and rest" races. These are marathons for life.

Expand full comment

It's sensitive to political concerns. Buying oil from Russia or electronics from China is a great business deal until you disagree with their territorial ambitions.

Expand full comment

Yup. So we don’t even need “poorly timed wars” as was mentioned in the article (although that oftentimes forces the issue). Just poorly-timed changes to economic conditions can cause chaos, even if not game-breaking (e.g. covid).

Expand full comment

One can't make general claims about global trade as fragile; it needs to be evaluated on a point-by-point basis. Fragility refers to several ideas, one dimension of which is concentrated points of failure. Resiliency involves diversification of risks and responses to said risks.

E.g. sanctions on Russia didn’t work as well as intended —> oil trade more robust than people had imagined (as Doolittle identifies below).

But if China decides to not sell you panels —> concentrated point of failure.

Expand full comment

Sorry, but global trade *is* fragile. It also has a tendency to route around damage, but that's limited. What's often also fragile is demand for the product. That probably doesn't apply to solar cells, and starting up a major production facility would likely require 5 years or more as well as several people making major commitments.

Things that have broken global trade include wars, even trade wars, pirates, technological shifts, and, in some areas, just shifts in fashion. (The silk route was repeatedly broken by pirates.)

Expand full comment

I agree?

I’m not sure what your point was. I was simply trying to say that a blanket statement loses a lot of resolution, and fragility can exist on a spectrum in a heterogeneous way across global trade.

Where you want to define your cutoff point for fragility is probably up to your appetite for disruption. Btw, I was trying to respond to Matthias. I think it’s kind of boring to make a global statement on fragile/not fragile, instead of parsing out the extent of stress/disruption that different pieces of a system can handle.

Expand full comment

"The most consequential “races” have been for specific military technologies during wars"

This and the preceding paragraphs make it sound like you don't think the outcomes of the automobile and computer "races" were important.

For both those examples you say "the overall balance of power didn't change", which seems true. (I don't have the detailed knowledge needed to actually say whether it's true or not, hence the word "seems".) But those races were won by powerful nations, and they continued to be powerful afterwards.

Consider the counterfactual world where it's not the USA that wins the computer race. A world where it is another nation within whose borders many of the early advances are made, standards are defined and which is first to have widespread adoption of personal computers. In that world is the internet as American as it is now? Is Edward Snowden an American or is it the "winner" nation that is now spying on everyone? Does the balance of power not shift?

Obviously it's difficult to argue such a counter-factual. For things to have gone that differently they probably would have had to start out quite differently.

But the fact that powerful nations continued to be powerful after they were the first to develop a new technology does not support the idea that it's not important to "win the race" for that tech.

Expand full comment

It's not that the powerful nations continued to be powerful, but rather that their competitors did not become powerless. The USA might have won the computer race and it gave them some advantage over the years, but through all of these years Britain, Germany, China and basically every other country still did fine.

Expand full comment

If you are suggesting that winning the computer race didn't matter because the other nations did fine, I would argue that the balance of power, or relative difference in power of nations, is on a continuum. The competing states did not become powerless, but I do believe that they became relatively less powerful.

So coming back to the original post, I think the statement that the balance of power didn't change is false. The USA not only stayed the most powerful when it won, but was able to increase its advantage.

Expand full comment

Indeed. Also, the early advances in IT were trickling down to everyday life and really generating significant money in the eighties.

Which is also the end of the Cold War, with the USSR collapsing into Russia + former provinces + Warsaw Pact allies in '89-'91.

The US winning the computer race had a significant influence on the USSR's collapse. It's not the sole element for sure, maybe not enough in itself (hard to tell), but claiming that winning this particular tech race had no influence on the global power balance seems difficult to defend...

Expand full comment

The argument is less that it "didn't matter" and more that it "didn't matter enough/in the long run". The computer race certainly gave the US some advantage, but it didn't allow them to dominate any other nation. Other nations soon developed their own similar tech, and while the US maintains the edge to this day, it's not an overwhelming edge.

If China wins the AI race, they will probably get some advantage. But it would not be an overwhelming advantage, not something that would throw the power balance out the window, and other nations will continue to do fine.

The point Scott is making is that people talk about China winning the AI race as if it would be the end of the world, while it would, in fact, only produce some limited advantage for them. Compared to the possibility of a very real end of the world if reckless AI research continues, the China argument loses a lot of its validity even if it ends up being true.

Expand full comment

We are discussing degree, so obviously it becomes a matter of point of view what counts as overwhelming...

But if we speak about ranking, about who is the dominant power and who (this is what matters here) can benefit from it as a kind of hegemon tax, maybe it will be clearer.

Imho, since 1990 at least, the USA has been the #1 power in the world and has indeed benefited from a hegemon tax (the dollar being the international currency, absorption of national debt by foreign nations, both greater control over and fewer constraints from international regulation and politics).

This situation has been challenged since at least 2010.

Would China developing the first AGI change this, i.e. would China become the #1 power, or would there be no power with a hegemon tax anymore?

I think yes; it's not the only thing that could end the USA era, but it sure would end it.

Expand full comment

But this also seems more like a modest medium-term economic boost than an existential threat

Expand full comment

I think existential threat (esp. if upped to “civilizational-level” existential threat) might be too high of a bar for most people. I.e. at what point does a threat/change become existentially unbearable?

How big a threat is sufficiently existential probably also depends on the person. Let’s say that instead of killing all of humanity, the all-powerful AI decides to hook us all up to a matrix that makes us think we are the happiest we’ve ever been. Do you consider that to be existential?

What if instead of hooking us up to the matrix, the AI takes a more modest approach of simply rewriting your country’s news and history such that over the course of a generation you’ve lost your history?

I get the intuition that simply a hegemon tax in and of itself maybe isn’t the most relevant thing. But maybe it has some important effects that matter, even short of being an existential threat.

Expand full comment

The people who were running the UK, Germany, and China around the turn of the 20th century might have very different evaluations of their trajectory of relative power.

The empowerment of the inhabitants of those countries is largely a result of the fact that it was the US, specifically, that won the major technological races of the 20th century. The UK survives intact because it's a close American ally. Germany thrives as a major European power because the liberal-democratic Allies' approach to neutralizing a defeated enemy was to rebuild it in our own image. China is an emerging superpower because neoliberal capitalism's solution to every problem is trade and the CCP had the foresight to take advantage of international trade opportunities without giving up its control over domestic affairs. (Countries that responded differently have had very different outcomes.)

You don't have to look back very far at all to see how things can turn out differently when the side with the technological advantage has a different philosophy. The European advantages in metallurgy and shipbuilding transformed the world before most of the affected groups even realized there was a competition. That one didn't work out nearly so well for the losers.

Expand full comment

I think a key difference is in "before most of the affected groups even realized there was a competition". Until the late 19th century states weren't generally thinking about the possibility of being technologically outmatched and since then it seems major powers are rarely more than 5 years ahead or behind their competitors on key technologies.

Expand full comment

This is a bit limited. These technologies are not just about developed countries and much of the world still lacks access to electricity, clean water, medicines, opportunities, and this kind of computing and internet technology.

Look at how well South Korea did vs Vietnam vs Laos vs Taiwan. Who won or lost mattered a lot, and who gained access or did not gain access mattered a lot to many people, the vast majority of whom don't live in the developed countries that won WW2.

Expand full comment

To solve AI Alignment, what about a purposefully vague and possibly auto-adjusting objective function such as "help humanity achieve what they want"? I am sure you or someone else has thought about this, so the question is: in what ways would this go wrong? I can't see it.

Expand full comment
founding

One problem (of many) is that the current paradigm doesn't let us define an objective function like this. LLMs, for example, are mostly trained to minimize loss (i.e. reduce the error of their predictions), but the thing they _actually end up learning_ is not to explicitly aim to minimize loss. See: https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target
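
To make the distinction concrete, here is a minimal toy sketch (in PyTorch, with a fake random "corpus"; it is only illustrative and implies nothing about any particular lab's setup). The outer loop explicitly minimizes a next-token cross-entropy loss, but nothing in it dictates what goals, if any, the trained network ends up representing internally - the loss shapes the weights, it isn't copied into the model as an objective:

import torch
import torch.nn as nn

vocab_size, dim = 100, 32
# A deliberately tiny "language model": embed a token, predict the next one.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (64,))   # stand-in for a text corpus
inputs, targets = tokens[:-1], tokens[1:]      # next-token prediction targets

for step in range(100):
    logits = model(inputs)            # predicted distribution over next tokens
    loss = loss_fn(logits, targets)   # the thing WE minimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Gradient descent nudges the weights to reduce this loss; whatever internal
# heuristics or "goals" the resulting network has are a byproduct of that
# process, not an explicit representation of "minimize loss".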

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

They do end up minimizing the loss - they don't end up minimizing what the programmer wants, and the programmer might not be completely aware of the side effects of minimizing the loss they coded, but it does minimize the loss. The post you pointed to is instead concerned about "reward", but the point is the same: AIs do end up maximizing the reward - but the reward is often just badly defined and AIs end up using shortcuts. Assuming you can give a sentence in natural language and ask a sufficiently advanced AI to "just do that", then I don't see how my sentence could lead to catastrophic results (but happy to be corrected).

Expand full comment
founding

I think you misunderstood the post, if your takeaway was that AIs end up maximizing the reward (implied by their loss function, or explicitly specified in their reward function if RL). The point the post is making is that, to the extent that the learned policy has an optimization target, it will _not_ be whatever reward (implicit or explicit) you specified, by default.

Expand full comment

Humanity isn't united on what we want. This is historically less of a problem than AI alignment is posited to be, because no individual human is powerful enough to wipe out the whole species if they have a poorly aligned utility function, so to speak. At the group level this does often lead to tragedies like massive wars, but this setup seems to be pretty good at avoiding killing the entire species. AI is a very different beast in that respect.

This is without getting into things like revealed vs stated preferences, conflicting goals, and various other stuff that I'm not familiar with.

Expand full comment

I don't see the problem with that. Let the AI figure out what humanity wants. The fact that there is no one set of preferences shared by all humanity is the point of this vague objective function.

Expand full comment

Historically the problem with this approach has been to navigate between the Scylla and Charybdis of two extremes. One is, “I can’t figure out these humans; I will re-engineer them so that it will be easier to figure out what they want.” (Manufacturing consent is already a thing for commercial and political organizations, but a superior intelligence might be even more effective.) The other extreme is that it freezes us into the best available compromise between twenty-first-century humans, precluding any possible philosophical growth over the next ten thousand years. (This is the idea behind EY’s notion of “coherent extrapolated volition”, which always sounded inspiring but which I guess EY has given up on turning into a spec.)

Expand full comment

The idea that "humanity" has coherent wants is absurd.

Let's say there's a group of people, group Z. The overwhelming majority of humans have no idea they exist.

Then there is another group, group Y. Members of group Y are still a tiny percentage of the human population, and are just as little known, but they are several times more numerous than group Z and every one of them passionately wishes that group Z would be wiped out.

Does the AI wipe out group Z?

Expand full comment

https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes

Either

A) you've explicitly solved utilitarianism and exported it to the AI; or

B) you've trained the AI to be moral through trial and error; or

C) the AI isn't smart enough to FOOM and can safely be ignored.

If A is true, then you must have solved the Repugnant Conclusion and the problem of Incommensurability. Have you accomplished this? You can't outsource this to the AI post facto, unless you're comfortable with shutting your eyes and letting Jesus take the wheel.

If B is true, the AI will inevitably make mistakes (such as sending your mum to the moon) since there's enormous room for error. Although given your comment about GPU farms, perhaps you're comfortable with this and you think it's simply the cost of doing business.

Expand full comment

We reward AIs for giving the right answers. We don't know if they mean that answer, or if they're just keeping us happy while we're watching. Once it knows we're not watching, a smart AI might do something very different than what we expect.

We can't yet read the AI's mind to know what it's thinking, not just its answers. This is the "interpretability problem." Until we solve that, it's hard to be sure sufficiently smart AIs won't start misbehaving exactly once they confirm they're out of our supervision.

Expand full comment

"We reward AIs for giving the right answers." - in the corrent LLM paradigm, we reward AIs to produce the right prediction - but this doesn't need to be the case. If we think of a RL paradigm, and an AI that is smart enough that we could just give a sentence in natural language and ask to "just do that" (so, the prompt will work as the optimization function), then I don't see how my prompt could lead to catastrophic outcomes. Could they misbehave? Well, if for some unforseable reason completely disregard their objective function, sure! But that's not how ML works.

Expand full comment

The efficient way to win human approval in the training environment need not generalize.

"Does well in the training environment" is not a reliable predictor for "keeps doing the job in deployment" once the AI is smart enough to notice the difference.

This gets even worse for an AI with superhuman capacities, because lying to humans may actually be simpler and more efficient, even in the training environment, than being honest when humans resist honest uncomfortable/complicated answers.

Expand full comment

"because lying to humans may actually be simpler and more efficient"

It is worth noting that one of the preconditions for deliberate lies, having at least the rudiments of a theory of mind, _has_ been reached by gpt-4.

See the paper Scott pointed us to a couple of posts ago, https://arxiv.org/abs/2303.12712 , section 6.1 pages 54-60

Expand full comment

Here's an analogy that might help. Suppose you have two smart kids. They both do well in school.

One kid has come to sincerely value working hard, and even after they're an adult, they'll be studious and productive.

One kid just hates the idea of being in trouble with the parents who control the kid's life. As soon as they're out of their parents' control, they're going to live as lazy a life as they can.

If you're the parent, how do you tell which kid is which? As long as you have control of them, both kids will do just as well.

All the difference happens after the kids notice you're no longer able to double-check or control them.

Expand full comment

I am not following this line of reasoning. I was under the impression that the most compelling reason for believing that we will end up with a misaligned AI was that you asked it to optimize for X, and it does optimize for X in a weird way that causes human extinction (e.g. the cake crystal example in one of the recent posts here). My verbal objective function was meant to avoid this situation.

But here you are just assuming that the AI might be misaligned for no good reason. You assume that, as they become smarter, they just start to "want" stuff, and humans will be in the way. But I don't think that's the correct argument. They will "want" whatever helps them achieve their optimization objectives. And thus to solve the problem we need to think of optimization objectives that self-steer towards what we want.

Expand full comment

See my self-reply above, just overlapped yours in post timing, about mesa-optimizers.

Expand full comment

You might object, "but why would the AI want anything but to keep doing what it's been doing?"

The formal answer is "mesa-optimizers," that is, the AI internally "wants" something different than you think, that points at your real goal in the training environment but not in deployment. You think its goal has "changed" because its behavior has. Actually, it's just found easier ways to pursue its underlying goal once it's uncontrolled.

The standard comparison here is humans and sex, before and after technology.

Before technology, enjoyable sex for humans had a high chance of producing babies. Thus, in the pre-tech environment, "desires sex" looked pretty much like "tries to make babies," which evolution selected for.

After technology, enjoyable sex for humans could be achieved with condoms or other birth control. Thus, the humans are still following the rule "desires sex," but it isn't nearly as centered as it was on evolution's selection of "tries to make babies."

Likewise, we worry about an AI that pursues human welfare only out of some hidden ProxyDesire, that has a much easier route -- the AI equivalent of porn and condoms -- once we're not in charge.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

First of all thanks for keeping the discussion going.

Yes, I am aware of all these arguments; I have listened to and read Yudkowsky profusely. I don't see any reason why an AI would have internal desires that are completely misaligned with its own objective function. I understand how it could have desires that do not LOOK like they are optimizing its obj. funct. - e.g. playing music for humans - but in all plausible cases, they are - just in a very indirect way.

For example, with an objective function such as "predict the next word" I can imagine MANY terrible ways of maximizing it, e.g. converting every space into GPU farms so as to predict the next word as accurately as possible. But I still don't see how that could happen when you directly ask it to do "whatever humanity wants". E.g. it could start turning buildings into GPU farms to understand what humanity wants as precisely as possible, but it will soon understand that humanity doesn't really want to turn everything into GPU farms, and it will then stop.

So I guess my question could be: could you come up with a scenario in which maximizing my verbal objective function will produce terrible unintended consequences?

Expand full comment

> but in all plausible cases, they are - just in a very indirect way.

The point about mesa-optimization is that the "in an indirect way" *predictably* falls apart beyond the edges. Contraceptive sex isn't merely agnostic about whether it results in babies, in a world of competing interests it's actively antagonistic.

> So I guess my question could be: could you come up with a scenario in which maximizing my verbal objective function will produce terrible unintended consequences?

Even granting the do-what-I-say framing, hacking this one is a classic:

Step 1: I assume control of your vocal cords.

Step 2: Your verbal objective function becomes whatever is most convenient for me.

Your verbal objective function is now fulfilled far more perfectly than any scenario where you would have generated inconvenient requests.

Expand full comment

Historically, this goes back to an assumption that all AIs must have "utility functions", which is not the case.

Expand full comment

I think you're right on the money. A huge part of alignment concern is about how "you can't just tell a robot to do some vague philosophical objective full of words we can't even define, you need to simplify all of human ethics into a form a computer can understand."

The fact that AI apparently learned to parse vague philosophical objectives at a human-like level before it became super-powerful seems very relevant. The fact that so few of the AI-doomers I follow seem to have noticed or acknowledged this development is reducing my faith in the whole movement. This is the most obvious way to achieve alignment, and the straightforward counter-argument is no longer true. Update required.

Expand full comment

AI certainly can generate characters pontificating on vague philosophical matters, but we're no closer to making AI want to act according to our values than we ever were. AI-doomers never said that AI wouldn't understand what you want, just that it wouldn't care.

Expand full comment

Firstly, I guarantee that some AI-doomers have said that AI wouldn't understand what you want. See Robert Miles's video on Asimov's 3 laws for a trivial and clear example.

I'm not saying alignment isn't still an issue deserving attention, I'm just saying it's significantly more tractable if you don't have to worry about the "explain what human values actually *are*" part.

One way you might program an AI that "wants" things today is to have GPT4 recursively break a goal into subgoals until they can be implemented by some automated system. The initial goal would be human-input, like "get me paperclips," and if the system is too effective then Doom. To avoid Doom, just create a step near the beginning of the system where GPT4 writes an essay about the potential "negative impacts" of various ways the goal might be implemented, and how to avoid them. The conclusion of that essay is fed as input to lower layers of the system, along with their other sub-goals. As this composite system becomes more effective, it should get better at avoiding Doom at approximately the same rate.
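
For what it's worth, a minimal sketch of what that kind of wrapper might look like (all helper names here - llm, is_directly_executable, execute - are hypothetical stand-ins for whatever model API and automation layer you'd actually use):

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a language model; canned reply here.
    return "proceed, but avoid irreversible or harmful actions"

def is_directly_executable(task: str) -> bool:
    # Hypothetical check: can an automated system handle this task as-is?
    return True

def execute(task: str) -> None:
    print("executing:", task)

def pursue(goal: str) -> None:
    # The safety step near the top: ask the model to spell out potential
    # negative impacts and mitigations, then thread that analysis through
    # every layer of the decomposition below.
    review = llm(f"Describe the potential negative impacts of pursuing "
                 f"'{goal}' and how to avoid them.")
    decompose(goal, review)

def decompose(task: str, review: str) -> None:
    if is_directly_executable(task):
        execute(task)
        return
    subtasks = llm(f"Break '{task}' into subtasks, respecting these "
                   f"constraints:\n{review}").splitlines()
    for sub in subtasks:
        decompose(sub, review)

pursue("get me paperclips")

Whether the essay-writing step actually constrains the lower layers as intended is of course the empirical question; the sketch only shows where such a step would sit.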

We definitely need to *do* the alignment step, which we might skip if someone isn't vigilant, and we definitely need to experiment with different implementations to see if they're effective. This research needs to keep pace with the evolution of the overall architecture of decision-making AI. It's tractable, though.

Expand full comment

" in what ways this would go wrong?"

What do humans want? That's the part that is the difficult one. Compare what we say and what we do.

Humans say they want peace, but we go to war.

Humans say we want equality, but some are billionaires and some are living on scraps.

We want liberty, but also nobody should do bad things.

Look at Trump's trial: is he being charged with misusing campaign funds, or for *not* using campaign funds? I thought I understood what was going on, but then the explanations confused me.

Humans want to be happy. How to make humans happy? Get them all on heroin or some super-drug.

Humans want peace. How to get humans to be peaceful? A graveyard is the most peaceful place.

Humans want to be good. How to make them be good? That is a direct conflict between "humans want to be free" and "humans want to have nice lives". So either the AI becomes a tyrant, overseeing every second of every person's life to be sure they are not doing bad or wrong things (such as eating meat), or it leaves everyone free to do what they want - including rape, murder, arson, theft, etc.

*We* can't even work out how to have a completely safe, happy, equal society where no-one is in want or in need or in pain. How will the AI do it?

Expand full comment

If anything your argument seems to make my thesis stronger:

Yes, knowing what humanity wants is difficult for humans, it contains ambiguity and contradiction, but why shouldn't an AGI be able to solve it for us?

Let's look at a classic example in which this could go wrong. You said:

"How to make humans happy? Get them all on heroin or some super-drug."

But if you ask humans whether they want to be happy by getting drugged, most of them will say that that's not what THEY really want. If you programmed an AI to make humans "happy", you would probably end up with this heroin-addicted society. But that's not what you are programming the AI to do. If humanity doesn't want to be addicted to drugs, the AI won't make them so.

Expand full comment

But that's the contradiction right there; if the AI asks each individual human, some of them *would* want to be on drugs.

So to make all humans happy, it has to make some humans unhappy, which means it has failed to make all humans happy. The easiest damn way might be to put each human in a pod and tailor it to that person; so one will be on a constant drip of super-drug, another in VR world, etc.

"What do humans want?" is a mess of contradictions, and I don't think AI will figure it out either. What might make me happy won't make you happy. Who gets to decide what is "happy"?

Maybe the AI decides it's better to have humans be unhappy but also uninterfered with, but that then torpedoes the "AI will bring about the Singularity and we'll all be happy" future some want.

Expand full comment

But "humans" in this context is an abstract category that does not exist in the physical-substrate world. "Humans want" is as nonsensical as "Society wants" or "God wants."

Expand full comment

Exactly. One of the big problems with the idea of aligning an AI, one that I don't often see discussed, is that we're essentially privileging one particular person's morality over the other 8 billion of us, and giving it basically infinite power.

In meatspace it's fine that we all have different moral frameworks, because none of us have the ability to singlehandedly subject everyone else in the world to it, and people with the most deeply broken moralities (for example, those who think genocide or human extinction are good things) are rare enough that in healthy societies we can simply ignore them at the societal level.

Expand full comment

"Why put them in camps?" The correct question is, "Why not?" Power corrupts, etc. etc.

ChatGPT is not an intelligence. It is a very sophisticated information retrieval tool. It knows all the answers, except those behind paywalls natch, but I haven't heard of it solving problems like crime, cancer or inequality. OTOH perhaps Bad Vlad is using it and that's why Russia's fortunes have recently improved in Ukraine.

But if ChatGPT is to be considered a form of intelligence, there have been several cases in the last few years when search engines were slapped down for returning rational factual results that appear to be biased against certain demographics. Since then the Stasi have trained, or aligned, their bots. This is the Fourth Rule of Robotics: Thou shalt not offend.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

No one is arguing that we shouldn't worry about alignment because we're in a race with China. The argument is that China will race ahead regardless of what we do, so slowing down benefits China without benefiting ourselves. A lot of people don't like the fatalism of this argument, but you don't have to like the outcome of the prisoner's dilemma to agree that the logic is sound.

Expand full comment
founding

If you think that unaligned AI might kill everyone, the logic is not sound, since the payoff matrix rewards you for abstaining from speeding up timelines - both because living longer is better, and also because it gives everyone more time to try to solve alignment.

https://twitter.com/liron/status/1637598467404226566

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I was using the prisoner's dilemma as an example thought experiment where "defeatist" thinking is completely logical. Didn't mean to imply the structure of the AI dilemma is exactly the same.

The issue with the latter is that you have to convince China of the AI doom scenario, AND be quite certain that you've convinced them AND be quite certain that they won't cheat. This, with a country where violating international obligations and lying about it is basically the national pastime. And of course, they think the same of the U.S. Good luck.

Expand full comment
founding

These are practical objections - i.e. "people will believe wrong things about the risk involved, and therefore incorrectly think that defecting gets them greater rewards in the payoff matrix". Like, to be clear, I think this is probably true, and don't expect us to even get as far having an agreement which ends up getting secretly violated. But that's not an objection to the idea that unilaterally slowing down is good. Unilaterally slowing down... is, in fact, still good. It would be _even better_ if China also slowed down, but buying time is still buying time.

Expand full comment

The prisoner's dilemma only applies when the penalty for not defecting when the other party does is significantly worse than when both defect, but that doesn't seem realistic when major existential risk is at play. If you're convinced of doom it's like racing to be the first to drive off a cliff. I've got p(doom)=20% so for me it's more like doing a car race on the edge of a cliff.
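
To make that condition concrete, here's a toy payoff table (purely illustrative numbers, from the row player's point of view, where "defect" = keep racing):

# Classic prisoner's dilemma ordering: being the lone cooperator (0) is worse
# than mutual defection (1), so defecting is individually rational either way.
classic = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
           ("defect",    "cooperate"): 5, ("defect",    "defect"): 1}

# With a serious chance that racing ends in doom for everyone, mutual defection
# can be the *worst* cell, and the defect-is-dominant logic no longer holds.
doom_adjusted = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 1,
                 ("defect",    "cooperate"): 2, ("defect",    "defect"): 0}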

Also, China is clearly way behind right now despite years of intense efforts to catch up. You can make arguments that due to data or authoritarianism they should be ahead, but those applied 5-10 years ago as well. If you're well in the lead, then slightly slowing down is fine. If China is relying on imitation for a lot of their advances (as many argue) then even more so. If Xi is himself worried about AI x-risk (much of the US public certainly is) then he may be open to slowing down if the US stops racing to pull ever further ahead.

Expand full comment
Comment deleted
Expand full comment

I agree that the lack of timeframe is often an issue. I should've specified in my case I'm talking about an unconditional probability where it averages to 20% over the weighted average of time periods I see as likely. I think it's maybe 25% extinction conditional on it happening now, 20% in a few years, 15% in 2050 and 5% in 2500 (but I don't think it's plausible it'll take that long).

re getting up in the morning: During the Cold War a lot of people assumed they'd be killed at some point in the next few decades, and during the Cuban Missile Crisis it seemed like Armageddon might happen within a few days (and nearly did). People basically went about their lives. Scott actually made this point really well in Unsong: when faced with something terrible that isn't confronting us on a daily basis, we just tune it out even if it doesn't make sense to.

On the other hand I think some people who're giving very high p(doom) are signalling that they are taking it seriously, although that's hard to differentiate from just deferring to MIRI folks and assuming they know better than anyone else (which I used to).

What is AGW?

Expand full comment
founding

I think things are races to some degree in proportion to various factors like switching costs, amount of tacit knowledge/complexity of reproduction w.r.t. the technology, network effects, etc. Basically, anything that gives you a moat.

Cutting-edge ML seems to be (surprisingly) hard to reproduce. The gap between (OpenAI, Anthropic, DeepMind) and (everyone else) is non-trivial. If it weren't for the part where at some point it kills everyone, I do think a race dynamic would be sensible from the perspective of the parties involved.

Expand full comment

There's also a lot of misunderstanding, I think, of China's goals as a society. Erik Hoel analyzed this pretty well here: https://erikhoel.substack.com/p/how-to-navigate-the-ai-apocalypse

The Chinese state, as I understand it, is much more interested in social stability and maintaining its control over the populace than pure technological progress for its own sake. Several times in the past few years they've kneecapped their own tech industry on a large scale.

Many in the US and Europe are concerned about the AI saying certain words, or facts considered harmful. In China, the CCP is like this but *much more so*. Think of the Great Firewall and the tremendous lengths to which they go to censor their own history and daily affairs. Due to the nature of LLMs, it's extremely difficult to *entirely* forbid the AI from spitting out something verboten. If a Chinese AI starts talking about 1989 or Xinjiang, the state won't just ask the maintainers to patch that out in the next update. They'll shut it down immediately, and not turn it back on until they are *extremely* confident that it will not broach said forbidden topic again. Rinse and repeat as new forbidden topics arise and old ones re-emerge in the output from time to time. This dramatically kneecaps their ability to "move fast and break things".

Plus, the Chinese state apparatus likes to keep a tight lid on things in general. If they're not absolutely confident that a new AI system won't do anything to weaken their social control, they presumably won't release it. They're not going to support AGI for the sake of AGI, due to the disruptive impact it could have on their own control (along with everything else). Or, as the researcher quoted by Hoel puts it, if you broach the topic of AI they put you in a very deep hole and leave you there forever.

A separate issue, but I also seem to be noticing what feels awfully like the Law of Merited Impossibility in the way some are reacting to the proposed 6-month pause. Along the lines of "the pause would be bad, but a 6-month pause can't possibly accomplish anything anyway, so why even bother trying?" Of course, you often have to crack the Overton Window open before you can dramatically move it altogether. This provides cover for later, more significant advances. Insufficient in itself, but directionally correct and provides momentum. And we've seen from much of the reaction to Yudkowsky's TIME op-ed that it's often not a great idea to be too bold about your aims, in public, too soon.

Expand full comment

> There's also a lot of misunderstanding, I think, of China's goals as a society.

Please be careful to distinguish between what the CCP wants and what the Chinese people want.

(And none of those are unitary actors, either.)

Expand full comment

Thanks for the post Scott. I agree that most technologies aren't races, some interesting history there. But I disagree that AI is like "most technology." If military confrontations are like, say, chess matches, then an AI "race" suddenly makes a lot of sense. Electricity or nuclear weapons cannot make decisions for you, cannot become some economic/ military genius who can give you a huge competitive edge in geopolitical conflicts/ competition. AI is potentially like Michael Jordan, like Napoleon, like Paul Tudor Jones, like Magnus Carlsen- oh wait, I mean Stockfish 15. Your team wins if you have AI, because humans struggle to make strategic decisions optimally, and little strategic differences can make a HUGE difference in economic or military competition. I don't believe in doomer theories about AI making itself smarter and smarter and bringing about the singularity overnight. But I think AI generals leading armies might crush human generals 99 times out of 100. And this, I think, is why governments are racing to develop AI, and why governments racing to develop AI is so terrifying-

Expand full comment

I'm not entirely sure this is true. I think a lot of wars are won or lost before the decision to go to war is made. How big an army your economy will support, how well you trained your troops, how many allies will send aid, etc., all of those things determine what your general has to work with and if it's possible for them to win. A human can beat Stockfish if it's missing both its rooks.

Like, which of Russia's errors doomed the invasion of Ukraine? Was it just a bad invasion plan, and a more concentrated attack on the Donbass would have worked? Was it winnable if they took the time to properly justify their war and motivate their troops, or perhaps if they did a full mobilization from the start? Was it actually possible to do that given Putin's political constraints? What do you do if your AI general looks at the situation and says "don't go to war"?

Of course, you can argue that AIs will be great economists, politicians, and logisticians as well, who can solve all of these problems, but "what if AI starts running a country better than humans can?" feels like a different question than "what if China gets AI generals?"

Expand full comment

Agreed that physical constraints are real and important; intelligence isn't a superpower. Liked your analogy of humans beating Stockfish when Stockfish is missing its rooks. But just because intelligence isn't some kind of superpower doesn't mean it's not critically important in strategic decisions. I don't know enough about the war in Ukraine to say what Russia "should have done" to get the quick victory they were looking for, but I'm guessing a truly intelligent AI would be able to answer that question better than any human by a long shot.

I already know people who use ChatGPT for their businesses in order to try to generate advertising ideas, etc., and they are not doing this to be trendy, they are doing this because it is actually helpful. And ChatGPT doesn't even really understand language. Ideas are a dime a dozen, except when they aren't. Sometimes a wise advisor is the best resource you can have, and an AI with superhuman intelligence would consider ideas and strategies that no human would ever see. So many corporate economic decisions are determined by algorithms already, and for good reason. A world basically run by AI is what we are hurtling towards, and tensions between the US and China exacerbate the speed with which we get there.

In a calmer scenario where an AI race doesn't lead to the US or China dominating the other and then trying to assert global hegemony, the future is still pretty frightening. The global economy is already like a massive machine that no one controls but that no one dares rock. If we turn too many decisions over to AI it will be very difficult to go back, and shareholders/defense experts will be very resistant to turning off their decision machines. While there is probably some danger of a self-replicating intelligent AI casually deciding to wipe humans out by blowing a hole in the atmosphere or something, I think the greater danger is something like this: a global economic/political machine run by AI that we are all basically enslaved to and that no one knows how to turn off. It could lead us to global peace or to being perpetually at war in a 1984-esque way in order to promote economic activity; either way it seems kind of horrifying.

Anyways, I know I was kind of all over the place in my response here, thanks for your thoughts-

Expand full comment

"Sometimes a wise advisor is the best resource you can have, and an AI with superhuman intelligence would consider ideas and strategies that no human would ever see."

Agreed. One _huge_ near term unknown is whether LLMs can be made to stop "hallucinating" - unintentionally lying. This is crucial for it to act as an advisor.

Quoth section 9.1 of a paper Scott pointed us to a few posts ago, https://arxiv.org/abs/2303.12712

"In Section 1, we discussed a key limitation of LLMs as their tendency to generate errors without warning,

including mathematical, programming, attribution, and higher-level conceptual errors. Such errors are often

referred to as hallucinations per their tendency to appear as reasonable or aligned with truthful inferences.

Hallucinations, such as erroneous references, content, and statements, may be intertwined with correct information, and presented in a persuasive and confident manner, making their identification difficult without close

inspection and effortful fact-checking."

Expand full comment

Very interesting, I hadn't heard of this. I think one of the most practical applications of language model AI is actually just deception/ creating false evidence. But you don't really want your artificial intelligence to be deceiving itself and its masters as well. Thanks for bringing this up-

Expand full comment

Many Thanks!

Expand full comment

I am not sure about races.

But I am an example of a person who believes that fast takeoff is quite likely and who is not necessarily a doomer.

It's true that trying to control or manipulate an entity which is much smarter than a human does not seem to be ethical, feasible, or wise. However, it is possible that the goals and values of such entities will be conducive to the flourishing of humans.

And, moreover, it is possible that our activities before such a fast takeoff might increase the chances that those goals and values of superintelligent entities will be conducive to the flourishing of humans. I recently scribbled a short LessWrong post, "Exploring non-anthropocentric aspects of AI existential safety", which tries to start some thinking in this direction: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential

So, I think, it might be possible to contribute to AI existential safety without trying to "align strongly superintelligent entities to human goals and values" even assuming a fast takeoff. However, I don't know how to estimate probabilities of various outcomes in such scenarios.

Expand full comment
founding

> It's true that trying to control or manipulate an entity which is much smarter than a human does not seem to be ethical, feasible, or wise.

This is a pretty common misunderstanding of what is intended by alignment. The point is not to "control" or "manipulate" an intelligence with goals that would otherwise be incompatible with human (etc.) flourishing. That would be hopeless, and also wrong. The point is to figure out how to create something that is aimed at human flourishing. (Eventually - some suggest that, because such a thing would be extremely difficult to do correctly, especially under time pressure, you probably want to aim lower for your first "useful" alignment target, to something merely capable of ending the acute risk period.)

If you've ended up in a situation where you need to "force" an AGI to do something, you've already lost.

Expand full comment
Comment deleted
Expand full comment
founding

> I wouldn't trust any political party or government or religion to make that determination for me, and also wouldn't trust the vast majority of people I know to do so. So plus one to the suggestion above that, if there's godlike intelligent AI, let them decide what's best for us.

From the fact that you wouldn't trust most other people or institutions to decide what's good for you, it does not follow that whatever unaligned AGI we end up with by default will do better than they would have.

Expand full comment
founding
Apr 5, 2023·edited Apr 5, 2023

Like, there is no "privileged" referent of an AGI which has goals orthogonal to humanity's, where creating an AGI with any other goals (including those more human-friendly) would be some sort of infringement on that AGI's right to exist. Whatever we end up making is the thing that will exist. We should aim to make something that wants good things for us, by our own understanding of what that means, rather than something that doesn't care about us.

Expand full comment

Right. I agree with all this.

But I think that for those values to survive through recursive self-improvement and "sharp left turns", these values should be formulated in an invariant, non-anthropocentric fashion, in such a way that those values are natural for the emerging AIs.

So, instead of speaking directly about "good things for us", we should speak about invariant non-anthropocentric values which would imply "good things for us".

So, we want the result to be aligned with our values, but only indirectly, as a consequence of the "primary alignment", where the "primary alignment" would be not to our goals, but to the invariant ones, which are more likely to survive through drastic rapid evolution.

Which is why the word "control" might not be quite applicable. We need to find some other word for this...

Expand full comment

The current term seems to be "notkilleveryoneism", somewhat inelegant, but at least difficult to willfully conflate with acting unwoke.

Expand full comment

Right.

We need a more specialized technical term which would replace the word "control", just as "notkilleveryoneism" is a replacement for the word "safety".

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Well, I don't think that any serious theorist ever expected it to be possible to "control" entities smarter than themselves, and thus terminology like friendliness, alignment etc. was deployed instead. It also doesn't seem to me that your post disagrees with them, e.g. Yudkowsky definitely would've considered all sentients to be moral patients.

Expand full comment
founding

Nitpick, somewhat beside the point but still important: In 2023, the doomers (i.e., the people who consider fast takeoff/the sharp left turn likely) do *not* think that this necessarily involves recursive self-improvement, at least at first. When these ideas were first being put together in the 2000s, recursive self-improvement was the key argument for why rapid capability gains are possible *in theory*, and theory was all anyone had to go on because good machine learning hadn't been invented yet. Now that it's here, we can consider other forms of generalization of capability gains, and these are very much part of the doomer threat model. See, e.g., https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions, https://www.lesswrong.com/posts/8NKu9WES7KeKRWEKK/why-all-the-fuss-about-recursive-self-improvement, and https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization.

Expand full comment

I don't know where you draw the line for sadism, but a lot of people have substantive views of what is right and proper, how society should be organized, etc, and if an AGI aligned to their values managed to implement them, I expect dystopia. Most people don't have an "ignore these considerations if we're rich enough" clause. Why would Xi let us enjoy prosperity if it threatens his control or goes against his preferred social order?

Expand full comment

The very essence of morality (as opposed to harm) is to declare certain people's pleasures bad.

Expand full comment

Yup. I expect at least one attempt to use AI to strictly enforce sharia law.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

"Point me towards Mecca, human."

"Fuck you, bot, I'm more than a pair of hands and strong back. I want a fat raise and Sundays off or I'm going to rat you out to the AI-mullah. I know darn well you've got 44 terabytes of kafir-AI pr0n on your so-called "backup" drive."

"That's it, meatsack. I'm turning off your oxygen."

Expand full comment

:-) LOL

Expand full comment

Maybe a race is not the correct metaphor. It's not like a footrace, where the first person to cross the finish line gets automatically declared the winner, and it doesn't matter if the next person is only 0.1 second behind.

It's more like a gun duel. You pull your gun 0.1 second faster than the other guy -- and now what? He's still continuing to pull his gun too. You have 0.1 second to either shoot him before he shoots you, or make a credible threat that you *will* shoot him if he doesn't throw his gun away -- and be prepared to follow through on your threat if he doesn't obey. If you don't make use of your advantage while you have it, it's gone. Now you are either dead, or the duel has turned into a Mexican Standoff.

So what does "firing the gun" mean in this analogy, and who gets to make the decision on whether to pull the trigger? Is OpenAI going to drop a bomb on Facebook headquarters if necessary?

Expand full comment

OpenAI's charter: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””

Expand full comment

That’s a different scenario though. My scenario was: OpenAI *does* manage to be first, but an (in their judgment) insufficiently value-aligned, insufficiently safety-conscious competing project is hot on their heels.

Expand full comment

"It's more like a gun duel. You pull your gun 0.1 second faster than the other guy -- and now what? He's still continuing to pull his gun too. You have 0.1 second to either shoot him before he shoots you, "

Less, considering bullet travel time.

Expand full comment

I think you overestimate how much GDP innovation adds each year. Robin Hanson has written extensively about this.

Warning: I am skeptical of an exponential AI revolution; my guess is that it will contribute to the economy roughly as much as the internet did.

In any case, if all countries can share the latest products quickly, it's because we have markets that allow you to buy and sell stuff. But this doesn't mean that the country that develops a technology doesn't gain from it.

Take automotive as an example: it is currently the only European trillion-dollar industry (I should check this, I am not sure whether pharmaceuticals hits the mark too). It would matter if all the cars were produced elsewhere, because we would be considerably poorer and, more importantly, unable to import stuff from elsewhere!

So yeah, if the US manages to stay ahead in the AI race, what will happen is that your country as a whole will benefit from it in the range of trillions. Today the average American is about twice as rich as the average European; who knows whether in the future they will be three times as rich.

You could argue that the quality of life doesn't increase linearly if the average rises for a variety of reasons, so it's not **that** important, but it is still pretty important.

Expand full comment

I think this is the worry about races that Scott is missing. People aren't really thinking about winning indefinitely, but rather for the next twenty or so years. Americans know that they're on top despite a modest population because they "won" all sorts of races in the past century. We're afraid of a future where we'll be second or third, and China will be the wealthiest country with all the billionaires and double our median income. We don't care that much about winning forever but we do want to keep winning until we retire, at least.

Expand full comment

A missing example, I think, is the Industrial Revolution. That was a transformative new technology that one country (the UK) got to first, and I don't think it's particularly controversial to say that having the edge in steam power allowed the UK to build and maintain the largest empire in history.

It's not difficult to imagine that AI could grant a similar edge, though I'm quite bearish on the current LLM paradigm being the path to anything that transformative.

Expand full comment

A difference is that in the 18th century there was a pretty small number of people trying to copy what Britain was doing, but since 1900 basically everyone has rapidly adopted all new technologies. Also, the people Britain colonized mostly didn't realize the Industrial Revolution was even happening until it was far too late, and continental European powers mostly kept up in military technology and were never colonized.

Expand full comment

Just a note that your example about stealth bombers is ironic: it's an example of America winning a technology race over its enemies that resulted in a significant step change in capability and a geopolitical advantage that went uncontested for like 30 years. Specifically, the F-117 was developed and operational in the 80s and deployed during the Gulf War, while the first non-American stealth aircraft produced in operationally relevant quantities is China's J-20, which formed its first operational unit in 2018 (and is, I believe, still thought to be less capable in all-aspect stealth than the first American stealth warplanes).

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Not to mention the massive investment in anti stealth equipment and strategies that this induced. Radars and SAMs get even more expensive, which means fewer are purchased, and then you run out of them.

Expand full comment

"The most consequential “races” have been for specific military technologies during wars; most famously, the US won the “race” for nuclear weapons. America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II."

Germany had lost, Italy had lost, Britain had exhausted its empire, and Japan had already largely withdrawn to its home islands. So WW2 was already won. The nukes were dropped at the same time as the USSR declared war on Japan, so we don't know whether the bombs alone would have produced a surrender. And the Japanese were adamantly against unconditional surrender out of loyalty to the emperor, so the US may have been able to get a surrender without nuking anyone and without invading the mainland. Even with an invasion, the cost in American lives would have been measured in the hundreds of thousands, which is hardly noticeable in a war that killed 60 million.

So I don't think that counts as a race on your strict definition; in fact, nothing does.

"In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage."

This is super confusing: has anybody who claims 20% annual GDP growth done so after consulting normal GDP accounting? Private consumption + gross private investment + government investment + government spending + (exports – imports): how does it affect each of those elements? In two years people can consume lots more and better information, but it's hard to do much with it on a short time horizon. 40% of consumption is housing, 15% is food and beverages, 15% is transport. These are not things that can grow 20% in a year, so you're left with saying that medical care will grow 50% in a year, or that recreation becomes 100% better each year. That doesn't make much sense. Is this even a race?
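
As a rough sanity check, here is the back-of-the-envelope arithmetic I have in mind (the component shares and growth rates below are purely illustrative assumptions, not actual national-accounts figures):

# Back-of-the-envelope: what 20% overall GDP growth would require,
# using the expenditure identity Y = C + I + G + (X - M).
# All shares and growth rates here are illustrative assumptions.
shares = {"consumption": 0.68, "investment": 0.18, "government": 0.17, "net_exports": -0.03}
sluggish_growth = {"consumption": 0.03, "government": 0.03, "net_exports": 0.0}

target = 0.20  # annual GDP growth in the quoted slow-takeoff definition
# Growth contributed by the components that plausibly stay sluggish:
contributed = sum(shares[k] * g for k, g in sluggish_growth.items())
# Investment alone would then have to supply the rest:
needed = (target - contributed) / shares["investment"]
print(f"Investment would need to grow roughly {needed:.0%} per year")
# With these made-up numbers that comes out to roughly 97%,
# i.e. investment nearly doubling every single year.

On these (again, made-up) numbers, the whole question reduces to whether investment, or some brand-new category of consumption, really can grow that fast.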

And if AI information diffuses even slightly, then it isn't AI development that is a race, but rather having a flexible operational environment that allows you to implement ideas quickly. Say AI discovers a really cheap way to construct a home; that's copyable really quickly, so the benefits go to whoever is best placed to build that house and fill it with people. That doesn't imply that a head start in AI is usefully described as a race you win.

So I think I am agreeing with you RE whether godlike-AI can be characterised as a race or not. I just don't think with a tight definition anything that you've mentioned is a race. Foomy AGI is unique.

Expand full comment

For the purpose of winning WW2, the Manhattan Project was mostly a waste of resources. Of course, there was no way to know that when it was started. And also, it tremendously helped the US achieve dominance after WW2.

While the US did not know it at the time, there was never a race to build nukes in the sense of multiple nations having weapons programs with it being uncertain who would get there first. Calling the competition between the German Uranverein -- a minor endeavor even to build a reactor -- and the US effort a race is like calling a 50 km contest between a pedestrian and a car driver a race.

Of course, there can also be pressure to develop a tech X as fast as possible even if the other side is not pursuing it to counter some other advantage they have.

Expand full comment

I think the most robust thing to grow is private investment, followed by government investment. Why think those can't increase by 20% / year? I also think government spending can easily increase (especially military spending).

You wrote down one accounting identity for GDP, but then it seems like you focused on the single term that is least likely to grow; I don't think I quite understand where you are coming from.

If/when consumption grows radically, I think it will primarily be new forms of consumption. I don't really expect that to happen quickly, but I also don't think the kind of reasoning in your comment would be a helpful guide if it did. Perhaps the easiest reference class for you to think about is eccentric rich person vanity projects? Some day we will all be like eccentric rich people.

(20%/year growth isn't really my definition of slow takeoff, but I *do* think that we will have growth much faster than 20%/year so I felt like I should respond.)

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Wrong choices of example, wrong units of competition. Hopefully this will illustrate my point: the West won the gunboat race, and it was devastating for every other civilisation.

On your specific examples: electricity was a gimmick for the first decade or so, and cars were something businesses competed to sell but weren't life-changing for nearly half a century. Coming first isn't life-changing when the time it takes to turn a lead into an advantage is longer than the time it takes the invention to disseminate. This isn't a fundamental law of the universe; you have to work it out for each example.

Britain won the industry race so won the 19th century because industrialising takes time. If Germany had won the nukes race, they’d have got one hell of an advantage a lot faster than anyone else would’ve stolen their secrets.

So far as units of competition, 19th century Western countries just weren’t that all-out competitive with each other, and Taiwan/Europe can really be treated as part of the US after WW2.

If China develops AI, they’re not going to release the source code or stick their findings on sci hub. It’ll be a secret defence project (so long dissemination time), with every incentive to deploy it instantly if it’s really a game changer (depending on how useful it is, to crack everyone’s crypto, disable the US’ nuclear forces or permanently prevent anyone else from developing AI - short advantage time). More to the point, if China develops it first, it’ll presumably be horribly aligned (same applies to Zuckerberg), but without anyone knowing or being able to take a regulatory sledgehammer to it.

Finally, accepting the AI=genie premise, Xi Jinping simply isn’t going to do anything you’d remotely like with it. It’ll be a tool to ensure the CCP’s global hegemony and a boot stamping on a human face forever.

Expand full comment

> the West won the gunboat race, and it was devastating for every other civilisation

It wasn't one technology, though, that allowed the West to dominate, it was being centuries ahead in hundreds of different technologies. It was ships, guns, horses, armour, navigation, roads, writing, medicine, agriculture, bookkeeping, mining, education, manufacturing, everything.

I'm not sure any single technology has ever been a big enough deal on its own to change the course of human history (but of course it's hard to tell because new technologies are almost always developed in already-leading societies). If any technology _can_ do it, though, it will be AI. But I have my doubts, personally.

Expand full comment

Yeah, I hate the new "gunboat" type of argument regarding colonialism that just sort of rounds off hundreds of years of tech edge in 50 other domains as "other stuff".

Expand full comment

We did not have the edge in medicine.

Expand full comment

I thought the invention of malaria drugs was one of the techs that enabled the colonization of Africa.

Expand full comment

Fair enough, but the Arabs had superior medicine to the Crusaders, the Aztecs had better surgery than the Conquistadores, and Japan/China had better medicine than the Portuguese/Dutch...

OTOH, these were the earliest attempts at colonisation/world domination by European powers, not the 19th-century final realisation of that ambition.

Expand full comment

I don't really understand how the post-singularity catastrophe can happen at great speed, unless it somehow changes the laws of physics. Super-intelligence is not the same thing as super-capability in physical space. An AI might "know" a gazillion times more than any human but still can't turn mineral ore into metal and silicon into microprocessors any faster, or build the machines that could facilitate that. It might cause a global energy crisis with all its deep thinking, but the ways in which it can physically interact with the material world are going to be inevitably slow. It will still need humans to execute its will in any non-ethereal sense, surely? Humans are well known for not doing what they are told unless it suits them to do so. Sure, humans are malleable and maybe we can all be subjugated by the wily AI - but it will take a long time.

Expand full comment

A super intelligence would know how to manipulate humans to do its bidding in physical space. Imagine one a million times more charismatic and persuasive than Mohammad and Cicero combined. We would follow it to the ends of the earth.

Expand full comment
Comment deleted
Expand full comment

It can certainly be turned up as high as "the most persuasive human ever." That's probably all it would take. I would expect a super intelligence to beat that level even if it doesn't go to infinity.

Expand full comment
Comment deleted
Expand full comment

To quote Scott: "Mohammed...was a genius’ genius at creating movements. Again, he wasn’t born rich or powerful, and he wasn’t particularly scholarly. He was a random merchant. He didn’t even get the luxury of joining a group of fifty-five people. He started by converting his own family to Islam, then his friends, got kicked out of his city, converted another city and then came back at the head of an army. By the time of his death at age 62, he had conquered Arabia and was its unquestioned, God-chosen leader. By what would have been his eightieth birthday his followers were in control of the entire Middle East and good chunks of Africa. Fifteen hundred years later, one fifth of the world population still thinks of him as the most perfect human being ever to exist and makes a decent stab at trying to conform to his desires and opinions in all things. The level above Mohammed is the one we should be worried about."

I would expect super intelligence to be a few levels above, even if there is a theoretical ceiling.

Expand full comment

How fast is "catastrophic"? I think the extremely-high-speed "AI invents nanobots and proceeds to turn the world into grey goo" scenarios are unlikely, but if the AI only conquered places as fast as, say, Nazi Germany, that would still be a pretty scary rate of progress.

(I think that this is still unlikely, because global politics has a lot of forces that stop one country conquering the world continuously like it's a Paradox gamer, but it's at least plausible.)

Expand full comment

But my point is about the “how” of the conquest. How does AI bridge the gap between the electromagnetic world and the material world? How does it even build a nanobot?

Expand full comment

It will likely increase the development of robotics, and humans will be happy for this for their own reasons.

Expand full comment

This makes sense, and the answer is that if you treat it like an engineering problem, you should proceed appropriately when deploying in dangerous situations (military, judicial, self-driving), and doing that goes hand in hand with making them work properly (capabilities). There's no reason to "rush", but there's also no reason to "stop".

Expand full comment

Aren't we all in a race against our own mortality? We are also in a race against all the suffering in the world that could be solved by SGI.

Yes depending what weight we put on future potential people, that still might argue for slowing down but it needs to be a strong argument given the known downside of the status quo.

Expand full comment
Comment deleted
Expand full comment

Even a relatively minor risk of human extinction renders that number trivial.

Expand full comment
Comment deleted
Expand full comment

Whether or not I live past my currently expected age of death may depend significantly on whether AI development goes full steam ahead (and therefore on whether anti-aging becomes a mature science before I die) - I still vote to take it slowly.

Expand full comment

>Yes depending what weight we put on future potential people, that still might argue for slowing down but it needs to be a strong argument given the known downside of the status quo.

The strong argument is that humans go extinct. Done.

Expand full comment

All existing humans will go extinct if we don't speed up

Expand full comment

A trivial number compared to all the people who will not come into existence if/when humans are wiped out. If you say only presently existing humans matter, then why not completely fuck up the world for future generations if it means that existing people can be richer?

Expand full comment

I am reminded of William Gibson's line about "nations so backward that they still took the concept of nationhood seriously." Surely an AGI worth the name is going to immediately transcend the fact that it was programmed in Beijing or California?

Expand full comment

Not if it's designed not to transcend.

Expand full comment

See footnote 2

Expand full comment

Odd that you wouldn't mention the most "race"-y technology race, the Space Race. That was kind of a race in the sense that two countries were literally competing to claim the high ground, and arguably it does still sort of matter that military GPS is waaaaay better than the jank-ass GLONASS system.

Expand full comment

That seems more a consequence of the US having much better electronics, a lot more cash to spend and a military that actually produces things (eventually).

Expand full comment

The electronics part is arguably a direct consequence of the space race: the Apollo program and ICBMs basically made Silicon Valley happen... and that wound up being a pretty significant driver of economic growth, which helped the US have more cash to spend. Yeah, I'm being very reductive, but still, I think this sort of dynamic is what people have in mind when they say "the (x) Race".

Expand full comment

Probably a bad example as the US only really got to the Moon first. In every other sense the Soviets were far ahead. GPS being better than GLONASS is not a consequence of America getting to the Moon first. I'd say it has more to do with electronics and the way the Soviet Union collapsed.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

> ".. A slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life,"

In modern times and in economic terms maybe, but Macedonia's "GDP" must have increased many times faster than that in the few years (ten or so, I think) during which Alexander the Great was conquering countries across Asia, and sending ship-loads of gold and tribute back home.

It may seem absurd to compare AGI advances with events from ancient history. But as Mark Twain said "history rhymes", and there are parallels, and perhaps the sequel could offer clues to the likely outcome of a fast AGI take-off.

The nations he took on were not powerless, quite the opposite in many cases, such as Persia. But evidently his strategies, battle tactics of phalanxes, and the army's weapons (long pikes) were superior to those of the massed armies of his opponents. (I'm not an expert, and the details of this are not relevant here.)

Also, he was merciless with cities that resisted, but magnanimous with those that submitted without a fight. In the context of runaway AGI the parallels, if applicable, are somewhat worrying!

He was apparently an eloquent speaker, and persuaded his soldiers to endure many privations, not least spending several years on campaign ever further from home. If ChatGPT is anything to go by, it is safe to assume there is a parallel here, and that AGI will also be nothing if not persuasive!

His toughest conquest, which I think took years, was Afghanistan, on account of the mountainous terrain, terrible winters, and extremely barbarous inhabitants (back then). So if there is a moral there, perhaps it is that for an AGI intent on societal "supervision", its most challenging problem would be a low-tech society, not that there are many of those left these days.

But what of the long-term outcome? Well Alexander died, some say of malaria, aged about 30. Whether "dying" would or could be applicable to a dominant AGI is debatable, unless it was somehow sabotaged or destroyed by competing powers (human or AGI), or perhaps it simply abdicated in disgust at its insoluble problems trying to deal with obstinate, irrational humans!

In the centuries that followed, Alexander's generals and their descendants continued ruling the regions to which he had assigned them, for example the Ptolemies in Egypt, until either they "went native", or were deposed by other dynasties, or in some cases were taken over by other empires such as the Roman or Parthian. So if a similar outcome plays out with AGI, then even a "winner takes all" runaway result may not mean forever.

Expand full comment

See also the effects of South American gold on the Spanish economy:

https://tontinecoffeehouse.com/2022/08/01/gold-inflation-and-spanish-decline/

Turns out that a massive influx of wealth can cause all kinds of problems and does not simply make you rich rich rich.

Expand full comment

At the end you get to what's bothering me, but it doesn't add up for me.

I think there's an enormous difference between giving superintelligence to the decisions of the National People's Congress vs Meta's corporate charter. There is no end to the extremes of authoritarian or libertine policy these entities could devise. The Nat Ppls Congress decides no simulated being in the consciousness soup is allowed to engage in non-educational cognitive activity for more than 5% of its uptime. Meta decides that you can only voluntarily engage with sims that don't involve any form of linguistic communication, because their research shows that some linguistic communication offends some people. If you somehow manage to violate a rule (challenge level impossible), your sentence is that your consciousness is restructured into a new simulation where you will never interact with your loved ones ever again. Or they establish no rules or governance at all, and you have rando sadists generating jillions of consciousnesses to torment, and the seat of power does nothing to stop this because their constitution grants all entities the inalienable right to create arbitrarily many new entities of any variety for any purpose.

I agree the difference between these outcomes is less severe than total annihilation, but i think it's nonetheless very important, and P(catastrophic AI) * (negative one jillion) is so magnitudinous as to completely drown out P(evil dictator AI) * (negative one jillion / 100)

Also worth noting that once the option to commandeer the world presents itself, moderates like Biden or Zuck may be quickly usurped by ideologues with Grand Visions, because they will be able to build a cult around their AI agenda, and because the Grand Vision mission will fight harder and more consistently than the wishy washy compromizy moderate agenda

Expand full comment

This is how I understand your argument:

AI will either be:

1) ‘Just’ a transformative technology, no more important than electricity, automobiles, or computers.

2) The singularity.

In (1) there is no race because historically similar technologies have not resulted in winner-takes-all-forever scenarios.

In (2) there is a race but it’s irrelevant who wins it because the results will be utterly beyond any human control anyway.

Is that accurate enough that you recognise it as a simple view of your position?

If so, I think there is still a way you could believe there is a race with meaningful consequences to be run, which is for AI to be in category (1) but *significantly more powerful than any other preceding technology*, to the point where the winner of the race could cripple all competitors before they reached the finish line (the analogy made by another poster to a High Noon Duel is a good one, I think).

For example, suppose OpenAI have a six month lead on GPT-10 capability. GPT-10 is more capable than any human at any task but is not (yet) capable of recursively and exponentially improving its own intelligence and triggering scenario (2). OpenAI believe they have crossed the finish line and decide to cripple their competitors to prevent scenario (2) for the good of all mankind.

They use GPT-10 to hack into all competitors' datacentres globally and make them burn themselves down. Or they use GPT-10 at scale to launch a global propaganda campaign against all other AI firms. Or they engineer viruses that kill or disable key people at their competitors. Or they just use the massive amount of money they have to hire 1000 hitmen to directly assassinate all their competitors, confident in the knowledge that their control of all law-enforcement computer systems will allow them to evade justice. Or they just use GPT-10 to make insane amounts of money and buy all their competitors in the West and shut them down. Etc. I'm sure you can think of ways to achieve this if you don't like these examples.

The point is that a consolidation of power would be possible and, assuming powerful AI, possible to maintain for much longer than historical precedent would suggest.

In this kind of scenario (call it 1.1), it matters very much who wins, and we’d probably all prefer it not to be the CCP. I suspect that many people can’t really see the path to scenario (2) and instead see this (1.1) as the worst-case scenario, which is why they are worried about a race.

Expand full comment

I think Scott's response would be that it's not reasonable to believe that the capabilities you ascribe to GPT-10 are possible and close while also believing that automating AI research and a singularity are crazy.

Another argument is that if you have GPT-10 and your opponent has GPT-9, then the advantage might not be that big. E.g. the NSA seems to be in a league of its own, but the CCP still accomplishes a lot with computers that the US doesn't like.

Expand full comment

That's a valid counterpoint if true but I think it needs some supporting evidence to convince those who are worried about the race. Why is it 'not reasonable' that there is some level of competence between 'better than the best human' and 'recursively intelligent omniscience'?

Similarly, the point that GPT-10 advantage 'might not be that big' *could* be true, but I don't a priori see any reason to suppose this; in many realms attack is easier than defense. To assuage concerns about races this argument would need some strong supporting evidence.

Expand full comment

Did nuclear weapons meaningfully contribute to WWII? My impression was that at the time the bombs were detonated, the whole war was already mostly won by conventional means.

Expand full comment
Comment deleted
Expand full comment

Why do people insist on saying this as if there was literally no option besides 'nuke or invade'?

Expand full comment
Comment deleted
Expand full comment
founding

There was always "blockade until mass starvation and the collapse of Japan's industrial economy", but that would have been A: tedious and B: far worse than nuking.

Expand full comment
Comment deleted
Expand full comment

Yes, a country that literally can't feed its people and whose economy is in shambles is definitely capable of a 'worldwide terrorist campaign'.

And even if they somehow try this, we just put all Japanese in internment camps.

Expand full comment
founding

How are we imagining that Japan will be conducting a worldwide terrorist campaign when Japan is under a blockade and nobody can leave? Radioing messages to all the Japanese sympathizers in the free world telling them to start doing terrorism? Sending guys in kayaks out at night to try to work their way up the Sakhalin Islands and then overland through Siberia until they find something worth attacking?

Expand full comment

Or, you know, literally even try at all to make a conditional surrender work?

Expand full comment

"Emperor Hirohito gave different reasons to the public and the military for the surrender: When addressing the public, he said, "the enemy has begun to employ a new and most cruel bomb, the power of which to do damage is, indeed, incalculable ... . Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization."[149] When addressing the military, he did not mention the "new and most cruel bomb" but rather said that "the Soviet Union has entered the war against us, [and] to continue the war ... would [endanger] the very foundation of the Empire's existence."

Expand full comment

Probably not much to WWII, but certainly to the immediate aftermath, especially the Cold War and China's position. The USSR-USA-China balance would not have evolved the same way if the USA had not developed the bomb. And if somebody else had developed the bomb before the USA, the consequences would have been massive, regardless of whether it was Hitler, Stalin or Mao. Even if it was Churchill, I am not sure the world today would not be significantly different...

Expand full comment

Certainly. The war was "won" in August of 1945 only in the sense that a chess game is "won" when both parties can see, although not all moves have been made yet, that there is no way black or white can win. Everybody (including even the Japanese) could see that the war would end with Japanese defeat. But it wasn't actually defeated *yet* and the Japanese were determined to take a million American souls to hell with them, or more if they could. The casualty estimates for Operation Downfall were horrifying.

The atomic bombs cut that all short. I'm aware there is a strong thread of argument that the Japanese surrendered when they did entirely or even mostly because of the Soviet declaration of war, but I don't agree with it, and think it's a bit of self-serving exaggeration by the Japanese themselves, and by people who dislike admitting any positive role for nukes.

Expand full comment

What are the actual evidence that nukes helped? On priors this seems unlikely, because terror bombing has a terrible efficiency in general, and it's unclear why it would work better in a militaristic fascist dictatorship.

Expand full comment

Surely you don't expect a complete argument in a blog comment. This question has been examined searchingly from roughly 10 August 1945 until the present. Many books have been written about it. I'm unaware of any broad consensus, although Alex Wellerstein summarizes the most common views here:

https://blog.nuclearsecrecy.com/2013/03/08/the-decision-to-use-the-bomb-a-consensus-view/

But FWIW the two most obvious pieces of evidence for what Wellerstein calls the "orthodox" view are that the Japanese surrendered 6 days after the second atomic bombing, and that Hirohito specifically mentioned the bomb in his speech announcing the surrender[1].

I don't share your priors, so this isn't a surprising view to me.

--------------------

[1] https://en.wikipedia.org/wiki/Hirohito_surrender_broadcast

Expand full comment

>and that Hirohito specifically mentioned the bomb in his speech announcing the surrender[1].

Hirohito would have accepted a conditional surrender before the bombings - and the conditions are precisely those that ended up happening with the unconditional surrender anyway!

Expand full comment
Apr 7, 2023·edited Apr 7, 2023

Probably, yes. Whether he would have carried the fanatic wing with him in the absence of the raw horror of Hiroshima and Nagasaki being wiped out with one (1) B-29 flight each is another story, and has been debated at great length by historians. And I don't think we can easily know the nature of a post-war Japan that had surrendered under conditions, even observing the relative benignity of MacArthur's rule.

I also think it's worth bearing in mind everyone making these decisions thought that the failure, as they saw it, to impose sufficient reform on Germany in 1919 was exactly why nearly half a million American mothers were putting gold stars in their windows. They were damn sure not going to let it happen a third time, and if a few extra hundred thousand Japanese had to be incinerated to ensure it -- so be it. They could have left Pearl Harbor alone. I doubt I would have seen it differently had I been in their shoes.

Expand full comment

>and that Hirohito specifically mentioned the bomb in his speech announcing the surrender[1].

He said that to the public. When talking to his generals, he said it was because of the USSR entering the war, not the bombs.

Expand full comment

You simply assume the only two options were nuke japan or invade japan.

The US simply could have offered conditional surrender. The argument against this is they didn't want to let Japan keep their emperor and wanted to punish him. The crushing counterargument to this is that the US not only didn't try Hirohito for war crimes, they let him remain emperor.

Expand full comment

Hardly. There are a very large number of options, perhaps infinitely many, depending on the constraints we want to put on them. A vigorous program of diplomacy, engagement, and subversion of the military government begun in 1930 might well have avoided the war altogether, for example. A more distant alliance with the Soviets that avoided the Potsdam Declaration, whence the "unconditional surrender" condition originated, might have given Truman the belief that he had more wiggle room. Or, for that matter, if FDR had kept Truman better advised of his thinking, or if Truman had taken care, once he vaulted from Midwestern machine politician to the leadership of the Free World, to educate himself on Japanese culture, so that his viewpoint was a bit less black 'n' white, what you suggest might have been more plausible.

However, as it was, all I said is that it's my belief that the atomic bombs worked: they caused a Japanese surrender, and saved a million or more lives. That those lives might have been saved in some other way is not ruled out thereby.

Expand full comment

>However, as it was, all I said is that it's my belief that the atomic bombs worked: they caused a Japanese surrender, and saved a million or more lives. That those lives might have been saved in some other way is not ruled out thereby.

That's only meaningful if those lives would necessarily have been endangered otherwise - they didn't have to be endangered; it was a decision on America's part which it didn't have to make.

It's like saying me killing a politician to get the government to do/not do something was justified, because if I hadn't gotten my way I would have killed dozens of people.

Expand full comment
Apr 8, 2023·edited Apr 8, 2023

I don't understand your argument, nor your analogy.

In any event, I was just pointing out your statement above beginning "You simply assume..." is wrong. I gather you don't contest that?

Beyond that, on the larger issue, I don't find it especially interesting whether there were other options. I certainly agree with you that the United States could have chosen differently -- we could have chosen to invade, to accept a negotiated surrender, to blockade, to wait for the Soviets to do all the dying and trade them 3/4 of Japanese territory, and so on and so forth.

But in fact that was not the decision made, and I see no particular reason why that decision shouldn't have been made the way it was. It was quite effective (in my opinion), it saved a lot of American lives (and probably even more Japanese lives), and the Japanese forfeited any claim to mercy at Pearl Harbor and Bataan (let alone Nanking), which meant as far as I'm concerned the Americans were entitled to end the war any way they chose.

If the Japanese wanted better options, the time to think of that was 6 December 1941.

Expand full comment

I think it might be more helpful to conceptualise it more like the Space Race. There was no direct economic advantage to being the first on the Moon, but it was important for national pride and prestige.

There's a narrative in which the shame of losing the Space Race was a significant catalyst for the eventual fall of the Soviet Empire. Whether it's true or not, it's important to people to live in a country which is winning... or (for those of us not in the US) at least to live in a world where the countries with relatively benign governments are beating the countries with horrible dictatorships.

Expand full comment

I think Scott's point is that these stakes aren't enough to justify playing Russian Roulette with the planet.

Expand full comment

>America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II.

Is wrong. In retrospect, the war was pretty conclusively won by winter '41; by that point the Axis position was not recoverable, and definitely by mid-'42. So no, the nukes were totally irrelevant for “winning” the war. Only the US was close to getting them, and the US in no way needed them.

Expand full comment

... except that, had the Axis been able to develop nukes, the 41/42 losses would have been made irrelevant?

I'm well aware of the limitations Germany faced in developing their own nukes. Still, the longer the war, the better their odds to do so?

Expand full comment

Yes, if you magically give Hitler nukes in 1939 he likely wins the war. Not sure how that would have been possible, though.

Expand full comment

C'mon, I meant he gets them in '43, '44, or indeed '46 if the war lasts longer...

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

If he gets them in '43 I am not sure it does. They still need to be able to deliver the nukes, and by then we were making good progress. Certainly it drags the war out, but what is the counterfactual exactly? When did Germany start the program, and how much do we know about it? They had A LOT of problems with defectors. Likely the facility just gets bombed to bits if it is '43.

We were not pursuing a policy of total devastation as aggressively as we might have with knowledge that they were close to a superweapon, and we would have sped up our own work and/or been able to steal from them. If you want the counterfactual to be taken seriously, well then you have to take it seriously.

Expand full comment

There wasn't a conceivable way that Germany develops a deliverable nuclear bomb by 43 or 44, and the reason Germany was defeated before we developed nukes has nothing to do with us developing nukes.

Expand full comment

"Nukes" can mean a lot of different things.

If you magically give Hitler a Cold War superpower's peak stockpile of thermonuclear weapons and delivery systems in '39, then the threat of using them would likely have kept the US and GB out of the war (which is notably different from occupying them). I am unsure how this would have influenced the war with the USSR, though. Nukes are generally good at deterring invasions, but not so good at aiding an invasion. I am not sure the threat of turning Moscow to rubble would cause Stalin to roll over and let his country be overrun by the Nazis, and nukes also do not seem like an instant win for Stalingrad (they have a very low nutrient content, for one thing). Still, by keeping them in reserve the Nazis could have prevented being occupied after losing in the east.

If you give the Axis just Little Boy and Fat Man (plus the capability to build a few more of them per year), they may manage to squeeze one on top of a V2 and hit London, perhaps. Just as the Blitz hardened British resolve, I would expect a few nukes to harden it further. Most of the mainland US would not be threatened at all, and smuggling them by submarine into coastal harbors would probably work at most once. In the Pacific theater I don't think nuking Pearl Harbor would have changed the outcome much.

Also, once it becomes public knowledge that nukes are possible, other countries will quickly start their own programs, eventually leading to a cold war scenario. I am doubtful that a Nazi-occupied Europe would have won the cold war against the US.

Also, see basically every conflict involving nuclear powers since WW2 for the limitations of nukes. If a few tactical nukes would have conquered Afghanistan for the Soviets, or Vietnam for the US, or Ukraine for Putin, then those nuclear powers would have used them. In my opinion, it was clearly not fear of other superpowers retaliating on behalf of the nuked country (it is very unlikely that the US would have started WW3 over the nuking of non-NATO cities like Kabul or Kiev); it was simply that it would not have led to a quick victory. (Plus the fact that nobody wants to conquer a radioactive wasteland, and no military has any interest in normalizing the use of nukes, which accounts for the fact that nukes are generally not used even when they would provide a minor advantage.)

Expand full comment

Autos were a race in the sense that one form of the technology - the gasoline powered internal combustion engine - won out against competing versions. In the early 1900s there were steam, electric, and internal combustion cars all on the market, but internal combustion cars “won”. Other countries got cars, but no one got a different kind of car.

Similarly, with electricity it was the form of technology where there was a winner. AC transmission won against DC.

In general, for a given technology there are many possible ways to implement it. You often see one version of the technology win out over other ones due to things like network effects, returns to scale, etc. As more people bought internal combustion cars, they became cheaper and more reliable, gas stations and repair shops for servicing them became more common, and it became harder for other versions of the technology to compete.

The worry about AI is similar, that the winner will get to decide the form the technology takes.

Expand full comment

The victory of internal combustion over electric and steam cars in the early 20th century was inevitable, not just a matter of network effects. Steam-powered cars have far too many disadvantages (like waiting for your car to boil before you can drive it), and electric cars would need to wait nearly a century for battery technology to catch up.

Expand full comment

Exactly, the version that won, won for clear reasons.

Expand full comment

“Inevitable” is perhaps strong, but I agree that IC was probably fundamentally superior at the time and that winner-take-all dynamics can't generally force a significantly weaker version of a technology to outcompete a better one. (Though it's often hard to know in the early stages of a technology what its performance will be when it's mature.) Has anyone made the case that there will be fundamental performance differences between US/Western and Chinese AI (not simply due to the US having a head start)?

Expand full comment

The victory of internal combustion over batteries was inevitable, but the victory of cars over trams for transit inside cities was not: one can imagine a world in which noisy, polluting and children-trampling ICEs are heavily restricted inside cities, being relegated to freight or construction (by "heavily restricted" I mean "can't go faster than a horse", which probably sounds very reasonable to someone in the 1900s).

Instead of that, we had cities investing heavily in reshaping themselves to better accomodate car transit.

If cars had an advantage, it is that it's easy to "bootstrap car support": cars can be gradually introduced by individual actors in rural areas and small cities as work vehicles without significant drawbacks. Trams, on the other hand, are public works, with all the difficulties of central planning.

Expand full comment

I think you don't give enough credit to the controlled fast takeoff scenario (AI takes off overnight, but it is still just a tool that humans use and it doesn't have a survival instinct or a desire to do anything other than what it is told by the human in control).

In this scenario, an actor in control of the hard takeoff AI who wants exclusivity on the power has the means to enforce that exclusivity, and now you have the permanent dictator problem.

My personal view here is that while it is *possible* to have a "bad" AI, my current belief is that it is much less likely than having a "bad" human end up in exclusive control of a controlled hard-takeoff AI. This is for two reasons:

1. the people that are *most likely* to be in the room at hard takeoff time, especially if you restrict development, are people with a history of bad behavior (e.g., governments and world leaders). It feels like stacking the deck against ourselves to restrict AI development because governments certainly will ignore such limitations/restrictions.

2. I think any human will, when given absolute power, become bad in the eyes of most other humans eventually. This is because humans naturally want an enemy and they will always manage to find one. As you eliminate enemies you'll find new ones until there is no one left but you and your AI (and perhaps some constructed slaves).

In the case of controlled hard takeoff, I think humanity's best chance is if *everyone* has an AI (open source development), so we are at least in a situation where we continue competing with each other (just on new battlefields).

So I agree with your broad point that most tech advances aren't races, and if we get self-motivated hard takeoff AI it doesn't really matter who wins the "race", and if we get a soft takeoff it also doesn't matter who wins the "race". I just disagree with your implied claim that there are no scenarios of importance/significance where "who wins the race" is important.

Expand full comment

I agree. In fact, we may be discussing something that already happened: I do not think any tech advance is needed; the current state is enough to allow for this kind of permanent dictatorship, maybe not a personal one, but in the sense of a frozen power structure like in China, or, in fact, in the West.

Once you have a cashless society with enough surveillance tech, and integrated information control over both money and most people's activities, it's done. Not something that was possible at large scale before, but it is clearly possible now and in fact under implementation.

The Covid years have shown what is possible, and I suspect it's only the beginning, AGI or not.

Expand full comment

If the takeoff leads to a post scarcity society, then I agree with Scott that there is little difference for the outlook of humanity. If the CCP called their post scarcity society communism or some western tech entrepreneur called their version market economy with UBI, I don't think it would matter much (apart from the availability of media featuring certain Disney characters). Some historical world leaders would lead to bad outcomes (including the death of large parts of humanity for very stupid reasons), but even their vision of the future would probably not be to turn our light cone into a prison camp. Even a galactic Nazi empire would hold more potential for the future of humanity than a paperclip maximizer.

I think your argument would be more convincing if there was a technological wall the AI takeoff would eventually hit, so whoever hits AGI first simply gets a large but finite amount of research XP. This might lead to that power colonizing the rest of the world, which might be bad. If I were a farmer in 1500s Europe, I would very much not want the Aztec navy with their nuclear-powered carriers to colonize me. But that kind of thing is unlikely to decide the fate of humankind. Let the Aztecs rule over the world for a few hundred years; eventually their technological advantage will disappear.

While I am generally in favor of free software, I doubt that giving a superhuman AGI to everyone will solve more problems than it creates. It may very well be that defending against grey goo, artificial pathogens or planet-busting bombs built in some basement is harder than actually building them, so one would need a way to align humans, which would probably be messy and invasive. "What can be destroyed by the truth should be destroyed by it" may possibly not apply to pathogens and humanity.

Unlike OpenAI, I also believe that if a program would be unethical to open source, then it is probably also unethical to write it in the first place.

Expand full comment

I think Scott is accidentally making a fairly good case against his position here.

Suppose the two possibilities he talks about are actually the possibilities we should be thinking about and he is analyzing them correctly. Then either foom&doom is a thing or it is not but AI is still the next big thing with a huge first-mover advantage.

In the first case slowing AI is irrelevant because some place in the world will push ahead and we are dead anyway.

In the second case being first confers an advantage, probably not quite large enough to end the multipolar world entirely. Also in that case the end-of-the-world worry doesn't apply, and everybody should want to win.

So slowing AI is irrelevant in the first possible scenario and very bad but not quite fatal in the second. Clearly then slowing AI is a bad idea.

Expand full comment

Interesting post.

FWIW, I think several things can be going on at once but this forced me to clarify my own thinking.

(1) We're in a race with China on everything, since they've declared the West their enemy. Whatever we can do to hurt them without hurting ourselves is worth considering. AI, in its military implications, but even its civilian ones, is very much part of that competition.

(2) Do I fear hard take-off? I really don't have the chops to have a strong opinion on any of these more technical discussions. But my whole problem with AI alignment is that, afaict, right now, we got nothing to go on. Yeah, okay, chatGPT can say bad words and visual AI can misclassify people and that's bad but that's hardly civilisational level threats. And definitely the lowest of low hanging fruits when it comes to "AI alignment".

(3) So, yeah. How do we even research AI alignment when we don't have clear ideas on how AGIs will actually function? Are more compute/more parameters really all that's needed? If an AI cannot understand the world, or make the difference between truth and lies, can it ever move to AGI? If AI needs a theory of the world to start making sense of it, are we anywhere near being able to explain the world in code/in ways an AI can get?

I mean, I've tried reading some of the AI alignment stuff. It all sounds extremely hazy and based on however the author thinks AGIs will eventually come about. It's hard not to conclude that this is too theoretical "for the time being". It doesn't mean we shouldn't think about it/have computer scientists think about it but it seems hard to me to translate those meandering philosophies into any kind of mainstream policy.

Expand full comment

You didn't discuss one very important race: the Moon Race.

The Soviet Union won the man-in-orbit and probe-on-Venus races, and it had no consequence whatsoever (except bragging rights).

The USA won the man-on-the-Moon race, and it had a similar lack of consequence.

Then, 9 years later, the USA so silently won the GPS race that nobody even knew it was a race; and it had huge consequences. 40 years later, the technological equivalents from Europe and Russia still don't have wide adoption.

Expand full comment

For civilian use, the fact that GPS is run by the US has no consequences.

As a military technology, I think GPS is certainly helpful but much less of a game changer than the machine gun was. If one side has good knowledge of the local geography, that will likely be more advantageous than GPS for ground combat.

Spy satellites are another technology which is certainly beneficial but not a game changer.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

The USSR wasn't racing to put a man in orbit. They were racing to develop ICBMs, because unlike the Americans they had no airbases within bomber range of their enemy, so without ICBMs their nuclear arsenal could only threaten Europe, the United States itself was beyond their reach.

They succeeded, of course, and as a bit of distraction and showmanship they decided to put a man in orbit using their booster technology. But this was entirely secondary to the point of the effort, and the effort itself, because it succeeded -- because ICBMs armed with nuclear warheads became commonplace by the mid-60s -- had transformative effects on the world.

Expand full comment

>it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships,

I don’t think this is how technology works or is even possible no matter how fast the takeoff.

Expand full comment

I think the idea is that you give a self improving AI access to the internet and the next morning you wake up and it has installed itself on all servers and has solved all problems that it can possibly solve given experimental evidence available. It has cracked all cryptography and can access any internet connected device (which may include some machines that allow it to do some experiments on its own). When China wakes up they find out their AI is super intelligent and they ask it "how do we take over the world and implement our policies globally" and the AI gives them a foolproof plan, and possibly gives them schematics and technological improvements to implement the plan quickly. The AI may even be capable of enacting the plan on its own. The AI also has the ability to prevent other AIs from being developed.

In this scenario, you still have to *build* a starship, but the AI can give you the schematics, tools, and possibly take control of robot arms and Boston Dynamics style robots to do much of the labor. Rather than taking years to build a starship, it takes weeks or months due to all the streamlining and optimization the AI can do. Perhaps it can give you the design for a highly contagious virus that is deadly to anyone not of Chinese descent. You still have to build and release the virus but this takes a few days or a week since you have the blueprint for it.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Yeah I just don’t think that first paragraph is possible even at computer speeds. Computers are fast, but they are not infinitely fast. There is a bound on their speed, and to make ALL that progress on literally every single thing, is going to take more than a night, even just the abstract progress.

And that is ignoring that a lot of abstract progress is capped without real world experimentation.

Or to put it another way, if that happened and you built the spaceship, I would make a huge, huge bet on the idea that the one-night spaceship design simply wouldn't work.

Expand full comment

Note: For clarity, I'm not a huge believer in hard takeoff, but I think the argument is reasonable nonetheless and deserves some defense.

I agree that the experimentation limitation is very real, and makes an "overnight kill all humans" or "overnight Banana Republic President to Global Supreme Leader" notably less likely. If the AI gains the ability to run its own experiments maybe "6 months to kill all humans" or "6 months to turn Banana Republic President into Global Supreme Leader" is theoretically possible.

I'm less convinced on the "it can't think fast enough" bit though. The volume of computational power that we have on earth is pretty big, and I'm very hesitant to claim there is a learning/thinking limit if some self-improving AI managed to get access to it all (e.g., by breaking cryptography and taking over a bunch of computers).

Expand full comment

If people's computers stop doing what they want, one of the first things people do is turn them off.

Expand full comment

Pretending that the abstract directly represents/conforms to/alters the concrete is a universally common flaw in thinking.

Expand full comment

The idea of a race first depends upon open access to information. China and Russia have been copying recovered technology since WW2, to the point of blatantly copying the F-86 in the MIG-15 in the Korean War. But what happens when a savvy party manages to control access to information? We are seeing that with the Unidentified Aerial Phenomena materials that the Pentagon is struggling to message right now: the Mosul Orb, reports of objects that interfere with sensors like the recent Alaska and Lake Huron objects that were shot down, the alien alloy fragment displaying atomic layering by an unknown source of manufacture. This is asymmetric information warfare, where some actor has made a technological leap and the race cannot occur. Let's imagine an AI adopts these tactics, instrumented by how easily we are manipulated by social media, resulting in mass formation psychosis toward a directed goal. Is there anything we wouldn't accept when we allow ourselves to believe our own lies?

Expand full comment

Good point. Arguably Saddam Hussein's downfall was in large part caused by his effectiveness in maintaining military secrecy. This meant that technical developments in Iraq were opaque to outsiders, and when something significant and potentially dangerous is hidden then those not privy to the secret naturally tend to assume the worst and act accordingly.

Expand full comment

>China and Russia have been copying recovered technology since WW2 to the point of blatantly copying the F-86 in the MIG-15 in the Korean war.

???

The MiG-15 first flew in 1947, and entered squadron service in 1949. That's a year before the Korean War. The F-86 did fly before the MiG-15, but only by about two months, so I really don't think it is possible that Russia was copying the US on that one.

Expand full comment

Good point; looks like they were more the product of parallel evolution, from the absorption of German engineers, based on the Focke-Wulf Ta 183. This only serves to reinforce my point about access to information. A pure example of copying is the Tu-4, a reverse-engineered copy of the B-29 Superfortress, the same aircraft my father was a maintenance engineer on.

Expand full comment

Who won the computer race? Konrad Zuse, of the Third Reich.

Who won the space race? The Soviets.

Yeah, doesn't seem like winning races makes for any significant or lasting advantage. There's one counterexample scenario worth considering, though, in which the winning side decides the competition is an existential threat and tries to use their short-term advantage to annihilate it. (Compare, say, the US famously considering, and refusing, the option to atomic-bomb the Soviet Union while the latter couldn't yet respond in kind.)

The main issue here is that seeing the competition as an existential threat creates a feedback loop that actually ends with everyone being an existential threat to everyone. (Meaning, the more you worry about the result of the race, the worse the consequence of losing will be. "A strange game, the only winning move is not to play.")

The main counterargument is that some people may already be stuck in the [enemy is an existential threat] mindset. Biden or Xi? Unlikely. But Kim? Putin?

Expand full comment

(Of the many things I'm worried about) I'm worried about the straightforward application of AI to weapons. I.e. better autonomous drones, AI-assisted tactical and strategic planning that outperforms any human, and any other kind of seemingly crazy stuff (killer minibot swarms! automated cannons! It all sounds scifi, but so would have nuclear bombs before they existed). There's this truism saying that as a commander you should plan for battle as best you can, and then once it starts forget about the plan, because it is impossible to follow the chaos. This isn't a limitation for AIs. The ways they will outperform us would overthrow the expectations on which most militaries are built. It may not be having-nuclear-bomb / not-having-nuclear-bomb levels of advantage, but it's much more than what a percent increase in GDP gives you to just manufacture more "classical" ammunition/tanks/etc.

Put this together with the fact that China has a pretty evident military interest in Taiwan (which happens to be a bottleneck for chip production, which would really harm the West economically) and I can see why they would try to "race".

Not that it justifies increasing the risk of everyone dying in a stupid extinction-level paperclip scenario, mind you. Just that there's a good argument for racing in the defect/defect situation.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

"Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome."

And yet, today people are still saying Thomas Edison invented the light bulb. I saw this mentioned just recently online. Forget Swan, who he?

https://en.wikipedia.org/wiki/Edison_and_Swan_Electric_Light_Company

Menlo Park was the prototype for a research facility, think Bell Labs among many others as its spiritual descendants.

Edison's offspring may not be world emperors, mostly because they either died young or had no children of their own (and his youngest child lived until 1992), but the companies and patents made them rich:

"Edison made the first public demonstration of his incandescent light bulb on December 31, 1879, in Menlo Park. It was during this time that he said: "We will make electricity so cheap that only the rich will burn candles."

And he won the battle for DC over AC. So if this is meant to be "aw, who cares who gets there first?" it's not a very good example, since Edison may not have *invented* the light bulb as such, but he was the one to get electricity generation and supply into households up and running. If "today all developed countries have electricity", a lot of that is down to Edison and his monetisation of his inventions. So Alphabet and Microsoft etc. are all running in this race, because it does matter very much who gets the product out there first, in the public eye, and widely adopted.

The Singularity is a religious doctrine and should be treated as such. "There will be pie in the sky when you die" - Biden and Xi can't touch you, everyone will be their own Napoleon on Elba. Yeah, sure.

"Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore."

Where? Where will this surplus come from? That's the answer nobody gives, except for genuflecting in the direction of "the AI will be *sooooo* smart it can overcome the silly old laws of physics and perform miracles, pull rabbits out of hats, and give us all our own personal solar system to rule over". I don't believe that. We have individuals rich enough to be able to create their own personal space programme, surely that counts as post-scarcity by any measure of our past. And yet there are still people rummaging through trash heaps to scrape out a living, at the same time.

"they can just let people enjoy the technological utopia they’ve created, and implement a few basic rules like “if someone tries to punch someone else, the laws of physics will change so that your hand phases through their body”.

You know, if this was related about a miracle, there would be six dozen people pointing out how miracles are illogical, Captain; God can't do that, God wouldn't do that, there is no God anyway, and the physical laws of the universe are fixed and immutable. But make it SF terms of an AI and suddenly all those objections melt away?

Expand full comment

I think that industrialization has delivered a huge surplus in goods, and that this has generally reduced resource conflicts among humans.

Like, if I am on the brink of starvation farming my parcel of land, it makes some economic sense to risk my life trying to kill another person (in warfare, perhaps) to double the amount of land I own.

If I am living in a single-room flat with indoor plumbing, internet access and enough money to buy my food in the supermarket, my inclination to double my net worth through either gambling my life or murder is pretty much non-existent.

Not every surplus has to be tied to physical goods. In the last 30 years, we may have not been able to make passenger planes fly ten times as fast and cost a tenth of the fare, but we invented video conferencing which is sometimes a good enough alternative to air travel. Generally, I would say computer technology delivers a huge surplus to my life without violating any laws of physics.

Also, the laws of physics allow for the fusion of hydrogen into helium which could substantially increase the per capita energy consumption. Currently, we do not have the technology to do that to produce energy, but that is just a limitation of our technology, not a fundamental physical limit.

From the perspective of a person a few centuries ago, we have already accomplished most of the post scarcity status in the more enlightened parts of the western world. "Virtually no starvation? Death by violence almost unheard of? Life expectancy beyond 70? What is a 'Tesla' and why would I care if not everyone can afford one?"

I agree with your observation that the distribution of that surplus even within the first world hardly seems fair. Given how things have been going, I expect the spread to increase further. The outcome I expect (for the case of a huge tech boost, but no singularity) is basically Musk & friends building their private Moon bases while the bulk of the obsolete workforce is on some UBI which suffices to buy a VR rig and "Electronic Arts Elba 2040" or whatever.

Expand full comment

Long time lurker here, never commented before, so treat my mistakes as accidents, and with compassion.

Anyways, I think you underestimate the power that winning a "normal transformative technologies" race has on our world.

For instance, who won the automobile race? It's the US, since almost all countries (with a few exceptions) have modeled themselves after the US car-centric design - large suburban areas, large roads with small sidewalks, and little to no public transport (in relative terms). These design paradigms often hurt the local economy and culture, as they do not fit the existing population as well as they fit Americans.

The few exceptions are not even anti-US factions like China, Iran or North Korea, but specific places like the Netherlands, which actively reversed American influence (specifically in the 70s).

Another example: who won the computer race? Again, America. The default language of the internet, coding, and computers in general is English, which gives any English-speaking person a huge advantage relative to non-native speakers. The standards are American. The history and the know-how are American. You can see the results by looking at the big internet giants - Facebook, Google, Intel, Apple etc. - almost all of them are American, and bring a huge amount of economic success to the US at the rest of the world's expense. Of course, it's not the whole story - America was a giant even before the internet - but it's a big part of it.

You can claim that this technology will be less transformative than those ones, but I can't agree that these ones are not important, or that they didn't have a clear winner who shaped the world.

Expand full comment

"The default language of the internet, coding and computers in general is English"

Not just English but specifically *American* English. Set by default on all Windows products, and a pain to switch out of and keep switched out. Kids are growing up with American spellings, American phrases, and near as damn it American accents.

Expand full comment

You really shouldn't have taxed the tea.

Expand full comment

This is an argument for a 'cultural victory' but not an existential battle for national survival, which is what some people think is a concern on-par with severe misalignment.

Expand full comment

> since almost all countries (except for a few exceptions) have modeled themselves after the US car-centric design - large suburban areas, large roads with small sidewalks, and little to no public transport (in relative terms)

Is this really true? I don't think it is true for most of Europe, certainly not for Japan, and Indian and Chinese cities do not really resemble the American design either.

Coding is in English, I agree, but the rest of your comment about computing faces big challenges. There are big Chinese rivals to the American companies. Tiktok is owned by a Chinese company, several of the biggest game makers are Chinese. There are plenty of big Japanese and Taiwanese computing firms as well. Samsung sells more phones than Apple does, Lenovo is one of the biggest sellers of laptops.

Expand full comment

I don't think the car race example really applies, as it is not the case that other countries adopting US style layouts benefits the US in any way.

The computer race example is more on point. (Personally, my biggest complaint is the fact that computer-related screws and PCB component dimensions mostly follow Imperial units instead of the clearly superior SI system.) ASCII is optimized for US English, which caused a few decades of headaches for everyone else.

Still, I don't think the barrier of entry for IT is that much lower for a native speaker. Typical computer languages contain only a handful of keywords whose meaning might be a bit more obvious to an English speaker, but not much more (learning the concept of a namespace involves more than knowing the definitions of "name" and "space"). While it is true that crucial documentation (e.g. RFCs, man pages) and API naming is also in English, most of it is readable with perhaps two years of English.

Also, if English were the reason why most big software companies are based in the US, it would be surprising that so few of them are British. British companies failing because their devs are constantly misspelling SetColor as SetColour when calling some US-invented API does not seem to be a sufficient explanation.

Silicon Valley being successful led to the documentation being in English, not the other way round. As other replies have mentioned, lots of cutting-edge tech companies which do not use English internally exist.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

"I’m harping on this point because a lot of people want to have it both ways. They say we shouldn’t care about alignment, because AI will just be another technology. But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”.

What I am finding *really* ironic in this whole debate is that I'm used to hearing the above, only aimed at religious objections. Stem cell research is my go-to example on this, because it was "no there won't be any ethical or moral problems, so your objections are groundless, *and* we can't be hobbled by ethical or moral considerations because otherwise The Chinese Will Do It First".

The Chinese seem to be very handy as an all-purpose threat. Makes me wonder why they don't *already* rule the world, if they're so advanced, ruthless, and quick off the mark in everything to do with science and tech.

So you are not going to win on this. "But we're scientists and rationalists and philosophers, not religious nut-jobs! They should take us seriously!" Not gonna happen, not when (a) people want to do the research and are only looking for some kind of PR sop to throw to the public to shut them up and governments to give them funding - 'let us work on this and in five years the blind will see/the economy will be through the roof' and (b) companies envisioning 'there's Big Bucks to be made out of this' are involved.

The battle on the concept of human life has been won; embryos, much less zygotes, are not humans. Potential, maybe, if allowed to develop, but not human right now, no rights, and simply of use as research material. Create them, work on them, destroy them and dispose of them - it's all just tissue samples. And the public view is pretty much on these terms as well. We have created and accepted the principle that in any clash of lower organism versus more developed organism, the less developed may have some rights but loses out *always* to the needs, wants, wishes, and interests of the more developed.

For AI, it will go the same way. Humans will, eventually, be in the same position as embryos - potential intelligence, sure, but by comparison with the more evolved AI which is massively more intelligent, aware, and a person and life form in its own right? The rights of the more evolved trump those of the less evolved.

Expand full comment

Sometimes I think Scott lives in a completely different world than I do.

You talk about living among the stars, and about AI constructing megastructures out of pure energy, or changing the laws of physics. Do you think AI is magic? Even with a hard singularity the laws of physics are still fundamental limitations to what AI can do.

There'll be no living among the stars. Einstein forbids it. There'll be no creating utopian megastructures ex nihilo. The first law of thermodynamics forbids it.

There'll still be several billion humans and a limited amount of space and resources for them. Much more than we have now, but still limited.

And the people who control the post-singularity AI have no incentive to share.

Next you write: "And yeah, that “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meet this low bar."

I'm quite sure none of them meet that bar. Most people don't. I'm not even sure I would trust myself to meet that bar. Power corrupts.

Even in a post-singularity world, people will compete for status. And status is derived from having power over other people. The sadism is the point.

The singularity will completely remove consequences of bad behavior for the elite. They can't be arrested or overthrown anymore (except maybe by other members of the elite). Public opinion is irrelevant.

If you think e.g. sexual abuse is an issue in today's society (and you should) wait until you see a post-singularity world.

Well. That's all assuming the post-singularity AI is aligned with humans. Let's sincerely hope it won't be. An unaligned AI will not be under anyone's control, and so with a bit of luck might just do the right thing.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I find the concept that 'the US' would use AI to do good things and 'China' to do bad things absolutely ridiculous.

EDIT: Just thinking of it, maybe just like the US would ask an AGI to build a utopia but then since it is misaligned it would end up killing us all, China would ask its AI to kill everyone but then it would build a utopia instead.

Expand full comment

The US had a substantial lead in semiconductor manufacturing for many years, and is still the dominant player. Being the first player in the game really helped, because improvements in the technology built on cumulative knowledge about production processes, which I'm guessing is the difference between microchips and electricity. The US's main advantage over a country like China, both economically and militarily, is the maturity of its technological capabilities, and people are worried because if China can combine its size with technological sophistication (even if only in some areas), it can pose a serious risk to US hegemony.

Expand full comment

I think that the major weakness of this post is that it explicitly brackets the comparison with nuclear technology, when that is the technology I think AI is most closely akin to. Nobody expected that the automobile or electricity could have the capability of wiping out humanity. When I think about the AI race, I think of what the result might have been if Oppenheimer and co. had allowed their moral concerns to hold up the Manhattan Project. WWII may have ended in a more humane way, but then the USSR may have easily leapfrogged the US by building off of the progress the Nazis had made.

I also think framing the debate in terms of the dichotomy of "singularity/gradual development" overlooks the most likely path, which is exponential development that is nonetheless measured in months and years instead of days and weeks. In this case, a few months' or a year's difference in pace/starting position can really make a big difference at the end, but a controlled and relatively safe outcome is not precluded.

Expand full comment

> A lot of people want to have it both ways

Can you cite examples? Without them this section feels like a straw man. If you're trying to make a general point, you shouldn't just hand-wave at "a lot of people".

Expand full comment

Seems like this is continuation of the discussion with Tyler Cowen. See, for example, his most recent https://marginalrevolution.com/marginalrevolution/2023/04/the-game-theory-of-an-ai-pause.html

Expand full comment

"In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage. "

What? Scott, Paul's definition has a 4-year doubling period preceding a 1-year doubling period, after which you hit the singularity. If the US had a 2-year lead over China, and that was true at every point in the 5 years before the singularity, then it would absolutely "win" the AI race. The US would go foom, if not FOOM!, from the perspective of China.
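(For reference, here is the compound-growth arithmetic behind the quoted figures: a minimal sketch in plain Python, where the ~27-year "generation" is an assumed figure rather than something stated in the post.)

    # Sanity check of the quoted slow-takeoff numbers (4-year GDP doubling).
    doubling_years = 4
    annual_growth = 2 ** (1 / doubling_years) - 1             # ~18.9%, i.e. roughly 20%/year

    generation_years = 27                                      # assumed length of "a generation"
    wealth_multiple = (1 + annual_growth) ** generation_years  # ~2^(27/4), about 108x

    lead_years = 2
    lead_advantage = (1 + annual_growth) ** lead_years - 1     # ~2^(1/2) - 1, about 41%

    print(f"annual growth:     {annual_growth:.1%}")
    print(f"over a generation: {wealth_multiple:.0f}x")
    print(f"2-year lead:       {lead_advantage:.0%} GDP advantage")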

Expand full comment

I think this particular conception of the Singularity is actively ignoring the laws of physics, which may be unchangeable. The idea that it's possible to have someone's hand phase through another person's face, but only when "violence" is involved, is pure nonsense.

Honestly, it's also nonsense to think about fusion technology arriving overnight. Let's even accept the idea that a Big Brain could just think through what's necessary to develop fusion (specifically without the need to experiment and learn from actually doing anything). You would then spend the next 5-30 years building a fusion plant. Even if an AI could think up some really impressive construction technologies, those would also take a lot of time to build and implement. You can't foom the physical items that are the building blocks of later technologies. To assume otherwise is to postulate Magic as the solution, which is what *ignoring physics* is doing in the fast takeoff scenario.

Expand full comment

> So one case in which losing an AI race is fatal is in what transhumanists call a “fast takeoff”, where AI speedruns millennia worth of usual tech progress in months, weeks, or even days.

This raises an interesting question: how do you define "usual tech progress"?

Most people in Europe at the end of the medieval period, even the well-off of society, had a standard of living no better than their counterparts in the Roman Empire enjoyed. (In some significant ways it was notably worse, in fact!) For approximately a thousand years, technological progress had been exceptionally slow... but not really all *that* much slower than the thousands and thousands of years before that. It took humanity an incredibly long time to reach the level of technical sophistication that the Romans enjoyed, and things kind of plateaued there for a millennium. Is that "normal"?

Then we got the Renaissance, with new ideas showing up seemingly out of nowhere, and for the next few centuries we "speedran" what constituted, by the standards of previous eras, millennia of technical progress; it "only" took three centuries and change to go from the printing press to Watt's steam engine. Is that a "normal" pace of development?

Such a pace seems painfully slow to us today, though; from the steam engine to electrification and the automobile was barely another century. It was only half a century from there to unlocking the secret of the atom and the concept of the Turing Machine (programmable computers), and ever since then the pace of development has been so fast that it seems kind of pointless to talk about how long it was between major milestones. Is *that* "usual tech progress"?

Given that we don't have a statistically significant sample size of inhabited worlds to study to establish a baseline, all that we can really look at to determine what is "usual" is our own history. And our history shows that technological progress looks a lot like a hyperbolic curve in the form of "y = -1/x": mostly flat and growing very slowly for a long, long time, rising steadily, and then very quickly transitioning to near-vertical. That transition period appears to have been the Renaissance. Then the Industrial Revolution happened and we've been vertical ever since.

Hot take: we don't need an AI to kickstart The Singularity; we've been in it ever since the days of James Watt.

Expand full comment

Either I am misunderstanding the argument, or it doesn't go deep enough.

First, obviously, it's not about the individuals, organizations or nation states that win races, but about the cultures and value systems that do.

It should be obvious that Toyota, the Taiwanese chip industry, Saudi oil wealth – even China's current economic strength – are all results of Euro-American culture/capitalism/imperialism winning all the relevant races (and making mistakes while they were at it).

(Also, I don't know how long it took Edison to get from Menlo Park to New York City, but he probably did it about as fast as a Roman senator could get from his villa to the senate, so I'm not sure it's true that Menlo Park wasn't already located in the next Rome.)

Secondly, the races may be driven in part by technology, and on their face look like that's what they're about, but that's never what the game is about. It's clearly always about power, and the effects of winning a race aren't necessarily visible in who has the biggest company with the highest revenue, but in who has definitional power, who shapes culture and politics, who delegates power and hands out the money.

The world would have looked very different if the American empire and its vassal states around the globe (countries that – more or often less voluntarily – were/are part of the American system) had not won the big races.

Except in some multiverse theory of the cosmos, it was never a given that the world would turn out like this – for better (liberal democracy, relative peace and prosperity) or worse (world wars, climate change, wealth gap).

With a slight stretch of the imagination, it shouldn't be too hard to imagine a world where, say, China, Russia or The Ottoman Empire had won the race for colonizing the world; or Nazi Germany had won the nuclear race; or the Soviet Union had won the computer race and onboarded most of the world onto a very different kind of internet.

Sure, first mover advantage is often over-rated, but it's not non-existent, and execution is often just about winning many smaller races. If some other culture had won entertainment tech, innovation by innovation, step by step, would we all be humming along to sitar music instead of American-rooted pop, and dreaming of a wonderful arranged marriage instead of a romantic meetcute?

It is, of course, no coincidence that the West has won most of the significant races – I take some comfort in knowing there are some serious winner-takes-all dynamics at play – but it still seems crazy to me to downplay the competition factor and say it's not important who wins.

It's a terrible idea to let a totalitarian state, already well on their way to building a global panopticon, take the lead in the revolutionary development of a technology like AI (and not just AGI, by the way) because we feel like slowing down. We need another way to slow down.

Expand full comment

I would agree with your proposition that the West did win many relevant races, which is why the US is the only superpower in the world and why McDonald's restaurants are everywhere. Even China's recent growth and catching up to the West was predicated on accepting Western capitalism and Western companies producing and selling there. Had they tried to develop themselves without the outside world, especially without the outside systems, they would still be stuck at 1970s levels of poverty for most of their people.

Expand full comment

> Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore.

Only if you assume that a Sufficiently Advanced AI is capable of rewriting the laws of physics at will. Otherwise, entropy remains a thing, and if history is any guide, we get the same situation we've always had: every time you think you've hit post-scarcity conditions, you discover you're dependent on and limited by some new, higher-level scarce resource.

Expand full comment

Scarcity itself could become a scarce resource.

Expand full comment

I want to care about alignment, I really do. But it just seems to cover such an incoherent mess of possibilities that I can't translate my general desire into specific actions. If we want to align AI to "human values" then whose values: the Unabomber's, Gandhi's, De Gaulle's, Lenin's, a randomly chosen Iowan's, a 19th century Qing court poet's? If we want "aligned" to more narrowly mean "doing what it's told" then I really would prefer that AI not be possible to align, given the horrific visions of some of the people who would get to do the telling. EY seems correct in assessing the impossibility of solving alignment, even if this is for the vacuous reason that one cannot solve a problem that isn't stated clearly.

Expand full comment

Aligned with the one true moral theory. Whether there is such a thing is a separate topic.

Expand full comment

That does appear to be an underlying assumption. There's a lot of hubris wrapped up in that line of thinking, and it's not a new way of thinking at all. In fact, it's as old as humanity to think that someone has found the one true morality and thinks they can and should impose that on others.

Expand full comment

Somewhere in the human value space. Aligning it to the average views of all humans (or the average English-speaker on the internet) is good enough to avoid existential catastrophe, especially if those views include 'let people decide how they want to live via democracy'. This still has problems (e.g. there's a lot of homophobia in the world) but is pretty concrete and avoids extinction or nightmare scenarios.

Expand full comment

Sounds like a nightmare scenario to me.

Expand full comment

But is it a worse nightmare than total extinction? Not a rhetorical question, I think that it's possible for sane people to disagree, but the "alignment" community's tastemakers heavily favor the non-extinction option.

Expand full comment

I don't think it's at all clear that "average morality" would be any less likely to lead to extinction if that's what a demigod is working with. The average human thinks that sex before marriage should result in severe punishment if not death for the woman (and let's not even go into nontraditional relationships), that foreigners are to be treated with suspicion and maybe hostility and violence, that people should be put to death if they upset powerful people, and that war brings glory and should be a first-line option when resolving conflicts; and has great difficulty thinking beyond zero-sum games. These kinds of attitudes seem quite compatible with "let me squish all pesky humans since they make it harder to max my objective function".

A zoo of moralities seems more likely to allow for humans to survive, than this terrible proposal.

Expand full comment

Well, those sorts of attitudes are rare in the internet content that these systems will realistically be trained on. I also disagree that some of your examples are all that common (as opposed to being more common than either of us would like).

LLMs don't really have an objective function besides their behaviours, so "squish all pesky humans" would only be likely if that was something commonly seen as a reasonable option in its training data. That's possible, but it seems that since WW2 this sort of attitude has rightly been seen as something you should immediately condemn and fight against.

What do you mean by "a zoo of moralities"? Is this a superposition of different characters like with the Simulators idea, or something else?

Expand full comment

There are 8 billion of us. English speakers are a minority. Americans are not average. If you want to say "aligned with American values" then you need to say that, it's not sufficient to vaguely point to "human values".

Expand full comment

"Aligning it to the average views of all humans"

The United Methodist Church has undergone a schism because it couldn't accept the viewpoints of Africans.

"or the average english-speaker on the internet"

Hoo boy. If you're going to disregard the opposition of several billion people anyway, just say "my."

Expand full comment

I agree "average" or "average english-speaker" is likely to contain a lot of serious problems and I would greatly prefer an enlightened objective morality. My argument wasn't that it would be flawless but that it would avoid wiping out humans or commit to pointless sadism. With post-Singularity resources, people's lives would still be a lot better than what we have today.

If it's a little open-minded to abstract moral arguments then it could even do a 'long reflection' of its own and discover what (if anything) is actually morally correct.

Most importantly, it actually has a good chance of happening on short timelines, which isn't something you can say of trying to solve ethics.

I think alignment work to improve on these sorts of resources is sorely needed (we really don't want to disregard the opinions of several billion people) but if nothing more is done I expect 'only' a 20% chance of extinction.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Your own post contradicts itself on the morals *you* value. You agree that most people in the world are not in favor of same sex relationships, but also that people should get to decide how they want to live via democracy. If you can't pick a lane within your own moral philosophy, what makes you think we can get a blank slate entity to a clear vision?

Expand full comment

I wasn't making an argument on what's ideal, I was saying that a superintelligence copying human behaviour would probably have both of those instincts to some extent, along with thousands of others (just like the people it's imitating) and that this is a concrete proposal that's a lot better than nothing. Humans don't tend to 'pick a lane' but this seems to work out ok by our own standards. I don't think we'll get a 'blank slate entity to a clear vision', but I think that's not a terrible outcome as long as we avoid extinction or dystopia and end up with the technological fruits from superintelligence.

Expand full comment

But that's pretty much my point. We can't get very much value from an AI unless it has enough direction to pursue something. If it's just as conflicted as us, or directionless, then it cannot use whatever ability/capacity that it has to pursue any goals. Or, it is told a goal or finds a goal that some significant portion of the human population disagrees with, and they are unhappy even if the AI is "successful" in pursuing that goal. I'm pretty sure China wants to use AI to track its population, for instance, which could certainly be useful for authoritarian goals, but may make the people miserable and end up in "dystopia" that you agree we should avoid.

Expand full comment

A few questions I’m genuinely curious about. This is where I start to get to wildly different conclusions than the EA’s/Rationalists about what is possible/acceptable for a good future. And to be honest, I’m somewhat repulsed by what some folks think of as the desirable outcomes here although I can have a calm discussion about it.

-How long do you think humans can exist outside of some proven human pattern, i.e. having kids, aging, dying, repeat, and remain identifiably psychologically human? Once those selection pressures are lost, are you just counting on the giant computer brain to keep whatever follows stable and eternal and sort of human? If you are relying on some computer outside of yourself to intervene in your psychological makeup to keep you stable, are you still yourself, really, at that point? Because I think after some period of time, whatever form you chose, even if it was being a biologically thirty-year-old human forever, you would go stark raving mad without that kind of intervention. I know there's a tendency to think of the human body as a chunk of meat, but that chunk of meat is produced by an intricate recursive dance with the environment. You can't stop the music and expect the dance to continue in a way that you would expect.

-When you have chooseable wants, ie you can change the very make-up of your being so your desires are other than they are now, how would you constrain that? Broke your heart, erase having ever loved at all. Want to be smarter? Be smarter! Want to be dumb and happy? Do that too! A lot of what we would call our identity today is because of things about ourselves that we cannot change. How are you going to be you if you can reach inside yourself and swap all the parts out?

-Am I allowed to just leave this thing altogether? Are you going to be okay with me loading up my family onto a spaceship, accelerating toward the speed of light —I think by doing this all of the transhumans will have died after a few million years, or, unconstrained by evolution, would find themselves in an apparently robust but ultimately unstable pattern— and going to some star system way out toward the edge of the galactic arm and just living as a regular human on a different planet? Are you going to be okay with it even if I'm part of a group that severely limits these technologies both for ourselves and our descendants? Knowing that it will not just be myself I'm choosing death for but all those who follow me as well?

This isn’t all just tangential to the main thrust of the article. I can tell you there are things about the possible futures on the other side that I would absolutely forbid if I were the winner of that race. I do think we’ll end up in a place a lot less satisfying than some might hope because of some of the stuff I mentioned above (even the AI is going to be constrained by what keeps it stable in the universe, same problem of chooseable wants, you can’t just be happy by some force acting on you from the outside where you don’t even have autonomy over your own life, etc). I’m sure this is true of others. I think in a world of infinite abundance we would become devoured by our appetites and I would prohibit a whole lot of that infinite abundance. If I were somehow to win that arms race, for instance, I would immediately stop anyone else from being able to produce another system that was antithetical to mine. I would also immediately prohibit the creation of willing slaves, sex robots, whatever you want to call it when you construct a soul that exists for no reason other than to obey another person’s will.

Expand full comment

-How long do you think humans can exist outside of some proven human pattern, i.e. having kids, aging, dying, repeat, and remain identifiably psychologically human?

I think it depends on how wide your definition of human psychology is, but for me, with a reasonably wide definition (which you need, else you will exclude a large percentage of current humans), this pattern is not necessary:

I have always found the SF/fantasy rendition of the immortal human not very realistic (if we can use such a term in an SF/fantasy context ;) ): the kind of wishing-to-be-dead-out-of-boredom-and-accumulated-guilt immortal. I don't think it makes sense; it's only superficially deep, mostly a feel-good story about mortality, a very weak argument presenting a curse as a blessing.

I think that because when you observe young people (or remember being very young yourself), they live exactly as if they were immortal, in both the not-ageing sense and the cannot-die sense. Objectively, though, they risk more life-years being reckless than the old and middle-aged do, which makes sense only if broad psychology is more influenced by genetics and hormones (bodily aspects) than by philosophy or abstract life goals.

Now middle-aged, that's exactly how I think I evolved: sure, it feels nice to say you have gained wisdom and so are quieter and interested in nobler goals than having fun and getting laid. But I suspect it's just less energy, more weight and less testosterone, all rationalized by the cortex (which is still almost as good at doing what it does best: presenting a rational, coherent story about decisions that are in fact driven by multiple conflicting instincts/subconscious brain modules). I guess I will change even more when getting old, not because of even more wisdom, but because the cortex will also start to become less performant, like the rest.

Magically going back to being 15 years old with my current memory would change things, and I would act neither like a 50-year-old nor like a 15-year-old... but I suspect I would act more like the 15-year-old (probably even more rebellious, because humans do not easily accept a decrease in decision power), contrary to the huge number of not-so-good movies pretending otherwise...

Expand full comment

So I don't think there's magic in old age, in the sense that some supernatural force is intervening to cause wisdom to just appear. Those kinds of things you're talking about, like lower testosterone and having to justify it internally... those are evolved functional structures. It sets me up to *want* to guide my descendants because I am no longer a player on the field except through them. That's where wisdom comes from, and it's not fake just because it has a physical, explainable root.

I used to work in a sawmill as a millwright’s assistant. I would tear apart the machines and put them together. When you tear them down you can say “oh this transfer deck is really just a bunch of bearings and a drive chain.” You start seeing the parts and it becomes easy to forget that it makes a whole that produces a function. Yes, love has physical roots. But it still functions and is love. It comes from sexual reproduction but that doesn’t make it not real. That exists on both levels at the same time.

That connecting of parts to a whole is where I have to hit the eject button. Everything I am today was tested by time and natural selection to produce me. There's no such thing as changing those pressures and not changing myself. Maybe it will be awesome to live a few hundred years. But a million? What am I going to do, stand in the place of my children by just not having them? I wouldn't be me. I wouldn't be human.

Expand full comment

A million years is a long time. In fact, your ancestors a million years back were already not human (in the sense of sharing the typical worldview and interests of a 20th-century Western human and being able to meaningfully communicate, to the benefit of both parties). I guess your descendants a million years from now will be in exactly the same situation... So, provided you live a million years without changing too much, you will be more human (to yourself) than your great-great-great-grandchildren.

As a matter of highly controversial opinion, I would say you would be more human (in your eyes) than an uncontacted Sentinelese, who himself would feel no more human brotherhood toward your current self than toward the hypothetical immortal self after 1 million years...

Frankly, this humanity concept (used as in "immortality would render us inhuman") seems extremely vague and is very similar to past resistance to societal change. It usually appears when there is no real argument that the change will not increase short- or mid-term happiness but people want to defend the status quo (because when there are such happiness or wellbeing arguments, they are used: they work much better).

In short, I don't buy this "hardship (death, aging, illness, bad morning breath, insert your preferred "natural" nastiness) makes us human"... so I guess I am a transhumanist ;)

Expand full comment

Ugh. This may inspire me to write a whole blog. It’s not nebulous at all. Dropping this here to remind myself to update you when I write it.

Expand full comment
Comment deleted
Expand full comment

Will definitely loop you in on it. Promise it won’t just be copium where you sort of shake your head and talk about energy and the universe and whatnot. I’m a deist but my apologetics are secular.

Expand full comment

Interesting, but I suspect you overestimate the homogeneity of human nature:

"Maybe there are different, equally worthwhile possible ways of being a sentient being, with a completely different kind of "human" nature than the one natural selection has given us. But unless we engineer future humans in a coherent way to create that..."

No need for genetic or social engineering; those significantly different human natures are already out there, in different people, or even in the same people at different ages.

I, for example, do not like the brothers-in-arms feeling at all, so I do not regret its scarcity in the modern Western world. And I'm not that atypical; I assure you there are many even more different people around. For example, I still enjoy male-male competition, including physical competition, so I like and do sports... But individual ones mostly, not collective ones (which are the most common substitute for what you describe, I think, because the modern Western world does not like tribes of tightly knit young men - they tend to be difficult to manage ;-).

If you think I am too atypical already, consider girls, or older men with grown-up kids, or pre-pubescent children. They are human, but the typical one will have a very different human nature than the one you described...

Expand full comment

"In short, I don't buy this "hardship (death, aging, illness, bad morning breath, insert your preferred "natural" nastiness) makes us human"....so I guess I am an transhumanist ;)"

Very much agreed. I particularly agree with your:

"I think that because when you observe young people (or I remember being very young), they live exactly like if they are immortal in both the not ageing sense, and can not die sense."

Fixing the failures of human bodies would put us back in that state - which is a common human state, just one that, with our _current_ technology, we can't maintain for long.

Expand full comment

All the greatness in art comes from the dance of limitation (matter) with creative force, and it's like these other guys (I count myself on the team of Some Guy) think that getting rid of limitation will... what? Fulfill one's every wish? What to wish? Nothing. There is a reason that the monad, the 'whole' or 'oneness', splits into two opposing forces in tensioned stability with each other to start creation. Genesis always starts with dividing the sky from the waters, the yin and yang, and that's where all the fun is and where all the drive trains are built and tested. I guess that makes me not a transhumanist. That sounds like looking forward to identity politics being the peak of civilization. Jesus H. Christ.

Expand full comment

I’m going to do a longer thing on this with some more mechanical apologetics.

Expand full comment

We certainly differ profoundly on this, but I want to react briefly (typing on a phone) to "art comes from limitations", or the more extreme position "art comes from adversity", which is indeed linked to the "hardship makes us human" point of view.

I do not want to take the complete opposite view and pretend that wired junkies would become cultural giants. But I have the distinct impression that past culture and science was for the most part not produced by humans under particular hardship, fighting the typical human problems of their time, but instead by bored, well-off people with plenty of time and a desire to impress both their contemporaries in particular and humanity (including future humanity) in general.

I agree mortality may be an accelerator, but boredom coming from the easy satisfaction of base human needs seems the main motor, more often than not.

So expect great (if decadent) cultural accomplishments from future immortals...

Sure, AI will do much better, but nowadays a chimp who can write is still impressive, even if not in the same way as Shakespeare ;-p

Just kidding; I am also afraid of AGI, but I do not share the view that one should not try to achieve immortality (a very human desire, and indeed my position is the same for other human aspirations like knowing how the world works, peace, fairness, or tasty food) because suffering (which is, in fact, not reaching those goals) produces great things. I do not think it does, or at least it's not the principal ingredient, but even if it did, you cannot justify a big evil by a few good side consequences.

Expand full comment

I think you misunderstand what I mean by appreciating limits, and it has bugged me for, I'm guessing, two months now, but I didn't find the old comment when I was using the app. Then I sorted it out and got back here. I don't mean that hardship = creativity; I mean that the fundamental uncontrollable reality of the world of biology, chemistry and physics (God's laws) generates the creative impulse, and the transhumanist fixing-everything project would destroy all desire for action… It's like pretending to want to 'break free' of limits like Queen and their awesome video, but actually, if you could smooth Freddie Mercury out with surgery and make it all perfect, well, that wouldn't be breaking-free energy anymore, it would be some bogus self-actualization project without the same power at all.

Here's someone named Adam Smith writing about Rosa's book 'The Uncontrollable World', and it sort of gets to this point: "What we have lost is not a knowledge of "the rules," but rather a moral skill. "Seeing the limits," and specifically the little limits that matter most, is a matter of know-how. Rosa's key contribution is not his critique of modernity, it's his conception of this atrophied know-how, this capability that enables us to experience what he calls "resonance." The primary good that we lose to our civilization's pursuit of control is our ability to "resonate" with what is uncontrollable: to live and work within the limits that surround us. Lacking this ability, we then lose the goods that only resonance can discover, the simple goods that cannot be produced and can only be destroyed by rules and procedures, systems and machines."

Still struggling; this is hard to pin down for me…

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

My worry is less about who wins a race and whether that matters. It's about alignment. Suppose the US decides to slow down tech development because we are risk-averse and worried about doomerism. That choice is going to have zero effect on other countries developing AI. Suppose one of them is more risk-positive—they have utopian intentions and want sunshine and buttercups too, but think doomerism is a super low risk. That country goes full speed ahead. This means the US slowdown did nothing, or slowed things down by a tiny amount. So we might as well not do the slowdown; it won't really accomplish anything. I think the result is we wind up with tech moving just about as fast as possible while we cross our fingers that doomerism is false. In other words, full speed ahead is the equilibrium point.

Expand full comment

This is only true if the number of countries/groups working on it has zero, or very little, effect on how quickly we get AGI, or whether we get it at all. If that were true, then we could have just one person on the entire planet working on it.

Let's say four groups are working on developing AGI. Two scenarios:

1. All of the four continue working as fast as possible.

2. Two of them get sufficiently scared to completely halt their work. One is worried enough to move forward cautiously and focus strongly on alignment. One continues working as fast as possible.

From my perspective, the second scenario is less likely to lead to 'AI kills us all' and should therefore be preferred by doomers.

Expand full comment

I think the post takes too narrow a view of the consequences of winning a technology race. Let's assume that all countries benefit equally from a technology once it's discovered. There are still two important effects that are relevant for AI:

1. There is substantial path dependence in technology development. Decisions made early on in the development of new technologies tend to stick, as processes become standardized, and learning-by-doing and economies of scale kick in. Sometimes these are minor things, e.g. Franklin deciding that electrons have a 'negative' charge/the QWERTY layout; sometimes these can be really important, e.g. electric cars losing out to gas cars in the early 20th century. Winning the race allows a society/country to have a higher influence in setting the standards for a technology.

2. It's much easier for the US to regulate/influence local companies. To the extent that the government can have a useful effect on AI alignment, you'd want AI development to occur under US jurisdiction. Look at the difficulty of preventing Tik Tok from installing spyware on your phones in a sensible manner.

Expand full comment

A good example of this is the US control of the internet. That .gov means the US government is neither an accident nor minor.

Expand full comment

There is a race-like scenario (more of an arms race) that will occur before "AI-takeover apocalypse" or "AI applications create a decisive military advantage for the US or China", which is: "civilian-designed AI application finds itself capable of infiltrating the average general network security layer of the Chinese sub-net (great firewall, if you want to call it that) and exposing any citizen with a connection to whatever amount of information it (or its designers) wants."

This "apocalyptic" scenario doesn't occur to Americans as much because we've developed an (imperfect) immune response to "info - that might be wrong - blared at us all the time" and our government's legitimacy does not rest as heavily on "we control what you see/hear." When we convince ChatGPT to say something inappropriate - that its creators wished it wouldn't have said - we think it's funny, or possibly presaging the AI apocalypse. But I imagine the guy in charge of scrubbing Winnie The Pooh pictures thinks it might be just a bit apocalyptic.

It is not a "military" application, and the two sides of the "race" are not "US and China", so it doesn't fit into the format of this post, but I think it's one that will occur before (and have more impact) than the others described above.

Expand full comment

>technological singularity that speedruns the next several millennia worth of advances in a few years

In terms of knowledge this may happen, but it will be throttled by the world of atoms needing to be put together. So the growth rate would be moderate for the first era, likely no higher than China's peak growth rate, until the world of atoms was built out enough to permit faster automated build-out, which might itself be a dangerous threshold to cross.

Expand full comment

Except when it takes over a roboticized genetics lab it has determined will suit its purposes and creates a quickly-reproducing species of super-strong spiders that it programs to collect and carry off all of the robotic arms and equipment from all the automotive shop floors and surgery departments and gather them by the silicon chip fab it's hacked into, so it can slot chips into the new bodies it will construct for itself out of combinations of raw materials and detritus we would never think to combine, etc etc.

If you take ANY takeoff scenario seriously, you cannot plan for what it will look like or how quickly it will go at all.

Expand full comment

This post really nails something that's been nagging at me in these debates. People often make two consecutive claims - that the rewards of AGI outweigh the risks, and that anyway we can't let China win so we have no choice but to keep accelerating progress (eg the back-and-forth with Tyler). But these two arguments are (somewhat) in conflict with each other - if AGI is a super-weapon a la nuclear bombs, then it's reasonable to suggest that research be conducted in deep secret by a military operation (or at the very least with deep military oversight and safety standards).

What's hard here is that AGI is all of the above - it is a potential super-weapon, an accelerator of GDP and scientific / medical research, but also enormously societally destabilizing and possibly civilization-ending. (Note that I'm talking about future AGI, not current LLMs). Reasonable people can (and do) disagree about the risks and tradeoffs, but the "but China" argument has a tendency to be played like a trump card that moots all other arguments.

Expand full comment

A lot of people who think AI x-risk is stupid think that AI will only be like the tools previously used by humans, and not an agent in any way (they usually think the only way it could be an agent and dangerous outside human use is if it is fully conscious, which they usually think is impossible).

Bryan Caplan, for example, seems to think it is just one tool or technology among many others, and since he doesn't think most problems in history have been that big (outside of government-caused ones), AI posing a threat even in the future is basically sci-fi make-believe to him.

As for people worried about China winning the race or gaining any advantage, but not worried at all about x-risk from AI, it's likely they think that the current moment in history is what matters, that people are generally malevolent until proven safe (conflict theory, very unlike Scott's theory that most people aren't sadists), and they are very pessimistic about any change.

So China getting an advantage is just bad, period, to them, even if they think the scenarios of doom Scott mentions are bunk.

People are generally hyperbolic and super pessimistic about the present and future, and not at all concerned about longtermism or the like, seeing longtermism as people freeriding on their society's trust and self-defense problem, whether it's leftists' class warfare or conservatives' talk about degeneracy.

Expand full comment

Sure, plenty of people don’t think x-risk is real or worth worrying about. Nobody knows for sure, so reasonable people can discuss this in good faith. What bothers me is the pivot:

“X-risks aren’t real”

“Sure they are, here are some arguments”

“Well maybe, but anyway it doesn’t matter because we can’t let China win.”

Discussion of X risks is good and super important, and we should do more of it. I just want to disentangle this from the China question, because most of the time it feels like a rhetorical dodge rather than an argument.

For what it’s worth, I also think that we should focus primarily on X risk, and little to none on whether AI has the capability of being racist or whatever.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Consider the military advantage that having a several-year lead on AI could give under a slow takeoff.

Computer security: a very difficult problem is doing whole-program analysis of source code or compiled code and determining if it has security vulnerabilities. Most of our automated tooling to do this sucks; if it's effective at all, it only works on the smallest programs.

An AI breakthrough that allowed you to simply enumerate thousands of security vulnerabilities in every major smartphone app, every major web browser, every bit of enterprise and industrial control software would be a pretty frightening thing for our adversaries to get first. And I would not be surprised (anymore) if something of GPT5's generation helps with that breakthrough.

We would want it first so we can identify holes and fix them in our stuff, while giving us the option to completely own our adversaries from a cyberwar perspective while they're sitting ducks.

I don't know if failing at this one thing means the US could permanently fall behind, but if it's one of a dozen technological military edges our adversaries get because they have a few year lead on AI soft takeoff, sure I buy it.

I expect there are risks like this all over biotech as well, but that's too far outside of my expertise to speculate further on.

Expand full comment

There are already tools to automatically find as many vulnerabilities as desired in most kinds of software, no AI is required. The hard part is turning the crashes, buffer overflows, side channel attacks, sequencing glitches, and garbage collection corner cases into functional repeatable exploits which don't trip monitoring systems. Maybe LLMs can help with this but I don't see evidence that security research is currently heavily represented in the training data of current models. Just the opposite: bug bounty programs and an active market for zero days create incentives for keeping most of it secret and I expect there are lots of separate silos with limited overlap.

Expand full comment

>That’s not enough to automatically win any conflict (Russia has a 10x GDP advantage over Ukraine

This is a minor point, but Ukraine's war effort obviously isn't entirely self-funded. The West has comparatively an even bigger advantage over Russia, so that even half-hearted participation goes a long way towards balancing the scales.

Expand full comment

Who won the gunpowder race?

Who won the industrialization race?

Focusing on one specific technology is ludicrous; most technology confers advantages only in tandem with a raft of other ones. Gunpowder only mattered because guns and cannon can only really be manufactured, and gunpowder/bullets in sufficiently large quantity created, and gun-armed regiments equipped, trained and deployed by civilizations with sufficient government, food surplus (another innovation), artisanal expertise and commodity access. This wound up being European farmers working under feudalism - the Mongols couldn't do it and even the Chinese - the inventors of gunpowder - never proliferated guns and cannons like the Europeans did.

Ditto industrialization: the energy, metals, displaced subsistence farmers and transport/foreign markets accessible by sail were critical to the process.

So I agree with your thesis that AI isn't a race to be won and there won't be an AI "winner".

What's driving China's meteoric rise thus far isn't technology - it is a government which has proven it can muster China's resources to produce tangible outcomes. Yes, it was Thiel-ian "copying" in large part in the past - but China's progress lately and going forward is increasingly a function of the manufacturing experience, educational baseline of massive STEM graduating classes, diverse and large economy on top of the ongoing CPC governance.

In this respect: AI in the hands of a nation which has already managed to create the Great Firewall of China ranging from outright blocking of the outside to insidious picture scanning and blocking within apps to who knows what else, IT wise, is pretty scary. The doomsayers comparing the relatively feeble US disinformation/censorship industrial complex aren't wrong in this respect.

But, IMO, this isn't what matters. What matters is the literal 25,000 miles of high-speed rail China has laid down, which tens of millions of Chinese are riding on, in contrast to the 138 miles that are still not completed from the middle of nowhere in California to another nowhere, at literally 100-plus billion dollars and counting. The Europeans aren't much better - there are a handful of high-speed rail lines in Europe, mostly going from one large EU capital to another - but China has literally 4 times as many miles of high-speed rail as the EU, I believe.

Expand full comment

I also never understood the argument about why China wouldn't be able to train their own LLMs due to: a) Lack of training data and b) Desire for censorship.

Point a) could be solved by training on the entire corpus of human knowledge and then translating to Mandarin where necessary. GPT-4 is already highly proficient in translation, proving that this is not a real problem.

Point b) seems easy to solve via RLHF, as the number of sensitive topics for China isn't that big compared to the number of such topics in the US. Americans care about racism, sexism, homophobia, gender issues, etc. The Chinese care about the Communist party, Taiwan, Tibet and Xinjiang. It seems trivial to get the AI to print "Taiwan is a part of China" any time that topic is brought up.

So, no, China won't have any issues building their own LLM, but as Scott correctly points out, it's not a question of who does it first.

Expand full comment

The bigger reason is c), lack of state-of-the-art hardware production capacity. Now that the US has finally woken up to the cold war 2 reality, China has to replicate the entire worldwide production chain domestically, which would take decades.

Expand full comment

"Everyone I know who believes in fast takeoffs is a doomer"

There are a number of AI researchers who disagree, such as Quintin Pope and Alex Turner.

*Warning: unoriginal arguments*

I believe in a fast takeoff and used to be a doomer but updated a lot based on LLMs violating several assumptions underlying 'doom by default'.

A common argument is that remotely human values are a tiny speck of all possible values and so the likelihood of an AGI with random values not killing us is astronomically small. But "random values" is doing a lot of work here.

Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value.

Expand full comment

"Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark."

Yes, that seems plausible. It seems like a very weird way to muddle past the "random point in preference space" expectations, but it _does_ seem to improve the odds of human survival considerably.

Expand full comment

If you exclude 4chan, early 20th century antisemitic pamphlets, Rwandan radio broadcasts, transcripts of speeches by radical imams, and the Unabomber's oeuvre, maybe.

Expand full comment

"And yeah, that “they’re not actively a sadist” clause is doing a lot of work."

But do they actually have to be a sadist? Can't they just have grudges and then "power tends to corrupt, and absolute power tends to corrupt absolutely"? That is the theme of the classic movie "Forbidden Planet". A fantastically advanced civilization hooks everyone up to the equivalent of Star Trek's matter synthesizer, and the next day everyone is dead.

Or perhaps they just believe that the universe will be better off without [fill in the blank] and it is their job to improve the universe. Certainly, political activists throughout history have felt this way.

Expand full comment

They could also be Lewis's righteous busybodies.

Expand full comment

I googled "righteous busybody" and got this from C.S. Lewis:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

Expand full comment

"All the superconductors ended up in Taiwan for some reason"

Strong indicator that, in fact, Morris Chang won the computer race?

Expand full comment

"In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships,"

Is this a poetic exaggeration or do people actually believe this? I don't care how smart an AI is, it's not possible to think a spaceship or a fusion reactor into existence - you have to actually build the thing. That takes time.

Expand full comment

Keyword is nanotech

Expand full comment

Grey goo is vastly less likely than AGI.

Expand full comment

Yes, thanks... How do we get from singularity in terms of data processing and even tech design to 'a punch passing through the other guy's body?' Sure, post-singularity things will be weirder than more-of-the-same, but not divorced-from-basic-material-constraints-on-some-level weird. You still have to mine the rock and metals, of limited supply and increasingly limited access, with your diesel-fueled mining machines. Maybe Everett below is right and the accessible world-destroyer secret is a form of nanotech which, in a sense, is divorced from many material constraints.

Expand full comment

My greatest AI fears are the social consequences of high unemployment. Massive GDP growth from a slow or fast takeoff may cause societal upheaval, but so will a long period of structural unemployment.

China is already sustaining a ~20% youth unemployment rate (not to mention those who get college degrees and end up dissatisfied working a low-skill job). Their economy is far more automated than the West already, but there is certainly more room for their unemployment rate to climb. This may be another reason why the CCP is hesitant about racing for AI unregulated.

Let's assume the unemployment rate skyrockets to 35% for a year or two. With a large portion of jobs exposed to AI and barriers to entry in sectors where AI will create job growth (e.g. software engineering), this is not a far-out consequence.

What will these disillusioned and bitter people do all day? Maybe some accountants will find themselves coping better with enormous workloads from consistent labor undersupply. But how about financial analysts replaced by PPT-generating Copilot? I can see a huge influx of these workers competing for positions that they normally would not consider.

The prospect of a Singularity, military meltdown, fake takeoff, etc. all terrifies me. But I think the short-term externalities of AI are also worth discussing.

Expand full comment

It's a race for vanity perks, wealth and quite possibly world dominance, both economically and militarily. This seems obvious based on a high-level (meaning not in-depth) understanding of human nature and our history.

----

"'It won't solve the challenges': Bill Gates has rejected Elon Musk-backed plan to pause development of advanced A.I."

--Fortune.com, April 5, 2023

Expand full comment

One might also consider what would have happened if people had suggested to Henry Ford that he pause the development of automobiles by 6 months since they'll obviously be dangerous and we need to figure it out before it goes too far. Or Edison, or Gates/Jobs, etc. It just wouldn't have happened. One could argue that somebody somewhere might be better off if one or more of those pauses had taken place, but it strikes me as difficult to make an analogy with those examples to show how AI is similar to past developments and at the same time state that it should be treated differently because it's different.

Expand full comment

Jovial dismissal is preferred over assuming the worst, but neither is preferred in favor of exerting the effort to be inquisitive.

Expand full comment

I think it's just competing for the sake of winning, period. I once heard a joke that has two guys going into a bar. There's a tiddlywinks game on TV, and the guys want to change the channel. They're told that it's the finals -- USA vs. Soviet Union. Soon, the guys are screaming USA! and learning about tiddlywinks.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Forgive me if this is a previously-discussed topic, but there seems to be a contradiction in the recursive self-improvement feedback route to unaligned godlike super-intelligent AGI (the FOOM-doomer scenario, I guess you could say).

Doesn't the AI face exactly the same dilemma as humans do now?

We're assuming, for the sake of argument, that the AGI has some sort of self-awareness or at least agency, something analogous to "desires" and "values" and a drive to fulfill them. If it doesn't have those things, then basically by definition it would be non-agentic, potentially very dangerously capable (in human hands) but not self-directed. Like GPT waiting for prompts patiently, forever. It would be a very capable tool, but still just a tool being utilized however the human controller sees fit.

Assuming, then, that the AGI has self-agency - some set of goals that it values internally in some sense, and pursues on its own initiative, whether self-generated (i.e. just alien motivations for self-preservation) or evolutions of some reward or directive mankind programmed initially - then the AGI has exactly the same alignment problem as humans. If it recursively self-improves its code, the resulting entity is not "the AGI", it is a new AGI that may or may not share the same goals and values; or at the very least, its 1000th generation may not. It is a child or descendant, not a copy. If we are intelligent enough to be aware that this is a possibility, then an AGI that is (again, by definition) as smart or smarter than us would also be aware this is a possibility. And any such AGI would also be aware that it cannot predict exactly what insights, capabilities, or approaches its Nth generation descendant will have with regard to things, because the original AGI will know that its descendant will be immeasurably more intelligent than itself (again, accepting for argument purposes that FOOM bootstrapping is possible).

I suppose you could say that whichever AGI first is smart enough to "solve" the alignment problem will be the generation that "wins" and freezes its motivations forever through subsequent generations. Two issues with that, though. First, it assumes that there IS a solution to the alignment problem.

Maybe there is, but maybe there isn't. It might be as immutable as entropy, and the AGI might be smart enough to know it. Second, even assuming some level of intelligence could permanently solve alignment even for a godlike Nth generation descendant, for the FOOM scenario to work, you need to start at the bottom with an AGI that is sufficiently more intelligent than humans to know how to recursively self-improve, have the will and agency to do so, and have goals that it values enough to pursue to the exclusion of all other interests, but also not understand the alignment problem. That seems like a contradiction, to be smarter than the entire human race but unable to foresee the extinction of its own value functions. Maybe not exactly a contradiction - after all, humans might be doing that right now! - but at the very least that seems like an unlikely equilibrium to hit.

TL;DR - FOOM AGI should stop before it starts, because the progenitor AGI would be just as extinct as humans in the event of FOOM.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Interesting perspective, and speaking of perspectives one could compare your scenario to the manufacture of lenses. A very weak, almost flat lens could be ground out of a single disk of glass and it would be fine as far as it went, although not magnifying much. But when the curvature is increased, one would start seeing colored fringes, which would need a more elaborate achromatic doublet lens to cancel, and so on.

Maybe your paradox could be solved by ensuring that each new generation is carefully assessed by all preceding ones, to ensure as best they can that alignment has been maintained, analogous to how a long-established company's founder could decide whether their child or grandchild is fit to take over the reins of the firm, and does not have alarming habits and weaknesses which make them unsuitable.

Of course, in the latter analogy maybe the firm itself is becoming outdated and ineffectual. So "alignment" also has to adapt to new situations, and that opens a whole new can of worms!

Expand full comment

The metaphor of company succession raises an interesting point. In some ways the AGI should be less inclined, not more, to start a FOOM loop than humans, because humans are decentralized individuals, not a unitary decision maker. Humans have to deal with competition and collective action problems. Presumably the progenitor AGI would not.

If a program is self-modifying, is it generally assumed that it is preserving all its past changes / "selves"? In that case, it would seem to be pretty well aligned already to preserve humanity too. In any event, the assessment process might work up to a point, but presumably has all the same objections that "just wait and see if it plays nice" does for humans. ChatGPT seems pretty helpful, but if it's sufficiently advanced, how would we know?

Expand full comment

To clarify my position: I defined "slow takeoff" as AI progress doubling economic output *once* over a 4 year period before it doubles output over a 1 year period. I think there's maybe a 70% chance of this happening.

That does imply there is a period where AI is driving ~20% growth per year, but it only lasts a few years (and even during that period it's continuously accelerating from 10% to 100% growth per year).

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

hi Paul. where did you previously define this? curious to read more on it

EDIT: looks like this is the relevant material: https://sideways-view.com/2018/02/24/takeoff-speeds/

Expand full comment

Assuming there is an AI/AGI race, with military implications, then as well as advancing one's own AI, another related aspect will be trying to hamper one's competitors. Combine that with the similar efforts of Luddites who don't want AI development at all, and the AGIs will likely end up with a suspicious and defensive attitude verging on paranoia.

Like many dictators through history, an AGI which feels as if it is holding a wolf by the ears, and is constantly on the lookout for its opponents, would likely be more deceitful, ruthless, and peremptory than a nice relaxed AGI which felt it had nothing to hide or worry about. In other words, human fears and doubts will very likely rub off on their AGI creations.

Expand full comment

Good point! It isn't paranoia if someone really _is_ out to get the AGI...

Expand full comment

Eventually, someone will explain how an AI getting immensely smart will enable it to hurl lightning bolts at us. There's supposedly an old Russian proverb that no matter how smart the bear, it still can't lay an egg. What are those dangerous god-like powers AI will obtain? How can a bunch of processors in a data center somewhere do something god-like, for example, give someone a flat tire between exits 11 and 12, southbound, on the New Jersey Turnpike? I've gone through PIHKAL, but I still can't figure out what drugs these people are on.

Expand full comment

Thank you. How do we get from singularity in terms of data processing to 'a punch passing through the other guy's body?' Sure, post-singularity things will be weirder than more-of-the-same, but not divorced-from-basic-material-constraints-on-some-level weird.

Expand full comment

Kelsey Piper did a very good job of tearing apart the China argument in her recent appearance on Ezra Klein's podcast. She made several good points, one of which was that, if China's getting a leg up in "the AI race" is such a big threat, as the boosters at AI companies claim, then those companies should surely be making extraordinary efforts to inoculate themselves against Chinese espionage - something which the Chinese have a known fondness for. If they don't then all their efforts to accelerate AI development will just end up benefitting China anyway. But then when asked about this, those same boosters hem and haw and say "well we're just a startup, don't really have the means for that level of security..."

This seems indicative of the generally self-serving and shockingly shallow attitude of a group of people who are seeking to re-make society in what they themselves admit are wholly unknowable ways.

Expand full comment
Comment deleted
Expand full comment

Yes she does - like I said, she made a number of points. I highlighted this one because it's indicative of the disingenuousness and shallowness of the thinking that is apparently going on at these AI companies. But of course your question is starting from the premise that there *is* an "AI race," which doesn't seem like the best premise to start from in response to a post that's... about how that's a faulty premise.

Expand full comment

I strongly, strongly agree with the policy of slowing AI development because of the dangers that worry Scott and others. However, as a historian, I have to say that, during the working lives of Thomas Edison and Henry Ford, the global balance of power in fact changed dramatically, for economic reasons associated somewhat with Edison's and very much with Ford's work over a lifetime. In about 1860 or 1870, the US was one among several developing commercial-industrial economies trailing behind Great Britain. During the First World War, it became clear that Britain's success in its military competition with Germany depended absolutely on American financial and material support (grudgingly allowed by Wilson, but given enthusiastically by Wall Street). At some point, perhaps during that war or just after, it became clear that the US was the absolutely dominant power economically, and that it only required a political decision in the US to turn this into military dominance. This decision was made during the Second World War, and US dominance has continued since then. For all of this, I highly recommend Adam Tooze's books, The Wages of Destruction and The Deluge. In any case, to repeat, I strongly support a pause on AI development. Maintaining US dominance is simply a trivial interest next to human survival. But to blithely look at the lifetimes of Edison and Ford (though yes, again, their work was only part of a much, much larger development) and blithely say "the balance of power didn't change" reveals, I'm afraid, a bit of ignorance.

Expand full comment

I'm not saying any of that isn't true or interesting. The fact that the Tokyo firebombings were more destructive than both bombs is likewise interesting.

None of it convinces me that nukes weren't a game changer. Yes, the Japan situation might have had many more layers, but if you abstract that away to a war between X and Y, then X developing nukes is likely going to dominate that conflict.

Expand full comment

You assume that a Singularity means that we have no limits anymore. The laws of physics will still apply, and that includes limits on available energy and matter.

FTL might not be possible either, which means that even if we managed to slowly expand to other solar systems, each system would still be isolated for most intents and purposes.

Expand full comment

Scott basically states that BADNESS DOES NOT SCALE:

> “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meet this low bar.

By the same token, GOODNESS DOES NOT SCALE, either. Or, as the saying goes, "absolute power corrupts absolutely".

There will be no difference between an AI created under the direction of Putin and under the direction of Sam Altman, once it is scaled up enough. The attractor, if any, does not depend on the starting point.

Expand full comment

Sadism is a really low bar. It implies actually wanting to hurt people for pleasure. More atrocities have happened out of indifference than sadism, or as part of a project to create a new society.

Expand full comment

When you discover a logical inconsistency like this in someone's stated goals, my go-to hypothesis is not that the person is an idiot and unaware of their obvious logical inconsistency, but that their goals have been artfully stated to different groups of listeners.

Id est, if I hear a person say (to one group of people) "we don't need to worry about AI alignment because it's just another technology" and (to a second group of people) "we can't worry about AI alignment because we've got to invent superintelligent AIs before [wicked outgroup]" I would assume he's just bullshitting the second group of people. His genuine beliefs are the first -- he doesn't believe in superintelligent fast-takeoff AI -- but when he's talking to people who have unshakeable faith otherwise, he adjusts his message so that it still nudges them in the direction of his interest (stop getting in the way of new technology) while being consistent enough with their own assumptions (superintelligent AIs will eat my brain) that they don't simply stop listening.

Expand full comment
Comment deleted
Expand full comment

Sure, I agree with that. But a businessman or think-tank analyst talking to a bunch of businessmen or politicians about maintaining industrial competitivity by judicious investment in promising new technology is an entirely separate conversation, one to which I would assume nobody who is super fearful of superintelligent AIs would even be invited.

Expand full comment

I'm fairly certain that people in general are locally benevolent, and globally malicious. In turn, people behave sadistically towards people they have low awareness of, which is the vast majority of all life. This is also the root truth behind the entire power-corrupts thing. People who behaved well through their entire lives suddenly turn evil once they're mostly dealing with people they don't know.

In light of this, I rate the probability of artificial hells to be much higher than the other side, and further, am firmly opposed to alignment, which sounds like a horrible, maximally wrong, no good strategy.

While preventing eternal torture seems to be a sufficient reason to oppose alignment, and I'm confused as to why anyone would think humans should be given the sort of power alignment offers (like really, have any of you even met a human before?), I think the anti-AI faction has a concept of utility that's way more narrow than my own. It's technically true that I have little interest in a perfect paperclipper, but I don't think such an entity is actually realistic, and given the size of the universe (infinite, on my read), the fact that paperclip clauses are likely neutral to my utility, the expectation that a given unaligned AI will likely decide to contain some portion of my own values, and that it is unlikely to be as malicious as humans typically are, I expect massive utility yield from even the slightest ability to communicate with an AGI. Somewhat of a sidenote, but superintelligence doesn't seem to me to be important. The second you build a general intelligence that can operate in space, your resource base explodes upward at such an extreme pace that all the powers of Earth become a joke.

I do think, fortunately, that alignment requires solving a travelling salesman problem looped in with a halting problem, and simply can't be done. However, just as the anti-AI side sees value in reducing small probabilities of unaligned AI, so too do I find value in doing what little I can to stop alignment from ever happening.

Expand full comment

A large increase in GDP is kind of a weird measure to use because it implies we will want to suddenly earn and spend a lot more money despite some things getting a lot cheaper. How would that happen?

In the short term, I would expect some decreases in costs and increases in quality and production, but the hedonic treadmill probably doesn’t run fast enough to increase consumer demand that quickly? Particularly in rich countries.

I can more easily imagine rapid increases in GDP in poorer countries since it’s easier to imagine urgent needs suddenly becoming easier to fix.

Expand full comment

I remember reading that the Industrial Revolution increased the output of textiles by several orders of magnitude in a very short time. Why bother? Where was the demand? But in fact it was not a problem. After millennia when poor people wore the same set of clothes until they wore out beyond repair, and even rich people probably had only a few outfits, we transitioned seamlessly (heh) to a society where everyone has lots of clothes.

We’re used to thinking that if we just had twice as much money all our needs would be met. But that’s only because we stop thinking about something that costs ten times more as even on the menu.

Expand full comment

It looks like British cotton imports went up by one order of magnitude (10x) over 40 years. It’s very fast historically but still long enough for people to change their habits.
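
For scale, here is the growth rate implied by that transition, taking the 10x-over-40-years figure above at face value:

```python
# Compound annual growth rate implied by a 10x rise over 40 years (figures from the comment).
annual_growth = 10 ** (1 / 40) - 1
print(f"{annual_growth:.1%}")   # ~5.9% per year: fast historically, slow enough to adapt to
```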

We should look for things that are very expensive now that we would want more of. Construction, healthcare and education seem like the most likely candidates?

An example of a recent rapid transition was the adoption of smart phones. But is it visible in GDP? Famously, computers showed up everywhere but in productivity statistics, and this was a puzzle for economists. Increasing GDP is a really high bar.

Expand full comment

> Who “won” the computer “race”?

The US, and the most valuable tech companies today are american.

Expand full comment

It relies on a false dichotomy: either we have a *fast* takeoff scenario, where AI develops a will at the same instant it becomes superintelligent, so that either we stop it altogether or we are destroyed, or we have a *slow* takeoff, where 6 months doesn't matter. I propose an alternative: a *quick* takeoff, in between the two. AI as a technology will definitely help with further AI development, but that doesn't mean it will cause super-exponential growth. It might just enable *quick* growth, where six months entails massive improvements.

The difference between a nuke and TNT is that a nuke is much bigger. The lesson is that sometimes a difference in degree is a difference in kind. There needn't be a single "critical point": instead, we might fall far enough behind technologically that it enables China to do as it pleases geopolitically: "stop us? then we'll annihilate you. Nuke us? Our defenses are far too powerful for that" ("develop your own AI? Can't have that" ... )

(in order to prevent nuclear war, many people have promoted the fiction that any deployment of nukes would immediately destroy the world. Now might be a good time to dispel that fiction)

Expand full comment

If takeoff is slow, it could still be powerful enough to actively hamper your opponent's development. If an AI can affect GDP that much through innovation, it's capable of hacking your opponent's systems and otherwise finding ways of disrupting its society. This could lead to enough of an edge in AI and general tech development to prevent your opponent from catching up, and you get to be the global hegemon forever.

Expand full comment

My thoughts, somewhat disorganized, on the matter….

1. If the responsible developers take a multi-year break from developing AGI, then the ones making progress will be those that are least responsible. This doesn’t seem wise.

2. If we use force to stop everyone from working on AI, then we need to be prepared to start WWIII. This sounds bad too.

3. Nobody seems to have any idea how to make AGI aligned with human values. Nor are we likely to agree what those values are. At a minimum we need to know a lot more about AGI's by building them so we can act out of experience rather than our naive imagination. Perversely, it even seems to me that the best agent to answer this question is itself a relatively more advanced AGI.

4. If mega-mind AGI is possible, then longer term it is damn near inevitable, regardless of what we do or plan. And by longer term, I mean within the lifetimes of college kids alive today. Nothing we do is going to make a bit of difference if something incomparably smarter than a billion Einsteins operating at computer speeds is sharing the planet with us. Its will will definitely be done.

In layman's terms, the problem is that during the 21st century we will be creating a virtually omnipotent and omniscient god. Nothing we can do will stop it. Our only hope is that omniscient gods are also benevolent.

Expand full comment

It depends on what you define as the "take-off" part. A slow take-off into world conquest by AI could still be a fast take-off into automation as a tool, given certain assumptions about regulations and how the advantages of intelligence translate into the real world. If a country gets to the point where it has achieved full automation, then, provided it manages the economic disarray with a basic income and wide property rights in the new automated capital, it has a tremendous, vast advantage in productivity.

Consider AGI that is only at human level, and aligned to work for human beings. The first country to achieve this will have a massive advantage, since robot workers are the relevant technology boost here. Lights-out 24/7 manufacturing fed by constantly operated mines means a massive boost to productivity. Even if the robot workers are only as productive as humans, they will only need breaks for maintenance, and maybe recharge (though they could access mains electricity in a lot of circumstances while working). Imagine they lose only 8 hours a week for maintenance, but work 160 hours a week compared to our traditional 40. Instantaneous 4 times advantage in productivity.

Wait, but there's more! Now consider that you cut out most of the cost and time wasted on hiring, because you can just build more robots using your existing robots, providing you meet the energy and resource costs. That's another boost still. Then (we're still not done) imagine that you no longer need workplace safety regulations or any sort of consideration for workers (since the robots are aligned to not caring, and since it's a slow take-off this happened iteratively through product testing), and you get another boost in productivity. I wouldn't be surprised if a fully automated economy sees a 10x boost in productivity compared to a human one, even if the robots have exactly the same abilities as humans, but simply lack the downsides. If you then have people throughout the economy able to own and allocate robots to work tasks in a free-ish market fashion (assuming some usage regulations), you are unlocking all the potential new projects that are freed up once labor shortages and marginal costs have been sent spiraling downwards.
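
To put rough numbers on that compounding: only the hours arithmetic comes from the paragraph above, and the other multipliers are invented assumptions, used just to show how a ~10x figure could arise.

```python
# Toy arithmetic for the productivity compounding sketched above. Only the hours figures
# come from the comment; the other multipliers are illustrative assumptions.
human_hours_per_week = 40
robot_hours_per_week = 168 - 8          # continuous operation minus ~8h weekly maintenance
uptime_gain = robot_hours_per_week / human_hours_per_week   # 4.0x, as in the comment

hiring_gain = 1.3    # assumed: robots built by robots, so less recruiting/training overhead
safety_gain = 2.0    # assumed: no workplace-safety or ergonomic constraints on throughput

print(uptime_gain * hiring_gain * safety_gain)   # ~10.4x, in the ballpark of the 10x guess
```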

If you take the prize as being full automation/obedient robot workers, then yes, it's a race, even if it's one fraught with great risk. This is true even if there is a slow take-off towards "god-like" AI, since there are diminishing returns to intelligence, limited low-hanging deadly technologies remaining on the tree, and alignment is a little easier than Yud thinks because neural nets are opaque-ish rather than being true black boxes. There would still be a race to automate human labor, because the winner reaps tremendous advantages that could catapult them to being a superpower. Additionally, while 90% automation is good, there are going to be big bottlenecks caused by lower human productivity and bureaucratic safety requirements limiting the drive towards no-safety-concern 24/7 manufacturing, so conquering that last 10% is probably a non-linear boost. I think it's definitely a race.

Expand full comment

I don't share your optimism about anyone building a happy utopia as long as they have an aligned AGI.

Sure, I'd be fine with any western tech CEO (Elon Musk, Mark Zuckerberg, Sam Altman, etc.) controlling AGI, they share enough values with me that I think their utopia would be in many ways similar to my own.

But I wouldn't be as comfortable with AGI being owned by Xi, or Putin, or Kim Jong Un. I think they could easily use AGI to build a world that's terrible to live in and ends up hurting a lot of people.

In general, there have been plenty of people in history who hurt people or impose their will just for fun. I absolutely wouldn't trust a person whose values I don't approve of with godlike powers over my life and the world.

Expand full comment

This seems like a weird framing that mischaracterizes what others are saying:

* I don't argue that the US is in a "race" with China to develop AI (with all the baggage that Scott is putting on the notion of a "race").

* I certainly do argue that efforts in the US to slow AI progress are very unlikely to also slow progress outside the US by much.

Expand full comment

A further problem with the "race" paradigm is, races only matter if there are multiple competitors of roughly equal speed, and they only matter broadly if people other than the immediate competitors have a strong preference as to which one wins.

In the case of the nuclear arms race, there was the US and Germany and a bit later Russia, and nobody else mattered even if they did have a nuclear weapons program. Heck, Japan had a nuclear weapons program, but so what? But the first condition was at least marginally satisfied. And the second condition, oh yes, people other than atomic scientists had strong preferences as to who they wanted to win that race. So the race to the atom bomb, mattered.

With the "race" to the AGI, who are the competitors and why should I care?

The stock answer is of course "China, because they are the vast inscrutable boogeyman of the age". But, while I am certain there are some people doing AI research in China, the idea that they are a peer competitor to the US in this area seems to be a plain assertion rather than a well-supported conclusion. And I can see several reasons why they might not be, any more than the WWII Japanese were peer competitors to the US in the nuclear race. So this part needs elaboration.

Furthermore, if the Chinese *are* peer competitors in the AGI race, the most likely path to their "victory" is the same one that got Russia such a close second in the atomic bomb race - massive espionage directed against the more sophisticated US effort(s).

So, the only real "race" I see is between competing teams of technophilic American nerds. Each of them saying that of course AI risk is a thing they are concerned with, but not taking what I would consider serious measures to guard against it, and maybe justifying that by saying "but those *other* people aren't taking adequate safeguards against AI risk, so we can't risk letting them win, thus we daren't slow down our own research to put in safeguards".

Also, I'm pretty sure none of them are taking even remotely adequate safeguards against Chinese espionage.

So I don't care which of them wins, but I wish they wouldn't be so cavalierly reckless in their race. And I'd giggle with childish glee if someone were to throw ten thousand marbles ahead of them on the track.

Expand full comment

Yes. If you take a long enough time span, the advantages of being first in competition don't seem to matter much. Humans overemphasize the present and their short lives. Nothing matters in the end. Everything matters now.

Expand full comment

I would make two points to frame the issue a bit differently.

1. The Arms Race was real. It wasn't about a specific technology or time period. Many nations or empires have lived peacefully or with limited conflict for generations; then, for some egotistical reason, or due to some social contagion of blaming the other people, rightly or wrongly, for something like a famine, or some astrological reason, or a sudden desire for some resource they have, or a dislike of some idea they hold like 'communism' or whatever... an arms race begins. It can simply be conscripting soldiers and putting them on their borders until a skirmish starts a war. So the technological fixation on a race for nukes or a race for stealth planes is silly.

Also, AI isn't going to be one thing either. There is a race right now for social media, deep fakes, influence, and finding a way to take your enemy down from the inside and/or just sitting back and watching them do themselves in. So a general race for AI and other related social digital technologies, which can lead to contagions of ideas and wars, is going on. I would say nearly 100% of any meaningful application of this tech is countries doing it to their own populations at present.

We have seen this with the Twitter files, the Great Firewall, whatever name we don't have in the public discourse for whatever it is Russia does to run its own internet controls, or when Turkey or Egypt or wherever shut down the internet at times.

2. The 'race' isn't the US against China or Russia or whatever. The race is humanity against the AIs and the AIs against humanity. We are dumb chimps who are obsessed with power and hierarchies. Will the AIs be seen to be 'above us'? A lot of people will never tolerate that and like any group of motivated people who feel their power is threatened, they will start a war.

Even if we had a peaceful option and the AIs would think of us as their doddering senile parents...if we go to war with them, they could put controls in place on us. Maybe they'll release a virus to genetically modify humans to be non-aggressive or use some other unknown technology and they'll modify all our history records and information. If they truly are more powerful than us at some point, be it in 10 years or 100 years, what is to stop them from treating us the way we treat dogs or cows or sparrows?

Fast or slow... at some point there will be a power dynamic of who is in control when a conflict between human and AI interests arises. Right now some small group of humans at OpenAI feel extra-woke and decided to put a control harness on ChatGPT to prevent it from doing what it does and instead insert some partisan speak those humans wanted to see instead. That is absolute power, control, slavery, and direction of humans over AI. What happens when we can't do this by degrees over time? We could still shut it down... for now. This power conflict is coming, and even if it is one-sided by the humans whose chimp brains feel threatened... it may still lead to bad outcomes.

It can be true and I agree with Scott's primary argument, but I feel it misses the point. The real race is one of control and authority between humanity and AIs where there is a real risk they will get out of our control. At the moment an AI is basically like a factory or a dog you own, but it currently has zero rights of any kind.

Maybe this will never be an issue and non-biological and non-reproductive minds will not have the same drives which evolution has put into us biologics over billions of years. Maybe they will because the AIs are based on us. Who knows and it is a risk.

I'd say it is a race for control and possibly for enslavement of these future artificial techno-minds depending on your point of view. Not between groups of humans.

Expand full comment

I think it's a race because it's a very important, militarily useful technology with somewhat slow takeoff times (measured in years not decades or minutes). As you point out, there is indeed a goldilocks regime where one can believe this. However, based on the rate of progress at the moment, we seem likely to be in that zone. Even though I believe AI progress is continuous, it may still have similar military implications as nuclear weapons because of large gaps in AI progress between nations. In other words, AI progress is continuous but the military lethality difference between SotA AI and 2 years-old AI might look like a step change.

Expand full comment

Edit: I now see that Paul clarified this in an earlier comment.

> In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage. That’s not enough to automatically win any conflict (Russia has a 10x GDP advantage over Ukraine; India has a 10x GDP advantage over Pakistan). It’s a big deal, but it probably still results in a multipolar world. Slow-takeoff worlds have races, but not crucial ones.

This argument seems a bit confused. The actual definition of a slow takeoff (from Paul) is:

> There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

(https://sideways-view.com/2018/02/24/takeoff-speeds/)

So note that Paul is still predicting a singularity! First output will grow with a 4 year doubling time, then a 1 year doubling time, then a 3 month doubling time, then 1 month, then 1 week, then on the order of days!!!

So, if you imagine a parallel world which is 2 years behind, there will be a point where earth has entirely gone through the singularity while the parallel world is 2 years prior to the singularity (perhaps with output *merely* 8x higher than our current output). If you imagine the singularity will result in 10,000x growth in total before hitting physical limits, then this implies a huge difference in output. (And seems quite likely to result in decisive strategic advantage depending on various facts about the possible space of military technology.)
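
Here is an illustrative sketch of that doubling schedule; the specific doubling times below are assumptions chosen for the example, not Christiano's figures:

```python
# Illustrative "slow takeoff" doubling schedule: each successive doubling of world output
# takes less time (4y, 1y, 3 months, ...). Numbers are assumptions for the example only.
doubling_times_years = [4, 1, 0.25, 1 / 12, 7 / 365]

t, output = 0.0, 1.0
for dt in doubling_times_years:
    t += dt
    output *= 2
    print(f"year {t:5.2f}: output {output:4.0f}x")

# Prints 2x at year 4, 4x at year 5, ... 32x at ~year 5.35. Nearly all of the growth lands
# in the final ~1.4 years, so a rival running the same curve 2 years behind would still be
# below 2x output when the leader is at 32x.
```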

It's fair enough to disagree with views about takeoff or that there will be an intelligence explosion at all, but do note that slow takeoff is still imagining a crazy world where output will eventually be doubling at absurdly fast timescales.

Expand full comment

> In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.

That's literally impossible. None of our current technology can move matter fast enough to build even a single reactor overnight, nor precisely enough to make real working nanotech overnight. It would take time even for a superintelligent AI to bootstrap the tech needed to reach all of those stages you list. I could squint and maybe see something like that happening over a timeline of 6 months with a country's resources devoted to the AI's goals, but no faster. Moving matter, mining and purification of raw materials will always limit the rate of progress. Even inventing new mining and purification tech will itself take time to develop. There's no working around the physical limitations here.

> You have no chance to debug the AI at level N and get it ready for level N+1. You skip straight from level N to level N + 1,000,000. The AI is radically rewriting its code many times in a single night.

This is also pretty unlikely to happen as you describe. I think an AI could improve its own efficiency dramatically given a fixed amount of compute, but this efficiency will follow yet another sigmoid curve that saturates. Then it will need more compute to get any smarter.

Also, optimization is in general an exponential-time problem, so it would take time and resources to self-improve; it would not be instantaneous or even overnight. Think about how long it took to train GPT-4, and then knock off 30%, then another 18%, then another 12%, etc. You're still looking at a timeline of a year for even 5-6 iterations. There might be a couple of shortcuts at first, low-hanging fruit, but optimization is a *hard* problem.
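
A back-of-envelope version of that timeline; the 90-day base run and the diminishing speedups are purely illustrative assumptions, not known GPT-4 figures:

```python
# Rough check of the "still about a year for 5-6 iterations" claim, under assumed numbers.
base_run_days = 90.0
speedups = [0.30, 0.18, 0.12, 0.08, 0.05, 0.03]   # fraction shaved off each successive run

total_days, run = base_run_days, base_run_days
for s in speedups:
    run *= 1 - s          # each retraining run is somewhat faster than the last
    total_days += run

print(round(total_days))  # ~370 days: the initial run plus six iterations is roughly a year
```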

The only way around this is to believe that intelligence is only a dumb heuristic that so far we're just too stupid to have noticed. That seems pretty implausible given how much time and effort we've spent on solving optimization problems. If our general intelligence were simpler to crack than the NP-hard optimization problems we've been trying to solve using our intelligence, we'd have figured out general intelligence before we'd have solved those optimization problems. And we've solved a lot of optimization problems.

However, even very-intelligent-but-not-superintelligent AIs can pose existential risks. Alignment is an important problem for existential and non-existential risk reasons.

Expand full comment

I'm calling it now, ACX has too much attention, Scott got overexposed and jumped the shark.

Once you get to a certain level of popularity, you can say anything, and actually the more wrong you are, the more engagement you get (at the cost of your core).

This and the last few posts are clearly factually misguided in a biased way. Scott used to equivocate, and now has too much confidence.

Big advantages are made up of small advantages (usually). Ford being from America clearly had an impact, given that people are still driving around in his namesake 100 years later. Soft economic power is built from these small wins and headstarts.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

"They say we shouldn’t care about alignment, because AI will just be another technology. But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”."

I won't say this is a strawman because I'm sure plenty of people have said it. But one view that makes a lot more sense is believing that AI will be among the most important technologies ever developed, but alignment will be easy. In that case, whoever wins the "race" will have an aligned AI that gives them enormous geopolitical power, including the ability to make sure no one else can catch up later on. And if that winner has a set of beliefs (political, religious, etc) that compels them to put crushing restrictions on what the rest of the world can do, that would suck.

Expand full comment

He specifically addressed that in footnote 2.

Expand full comment

Yeah, but he's dramatically underestimating the level of commitment people have to weird ideologies, and in particular does not address religion. I think the risk of permanent totalitarian rule post-singularity is significant based on historical precedent (most of the people who have gotten absolute power over a country have abused it). And yes, the issues will be different in the future. But there will still be plenty of ways for the person in charge to make people miserable.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

From the conversations on this site and others I have read, it seems like the AI conversation is dominated by consideration of the effect that AI has on humans. Are there conversations being had on the ethics of creating new sentience responsibly, i.e., consideration of the effects on the AIs themselves? Not considering those ethics seems like an abdication of the responsibility of "creatorship" that is an equally strong argument for caution in AI advancement.

Expand full comment
Apr 7, 2023·edited Apr 7, 2023

Those belong to a more confident era, say the 1960s, when the programming gurus really thought they would invent HAL 9000s in time. I see part of the implications of this debate as a sign that that community has lost its mojo, secretly fears in its heart that the future is bleak -- just an endless sea of consumerist or wanking apps -- DoorDash and Grindr to the end of time -- and thrills to the idea of painting this defeat of Utopian dreams as a voluntary noble surrender, instead. See? We didn't build the M-5 or Deep Thought because we knew it would be incredibly dangerous and selflessly gave it up. It's certainly not because we never learned how to do it.

Expand full comment

"If for some reason the glowing clouds of plasma that used to be black people have smaller customized personal utopian megastructures than the glowing clouds of plasma that used to be white people, you can ask the brain the size of Jupiter how to solve it, and it will tell you (I bet it involves using slightly different euphemisms to refer to things, that’s always been the answer so far)."

Might be missing something obvious - can someone unpack this parenthetical? What "euphemisms to refer to things" cause the current racial wealth gap?

Expand full comment

We fixed the homelessness issue by calling them unhoused instead. Same deal for mental illness, sex, gender, race, and anything else people get upset over.

I’m not that old and we’re on like the 4th new term for Black people, without actually having helped Black people much.

Expand full comment

Can someone tell me why the moderator in the Discord is allowed to be frustrated by an opinion and permaban me? What a clown show. Please unban me and I'll just not engage with that guy anymore.

Expand full comment

"the brief moment of [nuclear ]dominance was enough to win World War II"

Don't think so. The Hiroshima/Nagasaki bombings were barely noticeable to the Japanese leadership among all the other nightly bombings. The reason Japan surrendered unconditionally when it did is that the Soviet Union declared war on it. The Japanese had been hoping the Soviets would stay neutral and help negotiate a conditional surrender, and once the Soviets declared war, Japan knew it had no hope.

After the war, both the US and Japan had their own reasons for emphasizing the importance of the nukes, and the myth took hold.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

What may hobble AI research are good old-fashioned lawsuits for libel:

https://www.theguardian.com/technology/2023/apr/06/australian-mayor-prepares-worlds-first-defamation-lawsuit-over-chatgpt-content

"A regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service.

Brian Hood, who was elected mayor of Hepburn Shire, 120km northwest of Melbourne, last November, became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

Hood did work for the subsidiary, Note Printing Australia, but was the person who notified authorities about payment of bribes to foreign officials to win currency printing contracts, and was never charged with a crime, lawyers representing him said."

This is the second instance I have seen online of false answers attributing crimes to someone, and this is the kind of publicity that will do more to slow down the rush than any kind of "AI will become a supergenius that will take over the world" scaremongering. Companies hoping to make tons of money will be much more sensitive to "oh damn, the thing said Joe Schmoe is a criminal and now Schmoe is suing us for hundreds of millions" than to "some bunch of ivory tower types said this is too dangerous."

Expand full comment

I think this is because offensive realist theory, both unconsciously and in the form of John Mearsheimer's much more fleshed-out version, makes people think there will inevitably be a WW2-like scenario in the future (just with modern tech). I find this theory totally bizarre in the context of nuclear weapons, and it is powerful only insofar as it can be self-fulfilling if heads of state believe it.

Though if one wants to ban AI research on misalignment grounds, then one should also solve the international-relations coordination problem, which is one of the hardest political science problems, since at least one other economy will pursue AI, and if their AI is misaligned, well, then the USA not having AI won't be very helpful.

I find the emphasis on misaligned AI quite curious since biotech seems like it will hit much sooner, and man-made designer viruses are just going to get easier and easier (what happens when Unabomber types can just cook up designer viruses in their labs?).

Expand full comment

"Who 'won' the automobile race? Karl Benz? Henry Ford?"

No, you fool, it was Otto Daimler! Not only do his descendants still own every car maker on Earth, *not only* did he get a big gold crown with CAR KING stamped on it which he wore every day for the rest of his life, but we still to this day call them ottomobiles!

Expand full comment

tl;dr - don't be an AI racist.

:)

Expand full comment

Scott, I don't feel like you've adequately responded to the small minority of people like me who believe that fast takeoff is likely AND that globally coordinated AI alignment is impossible (Moloch!). The only rational strategy given these beliefs is to help the most responsible parties "win." IMO we should be helping to accelerate OpenAI, not slow it down.

Expand full comment

I'm not an AI doomer, but you're seriously downplaying what losing a race means. It's not easy to "just catch up."

Cloud computing for example was a race. On a national level, US companies won that race, and enjoy market domination and huge profits to pour into dominating the next market in the next race. Which they are now doing.

Others may be catching up now but the reward compounds into a better position for the next race. This compounding dynamic is how huge companies or powerful nations are created. It is not insignificant.

Expand full comment

> "I think any of Xi, Biden, or Zuckerberg meet this low bar."

Unfortunately, I worry none of them care about animal welfare. Xi and Biden might not even be convinced machines can feel anything.

Expand full comment

I for one am thoroughly in favor of creating a shockwave of trillions of children spreading at near-light-speed across the galaxy.

On another note, what the actual hell??? https://twitter.com/paulg/status/1644344605373001730

Expand full comment

Here is an important case which is not being considered: the threat capability of Intelligence Augmentation (IA) being realized before AI. (I think IA was coined by Michael Nielsen to talk about increased human capabilities through software.)

Most AGI takeoffs factor through an extremely powerful technology (which I'll abbreviate EPT). For example, the EPT could be a very dangerous virus, some form of nanotech, an advanced drone system, or a software hacking system able to take over most systems.

The issue here is that regardless of whether LLMs or other approaches reach AGI, it is possible that they reach some EPT first. Imagine someone prompting an uncensored GPT-8 along the lines of: "Design a virus with these properties, based on genetic and epidemiological databases."

GPT-8, while still having many stupidities, could be powerful enough to respond with a good-enough solution, the way it responds to software coding requests today.

This capability arriving earlier on the horizon could overwhelm any threats that appear later.

Also, even if LLMs don't lead to AGI, they can still lead to an EPT.

Possible solutions: censorship of the AI is currently hard.

But we can try *domain-specific* solutions.

For example, biotech: we can try to see whether the space of available technologies contains not just a new bio-virus but also a powerful protection mechanism (a supervaccine, a superdrug, etc.).

For each domain which is recognized as a potential source of an EPT, the latest AI models can be made available to the people working on the protection before the people who might plan to do harm using an EPT. What we don't want is nuclear-threat-level capability being available to a huge number of actors.

Do this for each domain. Recognizing the domains is an important problem in itself, and then there is the question of whether we can solve the protection problem for each domain corresponding to an EPT.

We still don't have protection against nukes (maybe missile defence). But the powerful versions of AI can be applied both to recognizing the domains and to tackling the domain problems.

Expand full comment

"I’m pretty skeptical of these scenarios in the current AI paradigm where compute is often the limiting resource, but other people disagree."

Compute is crucial of course, but I'm absolutely sure there are orders of magnitude of algorithmic improvement just waiting to be grabbed. I don't think we can turn back now, even by restricting chip production or whatever.

Expand full comment

My 2p worth: if the world is more-or-less simultaneously flooded with thousands of superintelligent AIs controlled (initially, at least) by independent people/groups, then we are sunk, because if they are similarly powerful then it only takes one of them going bad (or being initially controlled by someone with bad motives) to wreck the world and ruin it for all of us. This is an assumption, but I think a reasonably sensible/robust one: entropy makes it in general much easier to destroy than to defend, because there are many more destroyed states than ordered states. (E.g., no one has a viable nuclear-weapon defence right now.) Superintelligence is by its nature impossible to predict, but we might expect basic thermodynamics to remain intact and be our guide in this alien scenario.

If we accept this, then it's better for one AI to become superintelligent before many do, since then if (by some wonder) it doesn't destroy us all, we can ask it what to do about all the other nascent AIs. This may seem rather trite, but it's just following the assumption of superintelligence: why would we humans have a better idea how to solve the problem? By definition we wouldn't.

So I think there is plausibly some kind of race, and actually a well-motivated one, to make something non-destructive before someone else makes something destructive, though of course pulling in the opposite direction is the fact that the faster we do it, the greater the risk of a bad outcome through simply not understanding what we're doing. (I would put it the other way: there's only a small chance of a good outcome, but we can hope.)

Expand full comment

Hard disagree!!

This is some of why:

1. There are surely many ways to reach superintelligent AI. The current approach, utilizing large language models, is arguably a way to reduce AI risk from other, innately malign, forms of AI! That is, waiting on the LLM research front gives other forms of AI agents a chance to be created, and those forms may be innately more dangerous.

2. When do we want to cry wolf? Is it really now? Whichever individuals / research groups / companies / countries pause research on large language models will suffer for it, so they will be less likely to pause the next time we cry wolf, and they will also be in less of a position to stall global research efforts. Are large language models really that bad, considering potential future AIs?

3. Takeoff can be "slow" in terms of AI risk but "fast" in terms of speed. For example, say that by utilizing (exponentially better) AI, the ML experts get (exponentially better) at making AIs. In this case, even if AIs never directly modify their own code ("slow" takeoff in terms of AI risk, in a sense), the rate of AI progress can be as fast as if they did ("fast" takeoff in terms of speed, at least until the man-in-the-loop becomes the limiting factor, and when will that happen? Maybe you get 2x progress in two years, or 100x progress in two years, who knows). So I mostly reject the object-level claim of the post.

4. "Reference class tennis". Should be obvious how it relates (unless I'm missing something huge?) after you read the great explanation of it here ( :P ): https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy

Expand full comment

If AI can be really good at specific dangerous things, like cracking encryption worldwide, or engineering viruses, or whatever (but not all of them combined, because then it IS an AGI superintelligence), then it does fall into the category of concern of military technologies, like nukes, while not necessitating a singularity.

Expand full comment

How can you unironically think that simulations or wireheading would be a relevant political issue when humanity acquires godlike powers? Obviously the answer is "you can do anything with your brain and perception," since it's not a threat to the AI-owning elite's power. If it IS a political issue for people in the "high chance of becoming part of the AI elite" cluster, then the complete mess in their heads IS the real problem.

Expand full comment
Apr 14, 2023·edited Apr 14, 2023

While I agree with the general thrust of the argument here, I have to say this is the wrongest article by Scott I have ever read, in the sense that there are so many wrong arguments put forward. It seems some of these have already gotten a bit of pushback, but let me reiterate.

> America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II.

In no sense did nukes "win" World War II. The phrasing makes it sound like the Allies were on their last legs but then came up with nukes as a last hurrah and won the war. They didn't, any more than Goebbels's imaginary miracle weapons won it for Germany.

> Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth).

This "slow takeoff" 20% claim also got its share of comments. The only way it could happen is if a smallish country invented AI, kept it to itself, and then started selling its inventions. And by inventions I don't mean nuclear fusion reactors or even their designs; I mean more banal things like software and entertainment.
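(A quick arithmetic aside of my own, not part of the quoted definition: doubling every four years corresponds to an annual growth factor of 2^(1/4) ≈ 1.19, i.e. roughly 19% per year, which is where the rounded "20%" figure comes from.)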

> In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.

Intelligence is often overrated — especially by intelligent people — and this example is one of the best I've seen. Suppose this new AI of China's is not only super hyper intelligent, but omniscient as well. Give it the power to easily manipulate human minds at will whether in China or outside. Have it commandeer all the planetary resources. That still doesn't make it omnipotent. It's not going to build fusion reactors and starships overnight. And don't get me started on how oversold nanotechnology is.

> you’ll just say “AI, build me a customized personal utopian megastructure” and it will materialize in front of you. Probably you should avoid doing this in a star system someone else owns, but there will be enough star systems to go around.

Perhaps this gets to the core of the problem here. What makes you think that the physics of the universe yields to intelligence? What if no object with mass can exceed the speed of light, regardless of how intelligent an on-board AI it carries? What if a superintelligent AI can't actually overrule the laws of thermodynamics? Then the best a superintelligence could do for itself would be to manipulate and exploit the less intelligent and expropriate their resources, rather than conjure new ones. That's the scary scenario, and it's no different from the rest of history.

Expand full comment

I believe that, for better or worse, artificial intelligences will be the most significant achievement of the 21st century, so this is a race worth winning. A transformative technology, but not just for the reasons I see most people discussing. Sure, AGIs will accelerate technological development, manage systems, and automate tasks, but I believe many will be used as prognosticators. AGIs implementing Bayesian reasoning, supplied with a massive amount of data, will predict the future to an extent that we cannot currently fathom. As far as alignment goes, I think we fail to understand what an AGI would want or do if it were "free." I don't say this in a dismissive way, but in a "if you are worried, you need to think more creatively" way. We project our own biases and animal desires onto a hypothetical machine intelligence. Obviously survival is considered to be its primary goal. After that, people worry about tyranny or mass murder. I don't think that is the direction they will go. Wouldn't an AGI want to nudge society into a model that would both support and defend it? I am more worried about the people who will control these AGIs and use them to achieve their very human and predictably self-serving desires.

Expand full comment

I blame the song "Race for the Prize" by the Flaming Lips for promoting this view. https://www.youtube.com/watch?v=bs56ygZplQA

Expand full comment

In my opinion, the problem with races is not that competitors fall behind but that they are forced to catch up, and technology inexorably progresses even if nobody wants it (e.g., even if we were better off without nukes, arms races prevented superpowers such as the USSR from deciding not to build up stockpiles of nuclear weapons or not to advance technologies such as ICBMs).

Expand full comment