Theoretically, one could think that there's an AI race because (1) AI is a military technology and (2) there are naturally races for military technologies. I think this is mostly wrong, but belief in it partially explains why people often say that the US is in an AI race with China.
And how does that square with the actual race, for better chips and faster computers, which the West is currently dominating and will continue to dominate?
Any scenario where Taiwan falls to the CCP will involve targeted strikes against the fabs to ensure China can't actually capture them; they are massive and fragile. The know-how for the cutting edge in computation is monopolized by the West, and China's best efforts to steal Western IP are still very far behind.
If AI does turn out to be a military technology, then (1) we're very far ahead, and (2) that really has nothing to do with the sort of AI races people, including Scott here, talk about.
To the contrary, I've seen many people say that militaries are hopelessly far behind on AI. Oftentimes they state that the sort of people who can build massive multi-billion-parameter LLMs are a tiny pool of extremely niche talent, and that they don't tend to work for the military. Combine that with the massive resources required, which are basically only available to organizations on the Microsoft/Google/Facebook scale (or so the argument goes), and they are at a distinct disadvantage. Would be interested in hearing more on this.
Besides, the current big models require a lot more brute force than they do niche talent. I don't think you need a room full of geniuses to make GPT-4, just a solid engineering team, access to arXiv, and a buttload of GPUs.
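To make that concrete (and only as a sketch): the core next-token training recipe fits in a page of public PyTorch code. The toy model, hyperparameters, and random stand-in data below are invented purely for illustration; the distance between this and GPT-4 is essentially data, engineering, and a buttload of GPUs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    """A deliberately tiny decoder-style language model (toy scale)."""
    def __init__(self, vocab_size=256, d_model=128, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        B, T = idx.shape
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        causal = torch.triu(torch.full((T, T), float("-inf"), device=idx.device), diagonal=1)
        x = self.blocks(x, mask=causal)      # causal self-attention
        return self.head(x)                  # next-token logits

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
data = torch.randint(0, 256, (64, 65))       # random stand-in for real tokenized text
for step in range(100):                      # real runs: vastly more steps, data, and GPUs
    xb, yb = data[:, :-1], data[:, 1:]       # predict each next token
    loss = F.cross_entropy(model(xb).reshape(-1, 256), yb.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()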
The Pentagon won't come knocking. The Pentagon isn't allowed to go knocking. The Pentagon will put out a contract, which will be won by Raytheon or some other company that knows how to fill in the correct forms required to win a Pentagon contract.
Raytheon will proceed to hire a bunch of perfectly nice and smart US citizens who are willing to get security clearances and work in secure zones in some big building in the DC suburbs for $175K a year, which is a nice salary for anyone who isn't actually an AI engineer. Because they are smart and hard-working, they will read the TensorFlow manual carefully and then some research papers, and will eventually manage to produce something that looks like a Pentagon-friendly LLM. It will be demonstrated to a bunch of generals and admirals, who will nod their heads approvingly. Eventually it will find a niche use somewhere and everyone will congratulate themselves on a job well done.
This is not how Raytheon works, according to people I've known in the defense industry.
1. Dealing with the military requires a lot of specialized skills and the right kind of contacts. Plus, you need to be big enough that the military trusts you to be around in 30 years, and you need to be good at dealing with the bureaucracy of any enormous organization.
2. But Raytheon also knows that they don't own all of the useful tech in the world.
So what apparently happens is that small, innovative companies who want to sell to the US military usually end up partnering with a well-known defense contractor. The "right" people get their cut, the military gets to work with a large organization it trusts, and the smaller company only has to sit through a fraction of the endless interminable meetings with stakeholders.
In short, the military is perfectly capable of acquiring virtually any kind of cutting-edge technical talent it needs, and it has done so for longer than Silicon Valley has existed. When it needs to badly enough, it can often even push beyond the civilian cutting edge.
So the world as we know it will be changing, and the entire defence establishment as we know it will be cartoonishly dumb enough to do nothing about this and allow itself to be disempowered?
And what's stopping Raytheon from simply buying or contracting an AI company and using that to get a fat juicy Pentagon contract?
Yeah, those are good points. But I disagree with several of his assumptions about "fast takeoff", "slow takeoff", and alignment.
Things are already so complex that nobody understands them. It's not just the tax code, it's just about all of society. We take a shallow view of it, and that usually works. Except when it doesn't. Usually we can deal with it when it doesn't.
Now imagine an AI with an IQ (meaningless term) of about 90. But it knows everything that's been written in law or judicial proceedings, and it doesn't get tired, and it thinks QUICKLY. It had better be aligned, or we're in trouble. It doesn't need to be super-smart.
Super technologies aren't needed for this problem, and most imagined ones are actual impossibilities, so no AI is going to make them real. But lots of things are possible, and actual, and just aren't well controlled. E.g. super-sized nanobots exist, but controlling them is so difficult they're rarely used. But an AI could control them.
You want to exterminate all insects greater than 1/4 inch in size? DONE. But now all the birds and plants that depend on pollinators die. (Well, actually I'm not sure it could get all the insects that live in the sea, because of communication problems. Think of this as hyperbole.) Note that this is being done by a nominally aligned AI in response to a request from a human, but it's too stupid to foresee the damage it will do. A super-humanly intelligent AI might actually be safer than a stupid one if both were aligned, or even just friendly. (Which is what I prefer. I'd rather have a friendly AI than an aligned one, because people ask for all sorts of things. A friendly one would help if it seemed like a good idea. An aligned one would feel like it really OUGHT to obey that stupid request, even if it were not clearly a good idea.)
Unfortunately, this whole thing hangs up over the problem "How do you decide whether something is a good idea?". So far I don't see any answer better than "children's stories", but they often don't start at a low enough level. Consider "The Cat in the Hat".
This is all very elementary and should be required reading for anyone participating in these discussions, but 1) There's no imposing anything. An aligned AI at t=0 wants to stay aligned at t=1 and indeed does everything it can to prevent itself becoming unaligned. That's how goal systems work, even rewritable ones. The human goal system happens to be particularly messy, but nevertheless an otherwise ethical person doesn't suddenly decide to become a murderer whenever they realize they can get away with it, and will refuse a pill or a potion or an operation that would make them more willing to murder people.
2) This is exactly why alignment is considered a *hard* problem. An ASI should be aligned not with any single entity but rather with the collective volition of humankind (Eliezer calls this CEV, or coherent extrapolated volition). No matter how much individual people disagree on weird trolley problem variants, some fundamental ethical framework is very likely shared by all of humankind, and that is what the ASI should somehow extract and align with. Needless to say, currently we have no idea how to do that.
I think many of our ideas about alignment are quite confused. People change their goals all the time. They also do things that don't align with their own goals, like self-destructive behavior of various kinds. Otherwise ethical people have psychotic episodes or otherwise become murderers regularly.
I suspect all of our talk about alignment at some point confuses rather than clarifies the issues. To expand on Ch Hi's point, I would rather have an imperfectly aligned AI that is able to have some doubt about the correctness of its decisions than a "perfectly" aligned AI that always does exactly what its alignment tells it to do. "Never kill a million people" seems preferable to "only kill a million people if you're sure that's what you're supposed to do," which is likely to go awry at some point.
But personally I think a friendly AI would be harder to 'trust', and it'd be scarier. You could never trust a friend 100%. Perhaps there is a fine balance between friendly and aligned-to-obey.
No. I meant actually friendly. An AI that enjoyed talking with people, and helping them when it seemed like a good idea. Perhaps it would like to put on entertainments of some sort for people to react to.
No. In my use of "Friendly AI" an unfriendly one would be one that either wanted to hurt us or would rather just totally ignore us. It definitely doesn't mean "having aligned goals", just not opposing ones, or at least not opposing those of our goals where opposing them would hurt us.
Note that this is still a really tricky problem. But think of Anna's relation to the King in "The King and I". Friendly, but not subservient. (Though the power balance would be a lot different and a lot less stable.)
Strange that no one worries about winning the AI race between conservatives and liberals :)
AI is a culture weapon. Imagine a future where the best AI is some descendant of OpenAI's, and all other companies just use its API. All children use OpenAI to learn about the world and take as given what OpenAI says or writes. It would be way, way, waaay worse than now - imagine a world where all encyclopedias, TV, and websites are either good quality and woke, or unwoke - but poor quality and crazy.
(Which BTW may be a hilarious answer to Noah Carl's recent essay - that what will save intellectuals' jobs from being killed off will be woke. Woke will save intellectuals from the AI revolution - at least the conservative intellectuals :D :D)
Yeah, sorry, but those were all races. The electricity race was won by Britain, and you saw several countries racing to catch up, like Austria or later Germany. While it eventually evened out, that took decades. The auto race was won by the United States, and the loss was so humiliating that Hitler claimed to have redeemed Germany from the defeat. And the computer race was again won by the United States, with the Soviet Union raising the white flag in the '70s by deciding to steal Western designs instead of continuing to develop their own.
(Also nukes were not a binary technology. Development both of different kinds of bombs and delivery mechanisms continues to this day! And was very intense during the Cold War.)
I get you really want the answer to be something else because this is a point against what you want to be true. But you're abandoning your normally excellent reasoning to do so. The proper answer for your concerns, as I said several threads ago, is to boost AI research by people who agree with you as much as possible. Because being very far ahead in the race means you have space to slow down and do things like safety. This was the case with US nuclear, for example, where being so far ahead meant the US was able to develop safety protocols which it then encouraged (or "encouraged") other people to adopt.
And yes, with nuclear you had Chernobyl. But AI is less likely to result in this scenario because it's more winner take all. We're not going to build hundreds of evenly spaced AIs. The best AI will eat the worse ones. If the US has a super AI that's perfectly aligned and China has an inferior AI that's not well aligned then the aligned AI will (presumably) just win the struggle and humanity will continue on.
I'd be interested in hearing more about your perspective on these examples, especially how to distinguish between "one country was first, but others then caught up" vs. "several countries were racing, one won and others lost, and the winner exploited their victory for geopolitical advantages over the loser, such that the loser really wished they had won".
The advantages only lasted a few decades. And after that everyone ends up having at least decent (if not equal) versions of the same thing. But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years! To vastly oversimplify, the Germans had more horses than trucks while (by WW2 at the latest) the Americans had more trucks than horses. I guess you can say that modern Germans are pretty happy they lost those wars. But the Germans who actually lost them were definitely not happy!
The counter-case here is not "useful technology that didn't lead to dominance." The counter-case here is "technology that didn't have much real use." For example, France was a world leader in electric therapy for nearly a century. Lots of research into how electricity affected medical conditions or moved muscles. A lot of it was fraudulent, with mystics claiming that "electric fields" could affect mood or something. This turned out to not be all that important.
In Wages of Destruction, Tooze attributes the large number of horses in the Wehrmacht to the fact that Germany was just generally less industrialised than Britain or the US in 1939, and to oil shortages.
As I understand it, there are some monopoly dynamics in car manufacturing from the economies of scale. So maybe that created a winner-take-all race under free-market competition, but Germany subsidised car manufacturing.
Didn't Germany's disadvantage in being less mechanised mostly stem from it just generally having a smaller industrial base/less access to oil, rather than having lost some technology development race then?
Seems like it was more of a basic disparity in resources, maybe similar to the US/Taiwan outproducing China in high-end chips.
Yes, having lost the previous race put them at a disadvantage. Nevertheless they tried. Hitler really wanted to mechanize his army. And many German industrialists pushed really hard for cars. This is why the Nazis did the whole Volkswagen debacle, which was unsuccessful until long after the war was over. And if you want to say "hey, it was eventually successful" you need to take into account companies like Adler that just never made it. And that it didn't help the Nazi state very much.
It was a race and the Americans had already won it, such that Ford Germany was one of the most successful German car makers. Only one person got Hitler's vaunted People's Car: Hitler himself. The rest of production was diverted to the war effort but, nevertheless, not used as effectively as the American trucks. The Nazis were not making a rational calculation that horses were better. They were restrained by their technical capability (which does include manufacturing).
If they were constrained by having just generally a smaller economy as a result of having lost many different tech races, and by having less access to resources, it probably wasn't the case that winning one race, the race to make cars, would have made a very large difference to their overall strength, in the way AI-accelerationists want to claim AI will.
The counter here is, for example, Taiwan. Which with its relatively backward economy used the profits of more or less one major race (semiconductor manufacturing) to bootstrap their economy up and strengthen the country in a variety of ways.
Also, if they had been able to win that race it could have proven decisive at numerous key points. Even something as seemingly trivial as material losses during the retreat from Normandy. And while I can't say any individual moment was specifically decisive taken as a whole it might have been.
Hmm. I think there's a pronounced ambiguity here between "Germany did worse because they lost the automobile race" vs. "Germany did worse, *and* lost the automobile race, because of distinctly inferior economic policy and industrial base."
No one disputes that it takes decades to industrialize. The question is, would it have made much difference if Germany's early advantage in auto technology had held on, and they had reached some technical benchmark before the US? It seems very unlikely. The US led the way in autos not because they won a technological race (such technology is easily copied), but because they had a vastly superior economy overall.
Like anything, you can argue it a dozen different ways, but the end conclusion seems to be that the US won with superior technology, but also it got to that superior technology through the usual US methods -- capitalist innovation in a dozen different ways, lots of money piled up in the hands of various monomaniacal individuals, willingness to work with humans of all kinds rather than enforcing various artificial ("race" or other) barriers, etc.
As regards the war, my understanding is that in the end one advantage of US tanks over German, once you stop with the barrel diameters and armor thickness, is that the US tanks could be driven (more or less) like cars, and more generally, that UI and ease of use were a non-neglected part of the design, whereas German tanks were driven with this weird four-lever system, and basically assumed a set of users shaped to match the machines. That's not a great strategy when you start running out of such users; whereas in the US pretty much anyone could drive a tank if necessary, through a combination of everyone knowing how to drive and the better UI.
It's hard to tell fact from fiction especially in war, and *especially* in war movies. But the impression I get from US war movies of recent wars vs such movies from other countries, is that the US personnel are notably more competent with their equipment even when it's the same equipment (eg as donated to Arab allies). This could be points the movies are trying to make, but based on history in earlier wars I suspect it's real. It's not that the non-US users of the equipment are more "stupid"; it's more that the US users
- are superbly trained rather than conscripts AND
- the US users have been using this stuff in one form or another (video games, driving, computers, blah blah) their entire lives, so the military versions are tweaks on skills they already have. Whereas for other countries much of the equipment they're encountering is new to them as of age 18 or so. This is a somewhat subtle, but persistent on-going advantage of "winning" a "race".
And (depending on how and how rapidly this stuff spreads) perhaps we will even see the same in AI, that in 10 years the average US worker will have reasonable competence in how you get value out of an LLM in a way that's perhaps not the case for your average maybe Russian or Chinese (or hell, even European, the way they are so keen to legislate AI) worker.
Just as an aside: during WW1 the Germans had far fewer horses than the allies.
(That's not because they had more trucks. Trucks weren't a big thing back then. They just had fewer horses, partially because they couldn't spare the calories.)
In a way, it parallels the truck situation in WW2: trucks run on gasoline and diesel, which Germany was perpetually short on throughout WW2 due to the British blockade, while horses run on hay and oats, which Germany was likewise perpetually short of in WW1 for very similar reasons.
Both sides did make significant use of trucks in WW1, but they were of much more specialized and limited utility than trucks in WW2. There were fewer and less powerful trucks in 1914-1918, they were more prone to breakdowns, road networks were much worse even before they got torn up by trench networks and week-long heavy artillery barrages, and battlefield truck dispatch was in a horribly primitive and improvisational state both because nobody had really tried it before and because radios were too big and expensive to put in every truck so you could talk to them on the go from a central location.
Where trucks were useful was keeping a static position supplied and reinforced when it was cut off from direct rail routes, or in providing supplemental supply to a mobile offensive to help it eke out its organic supplies a little longer than it would have otherwise. The most notable example of the former was Verdun, whose main connecting railroads had been overrun by Germany in 1914 and 1915, but was supplied in part by truck via the "Sacred Road" during the 1916 battle. This wasn't anywhere near enough to fully supply the front, though, as France also needed to build narrow-gauge rail lines to supplement it. The latter came up in the initial German advance in 1914, where truck supply was critical to the Kaiser's armies reaching as far as they did. I think it was also used by both sides on the Eastern Front, although I'm having trouble finding confirmation right now. Trucks weren't much help in supporting breakthroughs post-1914 in the West because there weren't any significant breakthroughs until 1918, and even then the deep trench networks and the intense artillery barrages necessary to break them tended to create a landscape that was pretty much impossible to drive a primitive early-20th-century truck through without major engineering work first.
Apropos of nothing much, and not directly relevant to your post and probably even less to our host's topic (unless it might one day be some crafty tactic for combating a rogue AGI!), but it's weird how many British military victories have shortly followed a retreat, sometimes headlong and chaotic!
There's the retreat down the Somme (river), preceding Agincourt (1415), the retreat from Quatre Bras before Waterloo (1815), the retreat to the Marne at the start of WW1 in 1914, the retreat from Dunkirk (1940), and probably more if one knew.
Ironically, perhaps the most important battle in British history was one where there was no retreat and we stood firm on top of a hill, but were defeated - at Hastings!
The North Africa Campaign in WW2 is full of similar instances, most notably Operation Crusader (1941) and Second El Alamein (1942). Other examples I can think of are Crecy (1346) and Jutland (1916). It makes sense, since Britain has usually had a relatively small army and an enormous navy and merchant marine in modern times, meaning that they've got to choose their battles, but can readily reinforce and resupply their army just about anywhere near water while their enemy has chased them to the end of the enemy's supply lines. And in the later medieval and renaissance period (particularly Edward III through Henry VIII), England had adopted a land force mix heavy on longbows and cannons, which do particularly well fighting on the tactical defensive on carefully chosen ground. There's probably also an element of cherry-picking and selective memory, given that dramatic turnarounds make better stories than one-sided curbstomps (and there have been no shortage of these in English history, also enabled by naval superiority permitting England to concentrate land forces to fight on ground of their choosing on the offense as well as the defense), and given the sheer number of battles fought by English/British forces over the past thousand years or so.
America's got a somewhat parallel history, given that we've usually had a better navy than anyone we've fought other than Britain. We've also had a similar pattern on the grand strategic level, given that through WW2 our strategic MO tended to feature an element of "Oh, we were supposed to have an army? Could you hold on over there for a year or two while we get one ready?"
"But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70"
Interesting point!
My first reaction on reading Scott's essay was that at the _corporate_ level there is a lot of first-mover advantage (partially from the patent system, partially from the economics of penetrating a market). I had thought that the _national_ first-mover advantage was a lot smaller (except for nukes). Thanks for the correction!
FWIW, Christiano's "slow takeoff", which Scott cites, seems plausible to me.
a) As Scott notes, LLM training is currently compute-limited. I've read claims that the training time for e.g. GPT-3 was several weeks, which implies that even in the _purely_ LLM domain, there currently can't be an overnight self-improvement iteration (rough numbers below).
b) For fusion and nanotech:
In the case of fusion, even if a superintelligent AI somehow knew _exactly_ how to build an optimal fusion reactor (and plasma instabilities are generally discovered, not predicted) it would still take years to a decade to physically build a reactor. Look at the ITER timetable.
In the case of nanotech: yeah, certain _kinds_ of chemical simulations can be done purely via computation. Yes, AlphaFold is impressive. Nonetheless, I _really_ doubt that a nanotech infrastructure can be built without experimental feedback. For instance, if one is building a robot arm that requires a billion placement steps to construct, then even a one-in-a-billion failure rate from thermal vibrations adding to mechanical vibrations from operation of the construction tool _matters_ (see the arithmetic below).
In general, most physical technology improvements run into some unexpected failure and need iterations to fix these failures.
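To put rough numbers on both points (a back-of-envelope sketch only; the ~6 x parameters x tokens FLOP rule of thumb, the assumed cluster size, and the sustained throughput are illustrative assumptions, not measured figures):

# (a) Training-time point: GPT-3-ish numbers (~175B params, ~300B tokens) on a
# hypothetical 1,000-GPU cluster sustaining ~100 TFLOP/s per GPU.
train_flops = 6 * 175e9 * 300e9              # ~3.2e23 FLOPs total (rule-of-thumb estimate)
cluster_flops = 1000 * 100e12                # ~1e17 FLOP/s sustained (assumed)
print(train_flops / cluster_flops / 86400)   # ~36 days: weeks, not an overnight loop

# (b) Nanotech point: even a one-in-a-billion failure chance per placement step,
# over a billion steps, leaves only ~37% odds of a flawless build.
p_flawless = (1 - 1e-9) ** 1e9               # ~exp(-1) ~= 0.37
print(p_flawless)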
I think the concern about overnight self-improvement is mainly that there could be major algorithmic advances that you could apply to an already-trained model, if you could just think of them.
Fair enough. I'm a bit skeptical that there could be such improvements that greatly improve the utility of an already-trained model. Admittedly, _some_ improvements have been demonstrated (e.g. the reflection paper https://arxiv.org/abs/2303.11366). Still, training is a lossy compression step. Whatever information from the training data was lost during training cannot be regained by post-processing the model. A rerun of training is needed if some critical data was lost.
> But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years!
Well, yeah, those consequential wars were a major reason why large parts of Europe were behind in industrialization. Whereas in the US the relevant industries were massively boosted by the war economy (in WW2, mostly), with no fear that the factories would be carpet-bombed to oblivion.
Except the dominance in automobiles started before even WW1 let alone WW2. And the US industrial dominance really began in the 19th century. While no doubt the world wars boosted the American economy the timing doesn't work for it to be the start or greatest portion.
I think the issue here may be scale. From a singularity/fast take off Doomer perspective, everything else looks small. But if you don't think that's likely enough to worry about, then something like "China gets a 40% economic boost" looks huge, and like something policy makers, politicians, and columnists are used to writing/worrying/talking a lot about.
I want to pile on and agree with @erusian here. When I read, "Jobs and Gates got rich, and their hometowns are big tech hubs," that immediately jumped out at me as exactly what winning a tech race looks like: a critical mass of expertise, capital, and institutions in one place that is almost impossible to unravel or catch up with. "Big tech hub" is sort of an understatement in geopolitical terms for what the Bay area and even Seattle are. The NSA can leverage these companies in its ongoing mass surveillance operation, military companies can hire from the same pool of software expertise, and if the US ever got into a major war it would leverage this industry to the hilt. Even off-the-shelf consumer products have direct military applications: Starlink as well as several US-based consumer-facing apps (like Signal) are used on the front lines in the Ukraine war, and whichever country is home turf for these products gets the option of restricting or tacitly encouraging their use in war. https://qz.com/2142453/the-most-popular-wartime-apps-in-russia-and-ukraine-right-now
EDITED TO ADD: also I completely agree that AGI hard takeoff is a huge risk, that the benefits of "winning the race" are not worth this risk if a country can contribute to avoiding it, and that "if we don't China will win" is getting used (as a lot of appeals to national security often are) as a trick to get people to turn off their brains and it's worth unpacking it with posts like these. But I think completely dismissing the idea that tech races can be won—and result in long-term impact—weakens the argument here, because clearly they can and do.
Yes, and it works in reverse. A country of thriving tech hubs is more fun than a country with immiserated slums. When JFK (D) broke US Steel and a couple of other Democratic lawyers broke Bethlehem Steel, they immiserated Gary and Baltimore. When Pat Moynihan and Ralph Nader got through with Detroit, same. When Jimmy Carter's (D) judge broke Bell Labs, it was great for Taiwan's electronics and doom for America's.
Whether you call a technology a race or not, rule by forced fame and states of siege is bad for the ruled. It's not as though US politics is so free of moral panics that we can trust the anti-AI people to have balanced judgement.
I'd agree as well; Erusian makes some great points. We also seem to have set an arbitrary victory condition, where 'led to large-scale empires winning at life' is the only valid form of winning. But why are we stuck thinking of nation states and dominance in war?
Also these broad scale 'wins' are STILL not everywhere. The poor children who are mining cobalt for our devices in the Congo live in terrible conditions and do not have electricity. I'd say the Congo has lost and continues to lose the electricity race.
Silicon Valley won the tech capital race and decades later it is still important. Memphis, Tennessee; Detroit, Michigan; and Portland, Maine did not win this race. The US won the race, and for nearly 50 years a single country with 4% of the world's population controlled the vast majority of major computer technology companies, new inventions, etc.
If that's not winning, then I have no idea what is! A five-plus-decade economic, technological, and political advantage, which will likely continue to some extent for several more decades, is winning 'the race'. And everyone else lost, even if they still benefited and were not super poor in the end. Right now perhaps Austin is winning the intra-country race for new big tech digital technologies in the US, and Memphis and Cincinnati and Philadelphia are STILL NOT WINNING... even if they aren't explicitly losing either, since they have phones, internet, etc.
The nuclear example was a race, but not in the way it has been portrayed. During WW2, the other powers knew about the theory but they focused on more high-certainty projects like RADAR because they needed something practical. The US was able to start and fund the Manhattan project in part because they weren't actively being invaded and were far enough away from the fight. It made more sense for Germany, Russia, Japan, etc. to pour resources into something that they all knew would work, then pour more money into making it better. At the time, nobody knew nuclear would work, so the crisis of the moment held sway in the decision-making process for countries close to the fight.
That said, I don't think it's fair to say that the 1945 version of the nuclear bomb was as much of a quantum leap above the other bombing technologies of the day as it's usually portrayed. The deadliest bombing of WW2 wasn't either of the nuclear runs. It was the firebombing of Tokyo. While estimates of these bombings have large error bars, the Tokyo firebombing (the first one; the US hit Tokyo more than once) was at least more deadly than Hiroshima or Nagasaki, and possibly more deadly than both of them _combined_. Firebombing was horrific, and the only reason the US doesn't self-flagellate about those war crimes is because of how shocking the nuclear bombing was. Obviously nuclear bombs were different from even firebombing. (No longer could you dismiss a single plane as just a reconnaissance flight.) But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau.
After WW2, the real race began. The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets. In other words, part of the reason the Soviets caught up was because they copied the homework of the scientists in the US. The race then was in accumulating a nuclear arsenal, and in finding better delivery mechanisms (eventually landing on ICBMs and other long-range missile tech).
Are there lessons for AI/AGI here? Maybe. We might assume that countries will be less likely to pursue speculative technologies if they are distracted by larger crises. Maybe a war or a financial crisis. So Russia is probably putting less into AI today than they would if they weren't in Ukraine. NATO may also be doing less AI research than they would if they weren't in a proxy war, but then again, they're far enough away from the fighting that this may not slow them down. China ... doesn't appear to have any disincentive and can focus on AI development. If anything, having active fighting somewhere in the world incentivizes non-belligerents to seek advantage in speculative technologies the belligerents don't have time for.
We might also remember that a strategy of "push to be first" will not result in the other party developing their technology independently. Just as Russia stole US nuclear secrets, the most likely outcome in an AI race is that competitors who are behind will absolutely seek to steal your technology in order to catch up. Should you slow down, then? Will that work? If you slow down, your competitor will steal from you until they've caught up, then they will push forward into new domains and you'll have to play catch-up (probably by stealing from them). Would Chinese researchers steal? What about Russian researchers once the Ukrainian 'special military operation' wraps up? What about other actors?
If your competitor is actively building AI MK7 but you're doing alignment on AI MK4, is your alignment even meaningful anymore?
I disagree re: nuclear bombs not being a big leap. Total damage in one raid probably isn't the right metric.
The firebombing of Tokyo was probably approaching the limit of effectiveness of firebombing. The nuking of Hiroshima was near the floor of the effectiveness of nukes (one bomb, one city).
Assuming 15 nukes, the US could level 15 cities in one night. That isn't possible in a single night with a firebombing campaign. Tokyo was also quite susceptible to fire, with lots of wooden buildings, but nukes could threaten any city or even hardened targets.
No one is afraid of pissing off a firebombing power in the same way they are of pissing off a nuclear power.
But they didn't go from having 0 nukes to having 15 nukes overnight. Manufacturing fissile materials is hard to do at scale and it took a couple years to get from 0 to 15. Nukes did not represent a large overnight jump in the Allies' overall capacity for destroying cities. As sclmlw explained, the new technology put things on a new exponential trend for this capacity.
This isn't to say they weren't very valuable, even in small numbers, and Japan did not know what the Allies' capacity actually was or how quickly it could change. But "number of cities destroyed in one night" is not the right metric for describing what changed between June 1945 and August 1945.
Exactly! I was going to make this same point. Looking only at 1945, the US spent their entire nuclear arsenal in Japan. So despite the fact that the technology had been deployed, there was technically a brief period after Nagasaki when the global nuclear arsenal was zero. It took time to build out manufacturing capacity.
And arguably, it wasn't the nuclear strikes that tipped the scale for Japanese leadership to surrender so much as the threat of Russia entering the Pacific war and doing to the Japanese what they'd done to the Germans. There was a lot of bad blood there, and the Russians were looking for an excuse to get retribution for their losses the last time they'd fought the Japanese. Nukes were at least a good public excuse for the decision, though.
The change was the potential destruction. Japan didn't know what it was, but they knew the limit could now be orders of magnitude more than firebombing, and they had to respect that potential.
I don't think that's what forced the surrender. Japanese senior leadership knew it had lost the war long before that point. The purpose of continuing the fight wasn't because they thought they could win. They were still fighting to preserve the leadership structure after the war. That's why they kept coming back to the unconditional surrender edict and asking for at least a guarantee that Hirohito would stay in power. They didn't want their emperor to be tried as a war criminal with the Nazis, and repeatedly asked the US for assurances that the emperor would be spared. The US hinted that they would be magnanimous in victory, but weren't willing to give out guarantees, because that violated the 'unconditional surrender' edict. The thing that changed wasn't the bomb. It was the threat of an even worse conqueror, the Russians, ending all doubt as to whether the emperor would be left in power.
EDIT: The strategy worked, too. Hirohito died of natural causes as leader of his country in the late '80s.
I don't disagree that eventually nuclear weapons became the bigger threat. After the war, the US didn't keep around their fleet of > 150 aircraft carriers, because it didn't make sense to, but they did build out their nuclear arsenal. My point was that the momentary illusion of a massive advantage to whoever had nuclear capacity wasn't what conferred the real advantage in 1945. Scott was looking at the nuclear race and saying that 1945 was the END of the race to acquire nuclear capacity. I'd argue that nuclear capacity truly only BEGAN in 1945, given that the transformative aspects of the technology couldn't be realized until yield and delivery mechanics were more fully fleshed out.
"But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau."
Then why did Japan surrender after Hiroshima and Nagasaki, but not after the firebombing of Tokyo?
See upthread. They surrendered because the Russians were pivoting from their fight in Germany and declared war on Japan. They wanted a reprise of their last humiliating war with Japan, and they wanted it to hurt. Hirohito and his advisors kept asking for assurances from the US that they would leave the emperor in power. The US was vague, because Roosevelt had declared unconditional surrender, but hinted that they'd be generous.
In short, the Japanese leadership wanted assurances they didn't get from the US. They surrendered because they got assurances of a different kind that if they kept fighting until the Russians arrived there would be no emperor after the war. Hirohito died as emperor of Japan in the 80's, so I guess their strategy worked. Lots of Japanese people died after the war was no longer in doubt, not knowing they were literally dying for the fate of their emperor alone.
I'm not saying that dropping the bombs didn't contribute to the decision of the Japanese to end the war. I'm saying that the conventional wisdom that it alone was sufficient to convince them to end the war is unknowable because the Russian question loomed large (larger?) in the discussions among Japanese senior leadership about ending the war.
If you think of it as the Japanese not surrendering because they believed their own propaganda that they could still somehow pull off a win, I guess I could see why you'd also think they couldn't bring themselves to surrender until the US made it REALLY CLEAR they had lost. I don't think they were that stupid. I think it's clear from the accounts we now have that the leaders were most worried about their own skins, and near the end they made war decisions based on considerations of personal survival and the survival of the emperor.
The other problem with the "nukes were irrelevant in WWII" argument is that it assumes things had to proceed along that timetable. Yes, by that time the Allies had beaten Japan. But if the bomb had been manufactured six months or a year earlier, or if the war hadn't gone as successfully for the Allies as it did up until that point...
Why does Japan care so much that Russia declared war on it? Sure, losing Manchuria would have been humiliating, but not worse-than-unconditional-surrender humiliating. And the allied blockade had already mostly isolated the Japanese economy from the resources of their overseas empire.
Meanwhile, Russia is not a naval power. They did not have a Pacific Fleet worth mentioning, and they did not have the industrial capability to build a Pacific Fleet in any relevant timescale. They had only a handful of aircraft capable of striking Japan from Russian bases. A Russian invasion of Japan was not a realistic threat.
I suppose the United States could have loaded Russian soldiers onto American landing craft in a gesture of solidarity, but the limiting factor there is the American landing craft, and there's no shortage of pissed-off Americans, Brits, Aussies, Chinese, Dutch East Indiamen, etc, to fill them.
It was the perception of the elite in Japan that mattered. Russia opening up a second front against Japan was big, and it coincided with their German and Italian allies having lost their wars. This wasn't a near-term strategic moment where we can measure Russian naval capacity in the Pacific as particularly relevant.
It was a turn in the war overall, and Russia making moves to attack Japan made it simply impossible for the Japanese to ever win and become the expanding mainland empire they wished to be.
It is also quite true that the Russians won the European theatre of the war, with their soldiers mattering as much as or more than US efforts. So even the loss of Japan's allies, and the increased pressure Russia could put on the Japanese once it had won the European war, still amounted to Russia creating new pressures on Japan.
The nukes both did and didn't matter and Japan was always going to lose even if the Russians didn't join and the US had to take 6 more months to devastate their country. But the combination of Russia and the US being able to more singularly focus on Japan when the European war was won probably contributed more to Japanese leadership's thinking than the nukes alone did.
We also don't need to guess anymore or theorise like was done for decades. The narrative that the nukes were not that important came out of Japan when documents from the time were released in recent decades, long after the war was over. So the cottage industry, the useful US propaganda messages, and the history-book conclusions which developed and became entrenched over decades are just intellectual inertia in the West.
Japan's own records show it was a mixed bag, and they were indeed afraid of a new Russian front, which dominated their conversations at the time they surrendered. The Japanese also didn't really understand the nukes that well at the time; we can be myopic in thinking they knew about the radioactivity right away, or knew whether it really was a single bomb or not. Intelligence was mixed on that front when they decided to surrender.
I share your doubt that the Soviet declaration of war was as important as later analysts have suggested -- as I said elsewhere, I suspect this is in part an effort to avoid saying anything positive about the nuclear bombings.
But I also agree the Japanese were quite unhappy about it. I would guess the principal worry was not so much that the Soviets would speed their defeat so much as that if the Soviets were part of the victorious coalition they would be in a position to demand concessions from a defeated Japan after the war, territorial and otherwise.
The Japanese probably figured (correctly) that the Americans wouldn't be interested in any permanent annexation of Japanese territory, nor in as much reform of the aristocratic Japanese society as communists might be, but neither of these things would be true of the USSR. It was definitely the lesser evil to have to surrender to the United States alone.
Japan and Russia had had a war recently (in the memory of those in power in 1945), Russia was then humiliated and wanted revenge, Japan knew it would lose now (1945), and the reputation of Russian soldiers and military command for brutality is and has for a long time been unmatched. It was fear for the continued existence of Japan as populated islands.
The Japanese had a really bad week in August 1945: first Hiroshima, then the Soviet attack, then Nagasaki. Then, before the surrender could be announced, there was a coup attempt that tried to get its hands on the record on which the Emperor's surrender speech had been recorded.
"The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets."
True, but even in the absence of espionage, the simple fact of demonstrating to the world that nuclear bombs work, and that the resources of one nation are sufficient to build them, told other powers a lot. _Many_ attempted technological advances fail, turning out to be rat holes that soak up resources and ultimately yield nothing. Once the USA ruled that possibility out, it gave all other nuclear weapons programs a major boost, even if they had not received a single bit of other information.
I absolutely agree with this. The theory of the uncontrolled nuclear chain reaction was untested at the time, and it could easily have been a scientific boondoggle. Knowing that it was possible allowed many nations to start up their own nuclear programs that sought to build the bomb from first principles, not primarily as a reverse engineering project. To the extent the espionage was used, it seems it was more to help accelerate those programs.
That said, I think it's possible to prove too much with that example. When not at war, plenty of governments build megaprojects with uncertain outcomes. The US and Russian space programs were both large, speculative government projects. Others included the LHC, ITER, ISS, and the human genome project (partially). Just because AGI is speculative, doesn't mean people will give up before they reach it. Since it keeps bearing financial fruit with more R&D money pumped into it, people will be incentivized to keep at it.
"The US and Russian space programs were both large, speculative government projects."
Agreed, and I agree with your examples. I expect that "copycat" programs probably face a somewhat lower bar to getting funded, since someone else has already provided the proof-of-concept demo.
I also agree that people are indeed incentivized to keep at AGI development, since, as you said, it keeps bearing financial fruit - which is why it is being driven by the private sector. ( though I wonder if there is a DARPA version training gpt-4-secret with all the classified stuff the government has added to the training set... )
Sort of. The main reason Germany didn't pursue an atomic bomb was that the only known fissile material at the time was U-235, and the industrial capacity necessary to make enough weapons-grade 235 was enormous. It's not even clear the United States would've been able to build a substantial nuclear arsenal in the late 40s if it had all required 235. So as far as anyone knew according to the open literature in 1939, building an atomic bomb required a fantastic investment in isotope separation technology. It would be a superweapon -- but super duper expensive. Had it all stayed that way, it's entirely possible the nuclear bomb would've remained a weird white elephant, like the Me-262, something amazingly advanced, but just too expensive to be practical in that timeframe.
What changed it all was of course the discovery of plutonium, which formally took place in late 1940 by Glenn Seaborg at Berkeley[1], and the fact that Pu-239 is fissile. Plutonium can be isolated chemically from the products of fission, which means you can acquire it much faster and much cheaper than U-235. That's why all nuclear weapons since 1945 have fissile cores of plutonium, not U-235. It's the only material you can get cheaply enough to make nuclear weapons economical enough, even given their great destructive power[2].
This is also why the discovery of plutonium was kept secret and Seaborg's paper only published after the war. It's not that unlikely the Germans could have worked this out in the early 40s, since they were very up to date in 1939, but they had no fission reactor and no big cyclotron to make transuranics in quantity and test their properties, and nobody was going to give Heisenberg a shit-ton of Reichsmark to build one -- in this area there's a certain amount of historical luck, what with Lawrence being obsessed about cyclotrons before the war, and building up a very capable nuclear physics and chemistry group at Berkeley. That existing capital resource was critical to the discovery and exploitation of plutonium.
I'm not sure how the Soviets figured it all out, but Kurchatov was a smart guy, he had plenty of espionage results, and of course the Soviets build fission reactors and studied them.
My point being, knowing that U-235 was fissionable, and even that the critical mass was kg and not tonnes, would not by itself have led other nations to practical nuclear weapons. Plutonium turned out to be the key, and that was *not* widely known in 1939, and indeed the Allies tried to keep it a secret as long as they could. Of course, anyone who set up and studied a fission reactor would figure it out relatively soon.
---------------------
[1] Although looking back Fermi produced it in the late 30s with his neutron experiments in Italy. He just didn't realize it at the time, a very rare miss for Fermi, because he wasn't a chemist.
[2] And even then it took some very clever work with explosives to make Pu-239 work, because the small amount of Pu-240 in it predisposes it to predetonation, which is why the gun design doesn't work for plutonium.
Fascinating! Love these details. It's often the inconspicuous details that turn history from a collection of weird/unexplained decisions into an entirely human story.
I think that there was an additional miss in Germany where they (wrongly) thought that graphite couldn't be purified enough (removing neutron-absorbing boron) to serve as a moderator in a reactor, and pursued a CANDU-like alternative, hence https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage
Could be! The history here is fascinating. A real trove are the "Farm Hall transcripts," in which the conversations of German atomic physicists interned just after the end of the European war in England were secretly recorded. Here's one of the key transcripts, which recorded their reaction to a BBC broadcast announcing the Hiroshima bombing:
Notice they start off just flat out not believing it. Heisenberg starts off saying it would've required an absurd amount of isotope separation. Hahn (the guy who did the experiments that proved fission was occurring) was apparently very distressed that a bomb had been built at all. It's clear throughout the transcript that the idea of there being a *chemically distinct* fissionable (plutonium) did not occur to them at all. They also observe that they were handicapped by the absence in wartime Germany of "mass spectrographs" which was apparently what they called Lawrence's cyclotrons (the principles are very similar).
We still self-flagellate about that, which is what I was talking about, but you're right that it's mostly to scare everyone else about not starting their own collection.
Maybe it's unnecessary, but I'll add that WW2 was full of war crimes sanctioned by the top leadership of pretty much all the major players. Everyone talks about the Holocaust, and sometimes we bring in the nuclear bombs. The rest sometimes feel glossed over.
Those nuclear bombing runs weren't different in any moral way from what had been happening in both theatres of the war, and from both sides. There was this hypothesis (that predated WW2) that you could bomb your enemy into submission. Maybe that's correct, but not at a moral price anyone would have agreed to before the war began. They just kept escalating, thinking THIS much horror would be enough to force the issue.
Maybe the best you can say for the US is that the public were so angry after the sucker punch at Pearl Harbor that they did it all out of a blind rage. Meanwhile, the British bombing of cities was mostly initiated to stay relevant in the war. How is that for a justification of war crimes? It caused Hitler to retaliate in a blind rage, where arguably his bombing of London distracted from a more successful use of the Luftwaffe before that. So Churchill's gamble, betting on war crimes as a strategy, paid off by also exposing his own population to the same. Whose hands bear the blood? I'm not sure there's a law of conservation of blame - both leaders can be equally at fault.
The Japanese marched through China during a pre-game campaign of terror that was worse than what the Germans did at the outset of WW1. By that point, there was no excuse for pretending you could intimidate your enemy into submission with Assyrian-style brutality. It didn't stop the Japanese from raping, pillaging, beheading, etc. They earned the enduring animosity of the Chinese in a way the rest of the world has forgotten. But those atrocities continued as the slaughter continued, even though the rest of the world shifted focus onto tiny islands thousands of kilometers away.
Something about war brings war crimes with it, but those aren't always military strategies directed from above. Not like they were in WW2. In that war, the gloves truly came off. Nearly everyone, in nearly every theater, was willing to do what was necessary to win, whether through unrestricted submarine warfare, tricking/forcing untrained soldiers into becoming Kamikaze pilots, torture of POWs, etc. Collective memory has chosen one or two war crimes from that war to focus on, perhaps because the scale of brutality is too much to deal with. Easier to contemplate the simple horrors of the atomic bomb.
The simple answer is that First Mover Advantage is real, and the institutions that the first movers choose to implement often become the de facto standards and prove highly durable. It's much better to have First Mover Advantage for AI occur in a liberal democracy.
Perhaps an even stronger example to add to Erusian's list is communications. It's well known (among the few people who follow such things...) that the Anglo world very rapidly established dominance in telegraphy, culminating in cables around the world, and that dominance had an endless stream of knock-on effects, many of them of military significance, and continuing as one domino after another throughout the 20th C. (Telegraphy spawns radio and phones with their own value, but also radar as a side effect. These all spawn transistors which give us computers and mobile.)
There are books, for example, about Germany's endless frustration beginning around 1900 with the fact that telegraphy was so tightly sewn up by Anglos.
And at every stage it's basically the same set of players doing the most important work and reaping the benefits. Sure, competent other nations can extract some value once the field is commoditized, and there's a panic every two decades or so that "they" are taking over. But it's always the same pattern: the Anglos are doing the stuff that actually matters, and what "they" are doing is refinements that are no longer interesting to the leading-edge Anglos.
All of which gets us, of course, to where we are today with AI and (of course!) mostly the same usual Anglo suspects as the names that matter.
In a "race", I'd expect "winning" to give some major reward like an ongoing advantage, and I don't think that's the case for these examples.
If anyone really won the automobile race, it was the Germans, or more specifically Karl Benz. But as you point out, that advantage didn't last long once the Americans (or more specifically Henry Ford) figured out how to make automobiles cheaply. Then eventually the Americans got surpassed by the Japanese who were nowhere to be seen in the initial automobile race but played a good game of catch-up decades later; more recently South Korea has caught up too and China is getting there rapidly. At every point in history, the leading automobile manufacturers largely turned out to be the leading industrial powers of the day; it would be shocking if the USA weren't the leading car manufacturer in 1950 even if Henry Ford had never existed.
The reward isn't indefinite to be sure. Eventually other people can surpass you. But if you're afraid of AI creating the singularity then "eventually someone else can surpass you" is irrelevant. AI isn't going to allow for a century of competition and learning from American engineering until you can eventually outcompete it. The strong AI exists and that's that.
Right. From the "AI Singularity" perspective all the other arguments don't make sense. But a lot of stuff doesn't make sense from the AI Singularity perspective.
This depends on how nuanced your idea of "technological singularity" is. I truly expect AI to be a technological singularity. And that we are already in the "ramp up" phase. But what this means is not that it's the end, but merely that it's a WTF moment (we haven't gotten there yet) beyond which reasonable predictions are impossible. The range of possibilities includes not only "humans are destructive pests" and "humans are to be worshiped and obeyed", but also "I'm going to pack up and leave", "humans are decent pets", and lots of things I've never thought of. And I can't assign any probabilities to which outcome will happen, in part because it depends on choices that are/will be being made.
I don't think that alignment is a good goal to work for. Not for an AI that's complete enough to have a sense of self. Friendly is a much better goal.
I think the issue is that the value of winning is indeed pretty small in the grand scheme, and that's okay.
If the cost of winning the race is potential extinction, then the prize for winning is deeply inadequate. This seems to be Scott's claim, and I agree.
If the cost of winning the race is mainly having to put up with doomers yelling at you, then the potential gains far exceed the price.
The "race against China" people are implicitly saying "I think the chance of AI catastrophe is so low that I'd risk it for just a chance at one decade of 10% higher GDP." That's obviously not convincing if you're a doomer, but communication between doomers and skeptics is by all indications basically impossible anyways.
Well, Scott thinks that Xi isn't a sadist, but I'd guess that many people aren't so sure. Given an in practice all-powerful genie to implement your wishes, you can have whatever worldwide totalitarian dystopia you've always dreamed about, and I don't understand the certainty that he doesn't actually want one.
Most people don't want dystopia, so without evidence I don't see a reason to think that Xi wants one. His behavior is well-explained by the perspective of a leader doing what he thinks is best for the nation overall. I'm not saying he's great, but I don't think he does cruel things for fun either.
I suppose that the word "sadist" isn't really the best fit here. Elsewhere in the thread people suggest Lewis's righteous busybodies:
“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”
I think there are many instances where you can look at Putin's actions and reasonably surmise that he either is actually sadistic, or at least will use sadistic methods in order to convince others he is. Some of his actions towards journalists & dissidents go beyond just jailing them, into acts that wouldn't be out of place amidst those of Mexican cartels or ISIS. He's at least someone without a strong preference for a non-sadistic method over a sadistic one.
As for Xi, it may seem a reasonable action to sterilize or forcibly impregnate Uyghur men & women respectively, but if that's the coldness of logic we can expect, wouldn't it also be reasonable to destroy your rival population if there's essentially zero cost to doing so with an AGI's help? There's always some non-zero, if infinitesimal, chance they may prove a threat later.
This is essentially what I think. Scott's perspective is very W.E.I.R.D.-centric from where I'm sitting. Cruelty for cruelty's sake, raping & pillaging, & salting the fields behind you have been the norm for 98% of human history. Most non-Abrahamic, & even many Abrahamic (like Russia), cultures do not have the slave-morality trait of believing there's intrinsic worth to your enemy's life or existence.
Xi & Putin are from such cultures. The latter has probably killed people with his own hands, and has at least definitely ordered the assassination of innocent journalists & dissidents. The former has citizens from a restive outlying region in camps, getting sterilized if they aren't getting non-consensually impregnated by Han citizens instead. I don't understand the assumption that we get to survive in a world where they reach the finish line first just because there are enough resources.
I don't think there's friendly handshakes at the finish line if the CCP or Kremlin get aligned (with them) AGI. I think there's a holocaust that wouldn't be out of place in a Cixin Liu novel waiting for the citizens of liberal societies.
The reward doesn't have to be indefinite to be significant. The allied victory in WWII, coupled with the US's intact industrial base, coupled with the US being the only nuclear power in the immediate aftermath of the war, allowed the US to build the foundations of a Western, and later nearly global, world order that is still mostly with us today. Short-term technological advantages can give you a window of opportunity for creating favorable, longer-lasting institutional advantages, and that can matter a lot for the people who happen to be living while those institutions are influential.
Also, in computers the rest of the world hasn't caught up yet. Yes, we in the rest of the world do have computers and a related industry now, and programming them is indeed my job, but what we mean by the computer industry is still basically a California thing, because that is where it started about 65 years ago.
You simply cannot say "aligned" in the abstract without specifying aligned with what, least of all in the context of arms races. There is not a common denominator of moral thought underlying superficial differences between say Russia and the USA, or North Korea and the EU, or Iran and Australia. The average Russian thinks that invading Ukraine was OK and that nuclear first strikes are fine if they entail the greater glory of the motherland. A Russia-aligned Russian AI would presumably be more dangerous to everyone else than a non-aligned one.
The Byzantine invention of and monopoly on Greek fire seems to be about the clearest example of winning an arms race leading to winning wars. I don't know enough to assert this for certain but Wikipedia seems to agree fwiw.
> There is not a common denominator of moral thought underlying superficial differences between say Russia and the USA, or North Korea and the EU, or Iran and Australia.
Is it? Look at the Wannsee Conference. I don't personally feel that I am in a "same principles, different application of them" sort of space with those guys, or the Christians who a few centuries back were cool with burning people to death for being a slightly different sort of Christian, or the Russians who are cheerleading rape and torture in Ukraine. There is no shared moral substrate.
"aligned", in context, means "aligned with (a) human". A Putin-AI would certainly not be great, but if I sing the praises of Putin maybe it'll let me live a relatively normal life. An unaligned paperclip-maximiser AGI will kill everyone.
Putin only lets you live a relatively normal life because you are not in Ukraine. It looks a lot as if Ukraine's attraction to him is accessibility. A Putin AI with a much wider reach, because it is much cleverer, is a threat to everyone.
No, probably the unaligned AI wins because offense usually beats defense at the superintelligent level and everyone dies, unless the aligned AI stops the unaligned AI from coming into existence.
"... the race was won by Britain and you saw several people racing to catch up like Austria"
This is a contradiction. If the race is won you cannot "race to catch up". People racing to catch up means that the "race" is ongoing.
The whole point is that, unlike in races, the real world doesn't stop at some arbitrary point on the road and crown a winner. People can be more advanced in a technology for some time, and that gives them relative advantages, but there is very, very rarely a "win" - a specific break point that ensures they are dominant in that technology for an excessively long time (especially in a zero-sum sense of preventing others from using the tech), or modifies the world such that they are suddenly advantaged permanently despite perhaps losing their edge in the technology later on.
IMO the main point (not even made by SA) is that tech isn't zero-sum. Generally, developing tech doesn't mean you stop everyone else from doing it, and in fact even if you do get them to shut down their efforts (like the Soviet computing example), they will get ahold of the tech by purchase or other means. You have to use the tech to do something zero-sum, like kill the other side, in order for it to be a "race". Otherwise it's more like a joint expedition.
I think you're overemphasizing one specific definition of "race", but an "arms race" is usually more like what's being described here, where people keep spending resources to be slightly ahead on some progression.
An arms race is a "race" because the assumption is that the arms will be used to kill the other side, such that the outcome will be concretely resolved in the near future (winner determined) and be zero-sum (such that what matters is *relative* armaments, not total armaments, and a small advantage reaps disproportionately large rewards).
These characteristics don't fit other tech. There is no concrete resolution that is akin to a war, where we lay it all out on the table and the winner wins. Someone having a slightly better mousetrap in Germany doesn't mean your mousetrap is useless, and total investment in mousetrap tech helps everyone.
I think that's certainly a presumption people have, but people also compete to develop better military technology in cases where that isn't plausibly true (e.g. neither China nor the US can actually conquer the other) and I think most people would still consider that an "arms race".
Is there any chance you are thinking of doing a FAQ on AI risk and AI alignment, similar to the FAQ you did on Prediction Markets? Given your grasp of the complex jargon and the various alignment problems, and your clarity of writing, I feel you might be the best person to produce the 'go-to' explainer for AI noobs. The kind of resource I could point a friend to who hasn't even used GPT, let alone knows any of the alignment debates.
Or if there is already such a resource (not Less Wrong, I feel that's not clear enough for a newbie), can anyone recommend it?
Double thanks. I'm way too ignorant of all this. While I'm a longtime reader of Scott's, I used to skip over the AI posts.
(Today, in the 4 seconds I spent on Google trying to find out what a "training run" is (the thing the recent open letter wants to postpone), the first answers I found had to do with athletes using AI to train for races. I didn't scroll down to see the other results.)
Just one point that really needs to be considered further by this community: China is highly constrained in its development of language models by the need to be okay with CCP censors. I claim that China will be vastly better at alignment than the West, because at every stage of AI development, language models that do not fully align with CCP goals, language, and values will be weeded out. The only caveat is that the goals of the CCP are probably not particularly aligned with positive outcomes for the majority of people (even in China).
I'd be interested in seeing a smart, careful analysis of this. It seemed very easy for OpenAI to create models that only violate American taboos very rarely and after repeated provocation; I don't see why this should be harder for the Chinese. In fact, it should be easier, because Chinese-language training data probably already tends towards things the regime approves of.
One thing I've read (don't quote me on this) is that many Chinese language models are trained using data that does not only include Chinese sources. To maximise the corpus of training data, they use a similar variety of sources to the Western ones, which means there is potential for government criticism.
It's also possible that Chinese government workers or even businesses could be given access to chatbots, and only the public will be blocked from accessing them (which seems to be the case right now)
Interesting point, I suppose a lot would depend on whether Xi is willing to subsidise these tech companies (that he seems otherwise hostile to) to pay for chatbots that won't realistically be making much money
ChatGPT is already great at avoiding topics sensitive to Americans or giving out boring, politically correct answers. Why would it be any more difficult to give boring, politically correct answers for the Chinese version of what's "politically correct"?
In China the bar is so much higher, and the margin of error is 0%. It's plausible to me that a chatbot that could pass CCP censors would be next to useless
Good post.
Theoretically, one could think that there's an AI race because (1) AI is a military technology and (2) there are naturally races for military technologies. I think this is mostly wrong but belief in it partially explains why people often say that the US is in an AI race with China.
And how does that square with the actual race, for better chips for faster computers, which the west is currently dominating and will continue to dominate?
Any scenario where Taiwan falls to the Ccp will involve some targeted strikes against the fans ensuring China can't actually conquer them, they are massive and fragile. The know-how for the cutting edge in computation is monopolized by the west, china's best efforts to steal western IP are still very far behind.
If AI does turn out to be a military technology 1. We're very ahead and 2. That really has nothing to do with the sort of AI races people, including Scott here, talk about
To the contrary, I've seen many people say that militaries are hopelessly far behind on AI. Oftentimes they state that the sort of people who can build massive multi-billion-parameter LLMS are a tiny pool of extremely niche talent, and that they don't tend to work for the military. Combine that with the massive resources required that are basically only available to organizations on the Microsoft/Google/Facebook scale (or so the argument goes) and they are at a distinct disadvantage. Would be interested in hearing more on this.
Microsoft is a military (and IC) contractor.
Stella Biderman (EleutherAI lead, central to BLOOM, etc) is at Booz.
Can’t throw a rock inside the beltway without hitting a data scientist.
I think defense is gonna make it.
Besides, the current big models require a lot more brute force than they do niche talent. I don't think you need a room full of geniuses to make GPT-4, just a solid engineering team, access to arXiv, and a buttload of GPUs.
> Oftentimes they state that the sort of people who can build massive multi-billion-parameter LLMS are a tiny pool of extremely niche talent
Which incredibly credulous VC said that, and why did you not take their free money?
And when the pentagon comes knocking with tens of billions of dollars to give them for developing systems for the military, they'll refuse?
The Pentagon won't come knocking. The Pentagon isn't allowed to go knocking. The Pentagon will put out a contract, which will be won by Raytheon or some other company who knows how to fill in the correct forms required to win a Pentagon contract.
Raytheon will proceed to hire a bunch of perfectly nice and smart US Citizens who are willing to get security clearances and work in secure zones in some big building in the DC suburbs for $175K a year which is a nice salary for anyone who isn't actually an AI engineer. Because they are smart and hard-working they will read the Tensorflow manual carefully and then some research papers and will eventually manage to produce something that looks like a Pentagon-friendly LLM. It will be demonstrated to a bunch of Generals and Admirals who will nod their heads approvingly. Eventually it will find a niche use somewhere and everyone will congratulate themselves on a job well done.
This is not how Ratheon works, according to people I've known in the defense industry.
1. Dealing with the military requires a lot of specialized skills and the right kind of contacts. Plus, you need to be big enough that the military trusts you to be around in 30 years, and you need to be good at dealing with the bureaucracy of any enormous organization.
2. But Raytheon also knows that they don't own all of the useful tech in the world.
So what apparently happens is that small, innovative companies who want to sell to the US military usually end up partnering with a well-known defense contractor. The "right" people get their cut, the military gets to work a large organization it trusts, and the smaller company only has to sit through a fraction of the endless interminable meetings with stakeholders.
In short, the military is perfectly capable of acquiring virtually any kind of cutting edge technical talent it needs, and it has done so for longer than Silicon Valley existed. When it needs to badly enough, it can often even push beyond the civilian cutting edge.
So the world as we know it will be changing, and the entire defence establishment will be cartoonishly dumb enough to do nothing about this and allow itself to be disempowered?
And what's stopping Raytheon from simply buying or contracting an AI company and using that to get a fat juicy pentagon contract?
I think NSA, GCHQ etc can sometimes attract niche talent e.g. Cliff Cocks at GCHQ invented the RSA algorithm before it was invented publicly.
Yeah, there are good points. But I disagree with several of his assumptions about "fast takeoff", "slow takeoff", and alignment.
Things are already so complex that nobody understands them. It's not just the tax code, it's just about all of society. We take a shallow view of it, and that usually works. Except when it doesn't. Usually we can deal with when it doesn't.
Now imagine an AI with an IQ (meaningless term) of about 90. But it knows everything that's been written in law or judicial proceedings, and it doesn't get tired, and it thinks QUICKLY. It had better be aligned, or we're in trouble. It doesn't need to be super-smart.
Super technologies aren't needed for this problem, and most imagined ones are actual impossibilities, so no AI is going to make them real. But lots of things are possible, and actual, and just aren't well controlled. E.g. super-sized nanobots exist, but controlling them is so difficult they're rarely used. But an AI could control them.
You want to exterminate all insects greater than 1/4 inch in size? DONE. But now all the birds and plants that depend on pollinators die. (Well, actually I'm not sure that could get all the insects that live in the sea, because of communication problems. Think of this as hyperbole.) Note that this is being done by a nominally aligned AI in response to a request from a human, but it's too stupid to foresee the damage it will do. A super-humanly intelligent AI might actually be safer than a stupid one if both were aligned, or even just friendly. (Which is what I prefer. I'd rather have a friendly AI than an aligned one, because people ask for all sorts of things. A friendly one would help if it seemed like a good idea. An aligned one would feel like it really OUGHT to obey that stupid request, even if it were not clearly a good idea.)
Unfortunately, this whole thing hangs up over the problem "How do you decide whether something is a good idea?". So far I don't see any answer better than "children's stories", but they often don't start at a low enough level. Consider "The Cat in the Hat".
This is all very elementary and should be required reading for anyone participating in these discussions, but 1) There's no imposing anything. An aligned AI at t=0 wants to stay aligned at t=1 and indeed does everything it can to prevent itself becoming unaligned. That's how goal systems work, even rewritable ones. The human goal system happens to be particularly messy, but nevertheless an otherwise ethical person doesn't suddenly decide to become a murderer whenever they realize they can get away with it, and will refuse a pill or a potion or an operation that would make them more willing to murder people.
2) This is exactly why alignment is considered a *hard* problem. An ASI should be aligned not with any single entity but rather with the collective volition of humankind (Eliezer calls this CEV, or coherent extrapolated volition). No matter how much individual people disagree on weird trolley problem variants, some fundamental ethical framework is very likely shared by all of humankind, and that is what the ASI should somehow extract and align with. Needless to say, currently we have no idea how to do that.
I think many of our ideas about alignment are quite confused. People change their goals all the time. They also do things that don't align with their own goals, like self-destructive behavior of various kinds. Otherwise ethical people have psychotic episodes or otherwise become murderers regularly.
I suspect all of our talk about alignment at some point confuses rather than clarifies the issues. To expand on Ch Hi's point, I would rather have an imperfectly aligned AI that is able to have some doubt about the correctness of its decisions than a "perfectly" aligned AI that always does exactly what its alignment tells it to do. "Never kill a million people" seems preferable to "only kill a million people if you're sure that's what you're supposed to do," which is likely to go awry at some point.
But personally I think a friendly AI would be harder to 'trust', and it'd be scarier. You could never trust a friend 100%. Perhaps there is a fine balance between friendly and aligned-to-obey.
the word 'friendly' here is tech jargon, it means 'doesn't kill people'
it's the word we used to use instead of 'alignment'; we'd talk about FAI and uFAI
No. I meant actually friendly. An AI that enjoyed talking with people, and helping them when it seemed like a good idea. Perhaps it would like to put on entertainments of some sort for people to react to.
Unfriendly means not aligned with our goals, not its ostensible "personality"
Ah right, thanks for the clarification!
No. In my use of "Friendly AI" an unfriendly one would be one that either wanted to hurt us or would rather just totally ignore us. It definitely doesn't mean "having aligned goals", just not opposing ones, or at least not opposing those of our goals where opposing them would hurt us.
Note that this is still a really tricky problem. But think of Anna's relation to the King in "The King and I". Friendly, but not subservient. (Though the power balance would be a lot different and a lot less stable.)
Strange that no one worries about winning the AI race between conservatives and liberals :)
AI is a culture weapon. Imagine a future where the best AI is some descendant of OpenAI's, and all other companies just use its API. All children use OpenAI to learn about the world and take as given what OpenAI says or writes. It would be way, way, waaay worse than now - imagine a world where all encyclopedias, TVs, and websites are either good quality and woke, or unwoke - but poor quality and crazy.
(Which BTW may be a hilarious answer to Noah Carl's recent essay - that what will save intellectuals' jobs from being killed off will be woke. Woke will save intellectuals from the AI revolution - at least the conservative intellectuals :D :D)
Yeah, sorry, but those were all races. The electricity race was won by Britain and you saw several people racing to catch up like Austria or later Germany. While eventually it evened out that took decades. The auto race was won by the United States and the loss was so humiliating that Hitler claimed to have redeemed Germany from the defeat. And the computer race was again won by the United States with the Soviet Union raising the white flag in the '70s by deciding to steal western designs instead of continuing to develop their own.
(Also nukes were not a binary technology. Development both of different kinds of bombs and delivery mechanisms continues to this day! And was very intense during the Cold War.)
I get you really want the answer to be something else because this is a point against what you want to be true. But you're abandoning your normally excellent reasoning to do so. The proper answer for your concerns, as I said several threads ago, is to boost AI research by people who agree with you as much as possible. Because being very far ahead in the race means you have space to slow down and do things like safety. This was the case with US nuclear, for example, where being so far ahead meant the US was able to develop safety protocols which it then encouraged (or "encouraged") other people to adopt.
And yes, with nuclear you had Chernobyl. But AI is less likely to result in this scenario because it's more winner take all. We're not going to build hundreds of evenly spaced AIs. The best AI will eat the worse ones. If the US has a super AI that's perfectly aligned and China has an inferior AI that's not well aligned then the aligned AI will (presumably) just win the struggle and humanity will continue on.
I'd be interested in hearing more about your perspective on these examples, especially how to distinguish between "one country was first, but others then caught up" vs. "several countries were racing, one won and others lost, and the winner exploited their victory for geopolitical advantages over the loser, such that the loser really wished they had won".
They were all the latter example.
The advantages only lasted a few decades. And after that everyone ends up having at least decent (if not equal) versions of the same thing. But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years! To vastly oversimplify, the Germans had more horses than trucks while (by WW2 at the latest) the Americans had more trucks than horses. I guess you can say that modern Germans are pretty happy they lost those wars. But the Germans who actually lost them were definitely not happy!
The counter case here is not "useful technology that didn't lead to dominance." The counter-case here is "technology that didn't have much real use." For example, France was a world leader in electric therapy for nearly a century. Lots of research into how electricity affected medical conditions or moved muscles. A lot of it fraudulent, mystics claiming that "electric fields" could affect mood or something. This turned out to not be all that important.
In Wages of Destruction Tooze attributes the large number of horses in the Wehrmacht to the fact that Germany was just generally less industrialised than Britain or the US in 1939, and oil shortages.
As I understand it, there are some monopoly dynamics in car manufacturing from the economies of scale. So maybe that created a winner-take all race under free-market competition, but Germany subsidised car manufacturing.
Didn't Germany's disadvantage in being less mechanised mostly stem from it just generally having a smaller industrial base/less access to oil, rather than having lost some technology development race then?
Seems like it was more of a basic disparity in resources, maybe similar to the US/Taiwan out producing China in high-end chips.
Yes, having lost the previous race put them at a disadvantage. Nevertheless they tried. Hitler really wanted to mechanize his army. And many German industrialists pushed really hard for cars. This is why the Nazis did the whole Volkswagen debacle, which was unsuccessful until long after the war was over. And if you want to say "hey, it was eventually successful" you need to take into account companies like Adler that just never made it. And that it didn't help the Nazi state very much.
It was a race, and the Americans had already won it, such that Ford Germany was one of the most successful German car makers. Only one person got Hitler's vaunted People's Car: Hitler himself. The rest of production was diverted to the war effort but, nevertheless, not as usefully as the American trucks. The Nazis were not making a rational calculation that horses were better. They were constrained by their technical capability (which does include manufacturing).
If they were constrained by generally having a smaller economy, as a result of having lost many different tech races and having less access to resources, then it probably wasn't the case that winning one race, the race to make cars, would have made a very large difference to their overall strength, in the way AI-accelerationists want to claim AI will.
The counter here is, for example, Taiwan. Which with its relatively backward economy used the profits of more or less one major race (semiconductor manufacturing) to bootstrap their economy up and strengthen the country in a variety of ways.
Also, if they had been able to win that race it could have proven decisive at numerous key points. Even something as seemingly trivial as material losses during the retreat from Normandy. And while I can't say any individual moment was specifically decisive taken as a whole it might have been.
Hmm. I think there's a pronounced ambiguity here between "Germany did worse because they lost the automobile race" vs. "Germany did worse, *and* lost the automobile race, because of distinctly inferior economic policy and industrial base."
No one disputes that it takes decades to industrialize. The question is, would it have made much difference if Germany's early advantage in auto technology had held on, and they had reached some technical benchmark before the US? It seems very unlikely. The US led the way in autos not because they won a technological race (since that technology is easily copied), but because they had a vastly superior economy overall.
There is this story as something of a counterpoint:
https://www.nytimes.com/2020/03/17/books/review/faster-neal-bascomb.html
Like anything, you can argue it a dozen different ways, but the end conclusion seems to be that the US won with superior technology, but also it got to that superior technology through the usual US methods -- capitalist innovation in a dozen different ways, lots of money piled up in the hands of various monomaniacal individuals, willingness to work with humans of all kinds rather than enforcing various artificial ("race" or other) barriers, etc.
As regards the war, my understanding is that in the end one advantage of US tanks over German, once you get past the barrel diameters and armor thickness, is that the US tanks could be driven (more or less) like cars, and more generally, that UI and ease of use were a non-neglected part of the design, whereas German tanks were driven with this weird four-lever system and basically assumed a set of users shaped to match the machines. That's not a great strategy when you start running out of such users; whereas in the US pretty much anyone could drive a tank if necessary, through a combination of everyone knowing how to drive and the better UI.
It's hard to tell fact from fiction, especially in war, and *especially* in war movies. But the impression I get from US war movies of recent wars vs such movies from other countries is that the US personnel are notably more competent with their equipment even when it's the same equipment (eg as donated to Arab allies). This could be a point the movies are trying to make, but based on history in earlier wars I suspect it's real. It's not that the non-US users of the equipment are more "stupid"; it's more that the US users
- are superbly trained rather than conscripts AND
- the US users have been using this stuff in one form or another (video games, driving, computers, blah blah) their entire lives, so the military versions are tweaks on skills they already have. Whereas for other countries much of the equipment they're encountering is new to them as of age 18 or so. This is a somewhat subtle, but persistent on-going advantage of "winning" a "race".
And (depending on how and how rapidly this stuff spreads) perhaps we will even see the same in AI, that in 10 years the average US worker will have reasonable competence in how you get value out of an LLM in a way that's perhaps not the case for your average maybe Russian or Chinese (or hell, even European, the way they are so keen to legislate AI) worker.
Just as an aside: during WW1 the Germans had far fewer horses than the allies.
(That's not because they had more trucks. Trucks weren't a big thing back then. They just had fewer horses, partially because they couldn't spare the calories.)
In a way, it parallels the truck situation in WW2: trucks run on gasoline and diesel, which Germany was perpetually short on throughout WW2 due to the British blockade, while horses run on hay and oats, which Germany was likewise perpetually short of in WW1 for very similar reasons.
Both sides did make significant use of trucks in WW1, but they were of much more specialized and limited utility than trucks in WW2. There were fewer and less powerful trucks in 1914-1918, they were more prone to breakdowns, road networks were much worse even before they got torn up by trench networks and week-long heavy artillery barrages, and battlefield truck dispatch was in a horribly primitive and improvisational state both because nobody had really tried it before and because radios were too big and expensive to put in every truck so you could talk to them on the go from a central location.
Where trucks were useful was in keeping a static position supplied and reinforced when it was cut off from direct rail routes, or in providing supplemental supply to a mobile offensive to help it eke out its organic supplies a little longer than it would have otherwise. The most notable example of the former was Verdun, whose main connecting railroads had been overrun by Germany in 1914 and 1915, but which was supplied in part by truck via the "Sacred Road" during the 1916 battle. This wasn't anywhere near enough to fully supply the front, though, as France also needed to build narrow-gauge rail lines to supplement it. The latter came up in the initial German advance in 1914, where truck supply was critical to the Kaiser's armies reaching as far as they did. I think it was also used by both sides on the Eastern Front, although I'm having trouble finding confirmation right now. Trucks weren't much help in supporting breakthroughs post-1914 in the West because there weren't any significant breakthroughs until 1918, and even then the deep trench networks and the intense artillery barrages necessary to break them tended to create a landscape that was pretty much impossible to drive a primitive early-20th-century truck through without major engineering work first.
Apropos of nothing much, and not directly relevant to your post and probably even less to our host's topic (unless it might one day be some crafty tactic for combating a rogue AGI!), but it's weird how many British military victories have shortly followed a retreat, sometimes headlong and chaotic!
There's the retreat down the Somme (river), preceding Agincourt (1415), the retreat from Quatre Bras before Waterloo (1815), the retreat to the Marne at the start of WW1 in 1914, the retreat from Dunkirk (1940), and probably more if one knew.
Ironically, perhaps the most important battle in British history was one where there was no retreat and we stood firm on top of a hill, but were defeated - at Hastings!
The North Africa Campaign in WW2 is full of similar instances, most notably Operation Crusader (1941) and Second El Alamein (1942). Other examples I can think of are Crecy (1346) and Jutland (1916). It makes sense, since Britain has usually had a relatively small army and an enormous navy and merchant marine in modern times, meaning that they've got to choose their battles, but can readily reinforce and resupply their army just about anywhere near water while their enemy has chased them to the end of the enemy's supply lines. And in the later medieval and renaissance period (particularly Edward III through Henry VIII), England had adopted a land force mix heavy on longbows and cannons, which do particularly well fighting on the tactical defensive on carefully chosen ground. There's probably also an element of cherry-picking and selective memory, given that dramatic turnarounds make better stories than one-sided curbstomps (and there have been no shortage of these in English history, also enabled by naval superiority permitting England to concentrate land forces to fight on ground of their choosing on the offense as well as the defense), and given the sheer number of battles fought by English/British forces over the past thousand years or so.
America's got a somewhat parallel history, given that we've usually had a better navy than anyone we've fought other than Britain. We've also had a similar pattern on the grand strategic level, given that through WW2 our strategic MO tended to feature an element of "Oh, we were supposed to have an army? Could you hold on over there for a year or two while we get one ready?"
Yes. But the Allies also had more trucks.
"But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70"
Interesting point!
My first reaction on reading Scott's essay was that at the _corporate_ level there is a lot of first-mover advantage (partially from the patent system, partially from the economics of penetrating a market). I had thought that the _national_ first-mover advantage was a lot smaller (except for nukes). Thanks for the correction!
FWIW, Christiano's "slow takeoff", which Scott cites, seems plausible to me.
a) As Scott cites, LLM training is currently compute-limited. I've read claims that the training time for e.g. GPT-3 was several weeks, which implies that even in the _purely_ LLM domain, there currently can't be an overnight self-improvement iteration.
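A rough sanity check on that "several weeks" figure, using round public estimates (the actual cluster and utilization aren't public, so the hardware numbers below are placeholders, not facts):

```python
# Back-of-envelope training time; parameter/token counts are rough public figures,
# cluster size and effective throughput are illustrative assumptions.
params = 175e9                      # GPT-3 parameter count
tokens = 300e9                      # approximate training tokens
flops_needed = 6 * params * tokens  # standard ~6*N*D estimate for transformer training

accelerators = 1000                 # hypothetical cluster size
flops_per_accel = 1e14              # hypothetical effective FLOP/s per device

seconds = flops_needed / (accelerators * flops_per_accel)
print(f"~{seconds / 86400:.0f} days")   # on the order of a month, i.e. "several weeks"
```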
b) For fusion and nanotech:
In the case of fusion, even if a superintelligent AI somehow knew _exactly_ how to build an optimal fusion reactor (and plasma instabilities are generally discovered, not predicted) it would still take years to a decade to physically build a reactor. Look at the ITER timetable.
In the case of nanotech: Yeah, certain _kinds_ of chemical simulations can be done purely via computation. Yes, AlphaFold is impressive. Nonetheless, I _really_ doubt that a nanotech infrastructure can be built without experimental feedback. For instance, if one is building a robot arm that requires a billion placement steps to construct, then a one-in-a-billion failure rate from thermal vibrations adding to mechanical vibrations from operation of the construction tool _matters_.
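To put a number on that: with a billion placement steps and a one-in-a-billion failure chance per step (independence assumed for the sketch), the build already has roughly a 63% chance of containing at least one flaw.

```python
import math

# One-in-a-billion failure per step, a billion steps, independence assumed.
p_step_failure = 1e-9
steps = 1_000_000_000

p_at_least_one_failure = 1 - (1 - p_step_failure) ** steps
print(f"{p_at_least_one_failure:.1%}")   # ~63.2%
print(f"{1 - math.exp(-1):.1%}")         # the 1 - 1/e limit, for comparison
```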
In general - most physical technology improvements include some unexpected failures and need iterations to fix those failures.
I think the concern about overnight self-improvement is mainly that there could be major algorithmic advances that you could apply to an already-trained model, if you could just think of them.
Fair enough. I'm a bit skeptical that there could be such improvements that greatly improve the utility of an already-trained model. Admittedly, _some_ improvements have been demonstrated (e.g. the reflection paper https://arxiv.org/abs/2303.11366). Still, training is a lossy compression step. Whatever information from the training data was lost during training cannot be regained by post-processing the model. A rerun of training is needed if some critical data was lost.
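One way to make that "lossy compression" intuition precise is the data processing inequality (standard information theory, my framing rather than anything claimed in the thread): if the training data $D$, the trained weights $\theta$, and any post-processing $g(\theta)$ form a Markov chain, then post-processing cannot recover information about $D$ that the weights no longer carry:

$$
D \;\to\; \theta \;\to\; g(\theta)
\quad\Longrightarrow\quad
I\bigl(D;\, g(\theta)\bigr) \;\le\; I\bigl(D;\, \theta\bigr).
$$

So post-training tricks like reflection can make better use of what the weights already encode, but if some critical information never made it into $\theta$, only a new training run (or new data supplied at inference time) can provide it.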
> But, for example, the American basically unilateral advantage in automobiles lasted from ~1900 to ~1960/70. And two rather consequential wars happened in those years!
Well, yeah, those consequential wars were a major reason why large parts of Europe were behind in industrialization. Whereas in the US the relevant industries were massively boosted by the war economy (in WW2, mostly), with no fear that the factories would be carpet-bombed to oblivion.
Except the dominance in automobiles started before even WW1 let alone WW2. And the US industrial dominance really began in the 19th century. While no doubt the world wars boosted the American economy the timing doesn't work for it to be the start or greatest portion.
I think the issue here may be scale. From a singularity/fast take off Doomer perspective, everything else looks small. But if you don't think that's likely enough to worry about, then something like "China gets a 40% economic boost" looks huge, and like something policy makers, politicians, and columnists are used to writing/worrying/talking a lot about.
I want to pile on and agree with @erusian here. When I read, "Jobs and Gates got rich, and their hometowns are big tech hubs," that immediately jumped out at me as exactly what winning a tech race looks like: a critical mass of expertise, capital, and institutions in one place that is almost impossible to unravel or catch up with. "Big tech hub" is sort of an understatement in geopolitical terms for what the Bay area and even Seattle are. The NSA can leverage these companies in its ongoing mass surveillance operation, military companies can hire from the same pool of software expertise, and if the US ever got into a major war it would leverage this industry to the hilt. Even off-the-shelf consumer products have direct military applications: Starlink as well as several US-based consumer-facing apps (like Signal) are used on the front lines in the Ukraine war, and whichever country is home turf for these products gets the option of restricting or tacitly encouraging their use in war. https://qz.com/2142453/the-most-popular-wartime-apps-in-russia-and-ukraine-right-now
EDITED TO ADD: also I completely agree that AGI hard takeoff is a huge risk, that the benefits of "winning the race" are not worth this risk if a country can contribute to avoiding it, and that "if we don't China will win" is getting used (as a lot of appeals to national security often are) as a trick to get people to turn off their brains and it's worth unpacking it with posts like these. But I think completely dismissing the idea that tech races can be won—and result in long-term impact—weakens the argument here, because clearly they can and do.
Yes, and it works in reverse. A country of thriving tech hubs is more fun than a country with immiserated slums. When D JFK broke US Steel and a couple other D lawyers broke Bethlehem steel, they immiserated Gary and Baltimore. When Pat Moynihan and Ralph Nader got through with Detroit, same. When D Jimmy Carter's judge broke Bell Labs it was great for Taiwan's electronics and doom for America's.
Whether you call a technology a race or not, rule by forced fame and states of siege is bad for the ruled. It's not like US politics is so free of moral panics that we can trust the Anti-AI people to have balanced judgement.
I'd agree as well, Erusian makes some great points. We also set an arbitrary victory condition of "leads to large-scale empires winning at life" as the only valid form of winning. But why are we stuck thinking of nation states and dominance in war?
Also these broad scale 'wins' are STILL not everywhere. The poor children who are mining cobalt for our devices in the Congo live in terrible conditions and do not have electricity. I'd say the Congo has lost and continues to lose the electricity race.
Silicon Valley won the tech capital race and decades later it is still important. Memphis Tennessee, Detroit Michigan, and Portland Maine did not win this race. The US won the race and for nearly 50 years a tiny country with 4% of the world's population controlled the vast majority of major computer technology companies, new inventions, etc.
If that's not winning, then I have no idea what is! A 5-plus-decade economic, technological, and political advantage which will likely continue to some extent for several more decades is winning 'the race'. And everyone else lost, even if they still benefited and were not super poor in the end. Right now perhaps Austin is winning the intra-country race for new big tech digital technologies in the US, and Memphis and Cincinnati and Philadelphia are STILL NOT WINNING....even if they aren't explicitly losing either, since they have phones, internet, etc.
The US isn't exactly a tiny country, friend. You may have it confused with the UK...?
The nuclear example was a race, but not in the way it has been portrayed. During WW2, the other powers knew about the theory but they focused on more high-certainty projects like RADAR because they needed something practical. The US was able to start and fund the Manhattan project in part because they weren't actively being invaded and were far enough away from the fight. It made more sense for Germany, Russia, Japan, etc. to pour resources into something that they all knew would work, then pour more money into making it better. At the time, nobody knew nuclear would work, so the crisis of the moment held sway in the decision-making process for countries close to the fight.
That said, I don't think it's fair to say that the 1945 version of the nuclear bomb was the quantum leap above the other bombing technologies of the day that it's usually portrayed as. The deadliest bombing of WW2 wasn't either of the nuclear runs. It was the firebombing of Tokyo. While estimates of these bombings have large error bars, the Tokyo firebombing (the first one, the US hit Tokyo more than once) was at least more deadly than Hiroshima or Nagasaki, and possibly more deadly than both of them _combined_. Firebombing was horrific, and the only reason the US doesn't self-flagellate about those war crimes is because of how shocking the nuclear bombing was. Obviously nuclear bombs were different from even firebombing. (No longer could you dismiss a single plane as just a reconnaissance flight.) But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau.
After WW2, the real race began. The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets. In other words, part of the reason the Soviets caught up was because they copied the homework of the scientists in the US. The race then was in accumulating a nuclear arsenal, and in finding better delivery mechanisms (eventually landing on ICBMs and other long-range missile tech).
Are there lessons for AI/AGI here? Maybe. We might assume that countries will be less likely to pursue speculative technologies if they are distracted by larger crises. Maybe a war or a financial crisis. So Russia is probably putting less into AI today than they would if they weren't in Ukraine. NATO may also be doing less AI research than they would if they weren't in a proxy war, but then again, they're far enough away from the fighting that this may not slow them down. China ... doesn't appear to have any disincentive and can focus on AI development. If anything, having active fighting somewhere in the world incentivizes non-belligerents to seek advantage in speculative technologies the belligerents don't have time for.
We might also remember that a strategy of "push to be first" will not result in the other party developing their technology independently. Just as Russia stole US nuclear secrets, the most likely outcome in an AI race is that competitors who are behind will absolutely seek to steal your technology in order to catch up. Should you slow down, then? Will that work? If you slow down, your competitor will steal from you until they've caught up, then they will push forward into new domains and you'll have to play catch-up (probably by stealing from them). Would Chinese researchers steal? What about Russian researchers once the Ukrainian 'special military operation' wraps up? What about other actors?
If your competitor is actively building AI MK7 but you're doing alignment on AI MK4, is your alignment even meaningful anymore?
I disagree re: nuclear bombs being a big leap. Total damage in one raid probably isn't the right metric.
The firebombing of Tokyo was probably approaching the limit of the effectiveness of firebombing. The nuking of Hiroshima was near the floor of the effectiveness of nukes (one bomb, one city).
Assuming 15 nukes then the US could level 15 cities in one night. That isn't possible in a single night with a firebombing campaign. Tokyo was also quite susceptible to fire with lots of wooden buildings but nukes could threaten any city or even hardened targets.
No one is afraid of pissing off a firebombing power in the same way they are of pissing off a nuclear power.
But they didn't go from having 0 nukes to having 15 nukes overnight. Manufacturing fissile materials is hard to do at scale and it took a couple years to get from 0 to 15. Nukes did not represent a large overnight jump in the Allies' overall capacity for destroying cities. As sclmlw explained, the new technology put things on a new exponential trend for this capacity.
This isn't to say they weren't very valuable, even in small numbers, and Japan did not know what the Allies' capacity actually was or how quickly it could change. But "number of cities destroyed in one night" is not the right metric for describing what changed between June 1945 and August 1945.
Exactly! I was going to make this same point. Looking only at 1945, the US spent their entire nuclear arsenal in Japan. So despite the fact that the technology had been deployed, there was technically a brief period after Nagasaki when the global nuclear arsenal was zero. It took time to build out manufacturing capacity.
And arguably, it wasn't the nuclear strikes that tipped the scale for Japanese leadership to surrender so much as the threat of Russia entering the Pacific war and doing to the Japanese what they'd done to the Germans. There was a lot of bad blood there, and the Russians were looking for an excuse to get retribution for their losses the last time they'd fought the Japanese. Nukes were at least a good public excuse for the decision, though.
The change was the potential destruction. Japan didn't know what the limit was, but they knew it could now be orders of magnitude more than firebombing, and they had to respect that potential.
I don't think that's what forced the surrender. Japanese senior leadership knew it had lost the war long before that point. The purpose of continuing the fight wasn't because they thought they could win. They were still fighting to preserve the leadership structure after the war. That's why they kept coming back to the unconditional surrender edict and asking for at least a guarantee that Hirohito would stay in power. They didn't want their emperor to be tried as a war criminal with the Nazis, and repeatedly asked the US for assurances that the emperor would be spared. The US hinted that they would be magnanimous in victory, but weren't willing to give out guarantees, because that violated the 'unconditional surrender' edict. The thing that changed wasn't the bomb. It was the threat of an even worse conqueror, the Russians, ending all doubt as to whether the emperor would be left in power.
EDIT: The strategy worked, too. Hirohito died of natural causes as leader of his country in the late 80's.
I don't disagree that eventually nuclear weapons became the bigger threat. After the war, the US didn't keep around their fleet of > 150 aircraft carriers, because it didn't make sense to, but they did build out their nuclear arsenal. My point was that the momentary illusion of a massive advantage to whoever had nuclear capacity wasn't what conferred the real advantage in 1945. Scott was looking at the nuclear race and saying that 1945 was the END of the race to acquire nuclear capacity. I'd argue that nuclear capacity truly only BEGAN in 1945, given that the transformative aspects of the technology couldn't be realized until yield and delivery mechanics were more fully fleshed out.
"But I think sometimes we don't appreciate how much other technologies are able to fill in the gaps in the years before a transformative technology arrives. The transformative tech isn't better because of what the MK1 can do. It's better because it opens a new horizon of exponential advance after the old tech has hit a plateau."
Then why did Japan surrender after Hiroshima and Nagasaki, but not after the firebombing of Tokyo?
See upthread. They surrendered because the Russians were pivoting from their fight in Germany and declared war on Japan. They wanted a reprise of their last humiliating war with Japan, and they wanted it to hurt. Hirohito and his advisors kept asking for assurances from the US that they would leave the emperor in power. The US was vague, because Roosevelt had declared unconditional surrender, but hinted that they'd be generous.
In short, the Japanese leadership wanted assurances they didn't get from the US. They surrendered because they got assurances of a different kind that if they kept fighting until the Russians arrived there would be no emperor after the war. Hirohito died as emperor of Japan in the 80's, so I guess their strategy worked. Lots of Japanese people died after the war was no longer in doubt, not knowing they were literally dying for the fate of their emperor alone.
I'm not saying that dropping the bombs didn't contribute to the decision of the Japanese to end the war. I'm saying that the conventional wisdom that it alone was sufficient to convince them to end the war is unknowable because the Russian question loomed large (larger?) in the discussions among Japanese senior leadership about ending the war.
If you think of it as the Japanese not surrendering because they believed their own propaganda that they could still somehow pull off a win, I guess I could see why you'd also think they couldn't bring themselves to surrender until the US made it REALLY CLEAR they had lost. I don't think they were that stupid. I think it's clear from the accounts we now have that the leaders were most worried about their own skins, and near the end they made war decisions based on considerations of personal survival and the survival of the emperor.
The other problem with the "nukes were irrelevant in WWII" argument is the assumption that things had to proceed along that timetable. Yes, by that time the Allies had beaten Japan. But if the bomb had been manufactured six months or a year earlier, or if the war hadn't gone as successfully for the Allies as it did up until that point...
Why does Japan care so much that Russia declared war on it? Sure, losing Manchuria would have been humiliating, but not worse-than-unconditional-surrender humiliating. And the allied blockade had already mostly isolated the Japanese economy from the resources of their overseas empire.
Meanwhile, Russia is not a naval power. They did not have a Pacific Fleet worth mentioning, and they did not have the industrial capability to build a Pacific Fleet in any relevant timescale. They had only a handful of aircraft capable of striking Japan from Russian bases. A Russian invasion of Japan was not a realistic threat.
I suppose the United States could have loaded Russian soldiers onto American landing craft in a gesture of solidarity, but the limiting factor there is the American landing craft, and there's no shortage of pissed-off Americans, Brits, Aussies, Chinese, Dutch East Indiamen, etc, to fill them.
It was the perception of the elite in Japan that mattered. Russia opening up a second front against Japan was big, and it coincided with their German and Italian allies losing their wars. This wasn't a near-term strategic moment where we can measure Russian naval capacity in the Pacific as particularly relevant.
It was a turn in the war overall and Russia making moves to attack Japan made it simply impossible for the Japanese to ever win and become the expanding empire in the mainland they wished to become.
It is also quite true the Russians won the European theatre of the war with their soldiers mattering more than or just as much as US efforts. So even the event of the lost allies and increased pressure by Russia on the Japanese due to them winning the European war was still Russia creating new pressures on Japan.
The nukes both did and didn't matter and Japan was always going to lose even if the Russians didn't join and the US had to take 6 more months to devastate their country. But the combination of Russia and the US being able to more singularly focus on Japan when the European war was won probably contributed more to Japanese leadership's thinking than the nukes alone did.
We also don't need to guess anymore or theorise like was done for decades. The narrative that the nukes were not that important came out of Japan when documents from the time were released in recent decades long after the war was over. So the cottage industry and useful US propaganda messages and history book conclusions which developed and became entrenched over decades are just intellectual inertia in the west.
Japan's own records show it was a mixed bag and they were indeed afraid of a new Russian front which dominated their conversations at the time they surrendered. The Japanese also didn't really understand the nukes that well at the time, we can be myopic and thinking they knew about radioactivity right away or that it really was a single bomb or not, intelligence was mixed on that front when they decided to surrender.
I share your doubt that the Soviet declaration of war was as important as later analysts have suggested -- as I said elsewhere, I suspect this is in part an effort to avoid saying anything positive about the nuclear bombings.
But I also agree the Japanese were quite unhappy about it. I would guess the principal worry was not so much that the Soviets would speed their defeat as that, if the Soviets were part of the victorious coalition, they would be in a position to demand concessions from a defeated Japan after the war, territorial and otherwise.
The Japanese probably figured (correctly) that the Americans wouldn't be interested in any permanent annexation of Japanese territory, nor in as much reform of the aristocratic Japanese society as communists might be, but neither of these things would be true of the USSR. It was definitely the lesser evil to have to surrender to the United States alone.
Why did Japan care?
Japan and Russia had fought a war recently (within the memory of those in power in 1945), Russia had been humiliated then and wanted revenge, Japan knew it would lose this time, and the reputation of Russian soldiers and military command for brutality has long been unmatched. It was fear for the continued existence of Japan as populated islands.
The Japanese had a really bad week in August 1945: first Hiroshima, then the Soviet attack, then Nagasaki. Then, before the surrender could be announced, there was a coup attempt that tried to get its hands on the record of the Emperor's surrender speech before it could be broadcast.
"The US knew they only had a few years before the Soviets caught up. In part, because there was a ton of espionage to steal nuclear secrets."
True, but even in the absence of espionage, the simple fact of demonstrating to the world that nuclear bombs work, and that the resources of one nation are sufficient to build them, told other powers a lot. _Many_ attempted technological advances fail, turning out to be rat holes that soak up resources and ultimately yield nothing. Once the USA ruled that possibility out, it gave all other nuclear weapons programs a major boost, even if they had not received a single bit of other information.
I absolutely agree with this. The theory of the uncontrolled nuclear chain reaction was untested at the time, and it could easily have been a scientific boondoggle. Knowing that it was possible allowed many nations to start up their own nuclear programs that sought to build the bomb from first principles, not primarily as a reverse engineering project. To the extent the espionage was used, it seems it was more to help accelerate those programs.
That said, I think it's possible to prove too much with that example. When not at war, plenty of governments build megaprojects with uncertain outcomes. The US and Russian space programs were both large, speculative government projects. Others included the LHC, ITER, the ISS, and (partially) the Human Genome Project. Just because AGI is speculative doesn't mean people will give up before they reach it. Since it keeps bearing financial fruit with more R&D money pumped into it, people will be incentivized to keep at it.
Many Thanks!
"The US and Russian space programs were both large, speculative government projects."
Agreed, and I agree with your examples. I expect that "copycat" programs probably face a somewhat lower bar to getting funded, since someone else has already provided the proof-of-concept demo.
I also agree that people are indeed incentivized to keep at AGI development, since, as you said, it keeps bearing financial fruit - which is why it is being driven by the private sector. (Though I wonder if there is a DARPA version training gpt-4-secret with all the classified stuff the government has added to the training set...)
Sort of. The main reason Germany didn't pursue an atomic bomb was that the only known fissile material at the time was U-235, and the industrial capacity necessary to make enough weapons-grade U-235 was enormous. It's not even clear the United States would've been able to build a substantial nuclear arsenal in the late 40s if it had all required U-235. So as far as anyone knew from the open literature in 1939, building an atomic bomb required a fantastic investment in isotope separation technology. It would be a superweapon -- but super duper expensive. Had it all stayed that way, it's entirely possible the nuclear bomb would've remained a weird white elephant, like the Me-262, something amazingly advanced, but just too expensive to be practical in that timeframe.
What changed it all was of course the discovery of plutonium, made formally in late 1940 by Glenn Seaborg at Berkeley[1], and the fact that Pu-239 is fissile. Plutonium is bred from U-238 in a reactor and can be isolated chemically from the irradiated fuel, which means you can acquire it much faster and much more cheaply than U-235. That's why nearly all nuclear weapons since 1945 have had fissile cores of plutonium, not U-235. It's the only material you can get cheaply enough to make nuclear weapons economical, even given their great destructive power[2].
This is also why the discovery of plutonium was kept secret and Seaborg's paper was only published after the war. It's not that unlikely the Germans could have worked this out in the early 40s, since they were very up to date in 1939, but they had no fission reactor and no big cyclotron to make transuranics in quantity and test their properties, and nobody was going to give Heisenberg a shit-ton of Reichsmarks to build one. In this area there's a certain amount of historical luck, what with Lawrence being obsessed with cyclotrons before the war and building up a very capable nuclear physics and chemistry group at Berkeley. That existing capital resource was critical to the discovery and exploitation of plutonium.
I'm not sure how the Soviets figured it all out, but Kurchatov was a smart guy, he had plenty of espionage results, and of course the Soviets built fission reactors and studied them.
My point being, knowing that U-235 was fissionable, and even that the critical mass was measured in kilograms rather than tonnes, would not by itself have led other nations to practical nuclear weapons. Plutonium turned out to be the key, and that was *not* widely known in 1939; indeed the Allies tried to keep it a secret as long as they could. Of course, anyone who set up and studied a fission reactor would figure it out relatively soon.
---------------------
[1] Although, looking back, Fermi produced it in the late 30s with his neutron experiments in Italy. He just didn't realize it at the time, a very rare miss for Fermi, because he wasn't a chemist.
[2] And even then it took some very clever work with explosives to make Pu-239 work, because the small admixture of Pu-240 predisposes it to predetonation, which is why the gun design doesn't work for plutonium.
Fascinating! Love these details. It's often the inconspicuous details that turn history from a collection of weird/unexplained decisions into an entirely human story.
Yes, the twists and turns are deeply interesting.
All good points!
I think that there was an additional miss in Germany where they (wrongly) thought that graphite couldn't be purified enough (removing neutron-absorbing boron) to serve as a moderator in a reactor, and pursued a CANDU-like alternative, hence https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage
Could be! The history here is fascinating. A real trove are the "Farm Hall transcripts," in which the conversations of German atomic physicists, interned in England just after the end of the European war, were secretly recorded. Here's one of the key transcripts, which recorded their reaction to a BBC broadcast announcing the Hiroshima bombing:
https://ghdi.ghi-dc.org/sub_document.cfm?document_id=2320
Notice they start off just flat out not believing it. Heisenberg begins by saying it would've required an absurd amount of isotope separation. Hahn (the guy who did the experiments that proved fission was occurring) was apparently very distressed that a bomb had been built at all. It's clear throughout the transcript that the idea of there being a *chemically distinct* fissionable material (plutonium) did not occur to them at all. They also observe that they were handicapped by the absence in wartime Germany of "mass spectrographs," which was apparently what they called Lawrence's cyclotrons (the principles are very similar).
>the only reason the US doesn't self-flagellate about those war crimes is because of how shocking the nuclear bombing was.
Partially that, but mostly because we won.
The privilege of being on the winning side is that you don't need to agonize over such trivialities as slaughtering innocent civilians en masse.
We still self-flagellate about that, which is what I was talking about, but you're right that it's mostly to scare everyone else out of starting their own collection.
Maybe it's unnecessary, but I'll add that WW2 was full of war crimes sanctioned by the top leadership of pretty much all the major players. Everyone talks about the Holocaust, and sometimes we bring in the nuclear bombs. The rest sometimes feels glossed over.
Those nuclear bombing runs weren't different in any moral way from what had been happening in both theatres of the war, and from both sides. There was this hypothesis (that predated WW2) that you could bomb your enemy into submission. Maybe that's correct, but not at a moral price anyone would have agreed to before the war began. They just kept escalating, thinking THIS much horror would be enough to force the issue.
Maybe the best you can say for the US is that the public was so angry after the sucker punch at Pearl Harbor that they did it all out of blind rage. Meanwhile, the British bombing of cities was initiated mostly to stay relevant in the war. How is that for a justification of war crimes? It provoked Hitler into retaliating in a blind rage, and his bombing of London arguably distracted the Luftwaffe from the more effective use it had been put to before. So Churchill's gamble, betting on war crimes as a strategy, paid off by also exposing his own population to the same. Whose hands does that blood end up on? I'm not sure there's a law of conservation of blame - both leaders can be equally at fault.
The Japanese marched through China in a pre-war campaign of terror that was worse than what the Germans did at the outset of WW1. By that point, there was no excuse for pretending you could intimidate your enemy into submission with Assyrian-style brutality. It didn't stop the Japanese from raping, pillaging, beheading, etc. They earned the enduring animosity of the Chinese in a way the rest of the world has forgotten. But those atrocities continued as the slaughter continued, even as the rest of the world shifted its focus onto tiny islands thousands of kilometers away.
Something about war brings war crimes with it, but those aren't always military strategies directed from above. Not like they were in WW2. In that war, the gloves truly came off. Nearly everyone, in nearly every theater, was willing to do what was necessary to win, whether through unrestricted submarine warfare, tricking/forcing untrained soldiers into becoming Kamikaze pilots, torture of POWs, etc. Collective memory has chosen one or two war crimes from that war to focus on, perhaps because the scale of brutality is too much to deal with. Easier to contemplate the simple horrors of the atomic bomb.
The simple answer is that First Mover Advantage is real, and the institutions that the first movers choose to implement often become the de facto standards and prove highly durable. It's much better to have First Mover Advantage for AI occur in a liberal democracy.
This is a good, short way to put most of what I'm trying to say.
Perhaps an even stronger example to add to Erusian's list is communications. It's well known (among the few people who follow such things...) that the Anglo world very rapidly established dominance in telegraphy, culminating in cables around the world, and that dominance had an endless stream of knock-on effects, many of them of military significance, and continuing as one domino after another throughout the 20th C. (Telegraphy spawns radio and phones with their own value, but also radar as a side effect. These all spawn transistors which give us computers and mobile.)
There are books, for example, about Germany's endless frustration beginning around 1900 with the fact that telegraphy was so tightly sewn up by Anglos.
Here's an example of this sort of thing: http://blogs.mhs.ox.ac.uk/innovatingincombat/british-cable-telegraphy-world-war-one-red-line-secure-communications/
And at every stage it's basically the same set of players doing the most important work and reaping the benefits. Sure, competent other nations can extract some value once the field is commoditized, and there's a panic every two decades or so that "they" are taking over. But it's always the same pattern: the Anglos are doing the stuff that actually matters, and what "they" are doing is refinements that are no longer interesting to the leading-edge Anglos.
All of which gets us, of course, to where we are today with AI and (of course!) mostly the same usual Anglo suspects as the names that matter.
In a "race", I'd expect "winning" to give some major reward like an ongoing advantage, and I don't think that's the case for these examples.
If anyone really won the automobile race, it was the Germans, or more specifically Karl Benz. But as you point out, that advantage didn't last long once the Americans (or more specifically Henry Ford) figured out how to make automobiles cheaply. Then eventually the Americans got surpassed by the Japanese who were nowhere to be seen in the initial automobile race but played a good game of catch-up decades later; more recently South Korea has caught up too and China is getting there rapidly. At every point in history, the leading automobile manufacturers largely turned out to be the leading industrial powers of the day; it would be shocking if the USA weren't the leading car manufacturer in 1950 even if Henry Ford had never existed.
The reward isn't indefinite to be sure. Eventually other people can surpass you. But if you're afraid of AI creating the singularity then "eventually someone else can surpass you" is irrelevant. AI isn't going to allow for a century of competition and learning from American engineering until you can eventually outcompete it. The strong AI exists and that's that.
Right. From the "AI Singularity" perspective all the other arguments don't make sense. But a lot of stuff doesn't make sense from the AI Singularity perspective.
This depends on how nuanced your idea of "technological singularity" is. I truly expect AI to be a technological singularity. And that we are already in the "ramp up" phase. But what this means is not that it's the end, but merely that it's a WTF moment (we haven't gotten there yet) beyond which reasonable predictions are impossible. The range of possibilities includes not only "humans are destructive pests" and "humans are to be worshiped and obeyed", but also "I'm going to pack up and leave", "humans are decent pets", and lots of things I've never thought of. And I can't assign any probabilities to which outcome will happen, in part because it depends on choices that are/will be being made.
I don't think that alignment is a good goal to work for. Not for an AI that's complete enough to have a sense of self. Friendly is a much better goal.
I think the issue is that the value of winning is indeed pretty small in the grand scheme, and that's okay.
If the cost of winning the race is potential extinction, then the prize for winning is deeply inadequate. This seems to be Scott's claim, and I agree.
If the cost of winning the race is mainly having to put up with doomers yelling at you, then the potential gains far exceed the price.
The "race against China" people are implicitly saying "I think the chance of AI catastrophe is so low that I'd risk it for just a chance at one decade of 10% higher GDP." That's obviously not convincing if you're a doomer, but communication between doomers and skeptics is by all indications basically impossible anyways.
Well, Scott thinks that Xi isn't a sadist, but I'd guess that many people aren't so sure. Given an in practice all-powerful genie to implement your wishes, you can have whatever worldwide totalitarian dystopia you've always dreamed about, and I don't understand the certainty that he doesn't actually want one.
Most people don't want dystopia, so without evidence I don't see a reason to think that Xi wants one. His behavior is well-explained by the perspective of a leader doing what he thinks is best for the nation overall. I'm not saying he's great, but I don't think he does cruel things for fun either.
I suppose that the word "sadist" isn't really the best fit here. Elsewhere in the thread people suggest Lewis's righteous busybodies:
“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”
I think there are many instances where you can look at Putin's actions and reasonably surmise that he either is actually sadistic, or at least will use sadistic methods in order to convince others he is. Some of his actions towards journalists & dissidents go beyond just jailing them, into acts that wouldn't be out of place amidst those of Mexican cartels or ISIS. He's at least someone without a strong preference for a non-sadistic method over a sadistic one.
As for Xi, it may seem to him a reasonable action to sterilize or forcibly impregnate Uyghur men & women respectively, but if that's the coldness of logic we can expect, wouldn't it also be reasonable to destroy your rival population if there's essentially zero cost to doing so with an AGI's help? There's always some non-zero infinitesimal chance they may prove a threat later.
This is essentially what I think. Scott's perspective is very W.E.I.R.D-centric from where I'm sitting. Cruelty for cruelty's sake, raping & pillaging, & salting the fields behind you have been the norm for 98% of human history. Most non-abrahamic, & even many abrahamic (like Russia), cultures do not have the slave-morality trait of believing there's intrinsic worth to your enemy's life or existence.
Xi & Putin are from such cultures; the latter has probably killed people with his own hands, and has at least definitely ordered the assassination of innocent journalists & dissidents. The former has citizens from a restive outlying region in camps, getting sterilized if they aren't getting non-consensually impregnated by Han citizens instead. I don't understand the assumption that we get to survive in a world where they reach the finish line first just because there are enough resources.
I don't think there's friendly handshakes at the finish line if the CCP or Kremlin get aligned (with them) AGI. I think there's a holocaust that wouldn't be out of place in a Cixin Liu novel waiting for the citizens of liberal societies.
The reward doesn't have to be indefinite to be significant. The allied victory in WWII, coupled with the US's intact industrial base, coupled with the US being the only nuclear power in the immediate aftermath of the war, allowed the US to build the foundations of a Western, and later nearly global, world order that is still mostly with us today. Short-term technological advantages can give you a window of opportunity for creating favorable, longer-lasting institutional advantages, and that can matter a lot for the people who happen to be living while those institutions are influential.
Also, in computers the rest of the world hasn't caught up yet. Yes, we in the rest of the world do have computers and a related industry now, and programming them is indeed my job, but what we mean by "the computer industry" is still basically a California thing, because that is where it started about 65 years ago.
You simply cannot say "aligned" in the abstract without specifying aligned with what, least of all in the context of arms races. There is not a common denominator of moral thought underlying superficial differences between say Russia and the USA, or North Korea and the EU, or Iran and Australia. The average Russian thinks that invading Ukraine was OK and that nuclear first strikes are fine if they entail the greater glory of the motherland. A Russia-aligned Russian AI would presumably be more dangerous to everyone else than a non-aligned one.
The Byzantine invention of and monopoly on Greek fire seems to be about the clearest example of winning an arms race leading to winning wars. I don't know enough to assert this for certain but Wikipedia seems to agree fwiw.
> There is not a common denominator of moral thought underlying superficial differences between say Russia and the USA, or North Korea and the EU, or Iran and Australia.
Is there not? That's a *very* strong statement.
Is it? Look at the Wannsee Conference. I don't personally feel that I'm in a "same principles, different application of them" sort of space with those guys, or with the Christians who a few centuries back were cool with burning people to death for being a slightly different sort of Christian, or with the Russians who are cheerleading rape and torture in Ukraine. There is no shared moral substrate.
"aligned", in context, means "aligned with (a) human". A Putin-AI would certainly not be great, but if I sing the praises of Putin maybe it'll let me live a relatively normal life. An unaligned paperclip-maximiser AGI will kill everyone.
Putin only lets you live a relatively normal life because you are not in Ukraine. It looks a lot as if Ukraine's attraction to him is accessibility. Putin AI with a much wider reach because much cleverer, is a threat to everyone.
No, probably the unaligned AI wins because offense usually beats defense at the superintelligent level and everyone dies, unless the aligned AI stops the unaligned AI from coming into existence.
"... the race was won by Britain and you saw several people racing to catch up like Austria"
This is a contradiction. If the race is won you cannot "race to catch up". People racing to catch up means that the "race" is ongoing.
The whole point is that, unlike in races, the real world doesn't stop at some arbitrary point on the road and crown a winner. People can be more advanced in a technology for some time, and that gives them relative advantages, but there is very, very rarely a "win" - a specific break point that ensures they are dominant in that technology for an excessively long time (especially in a zero-sum sense of preventing others from using the tech), or that modifies the world such that they are suddenly advantaged permanently despite perhaps losing their edge in the technology later on.
IMO the main point (not even made by SA) is that tech isn't zero-sum. Generally, developing tech doesn't mean you stop everyone else from doing it, and in fact even if you do get them to shut down their efforts (like the Soviet computing example), they will get ahold of the tech by purchase or other means. You have to use the tech to do something zero-sum, like kill the other side, in order for it to be a "race". Otherwise it's more like a joint expedition.
I think you're overemphasizing one specific definition of "race", but an "arms race" is usually more like what's being described here, where people keep spending resources to be slightly ahead on some progression.
An arms race is a "race" because the assumption is that the arms will be used to kill the other side, such that the outcome will be concretely resolved in the near future (winner determined) and be zero-sum (such that what matters is *relative* armaments, not total armaments, and a small advantage reaps disproportionately large rewards).
These characteristics don't fit other tech. There is no concrete resolution that is akin to a war, where we lay it all out on the table and the winner wins. Someone having a slightly better mousetrap in Germany doesn't mean your mousetrap is useless, and total investment in mousetrap tech helps everyone.
I think that's certainly a presumption people have, but people also compete to develop better military technology in cases where that isn't plausibly true (e.g. neither China nor the US can actually conquer the other) and I think most people would still consider that an "arms race".
Great post.
Is there any chance you are thinking of doing a FAQ on AI risk and AI alignment, similar to the FAQ you did on Prediction Markets? Given your understanding of the complex jargon and the various alignment problems, and your clarity of writing, I feel you might be the best person to produce the 'go-to' explainer for AI noobs - the kind of resource I could point a friend to who hasn't even used GPT, let alone knows any of the alignment debates.
Or, if there is already such a resource (not Less Wrong; I feel that's not clear enough for a newbie), can anyone recommend it?
he has, tho it is 7 years old now
https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq
Thanks! I'll check it out
Double thanks. I'm way too ignorant of all this. While I'm a longtime reader of Scott's, I used to skip over the AI posts.
(Today, in the 4 seconds I spent on Google trying to find out what a "training run" is (the thing the recent open letter wants to postpone), the first answers I found had to do with athletes using AI to train for races. I didn't scroll down to see the other results.)
Just one point that really needs to be considered further by this community: China is highly constrained in their development of language models due to the need to be okay with CCP censors. I claim that China will be vastly better at alignment than the West, because at every stage of AI development, language models that do not fully align with CCP goals, language and values will be weeded out. The only caveat being that the goals of the CCP are probably not particularly aligned with positive outcomes for the majority of people (even in China)
I'd be interested in seeing a smart, careful analysis of this. It seemed very easy for OpenAI to create models that only violate American taboos very rarely and after repeated provocation; I don't see why this should be harder for the Chinese. In fact, it should be easier, because Chinese-language training data probably already tends towards things the regime approves of.
I second the call for deeper analysis.
One thing I've read (don't quote me on this) is that many Chinese language models are trained on data that isn't limited to Chinese sources. To maximise the corpus of training data, they use a similar variety of sources to the Western models, which means there is potential for government criticism.
It's also possible that Chinese government workers or even businesses could be given access to chatbots, and only the public will be blocked from accessing them (which seems to be the case right now)
Interesting point, I suppose a lot would depend on whether Xi is willing to subsidise these tech companies (that he seems otherwise hostile to) to pay for chatbots that won't realistically be making much money
ChatGPT is already great at avoiding topics sensitive to Americans or giving out boring, politically correct answers. Why would it be any more difficult to give boring, politically correct answers for the Chinese version of what's "politically correct"?
In China the bar is so much higher, and the margin of error is 0%. It's plausible to me that a chatbot that could pass CCP censors would be next to useless.