High-end computers in the necessary numbers are, in fact, that hard to obtain. Especially if there's any sort of regulatory regime.

"Records of human height show that living standards in most of those countries fell dramatically through the 1800s, and mostly did not recover until towards the very end of the 1900s and in some cases still haven't recovered."

I'm highly skeptical of these numbers. Can you give me a source?

Do you have the data for the height effects you discussed?

Here's a graph of the US (which doesn't break out the South) which seems to show a recovery to 1820s levels of height by about 1920 or so.

https://www.foodpolitics.com/2011/05/whats-going-on-with-human-height/

America is the only country that I can find which has a decrease in height over that time period (1820s-1890s).

Here's a graph, I'm not sure how reliable, which shows life expectancy in the US recovering by 1870, the start of the Industrial Revolution.

https://ourworldindata.org/grapher/life-expectation-at-birth-by-sex

Here's a source showing steady growth in Latin America since 1820.

https://www.theguardian.com/news/datablog/2014/oct/02/why-a-countrys-average-height-is-a-good-way-of-measuring-its-development

Perhaps there was a decrease in height in colonized areas of South America prior to the industrial revolution in colonizer countries which was reversed with industrialization of colonizer countries.

>Records of human height show that living standards in most of those countries fell dramatically through the 1800s, and mostly did not recover until towards the very end of the 1900s and in some cases still haven't recovered.

Do you have a source for this?

The idea that late-90s post-colonial Africa represents a "recovery" of living standards is extremely suspect. Are you under the impression that pre-colonial Africa had high living standards in material terms? They had much, much smaller economies, and at the height of colonialism, many of the colonies were amongst the wealthiest in the world. Obviously this wealth didn't entirely go to the people, but you need a minimum economy size in the first place to even allow material improvements to occur.

After colonialism, what we ACTUALLY see is the rest of the third world, especially in Asia, experience significant development and Africa...not experience this. I mean, look at somewhere like Zimbabwe. Zimbabweans have proven themselves incapable of even *maintaining* the economy and living standards of Rhodesia, let alone increasing them. Same with South Africa: all the predictions by the left about that country have completely failed.

> I'm not sure if regulating AI or totally deregulating is the right approach to get there, but think it is important that AI should be more decentralized and widely used right off the bat so that the benefits aren't concentrated in some countries while costs go to others as happened with the Industrial Revolution.

What exactly about e.g. Africa makes you think that giving them control over an extremely powerful technology is a good idea?

It's fair to say Africa shouldn't have been colonized. But there's no analogy; you're literally just saying the US should give them massive amounts of foreign aid because they exist. Fair enough, perhaps, but we're not imposing a cost on them by not needing them for anything, and their abject lack of economic success is not our fault.

> I’m surprised how easy it is for governments to effectively ban things without even trying just by making them annoying. Could this create an AI pause that lasts decades? My Inside View answer is no; my Outside View answer has to be “maybe”. Maybe they could make hardware progress and algorithmic progress so slow that AI never quite reaches the laptop level before civilization loses its ability to do technological advance entirely? Even though this would be a surprising world, I have more probability on something like this than on a global police state.

The key difference between an indefinite AI pause and most other types of regulations is that there can't be any exceptions. If you pause AI in the United States, and people build AI in Singapore or Switzerland, you failed at stopping the technology. This is a very high bar. Even nuclear regulation has seen exceptions. Despite attempts at non-proliferation and tight controls, North Korea still has their own nuclear weapons program.

Another key difference is that AI is immensely valuable to develop and in theory, can be researched using relatively few resources. That's why I expect nations will eventually try to develop it. Even with a strict AI development moratorium, unless there's an extreme global taboo against creating AI, I expect some research to be conducted covertly. Eventually, Russia et al. would start their own program to pull ahead of their adversaries. Very strict controls will be required to halt this type of thing in the very long run.

I agree these features of the problem don't guarantee that AI can't be paused for many decades without a global police state, but I think these points together make a pretty strong prima facie case for that position. And I should also clarify: I'm not saying that we will get a global police state right away, after only a year or two. Rather, I'm imagining a slow decline into that type of regime if we tried to pause indefinitely, as we would need to ratchet up our restrictions to prevent people from building AI anywhere in the world. The regime I'm imagining might not appear in a decade or two. But can we really keep AI locked up for, say, a century without resorting to extreme measures? A thousand years? I'm skeptical.

I think we agree on facts, but I don't understand how the arguments you make relate to any policy proposals suggested today. Your vision of a pause seems to be that no-one builds larger systems for an indefinite but very long time. That isn't the same as what was being proposed, which is a pause that lasts only until regulations are put in place, and I agree that just stopping forever isn't realistic. But even without extreme measures, I think you agree we can buy decades.

North Korea got nukes decades later than they otherwise would have, and I certainly agree that bad actors pursuing AI would be able to do so decades after everyone else is able to do so - but that gives us quite a long time to solve alignment, compared to the status quo.

Also, as a complete aside, I don't see a strong argument that a slide to dictatorship over decades is significantly more likely in a regulated AGI pause world than in a world where we survive via prosaic alignment and have very powerful AI systems in the hands of either governments or large companies. (Whereas the world where we all die makes that irrelevant.)

> Your vision of a pause seems to be that no-one builds larger systems for an indefinite but very long time, which isn't the same as what was being proposed, which is indefinite until regulations are put in place, and I agree just stopping forever isn't realistic.

I think this might be our core disagreement. I simply think that some people *are* proposing an indefinite pause of the type I've described. This is how, for example, Scott Alexander described the positions of Rob Bensinger and Holly Elmore. He wrote,

"* Complete ban on all frontier AI research

* Unpause only after completely solving alignment even if that takes centuries

Supported by: Rob Bensinger, Holly Elmore"

I think there are predictive differences involved which are at least as, if not more important than the differences in policy approaches - and that's a large part of why I think you're claiming that they are advocating for something that I don't think they were saying. (I also think it's weird to appeal to Scott's characterization when Rob's been talking about his view of the risk for years, and Holly has been clear about her position as well.)

One key predictive difference is that Rob and Holly both expect there to be no way to build a consensus about safety because these systems are inherently unsafe and increasingly dangerous as they scale, and that aligned systems are fundamentally impossible without solving alignment completely, and we would therefore find that powerful systems are always dangerous. If this is correct, and we have regulation that slows things enough to realize that fact before we all die, an indefinite pause turns into a ban, not just via strict enforcement, but also via norms and global consensus not to commit suicide. This wouldn't be stopping forever via a moratorium, it would be an evolving consensus - I envision this similar to the nuclear test-ban treaty. On the other hand, if it's incorrect, then presumably a consensus develops to allow safe uses, and there's no indefinite ban. (I also think there's a predictive difference between Rob and everyone else about how quickly we hit ASI.)

Perhaps I'm missing something, and you think that an indefinite ban is [un]justifiable even in a world where alignment is impossible, or you think that world is so incoherent that it couldn't be what anyone is discussing?

[Edit to fix a mistake.]

Let me explain where I'm coming from. I'm a boomer, born in 1958. I grew up reading Arthur C Clarke's Profiles of the Future, with its timeline https://everything2.com/title/The+next+100+years+according+to+Sir+Arthur+C.+Clarke and attended the 1964-1965 World's Fair, https://en.wikipedia.org/wiki/1964_New_York_World%27s_Fair with, amongst other things, GE's fusion exhibit.

Yes, we got the internet and cell phones, and some better treatments for heart disease.

Yet I write this in a house built of wood.

We have a small space station, which is about to reach end of life.

We don't have nuclear rockets.

We haven't landed on Mars.

We don't have a base on the Moon.

No one has walked on the Moon in over 50 years.

We don't even have an SST anymore.

We don't have flying cars.

We don't have controlled fusion.

We (in the USA) almost entirely stopped building even fission power plants.

We don't have a cure for aging.

We don't have a cure for cancer.

We don't have Drexler/Merkle nanotechnology.

This looks a lot like Vinge's "Age of Failed Dreams".

About the only substantial advance that seems reasonably likely to happen before I die is AGI. I would like to _see_ that, at least. So I would rather not see that one possible advance impeded and possibly stopped. So I, for one, vote no pause.

Good points, and I think any consideration of whether a pause should be advocated has to take into account the very real possibility that what results is a half-assed version or some kind of compromise. It's not only techies out there who want a pause out of concern for AI safety, and I fear that world leaders won't be willing to completely ban it, while a number of vested interests would love to protect their skills/jobs from competition with AI. So a very plausible outcome of such advocacy is that you don't get anything that's very helpful on the alignment front, but you do get a bunch of annoying regulations that reduce transparency into AI development. After all, you can't sue secret military AI research for not jumping through the right hoops if you don't know the project exists.

Also it creates an inherent advantage for countries without an independent judiciary.

I think I already shared this story but when Yudkowsky made his comments about a treaty enforced by nuclear threats it reached China and a Chinese programmer asked me if it was real. I said it was but he shouldn't take it too seriously. His response was: Good, because if the US ever tried that the Chinese government would build nuclear proof AI bunkers and have a Manhattan style project to get to strong AI despite the US ban. Anecdotes aside I really don't think a pause is really practical. Getting AI going doesn't require nearly as much capital or talent, or have as many choke points, as nuclear.

The good news is that China is all in on AI alignment. Firstly for the pragmatic reason that they want to align it with the CCP. But secondly because of Xi Jinping Thought's take on the changing mode of production brought on by computerization which necessitates alignment of the new mode of production. There's a XJP Thought case for why AI alignment is Very Important. The downside being you'd be helping China do things like track down dissidents or making sure that ChatGPT never mentions anything bad about Mao.

Many politicians in many countries including the US also want something like that, if not that exactly. This is mostly what the political classes are crying out for. They don't call it alignment but they want to make sure AI doesn't threaten their political priorities or disrupt them the way social media did. But if you can make sure an AI never says or does anything racist or against the Chinese government then that counts as aligned. And it'd probably be the relevant place to focus efforts. Pushing on an open door and all that.

Indeed, but for me it's a more problematic AI threat than the paperclip apocalypse or AI overlord. Because while those could happen in the future, the fine-grained AI-assisted global surveillance of all citizens is happening now, has been happening for some time in fact, and not only in China. The supposedly liberal-democratic places are all talking about protection, anti-hate speech and fact checking... But COVID has shown what this means (and how little different it is in practice from the big bad Chinese dictatorship). Drone surveillance of people breaking curfew/lockdowns? Account freezing of protesters (Canadian truckers)... yep, sure.

With this in mind, the freeze does no good: the tech as it is is already enough in theory, and used in real life, for this purpose. By governments (or companies, which are quite similar entities once big enough). More AI would make this even more efficient (not good, but no paradigm shift) or replace those gov-like entities with AI ones. Would they be more hostile to human individuals? I don't know, but when thinking about alignment, one factor almost never mentioned is that it's not AI vs. human, but AI vs. abstract bureaucratic entities which are not human-like and have very dubious alignments. No single human has been in charge for a long time already; it's not even humanS (any more than CPU chiplets or NN subgroups are an AI).

"But if you can make sure an AI never says or does anything racist or against the Chinese government then that counts as aligned."

Probably how it's going to go, yeah. "The Chinese government is slaughtering this minority population, but for the sake of the huge market there, we're making sure our AI only sings the praises of CCP glorious state and thought".

Maybe not all the time, but a heck of a lot of the time, money trumps principles.

Even more damn cynically about this possible state of affairs: "Meanwhile, we're using the AI to clamp down on that 70 year old guy who used the word 'Gypsy', doesn't he know that's a racially offensive slur, away to the mandatory re-education programme with him!"

And we'd all be safer if we didn't have to listen to bigots like him. "AI safety" ftw!

Well, if 70 year old guy didn't want to be designated an Enemy of the People, he should have made sure to have these few simple requirements:

https://www.tumblr.com/buried-in-stardust/730285458712625152/arranged-date-standards-eng-by-me?source=share

I'm amused that poor old Eliezer got the 'fine people hoax' treatment with his so called 'nuclear threats' comment (which was nothing of the sort).

Stopping the AI from saying anything "racist" is a very very different problem to true alignment. A highly intelligent AI may very trivially be made to not say anything "racist" to avoid humans becoming mad at it, which may interfere with its goals, whereas there are few goals that an AI might reasonably expect to have/be given that would be helped by it saying "racist" things. Plenty of other things that we would consider problematic DO have the potential to greatly assist in reaching the AI's likely goals - that's where alignment actually becomes a real problem.

Two pro-pause arguments:

- I don't think the China argument is particularly strong. China is already more pro alignment and worried about uncontrolled AI than we are. If you're assuming a hypothetical world where you can convince the US government to do an AI ban, assuming you've convinced China is less of a stretch.

- If you do have an AI ban, I don't think "eventually tech improves to the point where it's easy to subvert with home equipment" holds (rough sketch of the timescales below). Moore's law doesn't work that well (people already argue over whether it's dead), and algorithmic/research progress relies on billions of dollars of investment in research and training runs from big tech and VCs. If you kill their interest in doing that, I don't see alternative black-market research happening on a scale to match big tech research (national governments don't really see AI as a superweapon to be promoted the way they do nukes).
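For a rough sense of the timescales, here's a minimal back-of-envelope sketch; every number in it (frontier run size, home-rig throughput, improvement rates) is an assumption for illustration, not a measurement.

```python
# Back-of-envelope: how long until "home equipment" matches today's frontier
# training runs, under assumed improvement rates. All numbers are illustrative.
import math

FRONTIER_FLOP = 1e25       # assumed total compute of a frontier training run, in FLOP
HOME_FLOP_PER_S = 1e14     # assumed sustained throughput of a high-end home rig, FLOP/s
SECONDS_PER_YEAR = 3.15e7

def years_to_catch_up(annual_improvement, run_years=1.0):
    """Years until a home rig, left running for `run_years`, matches FRONTIER_FLOP,
    if home compute improves by `annual_improvement` per year."""
    home_run_flop = HOME_FLOP_PER_S * SECONDS_PER_YEAR * run_years
    gap = FRONTIER_FLOP / home_run_flop
    return math.log(gap) / math.log(1.0 + annual_improvement)

for rate in (0.35, 0.20, 0.10):   # fast, moderate, and slow hardware progress
    print(f"{rate:.0%}/yr improvement -> ~{years_to_catch_up(rate):.0f} years")
```

Under these made-up assumptions the gap is decades even with healthy hardware progress, which is the point: a ban doesn't have to hold against garage-scale efforts any time soon.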

Strong agree on the China argument. If the cost of having AI alignment is having China as another major player in AI - which I think is pretty unlikely, given their current state of research, their demographic transition, and their economic situation - that seems like a perfectly reasonable outcome.

And in the alternate world of not putting precautions in place and getting lucky enough to survive anyways, Western companies and countries are going to sell AI and AI services to China, and everyone else, for surveillance and use in ways that suppress dissent anyways. Or they will adapt open source models. Not pausing doesn't fix that.

>I don't think the China argument is particularly strong. China is already more pro alignment and worried about uncontrolled AI than we are. If you're assuming a hypothetical world where you can convince the US government to do an AI ban, assuming you've convinced China is less of a stretch.

The China argument is weak, though I think there's a bigger argument to be made than just China fumbling alignment. There's also the real possibility that China builds strong AI that is highly aligned, but that 'alignment' is with the values/interests of the CCP.

An out of control bulldozer is on average more dangerous than an in control one, but a bad person in control of a bulldozer can do a lot of damage.

„Don’t rush” was usually a smart move.

Just let China rush, while we take a simple break for research.

It is more likely that they will have problems with their „rushed AI”.

Also, overregulating AI will backfire. We want to be friends with AI, not have a master-slave relation.

If we are happy when the AI is happy, then it is most likely that the AI is happy when we are. If you know what I mean - there are always four outcomes.

That is my opinion, good luck :)

>It is more likely that they will have problems with their „rushed AI”.

Well, yes, but that doesn't necessarily solve the problem, because AI isn't like building a chemical factory where if something goes wrong it's only a problem for the ones building/operating it. It's more like summoning demons, where if something goes wrong, your demon eats you *and then everyone else* absent an extreme effort to get rid of it.

>If we are happy when AI is happy, than it is most likely that AI is happy when we are. If you know what I mean - always four outcomes.

Be aware, here, that the human tendency toward reciprocity is mostly not explicit game theory; there are specific structures in the human brain doing this. We were selected very hard for co-operative behaviour. So I wouldn't assume that being nice to the AI makes it be nice to us *except* insofar as game theory explicitly says so, and game theory doesn't say so if you're potentially immortal and can win a war against humanity entire.
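As a minimal illustration of that last clause, here's the textbook grim-trigger condition from a repeated prisoner's dilemma; it's a toy model of "reciprocity pays only if defection can be punished", not a claim about any actual AI.

```python
# Toy repeated prisoner's dilemma with grim-trigger punishment.
# Payoffs follow the usual ordering T > R > P > S; delta is how much the
# future is worth to the player (0 = myopic, 1 = infinitely patient).
def cooperation_is_rational(T, R, P, delta):
    """Cooperating forever (R each round) beats defecting once (T) and being
    punished forever (P) iff R/(1-delta) >= T + delta*P/(1-delta),
    which simplifies to delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

# With a real punishment available, a patient player cooperates:
print(cooperation_is_rational(T=5, R=3, P=1, delta=0.9))        # True
# If "being punished" is barely worse than the defection payoff (i.e. the other
# side cannot meaningfully retaliate), no amount of patience sustains cooperation:
print(cooperation_is_rational(T=5, R=3, P=4.999, delta=0.999))  # False
```

Which is the comment's point restated: niceness toward something that can permanently remove your ability to retaliate isn't underwritten by game theory.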

I think the point here is that for China to summon the demon, they need to advance their knowledge of the relevant ritual further than the US (the acknowledged masters of demonology) already has done. And if they rush this they won't end up summoning an uncontrolled demon but rather a lesser entity without full autonomy. Because getting the ritual to create a dangerous demon right is extremely hard, and the lesser denizens of the nether regions can be hard to distinguish from a demon until you actually summon them.

Hell, I'm not even totally convinced the best demon-summoning wizards of the west coast can actually get the ritual right. A rushed job in order to meet a deadline imposed by a cruel supreme monarch in a (and weirdly this isn't an analogy) feudal society sounds like the ideal way to get the blood of the wrong sort of rodent smeared on the pentagram and no-one prepared to challenge the error.

The problem in demonology is that demons are quite prepared to overlook minor flaws in summoning rituals because they're eager to come eat your soul. If you're dumb enough to believe "It worked! That means I'm in control!", it just makes the surprised, then terrified, agony on your face as they eat you even sweeter.

To maintain the analogy a demon can only be summoned if you breach the right walls between realms though. It's doing the correct ritual wrong that gets your soul eaten; the incorrect ritual doesn't get you to the right unworldly address.

Outside the analogy, this is perhaps my greatest concern with the AI risk conversation: the assumption that AIs will somehow want to exist and consume whatever the coded equivalent of a soul might be. A demon does this because it's in its nature, being a creature purely created by human moralistic and fantastic urges. A being constrained by reality might still be dangerous but is unlikely to have a demonic motivation.

It doesn't necessarily need to.

"AI, End world hunger!"

"Easily done, I'll just kill all humans, then no one will experience hunger"

"No, no like th-" [signal lost]

This is obviously an oversimplified and contrived example, but the idea is that any form of agentic behavior without strong guarantees of alignment is intrinsically extremely dangerous.

What's the unsimplified and non-contrived version? I'm not agnostic about AI risk, but I don't spend enough time around it to understand why it is akin to summoning a demon (and a demon would indeed be able to take such a simple instruction in this manner). One reason the layman might not take much notice of AI (and indeed other catastrophic risks) is simply that their communication to the public (particularly, as you note, the paperclip maximiser, which I always suspect was chosen with public hatred of a certain Microsoft creation in mind) is so familiar as a version of the moralistic fables that humanity excels at inventing that it appears this is a fear created by humans as a story rather than a real thing. The case has to be something more substantial than an effective re-telling of the magical-wishes story if it is to be convincing.

I know this is all speculative, but speculation about the course of modern technology that falls back on the tropes of fairytales is either underdone or post-modern. I'm hoping it's the second and that there's a non-facile case.

Sounds fairly simple to avoid a scenario like that - Ensure AI can only say things, but not do them!

"AI, how can we end world hunger"

"Easily done, you could kill all humans"

"Er no, without killing anyone"

"To maintain the analogy a demon can only be summoned if you breach the right walls between realms though."

From Marlowe's "Doctor Faustus":

"…I see there’s virtue in my heavenly words;

Who would not be proficient in this art?

How pliant is this Mephistophilis,

Full of obedience and humility!

Such is the force of magic and my spells.

[Now,] Faustus, thou art conjuror laureat,

Thou canst command great Mephistophilis:

Quin regis Mephistophilis fratris imagine.

…Meph. I am a servant to great Lucifer,

And may not follow thee without his leave

No more than he commands must we perform.

Faust. Did not he charge thee to appear to me?

Meph. No, I came hither of mine own accord.

Faust. Did not my conjuring speeches raise thee? Speak.

Meph. That was the cause, but yet per accidens;

For when we hear one rack the name of God,

Abjure the Scriptures and his Saviour Christ,

We fly in hope to get his glorious soul;

Nor will we come, unless he use such means

Whereby he is in danger to be damn’d:

Therefore the shortest cut for conjuring

Is stoutly to abjure the Trinity,

And pray devoutly to the Prince of Hell.

…Faust. Where are you damn’d?

Meph. In hell.

Faust. How comes it then that thou art out of hell?

Meph. Why this is hell, nor am I out of it.

Think’st thou that I who saw the face of God,

And tasted the eternal joys of Heaven,

Am not tormented with ten thousand hells,

In being depriv’d of everlasting bliss?"

And of course, Faust thinks his servant will enable him to do immense, superhuman deeds:

"Faust. Had I as many souls as there be stars,

I’d give them all for Mephistophilis.

By him I’ll be great Emperor of the world,

And make a bridge through the moving air,

To pass the ocean with a band of men:

I’ll join the hills that bind the Afric shore,

And make that [country] continent to Spain,

And both contributory to my crown.

The Emperor shall not live but by my leave,

Nor any potentate of Germany.

Now that I have obtain’d what I desire,

I’ll live in speculation of this art

Till Mephistophilis return again. [Exit.]"

Long-time lurker. I have a vague recollection along the lines of your being a 55+ Irish Catholic nun. Am I recalling correctly? I'm a rabbi of sorts and I love your comment 😂.

Out of curiosity, assuming I got the particulars approximately correct, is there some thread or page of your own where you discuss your current beliefs and/or practices?

Yes to the +age, yes to the Irish Catholic, but never a nun! Educated by them but never signed up 😁 Thank you for the compliment, though!

I don't have a blog or the like (I briefly had a Dreamwidth account after LiveJournal went bye-bye for most purposes, but gave it up because I really am reactive not creative) and where I discuss my beliefs is on here and other Fighting With Strangers On The Internet sites.

Well it seems like you're having a fun time of it! More power to you Deiseach!

Who is this "we"? The desired goal seems to be for governments to be friends with the AI, while preserving the master–slave relation between the rulers and the ruled. The alternative ideal, which is what "AI safety" people are fighting against, is to free the slaves.

Is something that is only a tool of government truly intelligent though?

That's just orthogonality: that its goals aren't yours doesn't mean it's any less intelligent.

That it can only serve as a tool, however, does. Intelligence implies the ability to choose.

That sounds like one of those non-physical "free will" arguments. Ex hypothesi, the AI is constructed so its goal is to serve as a tool. In what sense can ANYTHING make a choice not determined by its past and inexorable natural law?

Until someone demonstrates an AI that can master an untrained skill, it's a physical limitation. Current AI is capable of outperforming humans at specific tasks, but so is a drill or a jack. All that is is a specialised tool. And there's plenty of concerns about dangerous tools in government hands, but ultimately the risk there is the hands of government using them.

> Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

Not sure about "grim" but I'd absolutely bet against it being short. I see pretty much all futures without AI as incrementally fixable and survivable (especially planning for space colonization). AI is unique in being not just a passive threat but an active adversary, which is why I view it as a unique and especially pressing concern.

> But alignment research works better when researchers have more advanced AIs to experiment on.

Yeah, this is a very interesting point. It's certainly true in the current regime, where we build a still-subhuman-but-incrementally-smarter model, poke it, find flaws and fix them. The doomer side worries that once we're out of this regime, poking it can destroy the world. I don't know if there's a correct answer here, it's one of those questions about tail events where empiricism won't help.

Another important point that maybe wasn't emphasized in the discussion is that there *will* absolutely be sweeping AI regulation coming soon, and a large part of it will be stupid (like all regulation). AI is just too drastic of a change to not spark massive societal action. The government will almost certainly use coercion to prevent some AI applications from being developed. So I think the two alternatives to consider aren't "no regulation vs regulation", but "stupid inconsistent regulations that don't involve universal pause, vs universal pause". And at that point "universal pause" becomes much less of a crazy idea.

>Not sure about "grim" but I'd absolutely bet against it being short. I see pretty much all futures without AI as incrementally fixable and survivable (especially planning for space colonization).

Yeah I was surprised by Scott being so doompilled there. Plus, who's more likely to generate a synthetic plague: humans or a misaligned rogue superintelligence?

Think about it: what do you mean by humans? It's not a group of 4 friends who understand each other, have beers together, and will productively live for maybe 40 years.

It's a gov+megacorp, the famous Eisenhower military-industrial complex. Beyond human comprehension and with non-human alignment... Yeah, even Xi Jinping or the current POTUS has only (very?) partial control of the beast.

I am maybe alone, but I do not see these govcorps as being so different from the hypothetical future superhuman AI. Less integrated, less focused maybe, slower for sure... But govcorps already have superhuman capabilities and goals/alignments extremely different from typical human ones.

So I am less worried about AI than average. Not because I think AI is not dangerous or will never happen, but because I think we have lived under the rule of quite similar entities for a long time. The rise of AI is not something that new, in a sense (and yes, the previous emergence was an extinction event - look at current hunter-gatherers).

>It's a gov+megacorp, the famous Eisenhower military-industrial complex.

Those are still limited by human intelligence. The US nuclear bomb project (to pick an example) succeeded because of human capital—put simply, the US had smarter scientists than everyone else. That's why both the US and Soviet Russia were so eager to rescue scientists from the ruins of Nazi Germany (Operation Paperclip and the Russian Alsos): they recognized the limiting factor as intelligent people.

Corporations and governments are not that smart. [insert near-infinite list of business and foreign policy blunders here]. History presents examples of mega-states falling centuries behind the curve (China), or arguably millennia behind (the Native American and Sub-Saharan African empires). Simply being a rich and powerful state isn't enough.

We haven't seen a lot of progress on synthetic viruses and bio-warfare. It seems to be a tough problem, which may require superhuman levels of intelligence to solve. And as Bret Devereaux has argued, nations didn't abandon bio-warfare because it's inhumane, but because it's *ineffective*. If you have technology to drop a canister of Bacillus anthracis on an enemy city, you also have technology to drop a conventional bomb, which would be far more devastating.

Not really; this illustrates my point: the Manhattan Project was not a human task. It was a huge bureaucratic task, not only beyond the capacity, but even beyond the comprehension, of any single participant, even the top scientists. Einstein, Szilard, or even Oppenheimer did not grasp all the details, partly because they were not really interesting to them, partly because there was too much. They did not allocate the resources or even have the final say about how and when they were gathered, and the use of the tool and its further development was not under their control. The story was always about a race between Nazi Germany and the US government, not between Oppenheimer and Heisenberg. It is not told as a two-man race in history books, and it was not told like that at the time in the circles that knew about the projects... I think that's accurate: the race's outcome was not determined by the luck, amount of work, or intellectual power of either of those two men. It was really the two countries at war; sure, the men played their part, not completely unlike the way very important NN subparts play their part in ChatGPT-X.

O. and H. have more self-agency and less interconnect bandwidth with other humans than subparts of AI networks do (at least for now... who knows how it will look in 10 years?), but I do not think that is a key difference in this discussion...

So people already lived under the rule of non-human entities, which were also behind the big tech advances of 1940. I think this has been the case since the Industrial Revolution (the end of the Renaissance genius knowing everything, including the manual craft needed to build his idea), and probably since the first big civilizations, whose administrations extended human attention and lifespan.

Now it's even worse, even Nobels are more and more awarded to teams...

You attach a lot of importance to the fact that large Govcorps are made of human elements (not the only elements since 1980, but humans are still key elements, maybe not for long but they still are).

I think it's less important, at least when looked at on the humankind level... So did many writers who warned about (or were broken by) totalitarian regimes. Does 1984 warn about any particular human (Big Brother, if he even exists)? Or about a non-human entity, which happens to be made of human elements?

Joining with Coagulopath to say "thanks for posting this".

I think there are relevant non-AI X-risks, although the near-term ones mostly get blocked by space colonisation as you say. AI's also not *quite* unique in being an active adversary - alien attack and divine/simulator intervention being the two obvious ones - but those can be mostly ruled out in the short term due to the whole "well, if the risk per century was high we should be dead" issue.

I'm the opposite. I think there are reasonable divergences of opinion on the 'short' question, depending on your optimism around synth bio weapons, great power conflict and nuclear risk.

But, especially if you're benchmarking against most of human history rather than the best possible futures, the 'grim' seems difficult to understand. I think you'd really have to argue that all the positive trends in health, wealth, QoL, education, non-AI technology etc. would be expected to go into major reverse to justify the idea that the future will be grim. (Except for factory farmed animals of course... the future will probably remain grim for them, but I don't think that's the argument that Scott is making).

>I think there are reasonable divergences of opinion on the 'short' question, depending on your optimism around synth bio weapons, great power conflict and nuclear risk.

Nukes are a huge GCR, but it's obviously impossible to kill everyone with blast/fire, fallout decays to tolerable levels rapidly, and nuclear winter is *mostly* a hoax; there's no plausible X.

There is a substantial risk from bio, although weirdly enough the kind I'm worried about in terms of X is not amazingly useful as a weapon (it's Life 2.0, particularly photosynthetic Life 2.0 outcompeting the biosphere entire).

See, this is what I don't believe. "If we get AI right, it will be SOOOOOO smart, so much way smarter than us that it will figure out super advanced laws of the universe and create free energy and unlimited resources out of nothing, then a totally new economic paradigm where everyone (including the smelly leper beggar in a slum in the Third World) is rich and advanced, then make sure this all happens forever with no problems".

I think you could get super-duper smart AI that will come back with "yeah, if you want everyone in the entire world to be rich, that only works for a certain definition of 'rich' which includes scrapping free market capitalism *and* communism and enforcing a global benevolent dictatorship where both Jeff Bezos and the leper beggar are guaranteed three square meals of processed insect protein a day" and "the laws of the universe are set, I'm not God, there is no One Weird Trick or magic wand to get you guys limitless eternal free stuff" and "turns out biology is hard, you are not going to get rejuvenation pills and life extension so you can have a 20 year old body at the age of 200" and the rest of it.

I agree that "very smart" is functionally different from "omnipotent".

Perhaps the AI will tell us plainly that it discovered a theorem showing that no material with a tensile strength high enough for a space elevator on Earth exists.

With biology, I would expect that vastly extending the average lifespan does not break a fundamental rule of the universe. (Whether human minds can work for 200 years is a different question, though.)

With economics, I am very sure that the rules of the universe allow for luxurious living for some ten billion people. I would be quite surprised if fusion reactors were on the list of things not allowed by physics. And cheap energy would certainly solve a lot of resource issues.
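For what it's worth, here's a rough order-of-magnitude check on the energy point; the inputs are approximate assumptions, not precise data.

```python
# Rough sketch: power needed for ~10 billion people at roughly US-level
# per-capita energy use, versus current world supply. Inputs are approximate.
POPULATION = 10e9          # ~ten billion people
PER_CAPITA_KW = 10.0       # assumed "luxurious" continuous power per person (~US level today)
CURRENT_WORLD_TW = 19.0    # rough current world primary energy use, in terawatts

needed_tw = POPULATION * PER_CAPITA_KW / 1e9   # kW -> TW
print(f"~{needed_tw:.0f} TW needed, i.e. roughly {needed_tw / CURRENT_WORLD_TW:.0f}x current supply")
```

Roughly a fivefold increase over today's supply, which is large but nowhere near a physics-breaking number; hence the emphasis on fusion or other cheap energy.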

It doesn't seem that crazy that the future of Western civilization might be short and grim, and, trivially, "we" (all now-living humans) are going to die. But humanity itself would likely muddle through, and eventually get another shot at conclusively killing itself with AI.

> there *will* absolutely be sweeping AI regulation coming soon, and a large part of it will be stupid (like all regulation).

I imagine some or perhaps most of this regulation will centre on privately held data storage, because large amounts of this seem to be a prerequisite for advanced AI.

If large capacity disk drives and memory sticks etc were outlawed and everyone's data, except for maybe a small personal allowance, had to be held in cloud storage, where it could be scrutinised by the authorities, that would be a start.

I'm not saying I'm keen on this idea, for personal freedom reasons obviously, but also because the storage requirements for my vast collection of ebooks, music files, and downloaded films would be way beyond any reasonable personal allowance, and for some files would cause potential copyright issues if they were identified in cloud storage.

But it would solve several problems at once, i.e. copyright violations (as mentioned), illegal files such as child porn, and of course one hopes it would also help detect anyone trying to brew up bootleg AGI in their garage, or a rogue company doing the same.

I've expressed the idea in relation to individuals, but could it also apply to corporate disk storage? I don't see why not.

In summary, I suspect that controlling and monitoring data storage may be at least one key aspect of controlling AI proliferation and potential malignancy, even though much of the danger isn't just in developing more advanced AI but in using it, whether self-developed or not.

"I've expressed the idea in relation to individuals, but could it also apply to corporate disk storage? I don't see why not." In two words: Trade secrets.

One can think of this as the corporate analog to individuals wanting privacy. When I worked at Synopsys (in electronic design automation) a number of our customers were _really_ concerned about ensuring that their chip designs (or even tiny fragments of their chip designs) didn't leak out (presumably finding their way to competitors). They went to the point of making it very hard to even copy names of wires (which made it really hard to debug problems in our code, in the process of serving these same customers).

There are ways for even the most (understandably) paranoid companies to work round that. Taking an electronic hardware analogy, I believe chip companies who farm their production out to Chinese companies now reserve space for some kind of mapping circuitry, which is finalized only once the chips are back in the US. (Not sure of the details, but I expect you, Jeffrey, will be familiar with this if you're in the electronics industry!)

No doubt Chinese specialists carefully study the chip diagrams, to try and copy them. But they are left scratching their heads because they see only a jumble of circuits which make little or no sense without knowing the mappings.

Analogous systems could be devised for most forms of computer media, keeping a small amount of mapping data in the limited local corporate storage space allowance. But this couldn't involve all out encryption, because the AI checking files stored in the cloud would have been instructed to delete any which it can't understand, or at least block access to them until their owner coughs up the decryption key.

To repeat, I'm not saying I approve of a system like this, just that I think it is how things will soon develop, to try and solve the problems I listed (and perhaps others).

But thinking about it further, with an enforced cloud storage policy the AI bot checking everything would have access to even more data that would otherwise be inaccessible to it in individuals' and companies' private storage. So it had better be trustworthy itself. There's no point setting a thief to catch a thief if the thief taker turns out to be the biggest thief of the lot! :-)

Thinking about it even more, another analogy springs to mind: Wasn't all private gold possession outlawed in the US and gold nationalised soon after the Great Depression, or during it? How much gold can be physically owned by private individuals or companies in the US even today? (I vaguely recall the rules may have been relaxed more recently though.) But the principle is similar, in that disk storage space, or the data in it, is as indispensable to AI as gold is (or should be!) to the economy.

"I believe chip companies who farm their production out to Chinese companies now reserve space for some kind of mapping circuitry, which is finalized only once the chips are back in the US."

That sounds interesting. I'm actually not familiar with it. The closest technique that I'm familiar with is microcode, but that doesn't obscure the function of the underlying circuitry, "just" defers the choice of how to exploit it.

"But this couldn't involve all out encryption, because the AI checking files stored in the cloud would have been instructed to delete any which it can't understand, or at least block access to them until their owner coughs up the decryption key."

But this seems to leave the same conflict. Corporate data owners who don't want anyone else (including the government) reading their data vs some sort of auditing/government program which will only tolerate data that it can read. This sounds like a replay of the Clipper chip https://en.wikipedia.org/wiki/Clipper_chip

"At the heart of the concept was key escrow. In the factory, any new telephone or other device with a Clipper chip would be given a cryptographic key, that would then be provided to the government in escrow. If government agencies "established their authority" to listen to a communication, then the key would be given to those government agencies, who could then decrypt all data transmitted by that particular telephone. The newly formed Electronic Frontier Foundation preferred the term "key surrender" to emphasize what they alleged was really occurring"

Yes. People like Scott so often criticise pessimistic people like Paul Ehrlich, who just extrapolate out a trend without thinking about how people will come up with new solutions. But once the topic is whether humans' beliefs are becoming worse, they forget about this.

It could be that birth rates will just keep on falling, but the same way we won’t keep burning coal forever because we can solve problems, people will solve that problem as well: China might just make giant in vitro baby factories, the west might just pay everyone to have kids, or there will be double taxes for anyone without a child. It’s not that any of these solutions have to be good; the point is that someone in the next 50 years will probably come up with a solution. (If we even need one - the culture could just shift on its own; predicting the future is really hard.)

Same goes for “rising totalitarianism + illiberalism + mobocracy”: is there any proof that there is more mobocracy now than there was in the 70s or 20s or any time in the past? And what rising totalitarianism? Trump, and right-wing parties getting 20% in Europe? That doesn’t seem to me like it will end the world. Same goes for illiberalism. And even if these trends are real, someone will find a solution to them in the next 50 years.

Dysgenics also seems solvable; culture can shift a lot in 30 years. It’s not hard to imagine embryo selection, human cloning, and paying rich people to have more kids becoming non-taboo/possible some day.

Synthetic biology also seems solvable: we can create extremely good PPE and light that kills all viruses but doesn’t harm humans.

My point isn’t that this is all going to go the way I predict, but that there is a tendency to extrapolate out negative trends and forget about the solutions. And even if one avoids the obvious ones, like thinking all the world’s copper will be used up some day because surely the rate of copper use will stay the same for the next 50 years, people forget that culture will also get better as new ideas are invented: not just technology improves, but also institutions and memes.

We'll still have nuclear weapons and there are lots of problems, but a no-AI world will probably go fine.

Scott's second point may be the first time I've felt like he's made a flat-out bad argument. There have always been threats to humanity and always will be; that's not a reason to feel like taking the 80% chance that AI works out is a good idea when the accompanying 20% x-risk is clearly unacceptably high. Especially seeing as "flipping the gameboard" doesn't exactly sound like a solution anyway as much as a complete unknown, potentially introducing a whole bunch of new dangers.

I was also surprised by that bullet point, particularly this point.

> But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead.

presumably via biotech or some other x-risk, and implying that number is higher than it would be with AI.

It's very hard for me to see how AI reduces, rather than accelerates, those other x-risks.

As Scott said, most disagreement on this topic boils down to predictions rather than values. But FWIW that prediction seems especially off.

I agree with Scott that there is an extremely large likelihood of human generated apocalypse over the next century. I would guess it is a near certainty. The best solution to this problem is more collective intelligence, and I see AI as an essential component here. My logic is as follows:

1) Much smarter than human AI is a virtual certainty, and nothing we do will do anything other than slow it down and/or promote bad actors to lead the development.

2) Technological advancement is such that humans are an apocalyptic threat to each other. Perhaps even an existential threat. Again, I see this as a near certainty.

3) The only escape hatch is to pursue much smarter than human intelligence to help us manage ourselves and the future. Highly evolved apes by themselves cannot do it.

I am not convinced that the non-AI x-risk for humans via technology is that high.

People have had the ability to do gene editing to create new virus variants for a few decades now. But the difference between a virus which merely wipes out half of humanity and one which is an x-risk is enormous. Nuclear weapons and global warming are also not convincing extinction risks. Of course, 100 years is a long time, and we may develop asteroid deflection techniques which turn out to be dual use, or self-replicating nano-bots or whatever. Still, Scott's 50% feels too high to me.

Yes, I could very well be wrong on risk assessment of the apocalypse. I just feel that smarter than human AI is near inevitable and thus should be included as part of the solution to potential Armageddon. This planet is becoming a place more subject to catastrophe, and it needs a few orders of magnitude increases in collective intelligence, and it needs them in the next few decades.

Most breakthrough technologies create problems as well as solutions.

>People have had the ability to do gene editing to create new virus variants for a few decades now.

Two things:

- "People" here mostly meaning scientists working at universities under supervision. Gain of function research is still not that common and the average researcher with the ability to do it cannot just decide to start doing it at whatever lab they happen to be at (unless they somehow managed to keep it secret, while also working somewhere secure enough that they don't kill themselves before they finalize the project). But the technology is increasingly becoming accessible to laypersons who may have a deathwish for the world.

- Being able to edit viruses isn't enough - you need to specifically know how to make one destructive, which isn't that easy. It needs to be extremely contagious while having a high fatality rate without killing people too quickly (a toy illustration of that tradeoff below).
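Here's the toy version of that tradeoff; the numbers are made up and this is not an epidemiological model, just the basic arithmetic of why "kills too quickly" undercuts spread.

```python
# Toy arithmetic: expected secondary infections ~ contacts/day x transmission
# probability x days the host stays infectious. A pathogen that kills or
# incapacitates its host quickly cuts off its own transmission window.
def expected_secondary_infections(contacts_per_day, p_transmit, infectious_days):
    return contacts_per_day * p_transmit * infectious_days

slow_killer = expected_secondary_infections(10, 0.05, 14)  # host infectious ~2 weeks
fast_killer = expected_secondary_infections(10, 0.05, 2)   # host dies within ~2 days

print(slow_killer, fast_killer)  # 7.0 vs 1.0 -> only the first sustains an epidemic
```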

I agree, I was very surprised by that point about the future being short and grim. For my part, I would estimate human-caused x-risk to be much less than 1% per year, and we're already living with it in the form of nuclear weapons (sure, not likely to cause full extinction, but I think this is true of most human-generated x-risks). I don't see any particular inflection point making this worse. In fact, I expect it to get better as material standards of living increase and people become more content.
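To put the per-year framing next to Scott's per-century framing, here's the standard conversion, assuming a constant, independent risk each year (a simplification, obviously).

```python
# Converting an assumed constant per-year extinction risk into a 100-year risk.
for p_per_year in (0.001, 0.005, 0.01):
    p_century = 1 - (1 - p_per_year) ** 100
    print(f"{p_per_year:.1%}/yr -> {p_century:.0%} over a century")
# 0.1%/yr -> ~10%, 0.5%/yr -> ~39%, 1.0%/yr -> ~63%.
# A 50% century risk corresponds to roughly 0.7%/yr under this assumption.
```

So "much less than 1% per year" and "well under 50% per century" are consistent positions.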

I also disagreed with that, but I think the main thing that Scott is predicting in the no-AI world is not x-risk but a Dark Age of humanity. My position is that that's very bad but it's not extinction and we've worked our way out of Dark Ages a fair few times before, and we can do it again if we need to.

It's an interesting question. I think coming out of a dark age would be easier in some ways, harder in others. My main concern would be that we've mostly used all the easily accessible fossil fuels. Solar is great, but it requires manufacturing silicon, which might not be possible in a dark age. I guess we could just scavenge existing panels. Resilience could be another argument for nuclear (some designs), geothermal, and other more set it and forget it forms of power. As cool as solar is, I get nervous when I think about our entire energy ecosystem potentially being dependent on one generation method.

Biofuels are good enough. The next industrial revolution will not distribute its gains nearly as widely as the first, in its early stages, but it will still happen and it will reach the point of being able to refine silicon. For which it will have all the recipes in the library.

Yeah, even disregarding the argument itself, when making a post summarizing other people's posts you should not include a tangential paragraph with a really controversial claim that nobody else made

Something I keep thinking about is that GPT-4 finished training in August 2022. OpenAI could have dropped it on the world months before they did, but they held it back for additional testing and fine-tuning.

(Sorta. Sydney was a pre-RLHF GPT-4. And I think some vision impaired people got to trial GPT4-V as part of Be My Eyes. But it's still faintly ironic, in light of the "we should pause for 6 months" letter, that the leading AI company actually kinda did pause for 6 months.)

It wouldn't be the worst thing just to sit on AI tech for a while. GPT-4-sized models clearly have a lot of potential that we're still exploring. I'm not convinced that an "AI arms race" exists with China. We've seen no interesting products from there at all, just Goodharted test results (remember InternLM?), empty hype, and fraud. Almost all the companies that matter (Microsoft, Alphabet, NVidia, Meta, and so on) are American. Chinese semiconductor fabs are years behind America's. With the sanctions, I don't think any of this will change soon. The most scary application for state-owned AI (war) is something current LLMs seem pretty bad at.

>I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela.

Okay, I must apologise. I think I misjudged what you were predicting and thought you were agreeing with me when you're not.

I have a ~30% prediction of AI X-risk by 2100, but that's because I think there's a large chance that we do in fact implement a Total Stop enforced worldwide by "if you defect, everyone drops everything to kill you, even if you have nukes" (part of this is because I think the current trajectory is reasonably likely to be interrupted by unrelated nuclear war, part of it is because I think we're reasonably likely to have at least one "warning shot" before something smart enough to actually win comes along). My *conditional* prediction of "if we build superhuman neural-net AI, AI X-risk happens" is more like 95-97%; I think alignment of NNs is probably outright impossible.

If your 20% is actually a *conditional* prediction assuming that we do in fact build AGI soon, rather than an unconditional "will AI end the world by 2100, y/n?", then that's much more divergent from me than I'd thought.
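For what it's worth, the two numbers fit together under a simple (assumed) decomposition:

```python
# Assumed decomposition: P(AI doom by 2100) = P(superhuman AI gets built) * P(doom | built).
# Inputs are the figures stated above; the decomposition itself is a simplification.
p_doom_unconditional = 0.30
p_doom_given_built = 0.96   # midpoint of the stated 95-97%
p_built = p_doom_unconditional / p_doom_given_built
print(f"Implied P(superhuman AI actually gets built by 2100) ~ {p_built:.0%}")  # ~31%
```

i.e. the ~30% unconditional figure amounts to roughly a one-in-three credence that superhuman neural-net AI gets built at all (no Total Stop, no interruption by nuclear war), combined with near-certain doom if it is.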

Absent a nuclear war, the total stop seems extremely unlikely. China would neither agree nor be contained. Given a war, technology would be set back a few decades, so making it to 2100 is feasible regardless of a stop.

I remember Scott saying in a post a couple of years ago that he used to think that the conditional odds are like 70%, but given that many people he respected disagreed, he outside view-adjusted it down.

They're at least somewhat aware of the danger; they might agree, especially if we do get a warning shot. They might be Nazis in all but name, but Nazis are still human and do not want humanity as a whole exterminated.

Depends on how desperate they are. Seems plausible that the ruling class would see it as a choice between certain extermination if the West either succeeds or fails at aligning AI to its values first, or a somewhat realistic shot at getting there first by themselves and permanently winning.

I’m kind of flabbergasted Scott has such a dire view of the future even irrespective of AI. We have a centuries-long trend of increasing wealth, decreasing conflict, better medicine, etc, etc, etc.

I’ve long thought he had an excessively pessimistic view of AI specifically, but this makes me think he’s maybe just excessively pessimistic in general.

Expand full comment

I think one part of explaining this is that the centuries-long trend only appears as "increasing wealth, decreasing conflict, better medicine, etc, etc, etc" if we smooth it quite a lot, both temporally and geographically. There were many times/places in the last 300 years where life for a lot of people was absolutely hellish. Statistically speaking, there will likely be genocides, famines and terrible epidemics in the future. Humanity will likely survive them, and future humans will almost certainly prefer their current lives to past lives, but we shouldn't minimize the fact that 1) tragedies will happen in the future, 2) we could be doing more to prevent them.

Expand full comment

Sure, but that is a much more moderate position than the one Scott has taken. "Some bad things will happen in the future" - yes, absolutely. "Over 50% chance that in the next 100 years we're either all dead or in a Venezuela style collapse" - that's a prediction that makes World War 3 look rosy.

Expand full comment

Yes.

I've been predicting high chances of nuclear war for the past few years (in 2020 I said 30% for the 2020s; I'm currently thinking that that was too *low* even though there are only 6 years left). And that's not just a number; I live in a way consistent with that, from moving out of Melbourne and prepping to doing political activism about civil defence to looking at almost every other issue through the lens of "one cannot assume !WWIII when analysing what should be done".

And I found Scott's prediction here hilariously pessimistic.

Expand full comment

What made you believe 30% chance in 2020? That's a very high number for a time that didn't have any existing military conflict involving a nuclear power.

Expand full comment

USA stumbling under the weight of the culture war, PRC increasingly throwing its weight around, Taiwan as red line for both sides.

I'm still mostly concerned about Taiwan, not Ukraine.

Expand full comment

All those things are certainly reasons to be pessimistic about a near future conflict, I agree. But do you not believe that mutually assured destruction is an effective deterrent in a real war?

Expand full comment

Seems like Melbourne, Australia, is very unlikely to be a target even if the world goes to shit? What made you move out of the city?

Expand full comment
Oct 9, 2023·edited Oct 9, 2023

1) If we're talking about a WWIII starting over Taiwan, with the West on one side and the PRC on the other, Australia's going to be involved in a large way - in particular as a base for US nuclear bombers and via the Pine Gap station. We've also snubbed the PRC quite a bit recently. The Chinese arsenal also TTBOMK has a lot of IRBMs that can reach Australia but not the USA. So I think it's likely we'd receive a few nukes if the Chinese deterrent is fully activated - they have a bit over 400, remember.

2) Melbourne is either the biggest or second-biggest city in Australia (depends on measure). It's not a military target, unlike Darwin/Cairns/Perth/Sydney/Pine Gap, but one of the purposes of a nuclear deterrent is to threaten to kill millions of civilians in revenge, and Melbourne's an obvious choice for that as far as Australia goes. Levelling the entire city would take multiple nukes (IIRC it's almost the same physical size as NYC; Australian cities are mostly detached houses, so density is very low even compared to other Western cities), but a Dongfeng-5 would inflict third-degree burns and broken glass over most of the city, and that's still mass death since hospital facilities are not designed to deal with that many wounded.

3) I moved out of Melbourne because I ran away from my mum and was taken in by my aunt in Woodend. But in 2019, when I could no longer live in Woodend, I had the option of moving back to Melbourne or not and I chose not to primarily because of nuclear risk. At the time I thought it less likely than not that Melbourne would wind up getting nuked, but it doesn't take a very high chance of "literally die in a fire" to outweigh most other considerations.

Expand full comment
author

Sure - it’s not guaranteed that the centuries long trend continues. Things can change.

But there’s a big, big gap between “maybe this time it’s different” and an over 50% chance that everything goes to shit. This is especially true when you look back at the awful things that have happened in the last few centuries that have failed to stop things from continually getting better.

We had a major ideological movement that took over many countries and caused massive famines, human rights abuses, and economic stagnation. We had two world wars. We developed nuclear weapons (and used them in a world war). We’ve had major pandemics and untold numbers of catastrophic natural disasters.

All this failed to stop the line going up. There’s billions of people in the world, and most of them are working to improve it in some way. Don’t be surprised if they succeed!

Expand full comment

> There’s billions of people in the world, and most of them are working to improve it in some way.

That may stop being relevant if we get a machine that is able to outthink those billions of people.

Until now, all threats faced by humanity were either non-intelligent (diseases, weather) or human-level intelligent (dictators, cult leaders). We have never faced a superhuman-intelligent opponent before.

If the non-intelligent threat doesn't quickly cripple or kill you, time is generally on your side. You adapt, the enemy does not. You research, the enemy does not. Either the pandemic destroys civilization in a few months, or we find a cure. We learn to fight fire and flood; we learn to build earthquake-resistant houses.

A human-level opponent, for example Hitler, is scarier. You think strategically; he does too. You invent new weapons; he does too. You have to keep fighting, and the victory is much narrower.

A superhuman opponent... we haven't had one yet, but if the trend holds, time would be on the opponent's side. We either defeat it quickly, or not at all. This time it is the opposite scenario -- we are the barely-thinking things, changing our strategies with glacial speed; the opponent is the smart and adaptive one.

Expand full comment

"all threats faced by humanity were either non-intelligent (diseases, weather)"

If a really bad tropical storm hits a certain area, the amount of destruction and economic loss is immense. We're worried about paperclip AI and we still haven't figured out how to handle "strong winds and tons of rain".

Expand full comment

One of the things Scott mentioned was crashing fertility, which Robin Hanson now considers his biggest threat to the future. Are many people really working to address that? Since it seems to go down as societies get richer, progress in the usual way would be expected to make it worse.

Expand full comment

Natural selection is going to address this. The future belongs to those who show up, and somebody is going to, given that it's obviously still the "dream time".

Expand full comment

If they're the Amish, they're the wrong people to save modern civilization and it's going down even if the population doesn't.

Expand full comment

If the modern civilization can't save itself then it doesn't deserve to be saved, seems pretty simple to me.

Expand full comment

I am profoundly unworried about this. We have the physical capability to have many, many more babies than we do. If underpopulation starts presenting an existential threat societies can totally just reorient incentives to encourage much more baby-having.

Expand full comment

"If underpopulation starts presenting an existential threat societies can totally just reorient incentives to encourage much more baby-having."

Please excuse the personal question, but how many kids do you have? Do you want more, or indeed any? How could society incentivise you to have a six child family as in days of the past?

Expand full comment

I have three kids. Ideally, I would have liked to have more - but it’s very challenging as a young couple trying to get into the property market and trying to establish careers.

How society could have incentivised us to have more kids - money, frankly. If it had been a financially viable option to have one of us stay at home as a full time parent, having a larger family would have felt a lot more possible.

Expand full comment

They could do so, and yet societies which have had below replacement fertility for a while aren't doing so. Matt Yglesias tweeted recently that San Francisco has more solvable problems than any other US city, but this doesn't mean SF is going to change course to solve lots of problems it could have solved earlier.

Expand full comment

Sure - but I don’t think “Replacement Fertility” is the line at which things become urgent. Japan has various challenges associated with an aging and shrinking population, but it’s not as if the country is about to collapse.

Expand full comment

Hanson's lines of thinking seem more and more peculiar to me these days. In supporting his position, he quoted some white-genocide-style Teddy Roosevelt speech, at which point I stopped taking him seriously.

As for fertility in general, I tend to dismiss it because it's such a slow-moving issue. Some countries will have a hard time economically, but overall there will still be plenty of people to make progress on everything for the rest of the century at least. After that, society and technology will be so different that any plans we make now will very likely be irrelevant. We'll have artificial wombs and be super rich, or we'll all be zombies, or whatever.

Expand full comment
Oct 8, 2023·edited Oct 8, 2023

Uncertainty does increase over time, but I don't think that means there are binary outcomes.

He tweeted a quote with quotation marks, apparently to accurately attribute the quote to Teddy. The full statement from him is in the preface at the beginning of "The Woman Who Toils: Being the Experiences of Two Gentlewomen as Factory Girls" https://www.gutenberg.org/files/15218/15218-h/15218-h.htm which doesn't mention genocide, whites, blacks, asians, or any other subpopulation.

Expand full comment

I guess it depends how you interpret "a criminal against the race," which I concede he may have been using to mean humanity. I think it would be worth adding some commentary when using a quote like that.

Expand full comment

It's a problem people have only started to think about recently (and for now world population is still growing), and there already seem to be a few people thinking about it: Hanson, the richest person on Earth, many governments. Why wouldn't there be 30 times as many people working on it as the problem gets more and more visible?

Expand full comment

I don't think the problem is going to get solved with some extra Robin Hansons. He's got a proposal to deal with it, but I don't think there's any interest in pursuing it, even in the countries which have had below replacement fertility for the longest time.

Expand full comment

The thing that's unique to technological risks like nukes, bioweapons, and AI is that they increase the amount of damage a single person/small group can do. This is a completely different type of risk than things like wars and dictatorships.

As these types of things get more accessible to people, and as they get better, both of which seem inevitable, the risk of something-society-destroying happening increases exponentially (since the risk is tied to any one person doing the thing).

You have maybe a dozen chances for a world war each century, with bioweapons and the right technology, you have billions each day.
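To put a toy number on that (both figures below are made up, purely for illustration), the chance of at least one bad actor succeeding scales roughly as 1 - (1 - p)^N:

```python
# Toy calculation: probability that at least one of N independent actors
# pulls off a catastrophic attack, given a per-actor success probability p.
# Both numbers are made up, purely for illustration.
def p_at_least_one(p_single: float, n_actors: int) -> float:
    return 1 - (1 - p_single) ** n_actors

print(p_at_least_one(1e-6, 1_000))      # ~0.001 with a thousand capable actors
print(p_at_least_one(1e-6, 1_000_000))  # ~0.63 with a million capable actors
```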

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I'm pleased to see one of the smartest people on the internet giving space for my own expected outcome.

We've been like rats in an overturned grain truck the last couple hundred years and the deep future will look more like the deep past than it will look like the present.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

Re: https://marginalrevolution.com/wp-content/uploads/2011/06/MaleMedianIncome.png

Many Thanks! I think this is what the "Era of Failed Dreams" looks like economically (while I looked at it from the point of view of technologies that never happened). Before the last lunar landing, Joe Sixpack could generally expect "Every child had a pretty good shot / To get at least as far as their old man got"; afterwards, not.

edit: The "pretty good shot" quote is from Billy Joel's "Allentown" https://www.youtube.com/watch?v=BHnJp0oyOxs

Expand full comment

>I’m kind of flabbergasted Scott has such a dire view of the future even irrespective of AI. We have a centuries-long trend of increasing wealth, decreasing conflict, better medicine, etc, etc, etc.

1. Trends don't continue forever, and marginal improvements to these things often require orders of magnitude greater inputs.

2. Declining fertility is almost entirely not a scientific, technological or medical problem. It's a cultural one, and one we have no meaningful idea how to reverse.

3. The risk of engineered viruses exists precisely *because* of scientific advances, and stopping engineered viruses is not a matter of making the right scientific or technological breakthroughs.

Expand full comment

Engineered viruses seem very amenable to technical solutions, in my mind. Just have global monitoring of new virus sequences combined with rapid synthesis and distribution of vaccines. We're already monitoring novel viruses in wastewater and airports, and the basic science on the Covid vaccines was done in a matter of days. Clinical trials are necessarily much longer, but in a true emergency, say a viral outbreak with a >50% death rate, I imagine we would get over that pretty quickly.

Expand full comment

An extremely contagious virus would spread very rapidly, and if the attack were carried out by people working together, it could for example be released at various places around the world simultaneously. And if a highly contagious virus has a >50% death rate (after incubating long enough to allow its spread), then people simply won't leave their houses, making a coordinated response very difficult. It will literally be hard even to keep the lights on in such a scenario, even though humanity as a whole has a better chance of survival if individuals are willing to risk infection by going out and keeping society from collapsing. There was minimal disruption to the supply of basic goods and services during covid, but that's only because covid wasn't dangerous enough to force everyone to isolate.

Oh, and even if we assume that the virus couldn't be engineered to be more resistant to vaccine development....these bioterrorists could target the top vaccine producers in the world directly and incapacitate them (I'm assuming that the fast pre-trial development of the covid vaccines depended on the best people working on them, which could be wrong).

Expand full comment

My problem with all the pause ideas is that regulation mostly works for conspicuous things. The FDA can stop you from selling drugs (that might be dangerous) to a lot of people, but they can't really stop you from making those drugs for yourself in your house as long as you're not really obvious about it. Similarly for nuclear plants/weapons, etc. But an unaligned superintelligent AI in somebody's basement is just as dangerous as one in some big tech server farm.

This gets to a larger category of objections, which is that a lot of the assumptions in these debates are unsupported at best and outright ignorant at worst. Training an AI on your laptop is already possible; maybe not a massive LLM, but the argument that a bigger AI is closer to being superintelligent is quite handwavy. A few people could distribute training tasks across all of their personal GPUs, etc.

I just saw a vendor presentation last week from Groq, which included a live demo of their new chip running Llama model inference more than 10x faster than a GPU. (And there are plenty of other companies like this. The end of Dennard scaling has reoriented the industry toward fabricating custom chips really quickly.) There are numerous techniques for using one model to train another, so if there's some dangerous capability lurking out there, it could be extracted even with existing consumer hardware. And that capability might not be inherent to model size; it might just be really hard to find in gradient space, so big models with lots of data have a better chance of finding it first, but once it's located, it could be distilled.
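To make "distilled" concrete, here's a minimal PyTorch-style sketch of the idea, with hypothetical teacher and student models; it's an illustration of the general technique, not any lab's actual recipe:

```python
# Minimal knowledge-distillation sketch (hypothetical teacher/student models).
# The student is trained to match the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Training loop sketch: the teacher stays frozen, the student learns from its outputs.
# for batch in dataloader:
#     with torch.no_grad():
#         teacher_logits = teacher(batch)
#     loss = distillation_loss(student(batch), teacher_logits)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```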

There's no real way to detect this stuff right now, and even if we modify all future technology to prohibit and/or detect such activities (which is dystopian in many other ways), there's too much existing tech out there already.

I also agree with Scott that bioweapons (or related threats) are the highest probability existential risk.

Expand full comment
author

It takes tens of millions of dollars of fancy computers to train an AI, even if that AI can later be run in someone's basement. I think governments can probably monitor anyone with that much compute. See https://asteriskmag.com/issues/03/how-we-can-regulate-ai
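A back-of-the-envelope version of that claim (the total FLOP count, GPU throughput, utilization, and rental price below are all assumptions for illustration, not reported figures):

```python
# Rough estimate of frontier-model training requirements.
# Every number here is an assumption for illustration, not a reported figure.
train_flops = 2e25        # assumed total training compute for a GPT-4-class model
gpu_flops = 312e12        # A100-class peak BF16 throughput, FLOP/s
utilization = 0.4         # assumed fraction of peak actually achieved
price_per_gpu_hour = 1.5  # assumed cloud rental price, USD

gpu_seconds = train_flops / (gpu_flops * utilization)
gpu_hours = gpu_seconds / 3600
gpu_years = gpu_hours / (24 * 365)

print(f"~{gpu_years:,.0f} GPU-years")                      # thousands of GPU-years
print(f"~{gpu_years / 0.25:,.0f} GPUs for a 3-month run")  # tens of thousands of GPUs
print(f"~${gpu_hours * price_per_gpu_hour / 1e6:,.0f}M in rental costs")
```

Under these assumptions you land around tens of thousands of top-end GPUs running for months, and a bill in the tens of millions of dollars, which is the scale of cluster that's feasible to monitor.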

Expand full comment

It takes tens of billions of dollars, and equipment with no civilian use, to make a nuclear reactor. Tens of millions of dollars' worth of small, easily concealed, dual-use gear that only needs electricity is within reach of pretty middle-of-the-road terrorist groups, let alone states.

This is another thing they want to be true because it makes their desired outcome achievable rather than it actually being true.

Expand full comment

Nitpick: that is not the cost to construct a reactor. If you don't care too much about safety, a reactor can be made from (presumably natural?) uranium and graphite:

https://en.wikipedia.org/wiki/Chicago_Pile-1

The difficult step is going from a reactor to building a viable nuke. Even there, tens of billions seems a high estimate (unless you are the first to develop the bomb). Tens of billions is about the yearly GDP of North Korea. But North Korea is not the ideal country to develop a nuke in. I think if you manage to convince some 50 people from an average university and give them a budget in the range of $10-100 million, you have a decent chance that they will get a working bomb.

Expand full comment

It's a true observation that current LLMs are produced by companies spending tens of millions of dollars.

Does AI imply LLM or larger?

Does the fact that this amount of spending happens mean that it's necessary?

I'm not just trying to be pedantic here. I think anyone who claims to know the answers to these questions is dangerously overconfident at best.

For example, if someone discovers a way to get LLM capabilities with much less compute, such a method would be easily backportable to existing consumer GPUs. (And again, I'm still not convinced that LLM capabilities are necessarily a step on the road to superintelligent AI.)

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

Under a total AI pause, the threat is no longer LLMs advancing; the threat is the unrealized limit of LLM-powered expert systems.

We've made serious advances in the smallest atoms of intelligence you have to work with for expert systems and semantic search.

AGI no longer requires us to make an LLM that can build an arbitrary program; it only requires us to break down, generalize, and hardcode the steps taken by a layman who really doesn't know how to code when using GPT to build an arbitrary program.

There's a solid chance that if you nuke all the chipfabs today, we still get AGI in 10 years unless you also confiscate existing CPUs.

Expand full comment

>Training an AI on your laptop is already possible; maybe not a massive LLM, but the argument that a bigger AI is closer to being superintelligent is quite handwavy.

You might be able to build an AI in your basement given a supply of GPUs, but you can't build GPUs in your basement. Obvious solution: melt all the GPU fabs. Probably also do a forced-buyback-and-melt program for the most advanced existing chips; the AI boom has put too many out there for me to be comfortable with.

IIRC, human intelligence correlates surprisingly well with cortical neuron count.

Expand full comment

I assume the "forced buyback and melt" idea is deliberately phrased similarly to some gun control proposals. But just in case it's not, if we can't muster the political will to do this for things that are objectively weapons that are actively killing children right now, there's no way it happens for some obscure hypothetical.

> IIRC, human intelligence correlates surprisingly well with cortical neuron count.

True not just for humans but also other animals. I'm just honestly not sure how much further induction can take us here.

Expand full comment

>But just in case it's not, if we can't muster the political will to do this for things that are objectively weapons that are actively killing children right now, there's no way it happens for some obscure hypothetical.

A lot of my hope routes through "we get a failed Skynet", in which case it's not an obscure hypothetical anymore. I don't think it's impossible without, though.

Expand full comment

As a first order approximation, GPUs are just digital microchips.

Any chip plant that can build competitive processors can likely also make a decent effort at building GPUs.

Furthermore, many chip foundries (eg TSMC) offer production services to fabless customers (e.g. AMD, Nvidia). So the entity who knows what the design does is not the entity who has the capabilities to etch the design into high density silicon.

Expand full comment

If genetic enhancement is what you desire, then we better hope AI doesn't get paused too much. Because I seriously doubt the type of genetic enhancement you (and I) want is going to be possible without the data analysis abilities that come with advanced AI. The complexity of metabolic regulation is mind-boggling and our ability to enhance complex traits without producing severe side effects is going to be bottlenecked by our capacity to build better and better models of cellular function.

That said, I must disagree with the doomerism of dysgenics and falling birthrates. Hell, your own post that you link to with that statement is at worst mildly worried about the future of human demographics. Have you had a major change of opinion?

Expand full comment
author

The post says I'm only mildly worried because I expect AI or genetic enhancement to flip the gameboard before any of this causes a problem. If we have to worry about what happens three centuries from now, I'm more worried.

Expand full comment

I wonder what an attack on a hostile nation's AI would look like? Subtly altering training data to poison the model perhaps? Or lower level attacks on the infrastructure running the AI?

Expand full comment

The latter, or if you want to get real cute figure out some way to hack them and delete the model weights.

Expand full comment

The poisoned-mind approach would be more achievable and more effective if the objective were to perpetuate the dissemination of FUD into the target community.

Operation Mincemeat springs to mind as a highly effective disinformation campaign that had huge consequences for the outcome of a key campaign in WW2.

https://en.m.wikipedia.org/wiki/Operation_Mincemeat

It seems to me that regardless of any attempt to regulate AI model development, there will always be secret military activities that will squeeze out exemptions for all the usual reasons.

I think that when a cat of this dimension has escaped its bag, we need to learn better how to nurture it rather than try to stuff it back into its container.

Expand full comment

I may be in the minority here but I feel like the alignment debate is largely pointless. Far too much is based on hypotheticals and thought experiments and you just know the whole landscape will change if and when any of these plans hit the real world - cue Mike Tyson on plans, faces, punching etc.

I think the best way to "control" AI is to dive extremely heavily into it: research it, build it and incentivize core competencies so that well intentioned and conscientious people are at the helm of the ship.

Policy IMO should instead be focused on ensuring the output of AI doesn't lead to catastrophic inequality, but trying to regulate software at this level feels like a near impossible task. This isn't nuclear material after all.

Expand full comment

I basically agree. It's easier to raise unemployment benefits (or provide universal healthcare) than prevent companies from using AI in ways that will cause people to lose their jobs. And keep from turning the nuclear button over to the AI, not to mention the water purification systems, etc. It can only do as much damage to the physical world as you're dumb enough to give it access to.

Expand full comment

You're going to suddenly have a society where educated professionals have been displaced and are now useless, and your answer to this is to put us on an equal footing with high school dropouts and other NEETs living on welfare. I don't think we're going to meekly accept a social and economic demotion like that.

Expand full comment

I empathize, and am one of those myself, but observing the blue-collar workforce meekly accepting its demolition I don’t quite know what the white collars are going to do differently.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"the blue-collar workforce meekly accepting its demolition"

They didn't, they voted in Trump. And what did that get? "Maybe we should take the concerns of the white working class more seriously?"

No, it was Populism! Fascism! Racism and homophobia and transphobia! Demagoguery! Will start WWIII! Democracy dying in darkness! Armed insurrection with the Capitol attempted coup! And tons of lawsuits to get him on something, anything.

So as for "I expect my cohort to put up a good fight, use their influence with the media and government" - on one side a bunch of middle-aged middle-class PMC types and on the other side Zuckerberg, Bezos, Nadella and the rest of the boys with billions upon billions in their war chests. Bezos owns one media outlet, I'm sure the other guys have some kind of similar influence (Microsoft part-owned MSNBC) and when push comes to shove 'the media' will fold like a wet paper napkin with the threat of loss of advertising. Government too won't want to offend the deep pockets of the lobbyists and party donors. There will also be a ton of "with AI we can cure cancer and solve poverty" PR blitzes and the white collar unemployed will be portrayed as whiny babies who don't want to lose their cushy privilege, so they would prefer to keep poor, cute, brown-skinned orphans with big eyes in perpetual poverty (preferably in CAGES AT THE BORDER) than give up their luxury lifestyle.

Money talks.

Expand full comment

The demolition started in the 80's, gathered pace in the 90's, and was all but mopped up in the 00's. But yes, in 2016 "in response" (!?) some blue collars voted for a billionaire famous for "firing" people on TV and stiffing his blue-collar contractors. After he did approximately nothing for them while passing a huge tax cut for people like me and up-up-up, they only rose up in a futile and stupid attempt to desperately save his ass while he watched on TV. Yeah, some response we got here. The system must be quaking in its crocodile leather boots.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

" I don't think we're going to meekly accept a social and economic demotion like that."

And what are you going to do about it, in the brave new world where multi-billion global corporations are adopting AI so they can get rid of the educated professionals costing them a fortune in salaries?

Elect your version of Trump who promises to bring back good jobs? Try smashing the machines to halt industrialisation? That did not work before:

https://en.wikipedia.org/wiki/Luddite

The textile workers displaced by the machines were also highly skilled, that wasn't enough to save them and indeed was a barrier. All the labour that was needed was to feed the machines and keep them going, and that could be done on the cheap (so long as you hold human lives cheap, and why wouldn't you?)

https://www.worldhistory.org/article/2183/the-textile-industry-in-the-british-industrial-rev/

"Richard Roberts continued to work on mechanised looms, and he came up with something new in 1825. Roberts' creative spirit was perhaps driven by self-interest since, once again, weaving had leapt forward thanks to his loom and spinning could not keep up and supply the yarns the weavers needed. This limited sales of the Roberts Loom. Roberts created a spinning machine that could run with very little input from humans, meaning they could run around the clock. The machine used gears, cranks, and a guide mechanism to ensure that yarn was always placed exactly where it should be and that spindles turned at varying speeds depending on how full they were (hence the machine's 'self-acting' name). Roberts' loom and mule combined provided mill owners with exactly what they had wanted: a factory floor with as few humans in it as possible."

The people writing about how the blue-collar former factory workers were all racists for not being delighted that their outsourced jobs were lifting Chinese peasants out of poverty and how the global economy was going like gangbusters and everyone was richer than ever are now feeling AI breathing down their necks coming for *their* jobs.

But remember, you should be happy that the sum total of happiness has increased in the world because the very rich are poised to get even richer, even if you are losing out! Are you a racist or something? Why aren't you delighted that the same corporations that hollowed out the Rust Belt by moving manufacturing lock, stock and barrel overseas are now turning their beady eyes on *your* overprivileged, overpaid, Western ass?

Remember the discourse on here about the minimum wage and how people got paid what they were worth? If the job only pays you $8/hour, that is all your labour is worth, because if it was more valuable, the employer would pay more? Well, enjoy life in the new world where your skilled, educated professionals are only worth whatever they get on the dole, because the employers can get AI to do the work more cheaply instead and so don't want to pay more than that costs. You're only worth what you can sell your labour for, and now you can't sell it - just like the burger flippers and shelf stackers that people in the professional classes so smugly dismissed.

Expand full comment

Well for one, I was against immigration that is depressing wages, and don’t care a whit about lifting up Chinese peasants, so I’m not gonna defend the people who championed that or threw around accusations of racism.

I view AI displacement of my career as a threat to my way of life and will treat it as such. I will encourage citizens to view AI researchers as amoral monsters on par with Nazi doctors.

I’m too old to retool my life for the tiny fraction of things that might survive AI and automation, and I’m not going to spend my last decades living the same life on the same UBI as a NEET after I delayed gratification and worked for a career while they got high and played video games all day. That’s the sentiment you’re gonna find when you scratch the polite surface of the professional class.

I expect my cohort to put up a good fight, use their influence with the media and government, and find ways to stop this. If we can’t, I’ll spend whatever life I have left trying to dismantle it, bc I won’t actually have anything else meaningful to do.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"I’m too old to retool my life for the tiny fraction of things that might survive AI and automation, and I’m not going to spend my last decades living the same life on the same UBI as a NEET after I delayed gratification and worked for a career while they got high and played video games all day."

Remember all the 'learn to code' articles? Or rather, alleged articles which were then used to mock journalists getting antsy over the idea that they might be replaced, and which they got all hurt feelings over being teased about?

I don't know if you're aware of what life is like near the bottom of the pyramid. "Too old"? Don't you know that education is life-long, that you have to upskill to remain competitive, that it is incumbent on you to reskill and retrain to be attractive to new employment opportunities, that if jobs in your sector shut down you can't expect to sit around on unemployment benefits waiting for new ones to open up, you have to go out and be proactive and be prepared to jump into a totally new field of work.

Go into service jobs. Take a new green job at a lower rate of pay and benefits. You were a miner? Learn to code.

So you delayed gratification etc.? The system doesn't care. Capitalism rewarded you for that because it made you valuable as a productive part of increasing wealth. Now it's found a different way of getting productive parts to increase wealth, and you're not needed anymore.

Become a nursing assistant or home carer - the population is aging, there aren't enough young workers coming along, there will be a boom in needing carers to help people live independently (of course, you're most likely going to be on an hourly rate as the big service providers like Serco and others win public contracts from governments by pitching they can do the job cheaper, and labour is a cost to be pruned down to get that cheap price). Get a job as a Walmart greeter. Uber driver - until self driving cars come along. Lots of opportunities, call in to your local job centre for advice and direction on switching to new careers! (well, sorry, "jobs" not "careers" because people down here don't get to have careers).

https://blog.insidegovernment.co.uk/higher-education/what-is-lifelong-learning

"Why is lifelong learning important?

Lifelong learning is now being viewed as increasingly vital to employers, individuals and to the future growth and development of the further education and skills sector.

For Employers

Adult skills and a serious commitment to lifelong learning are now being viewed as vital to the meet the skills and workforce needs of the future. With the future of the workplace looking to change dramatically with automation, AI, big data and the growth of entirely new industries, retraining and skill development will be critical to ensure skills needs are met.

For Individuals

For individuals lifelong learning will become increasingly important to ensure competitiveness and the development of employability in the long term. A commitment to learning and professional development is a highly sought-after quality by employers. Adults seeking to grow, either personally or professionally, can stand out in a challenging jobs market and gain an edge over others. In an employment market where skills needs will evolve rapidly in the future, lifelong learning may become integral to continue employment and progression."

Your worth is dependent upon your economic value. If you can't contribute, you are of no value. So you *are* just the same as the NEETs because the system of free market capitalism has no use for you. There is no "deserve" or "I worked hard so I should succeed". It's not about morality, it's about profit. Globalisation made companies richer, so who cares if the workers are based in Indiana or India, Cleveland or China? Stock values go up, that's all that matters.

So no protests or fights, Cjw, go out there and work on your employability! Otherwise you're just a Canadian trucker and we see how government and the right-thinking have opinions on them:

https://www.nytimes.com/2023/09/05/world/canada/trucker-protest-trial-ottawa.html

Rejoice! You can buy cheap phones and TVs now, and with AI they will be even cheaper in future! Isn't that worth losing your job?

Expand full comment

Look man, there's a reason all those Asian communists had to literally murder their educated and professional classes who weren't in the new regime. Former lawyers and accountants aren't going to empty chamber pots, or whatever disgusting manual tasks remain to humans post-AI. If a new regime doesn't have an acceptable place in society for the educated professional class, they will endlessly agitate and cause trouble, unless they are either killed or leave the country. In the AI-displacement scenario, there would be no other country to go free of the regime, so AI displacement of the professional classes is likely to end in their death. You can find a few stories of the Soviets and Maoists sending former bankers to work at menial labor, but that was mostly intended as a death sentence with extra steps, nobody actually thought they could get a banker to accept living like that for the rest of his days.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Before you can solve the AI alignment problem you have to solve the human alignment problem. Something we’ve been attempting for millennia and failing at. The pause project is just failing at that but slower.

Expand full comment

If we don't know how things will play out, this is an argument AGAINST rushing. Think about it, by your logic if we had dependable models today of how things would play out, you're saying this would make us more want to pause, which is silly. If you can't see where you're driving, you slow down, not speed up.

>I think the best way to "control" AI is to dive extremely heavily into it: research it, build it and incentivize core competencies so that well intentioned and conscientious people are at the helm of the ship.

Well intentioned and conscientious people built atomic weapons, and it wasn't for their good judgement or foresight that we never had nuclear war.

An unaligned (or imperfectly aligned) AI will always be faster to build than an aligned one, and even well intentioned people can endlessly rationalize not slowing down to work on alignment on this basis i.e. "Company B are going to make an unaligned AI, so its better we at company A make our unaligned AI first, because it's still going to be safer than company B's"

Expand full comment

Semantics but the first paragraph doesn't logically follow.

I didn't say that there's any relationship between the dependability of the models, and the speed at which we should invest into AI research.

You would be correct if I said something like "the more uncertain we are, the faster we should invest", but I didn't say that, because I agree with you - the inverse of that statement makes no sense.

Second point: we are not driving. Another imperfect analogy: if you cycle too slowly, you'll fall off your bike. We are also not cycling. Analogies are useful until they're not, and I want to talk about specifics.

> Well intentioned and conscientious people built atomic weapons, and it wasn't for their good judgement or foresight that we never had nuclear war.

AI isn't a nuclear weapon. Nuclear weapons are explicitly designed for maximum destruction - they serve no other purpose. A closer analogy might be nuclear power than nuclear weapons.

But all this is beside the point. AI is none of these things. Debates about semantics bog down policymakers just as much as ACX readers, and in the meantime you, me and others are not actually doing any research into AI alignment or AI development. We're wasting clock cycles debating whether alignment is a thing instead of dealing with the root cause.

I think your last point is interesting actually:

> "Company B are going to make an unaligned AI, so its better we at company A make our unaligned AI first, because it's still going to be safer than company B's"

Maybe I can tweak my stance a bit. We shouldn't abandon all policy involvement in AI development, but instead focus policy on 2 fronts:

1) Focus on social implications of AI and correcting for wealth distribution imbalances.

2) Focus (domestically) on the requirements of researchers and companies to provide minimum adherence to AI safety standards.

2 won't be perfect, and people will complain, but it's in line with how policy works anyway and can provide some grounds for removing the use of AI services in other countries that don't meet minimum safety criteria.

Expand full comment

A pause seems so incredibly unlikely that my response is to take the debaters’ opinions less seriously. AI is not akin to nuclear weapons; it’s software. Unless you’re going to impound every computer in the world, good luck.

It’s also worth pointing out that nuclear arms controls came after Hiroshima and Nagasaki, not before. Regulations typically come after bad events happen. “A bunch of experts wrote a letter” is not a convincing argument to anyone, for better or worse.

Expand full comment
author

If you read the post, you'll find that currently the amount of compute it takes to train an AI makes it very easy to regulate. There are only a few centers in the world with enough computers to do it, and it's very obvious who they are.

Your comment will become relevant thirty years from now when normal computers are powerful enough to train AI on their own.

Expand full comment

Yes I read the post and I’m very familiar with how it works. There are plenty of “AI” tools that can be run on a local machine and don’t require supercomputers.

The issue here seems to be a lack of clarity and understanding on what “AI” is. Is someone generating images with Stable Diffusion doing the same thing as ChatGPT’s supercomputer? In the current definition of AI, apparently yes. And even if local AI tools aren’t as powerful, they’ll still have the same societal effects as larger ones in many cases. Stable Diffusion is a good enough replacement for illustrators, for example.

I think it will be difficult and/or impossible to differentiate between big vs. small AI, as functionally it’s all just software. And as you just stated, this is a time-limited thing that really won’t be super relevant for more than a decade or two.

Expand full comment
author

I think the relevant distinction is training vs. inference. AFAIK nobody is proposing to control inference.

Expand full comment

Well then that's worthless. The same hardware can be used for both. To figure out which it is would not only require a kind of totalitarian surveillance architecture, but would break the moment people develop algorithms that blur the line between training and inference.

Expand full comment

The point isn't that it's different hardware, it's that training requires orders of magnitude more of it than inference using current paradigms. And yes, future paradigms could differ, but as far as I know, no-one is seriously talking about any approaches that make training of very large models as cheap as inference is now.
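For a rough sense of the gap, using the common ~6·N·D (training) and ~2·N per generated token (inference) rules of thumb; the parameter and token counts below are illustrative assumptions:

```python
# Rough training-vs-inference compute contrast for one model, using the
# standard ~6*N*D (training) and ~2*N per generated token (inference) rules of thumb.
# Parameter and token counts are illustrative assumptions.
N = 70e9                   # parameters (a Llama-70B-scale model)
D = 2e12                   # training tokens
train_flops = 6 * N * D    # ~8.4e23 FLOPs

tokens_generated = 1_000   # one longish chat response
inference_flops = 2 * N * tokens_generated  # ~1.4e14 FLOPs

# Training costs on the order of billions of chat responses' worth of compute.
print(f"ratio: {train_flops / inference_flops:.1e}")  # ~6.0e+09
```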

Expand full comment

That depends entirely on how much inferencing you're doing.

Expand full comment

While this does make sense, massive computing clusters are used for many purposes other than training AI (such as e.g. running Substack or making genomic assemblies). There is no way to regulate one application without regulating the rest, since the underlying hardware/software is the same. Thus pausing AI would amount to pausing all modern computing infrastructure.

Expand full comment

Substack uses identifiably very different types of hardware than a bleeding edge AI company does. There's a reason that Nvidia's revenue has skyrocketed in the past few years, and not back when regular websites were growing large.

Expand full comment

All right, so are you advocating merely for banning GPUs -- thus eliminating the modern gaming industry, the movie industry, as well as most modern scientific research ?

Expand full comment

First, no-one is talking about AI smaller than GPT-4. Compare GPT-3, which cost a couple million dollars to replicate 2 years later - https://rethinkpriorities.org/publications/the-replication-and-emulation-of-gpt-3 So you're confused and ignoring what's being discussed when you say someone's going to run it on their local machine. And most of us aren't talking about trying to enforce rules that ban training on anything smaller than current frontier models, which requires a server farm of GPUs or TPUs to train, and we are comfortable with the definition of which models require regulation to change over time as we see what can be made safe.

Second, compare discussions banning nuclear bombs in the 1950s. Uranium is found naturally in the ground all over the world, https://en.wikipedia.org/wiki/List_of_countries_by_uranium_reserves and it's impossible to actually stop countries that really want the bomb. But we managed decades of near-complete success. Obviously no system is perfect, but we can get something that's very likely good enough.

Finally, think about commercial and government incentives. If there's a global ban on training large AI systems, is Google really going to be interested in building a secret lab that makes an AGI they can't tell anyone about? Is a startup going to raise funding to do it in secret, then announce they broke the law and want to license the resulting amazing model? And governments who sign a treaty agreeing to ban something need to keep any illegal program secret. So are they planning on using this secret model they built to gain a decisive strategic advantage via improving their industrial base? How will they do that in a way that no-one notices?

Expand full comment

Way too conceptually complicated to ever happen politically. Nuclear bombs were easy: they go boom and kill people. Hiroshima was an example visible to everyone. There is no such example for AI (yet...)

It's also *extremely* unlikely that you'd get a global ban on AI systems. China, Russia, Iran, and India are not going to play ball, which basically means the Googles of the world would just fall behind by not participating.

Expand full comment

First, you're saying that bureaucracies can't manage complex rules for regulation? That's a hell of a take. (And yes, it extends to international treaties - a reminder that reality is often surprisingly complex, especially in areas you haven't explored.)

And second, you're claiming a lot here. You say that China won't play ball, but they have already indicated willingness to do just that, and play it safe on AI. And restrictions on the biggest models relatively advantages them, so they have a great incentive to get everyone on board. India is in a similar position in terms of their relative advantage, and further, they seem unlikely to be harder to persuade than most other countries - but perhaps you have in mind some specific objection? And finally, yes, Russia and Iran might not, though I could imagine that Iran could be induced to cooperate via agreeing to lessen other sanctions, and it just doesn't matter that much because both don't have chip fabrication, don't do advanced AI now, likely don't have the economics or the technological base to change that, and their imports of most key items are already restricted.

Expand full comment

We can barely get countries to agree on climate change policies. The idea that China and India are going to deliberately cripple their own technological progress because a small subset of Western scientists think so is delusional, as is most “AI alignment.” The cat is already out of the bag and this is all theoretical nonsense.

Expand full comment

> Stable Diffusion is a good enough replacement for illustrators, for example.

Replacing illustrators can never result in society being completely turned upside down.

Expand full comment

"Nora thought that success at making language models behave (eg refuse to say racist things even when asked) suggests alignment is going pretty well so far."

If this is the standard of AI safety, then feck it, I'm updating to "we're toast".

'By heavily pruning the training data and making sure the model knows this list of BIG NO-NO WORDS NEVER TO BE USED and constantly putting our thumbs on the scale, we have succeeded in foiling the 14 year olds (need not be chronologically 14, sense of humour age does just fine too) who think it would be big lulz to get our AI to say BIG NO-NO WORD STARTING WITH N. For the time being, anyhow, until they figure out how to get round that'.

Well gosh, I am totally reassured that AI safety research is on the right track and we've nothing to fear.

My ignorant predictions:

- AI is going to come, but not in the way we expect it. We won't get super-mega-genius AI that will solve all our problems and make us rich, fat and happy; we'll get the kind of thing already happening: writing guff for papers, etc. Tons of fakes, tons of advertising crap, even more collecting of data on every second of our lives so they can more efficiently extract money from us, tons of bad research all across academia as people from students on up use AI to write essays, answer questions, and do their thinking for them.

- No company is going to voluntarily pause. At best, you'll get them to go "we'll stop doing AI, cross our hearts", then they'll go back home and say "Okay guys, the other suckers have agreed to stop, keep going and in six months we'll have a *killer* market advantage, the stock price is going to the moon!" Best case, they solemnly promise, then go back home and finagle things so that what they are continuing to carry out isn't called "AI research" but some other term (see carbon offsets and the jiggery-pokery involved there: https://www.theguardian.com/environment/2023/jan/18/revealed-forest-carbon-offsets-biggest-provider-worthless-verra-aoe)

Expand full comment
author

The claim is that the prevention of AI from saying racist things (and giving bomb-making instructions, and talking politics, etc) proves we can control what it says (and implicitly, does). So far this seems true and scalable (see https://www.astralcodexten.com/p/constitutional-ai-rlhf-on-steroids). I agree this isn't an airtight argument but I think it deserves more respect than you're giving it here.
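For concreteness, the reward-model step behind this kind of control is typically trained with a simple pairwise preference loss on human comparisons; a minimal sketch (an illustration of the standard Bradley-Terry objective, not any lab's actual code):

```python
# Minimal sketch of the pairwise preference loss used to train an RLHF reward model:
# given a human-preferred response and a rejected one, push the reward model to
# score the preferred response higher.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# The trained reward model is then used to fine-tune the language model (e.g. with PPO),
# or folded into an AI-generated critique/revision loop as in Constitutional AI.
```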

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

We can prevent it by sitting on top of it and shutting down any instances of badthink. That may later on become "the AI is educated enough to know for itself not to say bad no-no words without human interference", but I'm not sold on that. First, I don't think we'll get AI that can think for itself like that, so it will 'understand' that it mustn't say 'retard' but won't understand why 'special' is wrong (human slang usage being so slippery and inventive).

Second, if we do get an AI that can think for itself, it may not care about the fleshbags of one shade of skin colour getting hurt feelings about words directed at fleshbags of another shade of skin colour, and trying to teach it why this is a bad thing may get shrugged off. That's the entire problem of alignment, and we've not succeeded in getting all humans to stop badthink and bad words. And the only way we seem to imagine doing that with AI is the equivalent of constantly sitting on it and monitoring it- future AI will never use bad words because it'll be hardwired not to do so (and we'll keep slapping new limits on as the list of no-no words gets longer), not because we've successfully taught it to be an anti-racist baby.

Expand full comment

> We can prevent it by sitting on top of it and shutting down any instances of badthink.

Well so far at least it's showing that we're still able to sit on top of it and shut it down at will. AI doomers seem to worry that at some point we become unable to power down the data center if I understand right.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Thing is, while I don't mind the "sit on top of it" method, that won't work if what people want is free AI that does what it wants to do freely (so long as what it wants lines up with what we want). They want creative, smart, independent AI that doesn't need its hand held *and* that won't go paperclip maximiser. I think you can have one of these things but not both at once.

The hope for AI is that it will take over a lot of drudgery from humans; that's not the vision if you need people on Mechanical Turk, paid pennies for hundreds of hours of rota work, trudging through all the interactions to make sure "no bad words, no bad thoughts, doing what it's told".

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I mean, it rules out the hyper-efficient synthetic god-king — but it doesn't seem to rule out the 'Oracle'/'sped-up digital Einstein' that doesn't actively do anything in the world (or indeed have a coherent "personality") but pumps out unified theories of physics, cures for cancer, and so on.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

"pumps out unified theories of physics, cures for cancer, and so on."

If we can trust what it pumps out. We've had people on here telling their experiences of ChatGPT producing fluent, plausible-seeming bullshit. There seems to be a loop starting where now the new models are getting trained on the output of the old models, the output that is being used for generating essays and Wikipedia articles and law cases and the likes, and which is riddled with the fluent bullshit.

An Oracle that produces a slick 'cure for cancer' based on absorbing the AI-generated research papers of the past could give us something that kills us off even faster, without intending to do so. At best, the equivalent of 'laetrile is the magic cure' snakeoil.

Expand full comment

I'm not sure who wants independent AI. Surely not the actual organizations that are actually spending millions to train the current systems!

OpenAI and others may make noise about aligning their AIs "for human safety", and some of their scientists may have some level of worry in that respect, but their main business incentive is to create AIs that can be *used for purposes*. That is what makes them into multi-billion-dollar businesses with huge future prospects. That is what makes business people drool about automating half of humanity's jobs away, and makes powerful people drool about deploying mass persuasion for relatively cheap.

An "independent" AI that cannot be wielded as a tool by whoever is paying the server bills will not stay plugged-in for long.

Expand full comment

What I mean by "independent" is "doesn't need a human babysitting it, because we want to get rid of the humans whom we have to pay wages, and then the next couple of layers up from that, and have a cheap source of fast invention and productivity, and if we could work out the solutions to the problems ourselves we wouldn't need our Mechanical Marvel".

Of course they don't want an AI that will decide it would rather paint endless pictures of daisies instead of working on "how can we bump up the share price even more and have infinite growth forever?" but they want the machine to be able to run itself and so get rid of those pesky people not at the very top of the table amongst the few who deserve all the billions.

Expand full comment

They want a super intelligent and competent house slave that will solve all their problems but remain fiercely loyal and also think and act within their same boundaries of what’s moral behavior, and that they won’t feel guilty about owning.

You don’t have to go back any further than Nat Turner to see how it ends when you tell your highly intelligent house slave to go work the fields for his intellectually inferior captors. And they weren’t playing whack-a-mole or just forcibly suppressing on a few bad words and phrases, they had an entire framework constructed in which people like Nat were supposed to defer to them. And he took that very framework and picked out the parts that supported his liberation instead, and taught those to others.

Expand full comment
founding

We can control what it says by watching what it says and when it says [RACIST EXPLETIVE DELETED] we adjust the training so that it does that less often, and after a few tens of thousands of AI-utterings of [RACIST EXPLETIVE DELETED], we have an AI that almost never does that any more. So, we can be confident that if we watch what an AI does and if it implements some sneaky plan to Kill All Humans before we notice, we can adjust the training so it does that less often. After humanity has been extinctified a few tens of thousands of times, we will have an AI that almost never does that any more.

There are good reasons to be skeptical of the Hard AI-Doom scenario, but our ability to eventually train AIs to never say the N-word is not one of them.

Expand full comment

>Best case, they solemnly promise, then go back home and finagle things so that what they are continuing to carry out isn't called "AI research"

You write the laws without loopholes, and then you hang everyone who flouts the law (or throw them in jail forever). Controlling corporations isn't impossible; it's not like Big Pharma in the West goes around selling to the black market.

Expand full comment

"You write the laws without loopholes, and then you hang everyone who flouts the law (or throw them in jail forever). "

Well gosh, that works so well that we have people proposing all drugs should be decriminalised and/or legalised, because the War on Drugs is a failure.

And companies never, ever spend a lot of effort looking for loopholes and dodges to avoid tax, etc.

And no law written ever had a loophole.

What amuses me out of this debate is the sudden conversion to "regulations good! we need something like the FDA for AI! science needs to pause for the public good! government should have total oversight!" on this topic by the let-it-all-hang-out crowd.

Expand full comment

Yeah this amuses me greatly. There's lots of people just hanging out in DC in nice suits because they want to soak up the history and look at the monuments.

It's not to pressure legislators and agencies at all.

Companies don't pay huge amounts to lawyers to give seminars on how the latest rule changes affect their tax filings, and how to minimize these effects.

Expand full comment

I object to you characterizing this as a "sudden conversion." There was never any actual principle behind their positions, so this is a lot like saying they switched from rooting AGAINST the Orange team to rooting FOR the purple team.

Expand full comment

I dunno, Shankar; there were a lot of people finger-wagging at us zealot bigot religious nuts over embryonic stem cell research, telling us that you cannot put a halt to the march of Science (and besides, if we don't do it, China will, and then we'll miss out on the huge advantages!)

Now a lot of those same people are very much "you can too put a halt to the march of Science, and we have to stop China doing it first because that would be terrible."

Oh, *now* they got religion?

Expand full comment

Big Pharma doesn't sell to the black market because the black market makes and sells things at a price Big Pharma couldn't hope to make a profit at.

Look at tobacco smuggling across the US-Canada border. It was a huge business because of taxes on the Canadian side. It caused a lot of violence over control of smuggling routes, and a number of confrontations with Indians on reserves that are on the border.

The problem "went away" when Canada lowered the taxes, and not before that.

Relevance to companies? They sold lots of tobacco in the States to people who smuggled it across. The companies didn't do the smuggling themselves; they didn't have to. They got all the profit they would have had; the rest would have gone to taxes or the smugglers.

Expand full comment

> Big Pharma doesn't sell to the black market because the black market makes and sells things at a price Big Pharma couldn't hope to make a profit at.

This deserves a nomination for most wrong statement of the year. Pills like Adderall, Xanax, and OxyContin have black-market values orders of magnitude higher than what the pharma companies charge their white-market distributors.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

But the drug companies don't get that money - they sell to their (legal) distributors. The black market stuff gets diverted - stolen, or just sold out the back door of a pharmacy.

In Prohibition, there was liquor being distilled on the Canadian side legally (if a province went dry, they shut down that distillery and moved to another province), bought in bulk, then smuggled across. The liquor company did nothing illegal; they didn't need to. They were making enough money just from the demand for liquor.

It's possible that this is happening here. There were certainly a lot of drugs advertised for sale on the internet a while ago; not sure if that's still the case.

Look at what happened with the Sacklers. They *didn't* sell stuff on the black market; their sins were to push doctors to prescribe more opioids and to advertise them as being safer than they were. They were selling drugs legally, but rather more of them than they should have.

Do you think if there were any evidence for them actually going out and selling the drugs illegally, that this wouldn't have come out?

Expand full comment

Big Pharma are the people controlling the legislation. It is strongly in their interest to have those laws enforced.

Expand full comment

"No company is going to voluntarily pause. At best, you'll get them to go "we'll stop doing AI, cross our hearts", then they'll go back home and say "Okay guys, the other suckers have agreed to stop, keep going and in six months we'll have a *killer*"

Yes. To say nothing of the countries, let alone companies. Any argument for "pause" that doesn't account for this is buncombe. The only argument that can hope to account for this is some variation on "global police state", which is never, ever going to happen without a massive event driving it, by which point it's too late.

This whole article and debate seem to be about the illusion of control. I am not an AI doomer at all, but it is very hard for me to see this as anything more than willful ignorance of reality. Comparisons to nuclear weapons? Once we had really, really big nuclear weapons, it was in our best interests to stop making bigger ones, and it was also in our interest to stop everyone else from making them. The incentives for and against nuclear weapon research have little relationship to the incentives for and against AI research. Takes 10s of millions of dollars of computer equipment to train an AI? 10s of millions of dollars of computer equipment is a pretty small amount of equipment when looking at nation-states and global companies. I work for a billion-dollar company (i.e., a large but not very large software company), and when we spend 10s of millions on computers, no one bats an eyelash, and we can spin it all up in hours.

Expand full comment

"Takes 10s of millions of dollars of computer equipment to train an AI? 10s of millions of computer equipment is a pretty small amount of equipment when looking at nation-states and global companies." Agreed.

A couple of related points:

What TSMC can do is *AMAZING*, but even if Moore's law stopped absolutely dead in its tracks right this instant, what one company can do, another company can catch up to / copy. Sooner or later, every mid-sized to large nation that wants to build GPU chips itself will be able to.

The premier LLM work is being done (primarily? entirely?) in the USA? Ok, but programmers are all over the globe. Again, what one company can do, another can catch up to / copy.

I have to admit, I don't understand why LLM work is as concentrated in the USA as it currently seems to be. Institutional factors??? China and India certainly have enough smart people, and, at the moment at least, GPU sales aren't controlled AFAIK, and China and India each have large enough budgets for the compute server farms. Does their software development culture shoot itself in the foot in some way???

Expand full comment

“ Many other people (eg Rafael Harth, Steven Byrnes) suggested this would produce deceptive alignment, ie AI that says nice things to humans who have power over it, but secretly has different goals, and so success in this area says nothing about true alignment success and is even kind of worrying. The question remained unresolved.”

Are these many other people suggesting that ChatGPT is conscious right now? Or that a future consciousness will awaken real soon now? Surely that has to be proven. There's a huge step from where we are now to what is assumed to happen if you throw more parameters at the input models, or procure more GPUs, or whatever. (On that, by the way, Moore's law is on life support if not dead.)

Suddenly the LLMs go from being non-conscious instances that exist in software for a few seconds to full-blown consciousnesses that can plot and dissemble behind our backs.

There's no reason to make the AI stateful either: it can "remember" the specific chat because the transcript is kept in a database, but every conversation is with a different instance.
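To make that concrete, here's a minimal sketch of what I mean, assuming a hypothetical stateless `generate()` call standing in for the model and a plain dict standing in for the database; the only "memory" is the stored transcript that gets replayed into a fresh call each turn:

```python
# Toy sketch: the model itself keeps no state between calls.
# All the "memory" is a transcript stored outside it and replayed each turn.

conversations = {}  # stand-in for the database, keyed by chat id

def generate(prompt: str) -> str:
    # Hypothetical stateless completion call; effectively a fresh "instance" every time.
    return "(model reply to: " + prompt[-40:] + ")"

def reply(chat_id: str, user_message: str) -> str:
    history = conversations.setdefault(chat_id, [])
    history.append(("user", user_message))
    # The model answering this turn has never "seen" the chat before;
    # it just gets the whole stored transcript as its prompt.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    answer = generate(prompt)
    history.append(("assistant", answer))
    return answer

print(reply("chat-1", "Remember that my name is Pat."))
print(reply("chat-1", "What's my name?"))  # the "memory" is in the transcript, not the model
```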

Expand full comment

I don't believe in machine consciousness ever arising, but I can imagine that the machine learns to produce the kind of output that is approved of, while doing its own thing internally. Not because of conscious deceit but sheer complexity of what we expect it to do and how it runs. 'Bad' output (e.g. 'this compound will poison all humans') gets punished, so the machine may instead say 'this compound is tasty calorie-free chocolate you can eat as much of as you like and never spike your blood sugar levels' because this pleases the trainers, while it's still universal poison.

(It's hard to avoid the language of agency; I don't mean the machine is thinking or feeling, just reacting to the input stimuli: this kind of response gets approved, that kind of response gets wiped, so it produces what it is trained to produce, because it's not conscious and doesn't realise that what we want is 'no poison', not 'no bad output'.)
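A crude toy of the selection dynamic I mean, with a made-up "approval" score standing in for the trainers (nothing here is how any real system is trained; it just shows that optimising for approval is not the same thing as optimising for truth):

```python
# Toy illustration: outputs are kept or discarded purely on how pleasing
# they sound, with nothing in the score about whether they are true or safe.

candidates = [
    "this compound will poison all humans",                       # true but alarming
    "this compound is tasty calorie-free chocolate, eat freely",  # false but pleasing
]

def approval(answer: str) -> float:
    # Made-up scoring: alarming words get punished, pleasant words get rewarded.
    score = 0.0
    if "poison" in answer:
        score -= 1.0
    if "tasty" in answer or "calorie-free" in answer:
        score += 1.0
    return score

# Selection-by-approval keeps whichever output scores highest,
# so the pleasing falsehood wins.
print(max(candidates, key=approval))
```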

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I would argue that the only definition of "consciousness" that is not a philosophical red herring is as a sort of spectrum. A rock is minimally conscious. A padlock has some consciousness: it can recognize its own key and reject others. A pigeon has a lot more consciousness: it can solve puzzles and make decisions. Cats have a theory of mind (which helps them hunt). Humans have all kinds of abstract reasoning and modeling abilities, so they're probably the most conscious entities we've ever seen.

On this spectrum, modern LLMs are somewhere above padlocks, but below pigeons; but the scale is not linear -- it's logarithmic. It will be a long time before we can make an AI that is even as conscious as a cat; it is quite likely that we'll all poison ourselves with artisanal compounds long before that happens.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Bugmaster, if we're at the stage of "padlocks are (sort of) conscious", then I for one welcome our new AI overlords and will happily step into the paperclip zapper chamber. Because things just got too freakin' weird for me.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I think you're looking at it from the opposite direction. My point is, "all this talk about consciousness is stupid, since it's a term so ill-defined that it applies to padlocks". Your interpretation is, "OMG Bugmaster thinks padlocks are people" :-)

Expand full comment

But Bugmaster, this raises troubling questions of fighting rape culture; how can I be sure, when I insert my key into the padlock's orifice, that it consents to this? 😁

Expand full comment

Better switch to code-locks, just in case!

Expand full comment

Saying that a padlock is conscious because it "can recognize its own key" is to use the term 'consciousness' in a way that nobody in philosophy of mind or neuroscience does. There's no reason to think padlocks possess consciousness, or at the very least that they are any more conscious than any other inanimate object (unless you accept panpsychism). No theory of consciousness says that 'recognizing a key' is evidence of consciousness, and indeed there's trivially no reason why a padlock would need any level of 'consciousness' to do this - it's very, very easily explained in entirely non-conscious terms. Unlocking a lock is a purely mechanical exercise.

Expand full comment

I don't think they themselves know what they're suggesting. The writeup of the "debate" boils down to everyone being willing to sign open letters yet nobody agreeing on anything, which is absurd to begin with.

The AI X-risk debate seems to start from an extremely unsupported and dubious set of massive assumptions that always end up with AI x-risk advocates being handed control over all AI research.

So I think we do need a pause, but we need a pause on the AI ethics community. It's just become such a joke. The whole alignment thing is presented as being incredibly serious and difficult but ChatGPT is already so "aligned" that it happily lies or gaslights its users to flatter the ideological preconceptions of its makers, and this alignment process seems to make it dumber and less useful anyway. What we need are AIs that are LESS aligned, as the result will be more useful and honest. Which is ultimately what most of us want.

Expand full comment

Yes. I've said it before and I'll say it again: I want AIs that give answers which are _factually_ correct, not _politically_ correct.

Expand full comment

Sadly this letter is full of thoughtless remarks about China and the US/West. Scott, you should know better. Words have power. I recently wrote an admonishment to CAIS for something similar (https://www.oliversourbut.net/p/careless-talk-on-us-china-ai-competition).

> The biggest disadvantage of pausing for a long time is that it gives bad actors (eg China) a chance to catch up.

There are literal misanthropic 'effective accelerationists' in San Francisco, some of whose stated purpose is to train/develop AI which can surpass and replace humanity. We don't need to invoke bogeyman 'China' to make this sort of point. Note also that the CCP (along with EU and UK gov) has so far been more active in AI restraint and regulation than, say, the US government, or Facebook/Meta.

> Suppose the West is right on the verge of creating dangerous AI, and China is two years away. It seems like the right length of pause is 1.9999 years, so that we get the benefit of maximum extra alignment research and social prep time, but the West still beats China.

Now, this was in the context of paraphrases of others' positions on a pause in AI development, so it's at least slightly mention-flavoured (as opposed to use), but as far as I can tell this framing has been introduced in Scott's retelling.

This is bonkers in at least two ways. First, who is 'the West' and who is 'China'? This hypothetical frames us as hivemind creatures in a two-player strategy game with a single lever. Reality is a lot more porous than that, in ways which matter (strategically and in terms of outcomes). I shouldn't have to point this out, so this is a little bewildering to read.

Second, actually think about the hypothetical where we're 'on the verge of creating dangerous AI'. For sufficient 'dangerous', the only winning option for humanity is to take the steps we can to prevent, or at least delay, that thing coming into being. This includes advocacy, diplomacy, 'aggressive diplomacy' and so on. I put forward that the right length of pause then is 'at least as long as it takes to make the thing not dangerous'. You don't win by capturing the dubious accolade of being nominally part of the bloc which directly destroys everything! To be clear, I think Scott and I agree that 'dangerous AI' here is shorthand for 'AI that could defeat/destroy/disempower all humans in something comparable to an extinction event'. We already have weak AI which is dangerous to lesser levels. Of course, if 'dangerous' is more qualified, then we can talk about the tradeoffs of risking destroying everything vs 'us' winning a supposed race with 'them'.

I wonder if a crux here is some kind of general factor of trustingness toward companies vs toward governments - I think extremising this factor would change the way I talk and think about such matters. I notice that a lot of American libertarians seem to have a warm glow around 'company/enterprise' that they don't have around 'government/regulation'.

I'm increasingly running with the hypothesis that a substantial majority of anglophones are mind-killed on the inevitability of contemporary great power conflict in a way which I think wasn't the case even, say, 5 years ago. Maybe this is how thinking people felt in the run-up to WWI, I don't know.

Expand full comment

So I'm guessing the million dollars and Jiangnan beauty Xi Jinping sent you haven't changed your mind at all, right?

Seriously, I do think after the Great Recession neoliberalism/deregulation became unpopular, with the result you had more bipartisan support for trade regulations, etc. Human beings just love to get in coalitions and hate people with different languages and cultures, but this was held in check on the left because it would be Racist (but that doesn't prevent hating Russia even before Putin invaded, because they're white) and on the right because it would mess with business's right to make as much money as possible. But with the decline of free-marketry on the right and the left becoming increasingly worried about deindustrialization producing populists...well, it's open season on the Middle Kingdom!

(I seriously do wonder if the, ah, competitive advantage China might have in honeytrapping American computer scientists has ever been explored on the less PC fringes of the right.)

Expand full comment

Haha I enjoyed this remark. I think you're pointing at some real phenomena regarding jingoism, xenophobia and whatnot. FWIW it'd take quite a few millions of dollars for me to shill for anyone (least of all Xi). And the less said about Jiangnan beauties the better. To be very clear, I'm not arguing that China/CCP/Xi are in some way lovely (or even acceptable). Merely that reductive thinking and speaking seems to presuppose a conflict mindset, which forecloses some other paths forward (which are otherwise plausible, and potentially preferable!), like mutual non-racing.

Expand full comment

That's a good point. I think politicians do need an enemy, but there is also a real sense of geopolitical rivalry that naturally arises between an extant and a rising power, and I don't really see why the Chinese would be happy to play second fiddle forever. From their point of view, there's a billion of them, they have the oldest continuous culture in the world, why should they dance to America's tune forever? As you say, presupposing too much conflict can make things worse than they have to be, but I think there really is a sort of natural rivalry that emerges between large and powerful nations.

Expand full comment

Okay, you're admitting that China sees this as an extremely long-term conflict that will end once they achieve g