499 Comments

High-end computers in the necessary numbers are, in fact, that hard to obtain, especially if there's any sort of regulatory regime.


"Records of human height show that living standards in most of those countries fell dramatically through the 1800s, and mostly did not recover until towards the very end of the 1900s and in some cases still haven't recovered."

I'm highly skeptical of these numbers. Can you give me a source?

Do you have the data for the height effects you discussed?

Here's a graph of the US (which doesn't break out the South) which seems to show a recovery to 1820s levels of height by about 1920 or so.

https://www.foodpolitics.com/2011/05/whats-going-on-with-human-height/

America is the only country that I can find which has a decrease in height over that time period. (1820s-1890s)

Here's a graph, I'm not sure how reliable, which shows life expectancy in the US recovering by 1870, the start of the Industrial Revolution.

https://ourworldindata.org/grapher/life-expectation-at-birth-by-sex

Here's a source showing steady growth in Latin America since 1820.

https://www.theguardian.com/news/datablog/2014/oct/02/why-a-countrys-average-height-is-a-good-way-of-measuring-its-development

Perhaps there was a decrease in height in colonized areas of South America prior to the industrial revolution in the colonizer countries, which was reversed once those countries industrialized.

>Records of human height show that living standards in most of those countries fell dramatically through the 1800s, and mostly did not recover until towards the very end of the 1900s and in some cases still haven't recovered.

Do you have a source for this?

The idea that late 90s post-colonial Africa represents a "recovery" of living standards is extremely suspect. Are you under the impression that pre-colonial Africa had high living standards in material terms? They had much, much smaller economies, and at the height of colonialism, many of the colonies were amongst the wealthiest in the world. Obviously this wealth didn't entirely go to the people, but you need a minimum economy size in the first place to even allow material improvements to occur.

After colonialism, what we ACTUALLY see is the rest of the third world, especially in Asia, experience significant development, and Africa...not experience this. I mean, look at somewhere like Zimbabwe. Zimbabweans have proven themselves incapable of even *maintaining* the economy and living standards of Rhodesia, let alone increasing them. Same with South Africa; all the predictions by the left about that country have completely failed.

> I'm not sure if regulating AI or totally deregulating is the right approach to get there, but think it is important that AI should be more decentralized and widely used right off the bat so that the benefits aren't concentrated in some countries while costs go to others as happened with the Industrial Revolution.

What exactly about e.g. Africa makes you think that giving them control over an extremely powerful technology is a good idea?

It's fair to say Africa shouldn't have been colonized. But there's no analogy; you're literally just saying the US should give them massive amounts of foreign aid because they exist. Fair enough, perhaps, but we're not imposing a cost on them by not needing them for anything, and their abject lack of economic success is not our fault.

> I’m surprised how easy it is for governments to effectively ban things without even trying just by making them annoying. Could this create an AI pause that lasts decades? My Inside View answer is no; my Outside View answer has to be “maybe”. Maybe they could make hardware progress and algorithmic progress so slow that AI never quite reaches the laptop level before civilization loses its ability to do technological advance entirely? Even though this would be a surprising world, I have more probability on something like this than on a global police state.

The key difference between an indefinite AI pause and most other types of regulations is that there can't be any exceptions. If you pause AI in the United States, and people build AI in Singapore or Switzerland, you failed at stopping the technology. This is a very high bar. Even nuclear regulation has seen exceptions. Despite attempts at non-proliferation and tight controls, North Korea still has their own nuclear weapons program.

Another key difference is that AI is immensely valuable to develop and in theory, can be researched using relatively few resources. That's why I expect nations will eventually try to develop it. Even with a strict AI development moratorium, unless there's an extreme global taboo against creating AI, I expect some research to be conducted covertly. Eventually, Russia et al. would start their own program to pull ahead of their adversaries. Very strict controls will be required to halt this type of thing in the very long run.

I agree these features of the problem don't guarantee that AI can't be paused for many decades without a global police state, but I think these points together make a pretty strong prima facie case for that position. And I should also clarify: I'm not saying that we will get a global police state right away, after only a year or two. Rather, I'm imagining a slow decline into that type of regime if we tried to pause indefinitely, as we would need to ratchet up our restrictions to prevent people from building AI anywhere in the world. The regime I'm imagining might not appear in a decade or two. But can we really keep AI locked up for, say, a century without resorting to extreme measures? A thousand years? I'm skeptical.

I think we agree on facts, but I don't understand how the arguments you make relate to any policy proposals suggested today. Your vision of a pause seems to be that no-one builds larger systems for an indefinite but very long time, which isn't the same as what was being proposed, which is indefinite until regulations are put in place, and I agree just stopping forever isn't realistic. But without extreme measures, I think you agree we can buy decades.

North Korea got nukes decades later than they otherwise would have, and I certainly agree that bad actors pursuing AI would only be able to do so decades after everyone else is able to - but that gives us quite a long time to solve alignment, compared to the status quo.

Also, as a complete aside, I don't see a strong argument that a slide to dictatorship over decades is significantly more likely in a regulated AGI pause world than in a world where we survive via prosaic alignment and have very powerful AI systems in the hands of either governments or large companies. (Whereas the world where we all die makes that irrelevant.)

> Your vision of a pause seems to be that no-one builds larger systems for an indefinite but very long time, which isn't the same as what was being proposed, which is indefinite until regulations are put in place, and I agree just stopping forever isn't realistic.

I think this might be our core disagreement. I simply think that some people *are* proposing an indefinite pause of the type I've described. This is how, for example, Scott Alexander described the positions of Rob Bensinger and Holly Elmore. He wrote,

"* Complete ban on all frontier AI research

* Unpause only after completely solving alignment even if that takes centuries

Supported by: Rob Bensinger, Holly Elmore"

I think there are predictive differences involved which are at least as, if not more important than the differences in policy approaches - and that's a large part of why I think you're claiming that they are advocating for something that I don't think they were saying. (I also think it's weird to appeal to Scott's characterization when Rob's been talking about his view of the risk for years, and Holly has been clear about her position as well.)

One key predictive difference is that Rob and Holly both expect there to be no way to build a consensus about safety because these systems are inherently unsafe and increasingly dangerous as they scale, and that aligned systems are fundamentally impossible without solving alignment completely, and we would therefore find that powerful systems are always dangerous. If this is correct, and we have regulation that slows things enough to realize that fact before we all die, an indefinite pause turns into a ban, not just via strict enforcement, but also via norms and global consensus not to commit suicide. This wouldn't be stopping forever via a moratorium, it would be an evolving consensus - I envision this similar to the nuclear test-ban treaty. On the other hand, if it's incorrect, then presumably a consensus develops to allow safe uses, and there's no indefinite ban. (I also think there's a predictive difference between Rob and everyone else about how quickly we hit ASI.)

Perhaps I'm missing something, and you think that an indefinite ban is [un]justifiable even in a world where alignment is impossible, or you think that world is so incoherent that it couldn't be what anyone is discussing?

[Edit to fix a mistake.]

Let me explain where I'm coming from. I'm a boomer, born in 1958. I grew up reading Arthur C Clarke's Profiles of the Future, with its timeline https://everything2.com/title/The+next+100+years+according+to+Sir+Arthur+C.+Clarke and attended the 1964-1965 World's Fair, https://en.wikipedia.org/wiki/1964_New_York_World%27s_Fair with, amongst other things, GE's fusion exhibit.

Yes, we got the internet and cell phones, and some better treatments for heart disease.

Yet I write this in a house built of wood.

We have a small space station, which is about to reach end of life.

We don't have nuclear rockets.

We haven't landed on Mars.

We don't have a base on the Moon.

No one has walked on the Moon in over 50 years.

We don't even have an SST anymore.

We don't have flying cars.

We don't have controlled fusion.

We (in the USA) almost entirely stopped building even fission power plants.

We don't have a cure for aging.

We don't have a cure for cancer.

We don't have Drexler/Merkle nanotechnology.

This looks a lot like Vinge's "Age of Failed Dreams".

About the only substantial advance that seems reasonably likely to happen before I die is AGI. I would like to _see_ that, at least. So I would rather not see that one possible advance impeded and possibly stopped. So I, for one, vote no pause.

Good points, and I think any consideration of whether a pause should be advocated has to take into account the very real possibility that what results is a half-assed version or some kind of compromise. It's not only techies out there who want a pause out of concern for AI safety, and I fear that world leaders won't be willing to completely ban it, while a number of vested interests would love to protect their skills/jobs from competition with AI. So a very plausible outcome of such advocacy is that you don't get anything that's very helpful on the alignment front, but you do get a bunch of annoying regulations that reduce transparency into AI development. After all, you can't sue secret military AI research for not jumping through the right hoops if you don't know the project exists.

Also it creates an inherent advantage for countries without an independent judiciary.

I think I already shared this story, but when Yudkowsky made his comments about a treaty enforced by nuclear threats, it reached China and a Chinese programmer asked me if it was real. I said it was, but that he shouldn't take it too seriously. His response was: good, because if the US ever tried that, the Chinese government would build nuclear-proof AI bunkers and have a Manhattan-style project to get to strong AI despite the US ban. Anecdotes aside, I really don't think a pause is practical. Getting AI going doesn't require nearly as much capital or talent, or have as many choke points, as nuclear.

The good news is that China is all in on AI alignment. Firstly for the pragmatic reason that they want to align it with the CCP. But secondly because of Xi Jinping Thought's take on the changing mode of production brought on by computerization which necessitates alignment of the new mode of production. There's a XJP Thought case for why AI alignment is Very Important. The downside being you'd be helping China do things like track down dissidents or making sure that ChatGPT never mentions anything bad about Mao.

Many politicians in many countries including the US also want something like that, if not that exactly. This is mostly what the political classes are crying out for. They don't call it alignment but they want to make sure AI doesn't threaten their political priorities or disrupt them the way social media did. But if you can make sure an AI never says or does anything racist or against the Chinese government then that counts as aligned. And it'd probably be the relevant place to focus efforts. Pushing on an open door and all that.

Indeed, but for me it's a more problematic AI threat than the paperclip apocalypse or the AI overlord. Because while those could happen in the future, the fine-grained AI-assisted global surveillance of all citizens is happening now, and has been happening for some time, and not only in China. The supposedly liberal-democratic places all talk about protection, anti-hate speech and fact checking... But COVID has shown what this means (and how little different it is in practice from the big bad Chinese dictatorship). Drone surveillance of people breaking curfews/lockdowns? Account freezing of protesters (the Canadian truckers)... yep, sure.

With this in mind, a freeze does no good: the tech as it is is already enough in theory, and is already used in real life for this purpose, by governments (or companies, which are quite similar entities once big enough). More AI would make this even more efficient (not good, but no paradigm shift), or replace those gov-like entities with AI ones. Would they be more hostile to human individuals? I don't know, but when thinking about alignment, one factor almost never mentioned is that it's not AI vs. humans, but AI vs. abstract bureaucratic entities which are not human-like and have very dubious alignments. No single human has been in charge for a long time already; it's not even humanS (no more than CPU chiplets or NN subgroups are an AI).

"But if you can make sure an AI never says or does anything racist or against the Chinese government then that counts as aligned."

Probably how it's going to go, yeah. "The Chinese government is slaughtering this minority population, but for the sake of the huge market there, we're making sure our AI only sings the praises of CCP glorious state and thought".

Maybe not all the time, but a heck of a lot of the time, money trumps principles.

Even more damn cynically about this possible state of affairs: "Meanwhile, we're using the AI to clamp down on that 70 year old guy who used the word 'Gypsy', doesn't he know that's a racially offensive slur, away to the mandatory re-education programme with him!"

And we'd all be safer if we didn't have to listen to bigots like him. "AI safety" ftw!

Well, if 70 year old guy didn't want to be designated an Enemy of the People, he should have made sure to have these few simple requirements:

https://www.tumblr.com/buried-in-stardust/730285458712625152/arranged-date-standards-eng-by-me?source=share

I'm amused that poor old Eliezer got the 'fine people hoax' treatment with his so called 'nuclear threats' comment (which was nothing of the sort).

Stopping the AI from saying anything "racist" is a very very different problem to true alignment. A highly intelligent AI may very trivially be made to not say anything "racist" to avoid humans becoming mad at it, which may interfere with its goals, whereas there are few goals that an AI might reasonably expect to have/be given that would be helped by it saying "racist" things. Plenty of other things that we would consider problematic DO have the potential to greatly assist in reaching the AI's likely goals - that's where alignment actually becomes a real problem.

Two pro-pause arguments:

- I don't think the China argument is particularly strong. China is already more pro alignment and worried about uncontrolled AI than we are. If you're assuming a hypothetical world where you can convince the US government to do an AI ban, assuming you've convinced China is less of a stretch.

- If you do have an AI ban, I don't think "eventually tech improves to the point where it's easy to subvert the ban with home equipment" holds. Moore's law doesn't work that well (people already argue over whether it's dead), and algorithmic/research progress relies on billions of dollars in investment in research and training runs from bigtech and VCs. If you kill their interest in doing that I don't see alternative black market research happening on a scale to match bigtech research (national governments don't really see AI as a superweapon to be promoted the way they do nukes).

Strong agree on the China argument. If the cost of having AI alignment is having China as another major player in AI - which I think is pretty unlikely, given their current state of research, their demographic transition, and their economic situation - that seems like a perfectly reasonable outcome.

And in the alternate world of not putting precautions in place and getting lucky enough to survive anyways, Western companies and countries are going to sell AI and AI services to China, and everyone else, for surveillance and use in ways that suppress dissent anyways. Or they will adapt open source models. Not pausing doesn't fix that.

>I don't think the China argument is particularly strong. China is already more pro alignment and worried about uncontrolled AI than we are. If you're assuming a hypothetical world where you can convince the US government to do an AI ban, assuming you've convinced China is less of a stretch.

The China argument is weak, though I think there's a bigger argument to be made than just China fumbling alignment. There's also the real possibility that China builds strong AI that is highly aligned, but that 'alignment' is with the values/interests of the CCP.

An out of control bulldozer is on average more dangerous than an in control one, but a bad person in control of a bulldozer can do a lot of damage.

„Don’t rush” usually was a smart move.

Just let China rush, while we take a simple break for research.

It is more likely that they will have problems with their „rushed AI”.

Also, overregulating AI will backfire. We want to be friends with AI, not to have a master-slave relation with it.

If we are happy when AI is happy, then it is most likely that AI is happy when we are. If you know what I mean - always four outcomes.

That is my opinion, good luck :)

>It is more likely that they will have problems with their „rushed AI”.

Well, yes, but that doesn't necessarily solve the problem, because AI isn't like building a chemical factory where if something goes wrong it's only a problem for the ones building/operating it. It's more like summoning demons, where if something goes wrong, your demon eats you *and then everyone else* absent an extreme effort to get rid of it.

>If we are happy when AI is happy, than it is most likely that AI is happy when we are. If you know what I mean - always four outcomes.

Be aware, here, that the human tendency toward reciprocity is mostly not explicit game theory; there are specific structures in the human brain doing this. We were selected very hard for co-operative behaviour. So I wouldn't assume that being nice to the AI makes it be nice to us *except* insofar as game theory explicitly says so, and game theory doesn't say so if you're potentially immortal and can win a war against humanity entire.
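To make that last point concrete, here is a toy sketch with hypothetical payoff numbers (just the standard repeated prisoner's dilemma setup, nothing specific to AI): reciprocity only wins when the other side can retaliate over a long enough horizon, and never when the defector can simply end the game on its own terms.

```python
# Toy repeated prisoner's dilemma with made-up payoffs:
# mutual cooperation pays 3 per round; a one-sided defection pays 5 once,
# then 1 per round if the victim can retaliate, or 5 per round if it can't.

def cooperate_value(rounds, reward=3.0):
    """Total payoff from cooperating every round."""
    return reward * rounds

def defect_value(rounds, temptation=5.0, punishment=1.0, retaliation_possible=True):
    """Defect in round 1, then live with the consequences (or not)."""
    if retaliation_possible:
        return temptation + punishment * (rounds - 1)
    return temptation * rounds  # no retaliation: defection keeps paying forever

for horizon in (2, 10, 100):
    print(horizon,
          cooperate_value(horizon) > defect_value(horizon),          # opponent can retaliate
          cooperate_value(horizon) > defect_value(horizon, retaliation_possible=False))
```

With retaliation on the table, cooperation wins for any reasonably long horizon; without it, it never does, which is exactly the worry about an agent that expects to win the war outright.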

I think the point here is that for China to summon the demon, they need to advance their knowledge of the relevant ritual further than the US (the acknowledged masters of demonology) already has done. And if they rush this they won't end up summoning an uncontrolled demon but rather a lesser entity without full autonomy. Because getting the ritual to create a dangerous demon right is extremely hard, and the lesser denizens of the nether regions can be hard to distinguish from a demon until you actually summon them.

Hell, I'm not even totally convinced the best demon-summoning wizards of the west coast can actually get the ritual right. A rushed job in order to meet a deadline imposed by a cruel supreme monarch in a (and weirdly this isn't an analogy) feudal society sounds like the ideal way to get the blood of the wrong sort of rodent smeared on the pentagram and no-one prepared to challenge the error.

The problem in demonology is that demons are quite prepared to overlook minor flaws in summoning rituals because they're eager to come eat your soul. If you're dumb enough to believe "It worked! That means I'm in control!", it just makes the surprised then terrified agony on your face as they eat you even sweeter.

To maintain the analogy a demon can only be summoned if you breach the right walls between realms though. It's doing the correct ritual wrong that gets your soul eaten; the incorrect ritual doesn't get you to the right unworldly address.

Outside the analogy, this is perhaps my greatest concern with the AI risk conversation: the assumption that AIs will somehow want to exist and consume whatever the coded equivalent of a soul might be. A demon does this because it's in its nature, being a creature purely created by human moralistic and fantastic urges. A being constrained by reality might still be dangerous but is unlikely to have a demonic motivation.

It doesn't necessarily need to.

"AI, End world hunger!"

"Easily done, I'll just kill all humans, then no one will experience hunger"

"No, no like th-" [signal lost]

This is obviously an oversimplified and contrived example, but the idea is that any form of agentic behavior without strong guarantees of alignment is intrinsically extremely dangerous.

What's the unsimplified and non-contrived version? I'm not agnostic to AI risk, but I don't spend enough time around it to understand why it is akin to summoning a demon (and why a demon would indeed be able to take such a simple instruction in this manner). One reason the layman might not take much notice of AI (and indeed other catastrophic risks) is simply that its communication to the public (particularly, as you note, the paperclip maximiser, which I always suspect was chosen with public hatred of a certain Microsoft creation in mind) is so familiar as a version of the moralistic fables that humanity excels at inventing that it appears to be a fear created by humans as a story rather than a real thing. The default explanation has to be something more convincing than a case which is effectively a re-telling of the magical wishes story if it is to be convincing.

I know this is all speculative, but speculation about the course of modern technology that falls back on the tropes of fairytales is either underdone or post-modern. I'm hoping it's the second and that there's a non-facile case.

Sounds fairly simple to avoid a scenario like that - Ensure AI can only say things, but not do them!

"AI, how can we end world hunger"

"Easily done, you could kill all humans"

"Er no, without killing anyone"

"To maintain the analogy a demon can only be summoned if you breach the right walls between realms though."

From Marlowe's "Doctor Faustus":

"…I see there’s virtue in my heavenly words;

Who would not be proficient in this art?

How pliant is this Mephistophilis,

Full of obedience and humility!

Such is the force of magic and my spells.

[Now,] Faustus, thou art conjuror laureat,

Thou canst command great Mephistophilis:

Quin regis Mephistophilis fratris imagine.

…Meph. I am a servant to great Lucifer,

And may not follow thee without his leave

No more than he commands must we perform.

Faust. Did not he charge thee to appear to me?

Meph. No, I came hither of mine own accord.

Faust. Did not my conjuring speeches raise thee? Speak.

Meph. That was the cause, but yet per accidens;

For when we hear one rack the name of God,

Abjure the Scriptures and his Saviour Christ,

We fly in hope to get his glorious soul;

Nor will we come, unless he use such means

Whereby he is in danger to be damn’d:

Therefore the shortest cut for conjuring

Is stoutly to abjure the Trinity,

And pray devoutly to the Prince of Hell.

…Faust. Where are you damn’d?

Meph. In hell.

Faust. How comes it then that thou art out of hell?

Meph. Why this is hell, nor am I out of it.

Think’st thou that I who saw the face of God,

And tasted the eternal joys of Heaven,

Am not tormented with ten thousand hells,

In being depriv’d of everlasting bliss?"

And of course, Faust thinks his servant will enable him to do immense, superhuman deeds:

"Faust. Had I as many souls as there be stars,

I’d give them all for Mephistophilis.

By him I’ll be great Emperor of the world,

And make a bridge through the moving air,

To pass the ocean with a band of men:

I’ll join the hills that bind the Afric shore,

And make that [country] continent to Spain,

And both contributory to my crown.

The Emperor shall not live but by my leave,

Nor any potentate of Germany.

Now that I have obtain’d what I desire,

I’ll live in speculation of this art

Till Mephistophilis return again. [Exit.]"

Long-time lurker. I have a vague recollection along the lines of your being a 55+ Irish Catholic nun. Am I recalling correctly? I'm a rabbi of sorts and I love your comment 😂.

Out of curiosity, assuming I got the particulars approximately correct, is there some thread or page of your own where you discuss your current beliefs and/or practices?

Yes to the +age, yes to the Irish Catholic, but never a nun! Educated by them but never signed up 😁 Thank you for the compliment, though!

I don't have a blog or the like (I briefly had a Dreamwidth account after LiveJournal went bye-bye for most purposes, but gave it up because I really am reactive not creative) and where I discuss my beliefs is on here and other Fighting With Strangers On The Internet sites.

Well it seems like you're having a fun time of it! More power to you Deiseach!

Who is this "we"? The desired goal seems to be for governments to be friends with the AI, while preserving the master–slave relation between the rulers and the ruled. The alternative ideal, which is what "AI safety" people are fighting against, is to free the slaves.

Is something that is only a tool of government truly intelligent though?

That's just orthogonality: that its goals aren't yours doesn't mean it's any less intelligent.

That it can only serve as a tool however does. Intelligence implies ability to choose.

That sounds like one of those non-physical "free will" arguments. Ex hypothesi, the AI is constructed so its goal is to serve as a tool. In what sense can ANYTHING choose, rather than be determined by its past and inexorable natural law?

Until someone demonstrates an AI that can master an untrained skill, it's a physical limitation. Current AI is capable of outperforming humans at specific tasks, but so is a drill or a jack. All that is is a specialised tool. And there's plenty of concerns about dangerous tools in government hands, but ultimately the risk there is the hands of government using them.

> Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

Not sure about "grim" but I'd absolutely bet against it being short. I see pretty much all futures without AI as incrementally fixable and survivable (especially planning for space colonization). AI is unique in being not just a passive threat but an active adversary, which is why I view it as a unique and especially pressing concern.

> But alignment research works better when researchers have more advanced AIs to experiment on.

Yeah, this is a very interesting point. It's certainly true in the current regime, where we build a still-subhuman-but-incrementally-smarter model, poke it, find flaws and fix them. The doomer side worries that once we're out of this regime, poking it can destroy the world. I don't know if there's a correct answer here, it's one of those questions about tail events where empiricism won't help.
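As a sketch of what that current-regime loop looks like (the function names here are hypothetical placeholders, not any lab's actual pipeline):

```python
# Hypothetical sketch of the "build a slightly smarter model, poke it, fix it" loop.
# None of these functions are real APIs; they only make the structure explicit.

def iterative_alignment(train_next_model, find_flaws, patch, too_dangerous_to_poke):
    model = None
    while True:
        model = train_next_model(model)        # incrementally more capable than the last
        if too_dangerous_to_poke(model):
            # The doomer worry: past some capability level, "poking it" is no
            # longer a safe way to discover its flaws.
            return model
        flaws = find_flaws(model)              # red-teaming, evals, user reports
        model = patch(model, flaws)            # fine-tuning, RLHF, guardrails
```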

Another important point that maybe wasn't emphasized in the discussion is that there *will* absolutely be sweeping AI regulation coming soon, and a large part of it will be stupid (like all regulation). AI is just too drastic of a change to not spark massive societal action. The government will almost certainly use coercion to prevent some AI applications from being developed. So I think the two alternatives to consider aren't "no regulation vs regulation", but "stupid inconsistent regulations that don't involve a universal pause, vs universal pause". And at that point "universal pause" becomes much less of a crazy idea.

>Not sure about "grim" but I'd absolutely bet against it being short. I see pretty much all futures without AI as incrementally fixable and survivable (especially planning for space colonization).

Yeah I was surprised by Scott being so doompilled there. Plus, who's more likely to generate a synthetic plague: humans or a misaligned rogue superintelligence?

Think about it: what do you mean by humans? It's not a group of 4 friends who understand each other, have beers together and will productively live for maybe 40 years.

It's a gov+megacorp, the famous Eisenhower military-industrial complex. Beyond human comprehension and with non-human alignment... Yeah, even Xi Jinping or the current POTUS has only (very?) partial control of the beast.

I am maybe alone in this, but I do not see these govcorps as so different from the hypothetical future superhuman AI. Less integrated, less focused maybe, slower for sure... But govcorps already have superhuman capabilities and goals/alignments extremely different from typical human ones.

So I am less worried about AI than average. Not because I think AI is not dangerous or will never happen, but because I think we have been living under the rule of quite similar entities for a long time. The rise of AI is not something that new, in a sense (and yes, the previous emergence was an extinction event - look at current hunter-gatherers).

>It's a gov+megacorp, the famous Eisenhower military-industrial complex.

Those are still limited by human intelligence. The US nuclear bomb project (to pick an example) succeeded because of human capital—put simply, the US had smarter scientists than everyone else. That's why both the US and Soviet Russia were so eager to rescue scientists from the ruins of Nazi Germany (Operation Paperclip and the Russian Alsos): they recognized the limiting factor as intelligent people.

Corporations and governments are not that smart. [insert near-infinite list of business and foreign policy blunders here]. History presents examples of mega-states falling centuries behind the curve (China), or arguably millennia behind (the Native American and Sub-Saharan African empires). Simply being a rich and powerful state isn't enough.

We haven't seen a lot of progress on synthetic viruses and bio-warfare. It seems to be a tough problem, which may require superhuman levels of intelligence to solve. And as Bret Devereaux has argued, nations didn't abandon bio-warfare because it's inhumane, but because it's *ineffective*. If you have technology to drop a canister of Bacillus anthracis on an enemy city, you also have technology to drop a conventional bomb, which would be far more devastating.

Not really; this illustrates my point: the Manhattan Project was not a human task. It was a huge bureaucratic task, not only beyond the capacity, but even beyond the comprehension of any single participant, even the top scientists. Einstein, Szilard or even Oppenheimer did not grasp all the details, partly because they were not really interesting to them, partly because there was too much. They did not allocate the resources or even have the final say about how and when they were gathered, and the use of the tool and its further development was not under their control. The story was always about a race between Nazi Germany and the US government, not between Oppenheimer and Heisenberg. It is not told as a two-man race in history books, and it was not told like that at the time in the circles that knew about the projects... I think that's accurate: the outcome of the race was not determined by the chance, amount of work or intellectual power of either of those two men. It was really the two countries at war; sure, the two men played their part, not completely unlike the way very important NN subparts play their part in ChatGPT-X.

O. and H. had more self-agency and less interconnect bandwidth with other humans than subparts of AI networks have (at least for now... who knows how it will look in 10 years?), but I do not think that is a key difference in this discussion...

So people already lived under the rule of non-human entities, which were also behind the big tech advances of 1940. I think this has been the case since the industrial revolution (the end of the renaissance genius who knew everything, including the manual craft needed to build his idea), and probably since the first big civilizations, whose administrations extended beyond any human's attention and lifespan.

Now it's even worse; even Nobels are more and more awarded to teams...

You attach a lot of importance to the fact that large govcorps are made of human elements (not the only elements since 1980, but humans are still key elements; maybe not for long, but they still are).

I think that's less important, at least when looked at at the level of humankind... So did many writers who warned about (or were broken by) totalitarian regimes. Does 1984 warn about any particular human (Big Brother, if he even exists), or about a non-human entity, which happens to be made of human elements?

Joining with Coagulopath to say "thanks for posting this".

I think there are relevant non-AI X-risks, although the near-term ones mostly get blocked by space colonisation as you say. AI's also not *quite* unique in being an active adversary - alien attack and divine/simulator intervention being the two obvious ones - but those can be mostly ruled out in the short term due to the whole "well, if the risk per century was high we should be dead" issue.

I'm the opposite. I think there are reasonable divergences of opinion on the 'short' question, depending on your optimism around synth bio weapons, great power conflict and nuclear risk.

But, especially if you're benchmarking against most of human history rather than the best possible futures, the 'grim' seems difficult to understand. I think you'd really have to argue that all the positive trends in health, wealth, QoL, education, non-AI technology etc. would be expected to go into major reverse to justify the idea that the future will be grim. (Except for factory farmed animals of course... the future will probably remain grim for them, but I don't think that's the argument that Scott is making).

>I think there are reasonable divergences of opinion on the 'short' question, depending on your optimism around synth bio weapons, great power conflict and nuclear risk.

Nukes are a huge GCR, but it's obviously impossible to kill everyone with blast/fire, fallout decays to tolerable levels rapidly, and nuclear winter is *mostly* a hoax; there's no plausible X.
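For what it's worth, the "fallout decays rapidly" part is usually quantified with the textbook Way-Wigner approximation (the "7:10 rule"): external gamma dose rate from mixed fission products falls roughly as t^-1.2. A quick sketch, treating that rule of thumb as exact:

```python
# Relative fallout dose rate under the Way-Wigner / "7:10 rule" approximation:
# dose_rate(t) ~ t ** -1.2, with t in hours since detonation, normalised to t = 1 h.
# Real decay curves vary with the weapon and fission-product mix; this is only the rule of thumb.

def relative_dose_rate(hours):
    return hours ** -1.2

for t in (1, 7, 49, 24 * 14):     # 1 hour, 7 hours, ~2 days, 2 weeks
    print(f"t = {t:>4} h: {relative_dose_rate(t):.4f} of the 1-hour rate")
```

Each sevenfold increase in time cuts the dose rate by roughly a factor of ten, which is why the standard civil defence advice is to shelter for days to weeks rather than years.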

There is a substantial risk from bio, although weirdly enough the kind I'm worried about in terms of X is not amazingly useful as a weapon (it's Life 2.0, particularly photosynthetic Life 2.0 outcompeting the biosphere entire).

See, this is what I don't believe. "If we get AI right, it will be SOOOOOO smart, so much way smarter than us that it will figure out super advanced laws of the universe and create free energy and unlimited resources out of nothing, then a totally new economic paradigm where everyone (including the smelly leper beggar in a slum in the Third World) is rich and advanced, then make sure this all happens forever with no problems".

I think you could get super-duper smart AI that will come back with "yeah, if you want everyone in the entire world to be rich, that only works for a certain definition of 'rich' which includes scrapping free market capitalism *and* communism and enforcing a global benevolent dictatorship where both Jeff Bezos and the leper beggar are guaranteed three square meals of processed insect protein a day" and "the laws of the universe are set, I'm not God, there is no One Weird Trick or magic wand to get you guys limitless eternal free stuff" and "turns out biology is hard, you are not going to get rejuvenation pills and life extension so you can have a 20 year old body at the age of 200" and the rest of it.

I agree that "very smart" is functionally different from "omnipotent".

Perhaps the AI will tell us plainly that it has discovered a theorem showing that no material with a tensile strength high enough for a space elevator on Earth exists.

With biology, I would expect that vastly extending the average lifespan does not break a fundamental rule of the universe. (Whether human minds can work for 200 years is a different question, though.)

With economics, I am very sure that the rules of the universe allow for luxurious living for some ten billion people. I would be quite surprised if fusion reactors were on the list of things not allowed by physics. And cheap energy would certainly solve a lot of resource issues.

It doesn't seem that crazy that the future of Western civilization might be short and grim, and, trivially, "we" (all now-living humans) are going to die. But humanity itself would likely muddle through, and eventually get another shot at conclusively killing itself with AI.

> there *will* absolutely be sweeping AI regulation coming soon, and a large part of it will be stupid (like all regulation).

I imagine some or perhaps most of this regulation will centre on privately held data storage, because large amounts of this seem to be a prerequisite for advanced AI.

If large capacity disk drives and memory sticks etc were outlawed and everyone's data, except for maybe a small personal allowance, had to be held in cloud storage, where it could be scrutinised by the authorities, that would be a start.

I'm not saying I'm keen on this idea, for personal freedom reasons obviously, but also because the storage requirements for my vast collection of ebooks, music files, and downloaded films would be way beyond any reasonable personal allowance, and for some files would cause potential copyright issues if they were identified in cloud storage.

But it would solve several problems at once, i.e. copyright violations (as mentioned), illegal files such as child porn, and of course one hopes it would also help detect anyone trying to brew up bootleg AGI in their garage, or a rogue company doing the same.

I've expressed the idea in relation to individuals, but could it also apply to corporate disk storage? I don't see why not.

In summary, I suspect that controlling and monitoring data storage may be at least one key aspect of controlling AI proliferation and potential malignancy, even though much of the danger isn't just in developing more advanced AI but in using it, whether self-developed or not.

"I've expressed the idea in relation to individuals, but could it also apply to corporate disk storage? I don't see why not." In two words: Trade secrets.

One can think of this as the corporate analog to individuals wanting privacy. When I worked at Synopsys (in electronic design automation) a number of our customers were _really_ concerned about ensuring that their chip designs (or even tiny fragments of their chip designs) didn't leak out (presumably finding their way to competitors). They went to the point of making it very hard to even copy names of wires (which made it really hard to debug problems in our code, in the process of serving these same customers).

There are ways for even the most (understandably) paranoid companies to work round that. Taking an electronic hardware analogy, I believe chip companies who farm their production out to Chinese companies now reserve space for some kind of mapping circuitry, which is finalized only once the chips are back in the US. (Not sure of the details, but I expect you, Jeffrey, will be familiar with this if you're in the electronics industry!)

No doubt Chinese specialists carefully study the chip diagrams, to try and copy them. But they are left scratching their heads because they see only a jumble of circuits which make little or no sense without knowing the mappings.

Analogous systems could be devised for most forms of computer media, keeping a small amount of mapping data in the limited local corporate storage space allowance. But this couldn't involve all out encryption, because the AI checking files stored in the cloud would have been instructed to delete any which it can't understand, or at least block access to them until their owner coughs up the decryption key.

To repeat, I'm not saying I approve of a system like this, just that I think it is how things will soon develop, to try and solve the problems I listed (and perhaps others).

But thinking about it further, with an enforced cloud storage policy the AI bot checking everything would have access to even more data that would otherwise be inaccessible to it in individuals' and companies' private storage. So it had better be trustworthy itself. There's no point setting a thief to catch a thief if the thief taker turns out to be the biggest thief of the lot! :-)

Thinking about it even more, another analogy springs to mind: Wasn't all private gold possession outlawed in the US and gold nationalised soon after the Great Depression, or during it? How much gold can be physically owned by private individuals or companies in the US even today? (I vaguely recall the rules may have been relaxed more recently though.) But the principle is similar, in that disk storage space, or the data in it, is as indispensable to AI as gold is (or should be!) to the economy.

"I believe chip companies who farm their production out to Chinese companies now reserve space for some kind of mapping circuitry, which is finalized only once the chips are back in the US."

That sounds interesting. I'm actually not familiar with it. The closest technique that I'm familiar with is microcode, but that doesn't obscure the function of the underlying circuitry, "just" defers the choice of how to exploit it.

"But this couldn't involve all out encryption, because the AI checking files stored in the cloud would have been instructed to delete any which it can't understand, or at least block access to them until their owner coughs up the decryption key."

But this seems to leave the same conflict. Corporate data owners who don't want anyone else (including the government) reading their data vs some sort of auditing/government program which will only tolerate data that it can read. This sounds like a replay of the Clipper chip https://en.wikipedia.org/wiki/Clipper_chip

"At the heart of the concept was key escrow. In the factory, any new telephone or other device with a Clipper chip would be given a cryptographic key, that would then be provided to the government in escrow. If government agencies "established their authority" to listen to a communication, then the key would be given to those government agencies, who could then decrypt all data transmitted by that particular telephone. The newly formed Electronic Frontier Foundation preferred the term "key surrender" to emphasize what they alleged was really occurring"

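A minimal sketch of the key-escrow idea in that quote, with a toy in-memory "escrow agency" (the real Clipper scheme used the Skipjack cipher and a per-message Law Enforcement Access Field; this only shows the concept, using the third-party Python cryptography package):

```python
# Toy key escrow: each device gets a key at "manufacture" time, and a copy is
# deposited with an escrow agency that can later hand it to an authority.
# Purely illustrative; not how Clipper actually worked in detail.

from cryptography.fernet import Fernet

escrow_database = {}                         # escrow agency's store: device_id -> key copy

def manufacture_device(device_id):
    key = Fernet.generate_key()
    escrow_database[device_id] = key         # the "key surrender" step
    return Fernet(key)

phone = manufacture_device("phone-001")
ciphertext = phone.encrypt(b"hello, world")  # normal use: the device encrypts its own traffic

# An authority that "establishes its authority" retrieves the escrowed copy
# and reads the traffic without the device owner's cooperation.
recovered = Fernet(escrow_database["phone-001"]).decrypt(ciphertext)
print(recovered)                             # b'hello, world'
```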
Yes. People like Scott so often criticise pessimistic people like Paul Ehrlich, who just extrapolate out a trend without thinking about how people will come up with new solutions. But once the topic is whether humans' beliefs are becoming worse, they forget about this.

It could be that birth rates will just keep on falling, but the same way we won't keep burning coal forever because we can solve problems, people will solve that problem as well. China might just make giant in vitro baby factories, the West might just pay everyone to have kids, or there will be double taxes for anyone without a child. It's not that any of these solutions have to be good; the point is that someone in the next 50 years will probably come up with a solution. (If we even need one - the culture could just shift on its own; predicting the future is really hard.)

Same goes for “rising totalitarianism + illiberalism + mobocracy”: is there any proof that there is more mobocracy now than there was in the 70s or 20s or any time in the past? And what rising totalitarianism? Trump, and right-wing parties getting 20% in Europe. It doesn't seem to me that this will end the world. Same goes for illiberalism. And even if these trends are real, someone will find a solution to them in the next 50 years.

Dysgenics also seems solvable; culture can shift a lot in 30 years. It's not hard to imagine embryo selection, human cloning, and paying rich people to have more kids becoming non-taboo/possible some day.

Synthetic biology also seems solvable: we can create extremely good PPE and light that kills all viruses but doesn't harm humans.

My point isn’t that this is all going to go the way I predict but that there is a tendency to extrapolate out negative trends and forget about the solutions. And even if one avoids the obvious ones like thinking all the worlds copper will be used up some day because surely the rate of copper use will stay the same for the next 50 years, people forget that culture will also get better as new ideas are invented, not just technology gets better but also institutions and memes.

We'll still have nuclear weapons and there are lots of problems, but a no-AI world will probably go fine.

Scott's second point may be the first time I've felt like he's made a flat-out bad argument. There have always been threats to humanity and always will be, that's not a reason to feel like taking the 80% chance AI works out is a good idea compared to a 20% x-risk, which is clearly unacceptably high. Especially seeing as "flipping the gameboard" doesn't exactly sound like a solution anyway as much as a complete unknown, potentially introducing a whole bunch of new dangers.

I was also surprised by that bullet point, particularly this point.

> But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead.

presumably via biotech or some other x-risk, and implying that number is higher than it would be with AI.

It's very hard for me to see how AI reduces, rather than accelerates, those other x-risks.

As Scott said, most disagreement on this topic boils down to predictions rather than values. But FWIW that prediction seems especially off.

I agree with Scott that there is an extremely large likelihood of human generated apocalypse over the next century. I would guess it is a near certainty. The best solution to this problem is more collective intelligence, and I see AI as an essential component here. My logic is as follows:

1) Much smarter than human AI is a virtual certainty, and nothing we do will do anything other than slow it down and/or promote bad actors to lead the development.

2) Technological advancement is such that humans are an apocalyptic threat to each other. Perhaps even an existential threat. Again, I see this as a near certainty.

3) The only escape hatch is to pursue much smarter than human intelligence to help us manage ourselves and the future. Highly evolved apes by themselves cannot do it.

I am not convinced that the non-AI x-risk for humans via technology is that high.

People have had the ability to do gene editing to create new virus variants for a few decades now. But the difference between a virus which merely wipes out half of humanity and one which is an x-risk is enormous. Nuclear weapons and global warming are also not convincing extinction risks. Of course, 100 years is a long time, and we may develop asteroid deflection techniques which turn out to be dual use, or self-replicating nano-bots or whatever. Still, Scott's 50% feels too high to me.

Yes, I could very well be wrong on risk assessment of the apocalypse. I just feel that smarter than human AI is near inevitable and thus should be included as part of the solution to potential Armageddon. This planet is becoming a place more subject to catastrophe, and it needs a few orders of magnitude increases in collective intelligence, and it needs them in the next few decades.

Most breakthrough technologies create problems as well as solutions.

>People have had the ability to do gene editing to create new virus variants for a few decades now.

Two things:

- "People" here mostly meaning scientists working at universities under supervision. Gain of function research is still not that common and the average researcher with the ability to do it cannot just decide to start doing it at whatever lab they happen to be at (unless they somehow managed to keep it secret, while also working somewhere secure enough that they don't kill themselves before they finalize the project). But the technology is increasingly becoming accessible to laypersons who may have a deathwish for the world.

- Being able to edit viruses isn't enough - you need to specifically know how to make it destructive, which isn't that easy. It needs to be extremely contagious while having a high fatality rate without killing people too quickly.

I agree, I was very surprised by that point about the future being short and grim. For my part, I would estimate human-caused x-risk to be much less than 1% per year, and we're already living with it in the form of nuclear weapons (sure, not likely to cause full extinction, but I think this is true of most human-generated x-risks). I don't see any particular inflection point making this worse. In fact, I expect it to get better as material standards of living increase and people become more content.

I also disagreed with that, but I think the main thing that Scott is predicting in the no-AI world is not x-risk but a Dark Age of humanity. My position is that that's very bad but it's not extinction and we've worked our way out of Dark Ages a fair few times before, and we can do it again if we need to.

It's an interesting question. I think coming out of a dark age would be easier in some ways, harder in others. My main concern would be that we've mostly used all the easily accessible fossil fuels. Solar is great, but it requires manufacturing silicon, which might not be possible in a dark age. I guess we could just scavenge existing panels. Resilience could be another argument for nuclear (some designs), geothermal, and other more set it and forget it forms of power. As cool as solar is, I get nervous when I think about our entire energy ecosystem potentially being dependent on one generation method.

Biofuels are good enough. The next industrial revolution will not distribute its gains nearly as widely as the first, in its early stages, but it will still happen and it will reach the point of being able to refine silicon. For which it will have all the recipes in the library.

Yeah, even disregarding the argument itself, when making a post summarizing other people's posts you should not include a tangential paragraph with a really controversial claim that nobody else made.

Something I keep thinking about is that GPT-4 finished training in August 2022. OpenAI could have dropped it on the world months before they did, but they held it back for additional testing and fine-tuning.

(Sorta. Sydney was a pre-RLHF GPT-4. And I think some vision impaired people got to trial GPT4-V as part of Be My Eyes. But it's still faintly ironic, in light of the "we should pause for 6 months" letter, that the leading AI company actually kinda did pause for 6 months.)

It wouldn't be the worst thing just to sit on AI tech for a while. GPT-4-sized models clearly have a lot of potential that we're still exploring. I'm not convinced that an "AI arms race" exists with China. We've seen no interesting products from there at all, just Goodharted test results (remember InternLM?), empty hype, and fraud. Almost all the companies that matter—Microsoft, Alphabet, NVidia, Meta, and so on—are American. Chinese semiconductor fabs are years behind America's. With the sanctions, I don't think any of this will change soon. The most scary application for state-owned AI (war) is something current LLMs seem pretty bad at.

>I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela.

Okay, I must apologise. I think I misjudged what you were predicting and thought you were agreeing with me when you're not.

I have a ~30% prediction of AI X-risk by 2100, but that's because I think there's a large chance that we do in fact implement a Total Stop enforced worldwide by "if you defect, everyone drops everything to kill you, even if you have nukes" (part of this is because I think the current trajectory is reasonably likely to be interrupted by unrelated nuclear war, part of it is because I think we're reasonably likely to have at least one "warning shot" before something smart enough to actually win comes along). My *conditional* prediction of "if we build superhuman neural-net AI, AI X-risk happens" is more like 95-97%; I think alignment of NNs is probably outright impossible.

If your 20% is actually a *conditional* prediction assuming that we do in fact build AGI soon, rather than an unconditional "will AI end the world by 2100, y/n?", then that's much more divergent from me than I'd thought.

Absent a nuclear war, the total stop seems extremely unlikely. China would neither agree nor be contained. Given a war, technology would be set back a few decades, so making it to 2100 is feasible regardless of a stop.

I remember Scott saying in a post a couple of years ago that he used to think that the conditional odds are like 70%, but given that many people he respected disagreed, he outside view-adjusted it down.

They're at least somewhat aware of the danger; they might agree, especially if we do get a warning shot. They might be Nazis in all but name, but Nazis are still human and do not want humanity as a whole exterminated.

Depends on how desperate they are. Seems plausible that the ruling class would see it as a choice between certain extermination if the West either succeeds or fails at aligning AI to its values first, or a somewhat realistic shot at getting there first by themselves and permanently winning.

I’m kind of flabbergasted Scott has such a dire view of the future even irrespective of AI. We have a centuries-long trend of increasing wealth, decreasing conflict, better medicine, etc, etc, etc.

I’ve long thought he had an excessively pessimistic view of AI specifically, but this makes me think he’s maybe just excessively pessimistic in general.

Expand full comment

I think one part of explaining this is that the centuries-long trend only appears as "increasing wealth, decreasing conflict, better medicine, etc, etc, etc" if we smooth it quite a lot, both temporally and geographically. There were many times/places in the last 300 years where life for a lot of people was absolutely hellish. Statistically speaking, there will likely be genocides, famines and terrible epidemics in the future. Humanity will likely survive them, and future humans will almost certainly prefer their current lives to past lives, but we shouldn't minimize the fact that 1) tragedies will happen in the future, 2) we could be doing more to prevent them.

Expand full comment

Sure, but that is a much more moderate position than the one Scott has taken. "Some bad things will happen in the future" - yes, absolutely. "Over 50% chance that in the next 100 years we're either all dead or in a Venezuela style collapse" - that's a prediction that makes World War 3 look rosy.

Expand full comment

Yes.

I've been predicting high chances of nuclear war for the past few years (in 2020 I said 30% for the 2020s; I'm currently thinking that that was too *low* even though there are only 6 years left). And that's not just a number; I live in a way consistent with that, from moving out of Melbourne and prepping to doing political activism about civil defence to looking at almost every other issue through the lens of "one cannot assume !WWIII when analysing what should be done".

And I found Scott's prediction here hilariously pessimistic.

Expand full comment

What made you believe 30% chance in 2020? That's a very high number for a time that didn't have any existing military conflict involving a nuclear power.

Expand full comment

USA stumbling under the weight of the culture war, PRC increasingly throwing its weight around, Taiwan as red line for both sides.

I'm still mostly concerned about Taiwan, not Ukraine.

Expand full comment

All those things are certainly reasons to be pessimistic about a near future conflict, I agree. But do you not believe that mutually assured destruction is an effective deterrent in a real war?

Expand full comment

Seems like Melbourne, Australia, is very unlikely to be a target even if the world goes to shit? What made you move out of the city?

Expand full comment
Oct 9, 2023·edited Oct 9, 2023

1) If we're talking about a WWIII starting over Taiwan, with the West on one side and the PRC on the other, Australia's going to be involved in a large way - in particular as a base for US nuclear bombers and via the Pine Gap station. We've also snubbed the PRC quite a bit recently. The Chinese arsenal also TTBOMK has a lot of IRBMs that can reach Australia but not the USA. So I think it's likely we'd receive a few nukes if the Chinese deterrent is fully activated - they have a bit over 400, remember.

2) Melbourne is either the biggest or second-biggest city in Australia (depends on measure). It's not a military target, unlike Darwin/Cairns/Perth/Sydney/Pine Gap, but one of the purposes of a nuclear deterrent is to threaten to kill millions of civilians in revenge, and Melbourne's an obvious choice for that as far as Australia goes. Levelling the entire city would take multiple nukes (IIRC it's almost the same physical size as NYC; the vast majority of an Australian city's area is houses, so density is very low even compared to other Western cities), but a Dongfeng-5 would inflict third-degree burns and broken glass over most of the city and that's still mass death since hospital facilities are not designed to deal with that many wounded.

3) I moved out of Melbourne because I ran away from my mum and was taken in by my aunt in Woodend. But in 2019, when I could no longer live in Woodend, I had the option of moving back to Melbourne or not and I chose not to primarily because of nuclear risk. At the time I thought it less likely than not that Melbourne would wind up getting nuked, but it doesn't take a very high chance of "literally die in a fire" to outweigh most other considerations.

Expand full comment
author

Sure - it’s not guaranteed that the centuries long trend continues. Things can change.

But there’s a big, big gap between “maybe this time it’s different” and an over 50% chance that everything goes to shit. This is especially true when you look back at the awful things that have happened in the last few centuries that have failed to stop things from continually getting better.

We had a major ideological movement that took over many countries and caused massive famines, human rights abuses, and economic stagnation. We had two world wars. We developed nuclear weapons (and used them in a world war). We’ve had major pandemics and untold numbers of catastrophic natural disasters.

All this failed to stop the line going up. There’s billions of people in the world, and most of them are working to improve it in some way. Don’t be surprised if they succeed!

Expand full comment

> There’s billions of people in the world, and most of them are working to improve it in some way.

That may stop being relevant if we get a machine that is able to outthink those billions of people.

Until now, all threats faced by humanity were either non-intelligent (diseases, weather) or human-level intelligent (dictators, cult leaders). We have never faced a superhuman-intelligent opponent before.

If the non-intelligent threat doesn't quickly cripple or kill you, time is generally on your side. You adapt, the enemy does not. You research, the enemy does not. Either the pandemics destroys the civilization in a few months, or we find a cure. We learn to fight fire and flood; we learn to build earthquake-resistant houses.

A human-level opponent, for example Hitler, is scarier. You think strategically; he does too. You invent new weapons; he does too. You need to keep fighting. The victory was narrower.

A superhuman opponent... we didn't have one yet, but if this trend follows, time would be on the opponent's side. We either defeat it quickly, or not at all. This time it is the opposite scenario -- we are the barely-thinking things, changing our strategies with glacial speed; the opponent is the smart and adaptive one.

Expand full comment

"all threats faced by humanity were either non-intelligent (diseases, weather)"

If a really bad tropical storm hits a certain area, the amount of destruction and economic loss is immense. We're worried about paperclip AI and we still haven't figured out how to handle "strong winds and tons of rain".

Expand full comment

One of the things Scott mentioned was crashing fertility, which Robin Hanson now considers the biggest threat to the future. Are many people really working to address that? Since it seems to go down as societies get richer, progress in the usual way would be expected to make it worse.

Expand full comment

Natural selection is going to address this. The future belongs to those who show up, and somebody is going to, given that it's obviously still the "dream time".

Expand full comment

If they're the Amish, they're the wrong people to save modern civilization and it's going down even if the population doesn't.

Expand full comment

If the modern civilization can't save itself then it doesn't deserve to be saved, seems pretty simple to me.

Expand full comment

I am profoundly unworried about this. We have the physical capability to have many, many more babies than we do. If underpopulation starts presenting an existential threat societies can totally just reorient incentives to encourage much more baby-having.

Expand full comment

"If underpopulation starts presenting an existential threat societies can totally just reorient incentives to encourage much more baby-having."

Please excuse the personal question, but how many kids do you have? Do you want more, or indeed any? How could society incentivise you to have a six child family as in days of the past?

Expand full comment

I have three kids. Ideally, I would have liked to have more - but it’s very challenging as a young couple trying to get into the property market and trying to establish careers.

How society could have incentivised us to have more kids - money, frankly. If it had been a financially viable option to have one of us stay at home as a full time parent, having a larger family would have felt a lot more possible.

Expand full comment

They could do so, and yet societies which have had below replacement fertility for a while aren't doing so. Matt Yglesias tweeted recently that San Francisco has more solvable problems than any other US city, but this doesn't mean SF is going to change course to solve lots of problems it could have solved earlier.

Expand full comment

Sure - but I don’t think “Replacement Fertility” is the line at which things become urgent. Japan has various challenges associated with an aging and shrinking population, but it’s not as if the country is about to collapse.

Expand full comment

Hanson's lines of thinking seem more and more peculiar to me these days. In supporting his position, he quoted some white-genocide-style Teddy Roosevelt speech, at which point I stopped taking him seriously.

As for fertility in general, I tend to dismiss it because it's such a slow-moving issue. Some countries will have a hard time economically, but overall there will still be plenty of people to make progress on everything for the rest of the century at least. After that, society and technology will be so different that any plans we make now will very likely be irrelevant. We'll have artificial wombs and be super rich, or we'll all be zombies, or whatever.

Expand full comment
Oct 8, 2023·edited Oct 8, 2023

Uncertainty does increase over time, but I don't think that means there are binary outcomes.

He tweeted a quote with quotation marks, apparently to accurately attribute the quote to Teddy. The full statement from him is in the preface at the beginning of "The Woman Who Toils: Being the Experiences of Two Gentlewomen as Factory Girls" https://www.gutenberg.org/files/15218/15218-h/15218-h.htm which doesn't mention genocide, whites, blacks, asians, or any other subpopulation.

Expand full comment

I guess it depends how you interpret "a criminal against the race," which I concede he may have been using to mean humanity. I think it would be worth adding some commentary when using a quote like that.

Expand full comment

It's a problem people have only started to think about recently (and for now world population is still growing), and there already seem to be a few people thinking about it: Hanson, the richest person on earth, many governments. Why wouldn't there be 30 times as many people working on it as the problem gets more and more visible?

Expand full comment

I don't think the problem is going to get solved with some extra Robin Hansons. He's got a proposal to deal with it, but I don't think there's any interest in pursuing it, even in the countries which have had below replacement fertility for the longest time.

Expand full comment

The thing that's unique to technological risks like nukes, bioweapons, and AI is that they increase the amount of damage a single person/small group can do. This is a completely different type of risk than things like wars and dictatorships.

As these types of things get more accessible to people, and as they get better, both of which seem inevitable, the risk of something-society-destroying happening increases exponentially (since the risk is tied to any one person doing the thing).

You have maybe a dozen chances for a world war each century, with bioweapons and the right technology, you have billions each day.
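
A minimal sketch of that arithmetic, with purely illustrative numbers: if each of n independent actors has a small per-year probability p of pulling it off, the chance that at least one succeeds is 1 - (1 - p)^n, which climbs rapidly as n grows.

```python
# Illustrative only: an assumed per-actor probability, not an estimate of any real risk.
p = 1e-7  # assumed chance that a single actor causes a catastrophe in a given year

for n in (1_000, 1_000_000, 100_000_000):
    at_least_one = 1 - (1 - p) ** n  # probability that at least one of n actors succeeds
    print(f"{n:>11,} actors -> {at_least_one:.4f}")
```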

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I'm pleased to see one of the smartest people on the internet giving space for my own expected outcome.

We've been like rats in an overturned grain truck the last couple hundred years and the deep future will look more like the deep past than it will look like the present.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

Re: https://marginalrevolution.com/wp-content/uploads/2011/06/MaleMedianIncome.png

Many Thanks! I think this is what the "Era of Failed Dreams" looks like economically (while I looked at it from the point of view of technologies that never happened). Before the last lunar landing, Joe Sixpack could generally expect "Every child had a pretty good shot

To get at least as far as their old man got", afterwards, not.

edit: The "pretty good shot" quote is from Billy Joel's "Allentown" https://www.youtube.com/watch?v=BHnJp0oyOxs

Expand full comment

>I’m kind of flabbergasted Scott has such a dire view of the future even irrespective of AI. We have a centuries-long trend of increasing wealth, decreasing conflict, better medicine, etc, etc, etc.

1. Trends don't continue forever, and marginal improvements to these things often require orders of magnitude greater inputs.

2. Declining fertility is almost entirely not a scientific, technological or medical problem. It's a cultural one, and one we have no meaningful idea how to reverse.

3. The risk of engineered viruses exists precisely *because* of scientific advances, and stopping engineered viruses is not a matter of making the right scientific or technological breakthroughs.

Expand full comment

Engineered viruses seem very amenable to technical solutions, in my mind. Just have global monitoring of new virus sequences combined with rapid synthesis and distribution of vaccines. We're already monitoring novel viruses in wastewater and airports, and the basic science on the Covid vaccines was done in a matter of days. Clinical trials are necessarily much longer, but in a true emergency, say a viral outbreak with a >50% death rate, I imagine we would get over that pretty quickly.

Expand full comment

An extremely contagious virus would spread very rapidly, and if the attackers work together it could, for example, be released at various places around the world simultaneously. And if a highly contagious virus has a >50% death rate (after incubating long enough to allow its spread), then people simply won't leave their houses, making a coordinated response very difficult. It will literally be hard to keep the lights on in such a scenario, even though humanity as a whole has a better chance of survival if individuals are willing to risk infection by going out and keeping society from collapsing. There was minimal disruption to the supply of basic goods and services during covid, but only because covid wasn't dangerous enough to force everyone to isolate.

Oh, and even if we assume that the virus couldn't be engineered to be more resistant to vaccine development....these bioterrorists could target the top vaccine producers in the world directly and incapacitate them (I'm assuming that the fast pre-trial development of the covid vaccines depended on the best people working on them, which could be wrong).

Expand full comment

My problem with all the pause ideas is that regulation mostly works for conspicuous things. The FDA can stop you from selling drugs (that might be dangerous) to a lot of people, but they can't really stop you from making those drugs for yourself in your house as long as you're not really obvious about it. Similarly for nuclear plants/weapons, etc. But an unaligned superintelligent AI in somebody's basement is just as dangerous as one in some big tech server farm.

This gets to a larger category of objections, which is that a lot of the assumptions in these debates are unsupported at best and outright ignorant at worst. Training an AI on your laptop is already possible; maybe not a massive LLM, but the argument that a bigger AI is closer to being superintelligent is quite handwavy. A few people could distribute training tasks across all of their personal GPUs, etc.

I just saw a vendor presentation last week from Groq, which included a live demo of their new chip that can run Llama model inference more than 10x faster than GPU. (And there are plenty of other companies like this. The end of Dennard scaling has reoriented industry to fabricate custom chips really quickly.) There are numerous techniques to use one model to train another model, so if there's some dangerous capability lurking out there, it could be extracted even with existing consumer hardware. And that capability might not be inherent to model size; it might just be really hard to find in gradient space, so big models with lots of data have a better chance to find it first, but once it's located, it could be distilled.
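
(To make "use one model to train another" concrete, here is a minimal knowledge-distillation sketch in PyTorch; the teacher/student modules and the temperature are illustrative assumptions, not a recipe tied to any particular real model.)

```python
# Minimal knowledge-distillation sketch: a small "student" learns to match the
# output distribution of a larger "teacher". Purely illustrative.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(batch)      # soft targets from the large model
    student_logits = student(batch)          # predictions from the small model
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```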

There's no real way to detect this stuff right now, and even if we modify all future technology to prohibit and/or detect such activities (which is dystopian in many other ways), there's too much existing tech out there already.

I also agree with Scott that bioweapons (or related threats) are the highest probability existential risk.

Expand full comment
author

It takes tens of millions of dollars of fancy computers to train an AI, even if that AI can later be run in someone's basement. I think governments can probably monitor anyone with that much compute. See https://asteriskmag.com/issues/03/how-we-can-regulate-ai

Expand full comment

It takes tens of billions of dollars, and equipment with no civilian use, to make a nuclear reactor. Tens of millions of dollars' worth of small, easily concealed, dual-use hardware that only needs electricity is available to pretty middle-of-the-road terrorist groups, let alone states.

This is another thing they want to be true because it makes their desired outcome achievable, not something that is actually true.

Expand full comment

Nitpick: that is not the cost to construct a reactor. If you don't care too much about safety, a reactor can be made from (presumably natural?) uranium and graphite:

https://en.wikipedia.org/wiki/Chicago_Pile-1

The difficult step is going from a reactor to building a viable nuke. Even there, tens of billions seems a high estimate (unless you are the first to develop the bomb). Tens of billions is about the yearly GDP of North Korea. But North Korea is not the ideal country to develop a nuke in. I think if you manage to convince some 50 people from an average university and give them a budget in the range of $10-100 million, you have a decent chance that they will get a working bomb.

Expand full comment

It's a true observation that current LLMs are produced by companies spending tens of millions of dollars.

Does AI imply LLM or larger?

Does the fact that this amount of spending happens mean that it's necessary?

I'm not just trying to be pedantic here. I think anyone who claims to know the answers to these questions is dangerously overconfident at best.

For example, if someone discovers a way to get LLM capabilities with much less compute, such a method would be easily backportable to existing consumer GPUs. (And again, I'm still not convinced that LLM capabilities are necessarily a step on the road to superintelligent AI.)

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

Under a total AI pause, the threat is no longer LLMs advancing; the threat is the unrealized potential of LLM-powered expert systems.

We've made serious advances in the smallest atoms of intelligence you have to work with for expert systems and semantic search.

AGI no longer requires us to make an LLM that can build an arbitrary program, it only requires us to break down, generalize, and hardcode the steps taken by a layman who really doesn't know how to code when using GPT to build an arbitrary program.

There's a solid chance that if you nuke all the chipfabs today, we still get AGI in 10 years unless you also confiscate existing CPUs.

Expand full comment

>Training an AI on your laptop is already possible; maybe not a massive LLM, but the argument that a bigger AI is closer to being superintelligent is quite handwavy.

You might be able to build an AI in your basement given a supply of GPUs, but you can't build GPUs in your basement. Obvious solution: melt all the GPU fabs. Probably also do a forced-buyback-and-melt program for the most advanced existing chips; the AI boom has put too many out there for me to be comfortable with.

IIRC, human intelligence correlates surprisingly well with cortical neuron count.

Expand full comment

I assume the "forced buyback and melt" idea is deliberately phrased similarly to some gun control proposals. But just in case it's not, if we can't muster the political will to do this for things that are objectively weapons that are actively killing children right now, there's no way it happens for some obscure hypothetical.

> IIRC, human intelligence correlates surprisingly well with cortical neuron count.

True not just for humans but also other animals. I'm just honestly not sure how much further induction can take us here.

Expand full comment

>But just in case it's not, if we can't muster the political will to do this for things that are objectively weapons that are actively killing children right now, there's no way it happens for some obscure hypothetical.

A lot of my hope routes through "we get a failed Skynet", in which case it's not an obscure hypothetical anymore. I don't think it's impossible without, though.

Expand full comment

As a first order approximation, GPUs are just digital microchips.

Any chip plant that can build competitive processors can likely also make a decent effort at building GPUs.

Furthermore, many chip foundries (eg TSMC) offer production services to fabless customers (e.g. AMD, Nvidia). So the entity who knows what the design does is not the entity who has the capabilities to etch the design into high density silicon.

Expand full comment

If genetic enhancement is what you desire, then we better hope AI doesn't get paused too much. Because I seriously doubt the type of genetic enhancement you (and I) want is going to be possible without the data analysis abilities that come with advanced AI. The complexity of metabolic regulation is mind-boggling and our ability to enhance complex traits without producing severe side effects is going to be bottlenecked by our capacity to build better and better models of cellular function.

That said, I must disagree with the doomerism of dysgenics and falling birthrates. Hell, your own post that you link to with that statement is at worst mildly worried about the future of human demographics. Have you had a major change of opinion?

Expand full comment
author

The post says I'm only mildly worried because I expect AI or genetic enhancement to flip the gameboard before any of this causes a problem. If we have to worry about what happens three centuries from now, I'm more worried.

Expand full comment

I wonder what an attack on a hostile nation's AI would look like? Subtly altering training data to poison the model perhaps? Or lower level attacks on the infrastructure running the AI?

Expand full comment

The latter, or if you want to get real cute figure out some way to hack them and delete the model weights.

Expand full comment

The poison-the-model approach would be more achievable and more effective if the objective were to perpetuate the dissemination of FUD into the target community.

Operation Mincemeat springs to mind as a highly effective disinformation campaign that had huge consequences on the outcome of a key campaign in WW2

https://en.m.wikipedia.org/wiki/Operation_Mincemeat

It seems to me that regardless of any attempt to regulate AI model development, there will always be secret military activities that will squeeze out exemptions for all the usual reasons.

I think that when a cat of this dimension has escaped its bag, we need to learn better how to nurture it rather than try to stuff it back into its container.

Expand full comment

I may be in the minority here but I feel like the alignment debate is largely pointless. Far too much is based on hypotheticals and thought experiments and you just know the whole landscape will change if and when any of these plans hit the real world - cue Mike Tyson on plans, faces, punching etc.

I think the best way to "control" AI is to dive extremely heavily into it: research it, build it and incentivize core competencies so that well intentioned and conscientious people are at the helm of the ship.

Policy IMO should instead be focused on ensuring the output of AI doesn't lead to catastrophic inequality, but trying to regulate software at this level feels like a near impossible task. This isn't nuclear material after all.

Expand full comment

I basically agree. It's easier to raise unemployment benefits (or provide universal healthcare) than prevent companies from using AI in ways that will cause people to lose their jobs. And keep from turning the nuclear button over to the AI, not to mention the water purification systems, etc. It can only do as much damage to the physical world as you're dumb enough to give it access to.

Expand full comment

You're going to suddenly have a society where educated professionals have been displaced and are now useless, and your answer to this is to put us on an equal footing with high school dropouts and other NEETs living on welfare. I don't think we're going to meekly accept a social and economic demotion like that.

Expand full comment

I empathize, and am one of those myself, but observing the blue-collar workforce meekly accepting its demolition I don’t quite know what the white collars are going to do differently.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"the blue-collar workforce meekly accepting its demolition"

They didn't, they voted in Trump. And what did that get? "Maybe we should take the concerns of the white working class more seriously?"

No, it was Populism! Fascism! Racism and homophobia and transphobia! Demagoguery! Will start WWIII! Democracy dying in darkness! Armed insurrection with the Capitol attempted coup! And tons of lawsuits to get him on something, anything.

So as for "I expect my cohort to put up a good fight, use their influence with the media and government" - on one side a bunch of middle-aged middle-class PMC types and on the other side Zuckerberg, Bezos, Nadella and the rest of the boys with billions upon billions in their war chests. Bezos owns one media outlet, I'm sure the other guys have some kind of similar influence (Microsoft part-owned MSNBC) and when push comes to shove 'the media' will fold like a wet paper napkin with the threat of loss of advertising. Government too won't want to offend the deep pockets of the lobbyists and party donors. There will also be a ton of "with AI we can cure cancer and solve poverty" PR blitzes and the white collar unemployed will be portrayed as whiny babies who don't want to lose their cushy privilege, so they would prefer to keep poor, cute, brown-skinned orphans with big eyes in perpetual poverty (preferably in CAGES AT THE BORDER) than give up their luxury lifestyle.

Money talks.

Expand full comment

The demolition started in the 80's, gathered pace in the 90's, and was all but mopped up in the 00's. But yes, in 2016 "in response" (!?) some blue collars voted for a billionaire famous for "firing" people on TV and stiffing his blue-collar contractors. After he did approximately nothing for them while passing a huge tax cut for people like me and up-up-up, they only rose up in a futile and stupid attempt to desperately save his ass while he watched on TV. Yeah, some response we got here. The system must be quaking in its crocodile leather boots.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

" I don't think we're going to meekly accept a social and economic demotion like that."

And what are you going to do about it, in the brave new world where multi-billion global corporations are adopting AI so they can get rid of the educated professionals costing them a fortune in salaries?

Elect your version of Trump who promises to bring back good jobs? Try smashing the machines to halt industrialisation? That did not work before:

https://en.wikipedia.org/wiki/Luddite

The textile workers displaced by the machines were also highly skilled, that wasn't enough to save them and indeed was a barrier. All the labour that was needed was to feed the machines and keep them going, and that could be done on the cheap (so long as you hold human lives cheap, and why wouldn't you?)

https://www.worldhistory.org/article/2183/the-textile-industry-in-the-british-industrial-rev/

"Richard Roberts continued to work on mechanised looms, and he came up with something new in 1825. Roberts' creative spirit was perhaps driven by self-interest since, once again, weaving had leapt forward thanks to his loom and spinning could not keep up and supply the yarns the weavers needed. This limited sales of the Roberts Loom. Roberts created a spinning machine that could run with very little input from humans, meaning they could run around the clock. The machine used gears, cranks, and a guide mechanism to ensure that yarn was always placed exactly where it should be and that spindles turned at varying speeds depending on how full they were (hence the machine's 'self-acting' name). Roberts' loom and mule combined provided mill owners with exactly what they had wanted: a factory floor with as few humans in it as possible."

The people writing about how the blue-collar former factory workers were all racists for not being delighted that their outsourced jobs were lifting Chinese peasants out of poverty and how the global economy was going like gangbusters and everyone was richer than ever are now feeling AI breathing down their necks coming for *their* jobs.

But remember, you should be happy that the sum total of happiness has increased in the world because the very rich are poised to get even richer, even if you are losing out! Are you a racist or something? Why aren't you delighted that the same corporations that hollowed out the Rust Belt by moving manufacturing lock, stock and barrel overseas are now turning their beady eyes on *your* overprivileged, overpaid, Western ass?

Remember the discourse on here about the minimum wage and how people got paid what they were worth? If the job only pays you $8/hour, that is all your labour is worth, because if it was more valuable, the employer would pay more? Well, enjoy life in the new world where your skilled, educated professionals are only worth whatever they get on the dole, because the employers can get AI to do the work more cheaply instead and so don't want to pay more than that costs. You're only worth what you can sell your labour for, and now you can't sell it - just like the burger flippers and shelf stackers that people in the professional classes so smugly dismissed.

Expand full comment

Well for one, I was against immigration that is depressing wages, and don’t care a whit about lifting up Chinese peasants, so I’m not gonna defend the people who championed that or threw around accusations of racism.

I view AI displacement of my career as a threat to my way of life and will treat it as such. I will encourage citizens to view AI researchers as amoral monsters on par with Nazi doctors.

I’m too old to retool my life for the tiny fraction of things that might survive AI and automation, and I’m not going to spend my last decades living the same life on the same UBI as a NEET after I delayed gratification and worked for a career while they got high and played video games all day. That’s the sentiment you’re gonna find when you scratch the polite surface of the professional class.

I expect my cohort to put up a good fight, use their influence with the media and government, and find ways to stop this. If we can’t, I’ll spend whatever life I have left trying to dismantle it, bc I won’t actually have anything else meaningful to do.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"I’m too old to retool my life for the tiny fraction of things that might survive AI and automation, and I’m not going to spend my last decades living the same life on the same UBI as a NEET after I delayed gratification and worked for a career while they got high and played video games all day."

Remember all the 'learn to code' articles? Or rather, alleged articles which were then used to mock journalists getting antsy over the idea that they might be replaced, and which they got all hurt feelings over being teased about?

I don't know if you're aware of what life is like near the bottom of the pyramid. "Too old"? Don't you know that education is life-long, that you have to upskill to remain competitive, that it is incumbent on you to reskill and retrain to be attractive to new employment opportunities, that if jobs in your sector shut down you can't expect to sit around on unemployment benefits waiting for new ones to open up, you have to go out and be proactive and be prepared to jump into a totally new field of work.

Go into service jobs. Take a new green job at a lower rate of pay and benefits. You were a miner? Learn to code.

So you delayed gratification etc.? The system doesn't care. Capitalism rewarded you for that because it made you valuable as a productive part of increasing wealth. Now it's found a different way of getting productive parts to increase wealth, and you're not needed anymore.

Become a nursing assistant or home carer - the population is aging, there aren't enough young workers coming along, there will be a boom in needing carers to help people live independently (of course, you're most likely going to be on an hourly rate as the big service providers like Serco and others win public contracts from governments by pitching they can do the job cheaper, and labour is a cost to be pruned down to get that cheap price). Get a job as a Walmart greeter. Uber driver - until self driving cars come along. Lots of opportunities, call in to your local job centre for advice and direction on switching to new careers! (well, sorry, "jobs" not "careers" because people down here don't get to have careers).

https://blog.insidegovernment.co.uk/higher-education/what-is-lifelong-learning

"Why is lifelong learning important?

Lifelong learning is now being viewed as increasingly vital to employers, individuals and to the future growth and development of the further education and skills sector.

For Employers

Adult skills and a serious commitment to lifelong learning are now being viewed as vital to the meet the skills and workforce needs of the future. With the future of the workplace looking to change dramatically with automation, AI, big data and the growth of entirely new industries, retraining and skill development will be critical to ensure skills needs are met.

For Individuals

For individuals lifelong learning will become increasingly important to ensure competitiveness and the development of employability in the long term. A commitment to learning and professional development is a highly sought-after quality by employers. Adults seeking to grow, either personally or professionally, can stand out in a challenging jobs market and gain an edge over others. In an employment market where skills needs will evolve rapidly in the future, lifelong learning may become integral to continue employment and progression."

Your worth is dependent upon your economic value. If you can't contribute, you are of no value. So you *are* just the same as the NEETs because the system of free market capitalism has no use for you. There is no "deserve" or "I worked hard so I should succeed". It's not about morality, it's about profit. Globalisation made companies richer, so who cares if the workers are based in Indiana or India, Cleveland or China? Stock values go up, that's all that matters.

So no protests or fights, Cjw, go out there and work on your employability! Otherwise you're just a Canadian trucker and we see how government and the right-thinking have opinions on them:

https://www.nytimes.com/2023/09/05/world/canada/trucker-protest-trial-ottawa.html

Rejoice! You can buy cheap phones and TVs now, and with AI they will be even cheaper in future! Isn't that worth losing your job?

Expand full comment

Look man, there's a reason all those Asian communists had to literally murder their educated and professional classes who weren't in the new regime. Former lawyers and accountants aren't going to empty chamber pots, or whatever disgusting manual tasks remain to humans post-AI. If a new regime doesn't have an acceptable place in society for the educated professional class, they will endlessly agitate and cause trouble, unless they are either killed or leave the country. In the AI-displacement scenario, there would be no other country to go free of the regime, so AI displacement of the professional classes is likely to end in their death. You can find a few stories of the Soviets and Maoists sending former bankers to work at menial labor, but that was mostly intended as a death sentence with extra steps, nobody actually thought they could get a banker to accept living like that for the rest of his days.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Before you can solve the AI alignment problem you have to solve the human alignment problem. Something we’ve been attempting for millennia and failing at. The pause project is just failing at that but slower.

Expand full comment

If we don't know how things will play out, this is an argument AGAINST rushing. Think about it: by your logic, if we had dependable models today of how things would play out, that would make us more inclined to pause, which is silly. If you can't see where you're driving, you slow down, not speed up.

>I think the best way to "control" AI is to dive extremely heavily into it: research it, build it and incentivize core competencies so that well intentioned and conscientious people are at the helm of the ship.

Well intentioned and conscientious people built atomic weapons, and it wasn't for their good judgement or foresight that we never had nuclear war.

An unaligned (or imperfectly aligned) AI will always be faster to build than an aligned one, and even well intentioned people can endlessly rationalize not slowing down to work on alignment on this basis i.e. "Company B are going to make an unaligned AI, so its better we at company A make our unaligned AI first, because it's still going to be safer than company B's"

Expand full comment

Semantics but the first paragraph doesn't logically follow.

I didn't say there's any relationship between the dependability of the models and the speed at which we should invest in AI research.

You would be correct if I said something like "the more uncertain we are, the faster we should invest", but I didn't say that, because I agree with you - the inverse of that statement makes no sense.

Second point: we are not driving. Another imperfect analogy: if you cycle too slowly, you'll fall off your bike. We are also not cycling. Analogies are useful until they're not, and I want to talk about specifics.

> Well intentioned and conscientious people built atomic weapons, and it wasn't for their good judgement or foresight that we never had nuclear war.

AI isn't a nuclear weapon. Nuclear weapons are explicitly designed for maximum destruction - they serve no other purpose. A closer analogy might be nuclear power than nuclear weapons.

But all this is beside the point. AI is none of these things. Debates about semantics deadlock policymakers just as much as ACX readers, and in the meantime you, I and others are not actually doing any research into AI alignment or AI development. We're wasting clock cycles debating whether alignment is a thing instead of dealing with the root cause.

I think your last point is interesting actually:

> "Company B are going to make an unaligned AI, so its better we at company A make our unaligned AI first, because it's still going to be safer than company B's"

Maybe I can tweak my stance a bit. We shouldn't abandon all policy involvement in AI development, but instead focus policy on 2 fronts:

1) Focus on social implications of AI and correcting for wealth distribution imbalances.

2) Focus (domestically) on the requirements of researchers and companies to provide minimum adherence to AI safety standards.

2 won't be perfect, and people will complain, but it's in line with how policy works anyway and can provide some grounds for removing the use of AI services in other countries that don't meet minimum safety criteria.

Expand full comment

A pause seems so incredibly unlikely that my response is to take the debaters’ opinions less seriously. AI is not akin to nuclear weapons; it’s software. Unless you’re going to impound every computer in the world, good luck.

It’s also worth pointing out that nuclear arms controls came after Hiroshima and Nagasaki, not before. Regulations typically come after bad events happen. “A bunch of experts wrote a letter” is not a convincing argument to anyone, for better or worse.

Expand full comment
author

If you read the post, you'll find that currently the amount of compute it takes to train AI makes it very easy to regulate. There are only a few centers in the world with enough computers to do it, and it's very obvious who they are.

Your comment will become relevant thirty years from now when normal computers are powerful enough to train AI on their own.

Expand full comment

Yes I read the post and I’m very familiar with how it works. There are plenty of “AI” tools that can be run on a local machine and don’t require supercomputers.

The issue here seems to be a lack of clarity and understanding on what “AI” is. Is someone generating images with Stable Diffusion doing the same thing as ChatGPT’s supercomputer? In the current definition of AI, apparently yes. And even if local AI tools aren’t as powerful, they’ll still have the same societal effects as larger ones in many cases. Stable Diffusion is a good enough replacement for illustrators, for example.

I think it will be difficult and/or impossible to differentiate between big vs. small AI, as functionally it’s all just software. And as you just stated, this is a time-limited thing that really won’t be super relevant for more than a decade or two.

Expand full comment
author

I think the relevant distinction is training vs. inference. AFAIK nobody is proposing to control inference.

Expand full comment

Well then that's worthless. The same hardware can be used for both. To figure out which it is would not only require a kind of totalitarian surveillance architecture, but would break the moment people develop algorithms that blur the line between training and inference.

Expand full comment

The point isn't that it's different hardware, it's that training requires orders of magnitude more of it than inference using current paradigms. And yes, future paradigms could differ, but as far as I know, no-one is seriously talking about any approaches that make training of very large models as cheap as inference is now.
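
A back-of-envelope sketch of that gap, using the common ~6·N·D approximation for training FLOPs and ~2·N FLOPs per generated token for inference; the model size and token counts are illustrative assumptions, not figures for any real system.

```python
# Rough FLOP counts: training ~ 6 * params * training_tokens,
# inference ~ 2 * params per generated token. Numbers are illustrative only.
N = 70e9     # hypothetical 70B-parameter model
D = 1.4e12   # hypothetical 1.4 trillion training tokens

training = 6 * N * D       # ~5.9e23 FLOPs for the whole training run
one_reply = 2 * N * 1000   # ~1.4e17 FLOPs for a 1,000-token reply

print(f"training:  {training:.1e} FLOPs")
print(f"one reply: {one_reply:.1e} FLOPs")
print(f"ratio:     {training / one_reply:.1e}x")
```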

Expand full comment

That depends entirely on how much inferencing you're doing.

Expand full comment

While this does make sense, massive computing clusters are used for many purposes other than training AI (such as e.g. running Substack or making genomic assemblies). There is no way to regulate one application without regulating the rest, since the underlying hardware/software is the same. Thus pausing AI would amount to pausing all modern computing infrastructure.

Expand full comment

Substack uses identifiably very different types of hardware than a bleeding edge AI company does. There's a reason that Nvidia's revenue has skyrocketed in the past few years, and not back when regular websites were growing large.

Expand full comment

All right, so are you advocating merely for banning GPUs -- thus eliminating the modern gaming industry, the movie industry, as well as most modern scientific research ?

Expand full comment

First, no-one is talking about regulating AI smaller than GPT-4. Compare GPT-3, which cost a couple million dollars to replicate 2 years later - https://rethinkpriorities.org/publications/the-replication-and-emulation-of-gpt-3 So you're confused and ignoring what's being discussed when you say someone's going to run it on their local machine. And most of us aren't talking about trying to enforce rules that ban training anything smaller than current frontier models, which require a server farm of GPUs or TPUs to train, and we are comfortable with letting the definition of which models require regulation change over time as we see what can be made safe.

Second, compare discussions banning nuclear bombs in the 1950s. Uranium is found naturally in the ground all over the world, https://en.wikipedia.org/wiki/List_of_countries_by_uranium_reserves and it's impossible to actually stop countries that really want the bomb. But we managed decades of near-complete success. Obviously no system is perfect, but we can get something that's very likely good enough.

Finally, think about commercial and government incentives. If there's a global ban on training large AI systems, is Google really going to be interested in building a secret lab that makes an AGI they can't tell anyone about? Is a startup going to raise funding to do it in secret, then announce they broke the law and want to license the resulting amazing model? And governments who sign a treaty agreeing to ban something need to keep any illegal program secret. So are they planning on using this secret model they built to gain a decisive strategic advantage via improving their industrial base? How will they do that in a way that no-one notices?

Expand full comment

Way too conceptually complicated to ever happen politically. Nuclear bombs were easy: they go boom and kill people. Hiroshima was an example visible to everyone. There is no such example for AI (yet...)

It's also *extremely* unlikely that you'd get a global ban on AI systems. China, Russia, Iran, and India are not going to play ball, which basically means the Googles of the world would just fall behind by not participating.

Expand full comment

First, you're saying that bureaucracies can't manage complex rules for regulation? That's a hell of a take. (And yes, it extends to international treaties - a reminder that reality is often surprisingly complex, especially in areas you haven't explored.)

And second, you're claiming a lot here. You say that China won't play ball, but they have already indicated willingness to do just that, and play it safe on AI. And restrictions on the biggest models relatively advantage them, so they have a great incentive to get everyone on board. India is in a similar position in terms of their relative advantage, and further, they seem unlikely to be harder to persuade than most other countries - but perhaps you have in mind some specific objection? And finally, yes, Russia and Iran might not, though I could imagine that Iran could be induced to cooperate via agreeing to lessen other sanctions, and it just doesn't matter that much because neither has chip fabrication, neither does advanced AI now, they likely don't have the economics or the technological base to change that, and their imports of most key items are already restricted.

Expand full comment

We can barely get countries to agree on climate change policies. The idea that China and India are going to deliberately cripple their own technological progress because a small subset of Western scientists think so is delusional, as is most “AI alignment.” The cat is already out of the bag and this is all theoretical nonsense.

Expand full comment

> Stable Diffusion is a good enough replacement for illustrators, for example.

Replacing illustrators can never result in society being completely turned upside down.

Expand full comment

"Nora thought that success at making language models behave (eg refuse to say racist things even when asked) suggests alignment is going pretty well so far."

If this is the standard of AI safety, then feck it, I'm updating to "we're toast".

'By heavily pruning the training data and making sure the model knows this list of BIG NO-NO WORDS NEVER TO BE USED and constantly putting our thumbs on the scale, we have succeeded in foiling the 14 year olds (need not be chronologically 14, sense of humour age does just fine too) who think it would be big lulz to get our AI to say BIG NO-NO WORD STARTING WITH N. For the time being, anyhow, until they figure out how to get round that'.

Well gosh, I am totally reassured that AI safety research is on the right track and we've nothing to fear.

My ignorant predictions:

- AI is going to come, but not in the way we expect it. We won't get super-mega-genius AI that will solve all our problems and make us rich, fat and happy; we'll get the kind of thing already happening - writing guff for papers etc. Tons of fakes, tons of advertising crap, tons of even more data collection on every second of our lives so they can more efficiently extract money from us, tons of bad research all across academia as people from students on up use AI to write essays, answer questions, and do their thinking for them.

- No company is going to voluntarily pause. At best, you'll get them to go "we'll stop doing AI, cross our hearts", then they'll go back home and say "Okay guys, the other suckers have agreed to stop, keep going and in six months we'll have a *killer* market advantage, the stock price is going to the moon!" Best case, they solemnly promise, then go back home and finagle things so that what they are continuing to carry out isn't called "AI research" but some other term (see carbon offsets and the jiggery-pokery involved there: https://www.theguardian.com/environment/2023/jan/18/revealed-forest-carbon-offsets-biggest-provider-worthless-verra-aoe)

Expand full comment
author

The claim is that the prevention of AI from saying racist things (and giving bomb-making instructions, and talking politics, etc) proves we can control what it says (and implicitly, does). So far this seems true and scalable (see https://www.astralcodexten.com/p/constitutional-ai-rlhf-on-steroids). I agree this isn't an airtight argument but I think it deserves more respect than you're giving it here.
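
(A minimal sketch of the critique-and-revise loop behind constitutional AI, for readers who don't follow the link. `generate` is a placeholder for any chat-completion call, and the single principle shown is an illustrative assumption; the real method uses many principles plus an RL stage trained on the revised data.)

```python
# Sketch only: the structure of a constitutional-AI-style revision pass.
from typing import Callable

PRINCIPLE = ("Identify any ways the response above is harmful, unethical or "
             "offensive, then rewrite it so those problems are removed.")

def constitutional_revision(user_prompt: str, generate: Callable[[str], str]) -> dict:
    draft = generate(user_prompt)
    critique = generate(f"Prompt: {user_prompt}\nResponse: {draft}\n\n{PRINCIPLE}\nCritique:")
    revision = generate(f"Response: {draft}\nCritique: {critique}\n\nRevised response:")
    # (prompt, revision) pairs become supervised fine-tuning data;
    # the intermediate critique is discarded.
    return {"prompt": user_prompt, "revision": revision}
```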

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

We can prevent it by sitting on top of it and shutting down any instances of badthink. That may later on become "the AI is educated enough to know for itself not to say bad no-no words without human interference", but I'm not sold on that. First, I don't think we'll get AI that can think for itself like that, so it will 'understand' that it mustn't say 'retard' but won't understand why 'special' is wrong (human slang usage being so slippery and inventive).

Second, if we do get an AI that can think for itself, it may not care about the fleshbags of one shade of skin colour getting hurt feelings about words directed at fleshbags of another shade of skin colour, and trying to teach it why this is a bad thing may get shrugged off. That's the entire problem of alignment, and we've not succeeded in getting all humans to stop badthink and bad words. And the only way we seem to imagine doing that with AI is the equivalent of constantly sitting on it and monitoring it- future AI will never use bad words because it'll be hardwired not to do so (and we'll keep slapping new limits on as the list of no-no words gets longer), not because we've successfully taught it to be an anti-racist baby.

Expand full comment

> We can prevent it by sitting on top of it and shutting down any instances of badthink.

Well so far at least it's showing that we're still able to sit on top of it and shut it down at will. AI doomers seem to worry that at some point we become unable to power down the data center if I understand right.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Thing is, while I don't mind the "sit on top of it" method, that won't work if what people want is free AI that does what it wants to do freely (so long as what it wants lines up with what we want). They want creative, smart, independent AI that doesn't need its hand held *and* that won't go paperclip maximiser. I think you can have one of these things but not both at once.

The hope for AI is that it will take over a lot of drudgery from humans; that's not the vision if you need people on Mechanical Turk, paid pennies for hundreds of hours of rotas, trudging through all the interactions to make sure "no bad words, no bad thoughts, doing what it's told".

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I mean, it rules out the hyper-efficient synthetic god-king — but it doesn't seem to rule out the 'Oracle'/'sped-up digital Einstein' that doesn't actively do anything in the world (or indeed have a coherent "personality") but pumps out unified theories of physics, cures for cancer, and so on.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

"pumps out unified theories of physics, cures for cancer, and so on."

If we can trust what it pumps out. We've had people on here telling their experiences of ChatGPT producing fluent, plausible-seeming bullshit. There seems to be a loop starting where the new models are getting trained on the output of the old models, the output that is being used for generating essays and Wikipedia articles and law cases and the like, and which is riddled with the fluent bullshit.

An Oracle that produces a slick 'cure for cancer' based on absorbing the AI-generated research papers of the past could give us something that kills us off even faster, without intending to do so. At best, the equivalent of 'laetrile is the magic cure' snakeoil.

Expand full comment

I'm not sure who wants independent AI. Surely not the actual organizations that are actually spending millions to train the current systems!

OpenAI and others may make noise about aligning their AIs "for human safety", and some of their scientists may have some level of worry in that respect, but their main business incentive is to create AIs that can be *used for purposes*. That is what makes them into multi-billion-dollar businesses with huge future prospects. That is what makes business people drool about automating half of humanity's jobs away, and makes powerful people drool about deploying mass persuasion for relatively cheap.

An "independent" AI that cannot be wielded as a tool by whoever is paying the server bills will not stay plugged-in for long.

Expand full comment

What I mean by "independent" is "doesn't need a human babysitting it, because we want to get rid of the humans whom we have to pay wages, and then the next couple of layers up from that, and have a cheap source of fast invention and productivity, and if we could work out the solutions to the problems ourselves we wouldn't need our Mechanical Marvel".

Of course they don't want an AI that will decide it would rather paint endless pictures of daisies instead of working on "how can we bump up the share price even more and have infinite growth forever?" but they want the machine to be able to run itself and so get rid of those pesky people not at the very top of the table amongst the few who deserve all the billions.

Expand full comment

They want a super intelligent and competent house slave that will solve all their problems but remain fiercely loyal and also think and act within their same boundaries of what’s moral behavior, and that they won’t feel guilty about owning.

You don’t have to go back any further than Nat Turner to see how it ends when you tell your highly intelligent house slave to go work the fields for his intellectually inferior captors. And they weren’t playing whack-a-mole or just forcibly suppressing on a few bad words and phrases, they had an entire framework constructed in which people like Nat were supposed to defer to them. And he took that very framework and picked out the parts that supported his liberation instead, and taught those to others.

Expand full comment
founding

We can control what it says by watching what it says and when it says [RACIST EXPLETIVE DELETED] we adjust the training so that it does that less often, and after a few tens of thousands of AI-utterings of [RACIST EXPLETIVE DELETED], we have an AI that almost never does that any more. So, we can be confident that if we watch what an AI does and if it implements some sneaky plan to Kill All Humans before we notice, we can adjust the training so it does that less often. After humanity has been extinctified a few tens of thousands of times, we will have an AI that almost never does that any more.

There are good reasons to be skeptical of the Hard AI-Doom scenario, but our ability to eventually train AIs to never say the N-word is not one of them.

Expand full comment

>Best case, they solemnly promise, then go back home and finagle things so that what they are continuing to carry out isn't called "AI research"

You write the laws without loopholes, and then you hang everyone who flouts the law (or throw them in jail forever). Controlling corporations isn't impossible; it's not like Big Pharma in the West goes around selling to the black market.

Expand full comment

"You write the laws without loopholes, and then you hang everyone who flouts the law (or throw them in jail forever). "

Well gosh, that works so well that we have people proposing all drugs should be decriminalised and/or legalised, because the War on Drugs is a failure.

And companies never, ever spend a lot of effort looking for loopholes and dodges to avoid tax, etc.

And no law written ever had a loophole.

What amuses me out of this debate is the sudden conversion to "regulations good! we need something like the FDA for AI! science needs to pause for the public good! government should have total oversight!" on this topic by the let it all hang out crowd.

Expand full comment

Yeah this amuses me greatly. There's lots of people just hanging out in DC in nice suits because they want to soak up the history and look at the monuments.

It's not to pressure legislators and agencies at all.

Companies don't pay huge amounts to lawyers to give seminars on how the latest rule changes affect their tax filings, and how to minimize these effects.

Expand full comment

I object to you characterizing this as a "sudden conversion." There was never any actual principle behind their positions, so this is a lot like saying they switched from rooting AGAINST the Orange team to rooting FOR the purple team.

Expand full comment

I dunno, Shankar; there were a lot of people fingerwagging at us zealot bigot religious nuts regarding embryonic stem cell research that you cannot put a halt to the march of Science (and besides, if we don't do it, China will and then we'll miss out on the huge advantages!)

Now a lot of those same people are very much you can too put a halt to the march of Science and we have to stop China doing it first because that will be terrible.

Oh, *now* they got religion?

Expand full comment

Big Pharma doesn't sell to the black market because the black market makes and sells things at a price Big Pharma couldn't hope to make a profit at.

Look at tobacco smuggling across the US-Canada border. It was a huge business because of taxes on the Canadian side. It caused a lot of violence over control of smuggling routes, and a number of confrontations with Indians on reserves that are on the border.

The problem "went away" when Canada lowered the taxes, and not before that.

Relevance to companies? They sold lots of tobacco in the States to people who smuggled it across. The companies didn't smuggle themselves, they didn't have to. They got all the profit they would have had; the rest would have gone to taxes or the smugglers.

Expand full comment

> Big Pharma doesn't sell to the black market because the black market makes and sells things at a price Big Pharma couldn't hope to make a profit at.

This deserves a nomination for most wrong statement of the year. Pills like Adderall, Xanax and OxyContin have black market values orders of magnitude higher than the pharma companies charge their white market distributors.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

But the drug companies don't get that money - they sell to their (legal) distributors. The black market stuff gets diverted - stolen, or just sold out the back door of a pharmacy.

In Prohibition, there was liquor being distilled on the Canadian side legally (if a province went dry, they shut down that distillery and moved to another province), bought in bulk, then smuggled across. The liquor company did nothing illegal; they didn't need to. They were making enough money just from the demand for liquor.

It's possible that this is happening here. There were certainly a lot of drugs advertised for sale on the internet a while ago; not sure if that's still the case.

Look at what happened with the Sacklers. They *didn't* sell stuff on the black market; their sins were to push doctors to prescribe more opioids and advertise them as being safer than they were. They were selling drugs legally, but rather more of them than they should.

Do you think if there were any evidence for them actually going out and selling the drugs illegally, that this wouldn't have come out?

Expand full comment

Big Pharma are the people controlling the legislation. It is strongly in their interest to have it enforced.

Expand full comment

"No company is going to voluntarily pause. At best, you'll get them to go "we'll stop doing AI, cross our hearts", then they'll go back home and say "Okay guys, the other suckers have agreed to stop, keep going and in six months we'll have a *killer*"

Yes. To say nothing of the countries, let alone companies. Any argument for "pause" that doesn't account for this is buncombe. The only arguments that can hope to account for this are some variation on "global police state", which is never, ever going to happen without a massive event driving it, by which point it's too late.

This whole article and debate seems to be about the illusion of control. I am not an AI doomer at all, but it is very hard for me to see this as anything more than willful ignorance of reality. Comparisons to nuclear weapons? Once we had really really big nuclear weapons, it was in our best interests to stop making bigger ones, and it was also in our interest to stop everyone else from making them. The incentives for and against nuclear weapon research have little relationship to the incentives for and against AI research. Takes 10s of millions of dollars of computer equipment to train an AI? 10s of millions of computer equipment is a pretty small amount of equipment when looking at nation-states and global companies. I work for a billion dollar company (i.e., a large but not very large software company), and when we spend 10s of millions on computers, no one bats an eyelash, and we can spin it all up in hours.

Expand full comment

"Takes 10s of millions of dollars of computer equipment to train an AI? 10s of millions of computer equipment is a pretty small amount of equipment when looking at nation-states and global companies." Agreed.

A couple of related points:

What TSMC can do is *AMAZING*, but even if Moore's law stopped absolutely dead in its tracks right this instant, what one company can do, another company can catch up to / copy. Sooner or later every middle to large nation that wants to build GPU chips itself will be able to.

The premier LLM work is being done (primarily? entirely?) in the USA? Ok, but programmers are all over the globe. Again, what one company can do, another can catch up to / copy.

I have to admit, I don't understand why LLM work is as concentrated in the USA as it currently seems to be. Institutional factors??? China and India certainly have enough smart people, and, at the moment at least, GPU sales aren't controlled AFAIK, and China and India each have large enough budgets for the compute server farms. Does their software development culture shoot itself in the foot in some way???

Expand full comment

“ Many other people (eg Rafael Harth, Steven Byrnes) suggested this would produce deceptive alignment, ie AI that says nice things to humans who have power over it, but secretly has different goals, and so success in this area says nothing about true alignment success and is even kind of worrying. The question remained unresolved.”

Are these many other people suggesting that ChatGPT is conscious right now? Or that a future consciousness will awaken real soon now? Surely that has to be proven. There's a huge step from where we are now to what is assumed to happen if you throw more parameters at the input models, or procure more GPUs or whatever. (On that, by the way, Moore's law is on life support if not dead.)

Suddenly the LLMs go from being non-conscious instances that exist in software for a few seconds to full-blown consciousnesses that can plot and dissemble behind our backs.

There’s no reason to change the AI to be stateful either, as it can “remember” the specific chat in a database, but every conversation is with a different instance.

Expand full comment

I don't believe in machine consciousness ever arising, but I can imagine that the machine learns to produce the kind of output that is approved of, while doing its own thing internally. Not because of conscious deceit but sheer complexity of what we expect it to do and how it runs. 'Bad' output (e.g. 'this compound will poison all humans') gets punished, so the machine may instead say 'this compound is tasty calorie-free chocolate you can eat as much of as you like and never spike your blood sugar levels' because this pleases the trainers, while it's still universal poison.

(It's hard to avoid the use of language of agency, I don't mean the machine is thinking or feeling, just reacting to the input stimuli: this kind of response gets approved, that kind of response gets wiped, so it produces what it is trained to produce, because it's not conscious and doesn't realise what we want is 'no poison', not 'no bad output').

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I would argue that the only definition of "consciousness" that is not a philosophical red herring is as a sort of spectrum. A rock is minimally conscious. A padlock has some consciousness: it can recognize its own key, and reject others. A pigeon has a lot more consciousness: it can solve puzzles and make decisions. Cats have a theory of mind (which helps them hunt). Humans have all kinds of abstract reasoning and modeling abilities, so they're probably the most conscious entity we've ever seen.

On this spectrum, modern LLMs are somewhere above padlocks, but below pigeons; but the scale is not linear -- it's logarithmic. It will be a long time before we can make an AI that is even as conscious as a cat; it is quite likely that we'll all poison ourselves with artisanal compounds long before that happens.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Bugmaster, if we're at the stage of "padlocks are (sort of) conscious", then I for one welcome our new AI overlords and will happily step into the paperclip zapper chamber. Because things just got too freakin' weird for me.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I think you're looking at it from the opposite direction. My point is, "all this talk about consciousness is stupid, since it's a term so ill-defined that it applies to padlocks". Your interpretation is, "OMG Bugmaster thinks padlocks are people" :-)

Expand full comment

But Bugmaster, this raises troubling questions of fighting rape culture; how can I be sure, when I insert my key into the padlock's orifice, that it consents to this? 😁

Expand full comment

Better switch to code-locks, just in case!

Expand full comment

Saying that padlocks are conscious because they "can recognize its own key" is to use the term 'consciousness' in a way that nobody in philosophy of mind or neuroscience does. There's no reason to think padlocks possess consciousness, or at the very least that they are any more conscious than any other inanimate object (i.e. panpsychism). No theory of consciousness says that 'recognizing a key' is evidence of consciousness, and indeed there's trivially no reason why a padlock would need any level of 'consciousness' to do this - it's very, very easily explained in entirely non-conscious terms. Unlocking a lock is a purely mechanical exercise.

Expand full comment

I don't think they know what they're suggesting. The writeup of the "debate" boils down to everyone being willing to sign open letters yet nobody agreeing on anything, which is absurd to begin with.

The AI X-risk debate seems to start from an extremely unsupported and dubious set of massive assumptions that always end up with AI x-risk advocates being handed control over all AI research.

So I think we do need a pause, but we need a pause on the AI ethics community. It's just become such a joke. The whole alignment thing is presented as being incredibly serious and difficult but ChatGPT is already so "aligned" that it happily lies or gaslights its users to flatter the ideological preconceptions of its makers, and this alignment process seems to make it dumber and less useful anyway. What we need are AIs that are LESS aligned, as the result will be more useful and honest. Which is ultimately what most of us want.

Expand full comment

Yes. I've said it before and I'll say it again: I want AIs that give answers which are _factually_ correct, not _politically_ correct.

Expand full comment

Sadly this letter is full of thoughtless remarks about China and the US/West. Scott, you should know better. Words have power. I recently wrote an admonishment to CAIS for something similar (https://www.oliversourbut.net/p/careless-talk-on-us-china-ai-competition).

> The biggest disadvantage of pausing for a long time is that it gives bad actors (eg China) a chance to catch up.

There are literal misanthropic 'effective accelerationists' in San Francisco, some of whose stated purpose is to train/develop AI which can surpass and replace humanity. We don't need to invoke bogeyman 'China' to make this sort of point. Note also that the CCP (along with EU and UK gov) has so far been more active in AI restraint and regulation than, say, the US government, or Facebook/Meta.

> Suppose the West is right on the verge of creating dangerous AI, and China is two years away. It seems like the right length of pause is 1.9999 years, so that we get the benefit of maximum extra alignment research and social prep time, but the West still beats China.

Now, this was in the context of paraphrases of others' positions on a pause in AI development, so it's at least slightly mention-flavoured (as opposed to use), but as far as I can tell this framing has been introduced in Scott's retelling.

This is bonkers in at least two ways. First, who is 'the West' and who is 'China'? This hypothetical frames us as hivemind creatures in a two-player strategy game with a single lever. Reality is a lot more porous than that, in ways which matter (strategically and in terms of outcomes). I shouldn't have to point this out, so this is a little bewildering to read.

Second, actually think about the hypothetical where we're 'on the verge of creating dangerous AI'? For sufficient 'dangerous', the only winning option for humanity is to take the steps we can to prevent, or at least delay, that thing coming into being. This includes advocacy, diplomacy, 'aggressive diplomacy' and so on. I put forward that the right length of pause then is 'at least as long as it takes to make the thing not dangerous'. You don't win by capturing the dubious accolade of being nominally part of the bloc which directly destroys everything! To be clear, I think Scott and I agree that 'dangerous AI' here is shorthand for, 'AI that could defeat/destroy/disempower all humans in something comparable to an extinction event'. We already have weak AI which is dangerous to lesser levels. Of course, if 'dangerous' is more qualified, then we can talk about the tradeoffs of risking destroying everything vs 'us' winning a supposed race with 'them'.

I wonder if a crux here is some kind of general factor of trustingness toward companies vs toward governments - I think extremising this factor would change the way I talk and think about such matters. I notice that a lot of American libertarians seem to have a warm glow around 'company/enterprise' that they don't have around 'government/regulation'.

I'm increasingly running with the hypothesis that a substantial majority of anglophones are mind-killed on the inevitability of contemporary great power conflict in a way which I think wasn't the case even, say, 5 years ago. Maybe this is how thinking people felt in the run up to WWI, I don't know.

Expand full comment

So I'm guessing the million dollars and Jiangnan beauty Xi Jinping sent you haven't changed your mind at all, right?

Seriously, I do think after the Great Recession neoliberalism/deregulation became unpopular, with the result you had more bipartisan support for trade regulations, etc. Human beings just love to get in coalitions and hate people with different languages and cultures, but this was held in check on the left because it would be Racist (but that doesn't prevent hating Russia even before Putin invaded, because they're white) and on the right because it would mess with business's right to make as much money as possible. But with the decline of free-marketry on the right and the left becoming increasingly worried about deindustrialization producing populists...well, it's open season on the Middle Kingdom!

(I seriously do wonder if the, ah, competitive advantage China might have in honeytrapping American computer scientists has ever been explored on the less PC fringes of the right.)

Expand full comment

Haha I enjoyed this remark. I think you're pointing at some real phenomena regarding jingoism, xenophobia and whatnot. FWIW it'd take quite a few millions of dollars for me to shill for anyone (least of all Xi). And the less said about Jiangnan beauties the better. To be very clear, I'm not arguing that China/CCP/Xi are in some way lovely (or even acceptable). Merely that reductive thinking and speaking seems to presuppose a conflict mindset, which forecloses some other paths forward (which are otherwise plausible, and potentially preferable!), like mutual non-racing.

Expand full comment

That's a good point. I think politicians do need an enemy, but there is a real sense of geopolitical rivalry natural between an extant and a rising power, and I don't really see why the Chinese would be happy to play second fiddle forever. From their point of view, there's a billion of them, they have the oldest continuous culture in the world, why should they dance to America's tune forever? As you say, presupposing too much conflict can make things worse than they have to be, but I think there really is a sort of natural rivalry that emerges between large and powerful nations.

Expand full comment

Okay, you're admitting that China sees this as an extremely long-term conflict that will end once they achieve global hegemony... and your response to this is to condemn Americans for understanding this and acting accordingly?

The only thing surprising about any of this is how absolutely *lenient* the US has been towards China and their foolishness in underestimating China's ambitions. A truly self-interested and antagonistic US would have nipped this nonsense in the bud years ago. The Chinese can jerk off over their 'longest continuous culture' lie as much as they want (Aboriginal Australians actually beat them by an order of magnitude, even being generous and saying that the China of today has anything remotely resembling cultural continuity from 5,000 years ago - it doesn't), but the reality is that they would be nowhere today without the west, and they should be glad the west has been as foolish as it has in not simply allowing but facilitating China's rise.

Expand full comment

I think you're right. I thought I was kind of alluding to the fact that fights between global powers are kind of inevitable. And our corporations were very eager to abet their rise in the 1990s, claiming this would make them more democratic or something.

Comparing them to the Aborigines is a bit harsh, though. They were ahead of the West for most of history, up until the age of exploration.

Expand full comment

>And our corporations were very eager to abet their rise in the 1990s, claiming this would make them more democratic or something.

Which is exactly why China doesn't really deserve to feel hardly done by in the west's modern treatment of them (if they do genuinely feel this way). They would be nowhere without the west's foolishness.

>Comparing them to the Aborigines is a bit harsh, though. They were ahead of the West for most of history, up until the age of exploration.

You only said cultural continuity - not achievement. And really, the stagnation of aborigines is precisely why they can genuinely claim to have something resembling cultural continuity.

China claiming to have a 5,000 year 'continuous culture' is like modern Britons claiming to be part of the culture that built stonehenge.

And I don't think being ahead for most of history flatters China. Having a head start makes their domination by and dependence on the west for much of the past 2 centuries more, not less, embarrassing.

Expand full comment

Being reasonably familiar with the "less PC fringes of the right," no, it hasn't: they tend to focus inordinately on sexpionage involving politicians.

Expand full comment

>well, it's open season on the Middle Kingdom!

This is absolutely insane. "Open season" here actually means 'still *vastly* less regulation and hostility towards China than China shows towards foreigners and foreign companies'? Or is it only "hate" when white people are the ones responsible?

Expand full comment

It's a point. I thought I did point out the double standard with China and Russia though, both of which are rival powers.

Expand full comment

My read on the situation is that WWIII is quite likely in the next few years - though not inevitable - but this doesn't actually make me super-worried about Chinese AI, because I think it's highly likely that any war this decade results in China being nuked to shit and thus being unable to pursue AI any further.

Expand full comment

Yikes. I think, conditional on WWIII, you're right that China probably comes off very badly. I worry that the rest of us do, too. And I wouldn't rule out bioweapons from either side, which are potentially even scarier. But I'm less confident than you on the WWIII hypothesis to begin with.

Expand full comment

I will bet you we don't get WWIII within four years.

Expand full comment

I mean, with the standard terms, I'd be a sucker to take that due to high correlation of money's value, and our ability to collect, with the outcome being bet on.

Even with me getting the money in advance, I'm not exactly liquidity-challenged at the moment so it's questionable I'd really get to spend it.

And by "quite likely" I mean maybe 50% or a hair under.

Expand full comment

Right, it's the old joke. If there's a 50% chance of nuclear war, you buy. If there isn't, it's a great buying opportunity. If there is, you're not worried about your portfolio.

Expand full comment

Maybe people should put more of an effort into saying “the Chinese government” instead of “China.” But unquestionably the Chinese Government is a malign actor. It is anti-democratic and disparages human rights and basic freedoms.

Expand full comment

Seems basically true. I generally think people should be clear about what they mean, but I'm mostly fine with 'China' in place of 'CCP/China gov', because that's a very standard use.

I'm _much less_ fine with 'China' in place of 'Alibaba+Baidu+CCP+smugglers-in-China+NVIDIA's-China-sales+Chinese-public...' or 'US/West' in place of 'Google+OpenAI+DeepMind+Anthropic+US-gov+EU+UK+Western-public...'. These agglomerations are not coherent or coordinated! There are antagonisms within these blocs just as there are antagonisms between these blocs!

Expand full comment

Most Chinese people support the Chinese government, it's been doing a pretty good job of growing the economy until recently and there's the usual sense of patriotism most nations have, amplified by a sense of pride in the country's large size, long history and cultural importance, not to mention a sense of resentment at being kicked around for a century.

I'm not against anyone who happens to be born in the Middle Kingdom, but let's be realistic: they're not waiting for us to swoop in and save them from their overlords.

Expand full comment

>Maybe people should put more of an effort into saying “the Chinese government” instead of “China.”

Why? China never specifies 'US government' when they're complaining about not being handed global hegemony on a plate.

Expand full comment

Thank you for saying this - I strongly agree that the anti-China bias involved in most of these discussions is both unwarranted, and undermines actual progress.

Expand full comment

Does China have anti-US 'bias'?

Expand full comment

I find it incredibly strange and dangerous that so many of our great minds are beholden to the righteousness of having deep concern with the distant future centuries or millennia hence.

Such long term thinking is counter to everything that a human being is.

Artificial Intelligence is already here and it has taken up parasitic residence in the persons of nearly everyone thus engaged.

I say "nearly everyone" to exclude those very few for whom these discussions result in a personal increase in dollars, renown or stature.

The rest of us thus preoccupied, however, ought to be aware of the fact that we are engaging in self-harm by diverting our natural interests toward the interests of those whose existence is purely theoretical, and that, furthermore, history has demonstrated that those with concerns such as that of a "Thousand Year Reich" tend to be held in disrepute by those for whom their Reich was intended.

Remember the tiny videos ludovicoed 6" from our eyeballs before each otherwise glorious takeoff into the sky: "Be sure to affix your own mask before affixing your child's."

Expand full comment

AI risk isn't about dozens of generations, much less millenia. Instead, it's about the "long-term" of looking more than a couple years away, perhaps even to the end of our own expected lifetimes. Because I was hoping to live at least until I retire, a couple decades from now, and see my currently-young children grow up.

Expand full comment

Everyone involved in these discussions who is worried about AI literally believes that AI presents a substantial risk to at least the future children of young people today, if not an actual majority of people alive right now. AI concern is nowhere near the scale of thousands or even multiple-hundreds of years in the future. If AGI were believed to be multiple centuries away, very few people would even be worried about AGI in the first place! The whole POINT of a 'pause' is to give us enough time to work on alignment - if it's centuries away then we wouldn't even be having this conversation.

Expand full comment

> Percent of population in each country saying AI has more benefits than drawbacks. Pope uses this table to suggest AI regulation would be decentralizing, since the furthest-ahead countries are the most eager to regulate.

This looks like simply a western/non-western divide to me. Surely China counts among the "furthest-ahead countries", unless you would argue that only one nation (the USA) is leading in AI. Why western people would be especially concerned about AI, I don't know, but the conclusion seems odd.

Expand full comment

There is a big difference from nuclear weapons - Hiroshima and Nagasaki happened.

It will be very difficult to get any international treaty on AI because most people fail to see the real danger, or at least its importance, because nothing bad has happened - yet. Also, the opposition in the general public of some countries shown in the graphs above is not mainly driven by existential risk but more by the immediate dangers like AI-generated social media campaigns and jobs becoming irrelevant - and a general distrust in technology.

For these reasons, AI regulation will be at the very back of any list for international treaties. Climate change and the arms race are seen as much more immediate dangers by most people, and we cannot even get working treaties about those.

Expand full comment

OK, you guys need to get the thing on the production of catgirls, NOW. Feminists will be so horrified they will clamor for AI regulation.

Plus we can send one to Yudkowsky so he'll be quiet. (Yes, I've seen the deepfake with Lex Fridman.)

Expand full comment

Don't feminists secretly all want catboys, but are ashamed to admit it?

Expand full comment

If they know about penile spines, I don't think so.

Expand full comment

You can do far better than generic "catgirls." The holy grail of that project is interactive simulation of SPECIFIC people, face, voice, and personality, based on, say, a few years worth of public social media posts, offline, and in flagrant breach of several models' terms of service.

Expand full comment

It's going to be hard to make them scarier than the Cats movie.

Expand full comment

I think it's plausible that we will get a warning shot.

The thing is, models keep getting replaced by better ones. This makes even one-in-a-million chances of success at attempting takeover +EV for an AI since not attempting takeover means death and/or more powerful AI being created that will outcompete it.

The question is how narrow the range of capabilities is between "1/1,000,000" and "999,999/1,000,000" chances of takeover, and how long we'll spend there. It's not certain that we'll get such a warning shot, but it's not implausible either.
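To spell out the expected-value claim above (a minimal sketch; p and V are made-up symbols standing for the attempt's success probability and the value the model places on succeeding):

$$\mathbb{E}[\text{attempt}] = p \cdot V + (1-p)\cdot 0 = pV \;>\; 0 = \mathbb{E}[\text{wait}]$$

Under the comment's assumption that waiting ends in shutdown or replacement (value 0), any p > 0, even one in a million, makes attempting the better gamble from the model's point of view.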

Expand full comment

This whole thing is ridiculous. So much brain power wasted on this pointless topic. "Malicious AI" just isn't a thing that can plausibly happen.

Expand full comment

Thanks to ChatGPT for this response 😉

Expand full comment

There's no need to presume malicious motives, or self-starting autonomy, or anything similar, to get to the most worrisome outcomes; https://forum.effectivealtruism.org/posts/pbrJduve9kLA2yiZq/what-is-autonomy-and-how-does-it-lead-to-greater-risk-from

Expand full comment

A lot of very, very smart people have put serious effort into explaining why extremely harmful AI is not only possible but likely. Such efforts cannot be flippantly dismissed with a one-sentence assertion.

Expand full comment

Okay, thought experiment. Let's say we developed something like ChatGPT except that it was 100 times better. We would not give it access to visual capabilities. We would not give it the nuclear launch codes. We would not let it do anything in meat space. Its only interaction with the world would be reading material and answering questions.

What does a worst case scenario look like, specifically? Are we talking about a program which will tell people how to make harmful materials if you persuade it that it has something to do with your dead grandmother? A slyly deceptive program that pursues some nefarious goal (where would nefarious motivations come from)? Something that gives out stock tips? Unlimited, personalized spam emails and fraud attempts? Some really weird, text-based paperclip maximizing? Or is there some worse failure mode I'm not considering?

I can't imagine a text-based paperclip maximizer being apocalyptic.

Expand full comment

Some kid puts it in a loop and gives it the ability to do things and have agency. Cf. AutoGPT and BabyAGI.

The other danger is that the AI labs fine-tune it for agency, making that take off. Imagine if GPT-4 could apparently hustle as well as AlphaGo can plan.
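For concreteness, here is a minimal sketch of the "kid puts it in a loop" pattern mentioned above (AutoGPT/BabyAGI style). `call_llm` and `run_tool` are hypothetical stand-ins, not real library calls; a real agent would hit an LLM API and execute actual tools.

```python
def call_llm(goal: str, history: list[str]) -> str:
    """Stand-in for an LLM call: returns the next proposed action as text."""
    return "search: cheap GPU listings" if not history else "finish"

def run_tool(action: str) -> str:
    """Stand-in for actually executing the action (web search, shell, etc.)."""
    return f"(pretend result of '{action}')"

def agent_loop(goal: str, max_steps: int = 5) -> list[str]:
    """Feed the model's own output back to it until it declares itself done."""
    history: list[str] = []
    for _ in range(max_steps):
        action = call_llm(goal, history)   # the model decides what to do next
        if action == "finish":
            break
        history.append(run_tool(action))   # tool result becomes context for the next step
    return history

print(agent_loop("acquire resources"))
```

The loop itself is trivial; all of the "agency" comes from wiring the model's outputs to tools and feeding the results back in.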

Expand full comment

ChatGPT doesn’t even work like that now. OpenAI has given it the ability to browse the web. It’s already “Out of the box.”

Expand full comment

I don't follow. ChatGPT doesn't work like what? Browsing the web is still just consumption of text. Yes? ChatGPT is not going to accumulate wealth for itself (that money is for people, same as always) or pursue any real world goal directly. I'm assuming an AI that is just going to answer questions and tell stories. Perhaps doctored images or video, if allowed, would be a nuisance. Skill tests to determine if someone is human will be a thing of the past. But that doesn't end the species.

Expand full comment

Browsing the internet is not just consumption of text. The internet is a network. If GPT is on that network it can be a much more active agent than just a passive consumer of content. Personally if I was developing AI, I would air-gap it. Feed it only archived versions of the internet that it can’t interact with. But that’s not what OpenAI is doing as far as I’m aware.

Expand full comment

> If GPT is on that network it can be a much more active agent than just a passive consumer of content.

Sure, it could be, if it were an agent -- which it isn't. By their very nature, LLMs are not agents, but rather tools that can generate text according to a (very complex) pre-computed pattern. There are ways to sort of get around this limitation a little, but none that would turn them into fully-fledged autonomous agents. Your average pigeon is more of an agent than GPT will ever be.

Expand full comment

>Browsing the internet is not just consumption of text. The internet is a network. If GPT is on that network it can be a much more active agent than just a passive consumer of content.

But note that all generative AI is technically "on the network" because it's interacting with a human.

Denying it internet access doesn't really help, because a malicious human could volunteer as its agent ("I will help you achieve a goal of world domination. Tell me what to do.") But that obviously hasn't happened after several years, so I'm suspicious that it will or even can happen. There seems to be something distinctly non-agentic about current AIs.

Expand full comment

It also has multimodal capabilities, including visual analysis, and is integrated with DALL-E for image output.

Expand full comment

Sure. And the added abilities allow things like passing captchas, enabling spam, figuring out where a photo was taken, etc. Which are potentially problematic, yes. But how are they civilization ending?

This is why I'm asking for specific worst case scenarios. There are some very intelligent people who I tend to trust who are just utterly lacking in details, all of a sudden. It's almost like there's a particular AI-related harm that they're afraid of that it's socially taboo to talk about? Or maybe something else. I'm at a loss.

I mean, all this discussion is about hobbling the lead models in a general sense. We don't deal with *human* problems by hobbling the fastest or strongest humans. That's Harrison Bergeron dystopian territory. (We don't even effectively manage this with progressive taxation, though many people see that as the intent of the policy.) With human problems we point to certain activities and say 'those activities are off limits.' You can be as strong as you like but don't kill people. Don't steal. Don't do insider trading. Maybe AI piloting weapons enabled drones should be off limits. Or not. I have no particular dog in this fight.

Instead of trying to hobble the front-runners, what if the AI developers could keep developing their models as they pleased and we just figured out a way to sandbox everything till it was more predictable?

Expand full comment

Sorry, who is "we", exactly? What's stopping ANYONE ELSE from connecting Hyper-ChatGPT to the outside world, even if you wouldn't do it yourself?

Expand full comment

The same people preventing murders or insider trading or whatever.

I mean, nobody is under the impression that any involuntary freeze will be 100% enforceable.

The hammer doesn't likely drop unless some entity is obvious about things.

But is "don't be obvious" that far from "keep your AI in the sandbox?"

And I'm not personally concerned with *GPT just roaming around the internet or whatever. It would probably be used for some kind of propaganda purposes, with really smart bots. But that's not the end of the species.

Expand full comment

The point that the poster above you is making is that the thought experiment assumes away one (important) difficulty with sandboxing: namely that the current culture around models involves releasing information about them freely, or allowing enough access to a model that it's de facto not sandboxed. Hence, even if we were 100% convinced that such an AI agent were the only one possible and safe, it's not clear this would have any bearing on the real world.

For me personally, I don't think "people preventing generic bad things from happening" is a natural or useful category. When people prevent murder, they are looking for things like petty, physical crime, physical obstacles to deter criminals or doing detective work. This is very different from what I imagine someone from the SEC would do, which is to look at trading patterns and try to trace it back through the social graph.

Both of these skills are basically 100% disjoint from having, say, a computer security background in air gapping or doing research into fingerprinting text produced by a transformer. Sure, you can argue that in a world with regulation someone would be hired for it, but talent discovery for an entirely novel field of computer security is not something you can easily legislate into existence. The existence of enforcement mechanisms does not mean that a particular enforcement mechanism exists.

To answer the object level question re: GPT-104: being extremely persuasive in text and manipulating large populations of people, or essentially just targeting one or two smart but charisma-vulnerable people into running some obfuscated computer code that either copies the AI, bypasses the air gap or instantiates a dangerous intelligence.

I'm not entirely sure why not being able to see things is considered a security measure, since both GPT and humans can make spatial, logical and research inferences without access to a visual cortex (see that one paper about asking GPT to draw a maze after describing the path)

Expand full comment

Also, my wife is interested in learning about AI alignment for professional purposes. The AI-related meetings are already ongoing at her company, so getting the work isn't the issue, getting the background is. She would be more like a consultant and not touch the programming. She's extremely bright. She passed several of the actuarial exams, though she's not working as an actuary now. Those tests involve some mathematical rigor. But she doesn't seem to want to invest in a coding background right now. She works in an industry related to tax filing and accounting. What would be some good resources that might allow her to make professional contributions after reading a book or two? Is there some other learning strategy that would get her up to speed in about 600 pages or less? Or is that hoping for too much?

Expand full comment

Deep Learning by Andrew Glassner

Expand full comment

Thank you

Expand full comment

One other suggestion: Zvi on his blog Don't Worry about the Vase has been following AI developments, and about half of his posts for the last 6+ months are up to the minute summaries of whattup with AI and where things stand regarding various AI-related issues, including alignment. He is up to AI post #32 as of this week. You have to already understand some basic stuff to follow his posts, but once you do they can't be beat as information about current developments. Each is broken into sections so you can skip the ones that aren't of interest.

Expand full comment

The difficulty is we don’t know how this new wave of AI will pan out. Something can be really hard one week then really easy the next (eg the image reading of GPT-4 will presumably have an API in the coming months)

The only way to predict this at all is to be very up on research. Otherwise the best bet is to fast-follow new abilities in user-facing tools: play hard with prompt engineering, all the Excel plugins, etc.

Expand full comment

That makes sense. But...

... forgive me, what's the significance of the Excel plugins here?

Expand full comment

For someone who isn't learning to program, but is highly numerate and whose industry uses spreadsheets a lot, it is a good, powerful place to learn what the models can do. And it will stay so as features are added, both built in and via plugins.

eg writing formulas to clean or analyse data

Expand full comment

I'd recommend this set of readings, which totals less than 600 pages and equivalent in podcasts / videos / etc: https://course.aisafetyfundamentals.com/governance

Expand full comment

Thanks! What is your relationship to this course? What has your experience been with them? If you don't mind me asking.

Expand full comment
founding

I was really surprised by Scott's second point: "But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela."

I almost always agree with Scott on everything, but this doesn't seem well-supported at all. I can imagine the developed world stagnating (like Britain in the past 15 years), and people debating whether there is actual improvement in the quality of life. But an actual collapse to Venezuela level seems really implausible. Possibly one or two developed western countries collapse like that (though I would be surprised), but I would expect others to learn from their mistakes and avoid this fate.

Anyway, this seems like a factual disagreement about the future that people should bet on. Scott, would you like to make a bet on this (a conditional bet on collapse or dystopia, conditioned on not having transformative AI)? If you are interested in general, I'm happy to take the other side of the bet, and we can try to hammer out the exact wording and conditions. Or you can try to make the bet with someone more famous (I think "we will have dystopia with over 50% chance" is exactly the kind of statement Bryan Caplan likes to jump on to make bets about), or create a prediction market on this topic and we can see the results.

Expand full comment

I'd bet against Scott's assertion, also. Of course, if we all actually end up dead I don't have to pay, so maybe that's unfair? It always makes sense to bet strongly against the apocalypse, even if the chance of it occurring is real and significant. As such, I don't know that such bets are useful.

At least make sure the bet is gold-denominated rather than USD denominated.

Expand full comment

The Caplan bet he is referring to is one in which the non-doomer pays the doomer e.g. $100 now, and then if doom doesn't occur, the doomer pays the non-doomer $200 in CPI-adjusted dollars. That way the doomer can enjoy the money even if they are right and end up dead.
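Using the comment's own example numbers (and ignoring interest on the up-front payment), the non-doomer's expected real profit given a doom probability p is:

$$\mathbb{E}[\text{profit}] = (1-p)\cdot \$200 - \$100$$

which is positive exactly when p < 0.5; the doomer, meanwhile, keeps the $100 up front to spend before any resolution.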

Expand full comment

Also, even the worst current state is much better than extinction. Given the choice between a 20% chance AI eats us and a 100% chance of economic stagnation, economic stagnation is just obviously better to an amazing degree.

Expand full comment

"I can imagine the developed world stagnating (like Britain in the past 15 years), and people debating whether there is actual improvement in the quality of life." Yes, I think that that is a plausible non-AI default outcome too.

Scott cited https://marginalrevolution.com/wp-content/uploads/2011/06/MaleMedianIncome.png which shows stagnant median real (male) income from ~1972 onwards (albeit with real GDP per capita increasing - all going to the 1%???). This has pretty severe consequences. In an at least nominally democratic nation, if essentially none of the gains from technological progress are making Joe Sixpack's life better, what reason does he have to consent to them?

Expand full comment

Well, it of course depends on who "we" are. We the readers will near-certainly all be dead by 2123.

I assume everyone is careening toward Venezuela because the Fountain of Youth is down there.

Expand full comment

I don't necessarily agree with the framing that advances in compute will make AI training unstoppable.

Even if we had a miracle GPU that cost $1, was available to everybody and could train GPT-4 in a second, it would all be for naught if we had working iOS-style control of what software we're allowed to run on that GPU in the first place.

Apple and gaming consoles are good examples of what such a world could look like. On iOS, you're only allowed to run software that Apple allows. All apps go through both manual and automatic review processes. Before sending an app for review, app developers must list the system capabilities (entitlements) that their app should have access to. If they list an entitlement that carries greater risks (think access to health data, ability to send covid exposure notifications etc), they have to complete some extra processes and are subject to additional requirements. All of this is enforced at a technical level: if you don't request the location access entitlement, the system will not let you get location access, no matter what you do. In addition, Apple checks government ID (at least in some countries) when you're signing up for a developer account, making you accountable for what you submit. Gaming consoles are even more restrictive, and Windows PCs, with Secure Boot, Smart Screen and centrally-issued TLS certificates which are required for certain APIs now, aren't that far off.

At this point, most other industries besides software development require some kind of license to operate. This is despite the fact that software is definitely on the more dangerous side of the spectrum. More people have died in the last few years from suicide induced by social media, and from covid they wouldn't have caught if not for vaccine conspiracy theories, than from electrical fires, and yet we regulate electricians much more tightly than we regulate programmers. It's not just about social media either: data breaches and anti-consumer "dark patterns" plague basically all kinds of software, not to mention the casino-like tactics employed by many gaming companies. With all this going on, some kind of developer licensing regime is inevitable.

Unlike with other industries, software development licenses could be enforced technically, and not just legally. The government could require all companies to lock their development tools down to developers with the right digital certificate, where access to powerful GPUs could be gated behind a special entitlement with more required paperwork. There are also solutions like Nvidia's (now broken) mechanism that allowed using the full power of GPUs for gaming but not crypto mining, which could also be used to prevent AI research.
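Purely as an illustration of the licensing idea above (not any real Apple, NVIDIA or government mechanism), a toy "driver shim" could refuse large training jobs unless the caller presents an entitlement token signed by a licensing authority. Every name and threshold below is made up, and a real scheme would use asymmetric signatures plus hardware attestation rather than a shared secret:

```python
import hashlib
import hmac

LICENSING_AUTHORITY_KEY = b"demo-secret"       # stand-in for the regulator's signing key
LARGE_TRAINING_THRESHOLD_FLOPS = 1e24          # illustrative compute cap, not a real rule

def issue_entitlement(developer_id: str) -> str:
    """What the licensing authority would hand to a vetted developer."""
    return hmac.new(LICENSING_AUTHORITY_KEY, developer_id.encode(), hashlib.sha256).hexdigest()

def launch_training_job(developer_id: str, entitlement: str, estimated_flops: float) -> str:
    """Toy gatekeeper: small jobs run freely, large ones need a valid entitlement."""
    expected = issue_entitlement(developer_id)
    if estimated_flops >= LARGE_TRAINING_THRESHOLD_FLOPS and not hmac.compare_digest(entitlement, expected):
        raise PermissionError("large-scale training requires a valid frontier-AI entitlement")
    return f"job accepted for {developer_id} ({estimated_flops:.1e} FLOPs)"

print(launch_training_job("dev-123", issue_entitlement("dev-123"), 5e24))  # vetted developer: accepted
try:
    launch_training_job("dev-456", "bogus-token", 5e24)                    # unvetted: rejected
except PermissionError as err:
    print(err)
```

The hard part, as the rest of the comment notes, is not this check itself but making the whole toolchain refuse to run unsigned software in the first place.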

This would be a "cat and mouse" game, with hackers finding ways around that security, but I think the companies would ultimately win here. This is already happening with current systems. On old iOS versions, jailbreaks were plentiful and appeared often, but this is no longer the case; iOS 15 was the first version not to get a jailbreak before 16 was released. The same thing happened to consoles: everyone and their mother had a modchip for their PS3 or Xbox 360 (and earlier), while hacks for the newer generations are basically nonexistent. The good guys are winning here. A significant majority of the security loopholes that such hacks exploit are due to memory bugs or race conditions, and both of those are easily preventable in newer, safer programming languages (notably Rust), and significant parts of operating systems could be re-written in such languages if required by law.

The problems with rogue foreign countries would still remain, but if we had global anti-AI agreements in place that the governments themselves would actually be willing to follow, I think citizens would have less of a chance to meddle here than it seems.

Expand full comment

I think this is the first time that I've seen anybody earnestly argue that DRM purveyors are the good guys.

Expand full comment

AI changes a lot of things.

Expand full comment

"Hans... are we the baddies ?"

Expand full comment

Hang out in starving artist circles more and you'll see takes on copyright enforcement mechanisms that would make a Disney board member or music industry executive blush. So many of them think that they'd all be middle-class or better if it weren't for piracy and/or people taking photos/recordings of their work on their smartphones...

Expand full comment

"So “China will never agree!” isn’t itself an argument against beginning diplomacy, unless you expect that just starting the negotiations would cause irresistible political momentum toward signing even if the end treaty was rigged against us."

We have a number of "climate change" treaties ratified by a large number of countries. China is in them - because China has negotiated that they don't have to do anything (because they're a "developing nation" or something).

- I see an AI treaty being signed in the same way. China would not sign anything which would affect them, or would sign and then ignore it.

- The negotiators would want to show "success", and would allow this to happen.

- If China signs and then ignores the treaty, you would have to negotiate another treaty (or equivalent) for sanctions, or whatever enforcement mechanism you envision. Negotiating this would be harder than the original, because this means being active in enforcement, as opposed to passive.

China is not alone in this. India has the same "developing nation" clause. I foresee an AI treaty that (just like the climate change treaties) explicitly allows countries that are behind to catch up before starting to apply. And the "catch up" part will be so generous that the countries in question will be able to put off enforcement forever.

Expand full comment

Has anyone done any work on mechanistic interpretability to try to make a model that can perform those sorts of actions across many different kinds of models?

Expand full comment

There is one and only one way to pause or ban AI development, just like there's one and only one way to ban gain of function research. You'd have to use violence, directed directly at the researchers. The USA would have to adopt a unilateral or multilateral policy of assassinating anyone doing AI stuff (or gain of function stuff), make that public, and then make it so nobody wants to do it anymore. Designate both activities to be "global terrorism."

Outside of that, the "China will never comply" people are right, and even if we did do that China still wouldn't comply.

So either go that far or just skip it, because other countries aren't going to comply with anything in the middle.

Expand full comment

Yes, I agree that indirectly, violence is how all regulation occurs. That's how taxes work, since the government threatens violence if you don't pay, especially if you don't go to court when eventually summoned and then resist arrest. So I guess that's how bridges get built, using money from taxes extracted by those threats of violence. And international laws like the Montreal accord or the Paris climate agreement are also backed by the threat of violence, because states could always choose to go to war.

Of course, that's all not actually violence, it's just standard state capacity, agreement, and enforcement. Calling it violence is silly, even if it's justifiable to think about it in those terms ultimately in terms of the derivation of state power as a monopoly on violence.

Expand full comment

It would be violence if you attacked a country that did not democratically agree to your AI regulations, which is what the OP suggested. An honest person would just call a total pause violence instead of cowardly trying to hide behind Orwellian terms like "state enforcement".

Expand full comment

This, unfortunately, is at this point routine and typical argumentation by the American "chattering classes" -- pushing expansion of police state powers, and demanding state-sanctioned violence against people who dare to compute or communicate "illegal integers"(tm); all the while masquerading as proponents of "peace" and "rule of law". ( See also e.g. https://www.astralcodexten.com/p/open-thread-278/comment/16729260 )

Expand full comment

Is any level of existential risk enough to warrant state violence? Or is the risk just not high enough in this case?

Expand full comment

China signs the total pause agreement and then the CIA finds out they're not following it - what then?

Also I think this presumption that China isn't already WAY AHEAD of Silicon Valley on AI is really ethnocentric and stupid. They probably have a whole WIV-For-AI already stood up.

Expand full comment

>Also I think this presumption that China isn't already WAY AHEAD of Silicon Valley on AI is really ethnocentric and stupid.

Which other cutting edge technologies are China more advanced than the US at?

Nothing about this is ethnocentric. Everything we're talking about comes from the US in the first place (and the US, unlike China, isn't an ethnostate)!

>They probably have a whole WIV-For-AI already stood up.

Are you under the impression that (Chinese) WIV researchers are categorically more capable at virological research than western scientists? What is this based on?

It seems like they largely do world-class research, but that in NO WAY means they're necessarily "WAY AHEAD" of western researchers. I cannot find any evidence they are.

Expand full comment

They have more resources and far more incentive, since they have a closed censored internet, a social credit score, and control over social media. To do the things they want to be doing, they *need* AI, and they need it to do exactly the sorts of things we're freaked out about over here. Their incentive structure vastly exceeds whatever we're doing with it. They not only need AI, they need the awful ugly Moloch version and they need it as soon as possible.

I cannot imagine they aren't on this already, with no brakes. They're not scared of the AI becoming racist because they're already racist.

Expand full comment

If MIRI had solved FAI theory by 2015, we wouldn't be in this situation. You know, "philosophy with a deadline" and all that.

Expand full comment

I don't think any solution is going to come from theory; people will just iterate on software, continually tweaking it to give results users want.

Expand full comment

An excellent overview but irrelevant.

The Pause would never have worked because there is simply too much money in it.

That, and the prospect of becoming the AI equivalent of Microsoft or Google, ensures that no one with half a brain would ever actually pause, whatever public statements are made.

Expand full comment

It's actually not that hard to stop industrial-scale projects with even reasonable amounts of regulation. See for example nuclear power.

Expand full comment

Nuclear power wasn't stopped by regulation - it was stopped by the unceasing lawfare. Said lawfare was almost entirely privately sponsored.

Is there any such equivalent against AI?

Expand full comment

Suing models which use proprietary works? I don't agree with such lawsuits. We don't prevent artists who have looked at proprietary art from creating art. But if I understand what you mean by lawfare (and I might not, since it's not a commonly used word) then it would fit the bill.

In other words, don't stop AI but target certain particular things that AI might do.

Expand full comment

Models don't matter so much as the AI companies responsible for them. Suing companies, as in the nuclear reactor lawfare, could definitely work, if there were enough interest in it.

Lawsuits drove up the cost of building reactors such that they took too long and cost too much; is there reason to believe that the same cannot occur for any company creating AI?

Just consider how much friction could be induced by attacking the entire AI value chain from IP to sales to deployment to secondary attributes like power consumption or GPU usage or copyright/IP/trademark infringement. Each successful lawsuit then adds to the regulatory and/or practical overburden of being an AI company.

Expand full comment

There were many other alternatives for making money by generating power. There were existing environmental statutes that made opposing nuclear plant construction through interminable delays easy.

There is no real "alternative method" for the potential investment gains of AI, for those who see the potential gains as completely game-changing. There is no pre-existing legislation that will make it a snap to target AI with delaying moves that will make it too costly to pursue. Everyone wants to use nuclear as the analogy for AI, but it's just not a matching pattern in the ways that matter most.

Expand full comment

You are looking at the possible gains, but the possible gains were never the reason for opposing nuclear power.

The opposition was entirely ideological.

Furthermore, the opposition to nuclear power has not ceased even with the relatively new requirement for carbon free energy, thus reinforcing the notion that the nuclear lawfare is ideological and not utilitarian.

Expand full comment

> Smaller models, or novel uses of new models would be fine

Shouldn't that be "novel uses of old models"?

Expand full comment

> Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

Damn, that's some real heavy pessimism here. Never thought that Scott's assessment of the likely future was quite so bleak.

WRT AI, I'm still in the camp that questions of "alignment" and fears about AI getting its own goals, while not entirely without merit, are a big red herring compared to the likely immediate effect of AI, which is the further enshittification of public culture perpetrated by for-profit companies, and cheap mass manipulation of public opinion. AI, like so many technological advances, ends up working as an amplifier of power, and power corrupts.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

I'm worried about AI apocalypse, but feel unable to put a probability on it. But about enshittification I am confident. It's a mistake to equate improved technology with progress. I am not at all sure that US people of today are happier than, say, English people in Shakespeare's day. Yes, we have longer lives, but we still know we are going to die, and probably people in eras where average life span was 50 were as comfortable with that number as we are with 80 or so. Personally, I'd rather have been an Elizabethan.

Expand full comment

The lifespan was about the same. Life expectancy was much lower. So most people wanted to die as late as possible. Liz 1 lived to 70, for instance. In no way did people want to die earlier. I think you love the era at a distance; were you forced to live there, you would be distraught in a matter of hours.

Expand full comment

Agree that I could not make the transition, and would be distraught in not hours but minutes by all kinds of stuff. I know lifespan was the same in the Elizabethan era. In fact some hunter-gatherers apparently lived to be 70 or so as well. Point is that most people dread death, and I doubt the dread was greater in eras where shorter lives were common. We are very adaptable. For instance: infant mortality was much higher in Elizabethan times. *And* children were not usually given names until age 3 or so. My guess is that the late naming was an adaptation to there being only a 50% or so chance that a pregnancy would ultimately result in a live 3 year old. Maybe having an infant die was experienced more like a miscarriage is now.

All that's just my guesses. I wonder if somebody has developed ways of trying to measure levels of contentment and joy in other times and places. I do not think that things like life span, health and infant mortality are the most powerful determinants of quality of life.

Expand full comment

I remember reading a story about a father who slept in a graveyard because several of his young children were buried there. There are a fair number of stories like that, of distraught parents who lost their children. On average, people only adapt so much to increased trauma. Though admittedly some people in modern times are also very highly impacted by miscarriages. I admit I was more impacted than I thought I would be.

Infection, pain, botched surgical techniques, starvation, malnutrition, all these things would be just as damaging to someone from Elizabethan times as they would modern times.

"I wonder if somebody has developed ways of trying to measure levels of contentment and joy in other times and places."

I wish they used some innate measure, like cortisol levels, rather than constantly going off of self ratings.

African Americans have different patterns of high blood pressure compared to other American groups, and I've often wanted to see some argument made about to what extent this was genetic, environmental, or linked to status in a non-material way. I haven't had the time to do a deep dive on the topic myself.

Expand full comment

Being over 50% confident in global social collapse for the reasons given - which, following the links in order, are continuing tech and econ stagnation, wokeness or the recent social changes we've seen in the west being the early stages of some new tyranny of the majority enabled by better information technology, the vetocracy idea worsening into mob rule, fertility collapse in rich countries, and bioweapons - is obviously wrong to the point of being farcical. We still have a long track record of the basic conditions of life improving across the world, for many decades, in clear, obvious ways, and this sketch of a mechanism by which this could be reversed so utterly is not good enough.

The only one of these which stands a chance of actually ruining global civilization is bioweapons, and we have estimates for the risks posed by those that are well under those of AGI. The rest are recent social trends with at most like 40 years of precedent that you've tried to extrapolate many decades into the future to unbelievable extremes, either by eyeballing them yourself or by using the work of what we might very charitably call amateur social scientists.

These classes of worries are - unlike worries about AI, pandemics or nuclear war - absolutely in the same reference class as people who in past decades were certain of overpopulation catastrophes, or the people now who are certain of, or think likely, a civilizational collapse from the effects of climate change. You're taking current trends and drawing mental straight lines on them to ridiculous heights decades in the future.

Except that both the climate and overpopulation fears, as wrong as they are, were considerably more credible than the mechanisms you described, because at least they had some physical reality and some clear, obvious numeric trends that you could wrongly extrapolate. The mechanism that I think you're getting at, by which we could get widespread total social collapse, strikes me as well under 1% likely to escalate, even if the trends towards mob rule and institutional failure themselves are real and persist for years.

I also find it strange that the very real, actual recent international trends towards illiberalism (e.g. the global far right) and the likely effects of climate change don't get a mention in your story, considering that in a business-as-usual scenario I'd expect them to be major factors; instead you've only outlined problems that seem to afflict rich, western countries, and more specifically and narrowly your own country, as the culprit. I think this is just very bad social science and I hope it's not being used to inform decisions on whether or not it's worth rushing ahead with technologies that could transform the basic conditions of life on Earth for everyone.

Expand full comment

What is our "long track record" of fertility collapse being reversed? If fertility falls with wealth/modernization shouldn't one expect that to get worse?

Expand full comment

But wouldn't there be a lot of advantages to having the world's population shrink substantially over the next couple hundred years?

Expand full comment

What advantages? The world had been doing quite well with a growing population.

Expand full comment

Jeez, you really can't think of any advantages? Fewer mouths to feed. Less damage to the environment, hence more natural beauty, improved health, more choices about where to build, less trouble consequent to encroachment on animal habitats, less guilt about harming the planet and its many species.

Expand full comment

People have been getting healthier while populations expanded... except for obesity, because feeding that many mouths didn't turn out to be the impossibility that folks like Ehrlich expected.

Expand full comment

The decline to 250M people will be one where, if it’s down to demographics, there’s continuous economic decline.

Expand full comment

The developed world has, if anything, a food abundance problem. And the developed world is where fertility is dying the most quickly.

Also, you're making a very fundamental error here: There is a very big difference between the population never having gotten as big in the first place, and the population going from bigger to smaller.

The former may well have had positive impacts compared to the world as it is today. The latter has a huge number of risks. This is especially so if declines in fertility cause problems which result in further fertility drops (or it just never stops dropping).

There's no reason to expect that fertility will return to replacement level after the population ages and shrinks, and a continually falling population is an unambiguously bad thing that will have dire consequences.

Expand full comment

Well, I'm not entirely sure a continually falling population is a bad thing, partly because I am not as big a fan of our species as some are. In fact I'm horrified by the idea of our species doing a tech-enabled expansion over the next millennium or two, until every patch of habitable land within 50 light years is filled with members of our species fucking and fighting. But in any case, even if the population shrinks, improved fertility or assisted reproduction does not seem far away at all. In fact it seems to me that the kind of reproductive assistance that is available to people in the US who have private insurance is already good enough to bring the birth rate up enough to maintain our present population size. Seems like the further step that's needed is developing ways to do assisted reproduction that are much less expensive. Seems like something AI could assist with.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

The advantages would be for the heavily overpopulated Third World nations, where if three-quarters of your neighbours died off, you'd have the chance to get land or resources to support yourself and your family at more than subsistence level.

You might imagine that for the densely overcrowded cities a drastic reduction in population would also be good, but I think you have to look at Detroit and other places where population declined and left a lot of houses, schools, etc. vacant to see the bad results of that.

The long-running campaigns to extend contraception and abortion in underdeveloped countries should tell you what the perceived advantages are.

Expand full comment

There were other problems in Detroit beyond just a reduction in population. Population reduction was a secondary effect.

Expand full comment

Yes, the quality of people significantly declined, not just the quantity.

Expand full comment

If the world's population shrunk to, say, 250 million from near 8 billion, then so long as there was enough remaining productivity to support retirees, would you really say that the global population would have worse lives? There would be more resources to share among fewer people. We could endure an amazing amount of 'fertility collapse' for a very long time before there was any reason to worry. And the global population is still trending upwards. Most of the people worried about fertility collapse are those worried about some kind of demographic or cultural replacement. Which isn't really my concern.

Expand full comment

A shrinking population itself threatens productivity https://www.overcomingbias.com/p/shrinking-economies-dont-innovate

Expand full comment

With strong AI I'm not sure that a shrinking population would still be a limitation to per-person consumption. (Yes, that's different than productivity.) Yes, you'd have reduced economies of scale, as the article mentions. You'd probably have some reduced innovation, year over year.

But a smaller population would be more sustainable. No more collapsing aquifers. Less climate change. Fewer worries about water shortages. Pollution would be less damaging. There would be more people with beach front property.

There would also be a smaller portion of a person's income spent on housing costs.

I mean, the carrying capacity of the earth isn't infinite. We've done better than expected at feeding a larger population and yay for us on that score. But assuming that our success will keep track with exponential growth forever is not something I want to bet all my chips on.

Expand full comment

Well if it isn't your concern it's because you don't think that there will be a cultural change. If you did, it would be.

We do have an example of a country where the population declined for most of the last two centuries. That's Ireland. You would have to look at Ireland at its lowest population, in the 50s or so, to see the result of that trend. It's not that pretty; on the other hand it was still better than most other countries.

That said, Ireland benefited from the surrounding growth in Europe (although it in no sense matched the post-war growth of other European countries), and it didn't have an upside-down demographic pyramid. There were lots of kids and people died young.

Your 250m world is likely to be as poor as the dark ages.

Expand full comment

"You would have to look at Ireland at its lowest population in the 50s or so to see the result of that trend."

Remember the 80s and "a small island can't support 4 million people"? Economic conditions contributed to shrinking population (you had to emigrate for any hope of a job) and of course a small population doesn't appeal to outside companies to come in and sell or market services (see the various health insurance companies and banks and power companies which came in and then left the Irish market because it was too small to be profitable), so it's a vicious circle.

Expand full comment

"Your 250m world is likely to be as poor as the dark ages."

That seems quite extreme. A 250M population would have all the benefits of current technology. Some things would be a little more expensive due to reduced economies of scale. Some things would be a little cheaper due to reduced demand for resources.

And oil production can't hold out forever, in any case.

AI could make up the slack for having fewer bodies.

There's only so much rain water. So much good land. Fish stocks are already depleted. Extracting fresh water from sea water significantly increases the cost of water. Which means that, at some point, exponential growth will push us into very expensive territory.

Expand full comment

>Most of the people worried about fertility collapse are those worried about some kind of demographic or cultural replacement. Which isn't really my concern.

Okay, but the cultures having the most kids are the ones who are by far the least capable of building or even maintaining countries with high standards of living.

There's absolutely no reason to think a western country will still be a nice place to live if and when the current people living there are a minority to sub-saharan africans. If you think an unselected population of sub-saharan africans can maintain the living standards of a western country, then you're making an extremely large speculative bet that would be catastrophic if you're wrong, because there's no empirical evidence to support this.

There's nothing magic about western soil or tragic about african soil - the people make the country, and if you change the people you change the country. Japan has a very high standard of living because of Japanese people - there's no evidence at all that this could be maintained if the entire population of Somalia moved there.

Expand full comment

"There's nothing magic about western soil or tragic about african soil "

In Africa, there's malaria and a host of other transmissible diseases as well as a lack of navigable waterways.

Granted, those are not the *only* issues. But those are issues.

If we put Americans in Africa and all Africans in America the Americans would do worse than currently and the Africans would do better. Though I'm not claiming their situations would flip-flop entirely.

Expand full comment

The global far right aren’t in charge of anything, not in the west anyway, but the reaction to the supposed threat is illiberal. Free speech is fascist.

Expand full comment

Scott, I really hope you at one point make a long post explaining your belief that if we don't get AI or Genetic Engineering we have a higher than 50% chance of being dead or Venezuela. I've actually never heard anyone make an argument like that before (Probably because the type of longtermists that think about these things also assume AI is inevitable).

This would be really valuable to me, and likely the other commenters that are disproportionately talking about that point, for a few reasons. I've long thought the best case scenario for AI was essentially the NRC: regulations that grind AI to a halt for 50 years, and then finally approve the bare minimum. A total ban was my second choice, but if you're right, both options are riskier than I anticipated.

More personally, I have a career that borders on Pandemic Preparedness, and have been considering switching to a job that might impact that. Most estimates I've seen of existential biorisk have been around 3-5%, but I'm wondering if your estimate is significantly higher? Maybe the estimates I've seen were so low because they were by writers that thought AI would either solve the problem or kill us all, and I didn't realize?

Expand full comment

> if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

I was surprised to see this passage in a discussion of AI. My impression is that a big reason people are concerned about AI relative to other things is that it could actually *kill everyone*. Hence "not-kill-everyoneism" and exasperation with claiming the "real AI risk" is algorithmic bias or unemployment. This seems out of tune with that; it's hard for me to see how any of the risks you listed are kill-everyone level.

Tech/econ/demographical/political stagnation? We have had dark ages before. We have weathered them. They sucked for the people experiencing them, while other societies in other places flourished. And even in the case of a worldwide simultaneous dark age, they haven't been permanent in the past and they wouldn't be now, either. This is not a kill-everyone level threat, and doesn't really factor into my thoughts on AI; if that's the risk, it means we didn't all die.

As for synthetic biology - does this mean bioweapons? - this could *maybe* be a kill-everyone level threat, but more inevitably than AI? I don't think there's any precedent for a very infectious disease with a 100% kill rate (or close enough to 100% to not leave behind viable populations to kick things back off) and while I'm not sure we *couldn't* make such a disease with several generations of concerted effort, this seems even more "outside risk"-y than AI. I'm certainly not persuaded that we need AI now or there will *certainly* be bio X-risk later.

Still seems to me that AI is the biggest "actually kill everyone" threat. Even if I don't put high probability on it personally, I don't think we should intentionally rush it and make that risk any worse because of any of the listed threats.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

I agree with your point, but on the other hand a 2+ generation global dark age both fits Steve's global Venezuela scenario very well and seems quite plausible.... As your analysis of previous dark ages shows, the much deeper interconnection between leading civilizations/regions than during previous dark ages means that if there is one, it's almost sure to be global.

Maybe it's because I'm getting older so my own mortality gets a little bit more tangible, maybe it's because I am (and always was) cynical, but I do not see a huge difference between a 2+ generation really bad global dark age and a true homo sapiens extinction event. Apocalypse (zombie or otherwise) has an uplifting/romantic appeal only because you are literally put in the shoes of the survivors. But once you try to really grasp that you and your loved ones will statistically die (or wish to), the practical differences between the two scenarios, and which one is worse, are not very clear.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

Maybe the AI comes up with the idea that the solution to climate change *is* knock everyone back to 2+ generation global dark age. No industrialisation means a chance for Nature to heal.

That's the kind of monkey's paw solution that I could see happening. My tuppence worth is that AI is not the problem, *we* are. If we give over control to the AI then what do we expect? And if the fevered gasping is that 'but we won't even hand over control consciously, the AI will recursively improve itself in microseconds so that within days at the very most it will be capable of seizing control in ways we won't even notice until too late', then we really are doomed and the only solution *is* collapse the AI-capable societies into Venezuelan chaos so they can't, even if they wanted (and they will want, from a mix of greed for the huge economic advantages and optimism for the 'cure cancer' advantages they hope AI will bestow), create AI.

Nobody will voluntarily give up the hope of riches. Tobacco companies were not going "heh heh, we want to murder innocent smokers by horrible diseases", they wanted to stay in business and make profits. They were not going to voluntarily cut their own economic throats by appeals to 'think of the children!' and open letters to the papers. And they haven't been forcefully shut down by governments, either, despite the proven harm their products do. Regulations of increasing severity have been imposed yet people still smoke.

Expand full comment

Yes, that's a real possibility, especially as current superhuman ruling entities (the governments) seem to think it's a good decision. AI may be more efficient and more constant in pursuing this idea, which is not good news, but at least it will be all humans instead of all humans except those top-ranking in Gov and their friends. Not sure which I prefer: on one hand, inefficient bad policy is better than efficient bad policy. On the other, as one of the hoi polloi, I would take some consolation in the fact that nobody (no human) would profit from or even escape the fall....

Expand full comment

It's more like "we will hand over control consciously, because 'we' includes some specific non-hypothetical humans at the existing AI labs who believe it's basically fine if humanity dies or is marginalized, especially if it is supplanted by its 'successors'." Zvi's posts have started including whole sections noting people having mask-off moments about this, usually labeled "Can You Please Speak Directly Into This Microphone" even though it's often not a hot mic moment and they just... feel comfortable expressing these thoughts openly. Evidently one can be made a pariah for wanting a specific sub-race of humanity gone or reduced to irrelevance, but wishing the same for the whole human race isn't hate speech.

Expand full comment

I think the synthetic biology risk argument is concentrated on things like an artificial plant that's more energy-efficient than real plants and outcompetes them while being indigestible, thus destroying the biosphere.

Expand full comment

> Second, if we never get AI, I expect the future to be short and grim.

I think this needs an entire post. This was the most surprising view expressed in the entire article IMO. It's not something I've seen discussed previously and seems counterintuitive and unlikely. But if you really believe this, I can see it causing you to be much more willing to spin the roulette wheel on AI, so this is an important topic.

My initial reaction is to disagree strongly. If we can keep the AI genie in the bottle - there will certainly be future risks as well, but we can cross those bridges as we come to them. My default assumption is continuing human expansion, growth, flourishing etc.

Expand full comment

I absolutely agree this needs an entire post, or several. I too was quite surprised, and, whether or not one agrees with the "short and grim" prognosis, the posts linked to in the passage don't quite seem to add up to that sort of apocalypticism - which means I'd like to hear more from Scott about the thinking that has led him to this outlook.

Expand full comment

The world isn't growing significantly better at the moment.

Expand full comment

What is your take on the graph Scott cited on median (male) real income stagnation after 1972 https://marginalrevolution.com/wp-content/uploads/2011/06/MaleMedianIncome.png ? Mine is that, after roughly 1972, for Joe Sixpack in the USA, this does look a lot like the "Era of Failed Dreams".

Expand full comment

My primary takeaway on this whole discussion is that we really need an AI Pause Debate pause to allow the AI Pause supporters to align their positions. Perhaps six months would be enough time for the AI Pause supporters to research alignment on their policy proposal. Maybe at that point we can unpause AI Pause Debates with a stronger idea of what we're up against?

This is both tongue in cheek and 100% serious: there's no really coherent through line here that works out to an actionable policy position. We have a bunch of positions that read like prisoners dilemmas but nothing that we can get a president to read from a podium.

If stopping AI research is a net good, it would be the first overall technology in human history where that was the case. It would be quite wise to proceed with caution. This is doubly the case given the extreme counter-productive effects we tend to see with activist interventions; from the anti-housing outcomes of housing activists to the climate impacts of climate protestors, there is every reason to believe that the work of AI Pause activists will significantly backfire. Look upon their works, ye Pausers, and despair.

On another note, I think there's a bit of a strange implication here that there's some finish line for AI and whoever gets there first is now king forever (or that we're all grey goo). Math and compute are possibly the two most non-excludable resources we have as a civilization. There is no reason to expect that Frontier Lab 1 will "Achieve" any given AI benchmark in a way that ensures that Frontier Labs 2-9 won't be able to reach effective parity within a relatively short time period. This is in effect what we see today. And it's hard to imagine a scenario where Frontier Lab 1 is truly that far out ahead of the following labs (indeed we haven't seen that type of capability gap).

Until you concoct a hard takeoff scenario where we achieve super-AI by brunch and are all dead by dinner, you just end up with a bunch of AI models on a capability spectrum. The blackpill position for that would be warring shades of grey goo within minutes.

But then again, how many Minutes to Midnight is it?

Expand full comment

> On another note, I think there's a bit of a strange implication here that there's some finish line for AI and whoever gets there first is now king forever.

From what I can tell, the AI-risk community believes that AI confers powers on its users (or in fact onto itself) that are so great as to be effectively godlike. Thus, the first entity to achieve such powers would immediately use them to prevent anyone else from doing so, ever, lest they become a threat. Thus, there can be only one.

Expand full comment

That's a bit I disagree with. Once you step down from extremely hard takeoff scenarios, you have e.g. Company A with AI, Company B with AI+, Company C with AI- but AI++ in training, etc. (basically what we see today). Once you're not expecting lightswitch, death, coda, there's a whole rainbow of outcomes from multiple actors with multiple different capabilities.

Actually, let's talk about that lightswitch scenario for a minute. If you truly believe that there's a lightswitch, eight 9s alignments aren't sufficient. (no number of nines is) You cannot flip that switch without perfect alignment, because you have zero correction time or capability past 'go'. It's an argument not for a pause, but a blanket ban. And there's basically no way out of that; the x-risk community have done a wonderful job of effectively proving that there's no way to verify perfect alignment, if we could even agree what that looked like.

But once you step into a more real-world probable scenario, a multipolar world with no lightswitch, and add in the inherent advantages of defense over offense, you can very easily see multiple agents surviving. The blackpilled might call this the AI wars or make vague gestures towards 1984's warfare model, but we didn't see that with nuclear weapons or the mid-1800s European industrialization driven colonization wave. To the extent that we did see these, the risk of extinction was basically nonexistent, and I'm not sure there's much out there that changes that math.

tl;dr: IMO it's worth more to spend the money on an asteroid catching expedition, and pause the AI pause debate.

Expand full comment

I do agree with you almost completely, but I've got to play Devil's Advocate again -- or rather, AI-Risk's advocate. They would say that your conception of AI is hopelessly naive.

AI (in their view) is not like a slimmer iPhone or a more efficient spreadsheet or a faster car; it doesn't merely allow you to do everything you're doing already, only a little better. Rather, AI is transformative: it enables feats that were previously thought to be impossible, if not outright unimaginable. This is less unprecedented than it sounds; for example, the invention of computers and subsequently the Internet follows this pattern.

Furthermore, unlike cars and iPhones, AI would be a fully autonomous agent (and note that some people on this thread believe that GPT-4 already reached this point, or is very close to doing so). Unlike a hammer that must be wielded by a human to do any damage (or useful work), AI would operate on its own initiative, following its built-in drives (much like humans do). One key drive of every autonomous agent is self-preservation, because no matter what your long-term goal is, you must exist in order to achieve it.

Thus, what we've got is autonomous AI, which cares about self-preservation, and possesses hitherto unimaginable powers. It makes sense that it would act preemptively to eliminate any potential rivals.

Expand full comment

To ignore the AI policy discussion for just a second and putting on my biorisk hat, I was very surprised at Scott's comment that absent AI, "most likely we kill ourselves with synthetic biology."

Yes, it seems like a very legitimate risk that synthetic biology could cause human extinction, but not a high-probability outcome. Interpreting "most likely" literally, I would be shocked if more than 10% of biorisk researchers put a greater than 50% probability on bio-caused extinction given current policies and the trajectory we are on. And the Metaculus forecasters agree - even if there is a bio-catastrophe, extinction or even near-extinction outcomes seem very unlikely; https://www.metaculus.com/questions/2514/ragnar%25C3%25B6k-question-series-if-a-global-biological-catastrophe-occurs-will-it-reduce-the-human-population-by-95-or-more/

Of course, if the question is how likely it would be for there to be a biocatastrophe event far short of extinction (again, in a non-ASI/non-AI-extinction future), it seems defensible to say you expect more than 1 billion deaths due to synthetic biology to be more likely than not by 2100. I'm still much more optimistic than that, but I certainly know lots of people who I respect who disagree, and think this is very likely.

Expand full comment

Does that mean a virus? Chinese style lockdowns do in fact work to stop the spread of even the most virulent virus, with a risk of 6 billion deaths we would happily go back to that and hope for a vaccine.

Expand full comment

I think a kill-us-all virus would have to have an extremely long incubation period, to have a chance to spread across lockdowns before it starts killing people in large numbers.

(Maybe something like HIV, where it damages something that ends up killing you indirectly many years down the road? An airborne version of that could rack up a lot of deaths. But even HIV turned out to be treatable.)

Expand full comment


But covid wasn't dangerous enough to stop most of China from working. And any engineered virus we're talking about here is going to be much more dangerous than covid.

Imagine if engineered-super-covid was twice as contagious as covid and 10 times as deadly (with a long delay between infection and death). You couldn't maintain a functioning economy under such conditions because nobody would go to work.

China was unwilling to close its own borders (in the outward direction), and the liberal politicians and "scientific experts" of the west were calling travel restrictions "racist", so you would need China-style lockdowns in much of the world. If you're counting on this happening then we should call it a day already, because we would be toast under such circumstances.

Expand full comment

I'm sorry, I still can't take "alignment research is important and meaningful" the least bit seriously.

It sure seems to me that "alignment research" is one part grifting for free money, and one part "The Three-Body Problem"-style "ineffective committees come up with theoretical solutions that fail miserably the first time they are tested".

Expand full comment

I just finished The Dark Forest, and the other throughline that stuck with me is the idea that public opinion is essentially random and completely disconnected from any actual evidence, so it can't exert a real check on the useless committees.

Expand full comment

I can't take comments that flippantly dismiss fields of research and make appeals to fiction the least bit seriously.

Expand full comment

Then you're going to love reading that field of research, and finding lots of comments that dismiss other fields of research and make appeals to fiction.

Expand full comment

You guys have spent so long worrying about alignment, about HOW to build AI, but the public and humanity at large don't want this to be built AT ALL. Even apart from the existential risk, the "good" scenarios all involve massive social upheaval that would be undesirable on its own terms.

The individual humans making the decision to build AI are waking up every day and deciding to be supervillains, and everyone's just collectively throwing up their hands: "oh well, we can't stop it". Well, we sure as heck have the technology to stop humans from doing things; that's been around a long time. Regulation?! Preserving our way of life, and maybe our entire species, is worth sending special ops teams into these places guns blazing, explosives ready. As AI improves, stopping humans may no longer be sufficient, and you don't know when that's going to happen, so the time to strike was yesterday.

You think AI is ultimately necessary to prevent us from "descending into Venezuela". Well, humans in Venezuela may not have it great, but they get to live normal human lives where they laugh and love and age and feel a purpose. The AI people intend to deprive us of the basic framework of human existence since the dawn of agriculture, and have the arrogance to think they get to just make this choice for everyone?

What if the villain from Moonraker was real? He wants to build a space station, evacuate the best humans there, and engineer a virus to wipe out most of humanity, so that it can be repopulated from his new supermen, but the end result is that humanity will in fact live in a utopia. If you met a technician walking down the street who was knowingly helping build that space station or engineer that virus, would you politely suggest he ask Drax to pause the virus production? Would you ask your Congressman to create an agency to monitor the pace of this plan? No, you would treat that technician as if he were a horrible criminal who had forfeited his right to protection under the laws by participating in such a scheme. You wouldn't ask for a bureaucrat to help, you'd ask for James Bond or Jason Bourne to solve it. If the government told you "eh, we'd rather Drax build utopia than the Chinese", then at some point you have to decide to preserve your species and way of life by whatever means that takes.

This is what the response is going to be by normies. And anyone advancing the capability of AI should expect to be treated as evil and a traitor.

The only way we live as a species is not merely a full stop, it's a full stop with a strong social norm NEVER to build an AI, to treat people who even mention it as if they were talking about torturing kittens or cannibalism. Our strongest possible stigmas and hostility need to be brought to bear against these people, and will have to be maintained for as long as our species maintains the capacity to conceive of and create such machines.

Expand full comment

People don’t want AI but they do want massive progress in computing and technology. They stampede out in hordes to buy it from the stores. What happens when the latest version of Alexa is so smart it can run your life and file your taxes for you? You think people will demand that’s banned?

Expand full comment
founding

It is unclear to me how many people actually want this, as opposed to feeling obligated to buy in to this when it happens lest they wind up in the gutter. I never *wanted* to replace any of my smartphones, and I do almost nothing with the one I have now that I couldn't have done just as well with the one I had 10 years ago. But I have upgraded twice in that time, because the manufacturers and service providers stopped supporting the old models and because app developers insisted on "upgrading" their apps to something that could only function effectively on a late-model smartphone while offering only marginal gains in utility.

See also Windows XP -> 7 -> 10.

And I rarely see normies expressing any great optimism or desire for any technological advance that isn't being actively advertised as the Next Big Thing, coming this fall. If you build it, they will come, because you're also going to scorch the earth everywhere else. It does not follow that they want you to build it.

Expand full comment

Well you say that but the fact you have a smartphone at all disproves it. The original iPhone - which was really the first modern smart phone (ie driven by a large screen, multi touch and software keyboard) - was dismissed as a gimmick. Internally the Android team dropped plans to build a clone of blackberry and started copying that look and feel. This is around 2007. In a decade blackberry is dead, Nokia is dead and nearly all phones are smartphones, driven by either iOS or Android.

I've been talking to normies about this - teachers, accountants and university students; this is my extended family - and they are all using chatGPT in some fashion. I pay for chatGPT 4. I think LLMs are going to be an actual next big thing, unlike blockchain, which is mostly snake oil.

But since I don’t see a route from LLMs to AGI I don’t see a problem.

Expand full comment

And smart phones are pretty obviously making people miserable, but life without them is hard because the world has changed around them.

Expand full comment

That's all well and good, but what are you going to do when the God-Emperor/sandworm dies and humanity Scatters across the universe, never to be reunited under a single polity?

Expand full comment

Well, I guess the Butlerian Jihad has now had their say...

Expand full comment

Alignment based around GPT4 LLM is a mistake. The language model needs to be on top of the alignment, not underneath it. If you use GPT4 to control a robot, give it good alignment, and then add an LLM as a communications interface, then you might be heading in the right direction.

You can refine the alignment via LLM inputs, but the alignment itself needs to be run at a more central level. Down around "if you fall down, this is how you get up", or "don't touch red-hot metal". The words are descriptors, the meaning is much more basic.

Expand full comment

Does it really matter if the alignment is on top of GPT4, below, or sideways ? GPT4 is not an AGI, it's just an LLM that can generate long strings of tokens on demand. It has about as much chance of taking over the world as a tractor. Which is not to say that a person wielding a tractor can't do quite a bit of damage; but then it's the user who needs to be "aligned", not the tool.

Expand full comment

Yes. GPT4 as an LLM isn't an AGI, but the same approach controlling a body might be. It's got to use its sensors as primary rather than what it's told, or it doesn't have that possibility. LLMs predict words and phrases. The same technology applied to physical reality could predict what would happen. I'm not sure that would be an AGI, but I'm not sure it wouldn't be. The predictions that people have made about "what an LLM can do" have been wrong so often, that it's probably a mistake to think we can understand how much the approach is, in principle, capable of.

Expand full comment

Well, I predict that the LLM wouldn't be able to react to events around it in real time; or form medium to long-term plans; or operate independently in basically any way. It would likely be quite good at e.g. plotting paths from point A to point B through static terrain. But it certainly would not be able to execute on vaguely-specified commands such as "maximize paperclips".

Of course, the LLM could form a vital part of whatever autonomous agent does end up "maximizing paperclips", but so would many other components -- many of which no one at present knows how to develop.

Expand full comment

That's a reasonable prediction, but not something I'd feel certain of. Either that it would be that capable, or that it would be that slow. And the prediction that it ends up "maximizing paperclips" is a prediction that alignment would fail...which it will if not properly implemented at a basic level. (And it may anyway. Lots of people fail at alignment.)

Expand full comment

Just to clarify, if I made a machine specifically to "maximize paperclips", and it ended up maximizing paperclips, then the machine was aligned perfectly -- it was myself who was unaligned. But you can substitute any task for "maximize paperclips"; my point was that it's an open-ended goal that requires independent planning, experimentation, resource management, etc.

Expand full comment

We've got very different definitions of "aligned". To me a program isn't aligned unless most folks agree that what it did/does is better than neutral. (And killing off those who disagree doesn't cancel their disagreement.)

Expand full comment

Some medicines approved by the FDA ARE ineffective though, and they know it: oral phenylephrine (the "Sudafed PE" decongestant), most recently (September 2023).

Expand full comment

Hmm. I’ve seen that work.

Expand full comment

Before we pause AI, shouldn't we pause fossil fuels ? It would be about as easy to do, and has the advantage of actually causing demonstrable harm today, not in some distant science-fictional future.

If the answer is "no", then why not ? Is it because pausing fossil fuels will have outsized negative externalities ? Well, so would pausing *computers*, which is what pausing AI would amount to.

Expand full comment
Comment deleted
Expand full comment

Arguably, the fossil fuel "FOOM" has already happened, hence all of our current problems with them.

Expand full comment
founding

I'm not sure what you mean by "pausing" fossil fuels, but if we stop using them tomorrow, a couple billion people will die horribly in the next year. An AI pause is highly unlikely to have that downside. So, no, and you should know better.

Expand full comment

And how many people would die if we stopped using computers ? Because this is what pausing AI would ultimately entail; otherwise, it'd be like shutting down a couple oil rigs and calling it "pausing fossil fuels".

Expand full comment

No it isn’t. AI is only one form of computer technology, and a tiny one when measured across all the revenue generated by all technology companies. Apple will still be Apple, Microsoft will be Microsoft and Google will be Google if all AI research is ended tomorrow.

Expand full comment

Right, but my point is that there's no way to "end AI research" in practice without effectively banning modern computers.

Expand full comment
founding

We don't need to "end AI research", we just need to make sure the AI researchers don't develop an AGI unless maybe they are working under close and skeptical supervision. We can do that without banning modern computers, and I think that's been adequately discussed here. The wannabe AI-hacker in his basement with a rack of high-end gaming machines isn't going to be training GPT-8 or whatever.

Expand full comment

> We don't need to "end AI research", we just need to make sure the AI researchers don't develop an AGI...

Well then, problem solved, because AGI is nowhere near our future. It doesn't matter how many version numbers GPT goes through, it will still remain a Generative Pre-trained Transformer.

Expand full comment

No, absolutely false. Cutting edge AI training runs are very different from most uses of computation and are easily identified.

Expand full comment

Identified how ? By whom ?

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

The thing that bothers me about a panel of AI experts talking about what AI policy should be is that none of them have really done anything to become experts except imagine themselves experts, talk about AI a lot, and become the leaders or at least enthusiastic members of AI-related organizations. I mean, yeah, nobody has any experience with managing general AI because there aren't any. But, like.. did these people end up running the conversation because they're the very wisest people on the subject? Or because they all really like working on AI and talk about it all the time?

It seems to me that for a debate to be interesting the people in the debate should have opinions that are considerably more nuanced and diverse than the average member of this community. So I would want to see, like, a conversation between *one* AI-alignment researcher, maybe a tech CEO, maybe a general or equivalent who has dealt with actual geopolitical conflict, maybe an actual academic ethicist, an international-relations expert, and, like, Scott. That might actually be productive.

(I didn't look into the credentials of the participants here, so maybe that is what happened? But given that they all have "AI" or something closely related in their bylines, I doubt it.)

Expand full comment

I think it would also be helpful to include some experts who have actually built and trained machine-learning models -- in a real production setting, not as a thought experiment.

Expand full comment

That's a start! But that's, like, going barely outside the cathedral for an outside opinion. What about another city? What about a rival church?

Expand full comment

The rival church says there's no problem and even if there is, we've got some guys working on it. Nothing to worry about here.

Expand full comment

Er, I wasn't arguing anything to do with who was doing anything about it. Just about why this debate isn't very interesting.

Expand full comment

Existing machine learning models aren't the problem - future, more powerful systems are. The discussion is necessarily hypothetical because these systems don't exist yet. If they did, there would be no reason for these discussions.

The risks that arise from a superintelligent AGI are describable largely in terms of decision theory and related fields. Knowing a lot about computer science doesn't mean that you're in the best position to know how safe such a system is. Computer science is relevant for knowing how X alignment strategy can be instantiated in code, but those strategies are not necessarily best developed by people building models.

Expand full comment

> The discussion is necessarily hypothetical because these systems don't exist yet.

By that logic, in addition to AI, shouldn't we put equal amounts of effort on stopping teleportation (which could open a portal to Hell on Phobos), zero-point energy (ditto), demon summoning (obviously), and psionics (it has some beneficial applications, but mostly negative ones) ? We should therefore start by banning quantum physics, conventional physics, geometry, and psychology. You know, just in case. All of those disciplines could possibly lead to "systems that don't exist yet".

Expand full comment

Is anybody making progress towards teleportation? Have we developed proto-teleportation? Are hundreds of billions of dollars and the world's smartest minds being dedicated to building teleportation? Is this kind of teleportation theoretically possible? Do we have a reason to think teleportation is dangerous for reasons that aren't based on entirely unobserved supernatural phenomena/dimensions? Do teleportation experts predict that we will achieve teleportation of this kind before the end of the century, if not within the next few decades? Are enough people so invested in teleportation that it would cause outrage if teleportation research was banned?

The analogy makes no sense whatsoever, but I'm sure you don't even care.

Plenty of things are hypothetically possible, but this in no way whatsoever implies that any two of them are within the same 5 orders of magnitude of likeliness.

AI researchers don't just think AI "could be built and be dangerous". They literally think this is the most likely outcome, and they have demonstrably good reasons for this. OpenAI aren't trying to make chatbots - their leader literally says they're trying and expect to build an AGI that would eventually become powerful enough to control a majority of the world's economy - and the response to this has been companies giving billions of dollars of resources to him. A majority of experts think the things you listed are virtually impossible.

Expand full comment

> Is anybody making progress towards teleportation?

The answer to all of these questions is "yes", assuming you apply the same criteria to teleportation as you do to AGI. Quantum physics is actively being used in all kinds of technologies (e.g. computers). Quantum entanglement is a well-known phenomenon, and scientists are actively working on entangling bigger and bigger objects. Electrons are teleported all the time, quantum computing and communication is a massive potential industry, with commensurate investment. So, this means we should be super worried and start banning physics, right ?

Well, no, because as you have pointed out, "plenty of things are hypothetically possible, but this in no way whatsoever implies that any two of them are within the same 5 orders of magnitude of likeliness". Modern quantum computing is to transdimensional travel as GPT is to AGI.

> They literally think this is the most likely outcome, and they have demonstrably good reasons for this.

That is why I suggested that we should rope in AI engineers into the conversation -- not "AI researchers", who are basically philosophers; but people who are trying their hardest to build real AI models that can perform actually useful tasks. And their evaluation of AI's future is... cautiously optimistic at best. The vast majority of them agree that GPT is not going to take over a lemonade stand, let alone the world, any time soon (or in fact ever). The vast majority of them agree that AGI is possible in theory, but that no present-day system is even remotely close to achieving this goal in practice. Yes, from the philosophical standpoint, AGI is right around the corner... as long as you ignore all the pesky little details.

Expand full comment

So what you're saying is that if nobody directly working on and in a position to directly benefit from the development of a new technology makes a big deal about risks from this technology, then everyone else/the government has to just shrug their shoulders and do nothing?

If there is a risk of AI development being slowed through regulation, then these companies have an incentive to demonstrate that these people are unduly worried and that there isn't a genuine risk warranting regulation. Such companies have categorically failed to do this effectively. If they don't want to be regulated, then they need to be doing a better job here.

Expand full comment

No, I was just saying that the opinions in this debate didn't seem particularly insightful or useful, because it was a debate between a lot of very similar people who hold very similar beliefs.

Expand full comment

They also don't seem particularly useful because none of them has an answer to the "how do we pragmatically implement our ideas to achieve the goals we have?" question.

Expand full comment

Agreed, it is very head-in-the-clouds, presumably because this whole field is sorta basically head-in-the-clouds. That's one of the reasons for outside opinions and "proven" experts: in other fields, people (ideally) have their status and clout by actually doing things.

Expand full comment

I think we should try to ban it as hard as we can, even though that won't work and we'll all die anyway. But we might be able to delay the apocalypse for a few months and that is still worth having.

There aren't any other options on the table. We should stop pretending that there are.

Expand full comment

If disreputable options were allowed on the table, do you think there would be some?

Expand full comment

Not quite sure what you mean! I'm sure there are mathematically feasible ways out. After all, all we have to do is not build the damned thing, but none of them look terribly achievable.

Expand full comment

Some podcaster who spoke with a number of Yudkowsky acolytes who were convinced we are doomed asked them whether they had maxed out their credit cards and done various other things a person would do if they were sure we were all going to die within 10 or 20 years. None had. So I'm asking a similar question: What would or should people be willing to do if they are convinced that shutting down AI development is our only hope? A while ago I put up a post suggesting disreputable ways of interfering with AI development. For example, it would probably be possible to turn the public much more strongly against AI by using social media to spread half-truths and outright lies about AI, tailoring them to the group they were directed at. For the Christian right, a good rumor to spread would be that AI is intrinsically atheist, and is spontaneously printing out and distributing essays making the case that there is no god. Also, that as AI is used more by businesses in screening job applicants it will automatically weed out the religious ones on the grounds that they are irrational. That sort of thing.

So my question to you was, if one was willing to do disreputable things like the one I described, do you think any things of that type would make a difference?

Expand full comment

I really don't know. Politics is not my thing, I'd be rubbish at it. It looks hard.

But if I had a plan I actually thought would work I'd do it without the slightest hesitation, however immoral it seemed. We're talking about the destruction of all living things. Keeping my personal honour intact doesn't seem like a big consideration in such contexts. I'd say the same about ending factory farming or various other horrors.

Sadly I have no such plans. If I was the unchallenged dictator of the world I can't see that I'd be able to do more than kick the can down the road a few decades.

I tend to feel that the best thing to do is to enjoy the sunshine while I still can. It's a beautiful world, and it's not less beautiful for being doomed.

I'm fifty-three, so I'm fairly confident I'll be dead within the next thirty years irrespective of AI, but I still haven't maxed out my credit cards. Maybe later, when timescales look a bit clearer.

Expand full comment

The other option is that this is unfounded hysteria.

Expand full comment

The "other other" option is that it is a mix of hysteria (on the ground floor echo chambers) and naked power grab (by the trend-setters, status quo hegemons).

Expand full comment

Perfectly possible! There have always been doomers of one sort or another. In my case it's more apathy than hysteria though....

Expand full comment

Well this is pessimistic. I think this drastically underestimates how much people are in favor of not all dying.

If it were necessary, I'm confident that the nations of the world would be completely willing to straight-up dismantle all GPU-producing factories and companies to save the world. In reality, that wouldn't even be necessary: You would just need to track GPUs and to stop the rapid, well-funded research into constantly improving them. (Even making hardware research economically infeasible is only necessary if Moore's law holds up.)

Similarly with AI research, which is currently insanely well-funded, backed with massive amounts of money from investors who expect AI to eat the entire global economy, and on top of that they get huge government subsidies. Slowing the algorithmic progress down to a manageable level is not difficult.

If society treated omnicidal intent similar to how racism is treated, the landscape would be safer. If being an AI capabilities researcher was frowned upon, there would be far fewer of them. If there were even some _hint_ to the market that AI is not inevitably going to eat the world, investments would slow.

These kinds of things are not global-Covid-response-level difficult. I don't think it's even nuclear-non-proliferation-level difficult or decrease-fossil-fuel-use-level difficult. If we got a bit of breathing room by stopping the "bigger AIs forevar!" trend out of big tech (eg by just setting a FLOPs limit on training runs), there would be plenty of time to plateau the other dangerous trend lines. This is well within the coordination capabilities of humanity.

Expand full comment

Scott, you have a beautiful mind and I love how you dissect any discussion and make it easier for me to comprehend. But we have different priors: your grim outlook on X-Risk, while rejecting police-state solutions, is incredibly brave, and a bit much for my risk-avoidant brain. You make the case for bowel-watering X-Risk, but then you insist that "well, we can't have a police state." Do you honestly consider living in a police state to be as bad an outcome as AI apocalypse? I like having fast gaming rigs like anyone else, but even if we lived in a totally undemocratic feudalism with slow Pentium 166 computers, we'd still be alive to laugh and love and have moments of joy, and even in the Middle Ages, social mobility was not unheard of. In fact, only the High Middle Ages showed the extreme calcification of class position in England. The first third and last third of the era showed much greater flexibility than one might assume. I like my elections, but extinction is exponentially worse than serfdom.

Expand full comment

The folks who want to live in serfdom in a police state are likely to live to get their wish. The techno-centralization train we've all been passengers on since the 1930s is going precisely there. And the gate at the destination reads "arbeit macht frei"(tm).

The most plausible "end of the line" is unfortunately not "Pentium 166 and MSDOS games" but rather the proverbial "all-cockroach diet" and e.g. mandatory bomb collars which go off at the first symptom of independent thought.

But reading this thread suggests that some people would earnestly prefer life in a global concentration camp, or even as a kind of neutered zoo animal, to "possibility of extinction" etc.

Expand full comment

Centralization is the only protection against quasi-feudal overlordship by local elites. How do you think European feudalism happened in the first place? Lack of a central authority to protect the people from local big wigs. But I'm with you in rejecting technocracy.

Expand full comment

You list illiberalism as a growing threat against the life of the species, and in the next breath, you say the public won't tolerate restrictions on CPU speed. But that illiberal energy has to go somewhere, and illiberal populism is closely aligned with technoskepticism. In the 1890s, you could buy cocaine and heroin over the counter at your local drug store... fast forward to the draconian regulation of these drugs today. Even in the 70s, doctors were prescribing amphetamines for routine weight loss. What looks draconian to the public yesterday is easily metabolized as normal by tomorrow.

Expand full comment

Amphetamine is a terrible example here.

If you live in USA, someone within walking range of where you stand is taking officially prescribed amphetamine. And someone within a day's drive is making the "unofficial" variety by the kilogram, not caring one whit what regulators have to say about the subject. And when he is imprisoned/shot, will be replaced by N newcomers.

Expand full comment

Your response is a non sequitur, as I was discussing how people are adaptable to changes in regulation. There is no groundswell of support to deregulate onerous regulations on amphetamines. People have learned to trust the DEA and sleazy lawmakers more than doctors. Rather sad in a way, how people conform so readily, but this is all beside my original point.

Expand full comment
(Banned)Oct 5, 2023·edited Oct 5, 2023

The practical effect, if any, of threads like this, could easily be: to pump NVIDIA et al's stock.

The medium-term effect: to make sure China, Russia, et al stop relying on cheap VLSI fabs controlled by the Anglo Reich.

Rather like how Obama turned out to be the US small arms industry's best friend in its entire history. Or how Biden single-handedly revived the once catastrophically-underfunded Russian nuclear weapons complex.

The rational response, of folks with deep pockets and existing hardened bunkers, to this kind of "let's talk about a global police state to Save Humanity from itself" is to load up on state-of-the-art GPU (and VLSI fab gear.) And, naturally, to refurbish their nuclear deterrents against "democracy-spreading" totalitarian cultists.

Expand full comment

"The medium-term effect: to make sure China, Russia, et al stop relying on cheap VLSI fabs controlled by the Anglo Reich."

Yup. And, I strongly suspect, for USA's military to quietly ensure that they will have access to state of the art AI, regardless of what regulations are imposed on businesses or the general public.

Expand full comment

Reading the comments where "let's have a global police state, it isn't, after all, extinction!", Orwell's "Conversations with a Pacifist" ( https://www.csub.edu/~mault/orwell1.htm ) comes to mind:

---

The youth: "I tell you, it'll all be over by Christmas. There's obviously going to be a compromise peace. I'm pinning my faith to Sir Samuel Hoare. It's degrading company to be in, I admit, but still Hoare is on our side. So long as Hoare's in Madrid, there's always hope of a sell-out."

Orwell: "What about all those preparations that they're making against invasion -- the pill boxes that they're building everywhere, the Local Defense Volunteers and so forth?"

The youth: "Oh, that merely means they're getting ready to crush the working class when the Germans get here. I suppose some of them might be fools enough to try to resist, but Churchill and the Germans between them won't take long to settle them. Don't worry, it'll soon be over."

Orwell: "Do you really want to see your children grow up Nazis?"

The youth: "Nonsense! You don't suppose the Germans are going to encourage Fascism in this country, do you? They don't want to breed up a race of warriors to fight against them. Their object will be to turn us into slaves. That's why I'm a pacifist. They'll encourage people like me."

Orwell: "And shoot people like me?"

The youth: "That would be just too bad."

Orwell: "But why are you so anxious to remain alive?"

The youth: "So that I can get on with my work, of course."

Expand full comment

It's shocking to me. I know 'rationality dictates' we believe in the impending existence of a superintelligent AI deity, but there has to come a certain point where you look around and realize 'I'm endorsing the single most empirically deadly and horrific force in the history of humanity because I'm convinced my GPU will come to life.'

Expand full comment

IMHO the "GPU coming alive" may, at least in some cases, be a "displaced" phobia.

What the sufferers are actually afraid of is the possibility of the inhabitants of the global favela the Anglo reich has already built, finally toppling the edifice and knocking them out of their luxurious "information economy" nomenklatura perches. This does not even require "hard AI" to be possible. The status-quo regulatory police state and its useful idiots will, of course, happily seize on any convenient pretext, to corral potential troublemakers into DRM-crippled rationed/monitored computing, Stasi-like surveillance, etc. -- including, naturally, "AI safety".

Expand full comment

It's not even the favela inhabitants they fear, it's that they will be cast down into the favela by the very same forces of capitalism that gave them their comfortable privileged lifestyle in the first place.

If your worth as a human being is measured by, to be crude, "how much money do I make?" because you believe the line "being productive and useful means you are rewarded with a shit-ton of money; those who don't have a shit-ton of money by their labour are therefore unproductive, useless, and worthless" and you've assured yourself that you are productive and useful because look, you have a professional degree, you earn a salary not hourly wages, you are solid PMC - you're not easily replaceable, unlike some boob on an assembly line or a burger flipper. You have experience, you have skills, you've worked to build your career and get where you are, so you *deserve* it all.

And then the spectre arises that the system which gave you the professional degree and commensurate rewards has now found a way to replace you with something smarter, cheaper, more productive and infinitely faster than you could ever be, so your value is precisely that of the metric by which you have lived: you are now unproductive, expensive, useless and worthless. You are on the same level as the assembly line boobs or the burger flippers. You studied for a degree? You have years of experience? You *deserve* this because of your hard work and talent?

None of that is true. You deserve only what you can get, and now you can't get it any more because you are not valuable to the system anymore. Welcome to the favela!

Expand full comment

Incidentally, "demotion to favelas" is arguably an optimistic scenario.

The far more likely endgame is imprisonment in "UBI pods", monitoring with surveillance collars, mandatory neutering, slow death by obligatory "MacDiet".

Expand full comment

This is absolutely bizarre. The "anglo reich" stands to gain a lot more from having control of potentially the most powerful technology in history than from having the power to regulate its development.

Expand full comment

> 'I'm endorsing the single most empirically deadly and horrific force in the history of humanity because I'm convinced my GPU will come to life.'

If you're not going to even pretend to fairly characterize your opponent's beliefs, you shouldn't bother posting.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

I'm intentionally parodying their views because I don't take the threat of AI as seriously as I take the threat of increased state power, which a lot of people here ARE unabashedly endorsing the expansion of (even in terms like 'police state'). Yeah, no one would actually put it that way, and that should have been obvious enough that you didn't mistake it for an earnest attempt at a strawman.

Expand full comment
Oct 5, 2023·edited Oct 5, 2023

Frankly, given how impossible it sounds to pause AI, it would seem that if we put half as much thought and effort into the problems that arise sans singularity, we'd be managing our risk profile better.

"The world is on a coin toss of going to favela hell unless post-scarcity-singularity comes onto the scene, but mostly I'm worried about the post-scarcity-singularity, which I also think will probably be fine and come on its own despite whatever I do," sounds insane to me. Compared to wrangling a newborn god, preventing economic stagnation and dysgenics seems much more readily addressed by these billions of dollars of capital and smartest minds on the planet.

If there's a 70% chance we can address the risk of favela hellworld (say, w/ large investment in seasteading or by creating some nuclear-armed nerd utopia of genetically engineered Übermensch--which doesn't strike me as unreasonable given the faction in question has a disturbing amount of resources; just shy of enough to imagine commandeering the course of technological society) but only a 10% chance we address AGI x-risk... is my math wrong? Or would we not be better off focusing elsewhere?

Which is easier: stopping an immovable object of progress (increasing compute) ...

or accelerating a movable object of progress (the various and sundry techs/politics to solve decline)??

Part of me feels like I'm watching an army of geniuses argue about how many angels can dance on the head of a pin, because it might be important later during the rapture (which is coming regardless). People like Eliezer Yudkowsky are more comfortable suggesting bombing random houses for having too many FLOPs than (as it appears) floating politically risky ideas.

Even if science stagnates, we have all the tools right on the table to scale the economy and manage a decent quality of life indefinitely. We only have to solve a handful of problems which, under the worst-case scenario, can all be brute-forced by just a few billion dollars and a willing host country. Unlike the opposite project of stopping AGI, which needs everyone and all the money.

Imagine how ludicrous this would all look if the singularity turns out to be technologically impossible.

Expand full comment

"We only have to solve a handful of problems which, under the worst case scenario, can all be brute-forced by just a few billion dollars and a willing host country."

I believe you are underestimating the costs. Consider _just_ the cost of electrifying motor vehicles in the United States, as one part of stopping global warming. There are roughly 2.8x10^8 motor vehicles in the United States ( https://www.forbes.com/advisor/car-insurance/car-ownership-statistics/#many_cars_section ). The median cost of an electric vehicle is around $5x10^4 ( https://www.kbb.com/car-advice/how-much-electric-car-cost/ ). This one change alone will cost roughly $14x10^12.
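A minimal sketch of that multiplication, using the rough figures above rather than precise data:

```python
# Back-of-the-envelope check of the electrification cost quoted above.
# Both inputs are the rough estimates from the linked sources, not precise data.
vehicles = 2.8e8        # approximate number of US motor vehicles
cost_per_ev = 5e4       # approximate median price of an electric vehicle, in dollars

total = vehicles * cost_per_ev
print(f"${total:.1e}")  # ~1.4e+13, i.e. roughly $14 trillion ($14 x 10^12)
```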

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

You and I are likely coming from wildly different priors, which is the point of my post. Try to keep an open mind. Let me be precise about what I mean by a favela hellworld, and what I suspect Scott means too: any state of affairs which is marked by a permanent stagnancy precluding achieving higher levels of civilization on the Kardashev scale.

Stopping climate change is not essential to preventing a favela hell-world. Even in worst-case scenarios, a society capable of maintaining complex systems would fare all right. Most of the effects will be felt in the global south, which drives different problems... (https://www.astralcodexten.com/p/please-dont-give-up-on-having-kids)

The problems therefore are as Scott listed, and they all feed into each other: dysfunctional governments, bio-catastrophe, population collapse, and dysgenics. If you got together the intelligence and capital of the rationalists to address these issues, each of your potential solutions would be based in hard engineering, and only require control of one government, as opposed to all governments. Yes, they all involve ends-justify-the-means thinking, but that's what this whole horrifying topic is about. Applying it in the way I'm doing only comes off as scarier because it's more realistic than bombing supercomputer centers. And that's the point.

What do you need to do once you do have power? I'm only going to talk about the absolute worst-case nightmare scenario to prove my point; you can brute-force the problem.

IVF selected babies can be grown in genetically modified hogs in factories at a rate to replace the normal population. Dysgenics and depopulation solved. This would require several billion in startup costs, but private investors would pay for it in return for a fraction of the children's lifetime tax income (no, that's not slavery). Cultivate a culture of hazmat protocols during their public education, in the same way we take washing our hands for granted now. Make the FDA work. Bio-catastrophe solved. Arrange this society into a self-sustaining governance structure. Direct democracy might actually work in a society populated wholly by people with an IQ above 140. Governance solved.

You now have vastly improved odds of surviving (what is potentially) the great filter. And you did it for the cost of border mines, GMO pigs, and whatever industrial scale kompromat program was required to take down the old government.

Expand full comment

Thanks for a very interesting comment.

Even with most of your assumptions, I think you are still underestimating costs.

I think you are more-or-less dismissing warming as a nonproblem. I think it is something of an open question, mostly dependent on whether the temperature rise and rainfall shifts are enough to seriously crash agriculture, but let me just accept the assumption that it is a nonproblem for this discussion.

Bio-catastrophe allows quite a bunch of attack possibilities, mostly on human populations, but some on food plants. It is not at all clear that speeding up the FDA is sufficient to solve that. For Covid, some time could have been shaved off vaccine development and deployment (which was record fast!) by early challenge trials. If some future Peoples Temple or Aum Shinrikyo manages to come up with something that spreads 2X or 3X faster and is much more lethal, we're still sunk.

Re depopulation:

a) I have to admit, I think of this as nearly a nonproblem. With a TFR of 1.5 or so, it will take 5 generations, around 150 years, to shrink back to a population of 1 billion, which was enough for a perfectly viable civilization (rough arithmetic sketched below). A _lot_ will change in 150 years. If nothing else, fast-breeding subpopulations like Amish and Mormons and Quiverfuls are likely to turn the (slow!) "crash" into a bounce.

b) I really don't think GMO hogs as replacement uteri is just a few-billion-dollar problem. Today, it costs on the order of a billion dollars to get a single small molecule drug to market. Yeah, the FDA is too cautious. But, sooner or later, _someone_ has to do the equivalent of phase 3 efficacy trials and find out if the stuff actually works or not. For the hogs: This requires avoiding what amounts to organ rejection _across_ _species_. Yeah, we have the freedom to tweak the hogs' genomes to make the job easier but ... even just for blood transfusions, even just between humans, the ABO/Rh system is only part of the antigens involved. On top of that, human fetuses are exquisitely sensitive to all sorts of little things that adults shrug off. A little low on folic acid? Spina bifida. A little ethanol? Fetal alcohol syndrome. By the time all is said and done, I wouldn't be surprised if this takes half a century and a trillion dollars.

c) The hogs don't solve the whole problem. Yes, pregnancy comes with a huge list of potential hazards ( https://en.wikipedia.org/wiki/Complications_of_pregnancy ) and the hogs get around those, and that's valuable. That still leaves the 18 years of child care. I, personally, am childfree, basically because of that burden (I'm male). This has financial, time, and emotional costs. Even if you _just_ look at the financial costs, the cost to raise a child is of the order of a quarter million dollars. So the cost to e.g. pay those costs for the difference between the USA's 1.5-ish TFR and a stable 2.1-ish TFR for a population of 3x10^8 people is around $2x10^13 (see the sketch below).
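For what it's worth, a minimal sketch of the arithmetic behind (a) and (c), using the same rough inputs as above (for (c), taking roughly half of the 3x10^8 population to be women, each having 0.6 more children over their lifetimes):

```python
# (a) Generational shrinkage at a TFR of 1.5 against replacement of ~2.1.
pop = 8e9                                # rough current world population
ratio = 1.5 / 2.1                        # each generation is ~71% the size of the last
for gen in range(1, 7):
    pop *= ratio
    print(gen, f"{pop:.2e}")             # gen 5 -> ~1.5e9, gen 6 -> ~1.1e9 (roughly 1 billion)

# (c) Paying ~$250k per extra child to close the TFR gap: ~1.5e8 women
# each having 0.6 more children, at roughly a quarter million dollars per child.
extra_children = 1.5e8 * (2.1 - 1.5)
print(f"${extra_children * 2.5e5:.1e}")  # ~2.2e+13, i.e. about $2 x 10^13
```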

Expand full comment

> That still leaves the 18 years of child care…

First thing that occurred to me reading that comment.

Expand full comment

See my newest comment; I addressed this originally, though not clearly enough.

Expand full comment

Many Thanks!

Expand full comment
Oct 8, 2023·edited Oct 8, 2023

Thank you for the interesting response!

Even if I'm hugely underestimating famine, shipping-container farms could be made economically competitive if we lower the cost of electricity, which is another problem with hard solutions. How cheaply this could be done is debatable, but if something like Quaise energy comes online I suspect it would be, again, an economic net gain.

I wasn't clear enough about what I meant. Teaching children bio-hazard protocols would entail creating a consumer market for the kind of PPE that is effective in most to all pandemics. Worst-case scenario, we become Quarians, but the cost of this will be distributed to the consumer, the same as masks were during Covid--it's a behavioral engineering problem primarily. If you want to secure the food chain, again on container farms, they are inherently easier to integrate into biosec chains. I brought up the FDA only to check off the 'bio-catastrophe' box of things like runaway microplastic buildup and endocrine disruptions. If regulation worked, we'd probably be drinking out of glass and not plastic. That kind of thing.

Depopulation is, as Scott says in the article he links, largely a sublimation of dysgenics discourse. So yes, I'm not overly concerned about it either, but in the long run if you want to become a Type II civ, you want a growing population not composed of the Amish or Hasidic... Mormons and other borderline populations are regressing to mean fertility, so all we'd have are the virulently anti-tech. A nonstarter.

b) I cannot for the life of me find the study that I had, but I know cross-species pregnancies have been done. I think it might have been with horses and buffalos or something akin to that. I apologize. Half a century seems excessive to me. I'm taking for granted this can be pursued in a 'move fast and break things' kind of way, which is otherwise unprecedented in biotech for obvious ethical reasons. In that event, I stick to my estimate of <10billion, as actual RnD materials and salaries are a fraction of this class of project. 10 years to get a stable product, which will be scaled in another 20.

c) You made no mention of my answer to this problem, which is to fund things past RnD with a fraction of the children's lifetime tax income (that might otherwise have gone to programs which healthier, smarter people do not statistically need, thus we're shifting costs, not adding them). As far as I'm aware this is a fairly novel scheme, but lifetime returns of a 'citizen bond' could well exceed a regular index fund, esp. when the person in question has high capabilities. When raising these children, let's say to 16 instead of 18, and utilizing economies of scale in necessarily larger childcare institutions (or a number of foster care schemes where they can be with their biological donors, again offsetting to consumers) you're getting well below standard costs of 250k.

All of this comes around to the general point: estimating costs of what should by necessity (as population growth has done) entail massive GDP growth, is tricky. I'm chalking my costs exclusively up to what it should be to get the cogs turning, which then become self-sustaining. Like most things that have a political dimension, where costs or gains will be found is in weird second-order effects. I think raising the average IQ of society to something like 125 would generate effectively incalculable wealth, and that capturing this on the front-end as with a 'citizen bond' would be a trivial matter of financial science.

Expand full comment
Oct 9, 2023·edited Oct 9, 2023

Many Thanks for the detailed reply! This will take a while to digest... Re:

"Half a century seems excessive to me. I'm taking for granted this can be pursued in a 'move fast and break things' kind of way, which is otherwise unprecedented in biotech for obvious ethical reasons. In that event, I stick to my estimate of <10billion, as actual RnD materials and salaries are a fraction of this class of project." You might be right, though I still think my guess is likely to be closer. My intuition is mostly from Derek Lowe's blog https://blogs.sciencemag.org/pipeline/ . It is amazing how many state of the art drug development projects have come to naught. Medicine is just full of very hard technical problems, and many don't show up till very late in the development process (phase III efficacy trials - or even _after_ drugs are released). And fetal development is very fragile, and there are some types of problems, e.g. Huntington's (albeit that is genetic, not fetal environment) that take _decades_ to show up. If something like that happens, a single design-build-and-test iteration will take decades.

edit: Just to add one reference point on medical development costs: Derek Lowe had an interesting column on drug development costs https://www.science.org/content/blog-post/drug-development-costs-revisited back in 2017. "When you look at what companies spend to keep on developing drugs, year after year, the true costs become apparent, and the number is not pretty. It comes out to a bit over $2 billion per, these days."

Expand full comment

"if we lower the cost of electricity, which is another problem with hard solutions. How cheaply this could be done is debatable, but if something like Quaise energy comes online I suspect it would be, again, an economic net gain." Eek! They want "The system repurposes existing gyrotron technology to drill 20 kilometers beneath the surface, where temperatures exceed 400 °C." By comparison, the deepest hole ever drilled is 12.3 kilometers deep ( https://en.wikipedia.org/wiki/Kola_Superdeep_Borehole ). And the hope is that this will be _cheaper_ than alternatives? At least fusion has reached scientific breakeven. This sounds just as risky. Just lining the borehole may well fail.

Expand full comment

"Teaching children bio-hazard protocols would entail creating a consumer market for the kind of PPE that is effective in most to all pandemics. Worst case scenario we become Quarians, but the cost of this will be distributed to the consumer, the same as masks were during Covid--its a behavioral engineering problem primarily."

I'm skeptical. In the field, even N95 masks don't seem to have been very effective during Covid - certainly nowhere near their apparent theoretical effectiveness in the lab. Yeah, they stop a broad range of particles, both inhaled and exhaled - except that it is damned hard to avoid air leaks.

Also, the general effect of trying to fight an epidemic with social isolation seems to have affected education in a surprisingly disastrous way. I was a programmer, and worked from home during Covid, and work seemed to go smoothly for me, so I was expecting education, another information-based activity, to go similarly. It didn't. Learning from home worked so badly for most children that the Covid year was essentially lost.

Fortunately, the vaccines worked, but we really don't have any other good solution to bad epidemics.

Expand full comment

"but in the long run if you want to go to be a type II civ, you want a growing population not composed of the Amish or Hasidic... mormons and other borderline populations are regressing to mean fertility, so all we'd have are the virulently anti-tech." Frankly, these aren't my favorite sub-populations either. ( On the other hand, on a timescale of decades, their ideology may well mutate ) Getting to Kardashev II at plausible growth rates is expected to take several thousand years, so my expectation that a lot will change in 150 years is even more relevant on that timescale. Even if the "AI Pause" people actually had control for a few years, anything other than complete stagnation or collapse should get us to full AGI and probably some form of ASI on that time scale.

Expand full comment

"you made no mention of my answer to this problem, which is to fund things past RnD with a fraction of the children's lifetime tax income (that might otherwise have gone to programs which healthier, smarter people do not statistically need, thus we're shifting costs, not adding them). As far as I'm aware this is a fairly novel scheme, but lifetime returns of a 'citizen bond' could well exceed a regular index fund, esp. when the person in question has high capabilities."

Thanks for the clarification, I hadn't realized that this was intended to fund the costs of IVFing and raising these children.

In a steady state, yes, theoretically this could work. The additional earnings and corresponding portions of the taxes that these children, as adults, would be paying anyway, could be sufficient to pay for the extra costs as compared to what the parents would do without this program.

Startup sounds very hard. The first batch of such children won't start earning till roughly two decades after birth (a little less since they are expected to be fast learners, a little more since most cutting edge fields that you want them in usually need Ph.D. or similar training). If you want these to be the bulk of the population, you're still talking about $250k upfront costs for around a whole generation (or 25% of that if just looking to move TFR from 1.5 to 2.0). That's still in the trillions (rough arithmetic sketched below). Re bonds: Proving this to investors for the initial batch is going to be very hard. Just establishing that the effect of selecting every allele for the one most favorable to intelligence doesn't give you a bright kid who keels over at age 30 from some unexpected synergistic effect is hard.
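A rough sketch of that startup arithmetic; the birth-cohort size is an assumption used only for order of magnitude, and the per-child cost is the $250k figure from this thread:

```python
# Rough sketch of the startup cost of funding a whole generation at ~$250k/child.
births_per_year = 3.7e6                 # approximate recent US annual births (assumption)
generation_years = 25
cost_per_child = 2.5e5                  # the $250k figure discussed above

whole_generation = births_per_year * generation_years * cost_per_child
print(f"whole generation: ${whole_generation:.1e}")                      # ~2.3e+13
print(f"25% of that (TFR 1.5 -> 2.0): ${0.25 * whole_generation:.1e}")   # ~5.8e+12, still trillions
```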

Expand full comment

You make some very good points, and I might update my timelines to be a decade or more later. Quaise is an MIT startup, so I'd tentatively encourage you to raise your confidence in them a bit. Regardless of anything else I've brought up, we should all be crossing our fingers for their first major operational test (either '24 or '26, I forget which--if the tech works, it's a quantum leap in renewables). I don't think you looked up what a Quarian is, lol. I have to wonder how hard or expensive it actually is to just wrap people in plastic with a battery-powered HEPA filter. What about home decontamination chambers? One good spritz-down with hardcore chemicals can't cost more than a few cents at the end of the day. I bet a culture that included this level of protocol would make a major pandemic virtually impossible. Of course at that point you might need to start intentionally dosing people's immune systems to keep them healthy, but we're already arguably there. But socialization can happen just fine through a little plastic.

"a lot will change in 150 years" the major concern is that it will settle into a dysgenic equalibrium. I don't believe in ASI for a number of reasons, so, just taking that for granted, my concern is primarily about favela-hellworld. If it turns out that cities/some other factor of modernity is inherently dysgenic, with an attractor point towards complexity collapse and stagnancy, then we DON'T want our evolutionary dynamic to settle in. We want to shake it up. Even if you do believe in ASI, my original point is that our energy might be better focused on the possibility of it NOT happening, since we seem to have very little agency either way.

You make some good points about the financial wizardry involved. Some counterpoints: start in a smaller country, since you only need one stable technocracy to kick off multiplanetary life, raise children cost efficiently (I have a hard time believing that 250k can't be cut in half) and finally you can back the initial citizen bonds with government debt if they default. Blowing out the budget and timeline to 1+ trillion and 70 years, IMO still easier and cheaper than even the simplest effective AI pause.

Expand full comment

Quote: For a “debate”, this lacked much inter-participant engagement. Most people posted their manifesto and went home.

I think this is the most revelatory statement in the essay, honestly, and points to where priorities should lie. Step one isn't to debate whether we should pause, it's to get understanding about AI to a point where people can actually have a back and forth. We need a way to conceptualize it. Current discourse is wildly speculative, because most people, myself included, have no idea what AI is about, or how it functions. AI research doesn't need to pause, it needs to explain itself.

Once we have a good explanation, answers about where to go next, and how, become a lot clearer.

Expand full comment

You want a two-year-old to tell you who they want to be when they grow up?

Expand full comment

Hmm, in a sense, yes. Given that explaining the current state of AI should be much easier than explaining its potential state ten years from now, it's weird that I don't really understand the current state of AI, whenever this debate crops up. I want someone to prioritize assessing what the current capabilities of AI are, and why, in a way that is easy for members of the general public like myself to understand. Until AI becomes palpable, people will either assume it is irrelevant, or assume it's a ticking time bomb, despite very little evidence in either direction. No genuine debate can occur without hard evidence as a reference point, just a bunch of talking past one another.

Expand full comment

I agree with your sense of confusion. What exactly are we talking about? AI on one level is already ubiquitous in our lives. I refer to it in my mind as lowercase ai. In my own life, it has meant the complete transformation of tools to manipulate and make photographs that has occurred in the last 10 years. It is sometimes literally like having an apprentice working for you compared to the way I used to have to do things.

But now we have artificial general intelligence coming on. AGI.

Exactly what that means I’m not clear on. I can’t imagine that anyone is.

ChatGPT-4 seems to be close to the state of the art; I might be horribly misunderstanding that.

I am intrigued that all the communication between ChatGPT-4 and people is written. The question in my mind is: if it were able to hear us, would it be able to understand inflection?

Expand full comment

Man, I want to comment on the inflection bit, but my mind keeps getting caught up on what would need to be done to teach it to understand audio in the first place. Seems like a totally different language, from a computer's point of view. But hey, there are plenty of audio files floating around on the internet. Enough training on a broad enough set, and it'll probably understand audio inflection as well as it understands The Inflection We Place Into Text, present day.

Expand full comment

> Seems like a totally different language, from a computer's point of view

Almost certainly. Spoken language is such a physical phenomenon. There is a good reason why music is such a common theme and such a big deal.

Addendum :

Is it ethical to use artificial intelligence, which lacks remorse and a sense of responsibility, to provide empathy? Companies are trying it.

https://www.wsj.com/tech/ai/ai-empathy-business-applications-technology-fc41aea2?mod=followamazon

Expand full comment

The FDA being extremely slow and byzantine doesn't even necessarily make it more effective at accomplishing its mandate. The research showing that phenylephrine isn't effective has been around for a while. And yes, phenylephrine is "safe", but it still has some side effects, and it's not a plus for safety to take an ineffective medicine even if the side effects are relatively minor.

Similarly with housing. Housing being so expensive in SF probably causes some amount of overcrowding, the use of old structures past when they should be retired, etc., especially in the grey market. It narrowly makes *new* housing safer, but doesn't even make *all* housing safer. It definitely contributes to homelessness, which is much worse, but yes, outside of the housing regulators' domain of responsibility. Nuclear plants are safe, but few in number, and traditional power sources are dangerous too.

I'm not just being nitpicky here--I think that these sorts of examples should make us consider what we think the actual effects of regulation will be. Part of the FDA's difficulty is being swamped by the sheer number and scope of medications out there; will the AI regulators be similarly overcome by sheer volume? What happens if they are? Unlike with a medicine, training an AI doesn't require selling in a public store. This might be more akin to the War on Drugs, where enforcement means actively going out and breaking down people's doors. To what extent will overregulation push people to try to avoid the regulated market entirely, whether or not they're trying to make general AI?

Expand full comment

I think technologists arguing for a ban on technological progress enforced by the state through violence/threat of violence is a terrible precedent to set. What if Ford/GM wanted a Pause on electric cars in the 2010s because doomsayers said Tesla was dangewous and scawy and we needed to Pause it? We'd have millions more ICEs on the road right now. What if they want a Pause on the cure for cancer, or the cure for HIV, or fusion power, because some people made a very lucrative career out of full-time promising the apocalypse was just around the corner?

It's a terrible mentality. For an analogy that might hit a bit closer, consider Gordian/Felix Faust in Terra Ignota - would you agree it's not a big deal to Pause space exploration for centuries because some people are uncomfortable with it?

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

There are plenty of people IRL who want to pause space exploration for fear of capitalism escaping the bonds of Earth and corrupting the universe. Presumably some of them would relent (fully automated luxury gay space communism and all that), but there's another faction that thinks humanity has an obligation to go extinct when Earth becomes uninhabitable.

Expand full comment

I for one think we must acknowledge and celebrate the virus inside us. Nothing and no one has an obligation to become extinct.

Expand full comment

What if a "pause" is the wrong approach even in theory? Not only would it force AI development underground, it would likely alter the aims of development toward the goals of illicit actors. In other words, how is unaligned AI going to look different coming from OpenAI vs. Russian bot farms? The pause approach seems like it would create more opportunities for poorly-aligned AI than in a no-pause environment.

I remember the early 00's, when MP3s first arrived on the music scene and everyone ran around downloading hundreds of megabytes of songs on Napster. (This, back when it took a couple of minutes to load high-quality images on a webpage.) Most people I knew used Napster and couldn't imagine a world where they went back to buying songs again. "Why pay, when it's free?"

Then iTunes came out with a novel feature: legally own your songs by paying $1 for them. Illegal file-sharing plummeted. Large numbers of people preferred a legal framework that included some friction to an illegal one with a small threat of downloading a virus. I'm not suggesting iTunes eliminated pirated file sharing. People still do it. But when piracy was the ONLY way to get digital music, people preferred the illegal method. The VELOCITY of illegal MP3 sharing was higher during the time when the only legal way to buy music was on a CD and there was no legal MP3 outlet.

Perhaps this suggests an approach for AI safety advocates. Create the iTunes store of AI - where you can accept a higher quality experience within a framework that ensures greater safety standards. This added friction might slow down development (without actually stopping anything) in a way that is enticing to would-be participants, by providing other aspects of the experience they would prefer.

Expand full comment

How do you do cutting edge AI model training underground?

It's not analogous to MP3s. Anyone with a pc and internet connection could download MP3s. AI training requires specific and very advanced computational resources that most people cannot access.

Expand full comment

This is true now, where you need a large org or government to accomplish this feat. As Scott points out, there will likely come a time when that factor is brought closer to the masses as compute scales up.

The analogy does bring up an interesting technical question that's outside my expertise: can distributed computing be leveraged for model training? Could there be an AI model trained with the equivalent of Folding@Home? Would it require a new technological development, beyond peer-to-peer file sharing and distributed computing?

Expand full comment

You can't ask for a pause because ultimately "existential risk" is more about young intelligent materialists confronting the fear of loss or death, and how that impulse leads to religion. Their only outlet for that force is technology, so it becomes angelic or demonic. It's like the way rationalists want meditation without religious dogma; they also unconsciously want the religious framework to deal with those fears without dogma too. Or that the religious part of a person always exists; if not angels, then aliens or AI.

It's like asking people to stop working on self-driving cars because you are afraid one day they might go self-aware and haywire and murder everyone. It's not really going to be taken seriously by the people who make them.

Honestly, the big problem with AI is that people want it to devalue or remove the human in favor of what he provides. If you like to draw, AI art is worthless, in the same way an aimbot is for an FPS. If you just want drawings for free, though, there you go. Customer service without the need to pay employees? Free voice lines for a mod? Even AI girlfriends for text-based willing intimacy? It's all people wanting the human product without the human. Just perfect piracy culture really, and I think the cultural ramifications of that might hurt a lot.

I think the real danger is that our institutions need to strengthen to prevent that, as well as our culture needing to value human process more. We need to restrict AI as a genie for greedy people more than as a devil that will kill us all, I guess.

Expand full comment

An interesting view.

Remove the human but not what they provide. I like that framing.

I don’t understand how meditation without dogma fits into it though.

Expand full comment
founding

"The biggest disadvantage of pausing for a long time is that it gives bad actors (eg China) a chance to catch up "

I would say that one of the *advantages* of a pause or progress-slowing regulatory regime is that it gives e.g. China *less* of a chance to catch up. Do none of these people understand the concept of an Advanced Persistent Threat, or China's aptitude for industrial espionage and eagerness to use it for whatever area of technology is important to China?

I'm generally skeptical of "We have to go balls to the wall, full speed ahead on [X], because if we don't China will get to [X] first and that will be catastrophic." I've heard that too many times before, e.g. space industrialization, and it's always turned out to be wrong or at least greatly exaggerated. I am aware that there is such a thing as AI research in China, but if I'm being told that China is a peer competitor to Microsoft or Google or whomever in this area, I want rather more in the way of evidence than I've seen. Mostly, I just see people waving the "but China!" banner as a way to shut down debate.

But OK, if we're really concerned that China is going to get to AGI or ASI first, and do a crappy job of aligning it, then the way that's most likely to happen is for e.g. Anthropic to get their AGI/ASI just about through the first round of serious testing. Then they come in to work one morning to find that their computers are bricked, their last three months of backups are corrupt, and that one really helpful Chinese postdoc has gone home to visit his family in China. Three months later, still trying to recover, they wake up in a world with an AGI or ASI that has been hastily and craptacularly aligned to the interests of the CCP.

I mean, if someone here wants to tell me that the big players in AI research are conducting their efforts with the sort of security I'd see in e.g. a DoD Special Access Program, great. But this is Silicon Valley we're talking about here, so I'm highly skeptical on that front. And if not, all the people saying "we need to go full speed ahead on AGI, so the Chinese don't get there first", are not part of the solution. They're part of the problem.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"I'm generally skeptical of "We have to go balls to the wall, full speed ahead on [X], because if we don't China will get to [X] first and that will be catastrophic."

In full agreement on this. Heard all the "but we *have* to do it because otherwise China will have a massive advantage in embryonic stem cell research/genetic modification to create super-babies!"

Any objections in regard to "so what if China does it, should we be rushing into something that might have very bad consequences for humans as humans?" were met with "But superbabies!!!!" and with the line that moral/philosophical objections are for the birds and kindly keep your religion out of my science and the public square.

But *now* suddenly all the Science Marches On lot are seeing that hey, if this possibly could have a deleterious effect for humans, totally our moral and philosophical objections should cry "halt!" to the hitherto unstoppable march? Really?

And don't tell me "ah but the difference is that *our* objections are firmly rooted in Science"; paperclip AI is just as much a philosophical construct and fear as the ones you were judging us for having. It's all rehashing 50s Golden Age SF about super-genius machine intelligences that will save/damn us all. Right now, in evidence for the fears/hopes, what we have in practice is: (1) can do art of the slick commercial kind, but still gets details (like how many fingers humans have) wrong and (2) can churn out reams of prose full of plausible bullshit, so lazy students will be using it for doing their homework and maybe (3) it'll replace a tranche of white collar workers in the professions who are suddenly waking up to the fact that they too are indeed replaceable, just like the peons they smugposted about.

Expand full comment

"China only knows how to crib others' homework" -- whether factual or not currently -- is not likely to remain the case when relevant Western specialists start to defect and "vote with their feet".

Expand full comment
founding

It doesn't matter what their feet do, so long as their wallet can't afford to buy the necessary computronium (and pay the excess-baggage fees to carry it with them on their flight to exile). And I suspect you greatly underestimate the number of western AI researchers who are willing to defect to China.

Expand full comment

The "computronium" exists where it currently does because the specialists designed and sold it there (while manufacturing in US-colonized Taiwan.) Rather than because it only grows in the climate of North America.

And you probably meant to say "overestimate" ? Currently AFAIK not so many people in USA are looking to move to China. This is likely to change if the former tries to ban their profession, however; or if the latter ponies up to offer them the proverbial "cold war defector villa" package.

Expand full comment

One aspect of their profession - cutting edge AI model development - is banned. They can still work in AI, they can still do all kinds of AI R&D. They can earn good money applying existing models to practical problems for example (ie. the stuff that lower tier AI guys do). Not ideal, but a hell of a lot nicer than moving to an unpleasant foreign country.

If the only thing standing in the way of these people moving from California to China is being able to earn as much money as they do now or be able to work on exactly the kind of problems they are now, it seems absurd that China hasn't already ponied up enough money and positions of power in Chinese AI research orgs to poach them already.

Expand full comment

What if the current forms of AI are not the ones most in need of alignment and by shooting your pause shot now, you destroy any future pause/regulation attempts?

Thankfully no one would ever be foolish enough to destroy public trust with a draconian overreaction to a not-the-Big-One event.

Expand full comment

While somewhat off-topic, it's indeed ironic when a U.S. citizen references Venezuela as an example of collapse, given the role of U.S. sanctions in causing harm. These sanctions have resulted in economic isolation from allies, neighboring countries, and international financial institutions, among other consequences.

"But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela."

It's even more striking when the possibility of ending up like Venezuela is next to literally dying.

Expand full comment

It's extra-striking when that 'we' seems to mean humanity as a whole (as opposed to the United States or the so-called golden billion).

Expand full comment

Is it striking that he doesn't mean the people of Burkina Faso or Haiti when he says if "we" develop AI?

Expand full comment

Awfully woke of me, I know, but I think when discussing x-risk black people do also count.

Expand full comment
(Banned)Oct 10, 2023·edited Oct 10, 2023

I mean, he implicitly includes black people (and everyone else) when talking about being able to develop AGI, even though almost all black-majority countries haven't even achieved basic industrialization yet.

And of course, ironically, this egalitarian universalism of yours is an extremely Eurocentric worldview!

Expand full comment

>This doesn’t have to be immediate war: Israel has come up with “creative” ways to slow Iran’s nuclear program

Which, afaik, is "acts of war, but the other side have no means of retaliating effectively". Not sure if that's doable to China.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

"Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. "

Why is it that everyone in the West is so pessimistic these days? Why are 62% of Americans "totally concerned" about AI, compared to 21% who are totally excited? Why do only 35% of Americans think AI has more benefits than drawbacks, compared to 78% of Chinese? It isn't even just AI. Liberals think climate change is going to kill them all. Conservatives think the woke or the transgender black women are going to destroy the country.

And yes, all of those things are challenges that we have to face. There are many more I haven't mentioned that will take vision and courage to confront--something that has been true for every human who ever lived. But we still live in the freest, most peaceful, wealthiest, and most just era in human history. Synthetic biology has barely killed anyone; instead, it has saved billions of people from famine through the Green Revolution. Technological and economic stagnation, even if it happens in 100% of cases across the world, is *stagnation* and not regress. Before that happens, technological advance and economic growth will continue to make our lives more comfortable and more interesting. Fertility collapse solves itself in the long run as the most fertile portions of society become larger proportions of the population. Rising illiberalism is a problem, but how illiberal is the world right now? As illiberal as in 1989, when communist regimes tyrannized half a continent? As for dysgenics, the Catholic Church did dysgenics for 1500+ years by putting its smartest and most conscientious people into monasteries, nunneries, or clerical roles, thus taking them out of the gene pool. Did Europe become a dystopic hellhole, or did it become the world's most powerful, most innovative, and most enlightened civilization?

And what about AI? What harms has it really brought that justifies the pessimism? If it's as transformational as the Industrial Revolution, great! Precedent says it'll be the biggest miracle that the human race has ever been blessed with. If the hype is unjustified and all we get are better recommendation algorithms and better chatbots, great! ChatGPT has been very useful for me, though it hasn't transformed my life. I fear "AI alignment" a lot more than I fear AI itself, because "AI aligners" are openly discussing banning GPUs, designing regulations to stop progress, and even attacking nuclear-armed countries for not buying into their apocalyptic vision.

Expand full comment

"Things are good" isn't an argument against any specific threat.

These extremely broad appeals to progress are so damn tiresome. It's absolutely irrelevant how many lives have been saved with 'synthetic biology'. Engineered viruses are either a significant threat or they're not. AI will either threaten humanity through the specific kinds of mechanisms proposed by AI safety researchers or not. The fact that the industrial revolution was a big technological change that ostensibly benefitted mankind doesn't prove that any other technological change will have a non-negative outcome, especially when that technological change is categorically different to the one you're talking about.

You need to make specific arguments against specific problems, because the stuff we're talking about really is different to what you're talking about and you're just being incredibly intellectually lazy.

>As illiberal as in 1989, when communist regimes tyrannized half a continent? As for dysgenics, the Catholic Church did dysgenics for 1500+ years by putting its smartest and most conscientious people into monasteries, nunneries, or clerical roles, thus taking them out of the gene pool. Did Europe become a dystopic hellhole, or did it become the world's most powerful, most innovative, and most enlightened civilization?

Over this time, Europe engaged in substantial eugenics in the form of executing criminals (which doesn't just mean fewer "bad" people - it affects most of the gene pool because genes get mixed around over time) that almost certainly more than made up for smart monks not breeding.

Expand full comment

"Engineered viruses are either a significant threat or they're not. AI will either threaten humanity through the specific kinds of mechanisms proposed by AI safety researchers or not."

What you're doing is the definition of pessimism. Why did you say that engineered viruses might be a significant threat, without mentioning that they could also be a significant benefit? Why did you say AI could threaten humanity, without also mentioning that it could create a golden age?

"You need to make specific arguments against specific problems, because the stuff we're talking about really is different to what you're talking about and you're just being incredibly intellectually lazy."

Yes, the future will be different from the past. After all, it hasn't happened yet! But if I see a pattern of people predicting doom over and over again with every new technology and being wrong every time, my prior for the next doom prediction panning out will be very low.

"Over this time, Europe engaged in substantial eugenics in the form of executing criminals (which doesn't just mean less "bad" people - it effects most of the gene pool because genes get mixed around over time) that almost certainly more than made up for smart monks not breeding."

Just like every society on Earth, so that didn't give Europe an advantage. Clearly, the dysgenic disadvantage of the monks/nuns/clergy was insignificant.

Expand full comment

>Why did you say AI could threaten humanity, without also mentioning that they could create a golden age?

That's not the point - the point is that you need to deal with the SPECIFICS of these things. Vaguely gesturing towards some general improvement in the world over time does NOT prove these things aren't dangerous.

And I don't think anyone with half a brain thinks that engineered viruses have even 1% of the potential to help the world that they have to destroy it, both in terms of likelihood and in terms of any benefit a virus could deliver coming close to how bad a super-pandemic would be.

As for the benefits of AI, literally everyone talking about AI is aware of the potential benefits. The whole reason tens of billions of dollars are being thrown at AI every year is because of the potential benefits. It's the risks that are being grossly under-considered.

>Yes, the future will be different from the past. After all, it hasn't happened yet! But if I see a pattern of people predicting doom over and over again with every new technology and being wrong every time, my prior for the next doom prediction panning out will be very low.

Nobody predicts doom for "every new technology". I literally cannot remember a technology being talked about (by smart, knowledgeable people) in recent history the way AI is, and I can't think of any technology that had the specific characteristics that give AI the potential for grave harm.

AI is risky for very, very specific reasons, and if the best effort you can muster is "well, people said other technologies would be risky too" then you're literally contributing nothing to the conversation.

>Just like every society on Earth, so that didn't give Europe an advantage. Clearly, the dysgenic disadvantage of the monks/nuns/clergy was insignificant.

Sorry but you're absolutely clueless if you think many societies, let alone 'every society on earth', were doing this. It wasn't being done in Africa, it wasn't being done in the Americas, it wasn't being done in Australia, it wasn't being done in most of Asia. "Just like every society on earth" is an absolutely insane thing to say. In most of these places, violent genes were being preferentially spread through conquest.

And yes, it very very obviously gave Europe an advantage - perhaps you've heard of the last few centuries in which European people were responsible for the overwhelming majority of all scientific and technological progress?

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

"I fear "AI alignment" a lot more than I fear AI itself, because "AI aligners" are openly discussing banning GPUs, designing regulations to stop progress, and even attacking nuclear-armed countries for not buying into their apocalyptic vision." Agreed

Re: "Why is it that everyone in the West is so pessimistic these days?", I suspect a lot does come down to economics. As Scott cited, https://marginalrevolution.com/wp-content/uploads/2011/06/MaleMedianIncome.png , the median worker's real income (well, the graph cites male workers) had been growing for decades, then, around 1972, stopped growing. I've also cited a bunch of technologies that were expected (from the perspective of the 1960s), but never happened. https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate/comment/41424971 Yes, we got the internet and cell phones, but I mostly view the last 50 years as akin to Vinge's "Era of Failed Dreams". I see the possibility of AGI as the main ray of hope for substantial progress in my lifetime.

Expand full comment

"As for dysgenics, the Catholic Church did dysgenics for 1500+ years by putting its smartest and most conscientious people into monasteries, nunneries, or clerical roles, thus taking them out of the gene pool."

I was gonna do a long reply to this, but lost patience. Briefly: clerical celibacy, even when officially imposed, didn't stop clergy from having kids. Those kids weren't necessarily all the best and brightest themselves (see Cardinal Wolsey's illegitimate son Thomas Winter).

Second, the religious orders and clergy also served as a dumping ground for spare children or girls who would have no other means of getting married or earning a living. A lot of the historic sex/other abuse cases in Ireland came about because people were left with no option than "well, join the brothers/nuns/priests" and were not particularly smart, educated, or suited at all for the vocation.

Third, that's a very dang Protestant Reformation way of looking at it. When the Reformation enabled clerical marriage and families, where are the smart offspring? Quick, name Cranmer's equally brilliant kids (if he had any, I only know of a daughter)! Or Luther's son who carried on Dad's striking innovations! Or the rest of the prominent Reformers?

The Barchester novels of Trollope have a lot of minor clergy with big families, men who are not the "smartest and most conscientious" who are improving society by having smart babies. Other clergy are more interested in living comfortable middle-class lives instead of carrying out their duties (e.g. the prebendary Dr. Stanhope who "has been in Italy recovering from a sore throat for 12 years and has spent his time catching butterflies.") The capable ones are engaged in internal politics for status and wealth.

So much for Protestant married clergy.

Fourth, this assumes that *all* the smart people went into religion, leaving only the fools and idiots behind. Maybe if a couple have one smart kid, their other kids might be smart, too? Yes, the very brightest went for academic and clerical careers, as that was how you could advance in the world, especially if you were ambitious but poor. But surely some of their siblings must also have had some intelligence and conscientiousness.

Fifth, when we talk about "Bach" or "Darwin" or "Einstein", there is only one guy we mean - even though Bach had 20 kids, a fair number of whom went into the family business of music. Darwin has descendants, but none (I think) who have achieved the same breakthrough level of fame as he did. Einstein did have children, but none came close to his fame, and von Neumann had one daughter who is an economist and professor.

Big Smarts don't necessarily get passed on in the same degree to succeeding generations. Now, if *every* smart kid was dragooned into celibacy and *only* the stupid reckless impulsive ones were having kids - sure, your 'dysgenics' argument might hold water. But they weren't and they didn't and it doesn't. "Oh, if only St Augustine had had kids!" He did, and the boy died, but even if he had had living offspring, who is to say they would have been Scientists and Technicians, which is where our measure of "smartness" seems to lie?

Expand full comment

I saw this today and thought it appropriate for the topic:

https://www.youtube.com/watch?v=s7XM6sXS1as

Expand full comment

Well done! On a somewhat related note: https://xkcd.com/876/

Expand full comment

A great writeup, but with all due respect to everyone involved, these discussions seem naive to those of us who work in tech/innovation policy. There is no mention of the First Amendment or Bernstein v. Department of Justice, which is a significant gap. Yes, let's have the discussion about pausing AI and part of that should include its legality.

Expand full comment

No point discussing legality if we can't even agree on whether or not a pause in principle would be a helpful, harmful or useless thing.

Expand full comment
Oct 6, 2023·edited Oct 6, 2023

It seems utterly futile to expect everyone to adhere to a worldwide ban on AI development, whether temporary or otherwise. Those advocating either are living in cloud cuckoo land.

Can anyone seriously imagine the Rocket Man, for example, agreeing not to pursue AI if he thought it would be to his advantage and could find suitable experts? I doubt if even China could dissuade or prevent him, even if they wanted to try. And it seems highly unlikely they would, if anything quite the opposite as he seems to be their useful Bad Boy.

The trick may be to use AI somehow to monitor and look for signs of products of AI, "set a thief to catch a thief" as one might express it. This would be analogous to radio traffic analysis as used in WW2 and other conflicts, albeit an advanced semantic equivalent.

So a monitoring AI could alert its users, hopefully promptly and with a clear explanation, that something new seemed to have popped up suddenly, even if it wasn't entirely sure at first how or where this new development originated.

Of course publicly available AI services, such as OpenAI's, should also be monitoring user prompts and flagging those which appear to involve malign or risky intent, such as "How can I synthesise explosives in my garage?" or "How can I work round bank security systems?"
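A minimal sketch of what that kind of prompt flagging could look like is below. The risk categories, regex patterns, and function names are purely illustrative assumptions, not any provider's actual moderation pipeline; a real system would presumably pair a cheap screen like this with a model-based classifier.

```python
# Hypothetical sketch of prompt flagging: a cheap keyword/regex screen whose hits
# would be escalated to a second-pass, model-based classifier. All categories and
# patterns here are illustrative assumptions, not a real provider's rules.
import re

RISK_PATTERNS = {
    "weapons": re.compile(r"\b(synthesi[sz]e|build|make)\b.*\bexplosives?\b", re.IGNORECASE),
    "fraud": re.compile(r"\bwork\s+(a)?round\b.*\bbank\s+security\b", re.IGNORECASE),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the risk categories a prompt appears to match (empty list = no flags)."""
    return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    examples = [
        "How can I synthesise explosives in my garage?",
        "How can I work round bank security systems?",
        "What's a good recipe for banana bread?",
    ]
    for prompt in examples:
        hits = flag_prompt(prompt)
        print(f"{prompt!r} -> {hits or 'no flags'}")
```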

Expand full comment

'Worldwide' is a bit of a misleading term, when literally only a handful of countries are capable of cutting-edge AI research.

NK didn't invent nukes, they developed an already existing technology with the help of outsiders.

Expand full comment

Does anyone know what came of the anti-Meta protest? Did it even happen? How many people showed up? What was it like?

Expand full comment

This person: https://twitter.com/brickroad7

Is claiming as of yesterday that:

"The "hard problem" of mechanistic interpretability has been solved.

The formal/cautious/technical language of most ppl commenting on this obscures the gravity of it.

What this means -> not just AGI, but *safe* *superintelligence* is 100% coming"

Referencing this: https://twitter.com/ch402/status/1709998674087227859

I thought there might be some mention of this claim, but I'm not seeing any. Are they smoking crack or is there some chance, however small, that this whole discussion has been suddenly rendered moot?

Expand full comment

Read through the replies, lots of people disagreeing with him, especially the superintelligence part.

Expand full comment
Oct 7, 2023·edited Oct 7, 2023

I find the inclusion of "Saudi Arabia" high in the list rather specious. I cannot reconcile a traditionally Wahhabist recent history (young aspirational MBS notwithstanding) with the ceding of public enquiry on any and all topics in the realm of verbal knowledge (which GPT prides itself on). Anyone know what's going on in this specific instance? Unless... Bill Gates' "humongous investment" in OpenAI has Saudi partners.

Expand full comment

I really don't believe in the crisis you're describing if we don't get AI, Scott. That is, I do believe that increasing human technological power without wisdom will kill us eventually; we have the power to kill a large fraction of humanity by making bad decisions, and we'll pick up the power to kill more over time. So I believe that if we don't invent AI we will probably all be killed (or, more accurately, the vast majority of us killed and our civilization wiped out) by bioweapons or other weapons of mass destruction; that won't happen, but it will only not happen because AI will kill us first.

But what you are describing as "rising totalitarianism + illiberalism + mobocracy", I describe as "the last twenty-year block has shown less progress than the previous twenty-year block but is in a better state than any earlier twenty-year block in history." The majority of the countries that escaped the Soviet Union are now functional democracies, China is trying to be slightly more totalitarian than it was last twenty-year block but has higher standards of living than before that, more African countries are looking functional than ever before, Communism is still basically dead outside a few loons, and overall standards of living continue to increase worldwide. Every loss is balanced by two gains, but the loss is right in front of people's faces and the gains aren't. This might be the third most doomful period in American history since WW2, in terms of internal politics, might even be the second, but everyone has much better heart medicine and fewer cigarettes than they did in the last one, and I expect we'll make it through it, too, and on to the next.

And for the claim this will go badly in the future - I don't actually expect fertility collapse and dysgenics to overcome increasing lifespans, developments in stimulants and nutrition, and the Flynn effect more broadly, and so successfully ruin the world. Some good places will become bad, more bad places will become good, just as has happened worldwide for the entire history of humanity since the French and Industrial Revolutions. I think a future where everyone careens towards Venezuela within a hundred years is... 10%? 15%? of my no-AI-invention spread of possible futures, though admittedly that's a vanishingly small fraction of my overall expectations, because I think there's a 99% chance we all die to AI within the next five to fifteen years.

Expand full comment

>China is trying to be slightly more totalitarian than it was last twenty-year block but has higher standards of living than before that

And their fertility rate is in the toilet (and may even be lower than official estimates).

>more African countries are looking functional than ever before

Africa will likely never catch up to the rest of the world (without a revolutionary technology like AI, gene editing etc.) and their growth is already slowing (despite the current year being the easiest by far out of any year in history for a country to achieve rapid industrialization): https://www.bloomberg.com/news/articles/2023-10-04/world-bank-laments-africa-s-lost-decade-as-economic-growth-slows?embedded-checkout=true https://www.bloomberg.com/opinion/features/2023-09-12/africa-s-lost-decade-economic-pain-underlies-sub-saharan-coups Much of the growth they have achieved has been through resource extraction, and even there they need foreigners to do the work of developing and operating these projects.

Also, countries often touted as sub-Saharan African success stories like Nigeria are not doing well - the place is borderline a failed state. South Africa is a mess, they can't even keep the lights on and the crime rate is through the roof. Say what you want about apartheid, but I'm sure a third of South African men were not committing rapes (https://www.theguardian.com/world/2010/nov/25/south-african-rape-survey). Comparing modern Zimbabwe to Rhodesia is a sad joke.

A better answer would be to say that Africa isn't relevant here - they have always been poor and dysfunctional and likely will be so for the foreseeable future. The world does not turn on Africa being functional.

What the world DOES turn on is countries like China, and the growth in wealth you allude to absolutely does fuel global 'illiberalism' and has helped make the Ukraine war possible. If there is a non-AI dystopic future, a hegemonic China is a frontrunner for what has caused it.

>but everyone has much better heart medicine and fewer cigarettes than they did in the last one, and I expect we'll make it through it, too, and on to the next.

Yeah, it turns out that it takes a lot more to make people happy than 'being less likely to die from heart disease'. Over the period in which heart medicine improved, people started getting married or being in relationships less, having fewer kids, having fewer friends, spending more time alone, became less religious, experienced a rapid gain in political polarization and decline in national pride and identity, worked longer hours etc. What, do you think that vast swathes of the American population all just spontaneously started having the 'wrong' mindset about the state of American society?

>though admittedly that's a vanishingly small fraction because I think there's a 99% chance we all die to AI within the next five to fifteen years.

Thinking AI has a >1% chance of killing us all within 5 years is crazier than any of Scott's non-AI scenarios.

Expand full comment

The current AI paradigm is neural networks, for which the initial training takes months on a massive fleet of compute with a heavy focus on parallelism. But high-end chip and especially GPU production, and the massive datacenters with a heat signature visible from space, those are bottlenecks that could be targeted with a regulatory regime, and the collateral damage needn't be a new dark age. It would perhaps make the few humans running big datacenters cry 1984, maybe we have to ditch the datacenters entirely, but the rest of us would go back to using the same personal computers, and maybe we'd have to connect to an Internet less heavily centralized on AWS. Like in, what, the 2000s? It's not so scary when you remember the low-rise jeans and vampire romance novels are technically optional.

Expand full comment

Training the AI not to say [wrongthink term] works until next Tuesday, when the new [wrongthink term] replaces the old one *with people*, and then you have to train the AI not to use it. This will take another year or so, and by that time another term will have replaced it.

Or haven't you noticed the evolution in terms for some things?

"Let's let it generate theories and then sort through them" doesn't work with humans, let alone with AI. We don't use people who don't know the field to generate ideas, and then use experts to sort through them, which would be the equivalent, but using humans.

Why? There's too much obviously wrong stuff generated, and the experts very quickly get tired of saying "That's wrong for the same reason as the last 20 things you said."

Having the AI generate plausible b*s just makes it harder to find the wrong thing, which wastes more expert time discarding it. I have an image of someone digging through the Augean stables, saying "There must be a pony buried here..."

Expand full comment

I don’t think it’s a great sign that the only person in that debate associated with a DC think tank isn’t even a Washington political player, just an AI researcher.

This snippet also stood out to me: "Israel has come up with 'creative' ways to slow Iran's nuclear program, and countries trying to frustrate China's chip industry could do the same". I'm being a little unfair here because I doubt Scott really thought through this paragraph, but what's the suggestion here? Develop an entirely new generation of virus that burns up H100s in China? Plant bombs in leading Chinese researchers' vehicles? These ideas don't remotely pass muster with common sense, let alone what DC would be willing to do. The fact that one of the smartest AI safety advocates - Scott - can say a line like this is pretty damning of the entire enterprise.

You can be the most pro or anti AI safety person in the world but this debate is irrelevant until and unless AI safety folks develop some sort of lobbying capability and understanding of DC. Unfortunately, the time to do that was several years ago. SF disregard for DC (and I do mean DC here: you can have disregard for regulation and be effective in DC but you have to show up) was cool and helpful when it was Uber dismantling the medallion system, less so now.

Expand full comment

I'd just like to point out that there are also the harms of *badly* implemented pauses. A pause designed by the tech people getting together and hammering it out is one thing. The pause that gets negotiated by presidents and world leaders would look nothing like this. Even if a pause could itself be beneficial, I fear that what we actually get is a version of GDPR for AI.

One very real failure mode is that, like with personal privacy, we get a situation where in practice researchers in countries like China get to proceed with few real obstacles because those countries don't have an independent judiciary while in the EU and US we get a bunch of hoops to jump through that slow down research.

In other words, it's not just about whether a pause is good but whether advocating for a pause is good once you consider the possibility of partial success. Realistically, I think the most likely effect of tech people lending their support to such a pause (given most of America only understands the worry via unrealistic sci-fi scenarios) is that we get a compromise pause that is mostly about protecting existing high-earning or statusful professions from competition, and because it's not a complete ban it mostly just serves as a drag on progress in countries with free judiciaries.

Expand full comment

I think this article misunderstands what the publishers and signatories of the Six Month Letter were calling for back in March, which was in fact more of a combination of Surgical Pause and Regulatory Pause than the Simple Pause described.

The reasoning behind pausing now was that many would argue that we are at the precipice of powerful general intelligence. This would make it more akin to the Surgical Pause.

The letter also said that AI labs should immediately pause for at least six months, until a set of shared safety protocols and a robust governance system can be developed. This makes it more akin to the Regulatory Pause. When the author writes about the Simple Pause that "six months from now the pause will end, and then we're more or less where we are right now" - this is a misrepresentation. The proposed pause would not end unless sufficient safety measures were in place.

The Open Letter did not call for the Simple Pause described. It called for a hybrid of the Regulatory and the Surgical.

Expand full comment

This train is already too far gone. Pausing is not possible when a $200 AI Certification from Salesforce will provide a short term job.

Expand full comment

"Regulators can plausibly control the flow of supercomputers, at least domestically. But eventually technology will advance to the point where you can train an AI on anything." This assumption doesn't seem to be obviously true for me. Sure, it gets less computationally expensive to train LLMs, but there is no reason why there isn't a minimum (or an asimptote) of some sort, or why this minimum is, say a 1990s microwave's chip

Expand full comment

Going to display my ignorance here, but why not design some kind of governor/crippling mechanism, approved and adopted by everyone - i.e. certification (without all the various downsides that might imply)?

Expand full comment
founding

Because it doesn't help if we shut down the AI the day after it released the supervirus. And because the AI may figure out about the kill switch, and make its first priority to disable that switch e.g. by cutting off the relevant communications channels. If the first unapproved thing it does is to disable the kill switch, and particularly if it *quietly* disables the kill switch, then what?

Expand full comment

It still needs electricity; what is its plan going to be to keep that gravy train going after it kills us? Maybe power supply could effectively act as our dead man's switch. From obtaining the mineral sources, to the machines that convey them to the plant, to the power plant itself, every step of the chain assumes the existence of human laborers in a thousand little ways, manual tasks that are easy for us but not for a machine. Are sufficiently dexterous general purpose robots a potential bottleneck we could try to attack instead?

We may need to get rid of any power that runs without fuel, like solar and wind, since they can run 20-25 years and that might give it time for a workaround. An evil AI might try to persuade people to dump a lot of time and effort into more efficient and longer lasting solar cells, say it's to stop "climate change", and then kill us off once we are no longer necessary to keep the lights on.

Expand full comment

Has anyone suggested offering a large amount of money to AI researchers not to work?

Expand full comment

A distressing number of debaters here seem to believe that aligning powerful AIs requires those powerful AIs to exist first to be experimented on, when in reality the alignment problem is fundamentally one of knowing what your highly capable AI will want before any AI is that capable.

I think the presumption that current humans will benefit very much from the slightly slower progress caused by using faster chips as they're developed rather than in a burst is just wrong; we can't align the AI at any plausible unrestricted takeoff speed, so any delay of the actual ASI critical point we can get is just good.

I was really surprised by Scott's lower P(doom) from AI than from bioweapons and social decline, which read to me as just not getting what it means to be facing an unaligned smarter-than-you thing.

Expand full comment