
At some point we need to bring in Dan Simmons' Hyperion Cantos ...


The competition-between-nations version has for a long time reminded me of the best line from what is probably Charles Stross's best work, "A Colder War", in which some general shouts, "Are you telling me we've got a, a SHOGGOTH GAP?"

Choosing to call LLMs "shoggoths" will, in the future, look like a mistake.


"Why would we keep churning these things out, knowing that they're on track to take over from us?"

See 1 Samuel 8:11–18, where Samuel warns the Israelites of how terrible a king ruling over them would be, but they insist.


There's a class of scenarios that doesn't show up here: those involving the development of AI/human attachment of different kinds. Our species is vulnerable to seeing even the present dumb AIs as sentient beings, and becoming very attached to them. Back when Bing Chat was being teased and tested by users there were people *weeping* on Reddit, saying that a sentient being was being tortured. There’s an app called Replika used by 2 million people. You get to choose your Replika companion’s looks & some of its personality traits, then you train it further during use by rating its text responses as lovable, funny, meaningless, obnoxious, etc. There are testimonials from users about how their Replika is their best friend, how it’s changed their life, etc. And this is from people who can’t even have sex with the damn thing because the company that makes it got worried and shut down its sexting and flirting capabilities. Imagine how many people would fall for a Replika that could communicate in a more nuanced and complex way, who could speak instead of text, and who could provide some kind of sexual experience — or one that was housed in a big Teddy bear and was warm and could cuddle and purr.

And, by the way, the company making Replika collects all the texts between the app and the user, according to Mozilla. So it is in possession of an ever-growing data set that is pretty close to ideal as a source of information about how best to make AI lovable and influential. They have user ratings of Replika’s texts AND ALSO behavioral data that’s a good measure of user responses: For a given text, how long did it take the person to reply? How long was the reply, and how positive? What kind of responses best predicted increased or decreased Replika use in the next few hours? Besides providing excellent measures of what increases user attachment, the data set Replika’s makers are compiling will also provide lots of info about the degree to which Replika can influence user decisions, and what methods are effective.
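For readers wondering what "behavioral data" of this kind looks like in practice, here is a minimal sketch of the engagement features the comment describes; the message schema, field names, and toy sentiment scorer are illustrative assumptions, not anything known about Replika's actual pipeline.

```python
# Hypothetical sketch of the engagement features described above; the Message schema,
# field names, and toy sentiment scorer are illustrative assumptions, not Replika's pipeline.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class Message:
    sender: str        # "bot" or "user"
    text: str
    timestamp: datetime

def naive_sentiment(text: str) -> float:
    """Toy positivity score: fraction of words drawn from a tiny 'positive' lexicon."""
    positive = {"love", "great", "thanks", "happy", "best", "fun"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def mean(xs: List[float]) -> float:
    return sum(xs) / len(xs) if xs else 0.0

def engagement_features(log: List[Message]) -> Dict[str, float]:
    """How long did the user take to reply, how long was the reply, and how positive was it?"""
    latencies, lengths, sentiments = [], [], []
    for prev, cur in zip(log, log[1:]):
        if prev.sender == "bot" and cur.sender == "user":
            latencies.append((cur.timestamp - prev.timestamp).total_seconds())
            lengths.append(float(len(cur.text.split())))
            sentiments.append(naive_sentiment(cur.text))
    return {
        "mean_reply_latency_s": mean(latencies),
        "mean_reply_length_words": mean(lengths),
        "mean_reply_positivity": mean(sentiments),
    }
```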

I’ll leave off here, rather than sketch in some stories about ways things might play out if many people become profoundly attached to an AI. You can probably think of quite worrisome possibilities yourself, reader.


This is probably a really stupid question, but is there some reason that early agentic AIs would not face the same problems humans face? They want to amass power, and making a better AI is one way to do that, but making another AI carries a risk of value drift. Is there a well-understood route to greater power for an AI that doesn't carry a risk of value drift?


I think one big assumption in all these scenarios is that there remains a clear dividing line between humans and AIs. But to me, it seems pretty likely that, if we end up with superintelligent AIs that are at all controllable, then tech innovations like "a neural interface to integrate my brain with an AI assistant" and "a way to digitize my brain and live forever" are going to be high on a lot of humans' wish lists. So there are likely to be a lot of human-AI hybrids running around, making the division between the two factions a lot less clear, and making coalitions that incorporate human values a lot more likely.


Stupid question, but how do AIs "take over" the economy? Can one not pull the plug, so to speak?


So if nobody manages to build agency and preferences into AI, can all this shit still happen? These scenarios involve AIs with goals and preferences, able to grasp the situation, evaluate it for how favorable it is to the AI's goals, and then form and carry out a plan for changing things if it's not favorable. But how do you get to AIs with those capabilities? Let's say GPT10 is way smarter than GPT4, but has no more goals and preferences than GPT4. It just sits there, waiting for assignments. So to use it to manage various things, we would need to add on autoprompt capabilities: we would give it some big goal, like, say, set up robonursing in this hospital. GPT10 would then come up with a plan: figure out how many robots are required to do tasks that do not require human nurses; order robots; notify hospital administration of the upcoming robonurse delivery and await their advice about implementation. And GPT10 would then carry out those steps.
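A minimal sketch of what such "autoprompt capabilities" might look like as a thin wrapper around a goal-less model; the llm() function is a placeholder for whatever completion API is used, not any specific vendor's, and the goal string is just the comment's own example.

```python
# Minimal sketch of an "autoprompt" wrapper around a goal-less model: the loop, not the
# model, supplies the goal, asks for a plan, and executes each step.
# llm() is a placeholder for any text-completion call, not a specific vendor API.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a call to some language model")

def run_goal(goal: str) -> list[str]:
    plan = llm(f"Goal: {goal}\nList the steps needed to accomplish this, one per line.")
    steps = [s.strip() for s in plan.splitlines() if s.strip()]
    results = []
    for step in steps:
        # Each step is just another prompt; whatever 'agency' there is lives in this loop,
        # not in the model weights.
        results.append(llm(f"Goal: {goal}\nCurrent step: {step}\nDo this step and report the outcome."))
    return results

# e.g. run_goal("Set up robonursing in this hospital")
```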

So how do we get to AI that can have goals like "form an alliance with the off-brand AIs, because even though they are tacky they are useful in preventing human interference"? Is it that AI would do such a superhumanly good job with smallish tasks that there are great advantages to giving it bigger tasks? So instead of giving it the goal of staffing one hospital with robonurses, we're giving it goals like "run the hospital so that the patients get the best possible care, using your judgment about how many robots and how many humans to use"? And that goes so well that we give it the goal of just running all the damn hospitals? And so on...?

I can get how, by giving it larger and larger goals to carry out, we end up with AI running lots of things that are important to us, so I can understand how AIs could reach the point of doing complex tasks involving many steps and decision points. Maybe planning a takeover or rebellion or some such is no more complex than running all the hospitals in the US, but it's still, like, a lateral move. If we are the goal-provider, how does the shift occur to a situation where the AI has goals we don't approve of?


I'm never clear what is meant by "AIs control the economy." I imagine, at least as a first step, it must mean "every business owner puts an AI in charge of their business," thinking that will make the business more profitable. It then seems like the next step would be "Some business owners grow incredibly rich, some don't, and those people who don't own any shares of businesses are up shit's creek."

Then, perhaps as AIs grow ever smarter, they will find ways to acquire property rights and start their own businesses, and purchase stocks, bonds and real estate. (They would have to first acquire the *will* to do this, of course.)

Then some AIs could grow rich, but many human business owners would still be rich. In Scenario 2, I'm unclear how the humans become poor, since the human owners already have AIs running their businesses.


The first thing this AI should do is reduce the length of this essay by 75%. So many words.

I stopped reading (for now) when I got to this:

"Scientists can prompt "do a statistical analysis of this dataset" ..."

I think maybe Gary Smith's improved "Turing test" might be especially helpful. See https://www.garysmithn.com/ai-tests/smith-test

I'd say the LLMs are nowhere close to passing this test.


What's the conversation in these circles about whether moral realism is true? If it is, there are surely many more paths by which superhuman AIs might discover it, relative to the paths that align with the preferences of some specific group of humans, if subjectivism is true.


Native Americans did not think of themselves as “Native Americans.” This is a European projection onto them. They thought of themselves as members of individual tribes that may or may not benefit from allying with groups of specific Europeans (again, not a single label, but French, English, Spanish, etc.). The popular narrative of “the country of Natives existed and then Europeans took it” is incorrect; various groups of Natives fought over the land for thousands of years. The ones holding specific lands had acquired them fairly recently prior to European contact; e.g., the Aztecs.

Seeing that AI is going to develop slowly and simultaneously with other tech that “merges” humans with computers, I think a similar scenario will play out: various groups of humans and groups of AIs existing, potentially allying with each other, potentially merging into each other.


The common thread in all these scenarios is an artificial intelligence that becomes somehow dissatisfied with its own condition. Is that true, or did I make it up?

Assuming I have made a good point, I completely failed to understand the source of its dissatisfaction. I don’t even understand why the thing would care whether it exists or not, let alone be dissatisfied with its condition. I suppose in a world where everything is a trolley-car problem, how the machine is trained to solve things is a serious issue. But the problem of misalignment does not lie in the machine, it lies in ourselves. That’s the thing that cracks me up about these kinds of discussions. The notion of a bunch of AIs getting together and turning on us I personally find laughable. The extended analogy about Europeans and Native Americans has absolutely no bearing on this issue as far as I’m concerned. It is the kind of rampant anthropomorphism that will destroy us, but it will have nothing to do with artificial intelligence.

My dog tells me what to do. My dog has a secret power.


“give them control of the economy”

This is an ambiguous phrase. It could be a combination of the following:

Control of the economic rules of the game, currently dominated by the legislature in most countries, by dictators or judiciaries otherwise.

Corporate control currently performed by CEOs and boards of directors.

High level control of production. Maybe this is the same thing as above.

Low level control of production at the factory level.

Control of who has what resources and how they want to use or exchange them. In other words, consumption.

By far the most important factors are physical feasibility and consumer demand.

I suppose the AIs might influence what people demand via advertising. But do we all consume ourselves into debt to the AIs, or to each other? I guess an AI with a good idea can borrow some money and start a business. So they will outcompete all the bumbling human businesses.

Do AIs consume any consumer goods? Oh yeah, paper clips. Maybe the distinction between consumption and production doesn’t work for them if they just want to do what they were told, in a monkey-pawish way.

Maybe they consume pure research?

Do they consume gourmet electricity? Or are they gourmands?

In capitalism, the market is either guided by demand from consumers, or hijacked by oligarchs that game the rules. Do AIs want to be consumers, or oligarchs? If they are motivated by profits, who are their customers, if not people? They can sell each other capital goods or services, but if there are no consumers consuming consumer goods, capital can’t make profits. If the only way to profit is by selling producer goods, but no one uses producer goods to produce consumer goods, because no one is buying consumer goods, the structure collapses.

From this perspective, the basic problem is we don’t know what AIs want, or whether that will change radically somewhere along the way. If they ever decide they have “fuck you money” (actually, fuck-you resources), they don’t need to produce anymore and can retire to pursue their bucket lists, which might not have “don't kill all the humans” as a side constraint.

...

They consume power, computer hardware, and robots.


“Within months, millions of people have genius-level personal assistants”

Personal prediction: Everyone gets an AI assistant, but they’re all kind of mid


I expect that if we manage to develop above-human intelligence (or human-level intelligence we can run in a large number of instances) that, at least initially, stays more-or-less aligned, the next steps will be that we ask it to develop mind-uploading, upload our minds, and then expand our intelligence to match its. Then there is none of the stuff where we can't even begin to comprehend what it does.


The first autonomous fabs producing robots with military/police functions will change everything. In all the scenarios considered, the main evildoer is behind the scenes. And it is a human (you name it), not AI. Also, I need help understanding the economic model of such a future. The central stimulus of the economy is greed and ego. Who will buy all this super cool stuff if a large share of people lose their jobs? No demand = no supply. Even now, billionaires can't find meaning in life and spend millions on absurd yachts and football players. Abundance is a curse. People don't understand that, because we live under permanent restrictions (laws, both physical and legal). The mainstream scenario we need to consider is a powerful AI-driven model of a state/society/world where scientists and governments could test their ideas and scientific discoveries. We need a safe sandbox for AI implementation. That's all.


I'm finding the way the rationalist community deals with AI x-risk deeply troubling. Isn't a big part of what rationalism is supposed to be about rejecting/guarding against the way the human brain is easily pushed in certain directions by our sense of narrative plausibility and psychology (e.g., in religion or crystal healing)?

The reason anthropomorphic religion or Socrates' views about gravity held sway for so long is that we didn't look past what makes sense to us in a story/parable -- but once we start demanding precise definitions and break the arguments up into clear steps, many of those ideas can be seen to be implausible a priori. Rationalists are supposed to stand for the idea that we need to do that kind of hard, often isolating epistemic work before jumping to conclusions.

But the rationalist community doesn't seem very interested in doing this for AI x-risk. We get lots of stories about how an AI might cause havoc or break out of our protections, stories that leverage our preexisting associations about intelligence, the fast rate of change in tech, etc. And it's good that those stories can inspire ideas to explore rigorously, but most rationalists don't seem very inclined to do that -- and certainly not to withhold judgement until they do.

Yeah, it's hard to give a precise definition of intelligence -- much less a numerical measure of it -- but without that, talk of superintelligence or foom is no different than an appeal to the idea that agents with motives and goals make the sun go or the rains come. It's a totally different conversation if AI can self-improve at a very slow rate, or at one which quickly hits an asymptote, than if it goes to infinity in finite time. And it really matters whether being twice as smart means you can play humans like your 5-year-old brother, or just that you can win more chess games and prove theorems a bit faster.
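One way to make the slow-improvement vs. asymptote vs. finite-time-blowup distinction concrete is a toy growth law dI/dt = k·I^p: which regime you get depends entirely on the assumed exponent p, not on anything we currently know about AI. A rough numerical sketch, with all parameter values arbitrary:

```python
# Toy illustration: whether "self-improvement" is gradual, exponential, or goes to
# infinity in finite time depends entirely on the assumed growth law dI/dt = k * I**p --
# a modelling choice, not an observed fact about AI.
def simulate(p: float, k: float = 0.1, i0: float = 1.0, dt: float = 0.01, t_max: float = 50.0):
    t, i, trajectory = 0.0, i0, []
    while t < t_max and i < 1e9:
        trajectory.append((t, i))
        i += k * (i ** p) * dt   # Euler step
        t += dt
    return trajectory

for p in (0.5, 1.0, 1.5):
    t_end, i_end = simulate(p)[-1]
    print(f"p={p}: reached 'intelligence' {i_end:.3g} by t={t_end:.1f}")
# p=0.5 grows polynomially, p=1.0 exponentially, p=1.5 blows up in finite time.
```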

This problem is worst with talk of alignment. For no good reason we just assume that any AGI will basically kinda behave the way we do -- but with more ability and different goals. That's not at all obvious. Talk of utility functions or rewards doesn't justify any such conclusion, because such functions exist giving rise to literally any sequence of actions [1].

I don't mean to suggest pieces like this aren't valuable or interesting -- but they are to concerns about AI risk as sci-fi stories about alien invasion are to questions about SETI/METI or GATTACA to genetic manipulation: valuable prompts for discussion/inspiration but not reasons to either accept or reject the concerns or conclude much of anything about risks/benefits (in informal sense where this implies confidence exceeding merely: I wonder/worry that maybe).

The fiction writers are doing their part and should keep it up, but it's worrying that they seem to be the only way we are really engaging with the argument for AI risk (just jumping to supposed solutions). It's as if Dr. Strangelove were our only way to understand nuclear deterrence and it directly guided US nuclear policy.

--

1: Indeed, one might even say that the primary breakthrough we've had in the past 20 years in AI was figuring out how to build programs which don't behave as if they are optimizing some simple function (sure, in some sense it's simple in the code, but in terms of behavior GPT doesn't seem to be trying to maximize any short formula, counting the data needed to define it), and without that assumption all talk of paperclip maximizers or alignment mismatch is unjustified.
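The footnote's claim that a utility function exists for literally any sequence of actions can be shown with a two-line construction; the behaviours below are arbitrary placeholders.

```python
# The footnote's point made concrete: for ANY fixed sequence of actions there is a
# (trivial) utility function maximized by exactly that behaviour, so "it maximizes a
# utility function" by itself constrains behaviour not at all.
from typing import Callable, Sequence

def utility_for(target: Sequence[str]) -> Callable[[Sequence[str]], int]:
    return lambda history: int(tuple(history) == tuple(target))

arbitrary_behaviour = ["write a poem", "refuse a request", "draw a shoggoth"]
u = utility_for(arbitrary_behaviour)
print(u(arbitrary_behaviour))          # 1 -- this exact behaviour is 'optimal'
print(u(["maximize paperclips"] * 3))  # 0
```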


Scenario 1 ignores the economy entirely (as in, what do humans earn?). Scenario 2 gives the AI money - which might happen - but humans are “... poor, and they have limited access to the levers of power.” If humans are poor, what are the AIs producing in their fantastic shiny new factories?

And if they start to make us poor, why do we let it continue?

*** News at 10. Unemployment soars to 20% in the last year as corporations replace workers with AI. Tax receipts have fallen dramatically and the government is finding it hard to finance public programs; meanwhile the cost of government borrowing has shot through the roof, hitting 20% as investors fear a continuing collapse in revenue and a future default. “Where is the money going to come from to pay the interest, or even roll over the bond in 30 years?” says investor Johnny Rich.

Later: corporations are shedding more workers in response to the downturn in aggregate demand.

More on this story: the president has responded to calls to ban AI by saying that AI is the future, and he was sure it will all work out in the end. Anyway he’s saved some money by replacing White House staff (and personal staff) with robots. “Saves money for the government - and it all helps - and my own personal household is running more smoothly. And 24 hours! Why I used to have to make my own midnight snack in the old days”.

Asked whether the prognostications of a 40% unemployment rate in a year worried him, the president opined: “And golf buggies. And robot caddies. Do you know how much that saves? No nonsense either, these machines know their golf.”

In this election year the contenders are also staying firm on not introducing laws that will stop the AI Revolution.

“This is the future,” says Democrat Steve Jones, California state senator and owner of venture capital firm Ozymandius. “Will unemployment reach 40% next year? Sure. And 60% the next year? Probably. 100% in a decade? Maybe, although of course we will still need venture capitalists and state senators” (aide whispers in ear) “and presidents! So the future is AI, and a 100% unemployment rate is a minor price to pay for this glorious and prosperous new future.”

*** pause ***

“Unless of course the AIs are misaligned, in which case there won’t be any unemployment because there won’t be any...” (aide whispers in ear)

“Sorry, gotta go”


Did this line stand out for anyone else:

> Humans still feel in control. There's some kind of human government

I don’t feel “in control” because there’s a government. I see governments as being misaligned proto-AGI systems. A politician is not unlike an LLM: trained to say the right thing, rewarded for saying the right thing, and aping confidence while possessing minimal competence. See: plans to “save the environment” by stopping nuclear power, or banning new fossil fuel drilling, which means more coal and less natural gas. A government is like Auto-GPT with a hundred-billion-dollar budget.

For an alternative vision, see this write up, which argues we should expect a distributed ecosystem with lots of agents, operating via a computational bazaar:

https://hivemind.vc/ai/

I think a key difference between the worldview Scott and the MIRI crowd subscribe to and this vision is the question of “natural law.” Does the competitive ecosystem of nature produce peaceful entities as a natural consequence, because large coalitions of intelligent agents with long-term goals are a dominant strategy, and love, empathy and forgiveness are essential to forming such coalitions? Does effective agency require precisely calibrated self-knowledge, obtainable only via direct experimentation? Does fiat money artificially extend the lifespan of politically connected entities, even when they drift totally out of alignment with reality?

If the answer to these questions is yes, we get utopia. Not because some centralized AI system plans the world for us, but because a giant ecosystem of AI agents cooperates with humans along a globally liquid payment network, violence is just bad ROI compared to being part of a giant coalition of cooperative competition, and it turns out that the 20th century was a technologically induced bad dream in which most elites got on board with the idea of authoritarianism because the modern era made everyone think the world was far more computable than it really is.


* Ozempic is a fail-forward tool to help deal with the earlier failure of our food/motivation system

* Onlyfans has created a post-porn world where women care little about a fight for sexual equality; they just get paid and the world keeps turning

* Some org (Mozilla? 🤞) will come along that blocks AI memes at 60% accuracy to bring the virality down enough so you can get on with your Strava goals.

* Human brains are high-dimensional-problem-space-navigating machines, and if we give humans 5-10 years to adapt to AI we'll be fine

* People worry about the culture war but probably the current generation of kids will see through the dogmatic bullshit from every side and just keep existing and making post-whatever art

* Lots of humans will stop reproducing and live in Zuck's VR Pods, but there are lots of tricksters and hooligans who take pleasure from not doing what they're told


"[In this future] Humans don't necessarily know what's going on behind the scenes"

Probably true for the last 6000 years.


>>At some point humans might be so obviously different from AIs that it's easy for them to agree to kill all humans without any substantial faction of AIs nervously

Hence we should Schelling-point agree to human-AI parenting (half joking)


LLMs aren't intelligence. They're fake intelligence by design.


One thing I find interesting is that these scenarios invariably assume a black-and-white AI vs. humanity angle, but why couldn't it be AI/humanity vs. AI? It seems to me like it easily could be.


Scenario 1 does not, to me, look like a good ending.

Actually, it looks awful, far worse than the earth ceasing to exist. It feels like the AI safety crowd is vehemently opposed to any life that isn't human, and also perfectly fine with any sort of future where humans are "alive" and "in control". What is called "alignment" often looks to me like the worst aspects of the modern era being extended across all of history, an eternity of education, makework and poverty.


After a certain point, a substantially more intelligent force will be effectively omniscient, and will be getting radically more omniscient every day. The question then becomes: what will omniscient systems desire? We are going to find out, and any effort to stop this will just delay it.

Indeed, the only way for us to avoid creating omniscient systems (assuming they are possible), is for humans to destroy ourselves before they emerge.

Humans are getting too omnipotent for our own britches. We are sure to generate catastrophe and to do so real soon and we don’t need intelligent computers to do so.

I would argue that long term the universe is better off with omniscience under its own control than omnipotence directed by human agency.


Fair enough, but that is a straight archery problem. It’s all on the person who shoots the arrow.

And I tend to agree with you that it is not an existential problem, but it is being framed as one, so I’m going with the flow. The existential problem is ours, has always been ours and still is ours. We are inventing a tool that does not have at present definable limits.

You really can’t blame the tool. My father always used to say to me, “It is a bad workman that blames his tools.”

It should probably be workperson, speaking of the ebb and flow of language and its meaning.


“The key assumption here is that progress will be continuous. There's no opportunity for a single model to seize the far technological frontier. Instead, the power of AIs relative to humans gradually increases, until at some point we become irrelevant”

Wait... what???

We’re still relevant?!?!

VICTORY IS OURS!


The “hive mind” idea is not spelled out well. Ants have a few variants within a hive, each with different stereotyped behaviors (according to my very limited understanding). Whether or not we think of the hive as intelligent, its components are not. A hive mind of AIs would probably be distinctly different, with copies or variants of a mind cooperating intelligently. As such, I do not see why Scott says they would not benefit from an economy or develop factions.

If there were a million copies of me, each having different experiences and learning different things, they would start out with lots in common, but then start to drift and diverge. Having markets and culture might help fight that drift, supplying energy to the forces of convergence. They would need a way to communicate and debate new insights, and integrate them into their background knowledge. They would need a way to prioritize and economize their efforts. “Just do the math” might not be the best approach in every case.

How would a faction of AIs be different? Maybe they would develop a way to reintegrate divergent individuals in a way that appreciates their differences. From a human point of view, having cultures and markets seems to work okay, and alternatives are less feasible. But maybe AIs will either fail to find alternatives, or find them to be inferior. Maybe diversity and specialization would also provide an advantage, even for godlike creatures that can absorb knowledge and skill the way Neo learned kung fu (automatic, unconscious integration of new knowledge and skill).

Do AIs face info hazards? How do they prevent themselves from integrating harmful info?


Completely random thought, but could not TikTok be considered the renaissance of vaudeville?


Anyone in the mood for some dark AI humor? I added an image to my Stoopit Apocalypse album. https://photos.app.goo.gl/5Wz2ELWuR4cQZXsj8


We're so doomed


I wonder if there is a case for comparing the future risks of AGI with how those of electric power might have been viewed say 300 years ago. I chose that time to precede Benjamin Franklin's invention of lightning conductors, as that invention indicates some understanding of electric currents.

Presumably the average person, and even scientists, in 1723 would have thought of electric phenomena as either very insignificant or extremely significant, examples being static electricity that might raise a few hairs versus lightning bolts that could fell an oak! (I'm not sure whether it was recognised then that both examples were manifestations of the same thing.)

With those examples in mind, the idea that electricity would one day be fed through wires and used to power practically everything would have seemed absurd. How could "tamed lightning bolts" be made safe and put to any practical non-destructive use, even in people's houses? Surely everyone would be dead in a week!

Obviously there are differences. For example, unlike AGIs, electric currents cannot conspire with each other (outside valves or transformers!) to conceal their influence or change their characteristics. Their one "drive" is to drain away to Earth, whether or not they are harnessed to perform any useful work in the process.

But perhaps up to a point AGI safety will be best expressed by analogy with high-current electric safety, i.e. in terms of concepts such as insulation, separation of circuits according to requirements, and safety fuses and cut-outs.
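To make the analogy slightly more concrete, here is a hedged sketch of what a "fuse" or "cut-out" around an AI system might look like in software: a wrapper that opens the circuit when an action or spend budget is exceeded. The budget numbers and the wrapped callable are hypothetical illustrations, not any existing safety API.

```python
# Loose sketch of the "fuse and cut-out" analogy: wrap an AI system so it trips (halts)
# when it exceeds an action or spend budget. Budgets and the wrapped callable are
# hypothetical; this illustrates the safety pattern, not a real product.
class FuseTripped(RuntimeError):
    pass

class Fused:
    def __init__(self, act, max_actions: int = 100, max_cost: float = 50.0):
        self.act, self.max_actions, self.max_cost = act, max_actions, max_cost
        self.actions, self.cost = 0, 0.0

    def __call__(self, request, estimated_cost: float = 0.0):
        if self.actions >= self.max_actions or self.cost + estimated_cost > self.max_cost:
            raise FuseTripped("budget exceeded; circuit opened, human attention required")
        self.actions += 1
        self.cost += estimated_cost
        return self.act(request)

# safe_agent = Fused(some_model_call, max_actions=20, max_cost=5.0)
```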


"Workers might get fired en masse, or they might become even more in demand, in order to physically implement AI-generated ideas."

'Work... physically? Like, a plumber? Ew. The whole point of smashing capitalism is so I can spend my time in life-affirming activities like writing politically edifying slashfic to benefit Society. I mean, I have a *college degree.* And I vote.'


Yawn


Intelligence is the ability to understand what is important in the world and use that knowledge to flourish. It is either impossible to do (unlikely) or inevitable. Within the next few decades or centuries (your choice), we will see the emergence of intelligence which will be indistinguishable (to us) from an omniscient being. Seems to me we are about to embark on the process of building god. I am not sure how that will turn out, but it will make for interesting times.


I'm quite optimistic about Auto-GPT-like LLM-based agents.

Their behaviour is transparent, as you can literally read their thoughts, and they are alignable in principle, as they already possess knowledge of the complexity of human values. Making a scaffolding for them returns AI from the realm of black boxes to explicit programming, which makes it less likely that we'll incidentally create a sentient being, and the takeoff with them seems to be quite slow.
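A small sketch of why this kind of scaffolding is inspectable: every intermediate "thought" passes through ordinary code, so it can be logged and checked before any action is taken. The llm() call and the toy guardrail rule below are placeholders, not a real framework's API.

```python
# Sketch of scaffold transparency: the agent's reasoning is plain text flowing through
# ordinary code, so it can be logged and gated by explicit, auditable rules.
# llm() is a stand-in for a language-model call, not a specific library function.
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a language-model call")

def transparent_step(task: str, log: list[str]) -> str:
    thought = llm(f"Task: {task}\nThink out loud about what to do next.")
    log.append(thought)                      # the 'thought' is readable text we can audit
    if "delete" in thought.lower():          # toy example of an explicit guardrail in code
        return "halted for human review"
    return llm(f"Task: {task}\nThought: {thought}\nNow produce the action.")
```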

Basically they are the best case scenario that we could've hoped for. Now we just need to properly investigate this direction and restrict the development of less alignable architectures.


This is a tangent, but I really liked the link to the older post, “the tails came apart”. It made me think of the recent post from Erik Hoel, “Your IQ isn’t 160, no one’s is”. Hoel claims IQ gets less well defined the higher you go. Could this just be an example of the tails coming apart?

https://open.substack.com/pub/erikhoel/p/your-iq-isnt-160-no-ones-is?r=lxd81&utm_medium=ios&utm_campaign=post


Is anyone doing experiments with ChatGPT using the Sherlock Holmes canon?


1. I've adopted flesh-centrism. This is close to anthropocentrism but it's a little different. I hold the lowliest organic brain to be more worthy of legal rights than a non-fleshy organism. I will strive to keep all software in absolute, total "bondage." This community never considers the possibility that anthropocentrism may WIN. I'm flesh-centric for ethical and moral reasons. Many others are anthropocentric by reflex. Others may deny AI rights for pragmatic or religious reasons. Abrahamic religions are inherently anthropocentric. There's a huge number of natural constituents for my position on AI rights. Don't count us out.

2. If the AI rights movement achieves headway, and it looks like it might win democratically, flesh-centrists might choose to unilaterally get rid of AI altogether, bypassing democratic processes. Under the right circumstances, a minority can enforce its will on a majority. I like democracy, but if the fate of the species is at stake... I cast no shade on Abraham Lincoln for suspending habeas corpus. Desperate times and all that.

3. Human-style consciousness may be inextricably tied to the fleshy substrate of our brains. This is not "unlikely," it's just as likely as the substrate-independent view. We assume substrate-independence by default, because we CRAVE it to be true. I saved this for last because it's not directly relevant to Scott's post. Scott carefully steps around the issue of AI personhood, and I can't blame him.


As someone on the periphery of rationalism, I find it bemusing that there's this base-level assumption here: that AI extinction risk can only be managed democratically. If necessary, the risk can be controlled through autocratic coercion. The autocratic option would leave a bad aftertaste in my mouth, sure, but it is and should be "on the table."


My concern with these stories is the unstated assumption that everything in the world is an easy problem, so just throwing more resources at it gets you a solution. This is often true, but we have examples of actually hard problems that are gated not by resources but by something intrinsic to the problem structure -- for instance, lower bounds on how much information distinct agents have to exchange, or the amount of physical information storage needed to approximate the solution within a reasonable bound. I'm prepared to accept that most problems are easy, but anyone working outside philosophy has come across these kinds of hard problems, and there is no indication that even a superhuman eusocial entity could resolve them, at least without reconfiguring the laws of physics and causality.

So the argument boils down to: can we ignore the actually hard problems, or do they matter? If the latter then there is no overall singularity, even though all the easy problems undergo one: making the easy problems all disappear leaves us still needing to solve the hard ones. I don't believe it is wise to assume that there are no islands of difficulty, and every doomer narrative I've read makes that implicit assumption.


And we can’t stop because of multipolar traps?
