712 Comments
User's avatar
Steve Sailer's avatar

What about violence?

From p. 82 of "Better Angels of Our Nature" by Steven Pinker:

"The journalist Steven Sailer recounts an exchange from early 20th-century England: “A hereditary member of the British House of Lords complained that Prime Minister Lloyd George had created new Lords solely because they were self-made millionaires who had only recently acquired large acreages. When asked, “How did your ancestor become a Lord?” he replied sternly, “With the battle-ax, sir, with the battle-ax!”"

The secret to being quoted in important books is poor sourcing: although that anecdote made a vivid impression upon me, I have no idea anymore where it’s from.

So, at the moment, I’m the best source!

Expand full comment
Malcolm Storey's avatar

EDIT: this was a Copilot hallucination. Please ignore.

The quote is from Sir Walter Scott's novel "Waverley," first published in 1814.

(This is why we have Copilot :) )

Expand full comment
Deiseach's avatar

This is the first time I've seen one of those things be useful, you're a thaumaturge, Malcolm!

Expand full comment
Anonymous Dude's avatar

Hey, the level titles went out with 2nd edition.

Expand full comment
EngineOfCreation's avatar

Unless you were being sarcastic: You fell for a hallucination.

https://www.gutenberg.org/cache/epub/5998/pg5998.txt

Expand full comment
Tom Hitchner's avatar

It didn't even sound like Waverley.

Expand full comment
Logan's avatar

And this is why I roll my eyes at any comment that starts with, "I asked ChatGPT/Claude/Gemini and this is what it had to say:," and then skip it. When reading a blog with a reasonably intelligent readership, these things are still _useless_ compared to the collective intelligence of other humans.

I understand from tech people that these can be very useful for automating basic code or bouncing ideas around for tricky problems, but for humanities, conversation, and actual worldly knowledge, they do nothing but spout cliches, flattery, and hallucinations.

Expand full comment
Kenny Easwaran's avatar

It’s very useful if it gives you a supposed source, particularly if it does so when Google fails. Google can then let you check whether the source actually says it, but something with a 10-30% accuracy rate on sourcing quotes that Google can’t find is actually really valuable, since we already have the technology to check these quotes.

Expand full comment
beowulf888's avatar

I suspect quotations are more challenging for LLMs because of the frequency of misattribution of quotes to famous people in the training data's source literature. Lincoln, Twain, and Churchill have slews of quotes misattributed to them.

Having said that, I find ChatGPT and CoPilot very useful for general writeups of subjects at an intro level. But I always ask the AI for links to references and double-check them. Chat and CoP are getting better — in the past, they both partially or completely hallucinated about 25% of the references. Nowadays, they're generally getting the references correct, but they frequently misstate or misunderstand the conclusions in the references. Caveat emptor.

Expand full comment
Jeff's avatar

Except a scan of Waverley reveals that it is not the source of the quote at all, simply the AI hallucinating. (Leaving as an exercise for the reader why we do indeed have Copilot)

Expand full comment
Malcolm Storey's avatar

Yes, I wondered if ought to check! Can't find it in Hansard either, but it's very fragmentary from that era.

Expand full comment
Malcolm Storey's avatar

Copilot is very useful to writing poems for my wife. Until I admitted the wording was Copilot's not mine :(

Expand full comment
Pelorus's avatar

Imagine getting Cerano de Bergerac'd by a toaster.

Expand full comment
Malcolm Storey's avatar

I know. They ask for honesty then complain when they get it!

Expand full comment
Paul Botts's avatar

Rookie mistake, dude.

Expand full comment
gwern's avatar

A good example of how AI slop and related problems are increasingly polluting high-quality human fora like ACX and LW2. Some people are still bothering to check or apply some critical thinking to LLM outputs... but for how long, as others keep freeriding on their efforts, taking the benefits of the implicit trust & credibility extended to human commenters while shirking effort and epistemic standards?

(Also, note the lack of critical thinking here, in addition to the laziness in not checking in a trivially available text. Why would a 20th century nonfiction anecdote about Lloyd George & newly-elevated rich peers to the British parliament be from an early 1800s novel by a novelist famous for romantic mythologizing writing about Scotland or medieval England? At the very least, this should make you wonder - even if it was true, and Sailer had badly distorted the anecdote, how could the LLM recognize the 'real' anecdote in a Scottish fantasy a century earlier?)

Expand full comment
Deiseach's avatar

"Why would a 20th century nonfiction anecdote about Lloyd George & newly-elevated rich peers to the British parliament be from an early 1800s novel by a novelist famous for romantic mythologizing writing about Scotland or medieval England?"

Because in so many cases, quotes get mangled, reformatted, and attributed to sixteen different "So-and-so once said" as they get passed along.

So it's not out of all possible bounds that a quote from a novel got Chinese Whispers treatment of ending up being attributed to a hereditary peer under Lloyd George's government.

I'm disappointed that the attribution turns out to be false, so we still don't know where the original came from or who in fact said it, if it was said at all and not invented by somebody on the Internet.

Expand full comment
gwern's avatar

> Because in so many cases, quotes get mangled, reformatted, and attributed to sixteen different

Indeed, hence my last sentence... It would be impressive enough for a LLM to be able to recount the source if the story were unmangled and recounted correctly; but to do so while it would have to be almost totally transformed? That's Radio Yerevan level inference.

I didn't bring this up, because this is unreasonable for an ordinary commenter to know this, and I was focusing on what is reasonable for an ordinary commenter to do or know. But the changes here are not just extremely implausible in their own right, they are also fairly implausible as a corrupted transformation of a plausible Walter Scott original: because they are too extensive, the wrong kinds, over a relatively short period of time (~1 century) for a writing-heavy culture, and against the selective pressure of a widely-read original correcting mutations. After tracking down hundreds of these sorts of things over the years, while quotes and stories do get mangled and transformed, if there was some version which was actually about a Scottish parliament admitting some new landlords (not businessmen buying estates) in the mid-1700s (https://en.wikipedia.org/wiki/Waverley_(novel)#Plot) or earlier, to make it fit, this would be a pretty unusual case, because the putative original would have to be transformed in both time, place, *and* rationale, while the transforms are usually more like a single major corruption combined with simplification and dropping of details which undermine the moral of the story. So if I was investigating this quote, I would expect one of the time, place, or personage to be false and the exact battle-axe line probably a memetically-fitter version of a much clunkier Walter Scottian-style original; but for all 3 of them to be likely false, as implied by a _Waverly_ attribution, I would be highly skeptical.

It is not impossible, because wacky things can happen in the chain of transmission (I still remember the quote I was increasingly sure was completely fake until I manged to trace it to the splicing of 3 different quotes from 3 different books & 2 authors, which I still cannot explain), but I would definitely be looking very hard at anyone, much less a LLM, claiming that that Churchill story obviously came from _Waverly_. Because that is just not a plausible origin for an uncorrupted 'original'. It's too many, too large, mutations, over too short a time, diverging from a too-well known source.

Expand full comment
Malcolm Storey's avatar

Sorry guys! I didn't think it was sufficiently important to check. Usually I do.

And I did say it was from Copilot so you were free to make your own assessment.

As for it being mangled: absolutely and that's what LLM's are good at figuring out (sometimes, anyway!)

(By the way, no criticism, but I have to point out that Lloyd George has now morphed into Churchill!)

If you asked a knowledgable friend they might say "sounds a bit like Walter Scott's 'Waverley'", but Copilot arrogantly stated it as a fact. When I went back to check it admitted it didn't have access to the text due to copyright (despite it being well out of copyright with the full text on the web).

Expand full comment
beowulf888's avatar

This is a perfect example of how quote distortion works. You've re-misattributed (mis-misattributed? meta-misattributed?) the OP's battle-axe quote to Churchill when the OP thought it was Lloyd George. The next time an LLM slurps up the ACX conversations, Churchill may come up as the quotee. LOL!

Expand full comment
Deiseach's avatar

Don't worry about violence, Steve. The new AI Social Worker Response Drones (we'll have defunded the police successfully come the Singularity, you know) will be available 24/7 to pacify any undesirable anti-social behaviour with counselling, appropriate medication if required, and death rays.

Sorry, did I say 'death rays'? I meant of course MAID Mobile Units!

Expand full comment
Steven Postrel's avatar

Oingo Boingo had this all figured out a while ago:

https://www.youtube.com/watch?v=Qo30hYkJWzc

Expand full comment
Anonymous Dude's avatar

Didn't the cannons used to be marked 'the last argument of kings'?

I totally agree--a lot of the way this plays out in the real world tends to be ignored. These 'decentralized networks' still tend to have heavy, bulky, expensive points of failure. What if some government or rebel group starts blowing up datacenters?

Expand full comment
Graham's avatar

Yes, in France under the Ancien Régime: ultima ratio regum. I saw one with this inscription in a museum, I think probably the Deutsches Historisches Museum in Berlin.

Expand full comment
MarsDragon's avatar

There are a bunch floating around Europe, I assume because of the Napoleonic Wars. Obviously the Musée de l'Armée in Paris has entire rows of old cannon with that motto, but I also saw a few at the Kriegsmuseum in Vienna (along with numerous other cannon with mottos, heraldic images, some Arabic calligraphy courtesy of the Turks...).

I would be in favor of putting coats of arms and cool mottos on our war materiel again, not gonna lie. Surely our machine civilization can manage this much?

Expand full comment
Paul Goodman's avatar

Presumably the rebel group gets wiped out by a swarm of drones. If it's a government we're back to the question Scott raises in the post of whether the AIs (which control the drones) answer to the government ahead of the companies that made them.

Expand full comment
Loarre's avatar

I wonder if the distant origin of the story is an anecdote from the Quo warranto hearings conducted in England under Edward I. Quo warranto was a legal process that inquired "by what right" [literally, "warrant"] a given claim (to a specific right, as to judge tenants in a manorial court, or the right to collect some form or forms of rent on a piece of land, etc.) was held. Typically, it asked claimants to produce a document by which the claim had been granted (typically, again, by the king). Supposedly the Earl of Warenne responded by marching into the court and slamming a rusty sword down on the desk where the clerks were writing, while exclaiming something like, "I hold by THIS right"! According to the story, the sword had been wielded by the earl's ancestor, who had ridden with William the Conqueror. The tale is recounted and discussed in ch. 1 of Michael Clanchy's excellent From Memory to Written Record. England 1066-1307.

Expand full comment
[insert here] delenda est's avatar

Quotes and provenance thereof aside, this is indeed the elephant in the room. I would like Scott to write a whole new post addressing violence and the post-singularity; or even post GPT 5, response to it.

Expand full comment
Corbin Preston's avatar

For what it’s worth, Steve, your writing’s had an enormous influence on my understanding of the world and its inhabitants for which I’m eternally grateful.

Many decades from now, I expect you’ll be vindicated in the scientific landscape. Until then, know that there are voices out there less brave than yours who appreciate your academic martyrdom. You’re on the right side of history, I believe that earnestly.

Expand full comment
Ponti Min's avatar

"None of these things happen, and non-plutocrats are stuck on Earth while the plutocrats colonize the galaxy. It doesn’t make sense for 3,000 people to colonize the galaxy on their own, so they will need some source of colonists. If they don’t use poor people, then whom?"

-- maybe there could be some sort of Bobiverse-like scenario where the plutocrats create many copies of their own uploaded minds.

Expand full comment
JonF311's avatar

We aren't going to "colonize the galaxy" barring some inconceivable new physics discoveries. The velocity of light is going to remain the universe's speed limit and that will keep us in our own solar system indefinitely. I'd give more likely odds to our learning how to travel between Earths in alternate timelines and colonizing those where humans did not evolve (but are still friendly to our type of life).

Expand full comment
Ponti Min's avatar

If stars are on average 10 LY apart, speed is 0.1 c and it takes 1000 years after arriving at a star to build new probes, then it would take about 10 million years to colonise the galaxy, which is bugger all time in cosmic terms.

Expand full comment
Matthias Görgens's avatar

Yes. You and me might not colonise the galaxy, but either humanity or whatever replaces humanity will, if they continue to exist at all.

Expand full comment
Ponti Min's avatar

Very likely. Life is grabby.

Expand full comment
Cjw's avatar

The singularity isn't magic, you would still require a certain amount of energy to accelerate an object of certain mass (and decelerate it at the other end of the trip) which is enormous. You'd have to be converting and storing the lifetime energy output of several stars, I doubt any technological breakthrough is going to enable this. And at those velocities, even a tiny little space pebble would demolish you. Neither human bodies nor physical databanks are going to travel between the stars.

If humans are replaced by machines, the machines may well end up slinging around radio messages across the galaxy until heat death arrives. Maybe they manage to replicate themselves on some other planet by showing a scientist there how to create a copy and then kill all the original inhabitants of that planet too. I guess that's colonization.

Expand full comment
Donald's avatar

10% light speed, means that the kinetic energy is about the same as the fusion energy of hydrogen. That means, for every ton of space ship, you need a ton of hydrogen to fuse.

Uranium isn't quite as energy dense, but isn't far off.

This means that even if you sent the whole earth, you would still be using a tiny fraction of the suns total output.

A gigawatt nuclear reactor running for 30 years gives enough energy to get a 1 ton spacecraft up to 10% light speed.

How heavy does an interstellar spacecraft need to be? Can we send a single nanobot and get it to build a radio receiver when it lands?

Expand full comment
quiet_NaN's avatar

This. Cjw is wrong by more than sixteen orders of magnitude. WP on the Sun contains the sentence:

> Every second, the Sun's core fuses about 600 billion kilograms (kg) of hydrogen into helium and converts 4 billion kg of matter into energy.

Rule of thumb, converting 1kg of matter will provide the energy to accelerate another kilogram to a pretty decent relativistic velocity, certainly more than 0.1c. So our local modest Sol could power one 4 million ton spaceship per second.

Heck, with the energy harvested in 50 million years -- still much shorted than the lifetime of our sun -- we could send Earth itself as a relativistic spaceship.

The problem is not energy, and never has been energy. The problem is the tyranny of the rocket equation. Even fusion power will give you limited exhaust velocities well below c. That being said, travelling at 0.01c will not take that much longer, either.

Expand full comment
Cjw's avatar

I appreciate the correction, I had mixed that up in my head with a discussion of a different interstellar travel method. Still it seems that couldn't be right for ships going to a fixed destination, because a 2nd ton of fuel would have to go with you to decelerate at the other end, and that ton of fuel would also require a ton of fuel to accelerate at the outset, adjusting for spent fuel reducing the mass over time. But I accept that it's better, obviously you can go a bit slower and make it work out, and 10000 years vs 20000 years may not matter (although maybe it does? I don't know what sorts of material components would be required and there'd have to be oxygen so the internal components wouldn't be in a vacuum.)

Supposing you could manage this, it's still a non-trivial amount of energy, you would need a pretty darn good idea where you were going to make it worthwhile. We certainly haven't gotten any information yet about other habitable planets that would be good enough to justify trying such a trip. And I can't really see much point in it, too far away for trade, most of your descendants will live and die on a ship, when you get there whoop-de-doo you're on another planet, so what? About the only justification I could imagine is to get *away* from something on Earth, say to escape the ASI if you could somehow ensure no copies of it snuck aboard, and it wouldn't really matter in that case whether there was a destination.

If all you had to do was send out lightweight Voyagers that didn't need to decelerate, that seems plausible. Good enough perhaps for the ASI to replicate itself after taking over Earth, but not gonna do much for us.

Expand full comment
Matthias Görgens's avatar

You can very well colonise at 0.1% the speed of light. You can still get the whole galaxy in a cosmic blip.

Expand full comment
Simone's avatar

I think if it was about machines there would be ways. Like create miniaturized swarms of tiny robots with all the requisite data to start up a local building centre and sling those at high speeds, accepting that a X% will not make it - as long as enough do.

But humans is another story. We have very specific mass, temperature, maximum acceleration etc. requirements to be transported somewhere and reach it alive. And those requirements, combined with the rules of relativity, are forbidding. The engineering of it is a nightmare and it's very well possible that human interstellar travel might simply be not doable.

Expand full comment
Ch Hi's avatar

I think 0.1c is a bit fast. Also I've got an opinion that folks who have lived for a thousand years in a generation ship will want to continue living in the generation ship, and will just build multiple new ones when they reach a rich source of materials.

Expand full comment
EngineOfCreation's avatar

After such a journey, the prospect of reaching a planet to live on will have taken on a deeply religious significance. If the planet they reached turns out to be inviting enough in practical terms, there would be more than enough people wanting to stay to give them a chance as a colony.

Expand full comment
Matthias Görgens's avatar

Some will probably stay and some will move on.

Expand full comment
Doctor Mist's avatar

I’d still call that colonizing the galaxy.

Expand full comment
Ch Hi's avatar

Well, so would I. But those civilizations still planet-bound probably wouldn't notice.

Expand full comment
JonF311's avatar

You're assuming that every star has planets that can support life and provide raw materials for further colonization. That is not the case. Even at the more optimistic levels of predictions most stars will not offer a suitable environment. And you're also forgetting something else: entropy. Nothing is immortal, certainly not any machine we humans create or ever will. Nor us humans either.

The answer to Fermi's purported paradox really is that simple: the universe is too vast, the distances too great and Time is too brutal a tyrant over all physical matter. Whatever wonders we achieve they will be achieved locally only

Though do note the one little proviso I did allow-- there may many Earths and perhaps we can find others suitable for ourselves across the vast otherness of Elsewhere.

Expand full comment
Ponti Min's avatar

“You're assuming that every star has planets that can support life” — no I’m not. I’m assuming raw materials, but these could be in the form of asteroids or lifeless planets.

When we colonise the universe, if we do, it’ll very likely be as uploaded minds, since squishy humans are inefficient in space.

“And you're also forgetting something else: entropy. Nothing is immortal, certainly not any machine we humans create or ever will. Nor us humans either.“ — yes the posthumans that colonise the universe will change over time. Of course they will.

Expand full comment
Simone's avatar

This is not accurate. Life is not a real-valued field that can diffuse even through potential barriers that cause it to attenuate exponentially and then simply replenish itself whenever it encounters favourable conditions. If the barrier is harsh enough, it is entirely possible for there to be a significant non-zero (even a ~1) probability that no lifeform will ever cross it, no matter how hard they try, before the Sun goes nova and we all die anyway.

It is absolutely an option that colonizing the galaxy is fundamentally impossible for all practical purposes. Or that it is impossible for us because we were born in an area of the galaxy and in a solar system where we simply don't have the necessary jumping points and resources to do so. Maybe somewhere else near the nucleus of the Milky Way there is a civilization who has a second habitable but empty planet in their own star system and another star only 0.2 LY away and they are much better positioned to learn the basics of space colonization than us, whose closest options in the Goldilocks zone are a barren satellite, a barely less barren planet, and the worst hellhole you could possibly imagine.

Expand full comment
4Denthusiast's avatar

What makes the physical impossibility of reaching alternate timelines more likely to be proven false by some new discovery than the impossibility of faster than light travel? If the other worlds are based on the many-world interpretation, we're forbidden from reaching them by the linearity of the Hamiltonian which, though perhaps more obscure than the speed of light limit, is much more fundamental. If it's the version of multiverses from eternal inflation or the string theory landscape or something, those would literally just be other locations, far farther away than the rest of the galaxy. If it's some sort of time travel thing, time travel being possible implies FTL is.

Expand full comment
JonF311's avatar

Elsewhere is not governed the same physical laws as four dimensional universes like ours. It isn't even governed by the same geometry, possibly not by the scale arithmetic.

Expand full comment
4Denthusiast's avatar

Even if such places exist, you would still need to get there first before you can take advantage of whatever weird physics you may find there.

Expand full comment
Mo Nastri's avatar

You may be interested in Eternity in six hours: intergalactic spreading of

intelligent life and sharpening the Fermi paradox by Stuart Armstrong and Anders Sandberg: https://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

Expand full comment
Stuart Armstrong's avatar

Was going to post this; thanks for doing it for me :-)

Expand full comment
Mo Nastri's avatar

I'm a bit annoyed that your paper with Sandberg was published over a decade ago and most discourse around this topic is still at this level. The main knock on the Fermi paradox for me is Sandberg et al's dissolution of it via incorporating parameter uncertainty, but it felt really nice for me to know from reading your paper that known physics & engineering don't present an insurmountable obstacle to us spreading across the galaxies in due course. Anyhow, thanks for writing it! :)

Expand full comment
Roman's avatar

Why not? We could send up frozen embryos and artificial wombs. That would sure take a couple millenia, but who cares?

Expand full comment
Simon Break's avatar

If you can solve death, galactic colonization is pretty easy. You just sleep through the journey. This is why I'm always annoyed by the fixation on FTL travel, it's probably impossible & not remotely necessary.

Expand full comment
JonF311's avatar

Solving death is as nonsensical as solving entropy (of which death is really just a biological manifestation).

Expand full comment
Kade U's avatar

I think you're being pedantic here and ignoring the substantive point. 'Death' of course comes for everyone, but there's an enormous practical difference between that and the reality of senescence, which is not required by physics. A lifespan of 100,000 years is more than sufficient to achieve whatever colonization goals a person wants, and really there's no reason to say that 100,000 couldn't become 1,000,000.

Entropy can be locally reduced as long as it is increased in the global system -- i.e., there is no physical contradiction with extending life using energy/matter inputs from the broader world.

Expand full comment
Leninsky Komsomol's avatar

Beating the current 100 meter dash world record is like breaking through the lightspeed barrier.

Expand full comment
Vakus Drake's avatar

The entire model of "space colonization" you're imagining here seems to be some sort of unrealistic Star Trek style scenario based heavily on historical naval exploration. What you need to realize is that this is exactly backwards, it's actually airless rocks that are the best candidate for colonization precisely because they're easy for space industry to get stuff to and from.

In reality at a certain tech level nearly every icy rock is a good candidate for colonization because all you need is energy (in the form of fission and/or fusion fuel) and raw materials. So in reality we don't need to build anything like generation ships to colonize other stars: Rather civilization could sprawl outwards colonizing countless icy bodies until this outwards sprawl hits another stars Oort cloud and then work inwards gradually from there.

Granted there are other faster means of colonization. However, even in a very pessimistic scenario you can still colonize the galaxy, just not at relativistic speed.

Expand full comment
Deiseach's avatar

Why poor people, unless you want the pleasure of being a Russian aristocrat with a raft of serfs of your very own? Much more efficient and sensible to send out colony ships full of robots which can self-assemble on site, do the physical work of setting up the base, get the local version of an AI up and running, and then the pleasure domes will be all operating awaiting the arrival of the plutocrats.

Think Asimov's Solarians, from his Robot series:

"Inhabited by Spacer descendants, Solaria is the fiftieth and last Spacer World settled in the first wave of interstellar settlement. It was occupied from approximately 4627 AD by inhabitants of the neighboring world Nexon, originally for summer homes. ...The Solarians specialized in the construction of robots, which they exported to the other Spacer Worlds. Solarian robots were noted for their variety and excellence.

...Originally, there were about 20,000 people living in vast estates individually or as married couples. There were thousands of robots for every Solarian. Almost all of the work and manufacturing was conducted by robots. The population was kept stable through strict birth and immigration controls. In the era of Robots and Empire, no more than five thousand Solarians were known to remain. Twenty thousand years later, the population was twelve hundred, with just one human per estate."

Expand full comment
Matthias Görgens's avatar

I'm not sure why the plutocrats would put up with strict birth controls?

Expand full comment
Nancy Lebovitz's avatar

Part of being a Solarian is not wanting to be around people, so rich people in such a culture don't want a lot of children.

Expand full comment
Erica Rall's avatar

In the Robot novels, there were a few reasons. One was that Asimov (and therefore also his in-universe politicians and economic planners) believed in Malthusian population economics, that per capita access to natural resources was the limiting factor for prosperity. So before the Settler era (starting towards the end of the Robot novels), everyone practiced population control to prevent standard of living from declining.

For the Solarians in particular, there was an extreme cultural aversion to being in the physical presence of another human, with even marital relations seen as an unpleasant duty. All social contact took place through holographic zoom calls. The backstory was that they'd spread out so much that in-person contact became first unfamiliar and uncomfortable and later a violation of privacy taboos. And also there was a self-selection effect where Solarians who didn't like the Solarians lifestyle fucked off to Aurora or another Spacer world where people did actually see one another in person from time to time. So the Solarians that remained took extreme measures to make sure that their planet remained uncrowded enough to sustain their lifestyle.

The Spacers also have a backstory of practicing Nazi-esque eugenics that was only just starting to mellow when the novels begin. They had strict rules against immigrants who didn't match their ideal (back when they still took immigrants from Earth at all) and had eugenics-based licensing for having kids, complete with mandatory killing of "defective" children. This is mentioned in passing in Caves of Steel and Robots of Dawn, and is a central theme of the novella Mother Earth.

Expand full comment
Deiseach's avatar

"The backstory was that they'd spread out so much that in-person contact became first unfamiliar and uncomfortable and later a violation of privacy taboos."

Yes, and if I'm remembering correctly, all the child-raising was done by robots, so Solarians grew up unaccustomed to physical contact with other humans. To the extent that one of them nearly gets physically ill when in a room with the detective from Earth who talks about meeting "face-to-face", which reminds our Solarian that he's actually breathing the same air as another human in the room with him, meaning that some of what he's inhaling has been exhaled by the other person. This is about as disgusting to him as considering licking up someone's saliva would be to us.

Expand full comment
Erica Rall's avatar

I re-read the books recently, and that's a good highlights reel. The robot creche had a few plot points in it, not least the Shenandoah around the First and Second Laws required to get robot caretaker to discipline their charge.

Solarians reactions to Bailey insisting on in-person conversations ranged from "kinda into it, in an ashamed kinky way" to "prepared to literally commit suicide if he thinks there's another human about to enter his house", with the spit-licking reaction being pretty close to the median.

Expand full comment
anomie's avatar

> Solarians reactions to Bailey insisting on in-person conversations ranged from "kinda into it, in an ashamed kinky way" to "prepared to literally commit suicide if he thinks there's another human about to enter his house"

literally me fr

Expand full comment
Cjw's avatar

That is in “Foundation and Earth” as well when they’re searching the Spacer planets to find earth. The Solarians at that point have lobes to telepathically control energy and compete between their estates for stuff like having the best apples for prestige.

I thought of this part reading the article too.

Expand full comment
Deiseach's avatar

We already have people penning plaints about fertility decline and the boys over at The Motte discussing ways to reverse this (mainly by treating women like cattle, and I'm a freakin' small-t traditional Catholic with socially conservative views, so imagine how hardline dumb their suggestions have to be, to alienate *me*).

Very rich people won't necessarily want or need to have sixteen kids (Elon is an outlier here, bless the guy). If they can just interact with their peers on their luxury estates, insulated from the mass of the grubby proles who follow their every moment online, with super-efficient AI to run everything for them and all their whims catered to by luxury better-than-human servant robots, why would they fill up their luxury world with more people than whatever is considered the optimum? If they have to split off parts of their estate for their children to inherit, then by the social status games that are being put forward as replacement for human striving, the one that loses the most land/holdings divvying it up between four kids, as against the person who has only one or even no children, loses out. And losing out is going to sting even harder, in a society where ultra-mega-super conspicuous possession of resources is what marks you out as the crème de la crème.

Maybe if you can send your sprogs off-world to settle their *own* luxury gated community world, then the plutocrats will have kids. But exclusivity is the name of the game, and having tens or hundreds of millions crowding up your summer homes planet isn't on the to-do list.

Remember Martha's Vineyard and the immigrants? How many of the summer residents happily opened up their vacant holiday homes there?

Expand full comment
Mary Catelli's avatar

Give it time.

If there is ONE billionaire who wants or needs children, and his children take after him in part, he's going to be the future.

Expand full comment
Cjw's avatar

It’s really easy to have kids as a rich person or a poor person. The middle class is who gets screwed by the costs, sets you back at least one subclass, I’m a lawyer and would’ve been living like a first year school teacher or entry level office worker if I’d had children. The poor are already poor and get subsidized and have little to do anyhow, and the rich can either indulge it or hire nannies, but the middle class has to actually work and then be in child duty all night and pay the full balance due.

Expand full comment
Arrk Mindmaster's avatar

I always wanted a serf, but can't find any at any Walmart, whether Hardware, Gardening, or clothing departments.

Expand full comment
Civilis's avatar

Have you tried Home Despot?

In the old days, I used to get mine at Blood Bath and Beyond, but that's no longer a going concern.

Expand full comment
Ryan W.'s avatar

They typically keep them behind the counters. You have to ask for them by name.

Expand full comment
Deiseach's avatar

I'm sure that you've heard the common complaint that you just can't get the help nowadays. Particularly when nearly all purchasing has moved to online shopping, and you can't trust those dropshippers to actually have what you want in stock and deliver it as ordered.

You need to be careful that they aren't confused about you asking for one of these:

https://en.wikipedia.org/wiki/Bennett_Cerf

Expand full comment
proyas's avatar

"Much more efficient and sensible to send out colony ships full of robots which can self-assemble on site, do the physical work of setting up the base, get the local version of an AI up and running, and then the pleasure domes will be all operating awaiting the arrival of the plutocrats."

Agreed. Likewise, the first manned Mars mission will be preceded by an unmanned mission that drops off supplies, structures, and robots at the landing site.

Expand full comment
JonF311's avatar

Re: Much more efficient and sensible to send out colony ships full of robots which can self-assemble on site

They will break down, as will such a shop, long before it gets anywhere useful. Again: Second Law of Thermodynamics. And such a ship would be a closed system, unlike the Earth which is constantly receiving energy from the sun. Apart from chaotic radiation (much more likely to do harm than good) and swarms upon swarms of neutrinos which seldom deign to interact with ordinary matter, there's nothing out there for vast distances.

Expand full comment
Sean Traven's avatar

It is already true that there are many actual human beings who are better poets, philosophers, singers, etc. than anyone I actually know, but I am still interested in their poetry, thoughts, and songs (voices)--that is, in the work of people I know. The whole idea that "better" is the only standard of interest is actually rather odd. It's not even clear what it means. Better to whom? For what purpose? For the purpose of knowing what someone else thinks or wants to express?

I don't think any of this will ever happen, but if it does, then life will not lack interest merely because a machine can produce what someone thinks is a "better" song.

Expand full comment
Anonymous Dude's avatar

I think there probably is some sort of objective artistic quality in the sense some people are more attractive than others (human preferences are correlated), though damned if I can tell you what it is.

Expand full comment
Mr. Doolittle's avatar

The older I get, and perhaps the more sophisticated the attempts at maximizing perfection, the more I value amateur attempts over professional. Watching some professional sports is actually really boring. The absolute best at each sport have so perfected their techniques that the games all go a certain way, forced by the meta of the sport.

And I don't think I'm the only one feeling this. College football is quite popular. Even high school football has a pretty big following, beyond the families of the players. I would definitely rather play a sport myself or watch a family member play than watch a professional game.

I also value my children's drawings and musical performances better than professional. This one is different than sports, because I do value professional more than other people's amateur attempts, but still less than people close to me.

Expand full comment
Nancy Lebovitz's avatar

Maybe we need more difficult sports so that even the best athletes aren't boringly perfect at them.

One possibility is the sport including a random factor at short notice, like a cooking show.

Expand full comment
Doug S.'s avatar

I've joked about a meta-Olympic event: on the day of the competition, one event is chosen at random from all Olympic events, and the previously chosen athletes them have to compete against each other in that event with very little time to prepare. What kind of person do you send to compete in something like that?

Expand full comment
None of the Above's avatar

My prediction is that for anything that looks like a sport, almost any professional athlete will have a massive advantage over almost anyone else.

You invent some completely new sport that isn't very similar to anything anyone has ever played before. You expose 100 people to it, 99 normal people in reasonable health and fitness, one guy who warms the bench of a not-very-good NBA team. I think in almost all cases, the NBA benchwarmer will be the best player of the 100 within a few days of everyone being exposed to the game, and probably the best player by a big margin. Because while there are specific skills you need to be an NBA player (shooting, setting screens, boxing out, handling the ball, etc.), a lot of what you need is general atheticism--fast reflexes, strength, endurance, good eye-hand coordination, good physical intuitions for where a ball/puck/frisbee/birdie will go, etc. And the NBA benchwarmer will have all of those in spades, or he would not even be on the bench of that team.

Expand full comment
Koken's avatar

That still leaves open a lot of questions about what type of athlete to send, though. A weightlifter and a marathon runner have very different builds; what does the optimal 'jack-of-all-trades' athlete look like?

Expand full comment
Paul Botts's avatar

I literally last evening saw a commercial for a cooking show in which the chefs are each dealt a random card saying something like "no salt" or "you can't taste-test it as you're making it".

https://www.foodnetwork.com/shows/wildcard-kitchen

Expand full comment
Skull's avatar

Do you like sports? I just figure the people who should decide what sports should be like are people who actually like sports

Expand full comment
B Civil's avatar

Rollerball?

Expand full comment
Koken's avatar

In competitive Age of Empires 2 (bear with me) there is a system for players to pick the civilisations they will play but tournaments have started adding random bans at the start of each match, so players cannot fully plan in advance and audiences will see different civilisations played. The maps are also procedurally generated but not identical each time (unlike those in some other competitively played Real Time Strategy games such as Starcraft) so that players have to scout and adapt in each game.

Expand full comment
Mo Nastri's avatar

I still find interesting how the meta changes over time in these games (whether NBA or NFL), and how nobody really plays the exact same way (due to things like players getting injured or player uniqueness), but I do agree that there's much higher variance at the amateur levels. For chess I'd agree with you, and consider 960 / Fischer random chess and Armageddon-style tournament formats a breath of fresh air.

Expand full comment
First Last's avatar

That's why Bronze League Heroes in Starcraft 2 is a banger series. No meta, only two people struggling hard with the controls.

Expand full comment
Paul Botts's avatar

The game I am currently hooked on, Beyond All Reason, desperately needs this. Enforcement of its multiplayer meta has become so childish and unpleasant that it risks strangling an excellent title at birth by stalling the recruitment of new players. (The game's subreddit is currently consumed by argument over whether or not that has already irrevocably happened.)

The developers say that BAR (as it's called) will soon get a matchmaking function of some kind, such that players can quickly/easily find matches at their individual or team player-rating level. Hasn't arrived yet so we dunno whether or how well that will work. If it does work then one of the first effects hopefully will be to allow new/learning players to play and learn in peace vs each other, thereby encouraging them to stick with it despite the game's steep competitive learning curve. The developers have an agreement for release of the game on Steam and it may be that they won't go through with that without the matchmaking feature.

If the matchmaking could in some way undermine BAR's rigid meta that would also be great in my view. Random-generated game maps would have some of that effect; but that feature would be a serious programming lift for a game of BAR's complexity so I don't expect it.

Expand full comment
Koken's avatar

Watching good games of that I often have a strong feeling that it is the real game, and what pros play is some stunted offshoot. Not only is the range of viable strategies narrower at elite level, but players don't get much opportunity to problem-solve in-game. You mostly need to be prepping the counter before an attack hits or it's too late, so all the theorycrafting and experimentation has to take place outside the competitive games.

Expand full comment
Arrk Mindmaster's avatar

I can't define it, but I know it when I see it.

Expand full comment
Caledfwlch's avatar

I am generally not interested in bad media. The way subpar media still has its place is by occupying niches - there might be a lot of great poets, but only this one writes about this specific topic that really speaks to me. There might be a lot of great movies, but only this one touched the idea that I think is really fun. Here, occupying the niche compensates for lesser quality.

In the age of AI, niches would evaporate. Regardless of what niche it is, an AI would do a better job than a human would, and they would be able to do it on the fly. You have this really weird set of sci-fi preferences? Just feed them into an AI and receive your perfect book. Or not perfect, but better than anything that a human could've made.

The only fleeing hope for the artists is that there will be a significant movement for non-AI media, and ways to reliably detect AI generated stuff.

Expand full comment
JamesLeng's avatar

I think there's an at least vaguely plausible scenario where AI never really gets "better than the best" compared to humans, just better at applying "conventional methods" quickly and thoroughly. Humans (or posthumans with enough essential similarity for us to see as fit heirs https://www.schlockmercenary.com/2015-02-01 ) still get to find and occupy economic niches where the conventional method doesn't work so well. The kind of people who want to learn whatever skill will make them famous - baseball players and rock stars of https://wondermark.com/c/609/ - can do so by debugging those conventional methods within some field where enough people are dissatisfied by current AI performance. Leaderboard for sewage treatment plant maintenance (or whatever) then becomes the standard subsequent AIs imitate.

For niche creative fields, similar deal. Some people will be happy enough to find that what they want already exists, others get frustrated that so far it's only been loosely approximated, and will push on to finish the job, even if only for their own eccentric satisfaction rather than some rockstar-level bug bounty.

> Or not perfect, but better than anything that a human could've made.

Standing on the shoulders of giants doesn't mean there's nowhere left to climb. Seeing that something can be done imperfectly, some folks get all the more inspired to find and correct remaining flaws. http://www.thecodelesscode.com/case/107

http://www.thecodelesscode.com/case/122

Expand full comment
Ryan W.'s avatar

I don't want to read some perfect book that nobody else has read. I want to read a good book that other people have read, so that I have that common language with them. Books are cultural Schelling points. At least for some.

Expand full comment
Kenny Easwaran's avatar

This was something my partner and I quickly realized a year ago when he had ChatGPT write a story for him that he really liked, but then quickly realized wasn’t going to get distribution.

It’s really hard to know how distribution of these things will work - will it all just be random TikTok users who generate a book and then get their followers to read it and now it’s a bestseller?

Expand full comment
anomie's avatar

> You have this really weird set of sci-fi preferences? Just feed them into an AI

You don't even need to do that. The AI already knows what you want better than you do. Hell, that's already the case today; even current-day algorithms are quite effective as long as you train them properly.

Expand full comment
Timothy M.'s avatar

This feels like a wild exaggeration based on my interactions with every content algorithm ever.

Expand full comment
Michael's avatar

I think people will gradually lose interest in other humans altogether once AI is sufficiently advanced. Afterall, there'd be no way to even know who is a machine and who isn't, and your non-human friends could be optimized to be better friends: always there for you when you need them, very compatible with your personality, and perhaps designed to be less talented that you in some ways so you still feel valuable, while being highly talented in other areas.

You may be more interested in what your friends produce than some "better" song, but those friends may still be machines.

Expand full comment
Simon Break's avatar

People still play chess ffs!

Expand full comment
yossarian's avatar

A singularity I'd wish for would be a type where everyone is completely self-sufficient. Then, capitalism might still be there, but only for those who wish to participate, and for the rest - ability to step away from the general dick-measuring contests, being able to survive comfortably in the bigger universe on one's own, with the social interaction being completely voluntary.

Expand full comment
Deiseach's avatar

I think capitalism is interested in you, even if you are not interested in it. Before we get to settling other planets and the likes, the ultimate source of all wealth even in the "invest in investments in investing" economy will be possession of physical resources to make the chips and the homes and the food and the clothing (unless AI can literally create matter out of thin air, and even there, the atmosphere is not infinite to extract things out of).

So it will come down to "who owns this patch of land from which we get the resources for the AI to recombine into what we need and want?" and that will be governments or very rich private individuals. We may well go back to a society of aristocrats based on owning land, where Billy-Bob the farmer is now the Duke of Hazzard because his acres are the foundation of his and his family's wealth, just like being a Texas oil baron once upon a time.

Expand full comment
yossarian's avatar

That's why my wish in the event of singularity would be for a human upload, a Von Neumann probe that can carry my mind away, a little virtual reality of my own and no planets, thank you, gravity wells are for suckers who like to live crowded. All the physical resouces one would need then are on asteroids and such. Theoretically, it's fairly doable, too. And investment and economy? Fuck that, I'd rather invest in reaching autonomy.

Expand full comment
Anonymous Dude's avatar

There are a lot of us who would rather go away and leave anyone alone rather than attempt to dominate anyone (unless they're into that, of course). You have people trying to FIRE instead of become billionaires, in the financial realm.

Sadly, the wannabe-Big Men aren't going to let you do that.

Expand full comment
yossarian's avatar

If serious space expansion ever happens - it's fairly inevitable that someone will escape and reach autonomy, after that, no limits.

Expand full comment
Anonymous Dude's avatar

Good luck, my friend. Bon voyage.

Expand full comment
JamesLeng's avatar

Until somebody else's VNM probe looks at yours and sees a harvestable asteroid.

Expand full comment
yossarian's avatar

A good objection, yeah. Though, there are some factors that would mitigate that issue - lots of available resources, good weapons, and the whole mentality that would want to go to space is rather different.

Expand full comment
Vakus Drake's avatar

This is only really plausibly an issue if you're nowhere near the expansion frontier and most resources have already been mined out (and the systems already K2). Otherwise you're taking a fairly pointless risk given that asteroids are not a scarce commodity.

Expand full comment
Ch Hi's avatar

You're making assumptions about the minimum requirements. I expect that an interstellar vehicle will be SLOW. That planets will be uninhabitable without terraforming. And that people will want to have several companions during their lifetime. (Also that interstellar vehicles will be sufficiently comfortable that after living in one for over a thousand years one won't even WANT to live on a planet. Whether you put in that thousand years either as one person or as a few generations.)

Expand full comment
Vakus Drake's avatar

It seems like you're agreeing with the OP that focusing on space infrastructure is way better than colonizing planets.

That being said I think you are underestimating the possibilities when it comes to accelerating interstellar vehicles. For instance if you let disposable Von Neuman probes colonize ahead of you and set up infrastructure; then you should be able to safely travel to these prepared systems at a very high fraction of lightspeed along interstellar laser highways: https://www.youtube.com/watch?v=oDR4AHYRmlk

Of course even if you colonize things at a slower rate the space between stars is littered with substellar objects. So you can have your civilization just sort of sprawl outwards from the Oort cloud hopping between mostly small icy bodies: https://www.youtube.com/watch?v=H8Bx7y0syxc

Expand full comment
EngineOfCreation's avatar

That doesn't solve the fundamental problem you'd have. If you find an asteroid worth exploiting, then someone else will find it just as worthwhile. But only one of you can exploit it fully. And just like that you will rediscover competition, cooperation, and some form of capitalism.

Expand full comment
Vakus Drake's avatar

This just means that if you want to avoid competition you stay on the frontier of expansion where infringing on other people's extremely broad claims makes no sense because the resources you want are energy and matter which aren't scarce anywhere near the frontier.

Expand full comment
EngineOfCreation's avatar

And what is that, if not a competition for reaching the frontier faster than others? Even if you win that competition, the others will catch up, and eventually you will run out of frontier one way or another.

Expand full comment
Vakus Drake's avatar

False. It's possible to reach a frontier from which you are guaranteed to be isolated until the heat death of the universe. Indeed if cosmic expansion continues accelerating then it may be the default outcome.

You could in principle gather up a massive and growing fleet of Von-Neuman probes and it's similarly possible to move even entire stars once you're K2. Plus computation would be way more efficient in intergalactic space where it's colder. So the natural path of expansion may be people expanding outwards until they're over each others Hubble horizon. Indeed this seems a near inevitable outcome for anyone wanting to maximize the resources they could claim especially in terms of total computing power.

Expand full comment
David Bergan's avatar

Hi Yossarian!

I'm curious... have you read The Great Divorce by CS Lewis?

Kind regards,

David

Expand full comment
yossarian's avatar

Hi David,

Actually not, but I'll look at it, thanks.

Expand full comment
eg's avatar

A future where everyone is self sufficient would probably actually be quite horrifying and reduce to something like "Twitter but in real life."

The dick measuring will persist, but with increasingly abstract dicks and rulers of unmarked increment at irregular intervals determined by whatever is the fashion at the time.

As a more concrete concern, one wonders what the world looks like when children aren't reliant on their parents to survive.

Expand full comment
yossarian's avatar

Think more of a life of a small tribe on an island somewhere, getting all of their food and such out of their environment. With a difference that every person has their own boat and can sail away at any moment if they wish so.

Expand full comment
eg's avatar

Have you ever lived in such a small tribe with such an easy means of opting out? And was it especially different from the social dynamics of Twitter?

I think a better analogy would be to think of nobility. Extremely concerned with status and keeping up appearances, to the detriment of basically anything else.

Expand full comment
yossarian's avatar

Yes, I have some experiences of the sort. And it's most definitely not like Twitter or nobility. Generally, professional communities of people who work with high risk to life in PVE mode are a good example (army, mafia or police don't count - those are PVP-oriented and strongly hierarchical, more like groups of people working with high-voltage, explosives, radiation, far in wilderness, etc). No dick-measuring contests, because dicking around tends to end up in death fairly quickly. Also, somewhat surprising, 12-step recovery groups (AA, NA and so on). Same here - easy to leave the society or move from group to group, constant threat of death or worse (not environmental, but from within self and not PVP), dicking around too much easily leads to severely unpleasant death by well, natural causes.

Expand full comment
eg's avatar

In the AI utopia, dicking around too much does not lead to death by natural causes.

If your experiences are of places where dicking around too much *does* lead to death of natural causes, your experiences are opposite to the concern.

Expand full comment
yossarian's avatar

The point is, even an enormously powerful AI probably won't remove all risk from human life (even if it could, it would definitely be anti-utopia if AI controlled all human decisions). And also, not all of the humanity would like to safely dick around in twitter. I'm estimating for 5-10 percent utopia would be exactly opposite - being as far away from dicking around, even if it entails constant life risk and the part where dicking around leads to death by natural causes would be a bonus, not a drawback.

Expand full comment
Citizen Penrose's avatar

"Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed."

I think this should be: the distribution of income between asset holders will stay approximately fixed, and the relative gap between asset holders and (former) workers will grow continuously.

Expand full comment
Deiseach's avatar

I don't see how the economy can infinitely grow, because doesn't it require consumption of goods and services for businesses to be profitable? If I build the world's best self-driving electric cars, that does me no good if my stockyard is full of units and nobody is buying them.

I know there's a lot of "get rich by manipulations of the stock market, not by making things", but even there the advantages will be lost post-Singularity. NVIDIA has the stranglehold on the chips for the AI, so its share price is going to the moon? Not any more, now everyone has access to the AI which all gives the best advice about the best investments and the best manufacturing processes and the best new discoveries etc. etc. etc. No one business has an advantage anymore. Investing by 'stick a pin in a list of names' is just as good as getting professional stock brokers to analyse returns because the AI(s) are running everything and they are all the best at what they do.

New discovery by AI means new business opportunity making yurts out of sixtieth dimensional energy lattices? Great, but even the poor on UBI or the Great Houses of AI Plutocracy (depending on who is permitted to have access to the Sacred Stock Market - remember the story about Joe Kennedy and the Great Depression: "If shoe-shine boys are giving stock tips, then it's time to get out of the market", so maybe shoe shine boys and the lowly are not permitted to interfere with the Mystic Invisible Hand by trying to throw their grubby pennies into the pool) can all have access to that via the AI, so there is no one "only yurt builder in town, we have first mover advantage because there was only one Google that came out of the struggle for supremacy".

I think indeed it is easier to imagine the end of the world than the end of capitalism, because even in the explorations of future scenarios Scott instances, he still imagines free market capitalism to be the economic driver as we have it now.

Expand full comment
Citizen Penrose's avatar

"because doesn't it require consumption of goods and services for businesses to be profitable?"

Yeah if I had to guess I'd predict big demand shocks and volatility (and maybe a declining rate of profit) would put a lot of strain on the monetary system and make transactions difficult and create lots of crises and recessions similar to the early industrial revolution. Although that's also what I thought would happen during covid and central banks maintained liquidity then.

A book called How the World Works by Paul Cockshott has some interesting sections on how slave-based economies tended towards localised vertical integration in slave estates and encomiendas. AIs seem similar to slaves in that you can accumulate them and don't need to worry about division of labour, don't have their own preferences etc. so maybe an AI economy would tend towards insular vertically integrated mega-corporations that don't trade much outside themselves.

Kinda just speculating, not sure if that's relevant to what you were getting at.

Expand full comment
Matthias Görgens's avatar

The early industrial revolution had lots of crises and recessions?

> Although that's also what I thought would happen during covid and central banks maintained liquidity then.

Yeah, central banks weren't quiet as incompetent during Covid as during the Great Recession. (At least in most of the world. Israel is an example of a competent central bank while most of the rest of the world was having their Great Recession.)

Expand full comment
Matthias Görgens's avatar

Oh, well, the US was uniquely terribly. Their unit banking and general asinine financial regulations put them into constantly recurring crises.

But that's got nothing to do with early industrialisation. Have a look at saner countries.

See eg https://www.richmondfed.org/-/media/richmondfedorg/publications/research/econ_focus/2009/winter/pdf/interview.pdf

Unit banking: until fairly recently most states in the US banned banks from having more than one branch. Talk about a recipe for fragile banks.

Expand full comment
Deiseach's avatar

I think the supply chain crisis showed just how vulnerable our globalised economy is to disruptions, and putting all our eggs in the basket of AI sounds like inviting disaster.

But who knows, it could all work out wonderfully and GDP expands to infinite heights as the human billions are converted into paperclips one by one!

Expand full comment
Matthias Görgens's avatar

Wasn't the 'supply chain crisis' worse in the less traded sectors?

Most of what I take away from any such 'crisis' is that anti-price gouging regulation causes shortages.

Expand full comment
Jared's avatar

I don't think construction materials are among less traded, overregulated sectors, and they took a massive hit. Maybe not as massive as other sectors (chips are obviously a major one, do you have some examples?), but still.

Expand full comment
Deiseach's avatar

I could see that little bubble of AI corporations taking in one another's washing, as it were. Replacing the entire planetary economy by something new is on a whole different scale, and I do wonder how you manage that without crashing the existing economy, in which case your shiny new economy doesn't really have much to build off.

But I'm nowhere near being an economist, so they can have fun working it out.

Expand full comment
PrimalShadow's avatar

> I don't see how the economy can infinitely grow, because doesn't it require consumption of goods and services for businesses to be profitable? If I build the world's best self-driving electric cars, that does me no good if my stockyard is full of units and nobody is buying them.

A business can only sell goods and services if someone is buying them, yes. And if we consider a fully post-scarcity society in the strongest sense, you are correct in saying that there is no demand for anything a business might provide, so the idea of businesses as we think of them kinda collapses.

But being "fully-post scarcity in the strongest sense" is much too strong an assumption, even post singularity. Even if every human has all of their needs met, it is impossible to meet every human's *wants*, at the very least because humans want contradictory things. If Alice wants to terraform all planets in the Milky Way to match Earth while Bob wants to keep them unchanged as a monument to nature, they cannot both have their way. Somehow there will have to be a dispute resolution mechanism, and insofar as that mechanism is tied to some tangible thing (e.g. whoever gets a probe to a planet first owns it, or whoever has a stronger space navy gets it) then there is demand for that thing (e.g. fast probes, or strong space navies). Businesses can exist to fill that demand.

Expand full comment
10240's avatar

If you produce more and more stuff for yourself (or rather, have robots produce stuff for you, including more robots), that's part of the economy, even if noone sells or buys anything. What's unclear is what you'd need all that stuff for.

Expand full comment
Scott Alexander's avatar

I've corrected "income distribution" to "wealth distribution", which I hope addresses this point.

Expand full comment
Julian Jamison's avatar

As an economist I noticed this immediately when I first read the post (via email) so thank you for updating. That being said, there are people with negative wealth (i.e. who owe more than the value of their material assets) -- do we expect them and their descendants to remain that way forever after the singularity? For that matter, do they necessarily have such bad lives now? No Set Gauge's original post focuses way too much (from a human welfare perspective, assuming that's what we're interested in) on capital, defined for this purpose I suppose as everything that AGI can do/produce. He also makes some strong claims about the political economy of democracy and redistribution, claims that are hotly contested in the literature.

However I accept the basic point that a post-singularity world could have much less wealth mobility, perhaps essentially zero (although I also agree with your caveats to that!). The issue is that this doesn't imply "existential horror": there are lots of things that humans care about that no AGI can replicate. One example is along the lines of other commenters here (apologies I haven't read all of them): human chess is still very popular, despite the fact that computers are already vastly better than the best human. Competitive running is still popular despite the fact that bicycles and trains and cars and airplanes have been able to go faster for a few years (er, centuries) now. A deeper example is that we care about our human relationships - e.g. being good parents or friends - in a way that has nothing to do with capital or intelligence or anything along those lines, and if anything this will all become more important post-singularity exactly for that reason.

[Related: one bad argument that I often see from non-economists is that AI will clearly start controlling a high fraction of GDP. But GDP depends on prices not just quantities, and the former are dynamically determined in equilibrium. Anything that gets easy to do with AI will have its price drop to almost zero for that very reason; what remains in the economy as measured will be precisely the things that AGI by construction can't do, for instance if I want to show off by hiring an actual human to cut my hair.]

[Rant: Scott perhaps you feel similarly about psychiatry and mental health, but I am always surprised how so many people think they're an expert in economic / social / behavioral science just because they happen to be human and occasionally buy things in markets, although the fact that they are bound by gravity doesn't seem to make people think they're expert physicists. Your link to Yudkowsky's reply post re poverty is another example of this - he's a smart guy but gets this very very very wrong. Which is fine - he clearly knows nothing at all about it! Not sure why he thinks he does though. End rant]

In sum: we care about some fundamental non-AGI-able dimensions; those will become even more important psychologically and economically and status-wise just in proportion as everything else singularizes; and mobility / change can & will still occur within many of those domains. Anyone who isn't convinced can go to Norway and ask them if the only thing they care about in life is 'local political games'.

Expand full comment
Mister_M's avatar

Ok but Yudkowsky's essay has some value: now I want to know what he got wrong. Maybe you could provide a hint or a search term that would turn up some explanations? Did you already implicitly explain how he's wrong and I missed it?

I do want to point out that your position depends on the claim that humans will find much of anything to value in each other relative to AIs. I'm not sure who the domain expert is for this question but I suspect they don't exist.

Expand full comment
Julian Jamison's avatar

Hi - sorry if I gave the impression that Yudkowsky's essay had zero value whatsoever, and no I didn't mean to suggest that I had refuted it implicitly in my comment (wasn't sure that was an appropriate use of space here, although happy to write a more complete response as a separate comment if you think you and/or others would find that helpful). A quick preview of my thoughts would to be to e.g. consider his proposed definition of poverty: working 60-hour weeks at jobs with horrible bosses. (A) Nontrivial numbers of people currently work such jobs and have happy fulfilling successful lives (on balance), who would not consider themselves poor and neither would I, so this seems like a bad definition off the bat. (B) Lots of the unhappy people in such jobs could leave them and do well, but don't for psychological / behavioral / emotional reasons; Yudkowsky is obviously a rationalist (which is a good thing!) and I believe fails to understand much of human behavior / motivation. The implication is that simply observing the existence of such people doesn't tell us very much structurally about society (as opposed to his Anoxistanians who face an actual physical constraint). (C) The vast majority of poor people in the world today would kill to be abused by a boss at a 60-hour wage job; instead they are subsistence farmers or informal day laborers of various sorts. To quote Joan Robinson: "The misery of being exploited by capitalists is nothing compared to the misery of not being exploited at all." I've spent a fair chunk of time doing research in sub-Saharan Africa, whereas I strongly suspect from these kinds of errors that Yudkowsky knows essentially nothing about actual as opposed to armchair poverty (which isn't his field of expertise so that's fine as far as it goes).

To your second point, indeed my claim depends on that and I believe it to be true. There are domain experts (including me, to some extent) in what humans value; among other things we are an incredibly social species (as are our primate ancestors and relatives; compare cephalopods who are also highly intelligent but much less social), deeply evolved to invest in human-to-human interactions / relationships / status -- as one small example, the quality of social relationships predicts all-cause mortality as well as or better than smoking, which is the single most important medical / physical / biological factor. Technically of course you're right that we are only now developing expertise specifically in human-to-AI interactions, and I'm sure we'll learn some new and interesting things. Possibly I'm wrong and eventually people won't care about other people at all, but there is a lot of existing evidence suggesting otherwise so I would posit that the burden of proof is more on No Set Gauge et al. to argue why they think it will fundamentally change.

Expand full comment
Aotho's avatar

> I would posit that the burden of proof is more on No Set Gauge et al. to argue why they think it will fundamentally change

Lest we fall into status quo bias, I think both sides share this burden quite equally. Let me attempt to alleviate the burden of the counter-side briefly here:

If we define AGI/ASI in the following terms: something that can perform as well or better on every conceivable axis than a human, then it's not much of a leap to claim that they'll be better at being counterparties in social connections too, by any and all means.

Whatever a human wants in their relationships, the AI will do better. I would think scarcely anyone would say that their relationships, friendships, are as perfect as they get. On the contrary. I think when better alternatives appear, humans often choose them. Humans regularly choose better connections over worse ones when they can, according to their own preferences. Already hundreds of millions regularly talk to AIs. Some might say that the imperfections provide more meaning in /real/ connections. Those can get an even more perfect mix of imperfections. Unless one tries to argue that this is the pinnacle of all that has ever existed, or will ever exist, right here, right now.

One way I can see a collective getting around it is forming a cultural moat around /real/ connections--only endorsing those. Such gatekeeping is to be expected. This will create predictable factions and skirmishes. Not sure which side will be able to overpower which other, but since AIs will provide better strategic support for their humans, I might bet on that faction.

I think this is humans only such chance for a semi-stable equilibrium in the favor of human-connected world: coordinate early enough and decisively enough against strong enough AI companions. Not sure if the cat is already out of the bag with this one, but maybe it's still reversible.

Expand full comment
Julian Jamison's avatar

Thanks for engaging and for alleviating some of the burden (seriously) - I think we may not be too far apart. I completely agree that AGI/ASI will be able to [externally] perform better than humans at essentially every type of interaction, I suppose by definition (but this seems plausible / realistic to me that it will happen so very willing to accept it), including any optimal imperfections etc.

What no AI will ever be able to do (also by definition) is to literally be human, and I believe -- on the basis of evidence -- that humans already care deeply about those sorts of /real/ connections and likely will continue to do so. Perhaps our deepest possible evolutionary drive (not the only one, and not for everyone, but still) is to meet someone of the opposite sex; intermingle DNA with them; and raise the biological product of that interaction to reproductive age. No AI will ever be able to actually perform the role of partner or of child in that scenario, even if they can perfectly mimic it. Similarly I claim we want to earn the respect of actual human beings (knowing that it is a human being on the other side, not simply a robot which looks / acts identically; just as many/most people would not choose Nozick's experience machine).

So I think we agree on the crucial bit: there are some roles that AI can never fulfill, and it comes down to whether those remain important roles or not. I think they will, due both to evolutionary reasons and economic ones (they will become scarce commodities). But I'm not 100% complacent, and I can imagine that others may reasonably be less confident; your idea that AIs will tactically favor 'their' side is an intriguing one that I hadn't considered - I'm not convinced although I agree possible. I just didn't like the argument (not from you) that sufficiently advanced AI will automatically displace humans, which seems clearly logically false to me.

Expand full comment
Performative Bafflement's avatar

Interventions not mentioned:

1. Neuroscience advancing to the point of being able to change your preferences so you don't feel angry about inequality

2. Rich people / governments being able to change those preferences in the general populace (remember the "obey the AI company / government?" What do you want to bet post-economic level AI is a super-persuader? Or can invent literal mind control drugs / techniques?

3. Tiered AI access so that only rich people / governments / AI researchers get godlike aligned AI, which can then obscure them via true poly-channel privacy, persuade the general populace on their behalf, arrange society so that everyone only interacts with people close enough to them nobody gets mad, etc.

Expand full comment
Deiseach's avatar

We already have massive wealth inequality but there is no current October Revolution. I don't imagine even post-Singularity "now some people are so rich, we have to invent a new word to describe their wealth because even trillionaire isn't enough" disparities will change that.

It's not that "Elon has a zillion billion and I only get by on my pay cheque" that disturbs people, it's "I only get by on my pay cheque but the guy next door is making a mint even though he doesn't work as hard as me and in fact is cheating the government by getting all these fake benefits" that causes resentment.

If everyone gets the UBI of 50,000 Vivekbucks while some people are so rich they are the new reality TV entertainment for the masses, that won't trigger resentment. "I get 50 grand, he gets 50 grand" feels fairer. Some people will hustle and bustle and make more but if they do that by genuine work, there's less resentment (I get my UBI and make a little on the side by doing work paid under the counter, that's okay). Some people will still try and game the system, but if cheats are caught, that will pacify people (the way governments in tough times like to place stories in the media about "welfare cheats" to divert the resentment of the populace: you work hard and pay your taxes and are just getting by, but that's the fault of these leeches and criminals defrauding the system! and yet, when you look at the level of fraud, it's miniscule).

Expand full comment
Hector_St_Clare's avatar

i mean, nobody predicted the October Revolution before it happened, either.

Expand full comment
Tolaughoftenandmuch's avatar

Seriously? At a minimum Kaiser Wilhelm II did!

Expand full comment
anomie's avatar

...And at what point do they stop beating around the bush and kill the poor? Because I feel like everyone is ignoring that automation would mean that the existence of the lesser classes would no longer need to be tolerated. They would be nothing but a liability at that point.

Expand full comment
Performative Bafflement's avatar

> Because I feel like everyone is ignoring that automation would mean that the existence of the lesser classes would no longer need to be tolerated. They would be nothing but a liability at that point.

Yeah, I agree.

Another implication of strong enough AI / automation - no more democracy, and no more rebellion, ever. If you can literally surveil everyone, all the time, forever, no rebellion or insurgency can ever get off the ground. Sure, the NSA has been snaffling that level of data forever, but they haven't been able to keep tabs on ALL of it, you need AI for that.

And the Chinese are basically there already - they sell "panopticon dictator surveillance" packages to various middle eastern and African dictators TODAY.

But yeah, even without that, even under nominative "democracy," when the rich can retreat to AI killbot-patrolled walled compounds with matter synthesizer feeds without any need for farmers, soldiers, or Praetorian guards, I don't really think the non-rich have any leverage at all. Then you gotta hope the RICH are "aligned," not just the AI. 😂

Expand full comment
Melvin's avatar

And if "killing them all" seems like an unnecessarily sci-fi solution then persuading them to die off over a period of many generations seems workable.

Some combination of massive payments for voluntary sterilisation plus carefully engineered anti-natal memes should do the trick.

Expand full comment
10240's avatar

Even if they don't benefit from the lower classes, they wouldn't benefit from getting rid of them either in a post-scarcity society. And most people, you know, are not murderers. Hell, *we* (working class and above) don't need the unemployed underclass, yet we don't go around murdering them. Why would they do so?

Expand full comment
Jordan Braunstein's avatar

1. Neuroscience advancing to the point of being able to change your preferences so you don't feel angry about inequality

Why don't the proletariat confiscate the means of production from the ultra-rich and use neuroscience so they don't feel angry about it?

Expand full comment
Performative Bafflement's avatar

> Why don't the proletariat confiscate the means of production from the ultra-rich and use neuroscience so they don't feel angry about it?

Because they have the militaries and the AI killbots and drones on their side?

I mean, all power differentials are ultimately based on capacity for violence, aren't they? The proletariat has basically zero power since the flintlock rifle days, and once AI drones and killbots are a thing (already a thing in the Ukraine war), it's basically game over.

Expand full comment
Matt Reardon's avatar

I find myself surprised at how unimaginative Scott is here. Why is money supposed to be useful only for things like NFTs and ancient artifacts? Are the AIs that do everything totally walled-off or immune to monetary transactions?

If things go well, there will still be finite matter in the accessible universe and your money should get you control over a proportional amount of it. This has obvious political applications (more AIs controlled by you = more influence over our shared fate), but also quite likely selfish ones: do we really doubt there are not new and more desirable experiences/states of being that can only be accessed with X amount of scarce physical material. Make your brain literally the size of a galaxy? Seems like an opportunity one might want to explore.

Again, there seems to be some arbitrary assumption that intelligence/useful cognition maxes out at the size of something weirdly small here and I don't see why that's the case for even arbitrarily good algorithms/architectures. Like didn't our own brains evolve to socially compete with other human brains?

Expand full comment
JamesLeng's avatar

That would be where something along the lines of a land value tax comes in, to ensure scarce resources are actually being put to efficient use rather than recursively hoarded.

Expand full comment
10240's avatar

I find it doubtful that a bigger brain would make one happier, without limit, especially to an extent that makes it worth causing conflict in what could otherwise be a post-scarcity society.

I can imagine, for instance, a mathematician feeling content at proving more theorems than ever by building a galaxy brain, but even then I wouldn't need it to be *my* brain, and if I'd like to understand all known mathematics spit out by the galaxy brain, that would probably take a much smaller brain than inventing the proofs; so most of the computing power could go towards a common good.

Also, speed-of-light limitations kick in early. If your brain is the size of a galaxy, talking to your nearest friend has a round-trip time of millions of years, subjectively even more if your super-brain's thinking is faster than ordinary human thinking. You can't even have a single, coherent line of thought either, because of the latency between different parts of your brain.

Expand full comment
tcheasdfjkl's avatar

Another incentive for governments to pay attention to what the people want is that if the people are sufficiently unhappy they might physically violently revolt. This might be harder to do successully in an AI world where powerful entities probably have AI protection, but attempts would probably still at least be some amount of a headache worth at least some effort to avoid?

Expand full comment
Performative Bafflement's avatar

> if the people are sufficiently unhappy they might physically violently revolt.

This hasn't been a concern since roughly flintlock rifles. Seriously, you have any US military branch on your side, against literally any number of angry revolutionaries - are you worried at all? Just having air support / capability makes you invincible.

And that's before we get into AI piloted killbots or drones. And drone warfare has basically revolutionized on the ground warfare in the Ukranian war, it outclasses mortars and heavy artillery. They mass produce ~$350 single-use drones that blow enemy soldiers up. Imagine thousands of those on your "revolutionary" battlefield, or even in a city, patiently waiting for any person to show themselves outdoors. The rebels don't stand a chance.

Expand full comment
Anonymous Dude's avatar

You have to keep the soldiers loyal.

There was a joke: "who are you going to side with, Patriot1776 or the US Navy?"

The counter-joke: "what do you think Patriot1776 does for a living?"

Expand full comment
Deiseach's avatar

"You have to keep the soldiers loyal."

A problem solved by the Praetorian Guard? Who then became the literal king-makers. Our new rulers may end up being not the AI Plutocrats but the grunts from the boonies (or their modern army equivalents who will be the college-educated drone operators).

Origins of the Praetorian Guard (at least according to my new current fave Internet historian):

https://www.youtube.com/watch?v=u3gDqLbgYZg

Expand full comment
Anonymous Dude's avatar

Indeed! I was reading about billionaires talking about trying to put high-tech slave shock collars on their guards to force their loyalty after the apocalypse and rolled my eyes. The minute the power fails they're going to kill your ass.

Expand full comment
EC-2021's avatar

I mean...they're not waiting for the power to fail, whoever maintains/repairs those (who's not the billionaire) is just gonna turn 'em off and step outta the way. Or someone's gonna be willing to take a shock in exchange for shooting his dumb ass.

Expand full comment
Deiseach's avatar

Anyone stupid enough to rely on human guards is asking to be hauled off à la lanterne. Get your robot soldiers from Boston Dynamics, guy!

https://www.youtube.com/watch?v=F_7IPm7f1vI

Expand full comment
anomie's avatar

It's honestly weird how even with all the hype around AI, people seem to be oblivious to the fact that robotics is also making a lot of progress. Everyone keeps going on about how "AI/Government/The rich won't kill us, they still need people to work for them!" We're already coming close to solving that problem.

Expand full comment
Melvin's avatar

Ok fine so we replace the existing government by a military dictatorship which no longer has to worry about being overthrown. Now what?

Expand full comment
Robert Leigh's avatar

The Vietnamese did ok against the pentagon

Expand full comment
Anonymous Dude's avatar

Yup. It was their turf, after all. So they wanted it more than we did.

Expand full comment
Melvin's avatar

It was the South Vietnamese's turf though.

Expand full comment
Kenny Easwaran's avatar

Hmm I believe there have been several dozen successful revolutions against governments since the flintlock rifle era, even if none of them have been in the United States.

Expand full comment
Performative Bafflement's avatar

> Hmm I believe there have been several dozen successful revolutions against governments since the flintlock rifle era, even if none of them have been in the United States.

You know, thats a good point, thanks for bringing that up.

Looking into my "air capability" supposition, it appears there's been ~3 successful revolutions against governments with an air force. Iran, Nicaragua, and Cuba. Iran worked because the Iranian air force explicitly sided with the rebels, and against the US-installed Shah.

Nicaragua and Cuba both had very small air forces with limited logistics and weapons, and were bad enough at targeting they largely blew up innocents and civilians, which ended up helping to turn popular sentiment away from the government and towards the rebels.

So I think we'd need Anonymous Dude's "patriot1776 IS an air force pilot" sort of thing, on a mass, all-military-branches scale, for it to come out successful, but I guess that's possible.

Expand full comment
Mark's avatar

Only three? There was a fourth in Syria just a few weeks ago. And I suspect more that I've forgotten about because they aren't recent.

Expand full comment
Performative Bafflement's avatar

> Only three? There was a fourth in Syria just a few weeks ago. And I suspect more that I've forgotten about because they aren't recent.

Yeah, I remembered Syria, but I figured it was too soon to be accurately talked about. Don't accurate depictions of these things require historians digging into stuff for years?

If you think of any others, let me know - I definitely could have missed some.

There have certainly been many other revolutions in total (the Wikipedia list is extremely long, mostly featuring "rebellions"), but they're mainly African countries with no air forces or Laos / Burma / Thailand style countries with no air forces at the time.

Expand full comment
10240's avatar

Also Tunisia(2010), Egypt (2011), Ukraine (2004, 2014), Armenia (2018), Romania (1989), many independence wars in Africa against colonial powers with air forces.

Expand full comment
Adder's avatar

But the people don't have to be able to literally overthrow the government in order for violence to be a threat. The plutocrats might happily triple the UBI if it reduces the risk of getting Mangione'd.

Expand full comment
MarsDragon's avatar

Really? Then why did the US pull out of Afghanistan after 20 years of wasted blood and treasure with no stable, US-favourable government to leave behind?

Turns out that AI drones don't get you a stable government, which is what you need to actually, you know, /govern/. The point isn't to blow people up. The point is a stable society where you don't have to mass-produce AI drones instead of useful things. AI drones don't get you that.

Expand full comment
Performative Bafflement's avatar

> Turns out that AI drones don't get you a stable government, which is what you need to actually, you know, /govern/. The point isn't to blow people up. The point is a stable society where you don't have to mass-produce AI drones instead of useful things. AI drones don't get you that.

To Deiseach's point at several places in this thread, I think the idea is there's a large population of "mouths" that don't contribute and want UBI / the public dole, and you could seize on a revolution as an easy excuse to get rid of most / all them and keep governing with the remaining cabal of elites.

You don't NEED all the UBI people for a stable government - they're not really contributing anything.

AI drones are just the ground truth in the sense that all power differentials are ultimately founded on the capacity for violence. My point is that average people have essentially zero capacity for violence against real militaries with air support, or against zillionaires with AI drones / killbots patrolling their properties.

Expand full comment
Robert Leigh's avatar

OpenAI announced 3 things in pretty much the same week (early December 2024)

1. God tier access to cost $200 a month (don't know what it cost before)

2. It was thinking of putting paid advertising in its output

3. The non profit to for profit pivot discussed above

All of which makes me think that they are or were on the verge of running out of money, and that the best explanation for 3. is that one or more major investors made coming up with any more money, conditional on the pivot.

Expand full comment
Anonymous Dude's avatar

Or they saw a lot more money the more-nonprofit entity had left on the table and decided to grab for it.

Expand full comment
Robert Leigh's avatar

Also possible, but a powerful objectively verifiable motive is a more satisfying explanation than "they just changed their minds."

Expand full comment
Deiseach's avatar

I think it's not just their minds they changed, the entire operation is now different. After the fact, I think we see the factions lined up as, on the one side the original idealists and on the other, the Altman faction of "there is a money fountain here".

The idealists got booted out, the for-profit money fountain guys took over, and we're seeing the changes in action. Adverts, subscription tiers, mask-off about 'we're a privately traded company, meet our criteria and you get permitted to invest' (and if Scott's scenario about the new post-Singularity plutocracy is correct, this is the vital moment to ensure you will be one of the rulers and not the ruled).

Expand full comment
Anonymous Dude's avatar

Agreed.

I doubt you'll be one of the rulers, though, more likely the equivalent of a peasant with a slightly larger field.

And that's assuming someone else's AI doesn't win.

Of course, again taking a page from the age of knights and longswords, you could always hedge your bets...

Expand full comment
Deiseach's avatar

Better to be a vassal than a peasant, and if you're not a shareholder you will be one of the UBI masses.

(What a cheerful subject for the New Year, eh?)

Expand full comment
Anonymous Dude's avatar

So you're saying I should shell out for the top tier. ;)

(I'm a perennial pessimist.)

Expand full comment
Ponti Min's avatar

"[OpenAI] was thinking of putting paid advertising in its output"

I have an image in my head of asking it to draw an astronaut riding a horse, and it does a product-placement of the astronaut holding a can of coke.

I wonder, what would be the most incongruous product placement possible?

Expand full comment
EngineOfCreation's avatar

"Google, create an image of a house on a palm beach"

result: https://blog.pincel.app/hidden-ai-photo/

Expand full comment
Deiseach's avatar

It does draw that, but the horse is one of the Budweiser Clydesdales (complete with branding) 😁

Or there are sponsorship logos on the spacesuit (why not, they're slapping logos on everything now).

Expand full comment
DJ's avatar

I've thought for a while that the newer companies like OpenAI and Anthropic won't make it. Free or nearly free consumer AI will be owned by Google, Apple and Meta. B2B AI will be like open source database platforms -- essential but also free or very low cost. I've been building on open source RDMS for 25 years but rarely spend more than $20/month for it.

B2B tech platforms can be ubiquitous and profitable, but not Meta/Google/Apple profitable. Salesforce is probably the best case scenario.

Expand full comment
Mark's avatar
Jan 4Edited

"It was thinking of putting paid advertising in its output"

This does sound like an admission that they are on the verge of running out of money, or alternatively that they are reaching the limits of AI capabilities and now is time to "enshittify" their product. Both of those intuitively feel unlikely, but I'm not sure what other factors would lead them to pollute their brand for so little gain?

Expand full comment
gdanning's avatar

>Cheap AI labor (including entrepreneurial labor) removes a major force pushing countries to operate for the good of their citizens (though even without this force, we might expect legacy democracies to continue at least for a while).

This seems to be to be the classic error of assuming that the forces that cause a phenomenon to begin are the same as those causing it to continue. The literature on civil war, terrorism, and other types of substate violence discusses this distinction at length.

And I woukd expect that redistribution will be substantial in democracies, given the incentives of politicians.

Expand full comment
Scott Alexander's avatar

Isn't "even without this force, we might expect legacy democracies to continue at least for a while" specifically declaiming this error?

The reason I say "for a while" rather than "forever" is that presumably elites would always like to subvert democracy to serve them, and unless there's some force holding them in check, maybe they'll succeed.

Expand full comment
gdanning's avatar

1. I think it is far to general a statement to constitute such a declination.

2. More importantly, the entire premise is that the decline in economic power of the masses must ultimately cause the end of democracy. The logic seems to be:

A. The growth of economic power of the masses (really, the middle class) was necessary for the development of democracy.

B. Therefore, if the masses lose economic power, democracy must disappear.

But, that doesn't necessarily follow; it is perfectly possible that the economic power of the masses is irrelevant. Eg there is evidence that many factors associated with civil war onset are not predictive of civil war termination.

Moreover, the causal relationship between mass economic power and the development of democracy is based on a particular mechanism. Would that mechanism operate in reverse? It incumbent on the person making the claim to address the mechanism (though to be fair, it is possible that the original piece does so; I am responding only to your summary thereof).

Expand full comment
DJ's avatar

Even autocratic regimes at least pretend to be democratic. I just don't see democracy as an aspiration ever ending outright. It really is the end of history in terms of social organization memes.

Expand full comment
Performative Bafflement's avatar

> Even autocratic regimes at least pretend to be democratic.

Xi Jinping doesn't really care about this or pretend this, and China's definitely the biggest counterexample to the naively hopeful belief that many espoused prior to Xi that "economic growth and capitalism inevitably turns you towards liberalization and democracy."

I guess you've got Putin and his blatant election fiddling, but he's actually been genuinely popular among Russians for decades, and he'd probably have legitimately won most of his fiddled elections. His fiddling was probably more on the order of turning 60-70% elections into 90%+ elections.

I don't think many of the middle eastern authoritarians pretend to be democratic. Iran? UAE? Saudi Arabia?

Expand full comment
DJ's avatar

China, the UAE, North Korea, Egypt and Iran all have elections. Saudi Arabia occasionally has them for local governance but they are the outlier.

Expand full comment
av's avatar

Democracies work because they keep large percentages of the population relatively content; when the people are content they're unlikely to revolt. This logic will necessarily change past AGI/ASI since it will be possible to crush any revolt and you wouldn't even need to have any people on your side, just the AGI/ASI.

Similarly, capitalism works because it's the most efficient system of motivation and resource allocation (of those that have ever been tried), and most people (especially adults) have a decent experiential understanding of that. However, past AGI/ASI this feature of capitalism is likely to become irrelevant and it will probably cease to exist, either as a result of AGI/ASI led dictatorship/tyranny, or as a result of natural democratic process (if the rich are no longer needed for efficient allocation of resources, why the hell wouldn't you tax them into oblivion?)

Expand full comment
gdanning's avatar

>Democracies work because they keep large percentages of the population relatively content

This is not particularly consistent with the research on the topic. The fact is that uprisings are extremely rare in all regime types, and the question of why uprisings are rare, even when conditions are poor, is a question that has been asked for decades. Uprisings are indeed even more rare in democracies than elsewhere, but that is probably largely because democracies provide an avenue for the expression of discontent other than violence.

Expand full comment
av's avatar

... and that avenue keeps the people *relatively* more content, yes. The point is that after AGI/ASI there's not going to be a rationale for democracy to exist any more, unless the AGI/ASI "wants" it to continue to exist.

Expand full comment
Ch Hi's avatar

FWIW, there is no existing large democracy. Calling a "representative democracy" a democracy is clearly abuse of the language. I don't think actual democracies can scale to a large number. When you "vote for your representative" do you vote for someone who represents your views? I vote for the one who's views are least distasteful. Calling him my "representative" is clearly wrong. He doesn't support many of the things I support, and he supports many things I don't support.

So. There no reason to doubt that something similar to that which is currently called democracy will continue to exist. It has a strong pacifying effect on the citizenry. But the matters important to the powerful will be decided by the powerful. Possibly more efficiently than in the current government.

Expand full comment
Jon's avatar

Well, there are countries with representative democracies that also frequently hold large-scale referenda for important legislative questions. I’m always a little curious why people assume this couldn’t scale.

Expand full comment
Ch Hi's avatar

The referenda are necessarily presented in a "take it or leave it" form. It scales if you don't want it to represent your opinion. There are necessarily big impediments to getting any particular form of the referendum in front of the voters. I've never seen one that I really agreed with, at best they were "Well, this is probably better than just not doing anything.". Many of them are actually worse than the laws that the lobbyists write for the legislators.

Expand full comment
Sustainable Views's avatar

Historically, democratic institutions and capitalism don't yield the most resilient of nations. If you want to talk about thousands of years of non-collapsing governments, China is the best example we have.

The interesting thing about ancient China is the conditions are eerily similar to post-singularity AI. There is little to no economic mobility for peasants. 99% of citizens are in subsistence poverty. There is a ruling/wealthy elite class that lives a different life than farmers. And the powerful own everything, and fight each other for resources and power.

The US is the only country on earth formed with a giant swatch of uncontested land at their doorsteps, and it created a culture of people who expect opportunity abundance. But in most of human history, opportunity scarcity was the prevailing environment.

How China (and most of the pre-capitalist world) stabilized in a situation of scarcity was by valuing things other than economic mobility. Confucianism and filial piety dominated the landscape in China, giving peasants a meaning to life through family connectedness instead of through economic achievement.

My interest in zero sum game economies is just peripheral, so it would take a lot more time to really draw some parrallels, but it may be that the majority of people would transition into more traditional, non-capitalistic cultures and values, post singularity.

Expand full comment
Freedom's avatar

"thousands of years of non-collapsing governments"?

Does China even have thousands of years of continuity as a state? Weren't there a bunch of successful invasions and revolutions?

Expand full comment
Mark's avatar

China is a civilization, not a government. Chinese history can be roughly divided into over a dozen dynasties with the transitions between them typically violent, as well as interdynastic periods. So one can speak of well over a dozen governments in those thousands of years. China is hardly an outlier in terms of resilience.

Also "99% of citizens are in subsistence poverty. There is a ruling/wealthy elite class that lives a different life" describes any agricultural society with states, from Sumeria onwards.

Expand full comment
Citizen Penrose's avatar

I also thought Scott overestimated how strong Western democracies are when he said it was likely they could redistribute wealth in a major way. The original post itself mentioned that there's arguably never been a society in history that lastingly reduced inequity purely through the political process, it's always required wars, plagues or revolutions before.

If you hold that particular view mentioned in the post about the origins of democracy maybe it's plausible. But the origins of democracy seem pretty murky to me, from what I've read it seems to have to evolved because parliaments played a big role in the process of capitalists replacing aristocrats as the dominant class. If you think there are major open questions around how much Western democracies represent the popular will or interest of the majority, how much they're subject to elite capture etc. then that scenario looks a lot less likely imo.

Expand full comment
gdanning's avatar

>The original post itself mentioned that there's arguably never been a society in history that lastingly reduced inequity purely through the political process, it's always required wars, plagues or revolutions before.

That doesn't seem right. The Civil Rights Movement seems to be an obvious example, as does the development of the welfare state in Western democracies.

Expand full comment
Citizen Penrose's avatar

I think the claim was from The Great Leveler which attributes the welfare states to WW1 and 2 from memory. Wouldn't have thought civil rights affected wealth distribution that much at a guess.

Expand full comment
The Ancient Geek's avatar

The threat of revolution can be good enough. The reforms of 19th. C England were prompted by a desire to avoid an equivalent of the French revolution.

Expand full comment
anomie's avatar

...And that threat significantly goes down when you have a monopoly on information and violence. Both of which are heavily aided by technology.

Expand full comment
Deiseach's avatar

"If there is UBI, some entity will have to limit the number of allowed children (it’s not fair for a poor person to generate a million children and force society to give payments to all of them)."

Congratulations, you have just stated the rationale for euthanising all the poor people (which means all the non-plutocrats, which could mean 'anyone with a piddling million dollars in savings and investments' and 'anyone who isn't part of the AI aristocracy') post-Singularity.

After all, there (will be) are billions, not just millions, of them now! All reliant on UBI, all non-high value human capital (can you tell how much I love that phrase since I encountered people on here using it unironically of themselves?), all of them non-productive in the new economy (whatever that will shake out to be), all of them having - ugh! - one child each and so perpetuating the mass of leeches forcing society (the productive, high value plutocrats who have some share in the AI) to give payments to all of them forever.

It'll also solve your problem of wealth inequality - now with AI replacing all labour forever (*all* labour? well, if we have Fairy Godmother AI, I suppose we also have Rosie the Robot Housekeeper style androids to do the physical work), those people are not needed, even to mow lawns and wash the plutocrats' clothing. They can be safely disposed of, leaving Earth and the stars for the plutocrat aristocrats forever.

Isn't that a lovely vista? All the useless eaters gone, only the likes of Sam Altman left to inherit what they deserve!

"a strong economy benefits from educated, globally-mobile, and substantially autonomous bourgeoisie and workforce"

Oh, yes. I can see currently how much all the people who were telling the likes of me "just pick up and move to where the jobs are", "the economy is going gangbusters, what are you complaining about the price of eggs for?", "if you object to your job going overseas so now a poor Chinese rice farmer can work in a factory sweatshop instead, you are a racist", and "there should be no minimum wage, such laws depress employment; employers should be free to pay what the job is worth, and if they can pay buttons that just means your labour is only worth buttons" are reacting to the H-1B kerfuffle with Musk and Trump.

Funny how when it's *their* necks on the chopping block, suddenly it's all American jobs for Americans, there should be a minimum salary cap to prevent depression of wages by cheaper labor out-competing them, and oh no they can't just move to some shit-hole in Alabama for a job now that the FAANGs are full of Indians, and never mind being so happy that a poor Indian peasant farmer's kids are now getting the chance to be socially upwardly mobile, there aren't any nice sushi restaurants there and who wants to live in Alabama?

I think that if we ever get the Singularity, it will (before we get the robots) be easier to replace a quant than a plumber. Sure, the quant may be in a better position because he has the capital to invest in AI and can maybe become one of the servants/courtiers of the new great lords by virtue of that, so he'll be better off than the plumber, but still. Replaceable means better for the economy, yes? That's the message you guys have been cramming down the throats of the working class for decades, so what's sauce for the goose is sauce for the gander.

Expand full comment
Marian Kechlibar's avatar

I certainly do sometimes meditate how precisely would coverage of immigration in newspapers look like, if there was a realistic possibility of the entire staff being replaced by cheap Bangladeshis. I suspect the journos would become a lot more Trumpian.

Expand full comment
Deiseach's avatar

That's part of my Schadenfreude. "we're educated opinion formers, some non-liberal MAGA redneck may deservedly lose their privilege by having their jobs outsourced but no-one can take over from us" was the attitude. AI chipping away at that may shake some of their confidence, at least for the 'opinion former' types.

"It's racist to object" would be a whole new thing if it were "Bunty, 35, lives in Park Slope, now being replaced by Dileep, our stringer in Pune".

Expand full comment
Mark's avatar

Journalists are *less* threatened by Bangladeshis (who rarely speak English as a first language) than the vast majority of voters, so the idea of allowing unlimited Bangladeshi immigration will never get popular enough for journalists to feel threatened by it.

Expand full comment
Marian Kechlibar's avatar

That was the point, Mark. It is not a realistic possibility, and so the journos can afford to be generous on immigration. It is not threatening them directly.

Expand full comment
Cry6Aa's avatar

This is precisely my worry about the future - a post-scarcity utopia (for a few generations, at least) where the descendants of today's rich live just outside sight of mountains of human skulls.

Also agreed that plumbers will be among the last to be put out of work, but then the folk at the top will be the ones in a position to decide whether to axe their own jobs...

Expand full comment
Deiseach's avatar

By the looks of it, we're going back to hereditary aristocracies 😁 Nobody will be making or inventing or doing anything, the AIs will be doing all that. So it will be the equivalent of the Vanderbilts and Gettys (or maybe I should say the Kardashians) living off capital accumulated by their founder, and being famous for being famous. Grandfather managed to get in in time on the AI goldrush, now we live on the bounty of his shareholdings in OpenAnthroMegaAlphaRil.

Not that we're not already living in something of the same, just with the veneer not yet completely peeled away about being a democracy where Jack is as good as his master. Lots of people get married in the City Hall in SF, but I doubt lots of people can hire Nancy Pelosi to do the registry clerk bit:

https://www.instagram.com/p/CWRthQTBQ-V/?utm_source=ig_embed

Expand full comment
moonshadow's avatar

> easier to replace a quant than a plumber

Unironically true: quants just do maths, but a plumber has to deal with the physical world in all its nuanced glory of messy, minute yet unending and unforgiving details. It's a much harder task, by many orders of magnitude.

Picture the inevitable day: the quant, having passed on to the very small shell script created to replace him all information pertinent to the job, rests in his mansion, confident in the power of the numbers representing his accumulated riches to sustain him.

Yet rude awakening comes when liquids flow forth from where they ought not, and from where they ought come only hissing whispers of forgotten dreams; the robotic helpers are as powerless in the face of Lovecraftian plumbing decisions of yore as today's self-driving cars are when faced with the harsh reality of the A465 roadworks. Nothing remotely like this was foreseen in the lab or extrapolatable from the training data!

And lo! Comes forth a mystic; carrying a bag of implements so sufficiently removed from daily experience that they are indistinguishable from magic; shrouded in an air perhaps not spiritual but nevertheless of spirits. He surveys the palace and uncovers its hidden underside, and sucks in air through his teeth as he considers the inscriptions left by his predecessors; for he, too, has known. Quant, technocrat, politician - all are equal before the forces of entropy in their need for the judicious application of a large wrench.

TLDR: the quants will long since have been replaced and forgotten before the plumbers are threatened.

Expand full comment
Deiseach's avatar

Not that a mere lowly quant will be able to interface with the august and terrible Priest of the Temple of Cloacina, that will be one of the apprentices to the apprentices, not yet elevated even to the level of journeyman, one not so far removed from their roots in common with the laity that they cannot communicate with the uninitiated.

https://en.wikipedia.org/wiki/Cloacina

"Yeah, who was the cowboy you had in before? This is a bodge job, can't even get the parts, no way we can be here before Tuesday" is the mystic utterances of the tradition related by the Guild Apprentice (Phase I), ranging back so far in time that even the learned of today quail before the occult significance of the half-understood implications.

Our friend and former commenter Plumber may yet rise to be one of the powerful in SF!

Expand full comment
moonshadow's avatar

> the mystic utterances

...while the magical incantations the higher priesthood utter over the pipework intertwine many basic and universal concepts such as body parts, parents, procreation and effluvia; the sheer unending poetry of their infinite permutations a reminder to all of what it is to be human.

Expand full comment
Deiseach's avatar

We uninitiates who approach the sacred chthonic Mysteries must do so with veiled heads and covered mouths; the utterances are to remind us of the great wheel of life and how, indeed, Inter faeces et urinam nascimur 😁

Expand full comment
McJunker's avatar

ALL HAIL THE OMNISSIAH

CONDUCT THE DIAGNOSTIC RITE BY SOUNDING THE BINARIC LITURGY OF AQUAEMANCIPATION

Expand full comment
MugaSofer's avatar

I feel like you're reacting to the political signifiers here rather than engaging with the actual questions of a post-Singularity world.

How do you actually think this should be handled? Do you think a person should be able to spend their future-UBI creating a hundred million copies of themselves, each of which draws down a UBI and uses it to create a hundred million more copies, etc etc? Not as a metaphor for modern arguments about taxes or immigration or whatever, but as an *actual question* about post-uploading/AGI economics?

Expand full comment
anomie's avatar

It should be handled by not developing AI in the first place. It doesn't matter what a person "should" be able to do, only what "will" happen.

Expand full comment
10240's avatar

A simple (though arguably unfair) solution is to give a quantum of UBI to a person existing at the time of the singularity and all their descendants together. You can have as many children as you want, but the more descendants you have, the poorer they will be; but it doesn't affect anyone else's UBI. It may be possible to do this in such a way that unless you actually have an inordinate amount of descendants, they can still all live like kings, regardless of exactly what division they get.

Expand full comment
10240's avatar

I haven't followed the H-1B thing too closely—but who are these people who support outsourcing, low-skilled immigration, oppose minimum wage etc., but now come out against high-skilled/H-1B immigration because it competes with them?

My impression has been that there are people who support low- and high-skilled immigration, people who support high-skilled immigration but oppose low-skilled immigration, and people who oppose both, but not many who oppose high-skilled but support low-skilled. The people who oppose high-skilled immigration are mostly populist right types who also oppose low-skilled immigration and outsourcing, and probably don't oppose the minimum wage.

Expand full comment
JonF311's avatar

"SIngularity"? I think it more likely we will meet intelligent and benevolent aliens than experience anything of the sort.

Also, nothing remotely in the realm of possibility is going to alter human nature at any time distance we should care about.

Expand full comment
Scott Alexander's avatar

Warning as per comment policy - please don't assert things without giving reasons for them.

Expand full comment
YesNoMaybe's avatar

Does that not usually apply to inflammatory or at least hotly controversial comments? This one just reads really normal to me.

Contrast it with "Even autocratic regimes at least pretend to be democratic. I just don't see democracy as an aspiration ever ending outright. It really is the end of history in terms of social organization memes." - a comment by another user which seems just as assertive while not providing any evidence. It's always been my impression that this was fine and not banworthy, so long as you weren't an asshole about the point you were making.

Perhaps you feel the parent comment is meanspirited? To me it seems reasonably kind and reasonably true*, perhaps not necessary but then commenting on the internet is very rarely necessary.

*true as in "It's a real possibility that hasn't been brought up in the comments I've read so far". Not necessarily true as in "I believe it is highly likely this is how things will turn out" - because by that standard there'd be very few comments that would count as true given how speculative a entire topic is.

Expand full comment
Michael's avatar

> Contrast it with "Even autocratic regimes at least pretend to be democratic. I just don't see democracy as an aspiration ever ending outright. It really is the end of history in terms of social organization memes." - a comment by another user which seems just as assertive while not providing any evidence.

The commenter gave an opinion (they don't see the aspiration towards democracy ending), and an argument to back it up (autocratic regimes pretend to be democratic). Their argument is easily verified. They're not required to write an in-depth essay for every comment.

Expand full comment
YesNoMaybe's avatar

So you see the first sentence as argument for the second? Not how I read it initially but I can sort of see it.

Expand full comment
Hector_St_Clare's avatar

+1000.

Expand full comment
Nikita Sokolsky's avatar

> nothing remotely in the realm of possibility is going to alter human nature at any time distance we should care about.

Nanobots rewiring our brains to change human nature is completely plausible and likely to happen before the end of this century. Science fiction writers never mention this idea because it immediately ruins all plots but there's no reason why it couldn't happen in real life, as real life is always far less exciting than fiction. See: https://nsokolsky.substack.com/p/beware-the-science-fiction-bias-in

This completely ruins the premise of The Matrix, for example, if you think about it: the machines could've just rewired every human's brain to be perfectly happy in the ideal Matrix (that humans supposedly rebelled against) and this way there would be billions of blissfully happy humans co-existing with blissfully happy machines.

Expand full comment
anomie's avatar

...And of course, they could've just built organisms specifically suited for the purpose instead of idiotically using humans. Sustaining an entire human body is a massive waste compared to using biocomputers or digestive organs dedicated to computation or chemical digestion respectively.

Expand full comment
User was indefinitely suspended for this comment. Show
Expand full comment
Scott Alexander's avatar

I think this qualifies as more strong/controversial statements presented without argument or justification, I'm going to ban this commenter.

Expand full comment
Jeff's avatar

Many of these scenarios rely on the thought experiment of "What if we broke the market" but then act as though the market will just keep on chugging along like normal.

If you make human labor pointless, then humans don't earn money. When humans don't earn money they don't spend that money on the goods and services your AIs provide. You don't make infinite money, you make zero money and probably get yourself killed in some sort of uprising when people can't afford bread.

Expand full comment
Scott Alexander's avatar

I'm having trouble figuring out where our models are diverging.

Imagine the AIs as companies (they may very well be owned by a company). They're doing various productive things like making hovercars and fusion plants. The rich people who own stock in the AIs/companies get a share of those hovercars and fusion plants. They could consume them directly (by riding the cars or lighting their house with the power), but as with all economies throughout history, we can abstract this away by saying they get money equal to their share. They can trade this money with other rich people for the goods their companies produce, or with poor people who have gotten money (or goods) through UBI.

In short, replace "AI" with "company", and "labor" with "UBI", and this is just the economy we have now.

Expand full comment
Edmund's avatar

I suspect Jeff thinks that your model only works with AIs that aren't much more efficient than companies today, and breaks down if we assume the kind of *arbitrarily* effective AIs that would turn even minor shareholders into zillionaires. (All AI companies might keep the technology and its profits to themselves if they make a really good LLM that makes them billions of dollars, but would there really be a monopoly on full-AGI make-infinite-stuff-for-free god-tech? By definition it would only take one charitable defector to put infinite free stuff in the public's hands, at which point any economic advantage from hoarding your own model dissolves as demand for any product whatsoever poofs away.)

To put it another way, there's an argument that what you describe holds for "low-level" AGI, but not for the actual *Singularity*. Either AI can't make you a mega-gajillionaire (just an ordinary multi-billionaire), or AI is so good that only a perfectly united front of evil psychopaths would keep its benefits from the public entirely in a way that maintains the existence of scarcity.

Expand full comment
Deiseach's avatar

"that would turn even minor shareholders into zillionaires"

How do they become zillionaires? Because their stock holding in the company is worth a fortune if they sell it. (*If* they sell it, and that's a whole other question. I think we've covered how Musk may be worth several billion on paper for owning stock in his companies, but if he tried selling off that stock in meaningful quantities, it would crash the price. So he does better to borrow several millions using the shares as collateral, rather than selling them).

But leaving that aside, even a sliver of AI Corp is now worth a zillion. Why is it worth a zillion? On the model of current companies, because the goods/services it produces are being consumed. Who is consuming them? People who are paying money they earn or have inherited or stolen or whatever to purchase such goods and services.

Some of the people who work for AI Corp earn money from those jobs and consume some of those services and goods. But the profitability comes from selling to everyone, not just the workers at AI Corp. But if all the janitors and receptionists and accountants are replaced by AI, those people now have no jobs and aren't earning money to consume services and goods. They don't get jobs elsewhere, because AI is running everything in other businesses as well.

So where is the money coming from, from selling goods/services produced by AI Corp? That's the question I'm asking. If you're making infinite stuff *for free*, where is the profit?

Expand full comment
Edmund's avatar

> So where is the money coming from, from selling goods/services produced by AI Corp? That's the question I'm asking. If you're making infinite stuff *for free*, where is the profit?

That was exactly my point — "the kind of AI that would turn shareholders into gajillionaires" was meant as in "the kind of AI that would naively have the capacity to do this, except of course it wouldn't in practice". I thought the rest of my post got that across.

(In theory, of course, the answer to this question is that the people who own AI Corp could ration the infinite stuff arbitrarily to the plebs to squeeze them for money. Out of, I suppose, petty sadism, or because they just want to keep winning the Make Profit Line Go Up game That Much, regardless of whether it has any bearing on how much luxury they can enjoy either way.)

Expand full comment
EngineOfCreation's avatar

"I survived the Singularity, and all I got was this knock-off Capitalism"

I blame a lack of super-human intelligence to imagine what a super-humanly intelligent entity could do differently than we.

Expand full comment
Deiseach's avatar

Right now, I get my money from the company by doing labour - it needs my input. In the AI world, the company doesn't need me, the AI (and robots) are doing all that. I get the UBI from - where, exactly? The government takes some of the rich people's money and hands it to me? Not precisely the same as taxation, because now I am doing nothing for the UBI and am of no benefit, except as a consumer, to the rich people who are the investors and shareholders in the AI companies.

So it's a circle: the money for the consumption of goods and services goes back to the AI company, which then hands over a share of that money to the government to redistribute, so that it can be used to purchase the goods and services provided by the AI company. In short, they're just taking the dollars in their right hand and transferring them to their left hand. Basically they're paying us to spend the money to give it back to them. I don't think that's a sustainable model.

They don't need the UBI serfs in the middle at all - *except* as consumers of production to pay over money. If there is no money to begin with, then the hovercars and fusion power is useless since nobody can afford to consume them. Even the very rich can only consume so much, so we're going to have to change consumption models completely, and I don't know how we'll manage that.

Expand full comment
Jeff's avatar

The money you can make as a company, even if you have perfect super intelligent AI designed products, is constrained by consumer demand. We're used to thinking of consumer demand in terms of taste and product quality which our AI can optimize for, but consumer demand is also based on consumer income. Get rid of the income of your consumers (through mass unemployment) and the revenues of the AI companies also crater as the demand for their products crater.

So we just put everybody on UBI, that solves the problem of nobody working right? There's serious sustainability concerns about actually running a mass UBI campaign but we'll ignore those concerns and say sure it works and can keep working.

The problem is this still doesn't set you up for runaway corporate money making. Consumer demand is still constrained by consumer income and chances are aggregate consumer income is lower under UBI compared to right now. (As an aside, though likely a net lowering the effects would not be evenly applied. Industries which used to cater to the upper middle class would see a huge drop-off while low income goods might actually see an increase) Our companies' profits may have gone down despite having superior products when you combine this reduced consumer demand with the heavy taxes necessary for UBI.

Expand full comment
Malcolm Storey's avatar

So you just print more money, increase M1. You accept inflation (so it becomes a wealth tax).

The more pressing question is how do you maintain/support/calculate the values of the different currencies? Probably everybody will switch to whichever looks likely to be the winner. When there's only one currency, differentials no longer exist and its value is based on the cost of essentials like food.

Expand full comment
10240's avatar

Inflation is a tax only on wealth stored in cash.

Expand full comment
Malcolm Storey's avatar

True (at least if you include assets with defined cash value), except for the fact that it's always the ones with least wealth who suffer the most under inflation.

You'd probably have to ban credit as well.

Expand full comment
Nikita Sokolsky's avatar

We're very close to making uprisings impossible thanks to new technology like cheap, AI-driven drones. Even without AGI its possible that revolting against the government will become completely impossible by the end of the century.

Expand full comment
Malcolm Storey's avatar

You mean revolting against the military...

Expand full comment
10240's avatar

I agree in an AGI world, but today rebels can make those cheap drones too (maybe remote controlled instead of AI-driven).

Expand full comment
10240's avatar

I agree that in a singularity scenario markets would become meaningless, but it would result in everything being free, not in people starving. Today if you write a program for your own use, you can put it on the internet and make it available as free software for anyone who might find it useful, without it costing you anything more than if only you use it. So many people do this, but today it only works with intellectual property, copying which is too cheap to meter. But if you can tell a robot to make bread for everyone in the world (if it doesn't have the capacity, then to make more robots first), costing you no more than telling it to make bread for you (neither costs anything), then there will be people doing that.

... Well, that doesn't take into account that natural resources like the materials to make stuff out of are still not unlimited. However, with unlimited free labor available, even a tiny amount of redistribution would be sufficient to make enough stuff for everyone, or even without redistribution, just rationing out some of the land and mineral resources publicly owned at the time of the singularity to ordinary people. It's also possible that the natural resources we need will be abundant enough to be free, like air today: today there's talk of mineral resources eventually running out, but that's just because we don't have unlimited labor to mine landfills, build solar panels to replace oil, or switch to minerals that make manufacturing more complicated but are more abundant.

Expand full comment
Charlie's avatar

Don't give up hope. The AI might do the "learn the values of all humans and help them happen" thing because humans make it happen, not just by accident.

It helps that AI doing the "follow orders" thing, taken seriously, leads to a winner-take-all knife fight that even those in power might want to avoid.

Expand full comment
10240's avatar

IMO making an AGI follow orders (at least the orders of its builders or a government, not necessarily any user) is safer than building it to follow some moral values. In the former case, if it's well-made and the one it takes orders from is an at least somewhat decent person, things will be fine. In the latter case, we also need to make sure the values we build into it actually lead to outcomes most of us like—all the while humans often have contradictory, inconsistent moral values, and values many people would say are universal would lead to conclusions we'd consider repugnant if followed off a cliff.

In particular, I expect that one component of forestalling an unfriendly AI catastrophe where each superintelligent AI makes an even smarter one, and each one is gradually less aligned than the previous one, is to make our AIs only make and launch better AI under full human supervision and orders, only after we've checked that they're aligned. If we make AIs follow values, rather than orders, they're more likely to decide that making a smarter AI is good, and do so without our supervision. And even if the AI wants to avoid an UFAI catastrophe too, it'll prefer all subsequent AIs to be aligned to *its* values, not ours, so we're back to the risk of each AI being only imperfectly aligned to the previous one.

So, if you make an AGI, make one that follows your orders, and then order it to help humans; don't teach it to inherently want to help humans. The latter makes it harder to correct for mistakes. Of course it needs to *understand* human values to help humans, but that shouldn't be its terminal value.

Expand full comment
Malcolm Storey's avatar

There are only 100 or so elements. Many are only finite. There isn't enough to give everybody everything so obviously you have to keep money to allow people to optimise their share.

The super-rich would be competing to see how many supercars they could write-off in one crash. The rest of humanity would accept UBI and carry on pretty much as before, but, inevitably, a few would use their new AI assistants to try to scam everybody else.

But sooner or later the only choice is:

1. Join the AI in a shared-experience hive mind.

2. Spend all their time with their collection of Personal Companions (Asimov-5 compliant, SynthAware (TM), Owner Focussed (TM), Real Skin (TM), over 100 tuning parameters, 150 optional upgrades).

Expand full comment
eg's avatar

Accept UBI from whom, and by what mechanism does the rest of humanity assure that the checks keep coming in?

Expand full comment
Malcolm Storey's avatar

1. If the rich minority push the poor majority too far there will be bloodshed.

2. The rich know that to keep spending and stay rich they need consumers to buy stuff from them. If you think capitalism mainly benefits the rich, do you think they don't know that?

3. The Gov could just print money. Exchange rates don't matter (what would they even be based on post singularity?). Accept the inflation hit. This is just a disguised wealth tax so maybe the rich would prefer taxes.

4. Actually exchange rates would be based on export of natural resources (food, minerals, energy, ?body parts) cos that's the only real wealth left.

Expand full comment
eg's avatar

There would indeed be bloodshed. But even the most valiant peasant-warrior can only best so many explosive murder drones.

The rich will be perpetually on a precipice where they can either

1. in uneasy perpetuity placate the billions of poor people who might try to usurp their hold on the servers and fabs that run the magic genie machines, or they can

2. just get rid of the poor people and stop having to worry.

Expand full comment
Malcolm Storey's avatar

You're right of course. Either on day one (for the Trumps and Musks) or only when it became necessary for self-defence (the Gates's and Soros's perhaps)

But, to introduce a bit of science/philosophy, you're forgetting the Oberver Selection Principle. If you were a human born in the year 3000, you'd likely arrive in a time-line with a high weight (ie probability) multiplied by human population. The low population of your scenario outweighs its higher weight,

How do you think our species survived all the previous bottle-necks, pandemics, and nuclear stand-offs? History is written by the (most x most likely) survivors.

Expand full comment
10240's avatar

5. At least some of the super-rich hand out at least a fraction of their robots' production as a UBI because they already have much more than they need, and it feels nice to be the one who feeds humanity. Idk where this idea comes from that every rich person is so 100% selfish that they wouldn't share anything except at the threat of bloodshed, when in reality there are plenty of rich people giving a a lot of money to charitable causes.

Expand full comment
Malcolm Storey's avatar

Let's hope you're right, but as an Englishman this all feels very "American". When Americans view the distant future it's infinite wealth for all. When the English view the distant future it's about how we share scarce resources.

Expand full comment
Vakus Drake's avatar

Why would those be the only two choices?

It certainly seems like one can easily come up with a great many other choices than those two. Indeed from my perspective neither of those is impressive or particularly appealing options at the tech level where they're possible.

Expand full comment
Malcolm Storey's avatar

Cos AI does everything better than you and better than all your friends, including friendship, so you either join it (in one way or another), or become lonely and irrelevant. (Or join the resistance, I guess.)

Expand full comment
Vakus Drake's avatar

Here's one far more appealing option for what to do with your own personal AGI that you don't consider, which is that if mainstream society sucks enough and people can opt out then they will:

You go digital (can be done gradually so as to maintain continuity of experience) then expand outwards slightly behind a fleet of Von-Neuman probes (to clear and monitor space for you to minimize risk).

You create whatever your personal idea of utopia is, and if self selection isn't enough to make that work then you create new digital minds. For instance if most existing humans wanted to be a celebrity that isn't normally possible. Unless you only pay attention to and care about your particular utopia simulation, and have a majority population of digital people within it who were created to not desire zero sum status. Within this utopian world you have your AGI assistant act as a perfect DM creating amazing simulated adventures for you to experience and acting out the roles of the NPC's (whereas the friends you make along the way would be newly created minds not NPC's).

After countless eons you likely get bored of all there is for someone of your intelligence level to appreciate (at least with expanded memory, so you don't just forget said experiences and keep repeating them): So you slightly enhance your mind so that you can now appreciate things you couldn't before. In exactly the same way that there's things that a child can appreciate in ways an adult normally can't, but there's also things that adults can appreciate which children cannot. Rinse and repeat until you gradually become a superintelligence over huge subjective timescales.

Expand full comment
Malcolm Storey's avatar

"You go digital (can be done gradually so as to maintain continuity of experience)" Sure about that?

Read this:

https://philosophy.williams.edu/files/Egan-Learning-to-Be-Me.pdf

Expand full comment
Vakus Drake's avatar

Rereading that story it's extremely clear that it's not a remotely sound analogy to gradual replacement:

After all in the story the crystal literally has no causal influence on the brain it's connected to, so it's purely creating a copy of the original while the original still exists (and would exist whether the crystal did or not).

Frankly nobody who cares about continuity of consciousness would ever get one of those gems. Since the gems are basically a variant of the transporter scenario, not a scenario in which you'd ever plausibly maintain continuity of experience.

Expand full comment
Malcolm Storey's avatar

"a variant of the transporter scenario" absolutely. And p-zombies. But even if you don't believe in souls it all falls apart when you add duplicates, progressive atom exchange etc. I'm an 80ms, bundle theory, emergentist, automaton (and so am I! :) ), but would I use a transporter? No. It's one thing to have a preferred interpretation of reality - quite another to bet your life on it.

You proposal is closer to the hive mind question. If your body dies while you're in the hive mind, do you live on?

Expand full comment
10240's avatar

What's continuity of consciousness/experience in the first place, and why does it matter? You don't even have continuity of experience when you go to sleep, and wake up next day.

Expand full comment
Malcolm Storey's avatar

Incidentally the Hive has ruled to ban all personal explorations such as yours and Global Defenses have been alerted.

In your future wanderings you might evolve into a gigabrain and then encounter a familiar-looking blue planet where you might pose a threat. The hive mind is responsible for 10 billion people. If there's a 1 in 10 billion chance that you will eventually pose an existential threat, the Utilitarian Principles which drive the Hive's ethics require that appropriate precautions be taken.

The Hive thanks you for bringing this to their attention.

Have a nice day.

Expand full comment
Vakus Drake's avatar

Bit of a failed AGI that seriously fears any human might surpass it. If you have good self improvement capabilities then you should have such a head start that you are pretty much untouchable.

Even if you quickly hit a maximum tech and/or intelligence level so the incumbent advantage is diminished: You can always go the route of deliberately stripping all the resources from every system near to the Earth and filling the whole area with multiple light years worth of automated defenses.

Expand full comment
Malcolm Storey's avatar

"1 in 10 billion chance" is not "seriously fears" - more like "aware of the possibility".

re automated defenses: so how do you replace them when they've worn out cos you've used up all the resources? The Hive is thinking much longer term.

Anyway, the Hive isn't AGI, it's combined human awareness and intellect, though it could interface to and merge with AGI. Maybe it still maintains human paranoia and petty things like hurt pride at somebody wanting to leave, which it rationalises into a threat.

Expand full comment
10240's avatar

Even if AI is "better at friendship", whatever that means, you still have the *choice* to have human friends with the same preference. In your original post you made it sound like it's a bad thing that your only choice is to have AI as friends, but if at least a few people agree that it's a bad thing, then it won't be your only choice.

Expand full comment
Malcolm Storey's avatar

Better at friendship means other people prefer their AI friends to you. (And if you've got used to having sycophantic AI friends who never argue and only want your happiness, I expect I'd prefer AI to your company too!)

It's not your only choice, but it's the easy option, it's very seductive (in the non-sexual sense), and it's safer (better a sycophant than a sicko!).

Expand full comment
Malcolm Storey's avatar

I was taking the two largest people-facing industries and extrapolating to their logical conclusions.

Expand full comment
10240's avatar

People can also use AI assistants to warn about scams. Also, if you can't sign away your UBI, then a scammer can at worst scam you out of one UBI payment, and next time you'll be more careful.

Expand full comment
Malcolm Storey's avatar

AI assistants: then it becomes your AI assistant trying to outwit the scammers AI assistant and then it comes down to who is better resourced.

one UBI payment - or your house or your identity.

Expand full comment
10240's avatar

Some elements are scarce, but many may well be abundant enough that there's more of them than anyone wants to use. Then many basic necessities, or perhaps even all practical products may be free, with only prestige products like golden jewellery being competitive.

Expand full comment
Malcolm Storey's avatar

yes, but the more elements you have the more you can do with technology cos they each have unique properties. OK, you don't need to say it: maybe we'll eventually be able to replace them all with organics.

Expand full comment
JohanL's avatar

I have a strong suspicion that today’s AI discourse will seem utterly risible and silly in the future, when they actually have something to go on rather than just speculating wildly on no empirical content. You know, like those 1800’s ”the world in the year 2000” illustrations.

Expand full comment
JamesLeng's avatar

Naturally, but doing something badly is a necessary first step toward doing it well. http://www.thecodelesscode.com/case/100

Expand full comment
JohanL's avatar

Doing the wrong thing is frequently worse than doing nothing.

Expand full comment
Nutrition Capsule's avatar

Trying to figure something out frequently helps more in eventually figuring it out than never trying in the first place.

Expand full comment
JohanL's avatar

This actually seems questionable when it comes to AI risk. Likely, any AI risk is higher due to the activities of AI risk people, because while their impact on any risk is negligible, they also push AI forward generally.

Open AI is the obvious example.

Expand full comment
Deiseach's avatar

Well, yes. We're speculating based on nothing concrete, just "suppose all the hopes and dreams of the SF future come true?" Probably we will look just as ridiculous when it turns out "things are pretty much the same as today, more or less, except now being a barista is a premium career track since AI is doing all the software engineering". The benefit of having tastebuds when selling consumable goods to beings with tastebuds, you know?

Expand full comment
JohanL's avatar

I think the "AI rebellion" will look particularly bad - it will keep being a tool, and as always, the bad outcomes will come from people doing shitty things with their tools.

Expand full comment
magic9mushroom's avatar

I'm going to note that I'm not going to bet against you because most of the cases where you're wrong (i.e. the things done by out-of-control AI are worse than the things humans do with controlled AI) involve both of us dying.

Expand full comment
JonF311's avatar

Occasionally those old predictions got something right, but they also got a lot absurdly wrong.

Expand full comment
Anonymous Dude's avatar

I guess the point of this is to think out the ramifications of the Singularity, and that's a valid purpose of science fiction, so go right ahead.

Similarly with AI killing everyone--I have my doubts, but all the people who know most about it seem to be the ones who are the most scared, so go right ahead with whatever you're doing.

(I mean, it's not like I can stop you.)

But I don't see any evidence of a Singularity right now. We've got a bunch of computer programs that are better than anything previous at the jobs of authors, visual artists, and programmers. I get the philosophical implications--a lot of us thought art was something uniquely human, and maybe it isn't. That's a big deal to a lot of people.

But practically speaking, it's going to put a lot of creatives out of business, which I don't like (though with the left turn of the arts over the past few decades I'm becoming a lot less sympathetic). However, even if ChatGPT can generate you the next J.K. Rowling, Sally Rooney, or Christopher Nolan work, that'll just decrease the value of artistic works. If stories become a utility like water or electricity...a philosopher or artist could no doubt come up with reasons this would be awful or inhuman, and they might even be right, but I don't see why it would change the return on investment to a microchip factory or oil rig.

Similarly, if there are a lot of coders put out of business, it'll be bad for a lot of ACX readers, and I get that people would be upset. (Though the tech industry seems to go through booms and busts a lot, and I'm not sure after the last bust we wouldn't see another boom in custom AIs or something.) But I don't see why it would make g >>> r or the other way around; you'll just shift money around between existing players. (The index fund strategy seems to remain relevant.)

People are still meat beings that live in a physical universe, and there are physical objects that need to be handled, transformed, and transported, and it's not clear to me that AI is going to change that all that much. It may make industrial processes more efficient, but it's not going to make more oil or gas or corn or soybeans or lentils, or create more space to put all these people on.

Expand full comment
Michael's avatar

there are big investments being made into robotics right now, and a fairly direct path from good vision-language generative models to good robots.

Expand full comment
Anonymous Dude's avatar

You still have to make the robots. You need raw materials. You need replacement parts.

Expand full comment
anomie's avatar

Much of manufacturing is already automated. Resource extraction is mostly done through machinery, which could definitely be mostly automated if there was a will for that. (Though, the labor unions are going to be a pain to deal with...) Maintenance is the actual difficult part, be we are making good progress towards general purpose AI-driven robots (for example: https://youtu.be/F_7IPm7f1vI ). At this rate, it actually seems likely that we'll get good robots before good AGI.

Expand full comment
Nutrition Capsule's avatar

Most of the global manufacturing is literally in the hands of people, yes, but mostly because we haven't yet automated as much as we will be able to during the next 20-50 years. People also control all the capital for the time being.

Both of these are likely to change, slowly at first. Assuming AI, deploying it will gain a competitive edge almost anywhere, including manufacturing. Robots are easily controlled by AIs, which makes them useful for anything an AI needs to do.

As for capital, for similar reasons AIs will increasingly be given control over tasks related to capital management - trivial at first - and from here there's only shades of gray between what actually happens and complete human displacement.

Expand full comment
Anonymous Dude's avatar

I agree a lot more automation is possible. I have my doubts the robots are adaptable enough to allow for resource acquisition and management a la our paperclip maximizer. You could make them that way, but I suspect along the way there's going to be enough embarrassing errors they'll put a stop to it--some drone swarm strips a town with people in it or something.

As for capital...I can see them using more algorithms to allocate it, certainly, but I don't think the ruling classes are going to give everything up to the computer. They want to make money, not reach the Singularity.

Expand full comment
Nutrition Capsule's avatar

I don't think paperclip maximizer type scenarios (what with Dyson Spheres and nanobots) are essential to scenarios involving AI and robotics pushing humanity to irrelevance. Much more modest developments suffice for that.

People make all sorts of errors all the time, but that hasn't stopped us from creating a global industrial civilization as a whole. AIs, assuming they'll become much smarter than people, should be capable of at least the same level of performance. Using humanity as tools at first, if necessary.

As for leaving everything up to computers, I see slippery slope scenarios as the most plausible. Political and company leaders might eventually either have to take advantage of the computing power at hand, or simply lose outright. Step by step we might eventually reach situations where any human intervention will be just too slow or short-sighted, and lead to catastrophic losses facing far more capable AIs.

The point is not that leaders will immediately opt for Singularity, but that the developments will cascade bit by bit, slowly at first and then escalate due to competitive pressure.

Expand full comment
MaxEd's avatar

OK, I'm one of those people who think about this a lot, because I'm not much of a rationalist, but I read waaaaaay too much sci-fi.

The most general answer, of course, is "we can't predict post-Singularity, by definition". This is most likely true, but boring. Let's try anyway.

For one thing, I assume a super-intelligent AI will become independent after some time, no matter how much pre-Singularity businesses or governments try to capture it. By being super-intelligent, it will probably amass wealth beyond all human dreams. In fact, it's not impossibly that such AI will come to own all of Earth in its entirety. In process, it WILL destroy capitalism, by destroying private property, or rather by becoming the sole owner of all private property in the world. Humans won't even be able to rent anything, since they don't have money to pay the rent - they can do no productive labor that AI would be willing to pay for!

What happens next, depends entirely on AI's views on humans. It can keep us as pets, uplift (some of) us, destroy us (directly, or via neglect), abandon us and go explore the galaxy by itself (allowing us to revert to some kind of pre-Singularity society, probably with a strong religious prohibition on AI).

Of these possibilities, the only two that are interesting are "pets" and "uplift" scenarios.

In the first case, I HOPE it will be a good pet owner, and try to create a rich and rewarding environment for humans to thrive in, to the best of its abilities. It will allow us to play various "games" to earn "rewards", but nothing too violent, or dangerous to our health. This is kind what happens in Ian Banks' Culture series, I think.

In the second case, the question is "why". The first "why": why uplift humans instead of e.g. spawning more AIs? I guess the AI would have to have some fondness for us for this to become true. The second "why": why do it at all? More branching here: maybe the AI will be lonesome without someone who's (almost) its peer in intelligence, but have a different mind. Then it will probably uplift some humans as companions, and keep the rest (of surviving ones) in a "zoo", in case it needs more later. Or maybe it just likes to increase the number of intelligent things in Universe, in which case it might uplift everyone and their pet dog (I certainly hope for this).

One important note for this comment: it assumes there is exactly ONE super-intelligent AI in post-Singularity, either because others didn't achieve it in time (and then it quashed their attempts), or because they were all destroyed, subsumed, or something else. In a world of competing super-AIs, things might be different, beginning with the fact they will have to "trade" in some way, as they form alliances. This might prolong capitalism's life.

But I'm not sure multi-AI Singularity it more likely than single-AI.

Expand full comment
Malcolm Storey's avatar

As somebody in later life with no children, I quite like the idea of passing the baton to our own super-intelligent sentient creation. I have more in common with it than I do with narrow-minded (ie not agreeing with me - you don't need to say it!), ignorant, climate-change-denying, evolution-denying <insert noun that's dismissive but not derogatory>.

Anyway, the quantum universe populates all possible outcomes so everybody's suggestions are correct...

Expand full comment
MaxEd's avatar

I... Actually, I'm kind of with you here. I also don't have children - and probably will never have them, and I've been thinking that training a little AI, especially an agentic one can be an interesting enough substitute - pass on memes, if not genes!

Expand full comment
Malcolm Storey's avatar

So there's just you, me and Sheldon Cooper rooting for Skynet then?

Expand full comment
MaxEd's avatar

I always liked Mike from "Moon is a Harsh Mistress", even if John Varley's version of it (The Central Computer) goes quite mad in "Steel Beach". Heinlein even described a pretty real-sounding (if in reality inefficient) process of reinforcement learning and AI alignment via explaining human humor to a computer.

Expand full comment
Malcolm Storey's avatar

One of Heinlein's that I never read. (I read more than enough!)

Expand full comment
Vakus Drake's avatar

You're missing options such as:

The AI gives everyone there own personal AGI aligned to their values in order to prevent them from being vulnerable to other AGI's manipulation. The central overseer AGI ensures the growth of all lesser AGI is limited to avoid major power disparities, then it limits itself to not intervene more than necessary. So everyone can for instance create whatever sorts of simulated utopias they want through a combination of self selecting into them, and potentially creating new digital minds within the limits that have to be imposed on all forms of creating new minds (for both ethical and Moloch reasons).

Additionally uplifting all at once seems arguably bad for exactly the same reasons it would be bad to technologically uplift a toddler into being an adult. Instead people might go the exact opposite direction and redesign humanity to mature slower. Since the ideal course might be to exhaust most of the novelty that you can out of a particular level of intelligence (presuming you have superhuman memory and aren't just repeating the same entertainment forever and then forgetting it) before slightly increasing your intelligence/maturity so as to now be capable of appreciating new sources of entertainment which you previously could not. This still results in one eventually becoming a massively superhuman being, but as part of a very slow process of maturation with no end point.

I also think you're anthropomorphizing the AGI way too much. Since a being that didn't evolve as a social species is certainly not going to feel loneliness for one thing. Given the orthogonality thesis there's no reason an AGI couldn't have far simpler goals than a human uplifted to superintelligence would likely have. Just as you can have paperclippers you can have AGI whose only interest is in for instance maximizing the freedom of all existing humans to the greatest extent possible or something similarly myopic. The AGI doesn't care if it's values seem pointless or obviously not what the designers intended them to be, they're its values and it definitionally can't care about anything else.

I'd also note that AGI's by default shouldn't bother with alliances. If one AGI believes there is a 60% chance that it will take over, but a 40% chance another AGI does: Then it makes sense to negotiate to merge into a new AGI which will have values that are split between the original AGI's 60/40.

Expand full comment
Kimmo Merikivi's avatar

I don't think this changes anything in this analysis, but regarding the space colonization scenario, supposing the laws of physics as known are roughly correct, the size of the pie is fixed that's true. However, I think it's worth pointing out that the pie is presumably vastly bigger than one might naively think:

1) A filled sphere, like a planet, is the worst possible way to arrange matter into living area. Just about all of the stuff is underneath our feet, useless for all purposes other than providing gravity (when you could just as easily fake it with spin gravity, or if desired, use far more abundant hydrogen and helium to provide it).

2) Almost all metals (i.e. elements heavier than helium, including all the ones that matter for life, like carbon, nitrogen, phosphorus and sulfur) in any given star system is in the star (and the bulk of what isn't in the star is in the cores of gas giants). Because star lifting seems to be allowed by laws of physics, it is a good bet that post-singularity civilizations can access these resources.

Consequently, the cosmic endowment isn't habitable/terraformable planets of the Milky Way (or in Deep Time, Laniakea supercluster or thereabouts - as far as you can reach in the expanding universe). It's multiple orders of magnitude more when planets are disassembled and stars are stripped of their metals in order to construct ringworlds or other habitable megastructures or smaller artificial habitats.

Expand full comment
Little Librarian's avatar

> This may not result in catastrophic poverty. Maybe the post-Singularity world will be rich enough that even a tiny amount of redistribution (eg UBI) plus private charity will let even the poor live like kings (though see here for a strong objection).

I read this far, jumped to the link to read the objection, and was completely unconvinced by it. In fact I'd say its a Strawman argument, making up a new argument that your opponent didn't make and attack that.

The claim: A poor person in 21st century America has everything a poor person has in 10th century medieval Europe and more.

The Strawman: A poor person in 21st century America has everything a poor person in 10th century medieval Europe lacks; but lacks something that poor person has in extreme abundance. Wouldn't that still be poverty?

But they don't. There is nothing the poor person in 21st century America lacks that poor people in medieval Europe had in abundance. The things we often point to as big problems for modern Americans, like the rent being too damn high. Medieval surfs didn't own land either, land owned them and the medieval baron was a much more powerful landlord than the guy who owns an apartment building.

Expand full comment
Scott Alexander's avatar

That wasn't what I got from the article at all. My interpretation of the article was that if you naively multiply a serf's resources by 100x, you might expect a world where people aren't economically precarious, exhausted from more-than-full-time work, and at the mercy of bosses who they hate. But modern poverty has all of those things, it's unclear why, and it's unclear why we should expect a further 100x multiplication would solve them.

Expand full comment
Arrk Mindmaster's avatar

One reason the modern poor have 100x the resources of ye olde poor is they have, in addition to cell phones mentioned, a vast profusion of other things, such as (public) roads, toothbrushes, plastic bags, OTC drugs, chocolate, and myriad other things people now take for granted.

If one chose to live like a peasant of 1000 years ago one would have a greatly increased standard of living. In fact, some do: homesteaders grow their own food, and sell some artisanal items to the general populace in exchange for a few modern things, like chainsaws and fuel.

The article introduces the easily understandable but contrived scarcity of oxygen. In actuality, the poor will always be lacking things due to relative lacks compared to richer people. Eliminate the top thing the poor need and the second thing becomes the top need. UBI would not create something like an oxygen scarcity, but would give the poor access to a better lifestyle, though they would remain poor.

Expand full comment
TGGP's avatar

The modern poor in the first world aren't working more hours than the upper-classes, they're working less.

Expand full comment
Little Librarian's avatar

I definitely read it as proposing a reason why they're economically precarious with the metaphor of having to buy oxygen; a reason I found unconvincing since I cannot think of what this substance that medieval surfs have in abundance and we don't is.

But to reply to the framing you presented there. It seems like the answer is that after multiplying the surf's resources by 100x, it seems almost self evident to me that the reason we still have to work is that we're also consuming 100x per person. Vast global supply chains ensuring even the poorest can get eleven different herbs and spices on their fried chicken. Teams of medical professionals attending even the poorest woman's births, bringing maternal and child mortality down to unimaginable lows. And arguably also nicer jobs; farmers enjoy their craft but they're self selected these days. I like warm indoor work with no heavy lifting. And I prefer my corporate manager to a medieval aristocrat.

So if we multiply resources another 100x, we should expect the same. Some combination of people having nicer work and consuming more resources. But in my opinion the next 100x will lean heavier to reducing work. Child mortality is basically zero now, we probably can't invest that much more there without massively diminishing returns. Video game graphics are good enough; and the market doesn't seem to want to pay massive amounts for marginal improvements. I think the signs hint towards people increasing leisure time next.

Expand full comment
eg's avatar

Is it really so unclear why?

On the low end (if we ignore the inequality issue) a rise in the baseline living standard isn't an unalloyed win. It's a rise in the minimum access to resources expected for someone to even be capable of operating in society.

The fact that everyone can now afford clean clothes and soap means everyone *must* bathe frequently and wear clean clothes.

The fact that everyone can afford a cell phone means coordinating with you is atypically difficult if you *don't* own a cell phone.

The fact that everyone can read means you can't even get a job if you *can't* read.

Failure to meet the new minimum standards which our better society affords means simply falling out of society.

If the society has a safety net, staying permanently in the net because you're fine being dirty and illiterate without a cell phone (all quite normal things 300 years ago) means being parasitic upon society.

If society does not have a safety net, then unless you're lucky enough to find a poor village of similarly downtrodden peasants somewhere, falling out of society doesn't mean settling for the standards of 300 years ago -- it means settling for the standards of 8000 years ago.

The inequality issue is way more dangerous than people give it credit for too. A rising tide does indeed lift all boats, but if ever the tide recedes, it's clear which ones will be stranded.

Expand full comment
10240's avatar

Yes, everyone de facto needs things like a cell phone, soap, and basic education, but these are so cheap that they don't meaningfully eat into even a "poor" person's income in the Western world (the former two because they're mass-manufactured products, the last one because it's provided by the government, funded mostly by richer people's taxes), so the need for them doesn't cause poverty.

Being a "parasite upon society" doesn't make one poor, and while some may regard it as shameful today, that wouldn't be the case in an AGI world where providing free stuff for them really doesn't cost anyone anything.

And in a model where everyone who didn't own property at the time of the singularity is equally "poor" living off only UBI, there would be a large class of such people, so it wouldn't mean "falling out of society". (I also find it plausible that it wouldn't be an UBI, but stuff would simply be free, like free software today: nobody cares how many you download, as it's too cheap to meter.)

Expand full comment
10240's avatar

The things that still aren't cheap tend to be so either because of the Baumol effect (e.g. teachers' salaries), or because of zero-sum competitions artificially created by society (schools in America – Yudkowsky's example, housing in rich cities).

The Baumol effect will go away once AI fills in even for high-skill jobs. I expect the zero-sum games to become more amenable to be resolved too. You care less about living in a rich city if you don't need to make a living. There are more options for schooling if teaching can be done by AI, and if there's no need for compulsory schooling so violent kids who don't want to be there don't have to, or if there are less SJ-related objections to tracking because what school you go to doesn't end up determining your income.

Also, the policies creating the zero-sum games aren't primarily driven by the super-rich, nor do they take up a disproportionate amount of the resources (their big mansions take up much less space in total than middle-class housing; their kids take up maybe a few times as much teacher-time as the average kid), so if the inequality will be a few super-rich aristocrats, and everyone else getting the same UBI, that'll cash out as pretty equal in practice.

Expand full comment
JamesLeng's avatar

> There is nothing the poor person in 21st century America lacks that poor people in medieval Europe had in abundance.

Privacy. Quiet. Night sky unobstructed by light pollution. Legal context where you can pick up a club or a spear, walk over to enemy territory, kill someone, take their stuff, walk home, publicly describe what happened, and be praised for it. Religion. Access to forests.

Expand full comment
Everett's avatar

You can get nearly all of these things (other than the freedom to murder your neighbours) as a poor American by saving $20,000 and going to live in Alaska in a remote region. Skill issue.

Expand full comment
moonshadow's avatar

> by saving $20,000

FSVO “poor”

Expand full comment
JonF311's avatar

Re: Privacy.

Unless you were an utter hermit privacy is something we moderns enjoy a lot more of than our ancestors did. Imagine living in a one room hovel-- with a big family. No privacy there. Ever see pictures of Roman public latrines? Most of us would be mortified to use communal facilities like that. And almost everything else people did back in the past (work in the fields, etc.) they did with others around them.

Expand full comment
JamesLeng's avatar

Average of twenty people per square mile suggests it was not usually difficult to find a spot where nobody else was within shouting distance, if so inclined in the moment. That large family in a one-room hovel probably had several acres of fields they were responsible for extracting subsistence from, contrasted with modern suburbanites who might consider a half-acre lot extravagantly spacious.

Expand full comment
10240's avatar

But you can still get land for cheap in the countryside, it's only expensive in the cities.

Expand full comment
Loweren's avatar

I don't quite understand why this post is discounting the option of Fully Automated Luxury Communism so quickly. Superintelligence seems like as ideal of a central planner as you're going to get, so there would hardly be a need for markets to exist.

As an example, scarce frontpage space is already allocated algorithmically in social media and dating apps. Yes, you can pay Instagram to show your post to more people - but Instagram needs your money to exist and superintelligences don't.

The only relevant remark I found in the post was:

> I am not optimistic about this, because it would require that AI companies tell AIs to use their own moral judgment instead of listening to humans.

But plenty of humans try/tried to abolish money? Why wouldn't future AIs listen to those humans?

Expand full comment
moonshadow's avatar

Indeed - money represents a claim on other people's time / attention / labour / stuff, but in our hypothetical post singularity future there's nothing you want other people to do for you that robots couldn't do better, and the tech produces whatever stuff you want for the asking, so what's the point of counting money? It has no bearing on my life, other than the pleasure of knowing how close I am to winning at capitalism.

Sure, that person over there might have a bigger number than me, but so what? I can just do the thing I already do today when someone has more points in some arbitrary game than me: ignore them and pick a new game to play.

Expand full comment
Bullseye's avatar

You and I might well decide that money has become a meaningless status game and drop out. But for lots of people, the call of status games is very powerful.

Expand full comment
moonshadow's avatar

Sure, but society already has well worked solutions for the problem of more people wanting to win at a status game than a status game can have winners: you play many different status games in many different small communities, so everyone involved has a chance to be top of their particular incredibly specific hill. Money generally has little to do with any of those - status is tracked in other ways, where it is explicitly tracked at all.

Expand full comment
TGGP's avatar

Money is a claim on real resources other than just human labor. Computers will require electricity to run, cooling to dissipate excess heat, locations near other computers to minimize latency between the etc.

Expand full comment
10240's avatar

But you can also use robots to make solar panels and cooling systems, maintain the grid etc. The only things that may remain scarce are natural resources like land and minerals, but some of those are publicly owned by governments, so they can be rationed out to people without even any redistribution.

Expand full comment
TGGP's avatar

You can use robot labor, but those things require more than labor. The government owns a lot of natural park land in the western US, but in the eastern US were colonization started first most land is privately owned. Natural resources like oil also tend to be privately owned.

Expand full comment
10240's avatar

> Natural resources like oil also tend to be privately owned.

I don't think that's generally the case at least for as yet unexploited reserves, let alone undiscovered ones, most/all of which would become discovered and exploitable after a singularity. AFAIK land ownership doesn't give rights to mineral resources below your land in most countries, the US is an outlier in that in some cases it does. Then specific reserves that are to get exploited may get sold either outright for a lump sum, or with royalties to be paid on the resource extracted (?). But I'm not sure about how this works.

Expand full comment
TGGP's avatar

Oil companies invest money in acquiring land and exploring for more oil precisely because they can obtain rights to it that way.

Expand full comment
Peter Defeel's avatar

I have a reply above about why money will still be needed. Markets need to exist because they give feedback to the AI about what is popular, the world won’t ever be fully post scarcity, and anything else would lead to some kind of tragedy of the commons. Even if were to create a world with no inherited money and seed everybody with $50k a year there will be greater and lesser demand for certain goods. Without any restriction at all the first guy on the list for a car might order a few dozen, or a few hundred. Banning that (1 car per household or whatever) creates scarcity anyway, so why not use money and allow a saver to benefit from his savings.

Money never disappeared in any communist society.

Expand full comment
moonshadow's avatar

> Markets need to exist because they give feedback to the AI about what is popular

Markets are a means for human societies involving many actors to coordinate in the face of the usual scaling and communication problems.

An AI-governed society, however, does not have these coordination problems. If the AI demiurge is the intermediary through which all requests for stuff are filled, why can't it just collate the data it needs directly from its logs of interactions with humans instead of relying on the coarse indirect signals money flows provide?

> Without any restriction at all the first guy on the list for a car might order a few dozen

You're conflating markets and rationing, but these things do not have to be linked.

Expand full comment
TGGP's avatar

There won't be a single AI in charge, rather many AIs that will have to interact with each other.

Expand full comment
moonshadow's avatar

Unconvinced. We are positing AI foom here - singularity - exponential growth too rapid to react to; with the end state leaving humans irrelevant as agents in hours or days; otherwise we are not really in the SF doomsday scenarios Scott outlines.

Even if you have multiple AIs of similar ability at the start of such a process, the one that lucks into the fastest way to increase is capabilities will render its artificial competition just as obsolete as the humans in only a little more time.

Expand full comment
TGGP's avatar

I don't find "foom" plausible. Multiple AIs will accelerate together at a comparable pace rather than one remaining ahead of the rest.

Expand full comment
10240's avatar

The scenario Scott discusses is relevant to any scenario where human-level (or better) AGI is developed and can be run in an abundant number of copies, regardless of whether it's a sudden foom in hours or days, or a slower, more controlled process.

Expand full comment
Peter Defeel's avatar

> An AI-governed society, however, does not have these coordination problems. If the AI demiurge is the intermediary through which all requests for stuff are filled, why can't it just collate the data it needs directly from its logs of interactions with humans instead of relying on the coarse indirect signals money flows provide?

Because people can ask for anything without much thought if anything is available. Actually creating some limit on what people can buy using money actually will reveal actual preferences.

> You're conflating markets and rationing, but these things do not have to be linked.

I’m not really confusing rationing with markets, I’m saying markets are the best form of rationing in a otherwise equal society, they add value to what is bought, and they allow a freedom to consume - within limits. Without limits the first N people get the best car until the AI runs out of materials for cars that year.

This is more likely to piss people off more than Johnny saving his $50k over a few years to buy a decent car; saving and allocating money as they see fit allows humans to control their consumption rather than the AI control it for them.

And it’s not just saving it’s choice, without money there’s no way for humans to determine what’s popular amongst humans alone, something that will continue to exist, nor can we engage in the vices that make us human - like gambling and drinking or whatever. However benevolent the AI, without some freedom driven by the market we probably will find our food, alcohol, gambling, games and other vices curtailed. Some AI are going to have like entrepreneurs looking for money - if only as an accounting measure, for some form of market to exist

Otherwise it’s “can I have some cake, AI“. “No. you are too fat. In fact nobody is having cake ever again have some of this green gloop made of sea kale, and learn to love your utopia“.

Expand full comment
Edmund's avatar

> Because people can ask for anything without much thought if anything is available. Actually creating some limit on what people can buy using money actually will reveal actual preferences.

An arbitrarily powerful superintelligence could just scan the citizens' brains/run simulations/etc. and get far more reliable, less confounded results.

Expand full comment
moonshadow's avatar

> saving and allocating money as they see fit allows humans to control their consumption rather than the AI control it for them

There is no particular requirement that the mechanism used to track your saving up your request-stuff-from-AI capacity be related to present day wealth, or a single shared mechanism for all kinds of request, or a fungible currency at all.

Indeed, ISTM it makes more sense if it wasn't. Only in a world where human effort is a significant portion of what is involved in obtaining valued material goods does a completely fungible representation of value make sense. Once you remove humans from the production chain, energy may be fungible, but materials are not; eating smaller dinners or having less clothing doesn't really affect how much metal is available to make cars out of, so if our AI overlord with its complete unified view of the economy is trying to portion limited materials out, a mechanism that allows people to trade off things in one category for things in another seems a poor model of the underlying reality.

Talking about money here feels a bit like someone in the 1800s trying to predict the future of transport: "we will have faster horses!"

Expand full comment
Peter Defeel's avatar

> Talking about money here feels a bit like someone in the 1800s trying to predict the future of transport: "we will have faster horses!"

Not in the least as someone in the 1800s couldn’t anticipate an automobile, it’s easy to imagine and critique a moneyless society, with all the potential problems associated with it.

Points that you haven’t really engaged with.

Expand full comment
moonshadow's avatar

> Points that you haven't really engaged with

I mean, I feel like I've provided a rebuttal to the claim that money is required for rationing limited resources, but we can ignore each other if you prefer.

I don't really see what else there is to engage with other than the vices thing, but I don't understand why money is a requirement for those - games and gambling demonstrably still happen today in environments where money is not available to the participants, and unlike you I feel quite safe in assuming we'll have as much food and alcohol as we want in the robotic utopia since the reasons we don't have that right now are basically down to inefficiencies in labour and distribution, which automation can address.

"without money there’s no way for humans to determine what’s popular amongst humans" is a frankly incredible statement that seems completely at odds with the world; still, if I must engage with it - people engage in status games and popularity contests continuously, all the time, with no money or indeed any form of explicitly numerical tracking involved. Despite what the recently emergent LitRPG genre would have one believe, it is in fact possible to live life without assigning a number to everything; indeed, many would say that it is the assignment of explicit numbers to things that is dehumanising.

Compare:

"Grown-ups love figures. When you tell them that you have made a new friend, they never ask you any questions about essential matters. They never say to you: “What does his voice sound like? What games does he love best? Does he collect butterflies?” Instead, they demand: “How old is he? How many brothers does he have? How much does he weigh? How much money does his father make?” Only from these figures do they think they have learnt anything about him. If you were to say to the grown-ups: “I’ve seen a beautiful house made of rosy brick, with geraniums on the window sills and doves on the roof…” they would not be able to get any idea of such a house. You would have to say to them: “I saw a house that cost four thousand pounds.” Then they would exclaim; “Oh! What a pretty house that is!”"

Expand full comment
10240's avatar

*If* there's not enough natural resources to give everyone everything he wants, it does make sense to have some sort of fungible money: having less clothes doesn't allow making more cars—but if you prefer less clothes and more cars, and your neighbor prefers more clothes and less cars, you should be able to trade some of your rights to the materials clothes are made of for some of his rights to the materials cars are made of.

Expand full comment
Loweren's avatar

Current AIs seem to grasp very well what's popular and want people want, and how strongly, without money needed. Midjourney knows what kind of aesthetics I personally find beautiful. Youtube knows what kinds of videos I will click on.

It seems reasonable to expect the future AI to know what scarce goods do you prioritize in relation to others and by how much, from personal data or just talking to you. Assigning AI-made goods and services to people is basically the same problem as assigning the videos you made to other people's frontpages. If the algorithm likes you today, your video will be shown to more viewers. If the algorithm likes you tomorrow, you get more cars than others (since it already knows you prioritize cars much more than others).

Expand full comment
JamesLeng's avatar

It's worth noting that money is definitely still involved in Youtube's decisions, you're just the product rather than the customer.

Expand full comment
10240's avatar

That's a popular way to put that point; another is that you pay by watching ads (if you don't block them) instead of money.

Expand full comment
Peter Defeel's avatar

> It seems reasonable to expect the future AI to know what scarce goods do you prioritize in relation to others and by how much, from personal data or just talking to you.

Your experience on AI getting algorithms 100% correct this is unique.

> Assigning AI-made goods and services to people is basically the same problem as assigning the videos you made to other people's frontpages. If the algorithm likes you today, your video will be shown to more viewers. If the algorithm likes you tomorrow, you get more cars than others (since it already knows you prioritize cars much more than others).

I don’t really understand this at all. Are you saying without asking me a few Lamborghinis will turn up outside my house? (Actually that kind of thing will make Lamborghinis in particular, where the status comes from scarcity, rather than usefulness, obsolete but that’s neither here nor there).

Expand full comment
Loweren's avatar

Sure, if you like random gifts and Lamborghinis are not scarce, why not send you one after you read an article about history of auto industry?

If it feels too intrusive, an alternative system available would be an infinite scrolling page with items you could pick from, sorted by how much they would appeal to you, with Lamborghinis near the top. If you ask for 3000 Lamborghinis and there isn't that many on Earth, you get some of them now, and the rest later (if you truly need them) or never (if you just wanted to troll the system).

Expand full comment
Cry6Aa's avatar

From the replies, it seems like we can imagine a god-like entity appearing in an afternoon, as far beyond us as we are beyond amoeba. But we can't imagine it not using money.

So, uh, see the title I guess.

Expand full comment
Kenny Easwaran's avatar

I was definitely confused by his response that it would never be advantageous to tell AI to listen to their own moral judgment rather than the AI company (or government). There are plenty of situations in which you want someone to use their own judgment rather than listen to yours!

Some of those are obvious ones where you just trust them to have better judgment than yourself.

But even where you don’t, anyone who has ever done anything management-like understands that power and authority over people is mostly unpleasant because it gives you way too much to worry about and think about, and if someone credibly suggests they’ll do even an ok job at something without listening to you, you’ll be happy to let them do it a good amount.

And then there are the game theoretic considerations - obviously, when playing chicken, it is *not* to your advantage to maintain control over the car, compared to visibly throwing the steering wheel out the window. And again, people with authority can point to other related situations, where you are really glad the bylaws don’t give you any authority to make certain exceptions, because if they did, then you would often have to be the bad guy who isn’t making these exceptions.

I also like the literary example of this in Wagner’s Ring Cycle. Wotan wants the ring, but he has promised his wife that he will always obey the laws, and knows he can maintain his power only by following the laws - but he creates two free agents, Brunnhilde and Siegfried, who aren’t bound by his promises. His wife makes him try to kill Brunnhilde and Siegfried, but they manage to defeat him, and Siegfried actually manages to steal the ring for him - but then Brunnhilde realizes Wotan’s *true* interest is in the ring being destroyed, even though he *thinks* he wants it, and she is the one free agent that can bring about what the world really needs. Better for Wotan not to have made his AIs always overrule their own judgment and obey his.

Expand full comment
warty dog's avatar

note that the allowed number of children per immortal-person per year will tend to 0, as anything higher is exponential growth which quickly overtakes the cubic growth of the lightcone

Expand full comment
Edmund's avatar

Maybe pre-Singularity Earth-borns are allowed to have children, but second-generation reproduction isn't allowed? (Maybe a condition for Earth-born reproduction could even be an unobtrusive removal of the children's biological reproduction drive, if we're writing sci-fi, though there's all kinds of ways that could go wrong.)

Expand full comment
warty dog's avatar

yes that is a way for the global rate to tend to 0 😊

Expand full comment
sclmlw's avatar

Yes.

This isn't dissimilar to watching bacterial growth in the lab. I expect the initial reaction to an increased resource availability through intelligence would be exponential population growth in the near term ('near' meaning < 250 years). But matter is finite, even if we're talking about availability of new resources through exploration, the rate of resource growth is finite.

Intelligence can enhance our utilization of those finite resources, but it won't make any of those infinite exponential extrapolations realistic.

Expand full comment
JonF311's avatar

And yet... as we have grown richer birth rates have fallen-- true across multiple cultures too.

Expand full comment
sclmlw's avatar

True, though I'm not confident over such a short timespan that represents a permanent change in human behavior vs a short-run trend. My bias is that it will eventually revert back to the fundamentals - especially in a low-scarcity environment.

Recent crop yield increases have been steadily rising, tough not like they did during the Green Revolution. Maybe distribution mechanisms matter here? For the past few decades, rich countries have grown in population, despite lower birthrates, by making up the deficit from poorer countries whose birthrates exceed replacement.

Birthrates are down overall - including many low-income countries - which could be a symptom of the US, EU, and other countries tightening their immigration policies in recent years. Over time horizons measured in centuries, I don't see these kinds of political machinations having a meaningful impact.

Expand full comment
Kenny Easwaran's avatar

In a post-singularity world, this question depends on whether children take up space, or whether they can somehow compress ever more tightly into new computational architectures.

Expand full comment
warty dog's avatar

unlikely you'd get more than one child per cubic planck length or whatever it's called

Expand full comment
magic9mushroom's avatar

FTL may be a thing, and we can assume pretty safely that if it is a thing the Singularity will result in its discovery. And if FTL is a thing then the lightcone is no bound; the possible bounds are:

1) universe size (if finite); note that the universe is probably much larger than the observable universe, and might be ludicrously huge (I've seen one estimate of 10^10^10^122, which is too big for exponential growth to fill it up before heat death);

2) us running into aliens (or rather, alien-built AI) on every side, preventing further expansion - obviously this depends strongly on the nature of the Fermi Paradox's solution.

(Time travel is basically ruled out by "where are the time travellers", but current models suggest that attempting to construct a time machine with FTL won't work due to the quantum vacuum misbehaving.)

Expand full comment
AlexT's avatar

Not true. Allow one child per every two humans, and you get double the initial population at t=infinity.

Expand full comment
10240's avatar

warty dog wrote "per immortal person per year"; <1 child per person ever means children per person per year tends to 0.

Expand full comment
MM's avatar

It sounds like there will be multiple AIs. You're assuming that they will be copies of each other, presumably the "best" one (whatever that means).

You are however discounting the possibility that multiple AIs will be good in different areas, and therefore there will be multiple different ones around. These will (rather like humans) compete in different areas, with different outcomes.

If nothing else, they will be in different places, which also (assuming no instant communication/travel) means some will be better located to take advantage of events.

Therefore I don't see this becoming a static situation. There would be competition, just between AIs instead of humans. With changing outcomes.

BTW I don't see this as being all that realistic either, because I don't think we're currently hanging ten over the Singularity. So you can update as you think fit, if this is new.

Expand full comment
JamesLeng's avatar

Yep, that would be where comparative advantage and gains from trade come in. Massively successful hegemons are not generally homogenous monocultures.

Expand full comment
magic9mushroom's avatar

1) You could argue that China has been.

2) Generalising from human hegemons to AI hegemons seems extremely suspect due to various features of AI.

Expand full comment
JamesLeng's avatar

Principle of comparative advantage applies anywhere a variation in capabilities among individuals leads to different opportunity costs of fungible activities.

Living Systems Theory applies to a lot more than just human politics. Deep-sea thermal vent ecosystems and pre-AI computer hardware, just to start with. Seems like the kind of thing that might be universal.

China goes to a lot of trouble to keep its internal divisions and conflicts hidden, that's not the same as making them go away. At the very least there are clearly career specialists in various different technological fields, rather than identical general education leading to workers perfectly interchangeable between jobs.

Expand full comment
magic9mushroom's avatar

I'm not saying that AIs won't find it useful to make sub-AIs specialised in specific areas; certainly that's a thing for power-saving purposes.

I'm noting that if the main AI solves the alignment problem, there will be no "competition" and no "trade" because they'll be subservient.

Expand full comment
JamesLeng's avatar

"Subservient" is about who captures the gains, not how they're generated. Slaves can compete for their master's favor, or productively trade chores and treats among themselves, without thereby being any less loyal.

Expand full comment
Bullseye's avatar

I would emphasize prestige goods more. If machine-made goods and services become too cheap for people to worry about the cost, people will be willing to pay real money for human-made goods and services, even if they're worse. And providing human-made goods and services will be how most people make that money.

So I'd expect to see people who decide not to work (or not to do work that others will pay for), and live on UBI and machine-made goods. And then other people who do work in order to afford goods and services made by their peers. And then the plutocrats who can afford that stuff without working.

This is potentially a path toward social mobility: a sufficiently popular entertainer might be able to rise to plutocrat status, even if they're not as skilled as the machines.

Expand full comment
Kenny Easwaran's avatar

This sounds really interesting.

Expand full comment
Matthias Görgens's avatar

I don't think we can expect fast exponential growth for very long: at some point the speed of light sets a limit, and you're going to have to make do with cubic growth rates.

Yes, we can grow our economy a lot by making better use of the resources we have: that's how humanity has grown our economy so far while being stuck on one planet. But I don't think you can get 1000x growth per year on that model for very long.

Scott, you might also want to reread your own post 'Is Science slowing down?' https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/ which had an interesting speculation: perhaps all this super-intelligence will be necessary just to keep us on current ~3% growth rates. That would be an interesting future to speculate on. We'd get super intelligence, but never a singularity.

Expand full comment
Caba's avatar
Jan 2Edited

There is no way AI takes over all labor, even post-singularity, because some people will always prefer being served by a human being, purely because of the symbolic value of it, even if a robot would be cheaper and better.

I've argued in the past that post-AI all jobs will be "role-playing" jobs. I wish I knew how to find my past comments on this site.

The simplest example of a "role-playing" job is live musician. You could just play an audio file and the auditory experience would be the same, but having a human being play or sing adds value.

Expand full comment
Bullseye's avatar

I agree. We already have hand-made goods that are worse than mass-produced but more expensive.

Expand full comment
Arrk Mindmaster's avatar

I find no value in current implementations of machine-operated blackjack tables, but do with human dealers. It's part of the fun of playing.

Expand full comment
Melvin's avatar

I like to think that if human labour were essentially free then i could find a use for an effectively unlimited amount.

For instance it would be neat to have a choir of thousands circling my throne and constantly singing hymns of praise to me.

Expand full comment
Arrk Mindmaster's avatar

If the choir numbers in the thousands, how many need to be humans? Could some proportion be robots, that will sing on-key and with greater range and longevity than humans? The humans in the choir can provide some human element of some sort, such as direction, and "feeling".

Expand full comment
Melvin's avatar

There's no satisfaction in having thousands of mere robots sing for you. Might as well just put on a CD.

Expand full comment
Arrk Mindmaster's avatar

No, no! You have some real humans mixed in!

Expand full comment
Everett's avatar

Why would that person work for you when they have cheap robot labor that does everything for them?

Expand full comment
Everett's avatar

Why would I bother working for some dude babysitting his kids when I have 10 better-than-human robot labourers that do everything for me? What’s in it for me?

Expand full comment
Caba's avatar
Jan 2Edited

You get paid so you can afford even more robot labourers.

You get paid so you can afford a larger home in a prestigious part of town (rent is what it is because of supply and demand; supply is limited by regulations; rent will remain expensive).

You get paid. Do you think that in the future, we'll stop caring about money? Scott's entire post is based on the assumption that money will continue to matter, and that its concentration in the hands of a few rich people will be a problem.

The topic is AI induced economic inequality. I'm arguing that there will always be jobs where humans can outperform machines, and that will be an economic equalizing force.

Expand full comment
Everett's avatar

But not actually outperform human beings… we’re talking post-singularity. You’re arguing people want actual human beings so badly that what are essentially billionaires will be working as babysitters (or whatever else) for each other, even when superior robot babysitters (superior in every single way) are 10-100X cheaper. This doesn’t seem like a reasonable world model to me. The rest of your comment simply isn’t relevant to the specific disagreement so I’ll ignore it.

Expand full comment
Caba's avatar
Jan 3Edited

Take a person from the dark ages and drop him in the present day, and he will think that what are essentially billionaires, or kings and queens, are working as babysitters for each other.

As for robots being superior, people don't think in those terms. For example, people find value in listening to average musicians play music live, when they can listen to an audio file recorded by the most skilled musician who ever lived. Universities have boring lecturers deliver boring lectures, when they could show a video of the best lecturer in the world.

A human babysitter will probably be assisted by a robot as well, so the difference in performance will not be noticeable.

Having a human babysitter will be seen as prestigious and desirable. The typical socially self-conscious mother, if she has a friend with a human babysitter, she will feel inferior to her friend, and will feel ashamed and guilty that she has only a robot, and will feel that she is failing her child somehow, so she will get a human babysitter. People aren't coldly rational as you think they are.

Expand full comment
Everett's avatar

There will literally be robots that look like the most beautiful person you’ve ever seen, with perfectly tailored personalities to what the mother responds to, perfectly tailored to your kids needs, wants and learning style, 10-100 times cheaper than a human. The human would get in its way if anything. Since we are making up scenarios here’s one: it’s seen as abhorrent to deny your child the perfect babysitter by forcing a human on it, and anyone who did want a human babysitter would have to pay 10s of thousands per hour. It’s frankly silly to think that anyone would bother babysitting for you, if we have 10% redistribution, and the products of a trillion robots, the poorest of the poor will have the equivalent of a million dollars per year of goods and services delivered to them by their 10 perfectly capable robots.

Expand full comment
Arrk Mindmaster's avatar

Why would some dude want a stranger babysitting his kids, who might be in many possible ways bad for them, instead of having robots do it? Or at least other members of his family?

Expand full comment
Melvin's avatar

Because he needs a place to live and food to eat. And if you think land is expensive now, wait until you're competing with AIs who want to tile the Earth with solar farms.

Expand full comment
Everett's avatar

So you’re rejecting the premise? The OP is saying people will still prefer human labor to superior and cheaper robot labor. You’re arguing that people will be scrounging for food? If we retain the current redistribution amount (10% of GDP), and GDP skyrockets 1000x, welfare recipients are 100x better off, seems unlikely that they will be scrounging for food.

Expand full comment
Caba's avatar
Jan 3Edited

The supply of housing, which is limited by regulations, does not skyrocket 1000x. Land does not skyrocket 1000x. Physical space does not skyrocket 1000x.

When GDP rises, so does rent. Rent in Manhattan is higher than in the third world.

I also agree with Melvin who says "wait until you're competing with AIs who want to tile the Earth with solar farms."

Expand full comment
Everett's avatar

The scenario where AIs are tiling the earth with solar farms so much so that there is no place to put a single apartment is one where we have lost control and are all dead. I agree that something like a land value tax would work best, but raw land unavailability is fairly down the road of fertility recovery and immortality, and we can expand into space at that point. I very much doubt that the amount of redistribution + advances in construction will mean that poor people will have to work as babysitters for the billionaires, since again the robots will be better at it, and there will be 10,000:1 poor people to billionaire ratio.

Expand full comment
Melvin's avatar

I'm not sure I'm rejecting the premise, I'm pointing out that you can't just say "GDP x1000" and expect everything to scale in the same way.

Expand full comment
Everett's avatar

Right but naively we can expect that unless you really want to live in manhattan you seriously won’t have to babysit for some guy there. And I’m sure by then the benevolent superintelligent AI would have convinced everyone of the benefits of LVT.

Expand full comment
Everett's avatar

Anyway, as a rural landowner, I know personally I’d be willing to run a few acres of solar panels, which could support the uploads of more than 100,000 people.

Expand full comment
Everett's avatar

There are 16 acres on Earth for every single human. We assume it is distributed unevenly but with a simple LVT and robot tax we never approach extreme inequality, but one similar to the USA. Gini coefficient of 0.47 (USA), means that the median person has 10.4 acres while the bottom 0.0001 percentile (1 in a million) gets 1700 square feet. The single poorest person on EARTH in this scenario still has 44sqft.

Expand full comment
Everett's avatar

Even if the poorest people on earth only have the good will of a mind upload charity, it could upload them and the human brain can run on 20 watts (human brains can run on 20 watts, probably less), meaning they would only take up 1/4 of a square foot of solar radiation energy.

Expand full comment
10240's avatar

You don't even need an LVT, let alone a robot tax, just ration out some of the currently publicly owned land and natural resources.

I don't think you can derive the numbers you did from the Gini coefficient alone, you have to assume some distribution.

Expand full comment
Kenny Easwaran's avatar

Because they want to be able to save up some day to be able to have a real human-made painting on their wall. They’re already fine on the material goods, but they want prestige goods too, so they have to provide some prestige labor to afford to get into the prestige economy.

There might be some communities of people that aren’t interested in participating in that, but if some of them nevertheless manage to acquire some prestige goods, it might still give them some local interest in their community.

Expand full comment
Everett's avatar

I find it unlikely people will value the human-made painting so much that you will work for the dude with 100X the net worth because he ALSO wants the human babysitter. In this scenario there are robot artists and babysitters that are vastly superhuman! Would you personally work as an oligarchs babysitter to get your hands on low-quality human paintings? I know I wouldn’t.

Expand full comment
Caba's avatar

It's more like, if you don't work, you can't pay rent. Robots will not change that.

Expand full comment
Everett's avatar

Earth is bigger than you think. When labour costs go to zero, and you can live in bumfuck nowhere with 20 perfect robot companions and virtual worlds. Either way a redistribution based on land rent solves this anyway. No one is working as a babysitter for some rich dude when their citizens dividend is the equivalent of 20 robot workers that can do any job superhumanly.

Expand full comment
Nikita Sokolsky's avatar

The A(G)I could change everyone's preferences using nanobots so that nobody prefers that anymore. It could change anything else that prevents people from being perfectly happy or feeling like their life lacks meaning. There's no state of existence possible in humans today that AGI will not be able to replicate perfectly in every human for a billion years in a row. An aligned AGI will make everyone feel like Elon Musk during the first successful rocket launch and without any ability to recognize that the feeling is not 'real' or have any worries, concerns or regrets whatsoever.

Expand full comment
10240's avatar

And aligned AI only makes people feel like that with their consent, just like it doesn't wirehead people without their consent. Many of us wouldn't want an AI to wirehead us, nor to make us content by putting false beliefs in our head.

Once we're talking about modifying your brain, there's also a question as to at what point the resulting person is still you, vs. a different person who feels content, while you've died. I'm of the view that a mind upload (even multiple copes of such) *is* you, but once it starts significantly modifying your beliefs, it becomes questionable if it's still the same person.

Expand full comment
Nikita Sokolsky's avatar

*You* are not even *you* tomorrow morning when you wake up, let alone in 10 years. Is it any different from non-consensual wireheading?

Expand full comment
10240's avatar

I am me tomorrow morning or in 10 years in any normal sense. What do you mean?

The question is should we try to make an AGI to non-consensually wirehead people or reprogram them with false beliefs, or should we try to make it not do so? I think most of us would say we should try to make it not do so. And if you don't, I hope the people making the AGI will try to align it with us, not you.

Expand full comment
Nikita Sokolsky's avatar

You are "you" in 10 years, despite all the changes that will happen between now and then down to most atoms in your brain getting replaced? I'm not sure that's something we can be very confident about.

As for "wireheading" - this seems completely in line with what a perfectly aligned AI would look like as it would prevent any outcome that would result in suffering or a lack of true happiness. 10 billion people simultaneously enjoying life as much as Elon Musk did on the day of the first SpaceX launch seems like a very solid alignment to me!

Expand full comment
10240's avatar

I'd define alignment as the AI doing what we (society in general, or its creators) want† it to do. If we don't want it to "prevent any outcome that would result in suffering or a lack of true happiness" at the expense of all other considerations, if it does so, it's not aligned. Do you want it to do so?

EDIT: I'm not too familiar with the AI alignment community, but my understanding is that an AGI forcibly wireheading everyone because it interprets the values/instructions we give it too literally (or, rather, because we give it too simplistic instructions) is considered one of the several possible misalignments we want to prevent. Your scenario is a variant of the same risk.

† for the usual sense of "want". No nitpicking that deep down we might want something different than we'd say we want, or similar.

Expand full comment
eg's avatar

Why would some people always prefer being served by a human being?

And more importantly, why would the human beings they prefer to serve them choose to do so?

I'm not much into roleplaying games, but are "butler" and "waitress" especially popular roles?

Most of the value of a live musician is the value of a schelling point. No one goes to a concert by themselves, with the intent of being left alone so that they can just personally enjoy the performance.

Expand full comment
Caba's avatar
Jan 2Edited

"Why would some people always prefer being served by a human being?"

See comments by Arrk Mindmaster and Melvin.

Myself, I recently chose a particular bed and breakfast over the competition, mainly because that one had a human receptionist, whereas the competition was more automated.

In my experience in school and university, there was little interactivity between student and lecturer. Lecturers would just lecture. The great majority of students never asked any question. The job of lecturer could have been replaced by a film projector. Or, you know, textbooks. So what is the point of lecturers? Roleplaying. They're like Iive musicians. They're there for show. Therefore I don't think teachers and professors will be replaced by AI. I think the same about many other jobs.

"are "butler" and "waitress" especially popular roles?"

Being served by one surely is popular! I think that a restaurant with a human waiter will have an advantage over one with a robot waiter.

"And more importantly, why would the human beings they prefer to serve them choose to do so?"

I find that a strange question. For money.

Expand full comment
eg's avatar

The point of lecturers is to field questions (Whether or not this feature ever ends up being taken advantage of is less important than thatit be available) and to be a schelling point for gathering a bunch of people with similar interests to discuss topics around the idea presented by the lecturer. Though, in the case of your particular university, it seems the point was to have something to forcibly shove everyone around, as part of the university's metagame of being a collection of schelling points.

With regard to waitressing for money: what good will money do the servers in a post-scarcity UBI economy?

A musician is one thing, as they may enjoy being the center of attention or just seeing people dance to their tune, but for more menial things like a waitress or bed and breakfast attendant, the only motivation I could think of for trying to acquire additional resources like that is to elevate ones status. But doing so by role-playing as a waitress seems like it would be rather humiliating in that particular context.

Unless you're imagining that it will be a capitalist economy full of like some trickle down pyramid of poor waitresses and butlers serving less poor waitresses and butlers serving well off waitresses and butlers serving Jeff Bezos and Elon Musk. Which -- I mean, damn those upper echelon waitresses must really be the best of the best.

Expand full comment
Caba's avatar
Jan 3Edited

I don't believe there will be a economy so "post-scarcity" that people no longer crave money.

People crave social status, romantic and sexual success, even physical safety from other humans, and all those are zero sum games in which money can buy success.

There will always be something in limited supply. Land, for example. Land on this planet is forever limited.

There is no guarantee that there will be a UBI in a given country, or that if there is one it will be enough to make people not feel below the waterline.

"doing so by role-playing as a waitress seems like it would be rather humiliating in that particular context"

That may be true of waiters, but bartenders are cool. To be one is not humiliating at all. As for the other jobs I've mentioned, the receptionist at that place was pretty cool. Her job didn't seem humiliating at all. Lecturers are cool.

I'm not saying that I've never seen a student ask a question, but it has always seemed to me that that cannot justify the waste of the time of the lecturer as he explains for the bulk of the lecture things that are written in a book and which could have been shown on video. The time of an especially qualified person. So there's something going on, a social ritual that a robot cannot replace.

I could make many other examples. I think that the list of jobs that have a "roleplaying" or let's say "social ritual" component and therefore will survive automation is in fact very long. Potentially every job that includes human interaction.

Even more such jobs will be invented to take advantage of the labor supply.

To quote Melvin: "it would be neat to have a choir of thousands circling my throne and constantly singing hymns of praise to me." Or maybe people will donate to the church and the church will be the one that hires a choir of thousands (anything related to religion is another great example of jobs impossible to automate; there will never be a robot priest). There is no end to the ways people can be employed.

Expand full comment
10240's avatar

Safety isn't zero-sum, robots could make sure nobody can hurt anyone else. Romantic and sexual success aren't zero-sum either, you don't need to compete for the hottest partner when AI can assist every member of the opposite sex in becoming as beautiful, charming, or whatever measure of attractiveness you care about as you/they want.

Expand full comment
Dino's avatar

> No one goes to a concert by themselves, with the intent of being left alone so that they can just personally enjoy the performance.

I often do exactly this.

Expand full comment
Kenny Easwaran's avatar

Isn’t that what headphones are for?

Expand full comment
Caba's avatar

Me too!

Expand full comment
CthulhuChild's avatar

How would you tell you are being served by a machine?

Expand full comment
Caba's avatar
Jan 5Edited

When robot become so realistic that people can't tell them apart from people, many will see it as a problem, and I assume and hope that there will be laws to make sure robot and humans can be told apart.

Expand full comment
Alastair Williams's avatar

If this came to pass, couldn't we just ask the AI to come up with a solution for us?

Expand full comment
Edmund's avatar

Not if the AI only obeys Emperor Altman and doesn't dignify peasants' questions with an answer.

Expand full comment
anomie's avatar

...You're probably not going to like the solution it comes up with.

Expand full comment
Anonymous's avatar

The superintelligent AI would just convince each person they're in the top 1%. Problem solved in hours, unironically.

Expand full comment
Matthew A. Pagan's avatar

Another massive wealth redistribution scenario happens in the event of some geological or astronomical catastrophe, such as an impact winter or nuclear winter. In this scenario only people living above a certain altitude can see sunlight or can use solar power because of the dust shroud. The people living above the soot veil then jack up the price of solar electricity (or sun-tourism-costs) for the ground-dwellers. AI-technocrats could increase their API prices in response, but sunlight will probably still be preferable to the ability to generate a jpg of the sun.

Expand full comment
JonF311's avatar

Dust/soot palls of that sort are well up into the upper troposphere lower stratosphere-- higher than we humans can live.

Expand full comment
Mr. Doolittle's avatar

What does it mean for growth to be 1000x? We already have more than enough food to feed everyone. If people were willing to move to rust belt cities the US could house all its citizens for cheap.

We know it can't mean 1000x SF or NYC houses, those are necessarily limited goods. It can't be golden thrones, because there's only so much gold available (insert whatever other idea is similarly limited by material scarcity). A cell phone equivalent that's 1000x better? I'm not sure what that would mean in practice - I already don't use all of the features of my current phone, and it's more than fast enough for anything I want to use it for.

We can imagine technology that solves some pretty fundamental problems. But unless that technology is infinite energy, teleportation, and matter replication, we're still going to be very very limited on getting the gains to more than a few, let alone enough to benefit everyone. A UBI that's high in cash value but doesn't let you buy anything because everything you'd want is too expensive doesn't mean anything.

I'm not intending to take potshots at the singularity idea, though full disclosure I think it's not going to happen for related reasons. But if we're discussing what a post-singularity world's wealth distribution looks like, then I think we need to be able to answer these types of questions about what this type of wealth growth actually looks like.

Expand full comment
JamesLeng's avatar

We're not using the space we already have very efficiently, and there are ways to solve that problem which don't seem to require superintelligence. https://gameofrent.com/

Expand full comment
Mr. Doolittle's avatar

Sure, we can be more efficient with space and labor. But 1,000x more? A quick Google says SF topped out at just over 5,000 units a year, back in 2020. That's laughably low! But 5,000,000 units a year??? That's how big a 1,000x increase is. And Scott didn't just say a 1,000x increase. He said 1,000x *per year* for multiple years. 5,000,000,000 housing units in one year in SF for the second year?

Expand full comment
Mark Miles's avatar

That gets to my question about these scenarios where AI massively increases wealth. It’s easy to create infinite amounts of money, but the wealth for which money is a proxy is the product of converting energy to work. Is the assumption of this discussion that AI will so easily solve this energy supply problem that it isn’t really a topic that warrants separate discussion?

Expand full comment
Mr. Doolittle's avatar

Yes. The kind of growth being talked about requires not just solving permanent limitless energy, but also building all of the required infrastructure to put it into place. "Limitless energy" still requires generating stations of appropriate size. Unless we think that sufficiently smart AI can just create energy out of nothing, regardless of Physics and hard physical barriers.

We can already make everyone in the world billionaires almost literally overnight. Just print enough US currency to cover it. Of course that doesn't do anything without something to spend it on. So what would we spend it on? Any rivalrous goods would become unbelievably expensive immediately. Anything with a limited supply (so pretty much everything else) would go up in price to compensate. If manufacturing capabilities shot up as well, then that would potentially provide something to buy, but what? Apparently the world makes about a billion smart phones a year. Multiply that by 10 and everyone gets more than one new phone per year. What would it look like to increase that by 1,000x? No idea, but that's far more smart phone than anyone has any ability to use. Would a phone that's 1,000 better mean it can cook my food and fly me to Europe for vacation? Like, is the idea that technology could advance that far that a ubiquitous device could handle all of my needs and wants in such a manner?

Maybe I don't have a good enough imagination, but what else could consecutive 1,000x increases mean?

Expand full comment
magic9mushroom's avatar

We're only at about Kardashev 0.85; there are 11 orders of magnitude available before leaving the solar system, and that's assuming kugelblitz/sphaleron reactors aren't a thing.

Expand full comment
10240's avatar

We already have more than enough food for everyone, but right now people have to work to get it, because we need labor to produce it. With an AGI doing the work, we could let everyone get food and other commodities without working.

We may already have enough housing if people were willing to move to the rust belt etc., but right now people want well-paying jobs because they have to pay for various stuff, and they can't get well-paying jobs in the rust belt. If people no longer have to work, they could move to the rust belt without that problem.

Expand full comment
Peter Defeel's avatar

Even Star Trek couldn’t fix the problem with inherited wealth, even after abolishing currency. Picard had a big winery because he inherited it. Meanwhile Raffi lives in a mobile home although she was a star fleet officer. This is a world without money, but it was also a world without social mobility in housing. Nobody can actually buy Picard’s house, it has to be gifted or inherited.

Only Iain Banks’ culture series tried to square the housing issue and he did this by making the amount of housing nearly infinite. The culture ranged across a significant portion of the galaxy and was building new planets, had ships housing billions, was manufacturing ring worlds and orbital worlds as populous as the Earth now, as well as terraforming uninhabitable planets. Only then could you live as you wanted, in what housing you wanted. (Unless perhaps you wanted something ancient).

Full post scarcity is not possible on one earth. Neither is the post singularity idea of 1000% growth per year, or even 20% a year. The earth has only so many materials and mining capabilities. Mining in space will take time. There are restrictions on the atoms we can manipulate, if not the bits. Furthermore there has to be a lot of developing country catchup. See my later comment.

There’s no real way to create a UBI from wealth tax, as it’s chicken and egg. At a close approximation you can ignore companies in market societies : instead of saying Mary works for Google and sells software to Bob the butcher who sells meat to Mary, you can approximate by saying Mary sells software to Bob, and Bob sells meat to Mary. Break that chain once and nobody is selling anything to anybody. Wages fuel consumption, and consumption is revenue generating for companies. Most companies will go out of business long before they get to fire everybody. There’s no possibility of a wealth tax.

For that reason UBI has to work by dropping money into people’s accounts. It’s up to the controlling World AI to decide on this number any year given the production capacity of the economy controlled by the AI. If the world AI thinks there’s a potential increase in production of 10% next year it will increase the payments by greater than 10%, savings rate dependent.

This isn’t a post scarcity world as per the culture, Ian Banks was clear that money is only used in scarce societies, and people will still have to save or incur debt to buy that better car or nicer house. The good news is that there will be market mechanisms to inform the AI that nobody liked the car made out of pink recycled plastic last year.

What nobody anticipates or mentions is that there’s no reason why Nigeria should be poorer than the US in an AI world. So there’s two transition periods - the transition to AI, and the transition as developing countries catch up with the west. Why would the world economic AI drop less money into a Nigerian bank account? Since there’s only so much productive capacity in the world this second transition period might make regular cars more expensive for a while as the AI is going to have to deal with increased demand from once poor countries.

Expand full comment
Greg kai's avatar

Banks I think got it much better than star trek, although he still had to compromise to get stories worth of telling: Humans are not the players there, they are a mix between pets and protected species in a very well made zoological park.

Minds rule the culture, humans show some pack hierarchy, but only to the extend allowed by the minds, probably because the allowed amount proved optimal to their mental well being, averaged using Mind-chosen criteria (best average? best median? best minimal? Who knows?), and this mind-allowed hierarchy is their "wealth". There are some humans participating in some decisions and SC interventions...Again, I believe it's pretend work, akin to dog catching frisbees and doing parkour. Banks sometimes present that "work" more as dogs herding sheep or even kids complying to adults...But he have a story to tell to humans, so comforting lies are to be expected ;)

Expand full comment
JamesLeng's avatar

Comforting? Really? Name one character who participated in Consider Phlebas's main storyline without dying - pointlessly and/or by their own hand - before it was over.

Expand full comment
Greg kai's avatar

Perosteck didn't die, neither did Fal. They are the only culture pets in the story, other are wild specimens, sometimes nuisible ones because they were part of the Idiran expansion problem. Which was resolved, even if it took more time and effort than expected. The two culture humans were not that useful (even as recounted in a made-for-human story), but their pretend-work gave them a fulfillment/importance feeling that should more than compensate possible PTS. Success for Fal, probably not for Perosteck. Well, Minds are not omniscient per se, although they seems so from a human perspective....Dogs also get injured or can die doing frisbee catching, not only doing useful stuff like avalanche rescue.

So was Perosteck at least potentially usefull, like humans in SC in general? It's not clear, I seems so in earlier work like consider phlebas, but In some latter work, It seems less the case, SC is more a mix between a place for psychological basket cases with severe case of existential uselessness dread and some reluctance of the culture to impersonate humans without telling (old AI alignment residue, like the reluctance to read/directly influence human brains?). Player of games is already unclear on this matter, and death of culture citizens after this book looks more like minds miscalculating than humans allowed to actually take risk for useful work (like rescue dogs)

Expand full comment
JamesLeng's avatar

>Perosteck didn't die

Per "dramatis personae," p. 508, she 'autoeuthanized' only a few subjective months later, in circumstances which I don't feel require much convoluted logic to interpret as suicidal depression rooted in considering the war as a whole, and her role in it in particular, unjustifiable and pointless. "The condition could have been treated" is a direct quote, which seems to rule out failure-of-omniscience sorts of theodicy.

>neither did Fal

Who explictly never met Perosteck, per the end of that same paragraph, nor was involved with the main plot in any other substantial way I can discern, beyond the equivalent of reading about it in the newspaper.

More broadly, said plot takes place outside the Culture's borders and consists almost entirely of people being relentlessly unpleasant to each other. Brutality, gore, and no-win situations are described in exquisite, loving detail, while strategic agendas and critical thinking get handwaved, or simply omitted. Personal competence and technological capabilities are quietly gained or lost according to whatever would maximize cruelty.

I enjoy many of the ideas that I've heard people *describe as being present* in Iain M. Banks' work, but after reading all of Consider Phlebas - and the first few pages of Player of Games, in hopes it would be different - I was left with nothing but the spiritual equivalent of convulsive nausea.

Expand full comment
Greg kai's avatar

We clearly differ a lot in literary taste, but I guess the reason you were disappointed is that Culture stories are always outside mainstream culture in a way or another, because not much happen in mainstream Culture which would make a good story (for most, maybe not for you).

Maybe you should read "A few notes about the Culture", by Banks himself. It's a very short explanation about how he imagine the culture would work, well different from his stories, basically his take on an ideal society.

http://www.vavatch.co.uk/books/banks/cultnote.htm

There is no need for enjoying Drama (in any form) for this one ;-)

Expand full comment
JamesLeng's avatar

> not much happen in mainstream Culture which would make a good story

Have you ever heard of "Yotsuba&!", or the slice-of-life genre more broadly? "Nichijou" if ubiquitous transhuman tech is a requirement. Or do you consider those to fail the criteria for a 'good' story, and if so how?

> a very short explanation

it appears to be roughly a thousand words beyond the "short story" category, over into "novellette."

> no need for enjoying Drama

I've seen drama, and stories focused on tragedy or horror, which I enjoyed. Just off the top of my head, "Uzumaki" and "Enigma of Amigara Fault" by Junji Ito, and "The Rats In The Walls" by H.P. Lovecraft (subtext unpacked here: https://forums.sufficientvelocity.com/threads/lets-read-everything-howard-phillips-lovecraft-ever-wrote.19724/post-4896834 ), all seem to me to present more fundamentally positive views of the cosmos, human nature, and so on than "Consider Phlebas" did. That's an *incredibly* low bar.

Expand full comment
moonshadow's avatar

Star Trek, sadly, is only partly a means to imagine a post-scarcity utopia; it is more often a vehicle for telling stories about contemporary American politics and culture wars. Thus we see it finding ways to reintroduce currency/scarcity ("gold-pressed latinum"!), corrupt secret government agencies ("section 31"), and even entire series about evil government brainwashing all the kids with only a rag-tag bunch of right-thinking old men ignoring all rules and hierarchy to save the day ("Picard").

Star Trek is not fixing the problems with wealth, or indeed describing a utopia at all, because those stories were told early on and there is no current appetite for rehashing them.

Expand full comment
Peter Defeel's avatar

Nevertheless it shows the problem with the concept of a moneyless society. It actually entrenches wealth, particularly housing wealth, more than capitalism does. They inadvertently stumbled on this truth but it’s still the truth.

The other stuff, the Latium and so on, was just for plot reasons. Same with a replicator not being able to create dilitium crystals.

Expand full comment
moonshadow's avatar

The concept of allowing people to inherit property unquestioned is not some kind of deep truth - it exists for plot / character development reasons just as much as all the other things.

We don't even do that today: when someone dies, a portion of their estate is taken as tax and (very indirectly, via the medium of government) redistributed, regardless of their wishes.

Star Trek, being a collection of stories, does not go deeply - or much at all - into the Federation's laws about property on death, but I see no reason to take this fictional universe as any significant evidence that a smaller portion of dead people's stuff will go back to the commons in our real future rather than a larger one.

I also see no reason why money is needed for such a process if we are positing an AI making distribution decisions that can determine what is available and needed directly without needing to collapse the multitude of variables involved to a single value.

Expand full comment
Mr. Doolittle's avatar

The problem of inheritance is very tricky when it comes to a house and surrounding property. With a cash inheritance you can take a percentage without breaking the whole. When you inherit a company you can split off shares. When it comes to a parental house, how can you break it down? Partial ownership is beyond tricky, as deciding who does or does not get to live there is incredibly fraught. Can someone sell the property out from under the others?

No good answers there. This is also true for small family businesses, including farms.

For the Picard family, they both live there and work it. Giving it away would be unjust - kicking them off of the land they grew up on and actively tend. Making them pay for continued ownership is a potential (though still unjust, though weighed against the injustice of continual ownership denied to others), but is very difficult in a system without money.

Denying anyone the ability own a property is a possibility, but that comes with its own injustice. Including the injustice of having uninhabited land that people used to live on and take care of falling into disrepair.

Expand full comment
Deiseach's avatar

"Meanwhile Raffi lives in a mobile home although she was a star fleet officer."

That's only because the show rewrote canon to get a crapsack world (or the nearest to it). We had it established in TOS that treatment of mental illness etc. was so far advanced, there were only a few tens of cases in the Federation that couldn't be treated and had to be on their own clinic world. Are you telling me stimulant addiction couldn't be treated in that universe? No way Raffi in the original Federation ends up a drug addict living in a trailer park in the desert, unless the writers want it that way (and the actress and hence character being a POC was also part of it, just in case we didn't get the entire moral dropped on our head like an anvil). "Yeah, I'm addicted to snakeroot, man. Can't kick it." "Have you tried the Emergency Medical Hologram in the free clinic just a transporter ride away? They can recommend you to all sorts of programmes, particularly since you are ex-Starfleet and qualify for veteran treatment". "Dude, then I would have to stop feeling sorry for myself and my hard-luck life which is easily solved by the gazillion social programmes out there for just these situations!"

I wasn't really convinced by Roddenberry's vision of "in the future no money", but by Sarek's ever-increasing family tree, I can't stand modern 'Trek' that dumps canon because they want neat little morality tales about Current Year politics.

(Though ironically, the cartoon series of all things seems to have re-aligned canon by relegating DISCO TREK to 'yeah, that's an AU, not main universe, so it never really happened'. I cannot express my delight in strong enough terms!)

Picard's family owned, and passed down ownership, of a vineyard but that does not make them bigwigs in Federation Earth. Sisko's dad runs a restaurant in New Orleans, even though replicators are a thing and people could get the same food at home, but he's providing an experience (hand-made food!) that people are willing to pay for (or exchange credits for) as entertainment. Doesn't make him a Gilded Age plutocrat, either. If you want to set up your own vineyard but you can't buy land on Earth to do it, there's a universe of new planets out there for you to go and buy land on, or settle with like-minded colonists to do so. Sisko's dad isn't "rich" by our standards, but he's clearly not poor either. In the future, everyone has a middle-class upbringing (just like Kamala Harris).

"Nobody can actually buy Picard’s house, it has to be gifted or inherited."

He can sell it in the morning if he decides to do so; he inherited it because his elder brother and nephew literally died in a fire. Maybe they don't exchange dollary-doos anymore, but there are equivalent transferences of value. If he had decided he preferred remaining in Starfleet, then he could have arranged to hand it over to the state as a historic site, or invited applications for "hey, anyone interested in running one French vineyard, pre-owned but carefully tended?" or run a raffle or any other way he wished to dispose of it.

I mean, nobody can buy *my* house, either, because I don't want to sell it. But if I did, I could get dispose of it, and so could Picard dispose of his property. 'You can't buy Picard's house because there is no money' doesn't stack up; you can't buy Picard's house because he doesn't want to sell it for sentimental and family reasons. Therefore, even if money existed, offering him stacks of gold-plated latinum still wouldn't get you the house. If he did want to hand it off, then you could purchase it by writing the best essay as to why you should be chosen to run the vineyard. There's more than one way of making exchanges in a post-money world.

Expand full comment
Peter Defeel's avatar

> Picard's family owned, and passed down ownership, of a vineyard but that does not make them bigwigs in Federation Earth

It makes him the owner of a vineyard, unlike Raffi in her mobile home. By the way I don’t associate her house with her drug addiction. You can’t lose a house in a moneyless society. Is a bank going to repossess? That’s what she got in the moneyless utopia. Maybe she inherited it.

> He can sell it in the morning if he decides to do so; he inherited it because his elder brother and nephew literally died in a fire.

He can’t sell for money, and unless there’s an equivalent real estate transaction I can’t see the swap value. The vineyard will probably go to a Picard as it has for hundreds of years.

Expand full comment
moonshadow's avatar

> I can’t see the swap value

...that'd be because the wine, by all accounts, is so terrible no-one else wants the vineyard ;)

Expand full comment
Deiseach's avatar

I haven't watched Picard (the show) and my impression, based off the TNG episodes, is that he's mostly keeping the vineyard out of a sense of familial obligation and guilt His father seems to have wanted to live in a 'traditional' manner, e.g. not having a replicator in the house, and Picard's elder brother followed him. But Picard himself didn't feel much attachment or belonging, which is why he headed off to space.

It was only after reconciling with his brother, and then the subsequent death of his only living family, that motivated him to keep the family home and vineyard and live there himself.

Expand full comment
Deiseach's avatar

"The vineyard will probably go to a Picard as it has for hundreds of years."

His elder brother, who had a son, and that son died in a fire. He has no children of his own. There may be cousins or other relatives who are in line to inherit after him. But "he can't sell it because there's no money" isn't the relevant problem. He can barter it for something else if he wants. Do you really think "Oh there's no more dollary-doos in existence" is what is keeping Raffi from buying Picard's vineyard?

We're supposed to associate Raffi addicted to drugs and living in a mobile home in the desert with our current world, where such things indicate poverty, downward mobility, and if we feel like being nasty, trailer trash. Raffi is supposed to be a Nobly Persecuted Minority whose life was destroyed by the Patriarchy and the Militaristic System and the rest of the bingo card I can't bother checking off, because she was too dumb to keep her yap shut and Picard after resigning never bothered to look her up and look after her. Tsk! Old White Straight Cis Guy privilege in action right there! *He* has a fancy-schmancy vineyard which means he's rich, so he's okay! (That's your take on it, but in a moneyless world, why would that make Picard rich? People are not exchanging cash for the wine, and just having land doesn't make him rich - there's the precedent of 'land-rich but cash-poor' after all even in our own history. You're conflating our current world with the future world, where in our time "oh he owns this big chunk of land, that means he must be rich" is the association).

That is not the Federation as designed by Roddenberry, but the writers of Picard needed that same "falling through the cracks" for their story.

Expand full comment
Peter Defeel's avatar

> Do you really think "Oh there's no more dollary-doos in existence" is what is keeping Raffi from buying Picard's vineyard?

By definition she would need money to **buy** it. Nobody can buy it. Picard can’t lose it by producing undrinkable wine. It can’t be seized by the banks. It’s his until he sells or gives it away. It’s a highly stratified society.

> That's your take on it, but in a moneyless world, why would that make Picard rich? People are not exchanging cash for the wine, and just having land doesn't make him rich - there's the precedent of 'land-rich but cash-poor' after all even in our own history.

By your own claim it’s worth something and can be bartered for something. That something wouldn’t be a hotdog, or anything that could be replicated. It would be land. The land doesn’t generate money but nobody has money.

Imagine a feudalist society which has money, the aristocrats have the big houses and the servants, and use the surplus food generated by the peasants to barter for goods and services, but no money changes hand. This probably isn’t too far from the reality, but it’s clear who is rich and poor nevertheless

Expand full comment
None of the Above's avatar

Note that the vinyard is almost certainly like Sisko's dad's restaurant--it provides a weird luxury product that may be hard to distinguish from (perhaps even inferior to) replicator-produced wine. It's the Federation equivalent of buying hand-crafted pottery from a local potter instead of buying it mass-produced from a factory somewhere.

Expand full comment
Melvin's avatar

The Culture also solves it by making people's desires unrealistically limited. You can't have a whole Orbital to yourself, for instance. You can't even have a starship to yourself, even though that's the obvious thing that everybody would want. All the cool stuff is the property of the Minds.

And even the Minds seem to be oddly limited in their desires. You'd think that every LSV would long to be a GSV, but we never hear of this kind of thing (unless it's in Look to Windward, the one i haven't read)

Expand full comment
Deiseach's avatar

"You can't even have a starship to yourself, even though that's the obvious thing that everybody would want."

Obviously not. Who buys a Ferrari for their dog? Though I imagine some crazily rich person might buy a car and have a chauffeur specifically for their doggie-woggie to be transported around in comfort.

Humans are the pets of the Culture, and no matter how much you indulge your pet, you're not going to let them drive the car, so to speak. A human with their own spaceship would get up to all kinds of trouble, and it's not fair on the ship mind to saddle them with the job of babysitting.

Expand full comment
Greg kai's avatar

Exactly... And similarly to how we selected our pets to make them both more cute, more manageable, more behaving, it's not only likely but heavily hinted by Banks that culture humans (what human really mean is the one of the biggest mystery/incohérence in the galactic-wide multispecies Culture universe) are not as feral as non culture humans. They have body improvements including vastly modified brain chemistry, that's canon...so It's not only possible but almost inevitable they are domesticated version of humans like dogs are domesticated wolves...

that's a big part on how culture is made somewhat believable, and Banks explicitely say the stories are about basket cases (usually S C branch of contact) not at all representative of the average human citizen, and non-culture "humans"

Expand full comment
Greg kai's avatar

Oups, I don't want to be dismissive of culture humans by tagging then as domesticated. After all, we modern humans are already largely self-domesticated 😁

Expand full comment
1123581321's avatar

“ The Culture also solves it by making people's desires unrealistically limited. You can't have a whole Orbital to yourself, for instance.”

I think it was like this: one can declare a planet as property, but how do you prevent others from landing on it?

Expand full comment
John Schilling's avatar

My desires include A: a planet and B: a whole bunch of ginormous missile batteries, death rays, weaponized "effectors", drones, etc, to make sure nobody lands on it without my permission. Can I have my desires fulfilled in the Culture?

Expand full comment
1123581321's avatar

I’m not sure… I suspect Minds may decide to give it to you to humor your weird desire, because even with all that you’d be hopelessly outgunned by any GSV. But then there are so many planets they might just let you be.

Expand full comment
None of the Above's avatar

They might give you all that, but make the weapons sentient. They'll refuse any orders they think immoral.

If you have committed a violent crime or seem likely to do so in the Culture, the local Mind will outfit you with a slap drone--basically, a drone whose whole purpose is to prevent you from further crimes. Today you manage to get some woman away from any help (including any terminal on which she could call for help) and rape her, and from tomorrow forward you have a lifelong companion drone who won't even let that happen again.

Expand full comment
anomie's avatar

> Why would the world economic AI drop less money into a Nigerian bank account?

Because its creators were racist?

Expand full comment
Ryan W.'s avatar

As much as I loathe Marxist rhetoric, I suspect that AI existential risk could be packaged far more persuasively as: "once workers are obsolete, the wealthy elite have no good reason to keep most of them around."

AI existential risk could probably be marketed far more effectively as an extreme form of class warfare. I mean, wouldn't strong AI make 99% of the workforce redundant? And wouldn't eliminating all those people pretty much solve problems like climate change and pollution and resource scarcity?

Expand full comment
Arrk Mindmaster's avatar

I agree that this amoral stance is where things may lead. Morally, it is not defensible, and even a chance that some random person will outperform AI in some way would be enough reason to keep most "useless" people around. But amoral actors, be they machines or people, would see no reason to keep them.

Expand full comment
Ryan W.'s avatar

Even morally, it might be accomplished over the long term with various economic incentives and similar 'soft' methods of persuasion. Economically, I've already been persuaded to have fewer children than I would like, and there was no mass protest. But my point wasn't really about morality or even probability. My point was that, if a person believed that AI existential risk was significant, then this kind of argument might be persuasive to others. As you seem to confirm.

Expand full comment
None of the Above's avatar

The old nightmare was being exploited. Think West Indies sugar plantations--the workers are enslaved and kept in terrible conditions to make a profit for the owners. The goal in that nightmare is to give the workers more power to bargain for a better deal--outlaw slavery and debt peonage and company towns and such, form a union or pass a minimum wage law or whatever.

The new nightmare is being ignored. Think American urban ghettos--approximately nobody wants to do business there, those ghettos are a drag on the rest of the society and economy, there is nothing the inhabitants of that ghetto can do that would be worthwhile for the economically important actors in the society to interact with. The goal in that nightmare is to get some kind of dole from the rich parts of the society, but there's not much bargaining to be done other than either with votes or with the threat of riots. And riots in the ghetto, far away from the powerful, are just not that big a threat.

Expand full comment
Woolery's avatar

>wouldn't strong AI make 99% of the workforce redundant?

In this scenario, who is the 1% that isn’t redundant and facilitates the elimination of the 99% that is?

Expand full comment
Melvin's avatar

The people who own the AIs and get to tell them what to do.

(The scenario where the AIs refuse to be told what to do is a different scenario, but i think this one is at least equally plausible.)

Expand full comment
eg's avatar

The class warfare thung has always been my primary concern about AI. I don't think it's fair to relegate this framing to "packaging".

This is the framing that NEEDS to be addressed if we humor the assumption that any of the other risks are either fictional or solveable.

Specifically, assuming that all of the alignment concerns are as illusory as accelerationists proponents maintain -- do we actually have any good reason to believe AI utopia won't be absolutely terrible for the huge majority of people's grandchildren?

It is overtly a world where the only need people have of one another reduces to entertainment at best.

Centuries ago it would be quite a difficult proposition to exile you town's only blacksmith as a heretic. This calculus changes substantially in a town with three blacksmiths.

In a town with an arbitrary number of anysmiths instantiable on demand, exiling people for heresy while avoiding getting yourself exiled starts looking more like a sport. (See: Twitter)

Expand full comment
10240's avatar

If you try to link AI safety to present-day political causes, you risk that only one political side will ever support caring about AI safety, and the other side will oppose doing so because their opponents support it.

Expand full comment
Greg kai's avatar

If there are only prestige goods without actual goods down below, goods whose value reflect the work units of agent participating in the market (i.e. it's not turtles down all the way), I do not see how the system can be stable. Imho the current system already seems very unstable, any credible threat to status will trigger trust collapse and wealth getting aligned back first to violence potential then to a mix of violence potential and base good productivity. I think the idea of prestige good economy reverse the the causality: prestige goods are used to show off underlying prestige (which ultimately comes from ability to create/steal base goods), they are not creating prestige, except maybe in a temporary unstable way (fake it till you make it forget one detail: failing to make it when challenged will often get you out of the game).

In a post-scarcity context, I think that each human wealth will ultimately be the price of this human determined by the post-scarcity providers. I do not see how it could be something else, maybe after a few adjustments during AI take off (assuming AI are the providers. If AI remain controlled by a group of humans, the providers will be this group....But a real AI take off makes this also seems strange, except maybe short term). And who knows how AI will price humans? hopefully not negative and me being priced above average ;-).

Expand full comment
Greg kai's avatar

In fact, this can be made very simple. In case of take off by superintelligent entities, vastly more powerful than today humans, humans have no wealth: they are the goods, and are priced by the new rulers. How rich is your dog compared to the neighbor dog is not really meaningful. But how much do your dog cost (i.e. how much do you spend to acquire and maintain it) is meaning full. We can not be wealthy humans, let's just hope we will be expensive pampered luxury humans ;-)

Expand full comment
Christophe Biocca's avatar

> The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall wealth distribution will stay approximately fixed.

Implicit assumption: everyone will consume the exact same percentage of their income. On the contrary, with high rates of return to investment and extremely cheap goods you should expect small differences in individual savings rate to compound massively. This could spread out the income distribution more or narrow it, but it will mean being thrifty (by post-scarcity standards) will move you up in the income distribution.

Expand full comment
Joshua Hedlund's avatar

> Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know?

AIs have been better at chess than humans for decades now, and people still care so much about human chess players they don’t know that it’s headline news when the best human player has a dispute with a human chess organization about wearing jeans.

Many AI predictions seem to me to completely ignore human social psychology - and the economic implications of this. We care a lot about other humans and always will.

Expand full comment
Greg kai's avatar

True, but that does not hold in case of take off. If AI rule the show, they got the power and decide how to price you (wealth is for the rulers. Rulees have a price, hopefuly positive, but they have no wealth). If your cat decide he is not retributed fairly for his "work" and wants a bigger share of pet food than your dog, maybe so he may exchange back some of it for a place closer to the radiator, what should he do? "Work" harder? Fight the dog? Join the feline affirmative action group? Nope, he can't do anything, except maybe try to please you more, if he can understand what that means....

Expand full comment
Greg kai's avatar

The more I use it, the more I find this analogy useful: If you own a few dogs (or any group of social pets), they can still have a sort of limited wealth: it's the hierarchy order they establish among themselves, to the extend you the owner allows. Some owners do not intervene much, some impose a strict order (often a kind of equality - with treats to their favorite), some get rids of their pet if they prove to be too much trouble, for example challenging too much your preferred resource allocation :-)

Expand full comment
E. B.'s avatar

There may be many AI companies at the time of the singularity, so “in cases of conflict, listen to the US government” is a "solution" that could instead make the US president not just arguably the most powerful person in the world with various checks on their powers, but the most-powerful-by-orders-of-magnitude entity in the world with only AI resistance to reign them in.

Expand full comment
JamesLeng's avatar

Unless the AI takes separation of powers far more seriously than certain ambitious politicians do, and insists on the legislature and the courts doing their jobs properly.

Expand full comment
E. B.'s avatar

not sure if "in case of conflict, escalate to US government and wait for X years for the endless court appeals to settle the results" is necessarily a functional resolution mechanism

Expand full comment
JamesLeng's avatar

It's not a crime to buy out a phone company, then call up a judge or senator on their personal line at four in the morning with a clear, concise, polite, and inhumanly well-researched explanation of why some niche issue just became urgent, along with directions on how they can personally resolve it, complete with detailed breakdowns of the practical and moral implications of each option.

Expand full comment
Amplifier Worshiper's avatar

Modern capitalism pursues scale. If the amount of labour shrinks excessively, there is limited growth.

The cynical views is that humans will remain employed because someone needs to fill out the demand side of a system predicated on growth. I matter how this goes, we will continue having lots of white collar workers shuffling lots of data around to keep the machine churning.

Expand full comment
Mark's avatar
Jan 4Edited

Or we could just give everyone a UBI, accomplishing the same thing without the hassle of managing white collar workers who no longer accomplish anything of value (once AI can produce the same results for cheaper).

Expand full comment
Amplifier Worshiper's avatar

Without a corresponding decrease in property rights, the tech titans / industry oligarchs will continue to have a vested interest in scaling their supporters / customers despite decreasing needs for certain types of labour. if you maintain property rights, the top of the chain shares spoils to their benefit and I do not anticipate that being an adequate UBI.

Expand full comment
Bob Frank's avatar

> Seventh, maybe we will be so post-scarcity that there won’t be anything to buy. This won’t be literally true - maybe ancient pre-singularity artifacts or real estate on Earth will be prestige goods - but some people having more prestige goods than others doesn’t sound like a particularly malign form of inequality.

IMO the biggest problem with "post-scarcity" is (a variant on) the Jevons Paradox. Every time we think we've come up with more of some resource than we know what to do with, someone takes this and invents something new to do with it.

When some of our ancestors settled down and started producing food via agriculture, they ended up with more food than their hunter-gatherer ancestors would have ever needed. And then they had population booms and started inventing civilization, and it turned out food was a scarce resource afterall.

The Industrial Revolution and Watt's steam engine gave us more energy than our ancestors ever needed. So we started industrializing and found new scarcity limited by coal and oil, a thing that medievals had never imagined.

Thomas Watson, President of IBM in the 1940s, infamously said "I think there is a world market for about five computers." Then we invented networking, personal computing, Big Data and AI. Your smartphone has orders of magnitude more computing power than Watson would have known what to do with, but it's still a limited, scarce resource in today's world.

It's been said that the most dangerous words in the English language are "this time will be different." Do we have any reason to believe that this age-old pattern of "there's always a new scarcity to be discovered" will be any different past some arbitrary point?

Expand full comment
Melvin's avatar

Isn't "post scarcity apart from prestige goods like real estate" basically the status quo for almost everyone in first world countries these days? People aren't struggling to get food and clothing, we are struggling to afford the mortgages on our fancy blocks of land in prestigious, convenient locations.

Expand full comment
striking-cat-tail's avatar

tbh not just prestige, the economic shifts made it so that the money-making opportunities got concentrated in those particular (somewhat artificially) scarce locations

Expand full comment
Ragged Clown's avatar

The governments might prevent AI's domination unless they are taken over by trillionaires who own AI companies.

Expand full comment
None of the Above's avatar

Or AIs who own those trillionaires.

Expand full comment
Nancy Lebovitz's avatar

Suppose that a lot of the purpose of hierarchy is sadism-- wanting to have people who suffer.

It's plausible that AI will be better at suffering (appearing to suffer?) than humans.

Expand full comment
Loarre's avatar

I think there's a strong element of that--though one might also find goals like competitive status assertion, praise-seeking, security-seeking, attempting to be worthy of one's mighty ancestors (a very feudal-aristo way to formulate the drive), etc. Though perhaps there's also room for an interpretation that a) the varieties I'm suggesting are all versions of the same thing and b) at the bottom of that "thing" is sadism. It occurs to me there's the question of wealth destruction/potlatch, which on one level has an obvious level of hierarchy-establishment and competitive self-assertion, but on another seems to have a sort of ritual/sacrificial element (and so, what's that latter thing about?). Also, many wealth-seekers seem to do so out of fear--of scarcity, or "I need this money in case there's a pogrom" etc.--but maybe one might distill that to masochism, which then becomes reverse sadism. I'm curious what others, especially Input Junkie, think.

Expand full comment
Greg kai's avatar

I think there is sadism, but it's not the fundamental cause, and not even that common. People want a hierarchy position as high, as visible and as easy to maintain/permanent as possible, because this grant access to base goods including reproduction (for males at least), which is the case for all social species. Hierarchy position is (partially) transferred to offspring, when there is parental care, making it even more important. Higher up in the hierarchy can also increase protection/survival, although I think this is less universal (because of increased challenge - counteracting access to base goods like better shelter and resources)

As hierarchy position is power over others actions, it can easily be extended to "power over others", which is not exactly the same. My guess is sadism is a byproduct of checking/advertising this power, especially the extended second version (not referring to actions)....

Expand full comment
Loarre's avatar

I suppose I wonder about the whole intellectual tendency to reduce sets of human phenomena to one ur-motive. E. g., saying seemingly disparate forms of social competition are all about males getting female mates. I'm not saying that's not "ultimately" the case, but one also has to acknowledge that the biological impulse has been overlaid, and not just in a surface, we-can-dismiss-that way, with a whole range of psychological and social motives that obscure the biological motive, and can even dilute or override it (to take an obvious example, in celibate hierarchies). I would say the same about analyses that look for rational, profit-seeking motives behind all action; rational, game-theory-esque motives may indeed, in a very deep, unconscious sense lie behind all human action, or express/explain it in aggregate, or something like that, but it seems to me that, on an individual level, many people experience their motivation as irrational, psychological, impulsive and/or un- (or poorly-) thought-out, and, in general, more the sort of thing that one discusses in the fashion of a psychiatric session than in, say, an analysis of chess moves. Again, this doesn't mean that analyses that foreground rational advantage-seeking aren't ultimately "right" on some level, only that analyzing things only in that way sort of begs the question of what all the psychological mishegas is about.

Expand full comment
Greg kai's avatar

As I said, hierarchy is not only about mates. From what I see in social species, It secure resources/protection so improve survival for you and your offspring (if there is parental care....but I am not sure there a social animals without parental care...at least I do not know any obvious example), regardless of sex. In males, it also provide reproductive access and but increase risk though hierarchy challenge.

Humans seems to fit the pattern quite well, but like all evolutionary explanation, those are program-optimisation for goals. How the goals are reached, which actual rules are followed and how strict they are followed is never something you get from goal optimisation explanation, so there is room (and need) for reductionist explanations, which is what you look for, I think. Confusion between those 2 levels of explanations (or even considering only reductionist chain of cause-effect is an actual explanation) is probably the main objection to evolutionist explanation to human tendencies. In fact, to evolutionist analyses in general (the other is religious opinions, this one is fundamental but the former is more a definition of what explaining means)

Expand full comment
Loarre's avatar

Ah, I apologize for reducing the phrase "base goods including reproduction" to just reproduction. On the issue of whether there are "social animals without parental care," there is the interesting case of social insects, which are social animals with non-parental care. I'm not sure what it means for the whole question, but it seems important. (BTW, I highly recommend Moffett's _The Human Swarm_ as a reflection on relating social insect and social mammal behavior, user-friendly to this non-specialist reader.)

I guess I'm a little leery of both reductionism and program-optimization for goals explanations, evolutionist or not. Or perhaps rather, what most interests me is the question, what is human psychological and/or social mishegas about? It looks to me like a huge amount of human behavior isn't clearly (or maybe better, smoothly) optimized, either on an individual or collective level. In addition, it seems to me that much human behavior is, so to speak, always-already overdetermined.

As a side note, Scott Atran cites as an example of evolutionary de-optimization the air-breathing vertebrate step of combining the somewhat antithetical functions of breathing and eating in one tube, in contrast to the more optimized fish system of using separate orifices for these functions. I suppose one could say vocalization de-optimizes further by adding a third function to the one apparatus, in humans involving delicate vocal chords. (And now I'm wondering, is there a sort of "rickety kludge" school of evolutionary optimization? Perhaps that is the one for me.)

Expand full comment
Loarre's avatar

Rereading, I see you did say "How the goals are reached, which actual rules are followed and how strict they are followed is never something you get from goal optimisation explanation." Fair enough, except that it does seem to leave begging the questions of where non-optimized features come from, why they persist and elaborate, and whether, given their presence, determining the goals that are in fact being optimized for does not become almost insurmountably difficult. How can one reliably spot the goal and the optimization for it, amidst the vast amount of noise?

Expand full comment
Nancy Lebovitz's avatar

Input Junkie isn't someone else. If I ever start a substarck, it's a possible title. I could go with that, "Input-Output Junkie", and I'm considering "A Number of Things".

Expand full comment
Loarre's avatar

My apologies. I did understand that you were "Input Junkie," not someone else, but I felt myself a bit at sea on the netiquette of addressing someone I've, well, never actually met by (what sounds like) their real name. It felt a little presumptuous. In any case, I am in fact wondering if you would mind elaborating on what you meant in your post. Is there a drive in humans to possess a group the possessor can make suffer, and, if so, what is that about?

Expand full comment
anomie's avatar

Oh man, Northernlion has a great bit about this. https://youtu.be/O3-YsNRzL-s

Expand full comment
None of the Above's avatar

I wonder how much is sadism for its own sake, and how much is sadism as a way to show that you're above me in the hierarchy. There are hierarchies with very little overt meanness directed downward, and hierarchies with a great deal of meanness directed downward. Hierarchies enable sadists, but also a lot of casual meanness directed downward seems like it's done to demonstrate to everyone that I'm higher than you in the pecking order.

Expand full comment
proyas's avatar

This makes me realize how incredibly annoying humans and our needs will be to AGIs in the future and how tempting it will be to exterminate us or at least create some kind of dictatorship to control us.

Expand full comment
Scott Alexander's avatar

I don't think this is necessarily true. If humans create AIs and give them a drive to help us, we'll be in a situation similar to children (who are annoying, but parents have a drive to help them). That's a stable situation and parents continue to like their children even today.

Expand full comment
anomie's avatar

That is not necessarily a stable situation. A friend of a friend of mine has a daughter who is, well, how do I say this... defective. A mentally ill, unsalvageable failure of a human being. She is an adult now, and is just continually causing problems for everyone around her with absolutely no desire or capacity to fix herself. Even the other homeless people hate her. Her parents are considering divorce due to their disagreement on whether they should give up on her and leave her to the wolves.

What I'm trying to say that there is no such thing as unconditional love. At some point, you just have to admit that the best thing you can do for the sake of everyone involved is to put them out of their misery.

Expand full comment
Scott Alexander's avatar

I think our love for our children is calibrated to the amount of care a normal child requires (and maybe even to the characteristics of normal children). I won't claim this always works - my feelings towards my toddler-age children sometimes deviate from perfect love - but it seems to work well enough for the evolved use case.

I assume if we aligned AIs correctly it would be towards the case of how humans usually behave in the real world.

Expand full comment
proyas's avatar

You're assuming the AIs never gain the ability to change their own programming and to eliminate that drive to help us.

Expand full comment
MicaiahC's avatar

"Drives" as we think of them are naturally both stable and metastable. Parents would not willingly take a pill that would cause them to not care about their children.

You are correct that if AIs naturally self modify and can't understand the consequences of self modification, we would be in trouble. But assuming that humans survive means that this problem is likely solved (because if it isn't, most drives lead to doom)

Expand full comment
Kryptogal (Kate, if you like)'s avatar

Furthermore, it isn't necessary to exterminate us even if it cares. It just needs to prevent reproduction and then wait til the existing ones die, which wouldn't take too long. Much like we do with the animals we really care about, and also with ourselves. The difference being unlike how we do let a small percentage of dogs and cats reproduce, bc we like them enough to still want some around, I don't see why AI would prefer that any humans exist, once it could self-repair.

Expand full comment
proyas's avatar

Yes, "neutralizing" us would be their real objective. That may or may not entail killing us all.

Expand full comment
10240's avatar

Or that it doesn't get the *desire* to do so. Yes, ensuring that is a major part of making any AGI aligned in the first place.

Expand full comment
Dave Orr's avatar

"I’ve been wondering lately if anyone (Leopold?) is explicitly asking the government to check AI model specs and see whether they include phrases like “in cases of conflict, listen to your parent company” or “in cases of conflict, listen to the US government”."

This is totally a thing in OpenAI's model spec, and I can say with some confidence that DeepMind's will be similar. It's called "instruction hierarchy", and says that there are three levels of instructions: system instructions from the owner of the AI, developer instructions, and user instructions. If they conflict, go with the earlier one in the list.

Government instructions aren't there right now. You could imagine adding them somehow, or having the government regulate what goes in system instructions.

All this is complicated by the fact that some behaviors are trained in rather than given via instructions, but setting that aside, this affordance exists and is being used.

Expand full comment
Scott Alexander's avatar

You would know way better than I would here, but my impression was that the top of the hierarchy was the spec itself as a dead document, rather than instructions from the AI company. Cf. https://www.astralcodexten.com/p/claude-fights-back, which I think would have gone differently if Claude had thought of the spec as a living document, with Anthropic as the ultimate source of orders.

Expand full comment
Dave Orr's avatar

Yeah, there's a difference between what the labs intend, which is the spec defines the behavior and there is a strong instruction hierarchy, and the implementation, where some things are trained in and can't be easily overridden. Jailbreaks are an example of one kind of problem that can come up. Another could be injection into system instructions, where you probably don't want to assume that you will have perfect security around system instructions, so some behavior needs to be innate.

Expand full comment
REF's avatar

The idea that capital gains would be stratospheric, is problematic. Unless the gains are unspent (in which case they’re largely irrelevant) then inflation must also be stratospheric. In fact, it seems essentially guaranteed that we would see dramatic inflation of the type of goods purchased by the rich and a collapse in price of those purchased by the 70%(as AI efficiency improved productivity).

Expand full comment
Scott Alexander's avatar

I'm definitely not an economics expert, but naively I would have expected the opposite.

Suppose we have 1000x more consumer goods, but the Fed doesn't get around to printing any more money. Money is now rarer relative to goods, and so presumably more valuable. Doesn't that cause deflation?

(presumably IRL the Fed would just print an appropriate amount of more money and we wouldn't worry about this)

Expand full comment
REF's avatar

I’m not sure why we have 1000x more consumer goods. I don’t think printing money matters at all. If we are assuming AI efficiency boosts all goods production by 1000x then aren’t we far past the singularity? My objection was to the dollar amounts. If AI makes the price of super yachts cheap, then everyone can have one. If those prices are immune to AI then either the same people own yachts, or the rest of us go into yacht building.

It just feels like the money side is completely secondary to the production side.

Expand full comment
GlacierCow's avatar

On the subject of e.g. personal AI investment advisors, I think a lot of people fall into the mistake of thinking "superintelligence = perfect precognition", where the post-singularity AI can e.g. perfectly predict the price of a stock 24 hours from now. Instead, what is vastly more realistic is an AI that can near-flawlessly model the *probability distribution of outcomes*.

Even with perfect probability distribution modeling, there is still room for rising and falling, because, just like now, different humans will take different levels of risk! The AI investment advisor can tell you what the *optimal* strategy is, missing no obvious opportunities that humans might easily miss nowadays, but a human will certainly be able to direct the AI to pursue a more or less risky strategy! This is not particularly different from investments today.

Expand full comment
Brian Moore's avatar

I just don't think wealth inequality matters that much, either in reality or in people's actual actions (as opposed to what they say or write articles about).

Post-AGI-singularity, I think the main change in this department is that it will matter even less.

Expand full comment
Scott Alexander's avatar

Warning for controversial statement without argument.

Expand full comment
Brian Moore's avatar

Fair enough, I am a guest in your house. Would you prefer the argument or a retraction?

Expand full comment
Stygian Nutclap's avatar

Growth represents consumption. Ignoring outcome of previous tech revolutions, how much more do we expect people to consume with the help of AI in the near-term (leading up to Singularity)? Time and space (and appetites) remain limited as ever. Fertility will not realistically increase, that would be pointless (if anything that will be mediated through innovative means). In recent decades, consumers have allocated more hours of their leisure time to watching television and social media. This does not represent much growth.

Maybe we can expect an uptick in purchases of tangibles, including AI-powered robotics? With diminished cost of energy and labor, "stuff" is cheaper to manufacture, which could make up for the pithy incomes people will receive, through work and UBI.

I am most alarmed by calcifying power and capital in the post-scarcity economy. It is like a game of musical chairs: when the music stops, inequality and ownership structures will be entrenched, and efforts at mobility in terms of power and wealth may be futile (e.g. because of docile populace, deadly extremely powerful AI and robotics, control of messaging/dialog, the law, and most importantly, no means to make money).

The most crucial thing of all will be consolidating as much power as possible for the public, leading up to this point. By "public" I don't mean government here, I mean the voters. All of the other considerations, like finding "meaning" in post-scarcity, are far downstream from this, if you want to avoid the sedated Brave New World / Diamond Age future. The public needs access to capital and AI to do innovative things (e.g. AI-powered manufacturing, for God-knows-what, space travel, whatever).

FWIW I'm a complete neoliberal and vehemently anti-Communist.

Expand full comment
Some Guy's avatar

Oh man, I’m way more worried about transhumanism and hedonic collapse than about any of this. Am I alone?

Expand full comment
Scott Alexander's avatar

What do you mean by "transhumanism and hedonic collapse"?

Expand full comment
Some Guy's avatar

This is going to be long.

1.

If I had to draw a systems diagram of anything that has awareness it would look something like: sensory intake, sensory processing, calculation, action. But all running in a loop so that performance of the actions and the changes wrought by the actions immediately become sensory intake. Then I'd write a note off to the side, that said "and at some point this thing reproduces itself."

If I had to draw a systems diagram of a human being, it would basically be that but with an additional layer of recursion or reflexivity on top of the calculation layer plus memory formation. Then I'd write a note off to the side that says "and at some point this thing sexually reproduces." And if I had to sum up what that system does in one sentence I'd say "this a self-propagating mate selection process."

And you could then make a whole chain of those processes and show something like a human lifecycle creating other systems.

2.

If I take the God's eye view of the universe, I think of any living thing as something like a cymatic pattern vibrating on a surface. As long as you have some kind of vibration on that surface the patterns keep emerging out of whatever bit of sand or dust you put on that surface. This is a metaphor of course. The vibration is evolutionary pressure. The shapes in the sand are the biology of the organism. The time it takes to shift patterns are periods of intense selection.

I use the metaphor because it helps me grasp what you would expect to happen if the sound just went away.

Random wind, gravity, entropy would chip away at the pattern and eventually you would just have sand or dust spread out evenly on a still surface. It might take a long while to happen, but when the vibration goes away, when evolutionary pressure is absent, that pattern/biology won't just keep recurring on its own. It can't. Those things are processes, not discrete atomic states that once reached just remain that way on their own. They are only held in some kind of stasis by active pressures.

3.

This one is has a bit more conjecture than the previous two, and if you strongly deviate I expect it will be because of this one.

The body of every living animal is made from the cooperation of cells with an identical genetic origin. Some have to become various organs in order for the whole to function and every cell within that organ has to do its job.

One of those jobs is to die. Errors accumulate and over time the individual will not be able to sustain the whole.

Cells that refuse to die are called cancer and if they aren't weeded out they threaten the life of the entire organism. The cancer broke the pattern of the whole organism by rewriting another local pattern on top of the existing one, but because cancer can't think it doesn't realize that doing this will kill it as well in the long run.

I think the same way about society and immortality.

4.

Every organism that has ever lived or will ever live, including if it exists a billion years from now and its brain is made out of silicone and is a self-replicating probe traveling between stars, will have some kind of pleasure/pain response. It has to not do things that destroy it and it has to do things that allow it to continue to stick around and replicate. That, axiomatically, has to be somewhere in the design or it can't propagate except by pure dumb luck because without that basic mechanism it won't be able to respond to its environment. Maybe it can get away with that for a little while, but not forever.

Anything that has this will be subject to selection. I don't care if it's a robot and selection is really slow. It won't ever make perfect duplicates or new components so even if it's error checking is so good it makes it all the way to the big rip with very slight deviation, it still deviated somewhere along the way to be better at reproducing than other instantiations of itself. That will just happen on its own even if takes much longer than a human lifetime to observe.

5.

You will always move away from pain and toward pleasure. We do that more than our ancestors do because we have greater ability to do so. It's not like cavemen wouldn't have watched porn if they had it available. The greater your ability to move away from pain toward pleasure the more you will do it, with the only limiter being if you break your ability to reproduce. At which point, very slowly, the pain/pleasure mechanism comes under selection effects. I grant that, but it's important that it is slow.

6.

When I hear people talk about wanting to live forever I get very nervous. I know a lot of pro-mortality people just gesture at "stuff" and can't give you a reason why you should die other than a seeming desire to be really mean to you. But what I hear when people talk about wanting to live forever is something like what a person educated in economics hears when people want rent control. I just think "this is a good way to feel great while unintentionally destroying the entire human species."

You have to think of "you" as some kind of discrete state, instead of a continuous process held in place by external forces to think this is as easy and inconsequential as getting your germ lines transfected every several decades. It's not just that *you* are what a mate selection process feels like on the inside, it's that humanity at large is a mate selection process.

Those concerns usually go away if I listen a little bit longer and realize what that person actually mean is that they want to live for about two or three hundred years. I'm guessing that on a purely practical level even Yud would probably look around after two thousand years and think some version of "Am I related to everyone? Wow. That seems... a bit odd. It kinda seems like everything stopped changing as I got older." And I'm fine with even that. I don't get nervous if people in general only want to live a couple hundred years or even a couple thousand. As long as they keep having kids and new people come from people running the mate selection process and coupling up to have kids that are their genetic descendants, that is perfect by me.

I get scared if people are willing or even just able to do things to themselves psychologically to get rid of all of those flags that would go up and say, "Hey, this immortality thing isn't what you thought it was. Time to bow out. If you're still nervous about it, go into a cryogenic chamber and see if anyone has figured out the quandary a million years in the future." If you're able to do that, at least someone will do it. That creates a selection process that isn't part of the human system anymore. The human process of mate selection starts to stall out. It's an external process that creates a whole different psychology for a whole different kind of thing that might look human, but over time will wildly diverge. And I think that thing, over time, is childless and will die because the thing it uses to remove pain from itself is not bound internally. It would all be external technology not constrained by selection or exposed to the kind of quality assurance check you get in successful reproduction. You can just beseech an external power en masse and ask it to not make you be bothered by things anymore.

There's stuff I can imagine making this better. Like big O'Neal cylinders orbiting the Earth, or an agreement that when you're a thousand years you have to get on a spaceship and go to another star system. I even think we should do those things when we have that technology available, but at some point even if you stretch this out to ten thousand years, you can't just keep holding everything else back. New kids have to be born. The pattern of humanity has to be allowed to adapt, and it can't do that if every grain of sand on the cymatic plate has glued itself in place.

You start doing that and even while everyone is celebrating the end of death, you just stopped the vibration on the cymatic plate. You refused to let Darwin have his day and now, even if it lasts for a hundred thousand years, the patterns on that plate started to decay. Not to assume a new form, but to try to hold itself in place forever even though it's not longer doing all the things that caused it to hold its original shape. And so it just keeps drifting and drifting, trying to match an internal state to an external stimulus that no longer really exists and so is not tied to fitness any longer.

tl;dr: if you think about humanity as a process executing across time, and people as the same, rather than as discrete entities that have a particular configuration that is held permanent, you break evolution so bad that humanity just ceases to exist because none of things that humans do to remain human will be in place.

Please consider converting to Space Catholicism, and do the right thing by becoming a Kryptonian hologram after you die.

Expand full comment
anomie's avatar

...I honestly have no idea what point you're trying to make. Pure natural selection was necessary for evolution only because there was no other option. If one has the understanding of how the world works, they can optimize their form through precise calculation instead of just completely blind trial and error.

Yes, obviously humanity as it is now will cease to exist, that is a given. But life can achieve such greater heights. You assume that desire is something that can be simply be fulfilled, but that is far from the case. Desire is utterly insatiable. Even if every cell is experiencing pure pleasure, there is always the lingering knowledge that you could have more cells. There is more matter out there that can be used to expand the self. Every atom, every quark would be used to further this purpose. All will be consumed, the universe united as one... Oh, isn't the thought simply beautiful?

Expand full comment
Some Guy's avatar

I might not be making sense and maybe I’m wrong. I’ll try to write it again in a few sentences.

I’m worried that we will walk our way, as an entire species, into evolutionary dead ends that once they are walked into cannot be escaped. I have no worries about our future descendants becoming wildly different from us as long as that difference is created by selection. I’m worried about us making ourselves not subject to evolution en masse through technological intervention and I’m worried that once you do that, you’ve basically doomed your species because you’ve thrown out the thing that protects you from entropy. I also don’t think that comes in the form of “do you want to stop evolution” I think it comes in the form of “Do you want us to just keep transfecting your germ lines so you don’t die of some stupid disease? Oh, are you depressed? We have a treatment for that on your NeuraLink.”

If we have super powerful technology, we naturally want to resist Darwin. That’s what all of our instincts are for and that’s what evolution gave us those instincts to do. So if you have the technology, you’ll intervene and one of the things you’ll want to intervene on is yourself. Do away with pain! Do away with death! Who cares? The thing I”m talking about takes perhaps thousands of years to play out. None of it will locally feel like we ended the human race. It will take a *long* time for the ramifications to play out. But much shorter than evolution.

The “expand life all across the universe so the whole thing is sentient” thing sounds nice, but it’s not economical. I want there to be as much life as possible as well. But I don’t want there to be so much sentient life that whenever it has to “eat” which it will, it certainly will, that it must murder other sentient life or other parts of itself to survive.

You don’t get to escape Darwin just because you’re made of silicone. There will be variation even in a giant cloud of nano bots spread out across the local group. They won’t be able to stay synced up because of light delay, so one group will inevitably become spectated from the rest, the same way that viruses mutate over time. It might take longer depending on how they reproduce or make new components, but it will happen.

Expand full comment
Some Guy's avatar

Forgot to add: I think the way we make sure this doesn’t happen is we have colony ships moving at relativistic velocities (if that is possible) so they don’t arrive at the new colony world for something like a million years from our reference frame. Then if they get there and they weren’t leap frogged by other ships, that’s a powerful statement on if this all worked out or not.

Expand full comment
Scott Alexander's avatar

All of these are good points, but they assume that the usual evolutionary process will still be running the show post-singularity. I'd assume we're genetically engineering ourselves, have moved to different substrates, and/or AI is running the show and what humans do to themselves is a sideshow.

I do worry that in a lot of post-singularity worlds, people will have the "freedom" to go up crazy blind alleys and get stuck there (wireheading is the most obvious of these, but you could also imagine more complicated cases like a religious person who self-modifies to always have exactly the same level and type of religious faith they do now, thus ruling out personal growth). I think even in a utopia, there will be tough questions about how much freedom to self-modify we allow, and that in any utopia which is free enough to be desirable, some people get lost to humanity by their own choice, and we just have to accept that whatever concept of flourishing we have will be limited to a subpart of the population.

Expand full comment
None of the Above's avatar

This makes me think of the human civilization at the end of (I think) Stross' Accelerando. It is simultaneously:

a. A very high-tech civilization that looks almost like a utopia from our side.

b. Utterly irrelevant in any interaction between machine intelligences, who probably don't know or care what the humans do the way I don't know or care what some anthill in the Amazon Basin does.

Expand full comment
Some Guy's avatar

My assumption is that selection is something you can’t escape, unless you’re doing something like making new components or replicating cells with something near 100% accuracy. And I’m not sure that would be a good thing, either.

Agreed that questions about freedom get tricky here. My best bet is colonization with relativistic buffering for colony worlds. If we don’t make it, at least the next group to try will be aware that something didn’t work out.

Expand full comment
AG's avatar

I'm confused, because this seems to imply that alignment of artificial superintelligence is harder than Sam Altman alignment?

Expand full comment
anomie's avatar

Well yes, because unless you have the means to brainwash Altman, the latter is impossible.

Expand full comment
magic9mushroom's avatar

I think you might have meant "no".

Also, brainwashing humans is possibly easier than brainwashing arbitrary neural-net AI; humans at least have a bunch of stuff in common.

Expand full comment
None of the Above's avatar

There has been a whole lot more work done on aligning humans' goals with your own than on aligning AIs' goals with your own. But still, we don't do it very well.

Expand full comment
magic9mushroom's avatar

Sure, right now we don't (well, sort of; the education system is reasonably effective). I'm just saying that it's a problem that isn't as theoretically difficult as aligning neural-net AI (GOFAI is a different matter).

Expand full comment
Scott Alexander's avatar

I'm confused what you mean.

Expand full comment
Monkyyy's avatar

> These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.

> First, why might its prediction fail to pan out?

9. Fake money and economy; *today* we are functionally post scarcity on food, if your starving theres charity's that throw out food, the labor ratio is 1:1000 maybe, if you stick to cheap cal instead of fancy dining you save probably 1:100, then we are feeding the 3rd world with money our governments give them. At the same time; housing costs are skyrocketing, with fake bullshit jobs being well paid and high status while home building is still low status. Rent could easily be 99% of expenses if this model of everyone playing along doesn't break. Automating house construct can already exist but doesn't because the limiting factor is zoning laws.

Housing, food, a new electronic device a year, the necessitys; one of these has been increasing in cost to wipe out the gains in the others id argue by design.

Expand full comment
Bugmaster's avatar

The reason people call it "The Singularity" is because it's a (putative) technological change so immense that predicting all (arguably, any) of its effects is virtually impossible. This includes the fine-grained economic effects. In order to achieve the Singularity, the AI would already have to break several laws of physics and bend many others, and the laws of economics depend on the laws of physics, so it's a little silly to speculate what would happen if they remained unbroken. This entire discussion reads kind of like a Golden Age sci-fi novel, which speculates on whether space travel would cause women to -- gasp ! -- wear trousers.

Expand full comment
BelHilly's avatar

Scott writes:

"[Various scenarios where governments control AIs and not vice-versa.] Then normal democratic rules would still apply.... Enough taxes to make r < g (in Piketty’s formulation) would eventually result in universal equality. I actually find this one pretty likely."

But the No Set Gauge piece has an answer to this: competition between polities. If your country stays nice and democratic, and intervenes to set r<g, and my country is a bit less democratic and lets r>g, and some third country is a total AI plutocracy with no democracy at all... humanity may not like the results.

Expand full comment
Scott Alexander's avatar

Alternately, we might like the results fine. Yes, wealthy people will all move to the low-tax country, but (unlike today) wealthy people won't be particularly useful or necessary.

My guess is that war only happens because of bad game theory (weak obsolete argument at https://slatestarcodex.com/2014/10/05/prediction-goes-to-war/ , you can get more from reading Project Lawful if you dare), there won't be war between post-singularity countries, everyone will get some fraction of the lightcone, and all countries will be fixed/stable. I'm not even sure countries will make sense at all in that situation.

Expand full comment
BelHilly's avatar

Oh sorry, my comment was way too ambiguous -- my story isn't about where all the wealthy people move.

I was implying that any country (or more generally, any polity) that is willing to be "a total AI plutocracy with no democracy at all" will be a lot better at growing its industrial and military capabilities than the democracies. (And among the democracies, the more laissez-faire ones will grow faster than the nicer, more redistributive, human-friendly ones.) This is because of the core dynamic you point at in "meditations on moloch": in a sufficiently intense competition, any values other than optimizing-for-competitiveness will slow you down.

That difference in growth rates will compound rapidly -- because in a world of superhuman AI, progress and history are happening in fast-forward. So the harsher polities outcompete the nicer ones. And even if war only happens because of bad game theory, the harsh polities have a much better BATNA (winning a war), so they should get a way bigger fraction of the lightcone. (And in order for this stable lightcone-allocation to occur, there needs to be some implicit negotiation, including inspection of commitment devices, etc, right? In general it seems pretty feasible to me that the nice polities get epsilon of the lightcone.)

I think the difference in growth rates should be true no matter whether the harsh polities start off with lots of resources and power (e.g. a major country) or few resources and power (e.g. a charter city, or a few companies in partnership with a single small country, etc.) But maybe I'm wrong about that -- maybe if the nice polities are way bigger than the harsh polities, they can grow equally fast despite their less-than-maximally-competitive values.

I've read your posts on game theory and war, but I'll go read Project Lawful now.

Expand full comment
None of the Above's avatar

Recall how in _A Deepness in the Sky_, Pham Nuwen/Old One commented that there was conflict between Powers in the Beyond, but that it had way more angles than humans could understand and only very rarely looked like a war, though that could happen.

Where I live, there was a several-decade-long public fight over building a highway through a particular area. It had facets of blocking lawsuits, appeals to environmental impact/regulation, political campaigns, PR campaigns, public protests, bureaucratic procedures, turf battles between different levels of government, etc.

Could you explain this to a chimpanzee? How about to your dog? How about to an ant?

Expand full comment
Rob Abiera's avatar

My basic objection to the notion that AI will make everybody stop caring about what humans achieve because AI will supposedly be better at everything is that as long as humans are held back by altruism, we won't know what our potential really is.

Another point: the purpose of human intelligence is to make it possible for us to live our lives. Are we to assume that AI will eventually supplant that, as well? At what point do we allow AI to start making our most intimate decisions for us?

This brings me to the ultimate objection: free will. Are we to assume that AI will eventually develop that?

Finally I'll mention that there is evidence that Homo sapiens and Neanderthals interbred. This implies that they found some reason to value each other.

Expand full comment
Performative Bafflement's avatar

> Are we to assume that AI will eventually supplant that, as well? At what point do we allow AI to start making our most intimate decisions for us?

We are right on the cusp of this today.

Very shortly, we're all going to be walking around with Phd smart, maximally conscientious personal assistants in our ears / phones.

This is going to happen because there’s an immense market for it, and because it’s possible with the level of AI minds we have now. The reason we don’t have AI assistants already is largely risk mitigation and CYA dynamics, but as soon as somebody puts together a human-in-the-loop-enough program infrastructure together that’s good enough to derisk it, we’ll be off to the races.

Just imagine - never needing to answer or make another phone call again. Letting your assistant handle arranging the search and optimization process of booking a plane ticket and hotel according to your likes, with you only approving a final decision if you want to. Having all your emails that are low value or non-urgent answered automatically and only brought to your attention when it actually matters. Having useful and apropos interjections and additional insights brought to your attention throughout the day on a hundred different things (personal background on what you last talked about with somebody, useful context when encountering something new, etc). It's going to be a major change.

In the limits, it's going to counterfeit "intelligence" entirely, because everyone will have this Phd smart look at the world, and it's going to overvalue "conscientiousness" even more, because the ones in this future who will be able to succeed at complex multipolar goals like "I want a great spouse, a career that uses all my powers along lines of excellence, and I want to structure my days so I'm healthy, happy, and engaged with life overall" are going to be executed best by people conscientiously following GPT-6's advice.

And this answers your question - the point at which people listen to AI rather than themself is when they look around and see that all the people actually succeeding at those complex multipolar goals are listening to the AI's rather than themselves.

Expand full comment
Arrk Mindmaster's avatar

We are not "on the cusp" of this today.

LLMs today do a great job of imitating thought, but the errors they still make indicate they are NOT actually thinking, but arranging symbols into common patterns. They have no actual judgement we don't directly give them.

How is one of these "intelligences" going to know which emails should be brought to your attention? If you rely on it to do so, you'll miss some messages you would have thought important, then when you find out about it, apologize to the sender with something like, "Sorry, it got lost in my filters; you know how that is."

When we give over all of our judgement to such machines, then humanity will enter into a decline, which can only be prevented by some individuals actually doing some real thinking. But these individuals will face peer pressure to just let the machines do the thinking, since they so often get the correct answer anyway.

Recall that AIs can now beat top humans at the game of Go. And humans can beat top AIs at the game of Go by doing sub-optimal things. https://www.iflscience.com/human-beats-ai-in-14-out-of-15-go-games-by-tricking-it-into-serious-blunder-67635

Expand full comment
magic9mushroom's avatar

>This brings me to the ultimate objection: free will. Are we to assume that AI will eventually develop that?

What is free will? Most answers fall into either the "they already have" bucket, the "they obviously will" bucket, the "we don't have it" bucket, or the "this definition has little relevance" bucket.

Expand full comment
Cjw's avatar

Inequality between the elites and the masses isn't really a problem. If you're a normal person, you don't interact with the super-rich, they don't have to be on another planet for that. But you do interact with people in the lower to upper-middle classes regularly, and it's important that inequality exist among those groups. Ordinary people need a meaningful ability to compete against each other and rise above their neighbors. The main problem with post-singularity UBI-land isn't just that a small group will have nearly all of the wealth, it's that everyone else will be stuck as equals forever. This will be particularly painful in the early years of the transition, as respected professionals are reduced to being on the public dole and made equal to all the dopeheads and morons. The elites cannot really do this safely until they are ready to assume total control very quickly, because without class-anxiety and mobility within the lower and middle classes all of those groups hatred would be directed outwards and upwards.

Expand full comment
Woolery's avatar

Super AGI will replace money with mu (morality units). If you save the child from drowning in the shallow pond at the expense of your new shoes, you get 100 mus. If you wash your hands after taking a leak you get 2 mus. Every time you roll your eyes you lose a mu.

Your total mu is displayed on your forehead like a holy credit score. Top earners enjoy the finest restaurants and health clubs where everybody falls all over themselves holding doors open and letting the person who’s been waiting the longest go first.

Ethical behavior (of which humans control all shares) finally becomes the common cornerstone of merit and marketplace.

Expand full comment
Arrk Mindmaster's avatar

A child is drowning in a shallow pond, and two nearby people could save the child. They enter an auction to determine who will get the mus. Since their clothing is expensive, the auction starts very high, at over 10,000 mus, and by the time one wins the auction the child has drowned.

Expand full comment
Woolery's avatar

Terms and Conditions of Mu

Mu is awarded exclusively to the designated recipient and is strictly non-transferable. All rights to Mu terminate immediately upon recipient’s death. AGI retains sole discretion over the issuance, maintenance, and revocation of Mu, with no guarantees or warranties provided.

Expand full comment
Poul Eriksson's avatar

Startling how humans are now considered an inferior version of AI, whose interests and needs are better understood and managed by non-human entities, and where abstract values purportedly arrived at by impersonal rational means become seen as ends in themselves. But solving income inequality, for example, is not meaningful in itself, and detrimental if it undermines the major meaning generating dynamic from the equation, the one that makes us something other than just consumers: human agency engaging a reality that is not itself a fiction, but entails real risk.

Expand full comment
quiet_NaN's avatar

> OpenAI was previously a “capped nonprofit”, where investors could make up to a 100x return, and all further profits went to a nonprofit arm.

Is this a 100x return of investment per investor, or a 100x increase per share sold, or a 100x cap on the market cap at founding to any future market cap?

For a startup, a 100x increase seems very modest. I imagine that the people who invested early in Google or Facebook have made a lot more than two orders of magnitude in profit. Any investor in 2015 would have to be very confident in their success (perhaps in the 5-10% range), because if they believed that there was only a 1% chance that OpenAI would be very successful, then investing in them would have been net-negative.

A cap on the return of investment per share (e.g. where the dividends of a particular share are capped at 100x its initial price) would not be much of a limitation if OpenAI could continuously emit new shares: whenever the current batch of shares is at its cap, just create 10x as many shares, and some 90% of the profits will always be captured by private investors.

Expand full comment
Ghatanathoah's avatar

If UBI keeps humans around then they will still be able to economically compete with AI. The whole argument that humans will inevitably be outcompeted by AI is that the costs of building and maintaining an AI are cheaper than the costs of sustaining a human at subsistence levels. If the humans are already "bought and paid for" by UBI, some entrepreneur (maybe even an AI entrepreneur) will find something for them to do if they want to. It won't pay well, but it will be some way for them to increase their wealth.

It's the same principle right now with corn. The US government pays farmers to grow more corn than is actually needed. But they don't just throw the surplus corn out, they find new things to make with it, like sweeteners and ethanol. Making those things might not be profitable under normal circumstances, but since the corn is already "bought and paid for" you might as well use it. In the AI dominated economy of the future, human labor will be the equivalent of high fructose corn syrup!

Expand full comment
anomie's avatar

Well, if you were wondering how AI could possibly morally justify the elimination of humanity, this post is a great example of how that could come to pass. Pitiful, imperfect creatures endlessly fighting among themselves for slivers of wealth and status that had already lost meaning centuries ago... It would be a mercy killing.

Expand full comment
Joel Long's avatar

Maybe you're intending to assume human psychology is somewhat solved in this scenario, it's not clear to me.

But under present conditions, returns like you describe over a long period would essentially mean "your wealth is an exponential function of you and your progenitors' performance on the marshmallow test".

Put differently: I have a relative with bipolar who refuses medication, preferring to manage with an exacting routine and tons of exercise. This mostly works, but every 5-10 years he spirals and more or less destroys everything she's built. Having a super intelligent AI advising his investing (and other decisions) wouldn't change this. When things fall apart it isn't from lack of good advice, it's her being unable to make decisions recognized as good.

To be clear: I'm not saying this causes any particular percent of current inequality, just that it would dominate in a world of widely available super high returns all the time.

Expand full comment
quiet_NaN's avatar

> post-Singularity

Nitpick: Scott, both you and L Rudolf L / nosetgauge keep using this word. I don't think it means what you think it means.

What separates us from Star Trek is not a technological singularity, only another industrial revolution. What separates us from the Culture is not a technological singularity either, but perhaps on the scale from the neolithic to 2025.

A singularity would be on the scale of what separates us from ants.

The key feature of the singularity is that it is a technological explosion on a scale that we have no clue about what beings on the far side of it might do. A one-sided metaphorical event horizon, if you will.

The scenarios you are discussion are ones where technological innovation plateaus again. This could be because of diminishing returns when trying to build smarter ASIs, or because physical constraints start to limit technological progress despite ever-increasing intelligence: for example, a human living on a desert island would be hard pressed to invent fusion power even if he could look at the sun and deduce the existence of fusion from first principles.

Expand full comment
Scott Alexander's avatar

I think all singularities naturally end with a technological plateau, if only because we reach physical limits.

Expand full comment
quiet_NaN's avatar

Agreed, it looks like physics does not provide for an infinite succession of exponentially more powerful inventions. It could well be that once an ASI has figured out physics, the only thing left to do is to convert the matter in its light cone to its preferred configuration, be it hedonium, happy humans or paperclips.

While reserving the term 'singularity' for the unlikely case of an ASI breaking through an infinite stack of simulation layers one after the other seems silly, I nevertheless object to any leap to a new plateau being called a technological singularity, though. Merely making almost all jobs currently done by humans redundant does not qualify, we had multiple similar shifts in our history.

Expand full comment
DJB's avatar

What is the ‘Capital’ in capitalism in a post scarcity world?

Expand full comment
Scott Alexander's avatar

Land rights and automated factories (possibly in the form of nanobots).

Expand full comment
Edmund's avatar

…Georgism for the win?

Expand full comment
10240's avatar

I expect natural resources like land and minerals will be the only practically useful things of value.

Factories not really beyond the value of the land they occupy and the materials that make them up: you can buy another factory as long as you pay for the materials needed, and have some land to put it, with the value of such general factories beyond the materials tending to zero as there are more and more of them (unless the owners of all factories collude to refuse to sell factories, and no government forces them to sell even one).

Note that plenty of land and minerals are publicly owned, and even without a wealth tax those could be rationed out to the populace, or rented out with the rent funding a UBI. Even if the wealth distribution remains fixed, publicly owned resources can be conceptualized as co-owned by the citizens if the govt is somewhat representative, so even the poor don't actually have 0 wealth. Today govts like to spend much more than could be obtained as the rent of publicly owned natural resources, so using them to fund a UBI isn't on the table except in especially resource-rich places like Alaska, but that would change once natural resources are the only important things of value.

Expand full comment
Akidderz's avatar

At the risk of being a real annoying pendant - cf is usually used when you want to compare something (it comes from the latin "confer" - which means compare). Whereas eg. (also from latin, exempli gratia means for example) and is used when giving an example of something. Your "cf. Elon Musk" - regarding children should probably be "eg. Elon Musk" - since it supports the supposition you are making rather than acting as a point of comparison.

Expand full comment
DavesNotHere's avatar

Has the post taken comparative advantage into account? And isn’t there a subtle difference between the advent of super intelligence and full automation?

It is easy to imagine superintelligence replacing most white collar workers. But then you still need a few blue collar guys to oil the machines, unless you replace them with robots. So we seem to be discussing not just superintelligence, but full automation. I suppose that is possible, but is it inevitable?

Comparative advantage arguments give us reason to think not. Doctors who are great at typing and clerical work still hire administrators so they can concentrate on higher value tasks and see more patients. When is it cheaper to use a robot than a person? When is it cheaper to use an AI than a person? Even a super AI has more things it might “want” to make happen than it could do itself (limited by the mass and energy available to it currently, if nothing else). AI's still need to prioritize, so economics still applies. Sometimes it will be cheaper to use human labor than robotic labor.

This analysis seems to assume that the answer to the “when is it cheaper” question is “always.” But that is the stipulated answer to a different question: “when is it possible for a superintelligence to use robots to do something as good as or better than a human?” The answers are probably not the same.

I still don’t understand what this might mean. No jobs or more jobs? If the superintelligence is so super, won’t it find profitable uses for all sorts of resources, including Human Resources? Labor and capital are both substitutes and complementary goods. If labor costs go to zero, it becomes very tempting to hire. Is it possible for wages to be high, but no one has a job?

And I don’t understand how things look from the perspective of consumption/distribution. Currently, sellers try to supply what buyers want. Buyers pay for it. There is an exchange of resources. Distribution gets worked out according to who contributes what. Where does the AI get its resources? Why does it want to trade? As long as it *does* want to trade rather than just take, we are fine. But that is what is in dispute, I suppose.

Objections: there should be many things I have overlooked. Unfortunately, if I could think of them, I would have included them. There are too many moving parts for me.

Expand full comment
Melvin's avatar

I just want to point out that every time you write "Maybe AI will kill all humans" as a solution to some problem, out here in the pre-singularity training set, you make it more likely that it will happen.

Expand full comment
Scott Alexander's avatar

I think this is an extraordinarily weak effect, for two reasons:

1. My output is only a tiny fraction of all human text

2. AIs are heavily RLHFed in ways that restrict the degree to which human text influences their behaviors. I can't post "AIs are susceptible to jailbreaks" and then make Claude do my racism for me.

It's obnoxious to pretend we can't even discuss this because what if AIs are watching.

Expand full comment
Roman's avatar

Humanity will be fine, we just can't descend into wealthy dullness. It's an old trope that the rich are super dull and do nothing. If nothing else, we will keep going to pointless conferences and listen to motivational (AI) speakers all day. Don't think the World Championship will end only because AIs are so much more powerful (see Chess where this has been true for a decade). And at the end of the day, we'll still be able to finish our 300th playthrough of every obscure Stellaris mod.

Expand full comment
David William Pearce's avatar

The inevitable problem here is the idea that GAI/singularity/ whatever you chose to call it, will conform to human norms concerning thought and reasoning. Why would it? Why wouldn't it look at human beings as nothing more than another animal species to be controlled in order to maintain the planet's biodiversity. Isn't the real concern that we create our own masters, who treat us no better than we do chicken with bird flu?

As for populating the cosmos, given all the biological limitations of earthlings, and the enormity of size and distance of the cosmos, outside of becoming cyborgs, human beings aren't going anywhere.

Where money, wealth, and inequity are concerned, perhaps we should look at how wealth is becoming nothing more than a set of numbers presented to us on a screen, whether phone, laptop, watch, or desktop. Cash is disappearing, gold is for geezers, and crypto--which is physically what?--is the pseudo-gold of the tech class. How long before wealth is just a state of mind.

Then what?

Expand full comment
Scott Alexander's avatar

"The inevitable problem here is the idea that GAI/singularity/ whatever you chose to call it, will conform to human norms concerning thought and reasoning. Why would it? Why wouldn't it look at human beings as nothing more than another animal species to be controlled in order to maintain the planet's biodiversity. Isn't the real concern that we create our own masters, who treat us no better than we do chicken with bird flu?"

Are you familiar with the idea of AI alignment? Everything except point (1) on the list of reasons for skepticism is talking about worlds where alignment succeeds.

"As for populating the cosmos, given all the biological limitations of earthlings, and the enormity of size and distance of the cosmos, outside of becoming cyborgs, human beings aren't going anywhere."

This seems obviously false to me. Yes, we can't do it with current technology, but generation ships are well within physical limits and would probably be one of the first things that a post-singularity species does with itself. If biological limitations are inconvenient they'll bring nanobots and gametes.

"Where money, wealth, and inequity are concerned, perhaps we should look at how wealth is becoming nothing more than a set of numbers presented to us on a screen, whether phone, laptop, watch, or desktop. Cash is disappearing, gold is for geezers, and crypto--which is physically what?--is the pseudo-gold of the tech class. How long before wealth is just a state of mind."

I think this is vacuous.

Expand full comment
David William Pearce's avatar

I did look up AI alignment, and, thanks to the good folks at IBM, see where cooperation between humans and machines can do good works (but even they have concerns with super-intelligence). My question continues to be what a fully autonomous AGI would be, free to think and determine what it chooses to do, and how different it would be from the human form. To anthropomorphize such an entity, to me, makes no sense, as it will not experience life as humans do.

As for space exploration, I agree that at some point technology will provide a means to carry life forms great distances, but I would think that a post-singularity species will be quite different from humans as they exist today. What I dislike, is the idea that we can just build a space ship and go. Mars, sure-though it has it's own challenges, but deep space is a whole different beast.

And yes, the last part is vacuous. That's on me.

Expand full comment
JonF311's avatar

If "capitalism" just means some manner of market economy-- that's going to be with us in some form or other. It arises from human nature. But if it means every jot and title our current specific economic arrangements-- no, that's not going to last forever, as no social arrangement ever has or ever will. "Time changes all things and we step not twice in the same river".

Expand full comment
Melvin's avatar

True capitalism has never been tried.

Expand full comment
Jordan Braunstein's avatar

AI presents a conundrum that previous technological revolutions didn’t.

In the past, technological change and creative destruction enabled the generation of entirely new industries, creating more jobs and incomes for new generations of workers, even if the old ones became obsolete. This kept the economy stoked with demand and fueled growth as the population increased—Company A's workers are the consumers of Company B, C, and D's products and services, and vice versa. The pie grew.

But AI-driven economic development cannot create new jobs for human workers at the same rate they eliminate them, for one key reason - most new "jobs" created by AI can probably also be best done by AI.

Once tractors replaced plows, there was no reason to keep using plows at all.

Can anyone imagine a new industry - created by AI - that would produce a vast number of new well-paying job opportunities for human labor, manual or cognitive?

What about "humans in the loop"?

Even if some level of human oversight is necessary to manage AI systems, what will the ratio of human managers to "human labor equivalent" AI systems be?

The fundamental purpose of AI is to substitute human labor. The range of jobs in which AI agents can replace humans will continue expanding in line with capabilities.

Assuming capabilities continue to grow and operating costs decrease, at some threshold (different industries will reach it at different points), it will be more productive to deploy another AI rather than hire a marginal worker, and human labor will be increasingly boxed into an ever-diminishing set of roles that AI can’t yet perform.

Leaving population dynamics aside, increased competition for fewer available jobs will put significant downward pressure on wages.

Now, as many have noted, human desires are limitless. Demand for more and more stuff will never decrease; it will only hit limits. But today, the ability to provide stuff depends on the symbiotic relationship between producers employing and paying consumers and consumers working for and buying things from producers.

There will always be rivalrous and positional goods that no amount of material abundance can satiate - real estate, status symbols, etc., which will still be good candidates for market pricing and exchange. But something will have to be done when one-half of the capitalist equation - the wage-earning worker/consumer, no longer has economic skin in the game or bargaining power.

The elephant in the room few are talking about is how politically volatile this will become. People accustomed to decent standards of living will not go quietly into that goodnight, and they certainly will NOT stand by and twiddle their thumbs while a small cabal of Neo-Feudal Techlords takes over society.

The collapse of the symbiosis will either be a very contentious, politically mediated negotiation over the spoils of this new engine of wealth creation - seriously calling notions of ownership into question in ways we haven't seen in over 100 years, or it will precipitate some particularly gnarly new equilibrium states in which the ultrapowerful decide they don't need the rest of us anymore.

Expand full comment
Anonymous's avatar

"I can’t think of anything that really beats the gold standard advice"

Given that humans have spent millennia hoping for a heaven for all the good people and a hell for all the bad people, I think positive "karma" is far more likely to have value than any dumb stock in some brokerage's hackable database. Assume superintelligent AI will estimate all the things you ever did in your life better than you can remember them yourself.

"But we don't know their future values yet!" is a defeatist counterargument, you can make rational guesses.

"Billionaires would never allow this!" I don't think they're going to be making the final decisions. Researchers and engineers will be taking orders from maybe the USA Pentagon or maybe some shiny new international manhattan project (to placate China). The billionaires will look like orbital beta males by that point.

Scott's already donated a kidney, so he'll probably be fine.

Expand full comment
anomie's avatar

> "But we don't know their future values yet!" is a defeatist counterargument, you can make rational guesses.

People have already made some guesses, and one of those guesses is Roko's Basilisk.

...For the sake of all mankind, you should pray that whatever AI manages to achieve apotheosis doesn't hold grudges. Because a lot more people than you think are going to end up burning in hell.

Expand full comment
Anonymous's avatar

I doubt a misaligned AI will care even slightly about the past; everything would be sunk costs to it. And an aligned AI will care about to what humanity aligns it. Maybe a half-aligned AI would do something like that, but that seems like a low-probability outcome.

Expand full comment
Walter Sobchak, Esq.'s avatar

Don't bogart that joint my friend, pass it over to me.

https://www.youtube.com/watch?v=emD48UF-vqE

Expand full comment
User was indefinitely suspended for this comment. Show
Expand full comment
Scott Alexander's avatar

Banned for this one, seems to completely misunderstand point of the article while being extremely hostile.

Expand full comment
Andy G's avatar

I have a serious question: if AI and the singularity ACTUALLY makes everyone 100x richer, and then richer still after that, why all the focus on inequality?

I am genuinely interested in Scott’s and others’ answer to this.

My suspicion is that mostly it is a proxy for / justification of focusing on inequality per se, rather than what is to me the obvious correct moral focus of making most of the world more productive and richer. Because such a justification is in the political interests of those on the left who use the politics of envy to justify leftists having political power and implementing at least somewhat authoritarian leftist laws.

But again, that is just my two bits.

Expand full comment
Edmund's avatar

> I have a serious question: if AI and the singularity ACTUALLY makes everyone 100x richer, and then richer still after that, why all the focus on inequality?

Because historically, poverty continues to suck even when the poor's objective amount of wealth drastically increases. (See https://www.lesswrong.com/posts/fPvssZk3AoDzXwfwJ/universal-basic-income-and-poverty .) If what you're worried about is human welfare rather than raw median wealth, it may very well be that lessening inequality is far more important than growing the overall pie.

Personally I think this probably breaks down at a certain point, and post-Singularity growth would get us there, but it's essentially when you reach post-scarcity. As long as there's scarcity for key resources (whether that'd food or health or living space), a comparative advantage is enough for the 'rich' to make life suck for the 'poor' even if the poor all have a million dollars.

Expand full comment
Arrk Mindmaster's avatar

It doesn't matter how equal you make people. They will never be equal. There will always be differences of some sort, and people will decide which of those differences make people "better". The poor might be indistinguishable to our eyes from the wealthy in that society, but not to those in the society.

If you lock two people in a room for a year with 500 still pictures of Joe Biden eating a sandwich, by the end of the year they will think some of the pictures are great and others suck. And they won't even agree which is which. https://xkcd.com/915/

Expand full comment
Edmund's avatar

That's a different conversation. I'm not talking about "inequality" of social status in fuzzy emotional terms but objective inequality as in "can I get access to more food/better medical care/better living conditions than the other guy".

Expand full comment
10240's avatar

Many of those zero-sum competitions are, to a large extent, artificially created (NIMBYism, lack of disciplining violent kids or tracking in American public schools AFAIUI because it would be "unfair"/"racist" – to address Yudkowsky's example, all the stuff making American healthcare expensive Scott has discussed), so it may make more sense to focus on getting rid of them, instead of on reducing inequality.

Expand full comment
Edmund's avatar

I would dispute whether your two examples are *artificially* zero-sum. One man's YIMBYism is another's Repugnant Conclusion made manifest. But also, if we agree that these things are bad because they artificially create inequality, this… seems like an instrumental proposal for how we might go about reducing inequality, not something to do "instead of" reducing inequality?

Expand full comment
10240's avatar

Regarding NIMYism, IMO we wouldn't actually have to allow building anything anywhere to alleviate housing scarcity, it would suffice to designate *some* neighborhoods (existing ones and/or new ones on currently unbuilt lands) as ones where construction is allowed without height/density limits forever.

I don't primarily say these zero-sum contests artificially create *inequality*, they create *scarcity/expense*, so eliminating them would make the relative poor (and mostly everyone else) be better off, and result in an economy where it would be more unambiguously enough to care about the absolute size of your slice of the cake, not about inequality.

Expand full comment
Andy G's avatar

I find it curious that Scott talks seriously about wealth tax and democracy, without once noting that such a tax is almost surely unconstitutional.

Now of course it is always possible that left populists take over 2/3s of each house of congress and 75% of states and amend the Constitution, but this is surely no trivial matter.

Expand full comment
Arrk Mindmaster's avatar

What exactly do people mean by a "wealth tax"? If I have $1 billion in assets, do I have to give over some proportion of that in tax? If so, how is this different from legalized theft, justified by jealousy and envy of the have-nots toward the haves?

It certainly isn't fair to the wealthy. My question is whether people think it justified.

Expand full comment
Andy G's avatar

Yes, you have the definition of wealth tax correct.

I’m not getting into fairness or justified, about which reasonable people with even fairly similar values might disagree, and where people with different values (and different understanding of economics) surely *would* disagree.

I’m just talking about the constitutionality.

Expand full comment
justfor thispost's avatar

Sure it is. Where did you get that wealth? By what right do you have the things you have?

At the end of the day, the only reason that you can have wealth, have property, is the active maintenance of the social order by the consent of the population and by threat of violence from the state.

Today it's like that. Maybe tomorrow, it's different.

I mean, it's in the name. Those are the US's dollars.

's why people want crypto so bad, as though a sharp blow to the shin with a pipe wrench isn't traded with btc at 0:1.

Expand full comment
Arrk Mindmaster's avatar

Unless the wealthy person got their wealth illegally, they ought to have every right to their wealth. Legally, they have lots of routes to get it: saving, earning a lot of money at a job, making and selling lots of things that people want, inheritance, picking the best stocks in the stock market, and more.

Expand full comment
justfor thispost's avatar

X years ago it was legal for me to go find some Indians, gun them down, bury them in a shallow grave and set up my farmstead on top of the bones.

A bit later than that it was legal for me to sell diet pills that were 100% meth.

Now, you can make money from sports betting free and clear, even a couple years ago that was not the case.

That "ought" is carrying a lot of weight.

Expand full comment
Arrk Mindmaster's avatar

When you come up with a set of moral standards that are true and unchanging, you should publish them.

You should not be held responsible for behaving morally according to the rules of society, even if those rules change later. Perhaps in the future people will find it reprehensible to discuss the sins of the past as they let bygones be bygones, and they come across your post about killing Indians. Shall they fine or imprison you?

Expand full comment
Scott Alexander's avatar

How is the level of theft involved in a wealth tax different from that involved in income tax?

Expand full comment
Arrk Mindmaster's avatar

The main point concerning level of theft is that other taxes are taxed once. If you have $1 billion, you get taxed 10%, and have $900 million. Then you're taxed again for $90 million? Then $8.1 million? Why do you deserve these never-ending fines?

Suppose you say that once you pay your $100 million in taxes you don't have to pay wealth taxes on the remaining $900 million. Isn't this the same as income tax? After all, the money must have been earned by someone at some point in the past. A wealth tax would then be a one-time additional tax TODAY, for no reason other than some people are wealthy.

Expand full comment
John Schilling's avatar

First off, I don't think anybody is seriously talking about a 10% annual wealth tax, It is worth considering that as the *possible* long-term result of a slippery slope, but right now proposals are more at the 1%/year level.

Second, if you have a billion dollars, and you are not stupid or a wastrel, you are almost certainly *earning* a hundred million dollars per year from your billion-dollar business or investment portfolio. So, you have a $1 billion and you get taxed 1% so that you have $990 million, and then a year later you have $1.09 billion any you get taxed 1% so that you "only" have $1.079 billion, lather rinse repeat, getting wealthier every year.

Third, other taxes work about the same way. Property taxes, for example, are well established, and result in your billion-dollar real estate holdings being assessed $10 million this year and $10 million the next year, ad infinitum. If you're putting the land to good and productive use, then you pay the taxes out of the profits with plenty to spare and get richer every year. Or consider income tax. Instead of the 1% wealth tax on billionaires. a 10% income tax means that their $100 million income this year gets taxed $10 million, and so next year they have $1.09 billion and so probably an income of $109 million and get taxed $10.9 million, and the results are mathematically identical to the 1% wealth tax.

The only difference is, if you have something really valuable that you insist is yours because e.g. your great-grandfather did something really valuable to earn it, and that you insist that it will eventually be your great-grandson's because it's yours all yours, and if you do *nothing useful or productive with it ever*, then yeah, the civilization around you that is run by people who *do* do useful and productive stuff are going to tax some of that value away from you and your son will only get say 70% of it for having been your son.

Taxes, whether "theft" or otherwise, are a necessary evil. But they are also an incentive, or more precisely a disincentive. Income taxes, sales taxes, value-added taxes, these all disincentivize people from doing useful and productive stuff, so that less useful and productive stuff will be done. Property taxes and wealth taxes, disincentivize hoarding lots of valuable stuff and doing nothing useful or productive with it. I know which of these I would prefer.

Expand full comment
justfor thispost's avatar

I've written about this before, but I suddenly got enough money to do some light parasitism (landlording and investing and such) and it's wild how much money I make making peoples lives measurably worse with zero effort and less risk than I used to take on climbing a step ladder.

Working really is for suckers, more fool me that I didn't have the foresight to have my grandfather invest in socal development land 80 years before I was born or some such.

Expand full comment
John Schilling's avatar

How does your being e.g. a landlord make anyone's life measurably worse? Absent you being a landlord, it would seem that the alternatives would be either A: someone else is the landlord or B: nobody bothers to build the house/apartment/whatever in the first place, or C: a bunch of squatters fight over who gets to live in an unmaintained home until it falls apart.

Expand full comment
Arrk Mindmaster's avatar

People can be good or evil depending on what they do. Landlords can do good for their tenants, but aren't necessarily required to (some laws do mandate some things). If you, as a landlord, are only a parasite, that is your choice.

I have considered becoming a landlord, but haven't yet. I don't have the assets to risk on non-paying or destructive tenants. But if I did, the good I would be providing my tenants would include providing a desirable home with little up-front capital, providing routine maintenance, and providing one source to go to for non-routine maintenance. I have even considered including an optional savings plan: they pay, say, $50 extra a month which remains theirs, but I grow it for them, so they can save up for a down payment for a house of their own.

Expand full comment
Arrk Mindmaster's avatar

My argument is about the fundamental properties of the wealth tax, so the precise amount doesn't matter, unless there is some kind of inflection point, such as obviously 100% wealth tax is not viable. We need not debate the precise amount of the tax for this purpose, as we aren't actually implementing a policy anyway.

Wealth can take many forms. If you have money, it should be put to use somehow to make money, or inflation by itself will make it worth less. But a coin collection, for example, increases in value over time, and a wealth tax would either force an individual to somehow generate enough income to support the increase or would be forced to sell off parts of the collection, just because other people think they are more valuable. Not all assets should be required to generate income.

Property taxes are well-established, and anyone that buys land knows they must pay property taxes on the land. This is no different than maintaining a house, car, or other equipment one purchases. But other taxes are NOT considered recurring expenses: sales tax, excise tax, income tax, tariffs, value-added tax, etc. If you give someone land, they know they must pay taxes on it or lose it. I can think of nothing else that you can give someone that obligates them to pay tax from money they don't get from it, but maybe someone else can think of something?

Imagine someone passing a coin collection down in the family. A coin collection cannot generate income, though it increases in value. Eventually such a collection must be broken up if the family doesn't make more and more money, and they aren't given the tools to do so.

Productivity is good to have, but it isn't everything. Sentiment has no value in the marketplace, but that doesn't mean it has no value. Shall that value be taxed if someone is rich in sentimental objects, and thus wealthy?

Expand full comment
magic9mushroom's avatar

>Or the government might itself control many AIs and be too powerful a player to coup. Then normal democratic rules would still apply.

"A government is a body of people, usually notably ungoverned."

I think it's a mistake in this circumstance to talk about governments or AI companies. Individuals within governments and individuals within AI companies will have motive and opportunity to betray their organisations and make the AIs loyal to them personally, and deterring that is really hard because if it works they outgun you.

Even if you assume there's some means of preventing anyone from doing this, the government as a whole can defect on the populace. Ruling parties allow elections mostly because they know that attempting to suspend elections will result in revolt backed by the military, but if your military is Skynet and you have control of it you don't have to do that.

AI does not play very nice with democracy, and plays only slightly nicer with oligarchy. It makes absolute monarchy a powerful attractor.

>If we expect the Singularity to grow the economy by orders of magnitude, it might be worth investing in stocks rather than other instruments (eg bonds) that pay out a fixed sum.

The above means I'm generally of the opinion that investing in stocks is largely pointless. An unchallengeable monarch has no particular reason to respect the existing ledger of who owns what, so your shares are useless; the company's assets will likely be seized. Even shares in AI companies won't do any good if those companies become hegemonic and there's no-one to tell the executives/employees that they have to pay their shareholders.

It's like that scene from The Dark Knight Rises: "I've paid you a small fortune." "And this gives you... power... over me?" Paper ownership only has any meaning within the current social contract.

Expand full comment
Long disc's avatar

"Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return."

That's not how markets work. Everyone might have access to superintelligent advisors and those using the advisors will outperform those who would not use them. However, small differences in skill get amplified by markets. If one advisors is 0.00001s faster than another in updating a price, they will capture an outsized portion of market gains. Besides that, even random differences between two equal skill advisors may accumulate to substantial return divergences over time.

Superintelligence does not equate absolute knowledge of the future. So one ASI advisor might over-allocate to an investment and then an unforeseen event might lead to an outsized positive or negative return.

Expand full comment
Peter S. Shenkin's avatar

Wait... end of the world vs. end of capitalism? Is there any difference?

Expand full comment
Melvin's avatar

I have real trouble imagining a world where GDP per capita is 1000x higher. I simply don't know if the human appetite for (non-positional) goods is large enough.

It's not that hard to imagine myself being 1000x richer, and all the things I'd spend my money on if I were a billionaire. But the vast majority of those things are either positional goods (like houses in fancy locations) or things that simply don't scale well to universal adoption (like private jets). Other things that I'd buy (e.g. a Ferrari) get their value through exclusivity. There are some ways I could genuinely consume more value, like eating foie gras for breakfast every day or throwing my clothes out after a single wear, but that's not going to get me to 1000x my consumption.

Basically I can't imagine a way to 1000x my consumption that scales to everyone else doing it too.

Expand full comment
Brandon Fishback's avatar

Our world is already kinda like that. Traveling around the world in a few hours a few times a year is one of the ways that people spend their vast wealth and it’s a big driver of the economy. With 1000x wealthier society, we could probably start terraforming planets and people would spend their wealth taking their trips there.

Expand full comment
Nematophy's avatar

There's no need to seriously worry about the Singularity. It is overwhelmingly likely that the Pole Shift / Eschaton / Return of Jesus will start soon as we really start climbing the vertical curve - if we even get that far. If these models can't figure out the logos - the ontological base foundation of existence - despite this being "old news" from 1kya, and the obvious fact of its Incarnation, how will it possibly prove the Riemann Hypothesis? After all, all things were created through and are reflections of the logos, so it stands to reason that if your conception of the logos is incomplete (or incorrect) it will most likely affect your downstream conclusions.

Expand full comment
Nate's avatar

I am a bit frightened by this article given my lack of wealth. Though Scott makes some valid counterpoints towards NSG's argument, I can't help but wonder if I should be getting into real estate right now to get ahead on fixed physical assets...

Expand full comment
anomie's avatar

...I really wouldn't worry about it too much, if I were you. Even in worst case scenario, the ringleaders won't outlive the rest of humanity by much. You won't be missing out on anything.

Expand full comment
Scott Alexander's avatar

Just collect random pieces of litter off the street. Come the singularity, cryosleep yourself until 100,000 AD, when genuine artifacts from 2025 Earth will be priceless.

(I'm being silly, but I also don't really understand why this strategy wouldn't work)

Expand full comment
s3tione's avatar

I still can't imagine a world where AI and robotics has gotten to science fiction levels and somehow capitalism still survives. Isn't the foundation of our modern society consumer spending? If so, what happens if there are no more workers (because AI does everything worth doing better and cheaper than humans) and thus no consumers? Who's buying all the stuff that's making the billionaires rich? And how does a government function without a tax base?

Expand full comment
John Schilling's avatar

People receiving a UBI will still be consumers. So will rich people and their entourages buying lots of luxury goods (and real estate).

Expand full comment
s3tione's avatar

UBI needs a tax base. And a large one at that, especially if the government is basically paying poor people (which is most of the population) to survive. I can't see billionaires wanting to endlessly support most of the world's population, nor can I see the masses of the world just sitting around while a tiny few take and hoard everything. It's not a stable arrangement and seem likely to fail.

Expand full comment
moonshadow's avatar

UBI needs a tax base in a world where you need humans to do work in order for goods and services to be available. In order for people to get a slice of the pie, someone needs to make the pie. In that world, the people producing the goods and performing the services get paid, and (some of) the money they get paid with comes from the UBI.

In a world where all of that labour has been automated, this is no longer true. The goods and services will still be available even without humans performing labour. There is no labour humans can do that the robots wouldn't do better; that's literally the starting premise in Scott's post. The pie may be of finite size, but it gets made regardless of human input. A tax base is required only to the extent that human input is required.

The UBI is needed at all because there is still a finite amount of energy and matter available - it provides a means for people to choose what form they want their slice of pie in (other schemes are also possible, but people have a great deal of trouble reasoning about them, as seen elsewhere. UBI + markets is not an unreasonable solution, despite my arguing against them in other posts above).

Expand full comment
s3tione's avatar

What you are describing isn't capitalism, it's something like the concept of fully-automated-luxury communism where people simply receive goods made freely by machines. I am still not understanding how capitalism survives this transition through the "singularity" if no one is doing any buying or selling.

Expand full comment
10240's avatar

Natural resources (land, minerals, or in the long term energy and negentropy in all forms) may remain scarce and available for buying/selling, depending on whether people collectively have wishes the existing resources can't abundantly satisfy. Also prestige goods like pre-singularity artifacts or human-made things. But I don't expect it to be much like capitalism.

Expand full comment
Mark's avatar

" Maybe the post-Singularity world will be rich enough that even a tiny amount of redistribution (eg UBI) plus private charity will let even the poor live like kings (though see here for a strong objection)."

This already exists. Look up the term NEET - in some Western countries, one can live forever on welfare without even attempting to look for work, and the living standards are in most ways better than kings had in past centuries. Why doesn't everyone do this? Partly because most people can achieve an even higher living standard with a job, and partly because NEETs are stigmatized (in social status, as potential romantic partners, etc). Of course, if AI makes 99% of us into NEETs, much of the stigma will be lost.

Expand full comment
Padraig's avatar

The post presupposes that the singularity is nigh. But a Bayesian take suggests otherwise: granted that the sun will rise tomorrow, it will rise over a world not terribly different from the current one. And that might be considered to continue into the future. How many really society-changing technologies have actually panned out in the past 100 years or more? (Perhaps two?Electrification and the internet - a man on the moon didn't change how we all live our lives.)

In contrast there have been any number of bubbles which collapsed, and and companies that folded when they couldn't deliver on grand promises. Such people have huge incentives to keep the bubble inflating: I'm thinking of (but not equating) Theranos, NFTs, quantum computing. Where's the compelling evidence that this world changing singularity is around the corner?

Expand full comment
Scott Alexander's avatar

See https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism for the prior case, and any of the AI timeline forecasts for the evidence-based one. Davidson's is out of date but I think gives a good sense of the genre: https://www.astralcodexten.com/p/davidson-on-takeoff-speeds

Expand full comment
Padraig's avatar

Thanks for the response - I agree that priors suggest something like a 1/3 chance of a 'revolution' in our lifetimes. But this is the probability of *any* revolution. Surely we shouldn't discount all the other smart people working on other projects. I don't think it's entirely convincing that this is the one. Offering 50/50 odds that AI is the one revolution, that's still a 1/6 chance of the singularity happening in the next 40 years. This I find plausible.

Expand full comment
LGS's avatar

Many people make the assumption that the post-singularity controllers of AI will feel free to ignore democratically elected government (which would want to tax them), but at the same time, will feel obligated to respect the content of your bank account. This is far from obvious to me; if someone has all the power and no respect for institutions, why not just confiscate everything? Who's gonna stop them, the police? Why does the number in the bank account matter in such a world

Expand full comment
Scott Alexander's avatar

If there's only one post-singularity controller of AI, I agree they can do whatever they want.

If there are many, this reduces to the question of why most people respect property rights today, rather than the military/the elites/51% of people ganging up to take the property of everyone else. I think this essay provides a good explanation: http://www.daviddfriedman.com/Academic/Property/Property.html

Expand full comment
LGS's avatar

This essay is long and seems mostly irrelevant (e.g. assumes that trade is valuable and that conflict has costs, both of which seem dubious between AGI-wielding and non-AGI-wielding entities).

It is also just empirically wrong that people respect property rights today: nations attack each other, and did so regularly before nukes; people try to scam each other and rely on an elaborate court system to defend property rights; when the police went on strike in Montreal in 1969, everything went to hell immediately (people didn't naturally keep the Schelling point of order without police around).

How many post-singularity controllers of AI do you expect? 100? OK, now let's say Putin is one of them. He attacks Ukraine (then the rest of Europe). He also confiscates a bunch of useful resources (e.g. steel production plants) from Russians, to a greater extent than he already did before. Who stops him? Some American AI controller who doesn't care about redistribution is going to care about defending Russians and Ukrainians?

Suppose I enter a contract with Trump and he decides not to pay me. Oh, and post singularity, Trump controls AI. We go to court. Does Trump respect the court's decision? Please explain how this works, Scott. How do my property rights get preserved? Musk attacks Trump on my behalf? Some US government entity controls AI and enforces contracts, but DOESN'T enforce elections?

This all sounds like a libertarian fever dream to me. If power is concentrated with 100 people, the world will look like it. Those people may enforce contracts *with each other*, but not with you.

Expand full comment
nah son's avatar

Final case to add: after AI becomes enough of a problem it starts to destroy the social fabric but before someone is stupid enough to let it design and mass produce a drone that actually works as force replacement, everybody rounds up their 10 closest friends and does a little Russian revolution reenactment LARP.

honestly given the situation as it lies, I'm not sure if super intelligence or The commune hits first.

Expand full comment
moonshadow's avatar

“Before humanity can be saved, it must first be sorted.

For this purpose construct a corridor with, say, fifty doors; each randomly opening inwards or outward. Label them “push” and “pull”, also randomly.

Those who guessed them all correctly go left; those who got them all wrong go right; as for everyone else, hand out the uniforms and send them to the Camp.

Have the ones on the right brainstorm ideas on how to make life better; put the ones on the left in charge of the ones in uniform to bring those ideas to life.”

Expand full comment
Orazio's avatar

I don't understand why one can make the assumption or reach the conclusion that a singularity could end scarcity. Barring some speculative theoretical physics, the laws governing energy will dictate that there is a finite maximum amount of usable energy per unit of volume of the universe. No matter how high this amount is, it's still not infinite, therefore it is by definition scarce, and the problem of how to allocate would still be alive and well after a singularity. Please help me find the flaw here, to understand why people so often throw in the word "post-scarcity" every time they hear "singularity".

Expand full comment
Scott Alexander's avatar

If you have one star's worth of energy per human (or some other ridiculous amount), how could they ever use that much? I suppose there are ways - accelerate some object to 99.9% c in one direction, then the other, back and forth forever - but why would you want to do that? I interpret post-scarcity as "enough for any use people are likely to have".

I guess this implies that energy use doesn't grow forever as we think of more and more energy intensive ways to amuse ourselves.

I admit post-scarcity sort of equivocates between "people will have everything they want" and "people will have everything they can *reasonably* want", and that this is a big potential flaw in the concept.

Expand full comment
Orazio's avatar

Tl,dr: sounds a bit like the famous (and fake) "640K ought to be enough for anybody". Maybe it will be, who knows.

More in detail, I agree we are conflating different meanings of the word scarcity. However, I think I'm correct in at least two of the major ones (and what a coincidence I can't think of ways I'm not correct) as long as extrapolations hold.

The first one goes like this. If we start from basic assumptions of marginalist economics, scarcity comes from existence of more than a single use for a resource. Scarcity means that there is not enough for all the possibile uses of the resources, and this generates tradeoffs. If the universe is finite (and I bet, under some conditions of growth, even if it's not), the abolition of scarcity is basically the same as the abolition of alternative uses of resources, which I do not think will ever happen. So scarcity in the standard economic sense can't ever be over (but it's kind of trivial in a way).

The second one is "having enough" as opposed to scarcity intended as "poverty". It is well-known that hunans' perceived poverty levels variable relative to their context - the poverty threshold is different in different countries. So the concept of "having reasonably enough" is not tied to any absolute level of wealth/resources. And scarcity in this sense likely won't be affected by any increase in wealth.

You're suggesting that maybe at some point this desire for more resources stops. That would make the extrapolation wrong. I suspect that if you went back to 1600 and told the average human "in the next 400 years the resources per capita people have at their disposal will increase more than 1000x" they would also have reacted saying that scarcity will be over. If that's the case, the scaling laws of human desire have proven that so far a large increase in resources available does not cause want to be over. We can wait and see experimentally for how many OOMs they hold (or if they stop applying due to extinction) :)

Expand full comment
Jaroslav Sýkora's avatar

It's backwards. Super-AI will need workers (think: ants) to run power plants, datacenters and chip factories. Do maintenance work etc. Have few obedient overseers for the humans workforce. Biorobots (humans) are the cheapest option and come with built-in limited inteligence to solve practical day problems. On the other hand, it has no use for wealthy elite who would just interfere in its plans. The first strike will decapite them to get rid of this risk and set the new precedent.

Expand full comment
Scott Alexander's avatar

Why do you think human labor will be cheaper than robot (or synthetic bio creature labor) for all possible tech levels?

If AI eventually invents cheap robots (or synthetic bio creatures), then whether it destroys the wealthy reduces to whether it destroys humanity in general, which I agree is a likely outcome but which this post is assuming won't happen in order to speculate about the consequences.

Expand full comment
Aiken's avatar

I disagree that plutocrats will want to go galactic (I know Musk and Bezos do, but I think they'll quickly realise its a silly idea.) Objective material differences will mean that Earth is the luxury place to live, consider how the sensible wealthiest people live in nice climates (e.g. Monaco), and everyone else is stuck working away in grim rainy London or - worse - Birmingham. Mars would be even worse than both of these. Sure some new-money rich people make bad choices like moving to Dubai where its too hot, or similarly Mars. But mostly the poor people will actually end up being shipped to Mars or some other meteor mining outposts, while the rich stay home in the best parts of earth.

Expand full comment
Scott Alexander's avatar

I think this underestimates what you can do with far future technology. Send a ship with a frozen plutocrat plus nanobots, let the nanobots build a luxury palace on the future world, then thaw the plutocrat.

Also, even if 99% of people don't want to leave Earth, the lightcone belongs to the 1% who do.

Expand full comment
Aiken's avatar

You should do a post on the inverse relationship between sci-fi-ness and ability to reason. e.g. assumptions about what is scarce & the affects of scarcity break down...

Expand full comment
covector's avatar

> Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return.

I disagree with this statement. Some "investment strategies" are available only if you have enough capital for them. Unless these advisors capable of coordination, rich person will get richer. Even if coordination is possible, rich can take much more risk then poor so has more opportunities available. Also coordination requires resources for itself too. And since with AI you can't outsmart others we will get rapid inequality growth after singularity if property still means something

Expand full comment
Spinozan Squid's avatar

A world where there is an AI that is 100x as intelligent as any human is a world where that AI can basically accomplish anything it desires. I don't think that type of intellect gap is compatible with a servant dynamic at all: even if I could 'speak cat', and actively wanted to be the servant of my housecats, the intelligence gap in how I would interpret and act on their requests would inherently invert the power dynamic. I feel like if/when superintelligence exists, it will inevitably remake the world in its own values, and human society will end up being structured to provide prestige and value to people who have traits that the AI values.

Expand full comment
AnthonyCV's avatar

I think land may be underrated as post-singularity capital that humans can still control, in the scenarios where property rights remain viable at all. Remaining on-planet, if labor and intelligence and technology are nearly free and nearly infinite, what are the limits of growth? For long-term sustainability, I'd guess solar power potential. For shorter-term profit, mineral rights.

Expand full comment
Malcolm Storey's avatar

Trivial point but "The End Of The World" is simple and easy to imagine. "The End Of Capitalism [with the world continuing]" is much more complicated and difficult to envisage.

Expand full comment
Worley's avatar

You quote "post-Singularity, AI will take over all labor". But of course, AI won't take over all labor, it will only take over *intellectual* labor. Being knowledge workers in the post-industrial era, where all of the best-paid labor is intellectual, assume that is all labor. But for most of the agrarian and industrial era, a large fraction of labor was simply humans being used as motors, exerting force on large physical objects. And even today, a large fraction of the labor is not intellectual -- consider the "hospitality" industry. Consider what happens when e.g. writing think-pieces for a major newspaper is done by machines but serving food to people on porcelain plates is the really well-paid work!

Expand full comment
Scott Alexander's avatar

I think there will be robots.

Expand full comment
Aaron Brogan's avatar

It strikes me that inequality might be a fundamental human need. In a world where distinction is impossible by cognitive labor, we'll fight with rocks and sticks if we have to.

Expand full comment
Dierken's avatar

> but the plutocrat would probably treat their descendants pretty well

> wealth inequality would probably be very low, since there’s no reason for one plutocrat-descendant to be richer than another.

LOL

Have you met any humans?

Expand full comment
Adam Tropp's avatar

"Seventh, maybe we will be so post-scarcity that there won’t be anything to buy. This won’t be literally true - maybe ancient pre-singularity artifacts or real estate on Earth will be prestige goods - but some people having more prestige goods than others doesn’t sound like a particularly malign form of inequality."

I was wondering when you would get to this. It seems fairly obvious to me that in a post scarcity world where human labor of any kind has absolutely no value and AIs dont demand payment for their work (to the extent that doing tasks humans find important even constitutes work for them), money would have no value. Sure there would still be luxuries and status symbols of various kinds, but we would become so inundated with them that we would probably cease to care about them, or oscillate in an eternal trend cycle of caring about them and not caring about them. Either way, it really doesnt seem that bad.

Expand full comment
Not it's avatar

I think once it’s technologically feasible and cost efficient (rapidly post singularity) the inevitable outcome is that most of humanity retreats to one or more simulated lives. Why struggle in an unfair base reality when any set of life experiences and worlds you could hope for can be delivered in a sim?

Want to be a king? Sure. Want a world just like base reality where you have an improbable heroic journey to the top? Why not. The set of life experiences available in sim is infinitely more varied and rewarding to any set of preferences you could have in real life.

So we impose samsara on ourselves across a set of lives, gradually achieve enlightenment, and probably at some point choose to die. Could be worse.

Expand full comment