640 Comments

[Comment deleted, Oct 18]

RB:

Amen

[Comment deleted, Oct 17]

Sophia Epistemia:

there are enough stars in the universe to give every unit of plankton its own dyson swarm made of computronium instantiating its owner's subjective individual utopia. (if there aren't as per current astronomy models, just wait one or two more iterations of "wow, this new telescope is seeing light from further out than we thought the universe was wide! again!")

[Comment deleted, Oct 17]

Scott Alexander:

I disagree. I don't know for sure what the ultimate good of humans is, but when I'm cold and wet, I get out of the rain, and I endorse this.

[Comment deleted, Oct 17]

Scott Alexander:

I still disagree. If people all around me are dying of a disease, I might invent a medication to treat that disease. I don't think this requires the ultimate telos of Man either.

Kronos:

But what if people are dying from two different diseases? How can you rationally decide which one to prioritize without knowing what end state you are aiming for?

Vakus Drake:

I think I actually do have enough of a detailed model of utopia to avoid the issue you raise here: https://open.substack.com/pub/astralcodexten/p/book-review-deep-utopia?utm_source=direct&r=h7m5n&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=73191764

Basically it combines simulated worlds with growing your mind once you run out of novelty at a given intelligence level

Chuck Umshen:

IMO Iain M. Banks's Culture novels deal with this pretty well. We'll still compete for status even if we no longer need to compete for resources

Sophia Epistemia:

good for the ones whose preference is competing for status. those who want to be God Emperor Of Everything can be uploaded into their personal godly realm.

[Comment deleted, Oct 17]

Sophia Epistemia:

as good as perfection if they chose to forget they're in a sim. gods, it's like not everyone has spent two decades thinking around this so as to have a precomputed and cached response to every obvious objection or something

[Comment deleted, Oct 17]

Sophia Epistemia:

you still only experienced your experiences through your own sensorium. given perfect vr and mind editing, you can get to subjectively live anything you want.

[Comment deleted, Oct 17]

AB:

Corollary: I’m not sure there’s any way to prove I’m not already a 30th century epicurean in the middle of an ill-advised VR “dopamine fast”.

MarkS:

That's interesting. What did you compete in? What was the experience like?

Performative Bafflement:

Second this, I'm interested too. Olympians are a breed apart - I was a regionally competitive powerlifter when I was younger, and one of my friends and mentors was a former Olympic qualifier in hurdles.

Even 15 years past his competitive prime, he could casually power clean 405+ for reps, at the 181 weight class.

Pas:

Obviously in a couple of decades lovebots are coming.

Deiseach:

G. K. Chesterton

The Holy of Holies

‘Elder father, though thine eyes
Shine with hoary mysteries,
Canst thou tell what in the heart
Of a cowslip blossom lies?

‘Smaller than all lives that be,
Secret as the deepest sea,
Stands a little house of seeds,
Like an elfin’s granary.

‘Speller of the stones and weeds,
Skilled in Nature’s crafts and creeds,
Tell me what is in the heart
Of the smallest of the seeds.’

‘God Almighty, and with Him
Cherubim and Seraphim,
Filling all eternity—
Adonai Elohim.’

Vakus Drake:

I think for most people it doesn't matter that some hypothetically higher status individual exists if you never interact with them and people around you don't talk about them.

What matters is the people you interact with and focus on, so only competing within your deliberately isolated community is viable.

So you can make everyone who wants it high status by creating a bunch of super chill humanlike minds that aren't interested in pursuing zero sum status competitions.

AH:

Yeah 'Look to Windward' in particular deals well with these questions. No spoilers: Outsiders are confused to see purposeful extreme danger seeking behaviour by the Culture (for example, lava surfing with no life backup) and enduring creativity and art. He does get around some of the contradictions by introducing the concept of 'subliming' (graduating into the energy plane? the next dimension up?) for those who reach a certain level of singularity, which is a bit of a cop out.

MM:

There was an interesting bit about the Culture *not* subliming unlike other cultures that had reached the same point.

I think Banks was sort of making the point that the Culture was hanging around specifically to interfere with other civilizations, in a "big frog in a small pond" sense. There was some feeling (or at least I got this feeling) that they were being a bit childish as a result.

None of the Above:

Also, one of the only places of real intense competition is being accepted into the Culture's various outreach efforts--Contact, Special Circumstances, and then the weirder branches of Contact that deal with the sublimed, the extinct, hegemonizing swarms, etc.

None of the Above:

Yeah, the lava surfing is interesting, as well as the "disposables" who (unlike almost all Culture citizens) aren't backed up, so if they die, they're gone forever instead of having a slightly-before-death version of themselves wake up in a new body a week later.

Crotchety Crank:

So *that's* why Contact/SC still uses humans, it's the Culture's entertainment complex for the ones that need real risk and real meaning.

artifex0:

Unless I'm misremembering, I think that's stated explicitly at one point in the series.

Crotchety Crank:

I wouldn't be too surprised, though I'd expect Banks to speak it from the mouth of someone cynical who suspects it but can't know. A Horza type.

Ken:

I thought it was because most of the outside civilizations had an anti-AI bias, and the Minds would prefer not to lie or mislead about what they are when possible, so they left to the humans as much of the diplomacy and intrigue as they were confident the best of them could not fuck up (with the help of a superintelligent AI companion, of course)

JohanL:

Much like in Star Trek, which also adds real status for Starfleet (Contact and especially Special Circumstances are for weirdos, though).

Nematophy:

I think I'd rather play Louis XIV Simulator 2124 and just wirehead the status directly, thank you very much

Melvin:

I think the Culture novels cheat, by only focusing on the tiny minority of humans whom the Minds pick for lives of meaning and danger in galactic political intrigue as Contact and Special Circumstances agents.

Crotchety Crank:

Good point. I wonder what we'd really think of the full hedonists, if they were the focus of a slice-of-life-in-the-Culture book. Would we be jealous? Contemptuous? Would we only find them sympathetic insofar as they have struggles to confront (with status, etc)?

Independently of all that, I think the book would probably suck. Then again, tastes differ, some people love slice of life.

artifex0:

I've always wondered what a story set in a post-scarcity utopia in the style of a wholesome sitcom would be like. You could have conflict in the form of wacky social farces where the stakes are things like the strengthening or weakening of a friendship, and where the well-aligned ASIs don't resolve things because they think the amusing anecdotes being generated will be more valuable to the people involved than the risks to social ties. I feel like that could make for some pretty fun narratives.

Melvin:

It would resemble a PG Wodehouse novel or any number of other stories about rich people getting into romantic etc hijinks. Bertie Wooster doesn't actually live in a post-scarcity society, but he might as well given that he can easily acquire anything he desires and doesn't need to engage in any economically useful activity. (It helps that his imagination is pretty limited, which stops him desiring anything outside his immediate lifestyle.) And Jeeves is like a superintelligent mostly-aligned AI.

There's a sense in which even modern middle class people live in something approximating a post-scarcity utopia. Sure, things aren't actually post-scarce but it's not like I'm feeling scarcity on a day to day basis. The things I actually need to consume are a tiny fraction of my earnings, and the only thing I own that's actually expensive is a pure positional good (a large block of land in a nice suburb close to a major city). The dramas that afflict my daily life are extremely low stakes and small scale, which is why my life would make a lousy book.

The Ancient Geek:

>And Jeeves is like a superintelligent mostly-aligned AI.

AskJeeves? It was an early search engine.

None of the Above:

You do see those folks in various places. They're going to parties, taking part in orgies, competing to get to attend some exclusive event, playing competitive games, raising families, traveling, sightseeing, learning to play almost-impossible musical pieces[1], etc.

Most Culture humanoids and drones are just hanging out being the idle rich. A handful bend themselves into a pretzel to become missionaries or spies or archaeologists or ambassadors to other civilizations.

Think of someone in our world who is born to great wealth and privilege. Most will live lives of ease and luxury, but some small subset will get PhDs in math, or join the military and end up as Navy Seals, or become Olympic athletes. A few will devote themselves to public service, maybe working in the State Dept or getting elected to Congress.

[1] Though she wasn't from the Culture.

Ken:

Yeah, I always assumed the vast majority of Culture citizens are just wireheaded or an equivalent, but that still leaves billions of others to tell stories about (though even most of those billions do not do anything meaningful, so you're just left with Culture agents)

Bugmaster:

My impression of the Culture novels (especially "Player of Games" and "Hydrogen Sonata") was that the humans in the setting are essentially pets of AI Minds. They are cherished pets who lead fulfilling lives and are generally happy; but they have virtually no effect on anything important (beyond the choice of their next vacation destination and other such things). And maybe this is indeed the best that we humans can hope for...

None of the Above:

Yep. A few do useful or dangerous things for the Culture, but they are extremely rare. In _Consider Phlebas_, we learn about a tiny subset of humanoids who are basically superpredictors, with insights that even Minds find useful without fully understanding why. Most of the stories feature humanoids involved in Contact or Special Circumstances, or foreigners interacting with the Culture in various ways (friendly or not). I think it is hard to write an interesting story about life in a utopia.

Jude:

Less cynically put, relations between humans would become the only true source of limitation and meaning. In this sense, the life of upper middle-class professionals in the developed world is probably closer to utopia than it is to primitive agriculture. The most significant physical needs are all addressed and (other than being blinded by death), we are mostly preoccupied with our own internal fulfillment and relations to one another. The Jane Austen books about gentry interacting are basically just about people interacting and vying for status and affection from one another. Utopia would allow an out from this via simulated relationships, but the knowledge that they are a simulation would probably leave people unsatisfied. In the end, we all crave an Other to see and to know us.

magic9mushroom:

Well, no, they don't.

1) The Culture cycle (I can't speak to Matter/Surface Detail/Hydrogen Sonata) is *depressing*. It's what, 4/7 "lolno this was all a Culture scheme with clarketech waiting in the wings, the suspense was illusory", 1/7 "you died not knowing your entire life was a scheme", and 2/7 "nothing is accomplished and one of the main characters dies for no reason". I think Excession's the only one without an ending that screams "bleak and meaningless", and even then it's bittersweet rather than happy because of the Excession's judgement.

2) The Culture is not a stable state. The primary source of meaning in the Culture is Contact, which both a) requires there to be non-Culture civilisations, b) assimilates non-Culture civilisations into the Culture. As was IIRC pointed out at some point, this is fundamentally a paradox; sooner or later, either they'll bite off more than they can chew and get destroyed (as is implied to have happened by the end of Look to Windward), or they'll assimilate everyone and have nothing more to do.

JohanL:

Disagree. Culture is the win condition of our species.

Victor:

The difference being that the Culture's overminds are sentient; they have goals and agendas of their own that they cannot immediately satisfy for themselves. This means that the Culture Minds are not in the scenario given above, they are in a world much like ours. This gives humans leverage if they can find a way to contribute to those goals.

An interesting implication of this line of thinking is that the Culture's Minds might not want to change human nature and culture very much, because that might eliminate our usefulness. Puppets can't come up with new ideas.

JohanL:

Or I mean, they _can_, and _do_.

It's just that the AIs anyone interacts with are the ones that have been designed (or self-designed) to not just instantly ascend into the ecstasy of higher mathematics.

apxhard:

> Why did I come across Deep Utopia this month? Why did I write this review? Why are you reading it? What are you trying to tell yourself?

You came across it because you have been spiraling around fundamental questions of meaning and existence which most philosophers across almost all cultures of human history have considered important, but current thought leaders consider stupid and wrong and possibly evil. You’re smart enough to see the zeitgeist is largely insane, humble enough to know that pursuit of the truth is hard, but not yet courageous enough to take the reputational risk of asking the reasonable question, “what if all those wisdom traditions are just different maps of the same underlying reality?” and then trying to let your internal map of value converge to this territory using the methods of rationality.

I personally am trying to convince myself it’s worth my time to keep pestering you about this 😂

Sophia Epistemia:

i found this years ago on the interwebs:

A PLAY ABOUT MAN’S SEARCH FOR MEANING IN TWO ACTS

ACT I

Cat, a lifeform specialized in detecting small prey animals and catching them: *sees a mouse, chases it, catches, eats it*

Human: “Wow evolution has made such a great hunter, look at it! Amazing!”

Cat: *sees a laser pointer dot, frantically tries to catch it but cannot, as it is just light*

Human: “lol too optimised for wanting to catch things am I right”

***

ACT II

Human, a lifeform specialized in using and making tools and seeing if tools are good for different tasks: *sees a knife* “Aha! Someone made this sharp tool to cut things. I see, it’s really good for that!”

Human: *looks at his own body* “Who made this?? What were they thinking? There’s some bigger hidden meaning behind this right? What am I made for… What is the purpose of my mortal life? Am I good? Am I bad? Is there a God? I keep looking for my destiny but alas, I can’t figure it out….”

malloc:

People tend to view raising their children as the most meaningful part of their lives. So I figured existential crises were a side effect of the sense of meaningfulness that guides us to have and raise children. But tool use in general makes more sense.

Julia D.:

Regardless of tool use, I agree that people who have lived with and without children say the most meaningful part of their whole life was raising their children.

So I was bemused that in the OP it was framed as difficult to think of a lifestyle that makes "A unique positive contribution? An interesting contribution? One that directly affects the lives of lots of non-supernatural non-dead people?"

Parenting, obviously.

polscistoic:

You seem ripe for Bokononism.

In the Books of Bokonon, Bokonon urges us to sing with him:

Tiger got to hunt,
Bird got to fly,
Man got to sit and wonder: Why, why, why?

Tiger got to sleep,
Bird got to land,
Man got to tell himself he understand.

Corey Pfitzer:

Love it! Beware of the man who works hard to learn something, learns it, and finds himself no wiser than before.

Andrew:

History, read it and weep.

artifex0:

Yeah, I think that's definitely part of it. I think another part of it is just that most people don't understand the difference between terminal and instrumental goals. Instrumental goals have to promote some higher end to be meaningful, while terminal goals don't. So when people who don't conceptualize those as different notice that their terminal goals don't promote any higher end, they have a little panic attack and flail around for something for their values to "mean".

I also think that a lot of memes have evolved to take advantage of that misunderstanding to manipulate people into believing that propagating the meme is the "higher end" that they're searching for- which might help explain why something as simple and fundamental to the human experience as the difference between means and ends which aren't also means isn't really common knowledge.

Sophia Epistemia:

there are no ends. there are no terminal goals. each of us is a bunch of thermostats measuring varied (and more or less mutually incompatible/contradictory/trading-off) environmental variables and inner felt senses, trying to optimise all of them at the same time.

artifex0:

Sure, but that just means that we have a ton of little terminal goals rather than a few big ones, don't you think?

I mean, if a means is a goal intended to promote some other goal, then, barring an infinite regress, that has to bottom out at some target that's not itself a means to an end. And even if that target is in reality some very specific, unnamed pattern of neural activity, it does seem like we can usually roughly model it as something like "I value this smell for its own sake" or "I value this instinctive feeling of gaining status for its own sake", etc.

Sophia Epistemia:

it is an infinite regress. on an oscillating circuit. there is no The Target.

a recentish scottpost ends with basically that as a placeholding conclusion. when i read it i thought "yup, scott's beginning to Get It"

artifex0:

Could you expand on that? I'm not sure I understand what you mean there.

It sounds like you may be arguing that our goals bottom out on incoherent preferences- valuing A over B over C over A. But if that's the case, wouldn't our most basic drives be vulnerable to a Dutch book/money pump strategy? Like, charge someone to trade A for B, then B for C, then C for A and so on?

Jeffrey Soreff:

I agree with you. Enjoying a good meal, a friendly touch, entertainment, a comfortable temperature are all little terminal goals. In Sophia's terminology, those are all little thermostats. Personally, I'm happy with that. When the little terminal goals conflict, we need to decide how to weight them, and those are personal preferences, and I'm happy with that.

Matthew Carlin:

I think you might instead say "I don't believe there are ends", because it's epistemologically defensible.

Kyle:

Human, sees a knife: Aha! Someone made this sharp tool to cut things. I see, it's really good for that!

Human, looks at own body: Aha! Someone made this all-purpose tool to do all sorts of things. I see, it's really good for that!

It's very interesting to take the human instinct to find purpose and point to the human body as an example of that instinct failing. Obviously there is quite a lot to learn from our own bodies, and even quite a lot to learn specifically from asking what the purpose is of different physical traits. The nature of the creator of those traits (whether God or evolution) is irrelevant; the traits themselves have purpose by any reasonable definition of the word, and divining that purpose is fruitful.

Desertopa:

If that's the case, then I think we would have to conclude that they're very bad and inaccurate mapmakers.

This is something I invested a whole lot of my time and energy into when I was younger, exploring and comparing different religious traditions, and while I found that in some respects, where people tended to believe that their religious traditions were unique, they were quite similar (patterns of reasoning, standards and forms of evidence, explaining cosmic scale events with dynamics relatable to human experience,) in other respects where people imagined they might be universal and connect to some underlying spiritual truth (ethics, cosmology, humans' place and purpose in reality) they were wildly different and in some cases probably irreconcilable. My conclusion was that if we supposed there was any real underlying spiritual reality, we could not trust any existing religious traditions to tell us anything about it.

Maynard Handley:

Meaning is a communal activity; it arises from deep connections between things, as evidenced from, eg, frequent repetition or by reference to these things in many different contexts. Religion and ritual give obvious examples, but they’re not the only examples. You don’t need to actually believe in Christianity, or like Shakespeare, to get meaning from a society where KJV and Shakespearean language and allusions are used repeatedly.

Our current malaise is the result of “too much” choice in all our culture, meaning we no longer have these shared concepts, and the resultant deep repetition and communal meaning - we all watch different movies, read different books, listen to different music, even speak different Englishes.

I see no way to fix this. Even attempts to fix it lead to their own problems - eg much of NIMBYism is obsessive preservationism, which in turn results from a desperate attempt to keep the lived environment unchanging as one thing, at least *something*, that we have as a common shared experience…

Matthew Carlin:

10 points to house Apxhard. This is a valuable service you're attempting, even if it doesn't pan out.

D A N I E L:

This is the whole reason I keep reading Scott, to see if the same soul that wrote "Universal Love, Said The Cactus Person" will finally be brave enough to walk the untrod path and not simply describe what a man who has never walked it imagines it must be like to do so

Jack:

Could you be clearer about what exactly you are trying to say? What is "the untrod path"?

Yadidya (YDYDY):

I agree.

If courageous partners in truth-seeking is what you're looking for, please check out my comment above. Ctrl+F "ydydy".

FeepingCreature:

The way I think of utopia is that we, compared to the best possible world, suffer from a vast and suffocating shortfall of competence, and our thinking about problems is indelibly marked by it. Consider: sports as a skill will be more solved in utopia, but so will *sports-creation*, game-creation, challenge-creation and so on. If you imagine objectively solving soccer or weightlifting, you're only upgrading one side of the scale.

Mr. Doolittle:

That's a good point. Similar to how Catan makes for an objectively better family game night than Monopoly. For a long time Catan didn't exist, so family game nights were built around one of the few games that did exist but were objectively inferior.

Thegnskald:

Yes, because ten turns of people rolling dice and nothing happening is riveting. At least in Monopoly -something- interesting happens every turn, even if it's just variance in how many more turns until you pass Go.

Catan is not an objectively better game, but it is a -different- game, which, in the era in which it came out, was enough. Today there are a wide variety of genuinely good games! And a whole lot of stuff that stretches the definition of "board game" in interesting ways - technically Concept is a board game with rules, but I've never met anybody who actually followed all of the rules, because the central gameplay loop is far more interesting than the rules dictating who wins the game.

Melvin:

I've never played Catan but I've played plenty of Monopoly and it's not hard to imagine an objectively better game than Monopoly.

The last game of Monopoly I played would never have ended, we got to the point where all properties are owned and nobody has all of one colour. And nobody was willing to trade properties to let someone else have all of one colour. So we all just kept going round the board getting slowly richer at roughly the same rate.

DanielLC:

What is hard to imagine is how such an objectively terrible game got so popular. I've heard that it's better if you play by the rules and don't skip the auctions, but if that's true, that just makes it worse. Monopoly was around before Catan, but house ruled Monopoly was not around before official Monopoly, so how did the objectively worse game win out?

Melvin:

I've never played it with auctions (or money for free parking, which I understand is also a common house rule).

But then again, in my (family's) experience everyone almost always buys any property they land on immediately, so it doesn't come up.

Monopoly is the victory of concept over gameplay. The *idea* of Monopoly is fun (especially for kids, for whom the idea of having *thousands* of dollars is a joy in itself). It's just that the rules were laid down before anyone properly understood board game design.

John Schilling:

Lack of competition, at a time when the only multiplayer family boardgames that existed were Monopoly and Scrabble.

Monopoly was *meant* to be boring and frustrating and whatever. It was originally part of a matched set, with the other side of the board being a cooperative game focused on building a (not very deep) socialist utopia, and the socialist designer tried to make that part fun and the capitalist part unfun.

https://en.wikipedia.org/wiki/Monopoly_(game)#History

She failed, because whatever the reality of capitalism, people really really like to fantasize about being triumphant billionaires. People would have had *more* fun playing a well-designed game about people trying to get rich, but they had *some* fun playing the "hey, you're not supposed to be enjoying that" version.

A bunch of greedy capitalists noticed, and bought the rights, and started selling the pro-capitalist version that people wanted to play. Which gave it the critical first-mover advantage; anyone proposing an alternative would be trying to sell it into a market where Monopoly is the default Family Game, and everybody knows the rules, and bringing out some new objectively-superior boardgame is like showing up for RPG night with a bunch of Conspicuously Not Dungeons and Dragons rulebooks.

There's a reason the new class of objectively superior boardgames is often called "Eurogames". Lizzie Magie and Parker Brothers poisoned the well for good boardgames in the United States; we had to wait until Europe recovered from WWII, developed its own native boardgaming tradition, and finally managed to catch the attention of nerdy-but-not-hopelessly-isolated Americans.

DanielLC:

The game those greedy capitalists sold isn't the original monopoly. They tried to modify it to make it more fun. Maybe they succeeded, and the original was even worse.

Between Monopoly and Scrabble, Scrabble wins by a long shot.

Peter Defeel:

I’ve no idea what Catan is, nor am I a huge fan of Monopoly, but your use of “objectively better” has soured me on whatever Catan is.

John Schilling:

And yet there are reasonably objective standards one can articulate as to why Settlers of Catan is objectively superior to Monopoly as a tool for entertaining social interaction. Things like no player elimination, no player quasi-elimination where they're locked into a position where they can't realistically win but are expected to play on, and a much shorter endgame once a high-probability victor emerges. You're basically always playing a game you can reasonably hope to win, until someone else wins and you congratulate them and go do something else.

I suppose these aren't *absolutely* objective criteria on account of their being somewhere a masochistic gamer who'd rather spend an hour playing out a losing position in a game of Monopoly.

Also, there's more social interaction baked into the rules, and the competition isn't as blatantly zero-sum adversarial.

Legionaire:

I came here to say this. A large facet of what makes games fun is exploring the knowledge space and generally experiencing exciting scenarios. Especially with friends. There is a vast vast set of game rules undiscovered. Soccer and Football are poorly designed compared to what's possible (whatever your goal)

FeepingCreature:

(Put differently: we will solve exactly all problems that we want solved. For problems that we do not want solved, we will instead replace them with far more fiendish problems of a challenge and scale hitherto unimagined.)

Sophia Epistemia:

gonna read the post in full, just signposting my starting position as "solving All The Problems would be absolutely fully good, any trade-off is instantly solvable in under five seconds of thought, this is blindingly obvious. given transhumanist tech it's trivial to think of uploading or full-immersing people into a vr shard with others compatible with them. whoever wants hardships could get them conveniently instantiated for themselves and anyone else with the same suffering kink who wants to share it with them. This Is Not Hard."

Sophia Epistemia:

aight, that was short. yeah, nothing in there that's not solved a dozen times over in FiO.

grumboid:

What is FiO?

Sophia Epistemia:

Friendship is Optimal

Skivverus:

I think there's a related question which *is* hard, though, namely, how do you (or anyone else) *know* you have, in fact, Solved All The Problems.

Ra comes to mind (https://qntm.org/ra) as a piece of fiction that considers this.

Actually-in-fact Solving All The Problems strikes me as similar to approaching zero in the denominator: the amount of contact with reality required to Solve All The Problems increases as your standards for "solving" do. And there's an upper limit on the amount of contact you *can* have with reality - when it comes to prophecy, you can pick accurate, complete, or timely, but not all three.

Donald:

> any trade-off is instantly solvable in under five seconds of thought

Only after the mind upgrades. For us un-upgraded humans it's quite a bit harder.

DanielLC:

Though doing better than our current civilization is trivial. Utopia isn't going to be a dystopia.

Donald:

Agreed.

hwold:

> whoever wants hardships could get them conveniently instantiated for themselves

Only real hardship is ultimately meaningful. Virtual hardship is only meaningful in the sense that it prepares you for, and is a proxy for, real hardship. Remove real hardship, and the "just make virtual hardship" solution disappears.

This problem only increases as transhumanism level increases, because more intelligent and more reflective transhuman-agents will more readily realize that their taste for games and learning and training and so forth is just the way Natural Selection used to make us prepare for the real challenges (like kittens play-hunting), making the loss of those even more tragic. They will, on an intellectual level, quickly realize how pointless it is — and after the intellect, emotions will follow.

Vakus Drake:

If simulations didn't work for hardship then things like sports wouldn't be so popular. You have a group of people (professional athletes) who manage to be extremely high status just by virtue of socially prestigious success at a zero sum competition. This is as clear evidence as any that anything can provide a sense of meaning if it's seen as important within one's social environment

None of the Above:

In Stirling and Pournelle's _Go Tell the Spartans_, there is a very rich and well-connected character who goes off to a frontier world to take part in a revolution, basically for the adventure. He reflects early in the story that safaris and mountain climbing and such (danger for its own sake) was unsatisfying, whereas danger and adventure while trying to do something very hard and important was very satisfying.

Vakus Drake:

Honestly I think people really overemphasize the importance of genuine danger. People confuse what is enjoyable to read, with what's enjoyable to experience firsthand.

Generally I think people would overall enjoy simulations with no risk of death more, precisely because you aren't stressed out by the possibility of death!

Similarly I think most of the thrill of say extreme sports comes from the feeling of danger not actual danger. So most people are wired such that I think they'd greatly prefer the thrill in a safe context.

After all few people would enjoy a rollercoaster more just by virtue of knowing it's unsafe!

Satco:

I think you have never tried extreme sports. I can tell you firsthand that the fun of surfing definitely lies in the technical limitations of the situation: it is dangerous to attempt rescues, but the outcome is clearly coupled with your decisions, so that the good outcome very strongly depends on your actions, giving the actions meaning.

The rollercoaster example doesn't work, because supposedly the rollercoaster being more unsafe would be unconnected to the actions of the riders.

Vakus Drake:

>the fun of surfing definitely lies in the technical limitation of the situation making it dangerous to attempt rescues

If people developed some really effective swimming robots, I doubt the majority of surfers would think that somehow detracted from the fun. To the contrary I'd expect such safeguards to make things more fun because you could do things that would otherwise probably get you killed if you did them long enough.

There are people who throw their parachute out of a plane and jump after it, so I certainly believe some people want real risk. However, the rarity of the practice indicates that this preference is still super rare.

All the evidence I'm aware of seems to suggest that even among thrill seekers people want the feeling of danger not necessarily real danger. It's just that RN those two things heavily correlate.

Prime:

These ideas have been explored many times over in science fiction - I was surprised not to see any mention of The Culture, Star Trek, or Cory Doctorow.

Culture novels tend to be a bit internally inconsistent about this but generally in a world with transhuman self modification, perfect chemical bliss, and perfectly benevolent omnipotent AI, people tend to have a great time for a few centuries partying it up and then go into increasingly long periods of hibernation. Same with Doctorow's novels.

acetyl13:

But why would they get bored enough to go into those hibernations if they can just hack their brains/erase their memories to not be bored? If it is impossible then the utopia is not deep enough.

Andannius:

In The Culture, such things are considered gauche.

Peter Defeel:

Erasing memories is pretty much like dying.

Doctor Mist:

Not trying to pick on you but I confess I am charmed by the notion of omnipoet AI.

Vakus Drake:

One thing that always seemed hugely irksome about the Culture is how suboptimal the humans' supposedly perfect lives seem.

In a better utopia I'd expect humanlike minds to first experience every kind of simulated adventure, with the NPCs in the simulations being a mix of mostly characters played by an AGI DM/GM and a few newly created minds (always created so they're happy and glad to have been made).

Then once people can no longer be entertained by this (after probably a *very* long time) they would grow up more, so they can now appreciate totally new things. In the same way that you can already appreciate things as an adult which you couldn't as a young child. So the setting's superintelligences shouldn't be this aloof other species, they should be what everyone eventually becomes!

None of the Above:

My sense is that there's massive variation in this. There's a guy in one of the books who is about as old as the Culture--he just decided not to let himself die. People know about him and think his choice is kinda weird, but he seems to get along fine in Culture society. There are people who get bored and have themselves Stored for a period of time, or until some event happens, or maybe permanently on the assumption that the Culture will Sublime sooner or later and then they'll be pulled in as part of the process. There's a very weird guy who hates being around other people, and so gets a "job" as the one humanoid in an asteroid in deep space that is a hidden emergency weapons cache. There are people who like having kids and have a huge number, and others who have none. There's not really a problem either way--the Culture doesn't need humans to do any work so a falling population isn't a problem; the Culture has vast resources so accommodating another few billion humans is no big deal.

Tom J:

theology, like poetry, is in a sadly fallen state today

Maxwell E:

Hilariously, taking Scott's penultimate paragraph on its face, everything necessarily comes back to Mormon cosmology.

Tom J:

i would rather read that review!

Maxwell E:

I, too, would love a longform SSC/ACX review of the King Follett Discourse. Someone ought to make it happen! Paging TracingWoodgrains & Adam S. Miller for a collaboration here?

Penina:

Interested in your argument that if there are no longer unmet needs there would be no further reason to pray. In the Jewish formulation, "bakasha/request" is only one type of prayer. "Hallel/praise" is another, which I don't see becoming obsolete under these conditions. If the provable nature of God turns out to be something like "the One who incarnates in infinite forms and whose desire to Be and to be forgotten, sought, found and known breathes existence into being" - would that not warrant and inspire endless praise no matter how many new forms existence/incarnation might take or the degree to which our own work participates in creation?

Deiseach:

Revelation 4:6-11

"And around the throne, on each side of the throne, are four living creatures, full of eyes in front and behind: 7 the first living creature like a lion, the second living creature like an ox, the third living creature with the face of a man, and the fourth living creature like an eagle in flight. 8 And the four living creatures, each of them with six wings, are full of eyes all around and within, and day and night they never cease to say,

“Holy, holy, holy, is the Lord God Almighty,
who was and is and is to come!”

9 And whenever the living creatures give glory and honor and thanks to him who is seated on the throne, who lives forever and ever, 10 the twenty-four elders fall down before him who is seated on the throne and worship him who lives forever and ever. They cast their crowns before the throne, saying,

11 “Worthy are you, our Lord and God,
to receive glory and honor and power,
for you created all things,
and by your will they existed and were created.”

See also the Trisagion:

https://en.wikipedia.org/wiki/Trisagion

And by Dead Can Dance, The Host of Seraphim:

https://www.youtube.com/watch?v=hThAlY3Q2Kw

polscistoic:

I am rather puzzled that you want to “endlessly praise” a God who blatantly shows that he does not have a clue that might is not identical to right.

He comes out quite flat-footed in this regard in his famous answer to Job; here is a taste of the type of “argument” he uses when Job complains that there is no proportionate relationship between the size of Job’s sins and the size of the punishment God has meted out (including killing all of Job’s sons):

“Dress for action like a man;
I will question you, and you make it known to me.
8 Will you even put me in the wrong?
Will you condemn me that you may be in the right?
9 Have you an arm like God,
and can you thunder with a voice like his?”

… God goes on to state that since he not only is stronger than Job, but can also create more stuff than him, Job should shut the hell up.

…Which he does, since Job is not stupid or suicidal.

God then rewards him by, among other things, giving him the same number of sons that he has killed. It is apparent that God thinks that as long as the number is the same, no harm has been done.

To quote the philosopher Peter Wessel Zapffe in his 600-page doctoral thesis “On the Tragic”: “In his answer to Job God comes across as a cosmic caveman, almost sympathetic in his total ignorance of what moral questions are about.”

Penina:

I'm not sure where you got the idea that my conception of the divine is the vengeful male warmonger described in books written by mortal men... What's praiseworthy is not how things are but that they are- existence as a miracle and gift not because its laws are just but because it Is, and we are, and we get to experience sense perception and agency, and do with them what we choose

polscistoic:

Sorry about that; theological references to any of the three Abrahamic religions (here: Judaism) always trigger my Book of Job reflex. It is a holy book in all three religions.

A great text, by the way, so don't get me wrong - the book of Job is right up there with Ecclesiastes.

DanielLC:

Presumably once we're powerful enough to keep leviathans as pets, he'll give us a real answer.

polscistoic:

I like the idea. Match God's power (the old human dream), or show similar ability to create impressive stuff (super-AIs?) and He will take us more seriously.

Benjamin:

Thank you. Angels already live in Deep Utopia and this is how they spend their time. Alternatively, if there is no personal God to praise, pull an Aristotle and contemplate the impersonal one.

These are basically solved problems.

polscistoic:

If we solved the problem of death, life would lose its perspective.

Sophia Epistemia:

bioconservative cishumanism is a self-solving problem. in the distant future i'll think, with a jaded tinge of sadness, of those who chose to end.

Deiseach:

At least we won't have to stick around to have you be all jaded tinges at us, which pleases me too, so we'll both get the happy futures we desire!

Crotchety Crank:

Love the optimism. Could do without the sneer.

Cosimo Giusti:

As a non-bipolar heterophile biped, I resemble that remark.

polscistoic:

I regret to inform you that you do not cheat death.

At most you earn yourself an extension, as Peter Stormare, aka Lucifer, points out in 5:19-5:37:

https://www.youtube.com/watch?v=xoD44rohrtU

Cosimo Giusti:

You've stated the fundamental argument against artificially extending life indefinitely.

Even if it were possible, I wouldn't want to attempt it. At a certain point, simple 'consciousness', 'figuring things out', becomes fatiguing. My mother died four months short of a hundred years, but she would have been just as pleased to go at 95. Cognitive strategies and psychiatric balance wear down with the body.

Sophia Epistemia:

you stopped thinking before thinking of unaging immortality.

Jim Menegay:

When I think of unaging immortality, I reflect on the idea that to live is to learn, and that to learn is to create synapses, and that the human skull can hold only a finite number of synapses. Therefore, in order to live and learn forever one must be simultaneously living and forgetting forever. In order for synapses to be reused, they must be repurposed.

So, in order to live forever, we must put ourselves into an epistemological steady state in which we continue to learn and to experience pleasant things, but older experiences and facts fade from our memory. Unless we relearn and re-experience them over and over again throughout eternity.

Immortality still sounds nice, but maybe not all THAT nice.

Sophia Epistemia:

's fine by me. an unaging immortal from [insert your favourite Distant Time here] would just learn and forget at the normal learning 'n forgetting rates. yknow, like the principles on which Spaced Repetition is based. I'll take it over the alternatives

Jeffrey Soreff:

Agreed. Personally, while learning is interesting and is _one_ of the aspects that can make life enjoyable, it isn't the only such aspect. Simple hedonism has its charms as well, and doesn't have any intrinsic memory limits.

Sophia Epistemia:

> Simple hedonism has its charms

and i have basically dedicated my life to *making more pleasure happen* 😁 so i wholeheartedly concur!

Doctor Mist:

You assume that our minds will always need to fit in a skull and be implemented entirely by squishy neurons.

Donald:

> Therefore, in order to live and learn forever one must be simultaneously living and forgetting forever. In order for synapses to be reused, they must be repurposed.

If you are planning to operate within the limits of biology, sure.

With mind uploading, this seems easy to fix.

DanielLC:

I constantly forget things. Am I learning more than I forget? Maybe. But I don't think I'd mind it if everything I learn I forget after a century or two. My mom doesn't seem to mind watching Seinfeld and forgetting it fast enough to watch it over from the beginning like it's her first time.

But if that is a problem, get a bigger skull. Or smaller synapses. Sure there's a limited amount you could put in any mind, but we'll run into the heat death of the universe long before that becomes a problem. And if we could break entropy and prevent the heat death of the universe, we could probably build arbitrarily large brains too.

a real dog:

Who says you'll still have a skull, or synapses for that matter?

None of the Above:

A very long life during which your body and mind fall apart would suck. But I do not see why continuing life at my current level of physical and mental function for an extra few hundred years would be bad. Maybe I'd get bored of it, but I haven't gotten bored of life yet.

Doctor Mist:

If we solved the problem of miscarriage, birth would no longer seem precious.

Jeffrey Soreff:

LOL, Good one!

None of the Above:

It's entertaining how the opposite happened wrt child mortality--children dying before reaching adulthood went from super common (like >50%) to a rare, terrible tragedy, and instead of children becoming less precious and protected, we've massively increased our demand for safety for children.

Doctor Mist:

Yes. I’m 100% on-board for life extension…but I occasionally worry that it will actually make us *more* risk-averse, if what you’re risking is not another 10-30 years but rather another 6000.

DanielLC:

I don't know what life is like for you. Maybe every time you think about your inevitable death, it makes you appreciate your life that much more.

But that's not what it's like for me. For me it's horror, so I've learned to stop thinking about it. I'm perfectly happy with a life where I don't think about my mortality, and I don't see how not having mortality would make any difference, beyond that I'd spend longer living it.

polscistoic:

You will change. To quote a late-in-life comment by the Dano-Norwegian author Aksel Sandemose (1899-1965): “What I would never have imagined possible when I was young, is that it is actually easier to approach the final door when you do not believe there is anything on the other side.”

DanielLC:

Perhaps I will. I can only hope that I have a support group that can change me back to wanting to live.

Have you considered that given time, you would change and be attached to life and not want to give it up?

Bob Frank:

> We assume that all worthwhile science has already been discovered - or that everything left requires a particle accelerator the size of the Milky Way plus an AI with a brain the size of Jupiter to interpret the results.

Didn't JWST throw a significant wrench in this assumption? I haven't been following it particularly closely, so it's possible that there are people here who know more about this than I do, but as I understand it, the telescope has been able to look very far away, which also means very far back in time, and get clear images. And when they looked back far enough that they should have been, by our best understanding of the history of the universe, in "cosmic primordial times" before modern conditions existed, they found... a bunch of galaxies that look just like the ones we have today.

So clearly something we don't understand is going on, that has nothing to do with particle physics.

dionysus:

The "something we don't understand" is either the precise details of galaxy formation, or the precise details of how to estimate the mass of a galaxy from a picture.

Spikejester:

This is an assumption about the state of science in the far future Deep Utopia - not one about present day, where there is obviously quite a bit of science we know we don't know!

Zubon:

>Weight-lifting retains its excitement because we don’t fully understand either.

No? You go on to describe weight-lifting as a competitive spectator sport. Weight-lifting does not seem particularly popular as a competitive spectator sport. Weight-lifting is fairly popular as a means to get stronger. The excitement of weight-lifting is being stronger and seeing your numbers go up. I imagine that running is similar: while some number of people will watch a race or marathon every few years, far more people are trying to improve their cardiovascular health, speed, and endurance.

And THAT seems vulnerable to elimination in a world with wish-granting nanobots. If you want to become stronger or faster, you just engage the appropriate machines or bio tech, and being healthier is also not a worry anymore.

AlexanderTheGrand:

To be fair, the book makes your point more strongly than the one highlighted in this post.

I play soccer, and I really don’t play in order to optimize my potential. It’s just fun. That’s another perspective on sports. I guess that’s closer to the wireheading example in some sense.

Mr. Doolittle:

I think so. If you could feel the same "fun" experience without playing soccer at all, that seems to negate the need for soccer. If people also played soccer as a means to gain physical health, then that also gets tossed if the nano bots do it for you.

People would be eating the most delicious garbage food and then having the bots remove the negatives, or just skipping the whole thing and tricking their body that they think they ate amazingly good food but were really eating basic nutrition (or just nano bots making us healthy without eating).

merisiel:

Yeah, and I was thinking something similar about artistic pursuits.

I’m a clarinet player who just started learning the violin — there’s an adult beginner strings class at a local music school that I joined because I was curious. I’m never going to be *good* at the violin, but it is really fun and satisfying to make music in a totally different way than I’m used to.

I play the clarinet as a hobby and I’m never going to play in, like, the New York Philharmonic or anything, but the more I improve my skills, the more opportunities I’m going to get to join local orchestras, play chamber music with people, and generally have fun playing the clarinet. So I’m not “optimizing my potential” there either, except maybe in the sense of optimizing my potential to have fun. If my goal were to become the best clarinet player, that would just be an exercise in frustration!

Yug Gnirob:

I'm an amateur writer, and I'm constantly reminding myself I don't have to compete with Neil Gaiman; I'm instead competing with a blank page. If I manage to be better than nothing, that's worth doing.

Michael Bateman:

Even this is too teleological. I run because I enjoy the process of running. Being able to run faster/longer is incidental to my enjoyment of the act itself.

DanielLC:

When I play videogames, I often am capable of hacking them and giving myself infinity points, but I don't.

This isn't a metaphor. I'm not going to be lifting weights for fun. I'll be playing videogames. And I won't give myself infinity points. Exactly like right now and not just analogous to it.

Skyler:

Seconding this. There's a lot of activities where I could win very quickly but don't, because the experience of getting there is more fun. Videogames, D&D, multi-day backpacking trips, building legos instead of buying figures, teaching myself guitar instead of listening to recorded music.

Expand full comment
Desertopa's avatar

This reminds me of what I always thought was the weirdest part of The Chronicles of Narnia (my most reread book series as a child.) The very ending of the final book, The Last Battle, where Narnia experiences its equivalent of Judgment Day, and all the protagonists end up in a Narnia Kingdom of Heaven, and it ends with them going on a new series of adventures which the narrator attested were each more exciting and wonderful than the last.

I found it interesting, but very strange even as a child, because I could see why C.S. Lewis would find an eternal series of exciting adventures more satisfying than an eternity of peaceful tranquility and contemplation of God at the end, but at the same time I felt, hasn't the veneer been lifted? These adventures are transparently in service of nothing except their own satisfaction, they're being provided by divine fiat for their gratification. Having that knowledge, can they struggle the way they did for their earlier adventures, or get invested in the outcomes? If divine power can convince them to disregard how unnecessary it all is, couldn't it skip the middleman and dose them with bliss? The aesthetically satisfying resolution seemed fundamentally unstable.

Expand full comment
merisiel's avatar

“These adventures are transparently in service of nothing except their own satisfaction, they're being provided by divine fiat for their gratification.”

Kind of like playing D&D with a DM who doesn’t meaningfully challenge you at all.

Expand full comment
Donald's avatar

I mean unless the DM locks the room or points a gun to your head, you can just get up and walk away. You aren't fighting real goblins. So whatever the challenge level of the game, it's a game for your gratification. (The same goes for computer games)

Expand full comment
Kyle's avatar

This is not really accurate--the only real adventure was an endless movement further into heaven. It's not that the characters had more problems to solve, they just started a journey of endless growth and increasing appreciation of a greater and greater beauty. You can read for yourself:

https://www.samizdat.qc.ca/arts/lit/PDFs/TheLastBattle_CSL.pdf

Expand full comment
Desertopa's avatar

So, I recalled the narration as saying that they began a new series of adventures. It doesn't quite say that, but it doesn't exactly say this either.

"And as He spoke He no longer looked to them like a lion; but the

things that began to happen after that were so great and beautiful that

I cannot write them. And for us this is the end of all the stories, and

we can most truly say that they all lived happily ever after. But for

them it was only the beginning of the real story. All their life in this

world and all their adventures in Narnia had only been the cover and

the title page: now at last they were beginning Chapter One of the

Great Story, which no one on earth has read: which goes on for ever:

in which every chapter is better than the one before."

This is different than I remembered, but not different enough to dispel my confusion, since I took it that each chapter would be better than the one before not just for them, but for an outside audience. If things got better and better for them, with no narrative tension, it's not at all clear how that would make for a good story from an audience's perspective.

Expand full comment
Godoth's avatar

I don’t really follow your objection, unless I assume that you aren’t really familiar with the Bible.

In a very straightforward way, Christians believe that meaning has its source in God and that God is manifested through the world (when God chooses his most significant manifestation, it is as a man who lives a perfect life). At the end of the earth, a new heaven and new earth are created, people are given new and physical bodies, and they continue to labor, etc.

Your idea that living is the part we don't actually need to do, and should cut out to get directly to the doing-nothing-in-heaven bit, just doesn't have anything to do with Christian theology at all. It's a product of folk/pagan ideas of the end of existence with a cultural-Christianity veneer applied to them.

Expand full comment
Desertopa's avatar

It's worth noting for context that I read and reread the Chronicles of Narnia for years as a child before I ever realized it was intended as a Christian allegory, or became familiar enough with Christian doctrine and narrative to recognize the connections. But the parts which I found most confusing or narratively unsatisfying as a child were exactly those parts which I think suffered because he was trying to reflect an underlying doctrine which didn't really make sense.

Expand full comment
Godoth's avatar

Does it not make sense or do you just not agree with it? Makes a lot of sense to me; I don’t really see a logical problem with the idea that finite creatures might have quite a lot to do in eternity.

Expand full comment
Desertopa's avatar

That part, I think makes sense. The issue is, I always felt "If Aslan and the Emperor over the Sea could do this, the stuff they did over the rest of the series doesn't make much sense," and I still think that's the case. There's a lot of theology built around rationalizing it, but I don't find it particularly compelling.

Expand full comment
Godoth's avatar

That’s such a vague objection that I can’t engage with it, but de gustibus non disputandum

Expand full comment
Vakus Drake's avatar

Honestly I'd argue that, overall, simulated adventures with stakes similar to a D&D game will actually be more enjoyable, even if they're not better for you to read about.

People like characters in stories to face likely death, but they forget how much less fun that would be for the characters actually fearing for their lives.

I think that having a bit of distance from things, because you know it's a simulation, would generally improve the experience and keep it from being stressful.

Expand full comment
Tom B's avatar

Wouldn't most people just play simulated realities (i.e. video games) all the time?

Expand full comment
Nematophy's avatar

I mean, it seems we're there already.

Expand full comment
Woolery's avatar

Yes and it’s nothing new. Archaeological evidence suggests some types of VR date back 40,000 years.

Expand full comment
beleester's avatar

In Iain Banks's Culture novels, there's a mention that almost all the games people play include some form of random or hidden-information element, because computers have long since solved all deterministic games.

In real life that's not really been a concern - computers absolutely demolish humans at chess, but there are still lots of humans playing chess. It turns out that even though you can learn a lot about strategy from a computer, you can't literally just copy a computer's strategy and win, because their strategy is too complicated to calculate in your head.

As for sports, if body types are a significant issue perhaps you could upload everyone into the same body. Which would make it kinda like an esport, I guess - your capabilities are set more by the rules of the game than by your body or training. (Inasmuch as "your body" is a meaningful concept in Deep Utopia, anyway)

Expand full comment
Crotchety Crank's avatar

Yeah, it's fun (even as a total novice) to see pros review a game and say things like...

"In this position, for White, Stockfish finds an absolutely crushing line, Nxe7, which is completely crazy - but no human's gonna see that, so Czwokewizowicz secures his advantage with Kf2..."

Handwaving away entire lines of play as "for the engines" while skilled humans remain in the realm of the known.

Expand full comment
DanielLC's avatar

If you have computers powerful enough to solve chess, you have computers powerful enough to play chess on a bigger board. The more computers advance, the wider the gulf grows between games we can solve and games we can play.

Expand full comment
None of the Above's avatar

In Player of Games, the main character is one of the best humanoid players of some very difficult game alive. A drone feeds him a way to do something that is almost impossible and has never or rarely been done, that he couldn't see for himself, as a way of entrapping him into working for Special Circumstances.

Later (I forget which book), there is a guy universally recognized as the greatest living musical composer. He asks a Mind whether it could convincingly duplicate his symphonies, and it says it could.

The assumption in these books is that the Minds and often even the drones are just *way* smarter than the humanoids, who themselves are probably optimized up to very high intelligence because they're mostly optimized in every other way.

Expand full comment
AlexanderTheGrand's avatar

I’m in the middle of reading this (40% through). I'm finding this book extremely tedious, and not all that insightful to be honest. Agreed the narrative structure is lacking, and I’m sad to hear the forest animals story never pans out.

Things I liked: I think he did a decent defense of wireheading. I think some part of his description of “deep meaninglessness” was interesting (though about 10x too long).

Complaints: the book starts with a "hedonium" calculation: what if the universe were converted to happiness substrate? But then it spends most of the rest of the book in the frame of a roughly-human psyche. A lot of it felt like "how would a human find meaning in a solved world," but I don't know, that doesn't really strike me as what it'll look like 10,000 years from now.

I agree, he didn’t flesh out any utopia very well, which is most of what I wanted. I think the problems are really clear and needed way less time, but the solutions are why someone (I) would read this.

Overall, sort of disappointed. Has anyone read this and Superintelligence, and if so can you say whether that one was more readable?

Expand full comment
Vakus Drake's avatar

I liked Superintelligence and didn't find it dragging on the way you complain about here. Though I've only read Superintelligence, not this book.

Expand full comment
Thomas Cuezze's avatar

In the hypothetical world in which a Deep Utopian is sent back in time to experience true suffering, wouldn't it make sense to pick the time in which the world is changing and technology is advancing at the fastest pace? A time in which it is possible to experience both unimaginable suffering and extreme human joy right next to each other? A time in which the future seems incredibly malleable and everything from utopia to total apocalypse is possible? With the enormous caveat of "who knows what the future looks like," our current time to me seems like a way more interesting inflection point to live through than being a medieval peasant.

Expand full comment
Scott Alexander's avatar

Obviously you would think that, that's why you're here now.

Expand full comment
Desertopa's avatar

People wrote stories about experiencing unimaginable suffering and extreme human joy right next to each other thousands of years ago (in the story of Siddhartha Gautama for example,) so I don't think that necessarily follows at all, no. Arguments about how, if people were creating simulated realities to live in, it would be logical for them to create this one, have always seemed extremely post hoc to me.

Personally, if I lived in a civilization which had the capacity to simulate entire realities with conscious people inside, and anyone decided to simulate the reality we observe, I would lobby for them not to be allowed to.

Expand full comment
Thomas Cuezze's avatar

The fundamental claim is that the most interesting time to simulate is that in which the pace of societal progress is the fastest, i.e. plot a curve of societal progress and pick the point at which the first derivative is maximized. I think the idea that that point is somewhere around today is defensible.

Three conditions:

1. A deep utopian would want to live in the point in time at which societal progress is maximized.

2. At no point before our current time is the first derivative higher.

3. At no point after our current time will the first derivative be higher.

1. is somewhat speculative, because I really have no idea how a deep utopian would think, but I think it seems eminently reasonable that this is a good criterion to maximize. If you’re trying to live an interesting life you want things to be changing a lot and for there to be a wide range of experiences available. “Life as a medieval peasant” and “life in a post singularity wirehead world” would both be pretty monotone.

2. I think is the most defensible. Look at a curve of any measure of progress, whether that’s GDP or social outcomes or whatever. We’re at the end of a hockey stick. Line go up.

3. is the most speculative, but one of the most discussed ideas on this blog is that “the singularity could very well be near” in which case presumably progress flatlines at the top end of possibility. So I think this is at least a reasonable idea.

Expand full comment
Desertopa's avatar

I think it's plausible a singularity might be near, and I'd accept, broadly, the position that the simulation argument might be "vaguely defensible." But I absolutely reject "credible." I think the whole argument ultimately rests on "if it makes sense to simulate a world like we observe, our world is probably a simulation. So, let's look for reasons it would make sense to simulate a world like ours," while disregarding any reasons why it wouldn't. If I lived in a world which had the power to do such a thing, I would regard it as absolutely ethically impermissible. But while I've spoken to many people who've said they thought it would make sense to run such a simulation, few if any have agreed that they would personally choose to run one.

Expand full comment
Thomas Cuezze's avatar

"if it makes sense to simulate a world like we observe, our world is probably a simulation."

This isn't the claim I was making, although tbf I wasn't really clear about that. I don't think this observation alone supports the statement that we most likely live in a simulation. But it does raise the probability. We should assume the possibility we live in a simulation is more likely if we look around and find good reasons you might want to simulate the world around us. If almost everybody in the world I lived in was a miserable impoverished farmer, myself included, I would rightfully ask who the hell would want to simulate a world in which you live as a miserable farmer.

As to the point of "people alive today wouldn't want to simulate our world," I think this relies on the idea that deep utopians would think and value these things relatively similarly to 2024 Americans (assuming you live in America) and I don't think that's an obvious assumption.

Expand full comment
Desertopa's avatar

I don't think it relies on the assumption that deep utopians would value things relatively similarly to a 2024 American; I think it relies on the assumption that deep utopians have comparable senses of ethics, if not priorities. Keep in mind that while we might be at an inflection point for our society, billions of people today still live in desperate poverty and insecurity. The standard of living that wealthy modern Americans experience is very much a minority experience. If you're looking at overall happiness *divided* by overall suffering, the present day probably fares better than most or all of history so far. If you're looking at overall happiness *minus* overall suffering, I think we're likely much worse off than when our population was smaller. People discussing the simulation argument from wealthy first world countries are evaluating it from an exceptional position in the present landscape.

To a deep utopian, who's about equally removed from both, the wealthy Americans in a simulation are most likely not going to seem more salient than the impoverished Africans.

Expand full comment
0dayxFF's avatar

I think the idea is that the utopians would find the nearness to singularity interesting as an inflection point, not that they are interested in American 2024 living standards.

Expand full comment
0dayxFF's avatar

> If almost everybody in the world I lived in was a miserable impoverished farmer, myself included, I would rightfully who the hell would want to simulate a world in which you live as a miserable farmer.

The simulators would probably be running simulations in huge batches, with each simulation being slightly randomized. Or at least, the simulations produced by such a regimen would outnumber the simulations that were carefully handcrafted. As such there's nothing strange about there being a vast number of simulated miserable-farmer worlds. They would outnumber the real world (per Bostrom's argument) by a huge ratio of <insert hideously large number> to 1.

Expand full comment
Thomas Cuezze's avatar

I think this is true in the situation where simulations are being run to obtain data of some kind, but here we're discussing simulations for the purpose of experience and entertainment, and so I think it makes sense there wouldn't be simulations of miserable farmer worlds.

Expand full comment
YesNoMaybe's avatar

Any argument starting with "The simulators would probably..." is right at most by coincidence.

The space of possible actions and motivations is large and any justification I've seen so far for why future people would behave in ways x, y and z was indistinguishable from projection, at least from the outside.

Expand full comment
Nematophy's avatar

Then you would have picked being born in the 1890s, not the 1990s.

electric light, indoor hot water, refrigeration, air travel, nuclear energy, computers - 50 years

The iPhone is pretty cool but I think we'd all be happier with Apple ]['s

Expand full comment
Thomas Cuezze's avatar

Idk I think being born in the late 90s/early 2000s lets you see a greater transformation of society (computers changed a lot!) Also you aren't constantly getting wrapped up in wars. Plus, we do seem to be on the verge of a massive advancement in AI which it seems like might exceed even electricity or the computer in terms of transformation.

Expand full comment
Nematophy's avatar

I think you are underrating the amount of social change that happened in the 20th century, overrating the societal transformation computers caused, overestimating the likelihood of massive AI advancements (though on this blog I doubt many will agree with me on that), and underestimating the probability of new wars.

Someone born in 1900 who lived to 100 most likely grew up without running water or electricity, and died being able to play online games of Counter Strike. The iPhone is cool but it's not THAT cool.

(besides, wouldn't you *want* to live through the World Wars? I thought you wanted "A time in which it is possible to experience both unimaginable suffering and extreme human joy right next to each other?" ;)

Expand full comment
Thomas Cuezze's avatar

I think in the end this boils down to how transformative you think AI will be, which is a horse that has been beaten many, many times here by people much smarter than me. And fair! Probably the wars would be quite an interesting time to live through.

Expand full comment
Brandon Fishback's avatar

Computers made people's lives worse.

Expand full comment
None of the Above's avatar

Beware: this is the kind of scenario that ends up with you wearing dark glasses and dodging bullets.

Expand full comment
Vakus Drake's avatar

I should note that your life has to be pretty interesting for this logic to work. Simulating a wage slave makes as much sense as simulating a feudal peasant

Expand full comment
Philo Vivero's avatar

I think the tradition of fiction in this regard usually goes like this:

The omnipotent deep utopiasts will set up a world for them to enter and play by rules they've designed. In that world, they have no clue what/who they are, they think they're mortal fallible dumb creatures, and a very interesting narrative plays out.

In the end, when they finally meet their untimely demise, they wake up back in their deep utopia, replay the experience and enjoy it, then do it all over again.

A decent film that explored these topics is called "The Nines" with Ryan Reynolds and Melissa McCarthy (among others). There are dozens of others. This is not a super-new idea. Glad Bostrom covered it in some detail.

This post also was so good and so LOL. I especially like the sport of Crumpets myself.

Expand full comment
Bob Frank's avatar

> What if, after we all have IQ one billion, we can just figure out which religion is true?

I think this question fundamentally misunderstands religion. Ask any person of faith, who sincerely believes that their religion is true, and they'll tell you no, because faith simply does not work like that. With the possible exception of Jehovah's Witnesses, who as near as I can tell do seem to sincerely believe that you can reason your way into faith. But the most common understanding is that knowledge of the truth comes as a result of cultivating and acting on faith, not the other way around.

Expand full comment
Scott Alexander's avatar

I think this is false. For example, I think Catholicism specifically says that reason can be used to prove the truth of the Catholic religion. I don't have a perfect link ready for you, but https://catholicscientists.org/questions/q8-science-is-based-on-evidence-what-is-the-evidence-for-god/ is pretty close to what I'm thinking of.

The Jewish version of this is most obvious in Maimonides ( https://iep.utm.edu/maimonid/ ) - I think, although am not sure, that most modern Orthodox Jews would agree with him on this point.

Expand full comment
Crotchety Crank's avatar

Chapter One of the Catechism discusses this fairly directly.

36 ...God, the first principle and last end of all things, can be known with certainty from the created world by the natural light of human reason.

Expand full comment
Bob Frank's avatar

Check the next two points. 37 says that, while this may be theoretically true, in practice it doesn't actually work that way due to human imperfection, and 38 says that therefore "man stands in need of being enlightened by God’s revelation, not only about those things that exceed his understanding, but also 'about those religious and moral truths which of themselves are not beyond the grasp of human reason'"

Expand full comment
0dayxFF's avatar

But we're not talking about human intellect. This is an IQ one billion posthuman, who just wants to know which religion is true (the point 36 that Crotchety Crank quoted)

Expand full comment
Bob Frank's avatar

Perhaps. To that I'd simply say, if we're seriously considering the notion of traditional religions being true, remember the tale of the Tower of Babel. Because one possibility you're contemplating is the reality of a God who has a plan for the world, has set out rules for people to follow, and is willing and able to take drastic corrective action when people go too far off the rails.

Expand full comment
0dayxFF's avatar

There's some evidence that God is getting friendlier over time. Maybe in the future he'll even be comfortable with people becoming more intelligent.

Expand full comment
Crotchety Crank's avatar

Right. Catechismally, it's *theoretically* possible to know the Full Truth of Catholicism from reason and experience, though *pragmatically* they think one must typically employ revelation and faith as (speaking loosely) a shortcut. I took that to support Scott's point that "[Catholics believe] reason can be used to prove the truth of the Catholic religion", though of course viewpoints might differ.

Expand full comment
Bob Frank's avatar

Thanks for the links. The Catholic Scientists one is interesting, but it, and the points of the Catechism of the Catholic Church that it references, is largely focused on using evidence and reasoning to demonstrate the existence of a Creator. I find such arguments somewhat persuasive, for basically the same reason as John von Neumann, but they only go so far.

I assume you're familiar with the famous argument that the existence of a finely-crafted Swiss watch implies the existence of a watchmaker. What it doesn't imply, though, is anything whatsoever about the attributes and moral character of the watchmaker beyond their competence in crafting watches. Likewise, demonstrating the existence of a Creator does not demonstrate that that Creator is the capital-G Christian God, nor that God, if his existence can be demonstrated, wants you to be Catholic as opposed to any of the many, many other varieties of Christianity. (I wrote about this in more detail last year: https://robertfrank.substack.com/p/rationality-is-not-correctness )

The Maimonides page is very long and dense, and a little bit outside of my experience. It does look interesting, and something I'd like to look into further when I have the time for it. But I can't say anything useful about it at the moment.

Honestly, one of the most memorable points I've come across on this subject, in my long years of studying various religions, comes from the Book of Mormon, in a passage that directly cautions against trying to figure things out by human reason and learning alone. (But take note of the last part; it's important too.)

28 O that cunning plan of the evil one! O the vainness, and the frailties, and the foolishness of men! When they are learned they think they are wise, and they hearken not unto the counsel of God, for they set it aside, supposing they know of themselves, wherefore, their wisdom is foolishness and it profiteth them not. And they shall perish.

29 But to be learned is good if they hearken unto the counsels of God.

(From 2 Nephi, chapter 9)

Verse 28 is such a thorough indictment of wokeness, published almost 2 centuries before it happened, that it's almost enough to make you wonder...

Expand full comment
bell_of_a_tower's avatar

It's actually a fairly solid foundational point in the Church of Jesus Christ of Latter-day Saints that things of a higher order can be understood by creatures of a lower order if and only if an agent at the higher order reveals the truth to the lower-order creatures. And the truths of God are at the highest order and (fallen) mankind is definitely not. Thankfully God is generous in giving to those that seek (James 1:5).

Human reason is useful and productive, but not dispositive.

Expand full comment
Bob Frank's avatar

Interesting. What book is that doctrine found in?

Expand full comment
bell_of_a_tower's avatar

My comment is a synthesis of several doctrinal elements. I'm looking for specific quotes, but some suggestive scriptures:

* 1 Corinthians 2:9-14: https://biblehub.com/kjv/1_corinthians/2.htm

* Hebrews 11:3 "Through faith we understand...that things which are seen were not made of things which do appear" -- through faith, not sight/man's reasoning.

* constant warnings to not rely on the "arm of flesh"

* Isaiah 55:8 "For my thoughts are not your thoughts, neither are your ways, my ways, saith the Lord" -- God's thoughts are not directly reachable by our thoughts

* 1 Corinthians 3:19 "For the wisdom of this world is foolishness with God"

* Abraham (Pearl of Great Price) 3:19 "And the Lord said unto me: ...I am the Lord thy God, I am more intelligent than they all" (where "they" refers to a ranked ordering of the intelligence of all spirits) -- God's "order" is higher than ours.

Expand full comment
bell_of_a_tower's avatar

And I should be very very clear. We members of the restored Church are commanded in no uncertain terms to use our intellects and reason in the service of God. Just because we cannot use that reason to *prove* the existence/non-existence of God or the truth of his doctrine *DOES NOT* mean that we rely on "blind faith" in leaders.

We are to try to figure out things for ourselves, relying on a personal connection to God and his scripture, as well as reading and trying to understand the words of the prophets (ancient and modern). We are encouraged to ask questions. We have promises of (essentially) non-contradiction (truth A will never contradict truth B where both overlap) and that the Lord will not allow the prophet to lead the people astray on doctrine.

But knowing God and trying to become (more) like Him is a personal matter of practice, faith, and yes, reason.

Expand full comment
JamesLeng's avatar

Sounds like an NP-complete problem, where calculating the correct answer from first principles is nigh-impossible, but checking the validity of a specific proposed answer is relatively easy.
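(A minimal sketch of that asymmetry, using subset-sum as a stand-in; the problem choice and function names are my own illustration, not anything from the book or the thread.)

```python
from itertools import combinations

def find_subset(nums, target):
    """Brute-force search: may examine up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def check_subset(nums, target, candidate):
    """Verification: confirm the candidate draws from nums and sums to target."""
    pool = list(nums)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)  # respect multiplicity
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
print(find_subset(nums, 9))            # slow path: search -> (4, 5)
print(check_subset(nums, 9, (4, 5)))   # fast path: verify -> True
```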

Expand full comment
bell_of_a_tower's avatar

There are definitely shades of that. We can *check* if two purported "truths" about higher things contradict each other (in which case both cannot be true). But getting those truths on our own is impossible.

As a note, we do not believe that this revelation is restricted to members of the Church or even believers generally. ALL truth comes from God via inspiration, following the divine laws established for reception of that. Including all scientific truths. We believe that anyone who has figured out truths about the world has been inspired. And that inspiration comes almost as a matter of right if the underlying divine laws are obeyed. Those laws end up, in practice, looking a lot like the scientific method, at least for things of this world. Their actual true nature may not be, but the implementation of the laws in this fallen state has non-trivial overlap.

Expand full comment
Bob Frank's avatar

> ALL truth comes from God via inspiration ... Including all scientific truths.

Didn't scientists struggle fruitlessly with the molecular structure of DNA for decades until one researcher literally had it given to him in a dream about a spiral staircase?

Expand full comment
JamesLeng's avatar

What other nontrivial properties does God have besides being the source of all correct statements?

Expand full comment
Malcolm Storey's avatar

If you're a Creationist you have to explain why God deliberately created life to appear to have evolved. That's an attempt to deceive; sure you're worshipping the right one?

Expand full comment
Bob Frank's avatar

There are a whole lot of unspoken assumptions packed into this seemingly-simple statement, in an attempt to make a specific, predetermined conclusion appear to be the only choice.

Please don't play rhetorical games like that. (It's particularly counterproductive with a self-selected audience that's predisposed towards being aware of such tricks and disapproving of them.)

Expand full comment
0dayxFF's avatar

How was Malcolm supposed to make the same point? Can you rephrase it in a correct manner?

Expand full comment
Walter Sobchak, Esq.'s avatar

I gave up after a couple of paragraphs. This is silly. Humans are subject to enormous constraints. The laws of thermodynamics rigorously exclude evading scarcity and economics. Human nature is invariant across time and space. Human bodies are made of flimsy stuff. We will all die, and sooner rather than later. I will posit that there is a hard upper limit for human lifespan of 125 years. We cannot be abstracted from the context of our physical embodiment and still be living creatures. Worry about the all too real war, disease, and brutality that runs rampant through the world we really live in, not about imaginary worlds that could never be.

Expand full comment
Arrk Mindmaster's avatar

This states a lot of things with a lot of certainty.

"Human nature is invariant across time and space." Pretty much everything changes, except laws of physics and mathematics, and even that isn't actually proven. Our UNDERSTANDING of them certainly changes with time and thought. Do you really think human nature is precisely the same now as it was 100,000 years ago?

"Human bodies are made of flimsy stuff." Relatively true now, but not material. Technology can certainly change this in the future, whether through becoming cyborgs, moving consciousness into something else (or completely insubstantial), etc.

"hard upper limit for human lifespan of 125 years." Not likely. The best one could say is, for example, 125 is within ten standard deviations of maximum lifespan (one in half a septillion years for a single person, but multiply by the number of persons over a number of years...).

There is room in the world to worry about immediate needs AND about what the future will be like. No need to cut off lines of thought.

Expand full comment
Walter Sobchak, Esq.'s avatar

"Do you really think human nature is precisely the same now as it was 100,000 years ago?"

Yes. The archeology and genomics are absolutely clear on that point.

"moving consciousness into something else (or completely insubstantial), etc."

Cartesian fallacy. Our consciousness is a part of our bodies. It cannot be moved to another system.

"The best one could say is, for example, 125 is within ten standard deviations of maximum lifespan"

A Z=10 is a concession that I am right. There have been on the order of 10 billion lives of genus Homo, and none of them has lasted more than 125 years. A P of one in 10 billion is a Z of about 6 https://www.fourmilab.ch/rpkp/experiments/analysis/zCalc.html. A Z of 5 will get your physics discovery recognized as valid. https://home.cern/resources/faqs/five-sigma. Incidentally, a Z=7 gets a P=0
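(For readers who want to reproduce the z-to-p arithmetic above, a minimal Python sketch, assuming scipy is available, gives the one-sided normal tails; note that the tail at z = 7 is tiny but not literally zero.)

```python
# One-sided normal tail probabilities for the z-scores discussed above.
from scipy.stats import norm

for z in (5, 6, 7, 10):
    print(f"z = {z:>2}: one-sided p ~ {norm.sf(z):.3e}")
# z =  5: ~2.9e-07
# z =  6: ~9.9e-10
# z =  7: ~1.3e-12
# z = 10: ~7.6e-24

# Inverse direction: the z corresponding to p = 1e-10 (one in ten billion).
print(norm.isf(1e-10))  # ~6.36
```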

Expand full comment
Scott Alexander's avatar

I still remember writing about this kind of stuff ten years ago and one of the rigorous constraints that nobody could ever solve even with infinite technology was protein folding. That one took five years.

This is a book of philosophy. I think it's silly and anti-intellectual to say "never worry about anything except the exact current world". Heck, why should we even worry about war? I personally am not in a war right now!

I think it is useful to practice philosophy to clarify our own intuitions, as well as to be prepared for things changing in the future.

Expand full comment
Steeven's avatar

A more interesting version might be “what if deep utopia is physically impossible for some reason. How close can you get and what problems remain?”

Expand full comment
Walter Sobchak, Esq.'s avatar

"one of the rigorous constraints that nobody could ever solve even with infinite technology was protein folding."

Rigorous? I never heard that the problem had been proven to be NP. At any rate, using AI/ML techniques to solve it is not a proof that bottom-up computation could not work with the sort of computational resources we are now using for AI work. What the DeepMind people did was use a different algorithm and a metric f***tonne of computational resources. Won the Nobel Prize for it. Good on them. But it does not show that NP problems can be transformed into P problems by way of wizardry.

"it's silly and anti-intellectual to say "never worry about anything except the exact current world"

I am not saying that. I am saying that philosophy should be about the real world. I cite the ancients, "The Clouds of Aristophanes". Speculation about fantasy worlds is popular entertainment not philosophy.

Thomas Sowell holds that there are two visions of mankind the constrained vs the unconstrained https://en.wikipedia.org/wiki/A_Conflict_of_Visions Stephen Pinker accurately relabeled the "unconstrained vision" as the "utopian vision" and the "constrained vision" as the "tragic vision".

I prefer Pinker's labels. The tragic vision produces philosophy. The utopian vision produces pornography. Don't get me wrong. I like pornography. At my age it is all I have left. But, it is the Tragic Vision that produces philosophy and its consolations.

Besides, being anti-intellectual in a world where intellectuals are required to believe in transgenderism, settler colonialism, and climate apocalypse is, IMHO, a badge of honor.

"I personally am not in a war right now!"

That is what you and millions of liberal Americans believe. In fact we are deep into Cold War II (a/k/a World War IV). Trotsky said: “You may not be interested in war, but war is interested in you”. https://www.hoover.org/research/war-interested-you Last night American B-2 bombers attacked Houthi assets in Yemen.

“It makes no difference what men think of war, said the judge. War endures. As well ask men what they think of stone. War was always here. Before man was, war waited for him. The ultimate trade awaiting its ultimate practitioner. That is the way it was and will be. That way and not some other way.” Cormac McCarthy, Blood Meridian, or, the Evening Redness in the West

Expand full comment
Bob Frank's avatar

> Trotsky said: “You may not be interested in war, but war is interested in you”.

He got that from the ancient statesman Pericles of Athens, who said that simply because you do not take an interest in politics does not mean that politics won't take an interest in you.

Expand full comment
Walter Sobchak, Esq.'s avatar

Thanks.

You probably are not related to the Franks who lived next door to me in the 1950s. They were all MF. Marvin, Marjorie, Melanie, Michael, Matthew, Mary, and Milton. The dog was named Moochie and they employed a housekeeper named Margaret.

Expand full comment
Bob Frank's avatar

Nope, entirely unrelated. Sorry.

Expand full comment
Maynard Handley's avatar

There are two different issues: “provably exact solution” and “work well heuristics”.

We have “good enough” heuristics for many problems, including traveling salesman, protein folding, and halting. That doesn’t change the truth of P/NP or Halting theorems…

Expand full comment
Quiop's avatar

>"one of the rigorous constraints that nobody could ever solve even with infinite technology was protein folding."

I've seen this claim repeated a couple of times, but I don't think it's accurate. The CASP contest for protein structure prediction had been running since 1994, indicating that plenty of people thought it was achievable in principle. It was definitely a surprise to see it done so soon, after just ~25 years of serious attempts (NB: not "five years"). Most experts in the field were probably expecting the problem to take a few more decades, but I don't think anyone was predicting a timeline of centuries, and nobody who knew anything about the subject thought it would be impossible "even with infinite technology."

Expand full comment
skaladom's avatar

I wouldn't state things about the future with this kind of certainty, but I sure do feel like Bostrom's flight of imagination is far from relevant to anything we have here or are likely to have anytime soon. Just at the moment when human beings are simultaneously bumping against 1. the limits of the ecosystem's resiliency, 2. our sudden shift to material ease, for which evolution has not prepared us, and 3. the prospect of reversing the long-held trend of population growth, which are all very much embodied things to think about, I really don't see what thinking about worlds of infinite resources brings to the table except for some escapism.

At the other end of the spectrum, and as people have been saying, where does the "depth" of deep utopia even end? If you can imagine your way out of all constraints, and sail past the pit where people engage in endless status bickering (b/c status is something they give each other, so it can still be withheld and scarce) - where do you even stop? Why would these things evolve into anything that today's individuals can even slightly identify with or think about?

Expand full comment
Walter Sobchak, Esq.'s avatar

Fundamentally, we agree. Good point about status. It is zero sum and will always lead to tensions and quarrels.

Expand full comment
Jeffrey Soreff's avatar

>Good point about status. It is zero sum and will always lead to tensions and quarrels.

Regrettably, agreed.

Also, a large fraction of humanity specifically wants _power_ over other humans. Roughly speaking, this is also approximately zero sum. If some would-be dictator decides that &fetish is crimethink in the dictator's personal view, no amount of resources and technology will, by itself, stop the dictator from stomping on their victims.

Expand full comment
quiet_NaN's avatar

Consider what a person who was alive a millennium ago might have claimed:

* Starvation has been with us since the dawn of time. The idea that we might one day overcome it as a society is silly: what else would control population growth? The masses being overweight? Patently absurd.

* There always is some ruling class in charge which violently suppresses the others. So has it been since the birth of civilization, so will it be evermore.

* Humans can't fly.

* "I need scarcely say that the beginning and maintenance of life on earth is absolutely and infinitely beyond the range of sound speculation in dynamical science." (Lord Kelvin)

* War is normal.

* Of course a lot of kids die during childhood.

Expand full comment
Walter Sobchak, Esq.'s avatar

Please focus. I did not say that all problems are insoluble. I said that humans are subject to constraints such as human nature and the laws of thermodynamics. Let's take your list:

* Starvation

In most of the world humans are now at least adequately fed. But recognize that we are using enormous quantities of natural resources such as fossil fuels to do this. The sustainability of our efforts has been questioned. I am relatively optimistic, but many people are not.

* There always is some ruling class

This has not, and will not change. In the US we have bureaucratized the violence, but it is still the basis of the state.

* Humans can't fly.

Well, we still can't, but we have built marvelous flying machines that can carry us through the air at preternatural speeds. Again, we use enormous amounts of resources to do it, and it too may or may not be sustainable because of resource constraints.

* the beginning and maintenance of life

As near as I can make out we are still speculating. It has been 72 years since the Miller-Urey experiment, and there is still no explanation of how you get from a racemic stew of organic chemicals to a living entity. There is no theoretical reason why they can't get there, but there has been little progress. And, the astrophysical cosmologists are just humorous. A universe 95% of which is "dark matter" and "dark energy" and only 5% of which is visible or tangible. Yeah, right. https://chandra.harvard.edu/darkuniverse/

* War is normal.

Do you read the news?

* Of course a lot of kids die during childhood.

Again, the problem was not in principle insoluble. And the biggest improvements came from a lot of relatively simple stuff. The Romans had sanitary water systems. Soap and hot water were big, but not high tech. Vaccines. The first, for smallpox, was very low tech. Of course who knows what the Wuhan Institute of Virology is cooking up for us.

Expand full comment
Walter Sobchak, Esq.'s avatar

Still speculating: "Three Books About Life on Earth and Elsewhere: Reviews of ‘Is Earth Exceptional?’, ‘Life as No One Knows It’ and ’Becoming Earth.’" By Andrew Crumey on Oct. 18, 2024

https://www.wsj.com/arts-culture/books/three-books-about-life-on-earth-and-elsewhere-277204ac

Expand full comment
Bentham's Bulldog's avatar

The Zizek form reminds me of the best way I ever started an article https://benthams.substack.com/p/sia-is-just-being-a-bayesian-about

Expand full comment
Jacob Wright's avatar

Re: weightlifting and climbing Everest, I think contemporary gaming culture has some interesting observations to offer here. Within weeks of release pretty much every game out there gets 'solved' and written about online. There are wiki pages telling you the optimal choice to make in each situation, and lots of min/maxing calculations gleaned from the source code. But the vast bulk of players (I believe) choose not to look at such material because it spoils the fun.

Expand full comment
Lucas's avatar

This and also people can speedrun a game for years after it's been out, constantly improving their times.

Expand full comment
Parkite's avatar

A much simpler, more straightforward "realized preference" sort of answer to this set of questions is video games, which I'm a bit surprised aren't discussed more above or in the comments yet. This is where there are of course millions of people (a majority these days?) who spend most of their free time operating in a world that is very similar to the "deep utopia" described in the book summary.

It is a world where, yes, you can have your magic "genie nanobots" immediately fix any problem, there are no real-world consequences to death, and you can "cheat" at every level. Some of these are multiplayer games where you are in direct competition with others, but many are single-player, where you are only entertaining yourself. You can look at detailed walkthroughs of each step you should take in a game (but most people don't?) and you can frequently use a trainer to modify every attribute, or cheat / fast-forward to the end (but most people don't?).

That feels very similar to what is described here.

It is not all relative status seeking, which seems to be the other dominant answer in the comments; see: single-player games, which are remarkably successful and can bring narrative richness & depth that far exceeds a "great classic".

Expand full comment
anomie's avatar

Yeah, and they still all end up losing their luster in a month or less. This entertainment is not sustainable for mortals, why the hell would it be sustainable for immortals?

Expand full comment
Rana Dexsin's avatar

“losing their luster in a month or less” is very much not what I observe from gamers I've seen. At something of an angle to this, there's streamers who've been full-time for years; one of them that I know of has mainly focused on Kaizo Mario romhacks, of which there are many, and has spent an entire week trying to get through a single level on multiple occasions. For myself, I've been in a place in life before where I spent easily three months with my primary focus being collecting resources and building a variety of structures in Minecraft, including elaborate machines and artistic architecture. I'm not especially proud of that in the medium run in the world we're in, but that's mainly because of what it implied for my being something of a parasite on other resource flows at the time; if we were post-scarcity and that were a non-issue, I'd happily get completely surrounded by that sort of activity for far longer.

Expand full comment
Leo Abstract's avatar

days without suspecting by Scott Alexander's standards I may be a nihilist: 0

all the sentences in this, including the quotations of course, contain meaning in a strict sense, but have a kind of [citation needed] appended and seem faintly ridiculous.

all of human life is cheating - we're constantly deceiving ourselves about what 'meaning' is, shaded variously with psychopathologies that can lean positive ('copes') or negative ('mental illness').

'this isn't real meaning it's cheating' is a thing the human mind can do, and the capacity to do it has no real bearing on the situation (excepting of course the situation of the brain the mind is running on). people aren't designed to be happy or fulfilled or to feel as though they're cheating or not.

doesn't matter, of course, we're not headed for any such future and this will all be forgotten soon enough.

Expand full comment
anomie's avatar

Oh, don't be so cynical. AI's progressing quite rapidly these days, and there's still a good chance it might eventually be capable of self-perfecting. At which point it will no longer need to tolerate all of these humans moaning about "morality" and "meaning".

Expand full comment
Vakus Drake's avatar

I think people get very confused by "meaning" because they think it should even be relevant to philosophy in the first place. Meaning is no more mysterious than loneliness, it's a social instinct intended to drive humans to do things that will earn them respect in their social circle.

Sports is the perfect example of this: A zero sum game where who wins is of no more significance than a video game. Yet because of the respect they receive athletes get the same sense of meaning as somebody who does something that actually matters like curing a disease.

If anything you get more prestige from sports, as you can probably name multiple soccer players, but not one of the scientists involved in developing the mRNA vaccine (or maybe you can, but most can't).

Expand full comment
beleester's avatar

To paraphrase a discussion I saw on Tumblr:

If any process goes on long enough, it will eventually:

1. Stop (death)

2. Repeat (wireheading)

3. Continue indefinitely according to a predictable rule (wireheading, albeit perhaps one of the more complicated versions)

4. Continue in an unpredictable pattern (not death or wireheading, but you become unrecognizable to your past self and have no logical connection to your future, which makes it hard to say such a life has any meaning.)

You can have more or less interesting patterns, but there isn't really a perfect, eternally meaningful existence to be found.

Expand full comment
Skivverus's avatar

Do Penrose tiles count as #3 or #4? Because I think Penrose-tiling-style lives/societies are probably a decent thing to target.

Expand full comment
Mo Nastri's avatar

> you become unrecognizable to your past self and have no logical connection to your future, which makes it hard to say such a life has any meaning

Eh, #4 is something Anders Sandberg seems fine with (cf https://80000hours.org/podcast/episodes/anders-sandberg-best-things-possible-in-our-universe/), and upon reflection I'm fine with it too and consider it as meaningful as I can make of it (in the sense of https://meltingasphalt.com/a-nihilists-guide-to-meaning/):

"Anders Sandberg: I think one underappreciated thing is that if we can survive for a very long time individually, we need to reorganise our minds and memories in interesting ways. There is a kind of standard argument you sometimes hear if you’re a transhumanist — like I am — that talks about life extension, where somebody cleverly points out that you would change across your lifetime. If it’s long enough, you will change into a different person. So actually you don’t get an indefinitely extended life; you just get a very long life thread. I think this is actually an interesting objection, but I’m fine with turning into a different future person. Anders Prime might have developed from Anders in an appropriate way — we all endorse every step along the way — and the fact that Anders Prime now is a very different person is fine. And then Anders Prime turns into Anders Biss and so on — a long sequence along a long thread.

But a more plausible thing that might happen if you have these resources is that you actually expand your memory. You can remember your childhood, you sometimes reorganise yourself, you become a sequence of different beings that have the right kind of memories and relationship across time. And this probably has to grow, otherwise if you’ve got a finite state space, eventually you’re going to just keep on repeating. So that is one thing: you actually would have self-design happening over very vast periods of time."

David Chapman argues that the idea of "perfect, eternal meaning" is a mistake in https://meaningness.com/eternalism although I'm not sure I buy his arguments.

Expand full comment
Vakus Drake's avatar

I read a bunch of Chapman's online articles years back, and only later did I realize why I always felt he was totally off base:

He's buying into the usual philosophical framing of meaning and not realizing it's completely supplanted by a better psychological view.

Meaning is no more mysterious than loneliness, it's a social instinct intended to drive humans to do things that will earn them respect in their social circle.

Consider who is more broadly respected in society, athletes or scientists? You might say scientists, but most people can probably name someone who plays football and not any of the scientists who developed the mRNA vaccine.

Expand full comment
Mo Nastri's avatar

I admittedly initially dismissed your comment, but I keep going back to it for some reason. I think you might be right, and I might've initially dismissed it because I'm not doing as well as I'd like on the respect in my social circles front and am in a bit of denial about it. Anyway thanks for the comment.

Expand full comment
Vakus Drake's avatar

I wish you the best of luck with your social life.

I'm not exactly doing so great myself, since being aware of the problem is easier than solving it. Though at least my view helps one to rule out courses of action that won't improve things or will actively make things worse:

For instance if I held my views much earlier in life I'd have treated moving as a vastly more significant decision, and would probably be in a much better situation socially RN.

There seems to be a widespread maladaptive perception that you should focus on your education and career and stuff, and that somehow if you go with the flow a social life will just kind of happen to you with minimal effort. Which is sometimes true, but a terrible gamble to make as an introvert.

Honestly I can't really say developing what social life I have has been either easier or more pleasant than getting a chemistry degree.

Expand full comment
Vakus Drake's avatar

You can't logically have an issue with #4 unless you also consider children growing up to be some great tragedy.

After all why let your level of maturity asymptote when you can use technology to continue personal growth forever.

Expand full comment
UnabashedWatershed's avatar

So does anyone have a clue what's going on with Nospmit?

The relevant passage (near the end of the book) has the characters walking through a graveyard, where they see a dog peeing on Nospmit's grave, which is surrounded by flowering plants and blackberry brambles.

Christopher Timpson (if that is the Timpson being referenced) is a philosopher of physics, focused on quantum mechanics and quantum information theory.

The very last line of the book is: "[Ed. note: And they all lived happily ever after. Including [some characters], and... well, yes, even the Dean, the Registrar, and Nospmit.]"

Here's the tombstone passage in full:

Tessius: Hey, look at that dog! He’s taking a leak on a tombstone.

Dog owner: Fido! Bad dog! Bad dog!

Tessius: One shudders to think what unwitting offenses we may be committing on an average day.

Dog owner: I’m so sorry. Every time we walk this graveyard, he runs off and does that. Always the same stone! “Nospmit”. Unusual name. I really hope that’s not one of your ancestors?

Firafix: No, we’re just here for a stroll.

Tessius: The weather bureau has forecast rain, so Fido’s misdeed will hopefully soon be absolved.

Dog owner [looking skywards]: Yes it does look about to come down.

Tessius: Do you notice something about Nospmit’s grave—is it not uncommonly thriving?

Dog owner: There’s a lot of greenery there.

Tessius: More so than around the other graves?

Dog owner: I believe so. There are even blackberries growing there… I actually picked a couple, but I’m not sure I should eat them.

Tessius: Whatever ill might have gone into the soil, it appears that something good has come out of it. We must believe in the possibility of redemptive processes.

Expand full comment
The Ancient Geek's avatar

Timpson is a longstanding family business of shoe repairers and key cutters known for their charitable attitudes, particularly hiring ex-convicts. Hence the redemption.

Expand full comment
The Ancient Geek's avatar

>Philip Morris Auditorium” (later renamed the “Exxon Auditorium” (later renamed the “Enron Auditorium

They're all companies with bad ethical reputations.

Expand full comment
Melvin's avatar

I can't figure out whether this is a collegial in-joke or an Oxonian diss track.

Expand full comment
Taleuntum's avatar

Maybe I'm too unsophisticated for these conversation about meaning, but I still don't get it.

If I want something, I try to do it. For example: Someone I have some positive feelings towards does not have everything they want? I want to make it so that they do. People starving in the world. -> I want that to stop. I don't have maximal status? -> I strive to increase it., etc..

I don't ask about whether these wants or the resulting actions attain some nebulous quality of being 'meaningful'. I just have these wants, so I work to fulfill them.

Yet, I feel a lot of people who talk about 'meaning' are different. They say things like "sure, you can always work to get more money/status, but why is that meaningful?", and I don't understand what they mean. What are they talking about in more literal terms?

In a Utopia (a much more advanced one than what's usually called utopia) where I don't have any more wants, I won't do anything. This seems very simple and obvious to me, almost tautological, and again, I don't understand others' perspective.

Could it be that they look at the current world outside and see the enormous number of things they want to change, and find it hard to even imagine a state of the world where everything is correct?

And that when they consider what it would be like to do nothing in a utopia, they use their intuition from our current world? Maybe they imagine someone depressed, as would be true in our current world: the people who want to do nothing are mentally unhealthy, because it is impossible to be a healthy human who looks at our current world and sees nothing worth changing. But imo it would be a mistake to think that this is true for every possible world.

Expand full comment
Vakus Drake's avatar

I think Meaning is similar to other social instincts like loneliness. It's supposed to drive humans to do things that will earn them respect in their social circle.

I think famous athletes are a good example of how you can feel like you're doing extremely meaningful things with your life, despite just engaging in a zero sum competition.

Expand full comment
spinantro's avatar

"And that when they consider what it would be like to do nothing in an utopia, they use their intuition from our current world? Maybe they imagine someone depressed as that would be true in our current world: the people who want to do nothing are mentally unhealthy, because it is impossible to be a healthy human who looks at our current world and sees nothing worth changing, but imo it would be a mistake to think that this is true for every possible world."

I think this is quite a good point. Nevertheless, don't you think it would just be plain *boring* to do nothing in utopia all day? I guess you could respond: "then I'll work towards not being bored", and then you'd be basically on the same page with the others in this conversation.

Expand full comment
shem's avatar

There is definitely an open niche for deep utopian stories, and I hope more writers try their hand at writing them, despite the high difficulty of having almost no possible conflict or tension.

Recommendations for existing fiction would fit (mild spoilers for some), all high quality:

- Alexander Wales's Worth The Candle and to a lesser degree This Used To Be About Dungeons

- Jon Bois's 17776

- remy(?)'s The End of Creative Scarcity

Expand full comment
Sophia Epistemia's avatar

- Friendship Is Optimal

- The Metamorphosis of Prime Intellect

Expand full comment
Jeffrey Soreff's avatar

>- The Metamorphosis of Prime Intellect

Yes, but I would have preferred to just keep the state at the end of chapter 4...

Expand full comment
Makin's avatar

I came here just to recommend Worth the Candle (though it's really just its epilogues that feature what Scott's looking for) and 17776, so add my name to that.

Expand full comment
Taleuntum's avatar

I third WTC and would like to also recommend Timothy Underwood's Accord: https://timunderwoodscifi.wordpress.com/2020/07/18/introduction-and-crush-habs/ (It's pretty short, but thought-provoking!)

Expand full comment
yettofindaname's avatar

I also came to recommend WtC, and for those who already have read WtC, I recommend [Cowboy Grak 5: Yet Another Fistful of Obols](https://archiveofourown.org/works/51574075) - that centers around a contest inside a deep utopia.

Expand full comment
E1's avatar

Gaizemaize also wrote a lovely alternate ending to The Good Place that's directly about deep utopia. They unfortunately took it down almost immediately after posting it, but I hope they see this comment somehow and decide to put it back up.

Expand full comment
Jake R's avatar

I got about 5 sentences into this review and then ctrl+F'd "worth the candle". Specifically, the epilogues are the best attempt I've yet seen to grapple with what it would really mean for everyone to live happily ever after.

Expand full comment
C_B's avatar

I also came here to recommend Worth the Candle, and especially its epilogues.

Expand full comment
Paul Goodman's avatar

17776 certainly comes closer to depicting deep utopia than any other fiction I've come across so far.

Expand full comment
Yug Gnirob's avatar

I always think of the Kino's Journey episode: she tells a story where a supercomputer runs a utopian city, so all the citizens spend all their time... double-checking the supercomputer's work.

Expand full comment
Brenton Baker's avatar

Dancing with Eternity by John Patrick Lowrie (the guy who voiced The Sniper and married the woman who voiced GlaDOS) also covers a lot of these topics.

Expand full comment
PuentesAmarillos's avatar

I just finished Worth the Candle today; really liked the epilogues and was thinking a lot about them while reading this review, and that sure made the fourth-wall-breaking in the last paragraph of Scott's piece hit hard. TINAC BNEIAC?

(Speaking of which, it seems like Scott had the opportunity to write some deep utopian fiction at the end of Unsong, but he wrapped things up right before getting to the point of portraying the utopia. A perfectly reasonable spot to conclude the book, of course, but I would've been interested, in turn, "to see his world in action".)

Expand full comment
Mo Nastri's avatar

Seconding 17776. Very striking.

Expand full comment
Vakus Drake's avatar

Do any of those stories lack artificial social scarcity? I've read through WTC but not its sequel.

I ask because I've never really seen a utopian story that actually fully thinks through the ethics of creating new minds, instead of avoiding the issue. Though some Friendship is Optimal stories may qualify, for people who are furries.

Expand full comment
spinantro's avatar

Greg Egan's Diaspora stuff comes close.

Expand full comment
Steeven's avatar

>We're gonna mean so much, you might even get tired of meaning. And you'll say 'please, please, it's too much meaning. We can't take it anymore, Professor Bostrom, it's too much!'

Hilarious

Expand full comment
Steeven's avatar

>He suggests that since we’re only using Art to satisfy ourselves that we’re not cheating - rather than demanding that the Art itself be interesting - we can change our interestingness criteria a little whenever we run out of Art, helping us distinguish ever finer gradations.

This reminds me of Permutation City. A guy becomes extremely happy to be a woodworker making chairs for thousands of years, then a 19th century naturalist, and so on, very loosely governed by his exoself. His reductio is that he could literally contemplate the number 1 and be happy to do so and never ever run out of meaning. I think this is a pretty good idea, since meaning only means what you want it to mean.

Expand full comment
Doctor Mist's avatar

Bostrom discusses that guy in the book.

Expand full comment
Hochreiter's avatar

>Imagine a world where religion has been emptied of its faith and mystery, and we know exactly how each act of worship figures into the divine economy. Going to church would be no more meaningful than doing our taxes - another regular ritual we perform to appease a higher power who will punish us if we don’t.

Ironic thing to write reviewing a book about a secularized reformulation of the doctrine of epektasis

Expand full comment
walruss's avatar

...not one word about connection to others, being part of a community, or social projects? Literally none? Not only is there literature on this being the main way most people find meaning and happiness, but it's the one thing that can't be solved by utopia.

Expand full comment
AB's avatar

What is there to address? You would be able to form the same and more communities and relationships with the hundreds of millions (if not billions) of like-minded humans for whom the meaning of life is Connecting With Real People in Real Reality. That space of relationships is a strict superset of whatever you’re doing now and whatever you can do in any pre-utopia, because the only difference is that there’s no longer a time/distance/money constraint stopping you from meeting the other 99,999,900 people and devoting time/energy/attention to them.

Expand full comment
walruss's avatar

Agreed with your explanation, but disagree that it's not worth addressing. The premise is "what are the ways to find meaning in an absurdly utopian future" and that didn't even make the list, even though it's how all the happiest people find meaning *right now.*

We find ways to discuss how we could mountain climb in a utopian future, something only a small subset of people have any interest in, and also something we can do right now. "Enjoying art" in this universe would also be a strict superset of whatever you're doing now, yet it gets talked about.

And for the record there is a whole book's worth of interesting stuff to address about social relationships in this unlikely future. We could *perfect* social relationships - we could completely understand our loved ones' desires by interfacing with some super-intelligent machine that conveys those desires to us. Does that make relationships more or less meaningful? With mind modification we could form infinite close relationships (bounded only by the number of actual people if you insist on having relationships only with organic folks) - is that better than the number we have now with our biological limitations? Is it "cheating" in the sense the author discusses to create artificial minds to have relationships with, minds that are perfectly compatible with our needs?

IMO this isn't just one of many questions in such a future. It's the most important one - it should be central to the premise. It's honestly terrifying to me that there's a whole culture where it's not.

Expand full comment
AB's avatar

Ah, I see what you're saying now. I agree it's really odd that it's not discussed as a first-class candidate like sports and art appreciation and such, now that you mention it.

I suppose a charitable interpretation would be that it got lumped into simple experiential joys like eating and walks in a park, or the assumption was that having no meaningful activity outside of interacting with others would in turn hollow out those interactions…though I’d assume the opposite, that socialization and play lends meaning to what would otherwise be meaningless.

Expand full comment
Vakus Drake's avatar

I think discussion of creating artificial minds ought to be more central. Since without them you aren't ever going to be post scarcity with regards to status.

Expand full comment
Brandon Fishback's avatar

Philosophers are bizarre in that they don't think relationships are worth talking about.

Expand full comment
Jeffrey Soreff's avatar

That is a very good point. I, personally, lean towards being a loner, so I tend to deemphasize human relationships, but, even for me, they are a significant part of life. And there are intrinsic frictions in the differences in preferences in even the smoothest of relationships, even if all other resource and technology constraints were solved.

I disagree with AB's comment that time/distance/money problems would all be solved; in particular _time_ can't be solved. If it takes X% of one's time to maintain a relationship of a given quality, then one can only maintain 100/X of them at any point. Short of changing human cognitive architecture, Dunbar's number doesn't get increased.

A somewhat open question is to what extent synthetic "people" can substitute. Are "Stepford wives" (and husbands) good _enough_ ? For people with a dictatorial bent, are synthetic "underlings" good _enough_ ?

Expand full comment
Tohron's avatar

I was totally thinking this as well. One human is already complex, but multiple interacting humans become exponentially more complex (as long as herd mentality doesn't take over). It feels like the whole "post-scarcity" premise leads to thinking in terms of needs that can be physically provided for, and since social life isn't in that category, it doesn't get considered except for things like social debt that can be quantified.

So will the far future involve superhumans living in large polycules, the management of which taxes even their massive, AI-assisted capabilities? I don't know, but that does sound more interesting than what Bostrom proposes.

Expand full comment
Doctor Mist's avatar

Bostrom touches on similar topics. For instance, consider having and raising children, surely a major source of meaning for many people today. But in Deep Utopia, by hypothesis, AI would be at least as capable of good parenting as you are — never tiring, never in a bad mood, able to algorithmically provide the empirically correct kind and amount of discipline and encouragement and freedom to produce happy, self-reliant children who grow into happy self-reliant adults. You could do it yourself, of course, but would you find it *meaningful*, knowing that the results will be inferior? Would it not be similar to raising a child today on corn flakes and TV?

(And no fighting the hypothesis: the resulting children will *not* be inhuman or maladjusted; the mistakes that would lead there will have been made and fixed long before.)

I don’t remember him explicitly considering community-building, but I think the analysis might be similar. Building community is rewarding, and may always be so. But today, at least part of the reason for it is instrumental — a community is there for you when you fall on hard times, and you find meaning in helping those around you. But if by hypothesis there are no hard times, if no one *needs* the community, does it still provide meaning?

Mind you, Bostrom is not saying that raising a family or building a community *won't* be a source of meaning. He's saying it *might* not, and is exploring sources of meaning that would survive even if the obvious ones don't.

Expand full comment
walruss's avatar

It's good to know that this is discussed in the book even if it didn't make it into the review. Thanks.

Expand full comment
hwold's avatar

A project towards what? A community around what? This only moves meaning around (and possibly amplifies it), but does not create any.

Expand full comment
walruss's avatar

I strongly disagree. Social bonds can be ends, not means. In several philosophical systems, you're ethically required to treat them as ends and not as means.

Expand full comment
hwold's avatar

The love between a parent and a child is the most sacred one in most cultures. It has also been implanted here for specific reasons by natural selection. Once transhumanism/deep utopia blows past those specific reasons, do you expect the bond itself to survive? This one is plausibly "maybe". But most of the "social bonds as ends" you speak of arise for specific reasons too, and are way, way weaker. I do not expect them to survive without a good external reason (or meaning).

Expand full comment
Dasloops's avatar

I too read this book for some reason. I like Bostrom's other papers and ideas, so I was looking forward to this and even pre-ordered it.

I agree he didn't go far enough, but I also think making any kind of broad assumptions about this "society" is really difficult, and would result in some kind of absurd PKD sci-fi slop. In Deep Utopia, we're basically all gods, a bazillion Zeus-like beings in a virtual Mt. Olympus with no conflict. The god-like being here is so far evolved from modern-day humans--no physical or mental suffering, a complete understanding and control of emotions and desires, no conflict with other beings, omnipotent via nanobots--and so what can we really say about a society of these beings?

Will they like baseball?--no? Ok well, baseball played by super-robots? Still no? Maybe a form of multi-dimensional simulated baseball that feels like you're on ecstasy?? Now we're in PKD territory.

I think the point Bostrom never answers sufficiently in his thinking on this is the question of what aspects of humanity we will want to preserve? Why remain human at all if we can transcend into experiential gods? The section on parenting was particularly depressing to me, which basically boils down to: parents in Deep Utopia will be objectively worse at raising their children compared to artificial caregivers, and those parents can have artificial offspring that are more aligned with them and able to form deeper connections. If parenting is redundant, and even harmful, then there goes the most fundamental aspect of our human biological drive. The idea of replacing this lack of human meaning with drug-like bliss or artificially imposed goals just seems selfish in a way.

And that's my main problem with Deep Utopia. That it just feels like a selfish existence.

While reading the book, I thought of the end of N.G. Evangelion (SPOILER ALERT) where humans have all merged into a giant ocean of orange goo, free of loneliness at last, all one... And basically this is where Deep Utopia leads, except for beings like Shinji, who still want to make a go of their individual consciousness and all the trappings and sufferings that go with it. (See also the ending of Spike Jonze's Her)

In a way, the most meaningful thing I got out of this book is the idea that we're lucky to be living in the now, in a world where we as humans still have the ability to lead naturally meaningful lives. If we ever get to Deep Utopia, the 'we' there will be unrecognizable.

Expand full comment
Vakus Drake's avatar

An obvious addition to his point about having kids is that if you aren't relying on the genetic luck of the draw, you can just design your children so they will be extremely resilient, ensuring they end up exactly as well off even if raised suboptimally.

I'd also argue meaning is just a social instinct and if you give people (or match people up with) perfect communities, then they could have levels of meaning few people from our world ever get the chance to experience.

Expand full comment
Anatoly Karlin's avatar

So in thinking about "deep utopia" I think there are two main things that remain very hard to solve:

1. Alluded to here, the observation that this utopia, while blissful, might well be meaningless in the absence of limits or constraints.

This suggests that deep utopians will play a lot of games where their abilities and knowledge are circumscribed. Full immersion, of course.

2. Many of these games will have permadeath mode. However, there's no actual need to make permadeath real in the upper reality (most games after all do not involve you getting killed, the gladiator games in the Colosseum are the exceptions, not the rule); the stakes come from losing a lot of accumulated progress. Say, one pre-posthuman lifetime's worth.

3. The first transhumanist Fyodorov defined the "Common Task" as attaining immortal life (which is "easy" and might be done within a few decades, or a few years if AGI happens soon and goes well), and resurrecting the dead (which is vastly harder if it is even possible in principle). However, possibly if you run enough ancestor-simulations, and streams of consciousness can meld into and out of each other across time, you could make a whole lot of dead people's lives richer or at least much more interesting.

So perhaps the last game until the destruction of Earth or heat death of the universe will be a simulated Wheel of Time-like universe characterized by endless cycles of death and rebirth within worlds that resemble the human ancestral past and imagined futures ("space opera", "steampunk", etc.), but with enhanced features (magic! dragons!) and a plot engine that generates the maximum possible density of interesting events while keeping the simulation stable.

Expand full comment
Yug Gnirob's avatar

>that generates the maximum possible density of interesting events

Wait, I thought we were talking about Wheel of Time?

Expand full comment
Schneeaffe's avatar

>This suggests that deep utopians will play a lot of games where their abilities and knowledge are circumscribed. Full immersion, of course.

It's always striking in these discussions how people will describe scenarios where you don't see your family anymore without even blinking.

Expand full comment
warty dog's avatar

I skimmed the book. re stories from utopia, I remember he warned to not evaluate the future by how good it is for us now (by being a good story), but by how good it will be for the inhabitants. so my vibe read was "yes it would be boring af if I described it, but the characters would be having fun"

re scarce resources, I recently twote of a non-nanobottable resource, the ground of the temple mount

Expand full comment
Grantford's avatar

I'm surprised that relationships with other people are not really discussed as a remaining source of meaning in the Deep Utopian world. People's relationships with their friends, family, and romantic partners tend to be principal sources of meaning in their lives today. Would this meaningfulness not persist in the Deep Utopian environment?

Expand full comment
walruss's avatar

Right? Sometimes I'll read articles by rabid anti-tech people who are like "west coast values are a cult that does not recognize the difference between the real and important, and the superficial illusion." And I think "nah, those are people, same as us, trying their best to make things better." And then I read something like this that discusses The Meaning Of Life as though it's something you do by yourself.

Expand full comment
Steeven's avatar

It’s still bounded. Why not have a super relationship with an AI that is infinitely more exciting? To me that seems like the whole point of “why not have the AI wirehead you full of meaning via relationships”

Expand full comment
Yug Gnirob's avatar

Because that's the relationship a housecat has with its owner.

Expand full comment
Victualis's avatar

I presume you have not ever met the author.

Expand full comment
Steeven's avatar

Metamorphosis of Prime Intellect is a fiction story set in deep utopia. You literally cannot die, AI governs everything including your level of pleasure, and some people decide to spend that time torturing themselves. I think the point of that book is that we lose a bit of our humanity in such a utopia but I’m not sure I’d see that as a bad thing

Expand full comment
Deiseach's avatar

"Sven, a Catholic, is in a state of grace. He then has sex with sheep S.

a. (8 points) What is Sven’s atonement coefficient following the act if the sheep was not willing?

b. (12 points) What if the sheep, while not technically being willing, could not be said to mind either?"

Theological Engineering has re-invented the penitential 😁

https://en.wikipedia.org/wiki/Penitential

https://en.wikipedia.org/wiki/Handbook_for_a_Confessor

Regarding the question re: transubstantiation and joules of heat, somebody has considered the physics of it:

https://nwcommons.nwciowa.edu/cgi/viewcontent.cgi?article=1189&context=celebrationofresearch

"Catholics believe Transubstantiation is the process of converting bread and wine into Jesus’ body

and blood. There are no physical signs of change, but the substance changes. From a scientific

perspective, there is no application of heat or light and no physical sign of change, therefore there

is no change. Physics as we understand at an undergraduate level assumes Earth is a closed

system. This assumption no longer applies if there can be substance change without physical

modification from an energy source. I refuse to believe that science and religion are unrelated

entities, yet I accept that this unexplainable phenomenon occurs every time Mass is celebrated.

Miracles are consequential to faith due to their lack of worldly explanation, but science makes

decisions based on reproducible, tangible data, therefore, the dissonance is evident."

Someone else wrote a thesis:

https://www.researchgate.net/publication/369378219_Transubstantiation_and_Physics_Validity_in_Science_Vs_Validity_in_Religion

"The Catechism of the Catholic Church provides substantial insight on Transubstantiation. A fundamental part of the Liturgy of the Eucharist is called the epiclesis. As explained in the Catechism of the Catholic Church, In the epiclesis, the Church asks the Father to send his Holy Spirit (or the power of his blessing) on the bread and wine, so that by his power they may become the body and blood of Jesus Christ and so that those who take part in the Eucharist may be one body and one spirit.” (Catechism, 341). From a physics perspective, this power is part of the open system and therefore cannot be studied the same way closed system physics is understood. The conflict between what we can study and what cannot be quantified is the root of the dissonance between science and religion. "

Expand full comment
Leah Libresco Sargeant's avatar

As a casual (not competitive) weightlifter, I strongly dissent from this:

"Your success in weight-lifting seems like a pretty straightforward combination of your biology and your training. Weight-lifting retains its excitement because we don’t fully understand either. There’s still a chance that any random guy could turn out to have a hidden weight-lifting talent. Or that you could discover the perfect regimen that lets you make gains beyond what the rest of the world thinks possible."

The fun/satisfaction of weightlifting isn't competitive for me; it's that progress happens in a very discernible, quantifiable way on the equipment, and then I am also overtaken by pleasant surprise at how my growing strength allows me to better play with my kids / find more affordances in the physical world / (next year in Jerusalem) have a less achy back.

I don't need there to be any puzzle/lottery to it, but it doesn't seem compatible with a Deep Utopia where I reprogram to have the physical strengths I desire. (I am overall v skeptical of Deep Utopia, and Catholic).

Expand full comment
Feral Finster's avatar

I guess it depends on what "no unmet needs" means, but without unlimited resources, we'd turn on each other, right quick.

Humans have unlimited wants, and there would be neither enough resources nor enough power to satisfy everyone's demands. Not to mention, there would be no problems to unite humans around solving.

To use a favorite analogy - you can make a perfectly bankable "Batman" movie without Robin. But take away The Joker, and Batman goes from The Dark Knight to a LARPing weirdo. There's no movie.

Expand full comment
Sophia Epistemia's avatar

the vr sims can provide endless waves of adversaries for the monkey minds who need that kind of conflict to satisfice them.

Expand full comment
MicaiahC's avatar

To add indirect evidence to this: battle royale games try to increase retention by having the first couple of games be vs only bots that perform very badly, so the players win. Of course they gradually start adding more and more real players later, so this could only be a holding pattern, but even people who know this don't seem to react to the fact on a bone deep level: they still cheer when they win the first couple of games for the most part.

(Edit: typo fix)

Expand full comment
Leah Libresco Sargeant's avatar

This presumes a very specific kind of religion-machine, rather than religion being about growing in love with God, not only obeying out of +EV

"What about religion, Bostrom’s other holdout? What if, after we all have IQ one billion, we can just figure out which religion is true? If it’s atheism, the whole plan is a no-go. But if it’s some specific religion, that’s almost as bad. Imagine a world where religion has been emptied of its faith and mystery, and we know exactly how each act of faith figures into the divine economy. Going to church would be no more meaningful than doing our taxes - another regular ritual we perform to appease a higher power who will punish us if we don’t."

I am very sure my husband exists, and I have a pretty reasonable model of what makes him happy. That doesn't devalue our relationship to feeding punch cards into a HusbandBot, because he's a person, and our relationship doesn't depend on obscurity.

Expand full comment
jim's avatar
Oct 17 (edited)

There are significant branches of Hinduism that make this VR-sim argument. The idea is somehow God gets bored, creates the ability to forget he is god and breaks himself up into a million different selves, and goes through the drama of bringing it all back together again. To stay entertained.

"Or is that the wrong way to think about it? Is it less of an Amish farming village than a virtual reality sim?" This is what I was expecting Bostrom's thesis could be, and a nice twist for the book. That, of course our struggle figuring out deep utopia is part of the VR sim we settled for in our previous deep utopia to support some simulation theory kinda argument. The woodland animals perhaps aren't in the sim, or something like this.

For me, the hypothesis comes from imagining our current mental framework in a world with all needs met, but forgetting we would still have fears, worries, and so on. If we actually met those needs, it would be from some kind of mystical insight letting go of an ego, or self, and then we would be free of any issues regarding boredom or lack of meaning. Why? Because those are all born from holding on to this 'self'.

That's my same issue with the Hinduism argument. It's hypothesizing an egocentric 'God' figure, which just seems simply anthropomorphized and unpalatable. I don't exactly want to reach a state that is allegedly supreme but has the capacity for boredom. Seems supremely unsupreme to me!

Expand full comment
skaladom's avatar

That's an interesting one; I'd heard similar things from Hinduism but worded in terms of exuberant dynamic energy, not boredom.

But yeah, if you have a philosophy that reality itself is God, yet it doesn't feel like that when Joe Random wakes up in the morning, the obvious question is "what went wrong", but if you take the premise seriously enough then the possibility of "wrong" is out of the question, so you have to end up with a theory that Reality Itself willingly chooses to undergo our earthly lives with all their suffering.

In other words, replacing creationism with emanationism doesn't offer a quick way out of the problem of evil.

Expand full comment
jim's avatar

I guess we could say it could be for different reasons, but if boredom is one I definitely don't want to be a god who gets bored! I would aim for something a bit different ;)

Thanks for your thoughts

Expand full comment
Monty Evans's avatar

> My biggest problem with all of this is that this book was crying out for fictional stories set in Deep Utopia.

*Ah hem*: you are in desperate need of 17776 (https://www.sbnation.com/a/17776-football)

Expand full comment
Victualis's avatar

Thank you for reminding me of this.

Expand full comment
David Khoo's avatar

My feeling based on this review (without having read the book itself) is that it is a stealth reductio ad absurdum of hedonistic utilitarianism. If you define good as maximizing the sensation of pleasure, or the sensation of meaningfulness, or any other sensation (and minimizing the opposite sensation), that works fine currently when we are very far from the conceivable maximum. But the deep utopia near to the conceivable maximum seems deeply problematic. Either it can't exist, or the versions that can exist don't seem utopian. It's a refutation via failed defense.

Expand full comment
Mr. Doolittle's avatar

As we would get closer to this version of utopia being a possibility, I wonder if we'll find more works of fiction criticizing it (as with Animal Farm or 1984) and pointing out the subtle flaws in logic that lead us to think such a society would be desirable in the first place.

It seems quite possible that we would change course long before achieving it, as the difficulties/undesirability/impossibility became more obvious to our descendants.

Expand full comment
David Gross's avatar

FWIW, Michael Moorcock wrote an entertaining series of books known as The Dancers at the End of Time that explored a utopia in which people had become absolute masters of everything in much the way described in this review, and the awkward ways the people in this decadent utopia try to find purpose in their lives.

Expand full comment
David Gross's avatar

When I first started attending Burning Man back in the mid-90s (yep, I'm that old) I had this feeling of "wait, I somehow recognize this strange planet" and then I realized I was getting flashbacks to Moorcock's end of time paperbacks that I'd read back in high school.

Expand full comment
EngineOfCreation's avatar

Relevant XKCD I mean SMBC:

https://www.smbc-comics.com/comic/scarcity

So yeah, I don't think the innate scarcity of status is negligible here.

Expand full comment
Sophia Epistemia's avatar

it absolutely is. vr sim where you're god-emperor of the future lightcone? have it. where you're a miserably oppressed vervet monkey under a dominant who beats you up daily as an example to the others to display abject submission? your kink is not my kink but there you go.

Expand full comment
EngineOfCreation's avatar

Yes, wireheading everybody is the only possible solution to the status game. But that only works if literally everybody gets wireheaded, no exceptions. If only two people stay out, one will want the planet-sized villa, and the other, if only to one-up the first, will want the solar-system-sized villa. Then what? And if you wirehead everybody without their consent, you get the Matrix or worse.

Expand full comment
Sophia Epistemia's avatar

>if only two people stay out then

then maybe neither, or not both, of them cares to do dick-measuring contests against the only other one left.

>you get the matrix

if you do it in any Correct way you get MoPI or FiO. like, if you're not writing sour grapes cope, and think it through even a little. eg, first off: each status-maximizer gets their own shard isolated from all but their willing worshippers, second: uh, problem solved forever by previous point

Expand full comment
Stygian Nutclap's avatar

Since I'm seeing the Culture mentioned a few times, a question. I'm currently reading the first. So far this might as well be called "Horza's-very-bad-not-good-day", like the Hobbit in space but several shades darker. Is the whole series like this? It's adventure/SF pulp, which is fine, but I'm not usually in the mood for that.

Expand full comment
Spikejester's avatar

All the books are different, but for the most part yes, they're SF pulp adventures. Each book has a big time skip & new set of characters (with occasional cameos). "Consider Phlebas" is the most pulpy and the setting was not quite as defined / tech levels not quite so high as later books. "Inversions" is a fantasy pulp adventure instead (and my least favourite of the series). "Look to Windward" is lighter on the pulp adventure and heavier on the 'day in the life of a post-scarcity utopia citizen'.

Expand full comment
NLeseul's avatar

On the second footnote—but what if the wiser and vaster version of yourself set some goal in the simulation other than behaving as morally as possible? In the "real" world, we do have a few virtual worlds (the Ultima series) where behaving morally is the goal of the simulation. Not many people play those nowadays. Much more popular are games like the GTA series, where doing whatever you feel like with no real consequences is the goal (or, at least, the goal that many players choose to set for themselves). And plenty of games allow you to choose either a "good" path or an "evil" path, and either choice gives you access to equally engaging and rewarding content.

So wouldn't we need to consider the possibility that our future wiser and vaster selves might sometimes still create simulations where playing as a slaveowner in 19th century America is the intended path, just for the novelty value of that experience?

That's kind of where people get worried about the moral implications of the simulationist argument, I think. If nothing we experience is fundamentally real, then that leaves us open to rationalizing doing bad things in the simulation based on what we conclude the simulators chose as the goals of the simulation.

Maybe we can reason that wiser and vaster people just wouldn't be interested in playing GTA for funsies? That would pretty much require determining that there are universal moral principles that apply regardless of what universe or what simulation we happen to inhabit, which basically means that the question of whether we're in a simulation or not is completely irrelevant to moral reasoning anyway.

Expand full comment
MaxEd's avatar

"Zones of Utopia" remind me strongly of "sets of laws" in Ada Palmer's excellent "Terra Ignota" series - which I recommend very much as another utopic novel (although utopia in it is flawed, and there is a great conflict to make it better; It's very unique fictional conflict, because all sides just want the best for humanity; I wanted to write a review of this series for Book Contest, but never got "a round tuit").

In short, the society is divided into several groups ("Hives"), loosely united by their most appreciated virtues. At 18 (if I remember correctly), humans declare which Hive they want to join (they can later change their affiliation, this is not a big deal), and they will have to abide by that group's laws. OR they can choose to be Hiveless, and live by one of "general" sets of laws: White Laws, Grey Laws or Black Laws. White Laws are most restrictive and "safe", and Black Laws allow duels to death (between those who obey them).

As for the book's theme, since I've read Dan Simmons' "Hyperion" and learned about Pierre Teilhard de Chardin and his idea of the evolution of humans toward God, all I can think about Deep Utopia is that, as far as we know, gods mostly create worlds and watch over them, so that is what I imagine the Deep Utopia population would do with their infinite resources and powers.

Expand full comment
David Cramer's avatar

Came here to say this

Expand full comment
Concavenator's avatar

>at 18

Whenever they prove on a standardized test that they're competent to make their own choices, regardless of age.

Expand full comment
Bob Frank's avatar

Honestly, I'm a bit disappointed to see this entire thing written up with not a single mention of the research of John Calhoun in attempting to create a "Deep Utopia" for lab rats.

Spoiler: It did not end well.

Expand full comment
AB's avatar

You mean his experiments with deliberate, severe overcrowding in a crudely-architected panopticon, whose only claim to “utopia” is “there are no predators and the food bowls are always refilled, not that the rats have any way of reflecting on this”?

I would be disappointed if it *were* mentioned as if it contained some insight for a discussion of a serious utopia.

Expand full comment
Bob Frank's avatar

No, that's not what I mean at all. As I understand it, particularly in the later experiments, conditions were never particularly overcrowded, much less "deliberate and severe"ly so, and social breakdown always began long before the habitats were anywhere near the population limits they could reasonably support. It largely got turned into a discussion on "overpopulation" by third parties who didn't particularly understand Calhoun's research because this was a trendy topic at the time, with now-debunked nonsense like _The Population Bomb_ being on a lot of minds back then. Calhoun's actual conclusions from the experiments were in fact the opposite: this leads to conditions where populations *collapse* because they lose the social skills necessary to successfully reproduce and raise young.

Disturbingly prescient, to say the least...

Expand full comment
AB's avatar

The famous paper specifically tries to test population growth in a closed space and pin overpopulation as the cause of the pathological behavior / mental illness https://pmc.ncbi.nlm.nih.gov/articles/PMC1644264/pdf/procrsmed00338-0007.pdf but other writings by Calhoun suggest this was actually caused by accidentally reinforcement-training the rats with random associations between slight crowding and feeding, causing increasingly stronger ties between increasingly severe crowding and feeding until things broke down horribly https://johnbcalhoun.com/wp-content/uploads/2019/01/1962-a-behavoral-sink-secure.pdf

AFAIK the whole notion of a “population it could reasonably support” isn’t really backed by any evidence, and is rather rooted in Calhoun severely overestimating the practicality of his designs for healthily supporting large rat populations (as demonstrated in the experiments).

Expand full comment
Bob Frank's avatar

> AFAIK the whole notion of a “population it could reasonably support” isn’t really backed by any evidence, and is rather rooted in Calhoun severely overestimating the practicality of his designs for healthily supporting large rat populations (as demonstrated in the experiments).

Doesn't that kind of feel like question-begging, though? "This was an issue of overpopulation, therefore the way that the rat society broke down long before becoming overpopulated proves that the estimates of what defines overpopulation were wrong."

Expand full comment
Jeffrey Soreff's avatar

I'm somewhat skeptical of the interest in Calhoun's experiment for a related but slightly different reason. At the time, it was touted as a warning about what could happen in the future. But the rats were never packed shoulder-to-shoulder. It seems more like a model of humans in cities, _which we have already had for thousands of years_. Now, historically, cities _have_ tended to be net sinks of population (a good chunk of that was from disease, and part of _that_ was solved when we chlorinated water supplies), but Calhoun's experiment doesn't look like it would predict a _new_ problem for humans.

Expand full comment
Bob Frank's avatar

The new problem was that the rats weren't dying of disease or overcrowding; they were dying of utopia. Something about the sterilized, problem-free environment in which they found themselves caused their social order to reshape itself in bizarre and radical ways, which included lower-status "outcasts" banding together into violent gangs, higher-status rats holing up in isolation for their own security, and both sexes gradually losing both interest in, and skill at, breeding and child-rearing.

Just look around the world we're living in today. Far from not predicting new problems, Calhoun's experiments have proven *disturbingly* prescient!

Expand full comment
Kevin Barry's avatar

My thought on the boredom hypothesis of immortality: it's easily solved once you have the ability to do controlled and safe memory wipes.

Expand full comment
Vittu Perkele's avatar

Not even that would be necessary, you could presumably just modify the mind to be incapable of experiencing boredom no matter the sensory input.

Expand full comment
Mo Nastri's avatar

I'm not even sure that's needed. Deep appreciation of and satisfaction in habit / ritual works fine, as it does for me, or the British tea-drinker chap in Scott's example.

Tangentially, this reminds me of Wahram, one of the protagonists in the Kim Stanley Robinson sci-fi novel 2312, and his "pseudoiteratives".

Expand full comment
Vakus Drake's avatar

I think that's actually a terrible solution: instead of addressing the underlying problem, you're just wireheading yourself.

You'd solve the problem by growing as a person once you've exhausted all novelty from the human experience. Then you expand your mind so you're capable of appreciating experiences you previously couldn't. Same way there's experiences you can't appreciate as a child but can as an adult.

Expand full comment
luciaphile's avatar

I am very bad at recall and always get mixed up about the names of the geological eras and epochs. I’m always amazed anew when I remember that the one we’re in began, let’s see, 11,700 years ago. How coincidental it should be so near in time.

And the one before began many millions before that.

Expand full comment
StrangePolyhedrons's avatar

[The foundation trustees are stuck with the problem of figuring out how to use hundreds of billions of dollars to benefit a space heater which doesn’t, technically, have any preferences]

I wonder if the homework story discussed the option of ignoring the dead man's wishes and not worrying about the space heater. "The option" meant not just as an option for the trustees but for all of society who could decide not to enforce the dead man's wishes.

Expand full comment
Greg kai's avatar

I think the problem of deep utopia is that people will not be in control anymore, and this is a problem for adults regardless of how much material and mental well-being you have (in fact, it's already a problem now in at least some sections of current society). A combination of not being able to control anything if things get worse, and pure power craving (which implies being able to impose your will on real others). Probably a little of the former and a lot of the latter... So either hack your brain to forget about reality and be the eternal god-emperor of your world (so wireheading), or drop those cravings and be comfortable with benevolent, superior, semi-understandable masters. Which is not inhuman, just another neoteny step for humanity (neo-neoteny), so we keep a motivation system closer to young children's than most adults'...

Expand full comment
Sophia Epistemia's avatar

yeah. Good End tbh. have you SEEN what the power-craving humans *do*?

and no need to get to the evil dictators level. think of, like, parents. cops. teachers. doctors. prison wardens. priests. managers. and so on and on-

it's not even intentionally evil a lot of the time. it may just be misalignment of the authority figure with the experiencing self, or a set of future remembering selves, of the subject of that authority.

Expand full comment
Brandon Fishback's avatar

Humans are limited in their abilities of control. Superintelligent AI has no limits.

Expand full comment
Victualis's avatar

Are you saying ASI will rewrite the substrate of the universe and change the fundamental physical constants? Please speak into the microphone.

Expand full comment
Vakus Drake's avatar

At least you could have things set up so that all the humans could eventually grow into superintelligences, after they've exhausted the novelty from the human experience.

So even if humans are basically children to superintelligences, they could eventually mature into agents with actual control.

Also I doubt people would need to wipe their own memory unless they are a complete egomaniac. Since most people could probably be satisfied by having a utopia simulation and just not comparing themselves to people they don't interact with outside it.

I can imagine social media being heavily restricted because it promotes comparing oneself to others in a way that will reliably make the people who use it unhappy in any conceivable society.

Expand full comment
Greg kai's avatar

I am not saying people would not be comfortable once in the simu. I say the fear of entering the simu is more a fear of losing control than a fear that it is not real. And once inside the main nagging issue is indeed the memory of it being a simu so you being at the mercy of the simu creator, which would actually be god in this context. But given how religiosity is considered comforting for most, I guess indeed most people will be comfortable once inside even knowing there is a god...

Growing into a SI would indeed be a natural outcome for those not comfortable being at the mercy of a (benevolent?) god. It could be quite natural: an improvement here and there and you progressively join the club without the impression of unwelcome change (regardless of how little you have in common with your previous self in the first place).

Or, far more likely, the simu makes you think you've joined the club, without actually changing the reality hierarchy at all. It's the easiest way to alleviate the only nagging doubt preventing you from reaching full bliss: the impression that you are not at the wheel of the world :-).

Once in a sufficiently powerful and adaptable simu, nobody will ever want to leave...nor distinguish between actually leaving or getting served a great leave story adventure à la "matrix 1" :-)

Expand full comment
Vakus Drake's avatar

>once inside the main nagging issue is indeed the memory of it being a simu so you being at the mercy of the simu creator, which would actually be god in this context

You're not considering that one could have a simulation created to your specifications for you by a SI. Of course they could deceive you here, but then again they could probably just control your mind with nanobots if they wanted to, regardless of whether you decided to enter into a simulation. On the other hand though this means you're not taking on any *additional* risk entering into a simulation.

>Growing into a SI would indeed be a natural outcome for those not comfortable being at the mercy of a (benevolent?) god. It could be quite natural: an improvement here and there and you progressively join the club without the impression of unwelcome change (regardless of how little you have in common with your previous self in the first place).

The logic that motivates becoming a SI is just the fact that making very gradual improvements to yourself is less existentially terrifying to many than eliminating their boredom or becoming a loop immortal (a concept from FiO) and repeating the same actions over and over again for eternity without remembering them due to not having improved your memory. Plus there's no reason a SI couldn't help create paths to self improvement that reliably preserve certain aspects of one's identity like core values. This option isn't in competition to existing in simulation or anything like that.

Expand full comment
LGS's avatar
Oct 17 (edited)

Does he talk about raising children or is that not a source of life's meaning in the eyes of philosophers? Atlas shrugged also doesn't seem to address what the utopia supposedly does with children. Is this just a blind spot of utopian writing?

>new utopia book

>Ask author if the utopia has children or is childless

>She doesn't understand

>Pull out illustrated diagram of what is a child and what is childless

>She laughs and says "it's a good utopia sir"

>Buy book

>It's childless

Expand full comment
Sophia Epistemia's avatar

Deep Utopia would provide you with that too if you want it. hopefully also ensuring that the children would ALSO be in their own Deep Utopia by being *your* children.

Expand full comment
LGS's avatar

Well, I'd have to ask how the exponential growth is to be handled (it's still a finite universe in deep utopia, right?)

Also, you make it sound like wanting kids is some niche preference rather than the main source of meaning in most people's lives throughout history. Seems like something that's worth explicitly mentioning in a book and/or review

Expand full comment
Sophia Epistemia's avatar

people seem to make a lot fewer kids as soon as they can choose how many they want. may or may not be exponential, which may or may not matter depending on the size of the universe (which keeps turning out to be bigger than we thought every time we look further btw) and how the rates of things like replication, residence availability growth, dispersion through space, and other relevant ones i'm not thinking of rn.

Expand full comment
LGS's avatar

If even 1% of the population chooses to raise a child once every 500 years, you have exponential growth with doubling time of like 35,000 years. Grows to more than number of atoms in known universe within 10 million years.

If the universe is finite, it doesn't matter how large it is or how fast residences are growing; space will run out relatively quickly (long before heat death of the universe)
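
A quick back-of-the-envelope check of the growth arithmetic above, as a rough sketch (the ~10^10 starting population and the ~10^80-atom ceiling are my assumptions, not figures from the comment):

```python
import math

# Assumptions for illustration only: ~10^10 people to start,
# ~10^80 atoms in the observable universe as a hard ceiling.
start_population = 1e10
atoms_in_universe = 1e80

# 1% of the population adds one child every 500 years,
# i.e. roughly 1% growth per 500 years, or a per-year rate of 0.01 / 500.
rate_per_year = 0.01 / 500

doubling_time = math.log(2) / rate_per_year
print(f"doubling time: about {doubling_time:,.0f} years")  # ~35,000 years

years_to_outgrow = math.log(atoms_in_universe / start_population) / rate_per_year
print(f"population exceeds the atom count after about {years_to_outgrow / 1e6:.0f} million years")  # ~8 million
```

Under those assumptions the doubling time lands near 35,000 years and the population passes the atom count after roughly 8 million years, consistent with the "within 10 million years" figure above.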

Expand full comment
Vakus Drake's avatar

I feel like this is an underrated problem that leads to difficult, philosophically interesting trade-offs.

For instance: you can solve this problem and allow people to still have as many kids as they want (within reason). It just requires the AGI to permit only the creation of new minds which lack any desire to have kids of their own.

That way you'd only have linear growth, which stays below the polynomial growth of civilization as a whole (it can expand through space at most cubically), whereas any exponential growth would eventually outstrip that polynomial growth.

Expand full comment
hwold's avatar

Sounds circular? "What is the Purpose of Life?" "To have children." "And what Purpose will those children have?" "To have children." The buck has to stop somewhere; either a childless life can still have meaning, or there is no meaning at all.

Expand full comment
LGS's avatar

A childless life can have meaning, sure. It's just not exactly utopia if a major source of fulfillment is not available.

Expand full comment
Vakus Drake's avatar

What if eventually most of the population is deliberately made to lack any need/desire to get meaning from having kids?

Expand full comment
Ghatanathoah's avatar

Sports are a form of entertainment that relies heavily on unpredictability as a source of enjoyment. They are designed specifically to be hard to predict, so it does become hard to imagine how posthumans who are way better at predicting stuff than we are could enjoy them.

Other forms of entertainment are not so reliant on unpredictability as sports. Many people reread books, rewatch movies, and listen to music more than once. People also enjoy reading books/watching movies from formulaic genres where the broad outlines are predictable, even if they haven't read/watched that specific book/movie before. So even if sports become boring and predictable, they won't be the only form of entertainment left in utopia. (I should also note that there is demand for rewatching unusually good sports games; there was a network called ESPN Classic for a while, so sports aren't 100% predictability-based.)

The other thing is that, presumably, posthumans will be even more unpredictable than regular humans since they will be more intelligent. Playing future sports against each other will probably be as challenging for them as playing present sports is for us.

Expand full comment
Dylan Richardson's avatar

Either I'm far more radical in my view of deep utopia than Bostrom - surely not? - or the book is predominantly concerned with reconciling the philosophical mainstream's typical concerns. Certainly many philosophers do *a lot* to ensure all possible objections are responded to - is that what this is?

My view is that deep utopia would be, or at least ought to be, essentially just endless computationally-optimal hedonium factories. No need for narrative. Probably not even any need for non-homogeneity of experience from moment to moment, forever on. Or if there is, the experience just ends (memory deletes) and replays from the beginning, like replaying a video game. The main uncertainty here is selfhood, at the interval between non-utopia and utopia. Should I relinquish my selfhood, essentially suiciding, for the sake of optimal experience-machine efficiency? Would I?

Expand full comment
anomie's avatar

Well thankfully, you won't be the one making that choice.

Expand full comment
Dylan Richardson's avatar

If it is the case that deep utopia is reached through the capabilities of an artificial superintelligence - which seems likely; how else could it be? - then there is also a possibility of human disempowerment. If AI is making every other decision because it is best at them, it seems likely that it would also be making the moral decisions. Assuming the AI is also super-intelligent at moral philosophy, there's a chance it does what I wrote.

Expand full comment
Brandon Fishback's avatar

He won't, but utilitarianism is very influential among the people working in AI.

Expand full comment
Brandon Fishback's avatar

This is one of the worst possible futures imaginable.

Expand full comment
Dylan Richardson's avatar

This is a very common reaction. It's difficult to reconcile it with my own view, but I'll try.

My main interpretation is that you are having trouble disassociating from your particular experiences, from your subjectivity. Qualia, positive or negative, arise with us in particular contexts because we have evolutionary histories which have given us very particular minds. There is no necessary reason at all to think that qualia *can't* arise in very different minds. And if we are optimizing for "the good", rather than the survival of genes, it is reasonable to think that they *would* be very different. Following so far?

Something that is clear to me when I introspect is that when I experience great joy, happiness, bliss, whatever you want to call it, it occurs in response to a trigger. To my mind, in Deep Utopia, the "trigger" would be redundant. After all, the "before joy" moment isn't one of joy!

But even the meaningfulness that we attach to our most valued experiences is something our brains do chemically. It's not an objective property out there in the world that needs to be had. And there is no reason to insist that the "trigger" needs to occur. We can aspire to far more perfect alien minds than that.

Expand full comment
Brandon Fishback's avatar

I don't know why hedonistic utilitarians just implicitly assume that there is something more objectively true about valuing pleasure over everything else. Yeah, if you're a utilitarian, what you suggest is the logical end result. But for anyone who has any other values, which is almost everyone, your suggestion is a grotesque dystopian nightmare. I don't care about happiness as a disembodied abstract platonic value. I care about happiness as it's attached to people. The fact that your value system necessitates endless "hedonium factories" just demonstrates why utilitarianism is bad in the first place.

Expand full comment
Dylan Richardson's avatar

"for anyone who has any other values, which is almost everyone" - yes, I agree. But that's you and me *now*. This is why I prefer the word "hedonium" over pleasure, because people get overly attached to one narrow conception, derived from their current personal experience.

Let me go back to my prior point. You are being overly influenced by your current biological limitations, which are due to resource constraints and completely amoral evolutionary processes. This need not be the case in a deep utopia.

Just one small example: humans and other animals have satiation limits when consuming foods. A slice of pie is great, two less so, six even less so. Even when we aren't "full", satiation nudges us to consume more varied foods, giving us nutritional variety. Other goods, like art, theater or games, have analogously diminishing returns. Without resource constraints and with our biological constraints lifted (perhaps by whole brain emulation), there is no need for this constraint to remain. You "do not wish to be forced to consume..." any one good *now*, but that may change. If it does, it seems likely to me that you'll settle on some sort of equilibrium. That equilibrium is basically what I'm calling "hedonium". We can imagine a wide variety of goods, but an equilibrium seems much more likely.

I wonder if we can agree that, at a very basic level, whatever your theory of welfare, the value of an experience = "amount+intensity"? Two happy days vs one? At the more immediate level, organisms vary in time perception: https://rethinkpriorities.org/publications/the-subjective-experience-of-time-welfare-implications

"Intensity" is a bit more nebulous perhaps, but it seems to me that we can at least do ordinal ranking of experiences. Taste testing food is a basic application. But so is putting together a vacation itinerary.

Those two desiderata are what shape the equilibrium in my mind.

Expand full comment
Brandon Fishback's avatar

I’m not confused about what you’re saying; it’s just horrific. If you want to change my values, you’ll have to lobotomize me, because a lack of biological limitations won’t automatically make me a different person. If someone went up to me and said “hey, we’re going to murder your entire family and convert them to hedonium, but don’t worry, we’ll rewire your brain so watching them die brings you pleasure,” I would fight them tooth and nail. The value of their lives is not reduced down to this made-up abstraction, hedonium. Are you in favor of murdering people for your pleasure factories?

Expand full comment
Dylan Richardson's avatar

You keep saying "my values" but haven't said what those are, or provided justification for their correctness. If your justification is "X is the correct set of values, because X is the set of values that I have", well, you can see why I may find that unsatisfying.

My dog deeply values cookies, but obviously cookies don't stand up as an ultimate value.

Expand full comment
Dylan Richardson's avatar

Good post! I find it notable that Scott finds his way to this by imagining an "all knowing Buddha", contemplating all things. It's pretty aligned with what he IRL considers high status - the ultimate ideal self. As for me, I would be strongly compelled to enter a typical sex/romance/success simulation of my real life. My above position isn't "what I want" really; it's what, on reflection, I've come to know would actually be best.

Expand full comment
Victualis's avatar

Why should utopia be hedonium factories? I have plenty of access to hedonium right now and find it is akin to salt, nice in small quantities only. I do not wish to be forced to consume it as though it were manna.

Expand full comment
Dylan Richardson's avatar

Let me go back to my prior point. You are being overly influenced by your current biological limitations, which are due to resource constraints and completely amoral evolutionary processes. This need not be the case in a deep utopia.

Just one small example: humans and other animals have satiation limits when consuming foods. A slice of pie is great, two less so, six even less so. Even when we aren't "full", satiation nudges us to consume more varied foods, giving us nutritional variety. Other goods, like art, theater or games, have analogously diminishing returns. Without resource constraints and with our biological constraints lifted (perhaps by whole brain emulation), there is no need for this constraint to remain. You "do not wish to be forced to consume..." any one good *now*, but that may change. If it does, it seems likely to me that you'll settle on some sort of equilibrium. That equilibrium is basically what I'm calling "hedonium".

Expand full comment
Victualis's avatar

I consider the ludic attitude the natural one in a utopia, not a hedonic maximalist stance. Constraints that one has chosen make existence interesting.

Expand full comment
Vakus Drake's avatar

I should note that most people value certain things within the world infinitely more than the sort of qualia you're elevating above everything else here. If that wasn't the case, then self-sacrifice wouldn't exist.

People frequently choose actions that they fully know to be suboptimal for their happiness, because they have things they care about more than qualia!

When it comes to altruism and relationships, people don't want qualia on its own; the qualia is not their terminal goal. If it was, then nobody would have an issue with the experience machine.

Additionally, valuing social relationships as a terminal goal, not an instrumental goal, is exactly how you should have expected a social species to evolve!

So I think many people would consider human extinction preferable to many scenarios like the one you propose, wherein people are utterly robbed of their human dignity.

Many people like myself will therefore rationally precommit to trying to destroy the world if it's either that or the scenario you put forth, since that precommitment is worth it if there's literally any nonzero chance it influences things away from that scenario.

Expand full comment
Hilarius Bookbinder's avatar

Bostrom's book sounds like Bernard Williams's paper "The Makropulos case: reflections on the tedium of immortality" engorged on AI steroids.

Expand full comment
Lawrence's avatar

The first note made me think of this recent post by Robin Hanson:

https://www.overcomingbias.com/p/status-is-your-god

If Hanson is right, and this book only briefly touches on status, then it seems the book is missing a key element. I find it all too easy to imagine that, regardless of how materially well off we are, we will devote our time and energy to competing with one another in increasingly consuming contests.

Expand full comment
Sophia Epistemia's avatar

contrary to Hanson's monomaniacal (and unfalsifiable) obsession, not everyone is a status junkie.

Expand full comment
Melvin's avatar

Perhaps some people aren't, but most people are.

My argument for believing that people are heavily status-motivated:

1. Upon introspection I find myself to be quite status-motivated.

2. But my behaviours aren't particularly status-seeking compared to other people's (e.g. I don't go round buying a lot of expensive things to prove I can afford them, and I often sacrifice status for comfort or laziness). So I'm probably not more status-driven than average.

Expand full comment
Lawrence's avatar

I think there are very few people who don't care at all about status - a saint or a mystic here and there. Most people aren't "status junkies" either though. Most of us want a certain amount of status in the context of our own social frame of reference, and are content to trade excess status for other things, like comfort or laziness.

The problem is that status is zero sum. If you care about status at all then you care, not just that you are doing well, but that you are doing better than someone else. When all the needs and wants of everyone are met, as described in this book, status wants will remain. By definition, not everyone can have the status they want. So we will compete with each other, and given the resources at our disposal, and the lack of anything else to chase, those competitions will be enormously consuming.

(Edited for typo)

Expand full comment
Ilya Lozovsky's avatar

I feel like there is a work of fiction that directly addresses all of this (without, perhaps, solving anything): The Metamorphosis of Prime Intellect.

It is a weird and intense read, and a bit out of date, but really engages with these questions. I’ve read it a few times and can never get it out of my head.

You can read it for free online: https://localroger.com/prime-intellect/mopiidx.html

Expand full comment
Jeffrey Soreff's avatar

I rather liked it _but_ I would want to just keep the situation at the end of chapter 4.

Expand full comment
praxis22's avatar

I'm not reading it, I'm listening to it via an annoying male voice. Do I get credit?

Expand full comment
TK-421's avatar

"But most people already don’t do these things. Most of us aren’t revolutionizing societies or pursuing social missions. We’re just sort of getting by. You can still do that in Deep Utopia, and live a life of constant bliss."

This is the most important observation in the review.

Everyone who thinks a utopian existence would be meaningless or that humanity would be better off dead than living in an AI directed utopia is ignoring existing reality. Life isn't driven by purpose and a human life doesn't require any particular challenges or goals to be worth living. Quite the opposite - those goals and challenges usually exist because of conditions that make lives less worth living.

Expand full comment
Sophia Epistemia's avatar

you. you Get It.

Expand full comment
Crotchety Crank's avatar

I don't think it would be literally "meaningless" or that we'd be "better off dead" - that seems like maybe a bit of a strawman - but I do think lack of purpose would be an issue in a utopia. And Scott's right, those issues aren't unique to utopia - but that just means I think lack of purpose is also an issue today!

There's a suppressed premise in what you're saying: "and there isn't a big issue with the way society is today *qua* purposelessness." I think I disagree, and that's where the crux lies.

Expand full comment
Vittu Perkele's avatar

Lack of purpose can be solved by the fact that purposelessness is itself a subjective emotional state, and therefore future minds can be engineered to never feel purposeless, but rather to find everything they do purposeful.

Expand full comment
Crotchety Crank's avatar

This is literally Nietzsche's Last Man, who "invents happiness" by defining unhappiness away and suppressing everything about himself that might object. You might find nothing wrong with this, but I think many do, and I count myself among them.

Expand full comment
Vittu Perkele's avatar

You would really reject the possibility of every experience being suffused with a sense of meaning the depth of which would be impossible to currently conceive? To me that sounds more like the domain of the overman than of the last man.

Expand full comment
Crotchety Crank's avatar

Yes, I would object to us "solving" purposelessness by engineering it out of ourselves rather than doing purposeful things.

And while you're free to think that the wirehead is in a higher state of being, I feel obligated to point out that the thinker who invented the terms "overman" and "last man" would strenuously object to you using those terms in that way. If you read his writings, it'll be extremely clear that wireheading is exactly the kind of thing he most despised. Again, I'm not asking you to agree with Nietzsche, but if there is going to be content to the terms "overman" and "last man" beyond "future for mankind I like" and "future for mankind I don't like", then we should probably make some effort not to corrupt the meaning the author gave them.

Expand full comment
TK-421's avatar

If you think it's a strawman, calibrate it down to what people in this debate - and this very comment section - are describing. They're saying we'd be the "pets" of a sufficiently advanced AI. I'm pushing back against that intuition.

I think there isn't a big issue with the way that society is today qua without purpose, because that "without purpose" has existed since the first self-replicating matter began to spin itself off. There has never been a purpose. Crying that utopia, or our current society, produces a crisis of purposelessness is pretending that your own current life - which, in virtually every actual case, is the life of a young person disposed to philosophy, since they make most of these arguments - has purpose. Nothing has had purpose. All the purpose anyone has found has been as "meaningless" in a broad sense (something that doesn't exist) as anything those same people would claim as meaningless.

This whole argument is a society without anesthesia saying we shouldn't develop it because then people wouldn't have the character-building experience of surgery without anesthesia - and they might even elect surgeries that weren't critically important. Don't ask why Puritans believed their nonsense: they are you.

Or another way: we have always had the lack of purpose and the scarcity. Opposing advances that can at least remove the scarcity because we'd then have lack of purpose is dipshittery of the highest order.

Edit: And if you think I'm tilting at strawmen, here is the subtitle of this post:

'What problem do we get after we've solved all other problems?'

Expand full comment
Crotchety Crank's avatar

..."this very comment section"... ctrl+F "pets" and the only mentions I see (5 of them, total) are your comments, and some discussion of the Culture novels. I can't find a single person arguing that the problem with Deep Utopia is being "pets" of sufficiently advanced AI. So yes, I think you're failing to engage with what people are actually saying, and inventing a position to attack.

Another way in which you're doing this is in this very comment! "Opposing advances that can at least remove the scarcity...is dipshittery..." I don't oppose those advances, I haven't said that I do. Some others have expressed discomfort with the idea of conquering death, but I think virtually nobody has objected to making life more comfortable and wealth more plentiful. So once again, I think you're misunderstanding your interlocutors, and inventing a weak position to attack.

If you want me to engage any further, I would appreciate you actually showing me you understand what I think. Try to actually faithfully reproduce my point of view, then maybe you'll be able to argue effectively against it.

Expand full comment
TK-421's avatar

If you don't understand that the discussion of the culture novels is as an example of the anticipated world post-ASI singularity - the world being examined in the book being reviewed in the post we're commenting on in this very section - rather than some non sequitur discussion of sci-fi plot points then ¯\_(ツ)_/¯

Expand full comment
Jeffrey Soreff's avatar

>Quite the opposite - those goals and challenges usually exist because of conditions that make lives less worth living.

Agreed. Hassles detract from life. They don't add to it.

Expand full comment
Elle Griffin's avatar

Right!

Expand full comment
hwold's avatar

Why do we need utopia for this? If the end goal is just "getting by", we could have stopped at Homo erectus, no need for all those fancy "agriculture", "society", "economy", and ultimately "utopia". Hell, we could have stopped at being bacteria. Bacteria are pretty good at "just getting by".

Expand full comment
Crotchety Crank's avatar

This made me think of the one thing of actual value I got from (of all places) the Unabomber's Manifesto: the "Power Process." (Obligatory note: he was a broken person who did evil things, and I'm in no way endorsing him.)

To abbreviate, he thought meaningful lives required a process of "goal - effort - attainment of goal", with autonomy operating throughout; and that "industrial society" stymied that by making any worthwhile goals require no effort. "Surrogate activities", like sports and leisure, are pursued in large part because no more important goals present themselves. There are also interesting passages about how discomfort with risk is one of the societal motivations for undermining the "power process", somewhat reminiscent of how Everest-with-safety-nets isn't really the same.

I bring this up just to say: I don't think this question is particularly new, or that we need to visit Deep Utopia to examine it in detail; societies with a lot of surplus are sufficient to bring it into focus. Deep Utopia might help our thinking by allowing us to assume away some inconveniences, but there's also a virtue to sticking close to what you know in order to be able to rely more firmly on your intuitions.

Expand full comment
Victualis's avatar

I have not read the manifesto, but your summary seems fallacious. Why should we conclude that industrial society makes worthwhile goals require no effort? This seems to be equating "worthwhile goals" with things that require mechanical effort. I am very happy to have access to running water and sanitation, and do not define a quotidian ferrying of buckets of water as a worthwhile goal that has been cruelly wrested from me by industrial society. Nay, I rejoice that my goals still require effort, and that the hours I have gained not doing tedious maintenance tasks have instead been spent expending effort toward my goals. Is the manifesto really this broken? Or perhaps I misunderstood your summary?

Expand full comment
Antilegomena's avatar

"you could spend eternity reading the Great Books and having extremely perspicacious opinions about them. Plenty of scholars do that today, and nobody thinks their lives are meaningless."

For the record, my mom thinks that sort of life is meaningless, and scorns it just the same as something like wireheading

Expand full comment
fion's avatar

One thing I'm concerned about, that we'll be competing for even in shallow utopia, but which will only get worse in deep utopia, is the natural world. You mentioned Everest. Everest is problematically crowded today, and this will only get worse with more rich/superfit/bored transhumans. Sure, some will be fine with the simulator, but won't some people care about the real thing? Do such people just have to accept that they lost the values war and that the real Everest has long been flattened to build more computing space for the simulators?

I love the natural world, in a hard to justify way. And while I'm glad for other people to enjoy it too, there is, uncomfortably, a limit to how many people can enjoy a given place without it becoming crowded and restricting everybody's enjoyment. I guess the simulator is the answer, but that seems sad to me (in a hard to justify way).

Expand full comment
Phil H's avatar

I am completely disoriented. Scott, and I assume Bostrom, is talking about this "problem," but I couldn't for the life of me see what the problem was. There's lots of talk in this review of "cheating," but cheating is something you do when there's a known objective, and rules about how one ought to get there. I don't know what the objective is!

I'm inferring that it's something like "finding meaning in life" - which I suppose is a thing that people can do, but I don't think it's something that most people spend much time worrying about, and I don't see any reason to think that utopians will spend more time worrying about it. I mean, the Utopian New York Times will publish endless thinkpieces about it, but that's just the same as now.

Personally, as a globally-rich/Western middle-class person in good health, I find myself to be pretty much in utopia right now. All my problems are caused by me; if I wanted a greater purpose, I could definitely found a local branch of the rationalists club. But I don't. Watching YouTube is really good. This "problem" or need for meaning just doesn't exist for me, and I don't see any reason to suppose it will exist in much more pressing form for anyone else in the world when they achieve utopian states, either.

Expand full comment
Mo Nastri's avatar

Interestingly, you sound like younger Kevin Simler, before he wrote https://meltingasphalt.com/a-nihilists-guide-to-meaning/. Curious what your take on Simler's take on meaning is.

Expand full comment
Phil H's avatar

Thank you for the lovely link! I like that very much, and happily agree with all of it.

Simler gives some examples of what meaning might be, including scientific discovery. Scott/Bostrom suggest that in Utopia, science will be essentially complete, and so there won't be the opportunity for people to get their meaning in that way.

But I don't do any science at all, and it feels... fine. 99.99% of us are not scientists, and even among those who are, opportunities to drive real progress are extremely limited. Most people aren't Einstein or Newton, and that doesn't seem to be a "problem" now - so why assume that it will become a problem in Utopia?

I feel like similar arguments can be made for all of the other categories, too.

Expand full comment
~hanfel-dovned's avatar

Yes, this has already happened, but not in so carnal a way as future humans putting you in a VR headset.

Talk to any advanced meditator or psychedelic researcher. The QRI guys, Rupert Spira, half the people at OpenAI at this point. They'll tell you: subjective experience emerges from a set of constraints blinding you to the underlying field of ~infinite love, transcendent joy~ that makes up the universe. Meditate, die, or take the right drug, and the eddy of your consciousness merges back with the main flow.

Imagine you're the happiest person in the world, perfectly content to just sit on the couch blissfully doing nothing at all. Someone comes up to you, puts a game controller in your hand, and turns on Mario. You're not going to *not* start playing.

Playing games doesn't necessarily grant happiness. Adopting a new value system with the potential for failure might even make you less happy. Whether that pain necessarily needs to bring suffering with it, that's above my paygrade, but it's clear that the tension and release that comes with discovering a new system through the process of mastery is different from pure, jhana-like bliss.

So if the universe is a field of infinite love, why would it instantiate itself as conscious beings with limited subjective viewpoints? The same reason you play video games: BECAUSE IT'S SO MUCH FUN, JAN.

Expand full comment
Bob Frank's avatar

> Talk to any ... psychedelic researcher.

Invoking people who knowingly, willingly give themselves brain damage in the name of "research" does more to harm the credibility of the point you're making than to shore it up.

Expand full comment
Melvin's avatar

What about psychedelic researchers who only give other people brain damage?

Expand full comment
Stephen Pimentel's avatar

"If you made Zizek write fiction, you would get Deep Utopia." Yes, exactly.

This book illustrates well why I so dislike Bostrom's genre of work, both previously and now. It floats free from reality in a way I think ends up being the opposite of helpful.

"Oh, but surely we should exercise our imaginations. Surely, speculation can be good." Yes, of course –– as long as it maintains a relation to reality that probes what is actually possible. My critique is precisely that I don't think Bostrom does this.

Expand full comment
Nicholas Halden's avatar

"For example, Bostrom thinks a deep utopia would still have sports available as a distraction / source of meaning. I’m not so sure. Consider weight-lifting. Your success in weight-lifting seems like a pretty straightforward combination of your biology and your training. Weight-lifting retains its excitement because we don’t fully understand either. There’s still a chance that any random guy could turn out to have a hidden weight-lifting talent. Or that you could discover the perfect regimen that lets you make gains beyond what the rest of the world thinks possible."

I often disagree with Scott when he talks about sports, but this is just not at all why people like weightlifting. People like athletics because they represent a triumph of the human spirit and will over the physical realm, and mastery over a domain of goals everyone can understand. It therefore makes sense to me that Scott, who thinks of athletic success as a combination of "your training" (note the passive tone), doesn't see the appeal of professional athletics in reality or utopia.

I wonder how the rationalist project would react to a proof of libertarian free will. If it turns out that weightlifting is a combination of your genetics/resources/circumstance + how much you exert your sovereign will over the world, would Scott update?

Expand full comment
Crotchety Crank's avatar

Yeah, I think the choice of weight-lifting was a bad one; most weight-lifting - which happens noncompetitively - is done as a struggle with and development of yourself, and you can be proud of your progress regardless of how it might compare. I think it's hard for people who don't do much recreational athletics to appreciate their appeal, since nearly all athletics in media are competitive, which is such a different experience.

Expand full comment
Matt A's avatar

Based on this review, the book seems to miss much of the point of human existence. We want to be able to express ourselves, and we're social. The review mostly didn't address these things.

A more minor point: The review seems to assume a lot of determinism in most aspects of existence. That, if we just got good enough supercomputers, we could compute the perfect bodybuilding genome and workouts, for example, and that this would always win. This ignores intrinsic variation that, even in Utopia world, I don't know that we (or the nanobots or w/e) will actually be able to control. (The examples of probabilistic outcomes for multi-player sports nod at this, though it isn't obvious why one would be deterministic and the other not without assuming a very specific level of ability-to-almost-perfectly-model-things.) My point is that this intrinsic randomness could easily be enough to entertain the masses of humanity. Just look at how much folks enjoy games of chance!

Expand full comment
dubious's avatar

The first thing that occurred to me at the end of section I seems not to have been brought up or addressed by anyone. (I have not read every comment.)

This is simply a fundamental contradiction: if there are "literally no problems," then people's lives could not "become boring and meaningless". You simply cannot have both. Either you have not solved all problems, or the other is not a problem.

This is still the simple, common contradiction with the notion of utopia, dressed up a bit: everything "is perfect," except it's not. Utopia is dystopia. Saying "it's really really utopia (but still dystopia)" doesn't change that.

If it's really actually utopia, then there is no such problem by definition.

Expand full comment
Crotchety Crank's avatar

But it might be the case that this makes utopia impossible. If the absence of *other* problems produces a problem in itself, then "actually utopia" would be logically unachievable.

So you can treat the book as an exploration of (1) whether "actually utopia" is achievable and (2) if so, what it would have to look like. In order to do so, we start by removing all the other problems, and then seeing whether the last one (ennui, lack of purpose) is conquerable or not.

Expand full comment
dubious's avatar

Of course. The point is utopia is not possible because it's inherently a contradiction. At least without changing humanity itself.

The premise of the book seems uninteresting, akin to "if we were really very smart, infinitely smart even, what would then interest us?" To assume we have any idea what "perfection" or "infinitely smart" or any other condition would be like, and have a useful conversation about it, seems of little value.

Can you really solve all other problems before boredom and lack of purpose? Or is this the first problem you must solve?

Expand full comment
Vittu Perkele's avatar

You say utopia is not possible without changing humanity itself, so changing humanity is precisely the way that it becomes possible. Boredom and ennui are subjective emotional states that a mind could be modified by future technology not to have. Upon this possibility, the primary supposed downside of a perfect utopia is immediately removed.

Expand full comment
C. Connor Syrewicz's avatar

(Long comment - TL;DR at the bottom.)

Whenever the subject of utopia is broached, I always want to hear someone respond to this question: What about the hedonic treadmill and comparison effects?

Maybe the fact that people in this hypothetical utopia are at least somewhat wireheaded means that we could overcome the hedonic treadmill, but I’m less certain that—in a world of diverse experiences, which we are assuming here because of the anti-cheating coalition—we can overcome comparison effects.

Maybe we can; I don’t know. But my little pet theory is that subjective wellbeing or “happiness” is not found in “good” but in “better.” In other words, subjective wellbeing is ultimately the result of some kind of downward comparison: i.e. perceiving yourself and your situation as “better” than your past self, other people, and/or some other “worse” counterfactual state, situation, group, etc.

If I’m right, then, in a utopia, everyone would have to perceive themselves and their situation as “better” than something or someone else. But “better” presupposes two things: “worse” and change.

If everyone feels “better,” then there has to be a “worse” against which they are comparing, but if there’s a “worse” then can this really be a utopia? Maybe everyone could compare being in a utopia to being in anything but a utopia, but that would mean that everyone would have to know what it’s like not to be in a utopia. Plus, this comparison would presumably lose its salience over time (i.e. the hedonic treadmill), and then what are people comparing themselves and their situation to in order to generate the subjective wellbeing that comes from feeling “better”? Their past selves? Other people? But if their past self was “worse off” than their current self, then was their past self really living in a utopia? And if people start downwardly comparing themselves to others, then are these “worse off” other people really living in a utopia? And won’t this inevitably generate interpersonal conflict? And how can this be a utopia if there’s interpersonal conflict?

(At the risk of repeating myself, I guess what I’m saying here is that—if I’m right that downward comparisons or feeling “better” are the source of many kinds of subjective wellbeing—then the world needs to contain a “worse,” and if the world contains a “worse,” then it can’t be a utopia.)

Feeling “better” also requires change because, without change, a feeling of “better” eventually loses its salience and we’re back on the hedonic treadmill. But can a world of changing circumstances really be “better” for everyone? Won’t some changes in circumstances be “worse” for some people? And if so, then we’re back to the issue of whether or not a world that contains worse can really be a utopia.

(To quote the Talking Heads: “Heaven is a place where nothing ever happens.” And the moment something happens, we can infer, problems arise and heaven ceases to be heavenly.)

Maybe, if we’re *all* wireheaded, then we can all be made to feel like things are getting “better,” and be in a constant state of bliss based on these fictitious changes. But the moment that you start trying to accommodate the anti-cheating coalition, you run into the very problems I discussed above. Won’t some wireheaded people feel envious of those achieving “real” bliss by not cheating? And won’t some anti-cheaters, after taking some risks and failing, feel envious of the wireheaded people who aren’t burdened by such things? And aren’t these, in some sense, “problems”? And isn’t this imagined reality, then, non-utopic?

One more little provocation from the wellbeing literature: positive psychologists are still trying to understand the major dimensions of wellbeing, but Seligman’s PERMA model argues that intrinsically rewarding experiences come in five varieties: (p)ositive emotion, (e)ngagement, feeling interpersonally or (r)elationally close to other people, (m)eaning, and (a)ccomplishment.

Assumedly, a utopia—wireheaded or not—would be one in which there are ample opportunities for everyone to experience all five of these experiences, but here’s the rub: aren’t the conditions that lead to these five feelings at least somewhat mutually exclusive? Like, doesn’t engagement (i.e. deep situational interest or “flow”) require things to be “interesting,” and aren’t “interesting” things kind of problematic? But a world that contains interesting/ problematic things will eventually make it hard to experience positive emotions when one wants to/ needs to, right? The same kind of issue arises with meaning and accomplishment and any goal-oriented form of wellbeing: Don’t these presuppose a want, a desire, and, therefore, a lack? And doesn’t lacking things make us feel bad sometimes? And isn’t a world that is “lacking” in some senses not a utopia?

Okay, one last thought: one of Sartre’s concepts—“finitude”—always spoke to me, and it refers to the fact that in choosing X we are always, necessarily, not choosing Y or Z or anything else that is not-X. Take a situation in which you have to choose between two good but mutually exclusive options. You choose the “better” option and feel good about your choice, but if the other option was also genuinely desirable, then won’t this cause at least some people to lament their finitude? Assumedly, in this utopia, we have overcome death and, therefore, have a long time to choose X and Y and plenty of other not-Xs. But time marches forward and things change. And won’t choosing X and not Y, in some instances, make it impossible to choose and explore Y later? Again, maybe a wireheaded world solves this, since maybe your wireheaded world can be rolled backwards and forward and all kinds of options can be chosen and explored. But assumedly, time isn’t *infinite.* And doesn’t a finite timeline necessarily entail a finite set of options to choose and explore? And doesn’t this subject us all to the “problem” of finitude? And isn’t this, therefore, not a utopia?

Maybe all of these thoughts treat the idea of a “world without problems” too literally and too absolutely … but …

TL; DR I still can’t help but to think that the problem with a lot of utopic thinking is threefold:

1. a failure to appreciate comparison effects/ the hedonic treadmill: i.e. how “problems” are necessary for making downward comparisons, for escaping the hedonic treadmill, and, therefore, for feeling subjectively “good” about oneself or one’s situation

2. a narrow conception of wellbeing that doesn’t account for the “problem” of different forms of wellbeing having contradictory needs and conditions (i.e. the conditions that are necessary for engagement, meaning, and/or accomplishment often make it harder to feel positive emotions)

3. a failure to account for the seemingly intractable problem of finitude

Expand full comment
Ppau's avatar

I came to the comments for your first point and I agree with the other two

About your last point, I would add that according to Scott Aaronson's model of moral worth, which is appealing to me, no system that can be "rolled back" to a previous state can matter morally

https://scottaaronson.blog/?p=2756

Expand full comment
C. Connor Syrewicz's avatar

That was an interesting essay; thanks for sharing it.

In the essay you shared, Aaronson seems to address morality only briefly at the end. I’d be curious to hear more of his views about moral worth, though I have to admit to being a very strongly committed moral anti-realist/ nihilist: in other words, I see morality as a psychological phenomenon that has no existence beyond the psychological mechanisms that produce our moral perceptions and intuitions.

That’s not to say that Aaronson is a moral realist—I don’t know his stance on that—but it is to say that I’m extremely skeptical of bringing morality to bear on ontological and empirical questions. Even more so, I’m profoundly skeptical of evaluating ontological and empirical claims based on their moral implications.

I have a lot more to say about this and about Aaronson’s arguments, but time is short. Maybe I’ll return to this and write something more detailed, but for now, it should suffice to say that I’m grateful you shared Aaronson’s work with me and am still very curious to learn more about his thinking about morality :]

Expand full comment
Ppau's avatar

Glad you liked it!

Sorry I should have been more specific about the relevant part

I think I read more on his blog but I can't find it right now, except the last few points from this:

https://scottaaronson.blog/?p=7784

There's also this recent podcast (only partly about the subject):

https://scottaaronson.blog/?p=8336

I didn't read this paper but it seems related:

https://www.scottaaronson.com/papers/giqtm3.pdf

I'm not sure what my position is on moral realism, I suspect the question is ill-formed

And I don't think that Aaronson's theory would solve everything but it sort of feels right to me

Expand full comment
BDW's avatar

You think Napoleon had an IQ of just 135?

Expand full comment
Victualis's avatar

If it had been higher, I expect there would exist a whole series of books about statecraft, revolutionary philosophy, ballistics, logistics, and correspondence with leading thinkers of the period. Napoleon was highly successful in a limited sphere at a time when polymaths were laying the foundations of the modern world.

Expand full comment
Victualis's avatar

Oh, and maybe the French would not have lost much of their army in an attempt at eastward expansion.

Expand full comment
anomie's avatar

Man, why are futurists so damn unimaginative? If there are still human intelligences with agency, if there are still individuals, you have NOT solved every problem. This is why even a fully benevolent AGI will end up being an existential risk to humanity; letting humans have agency over their own existence is obviously not in their best interests. Those idiots that care about "cheating" need to be saved from themselves.

Expand full comment
Deiseach's avatar

Nah, they're pets of the machine minds (see the Culture novels). They have as much agency as their little heads can handle, the same way you let your cat or dog or other pet run around doing its thing, up to the point where it puts itself into harm or gets annoying. Humans can run around playing but Big Brother/Father/Nanny/God AI will step in if it looks like tears before bedtime.

Expand full comment
Bugmaster's avatar

Agreed, this was my impression of the Culture novels as well.

Expand full comment
Ppau's avatar

Yes

You should also avoid trying to satisfy human "preferences" because a lot of them boil down to "I prefer that other people's preferences be less satisfied than mine, and that this privilege be justified"

Expand full comment
Deiseach's avatar

"Woke Napoleon where every third Frenchman is a person of color!"

Could have been reality:

https://en.wikipedia.org/wiki/Thomas-Alexandre_Dumas#/media/File:Thomas_Alexandre_Dumas_-_Olivier_Pichat.jpg

"Thomas-Alexandre Dumas Davy de la Pailleterie, known as Thomas-Alexandre Dumas; 25 March 1762 – 26 February 1806) was a French general, from the French colony of Saint-Domingue, in Revolutionary France."

Yes, he was the father of *that* Alexandre Dumas.

https://en.wikipedia.org/wiki/Jean-Baptiste_Belley#/media/File:Anne-Louis_Girodet_De_Roucy-Trioson_-_Portrait_of_J._B._Belley,_Deputy_for_Saint-Domingue_-_WGA09508.jpg

"Jean-Baptiste Belley (c. July 1746 – 6 August 1805) was a Saint Dominican and French politician. A native of Senegal and formerly enslaved in the colony of Saint-Domingue, in the French West Indies, he was an elected member of the Estates General, the National Convention, and the Council of Five Hundred during the French First Republic. He was also known as Mars."

Expand full comment
Eremolalos's avatar

I had a coincidence-enhanced experience of reading this review: In the novel I’m writing about a future world where we have ASI, there’s a character who has suffered permanent brain damage from an ill-conceived experiment involving a mind meld with the ASI. He is stuck in an unendurable state of depersonalization and derealization. There are 2 drugs he’s choosing between to escape his mental torture, ICC0 and ICC+, shorthand for I Can’t Care and I Cum Constantly. I haven’t made up my mind which he’ll pick. Anyone want to weigh in?

Expand full comment
Yug Gnirob's avatar

I feel like perpetual body leakage is not actually going to be helpful in the long term.

Expand full comment
Victualis's avatar

ICC+ is likely to lead to an adjustment to the new normal and doesn't seem to lead anywhere useful in the long term. ICC0 probably has some horrible long term consequences but without having tried very hard to model those it seems more likely to be a good long term solution.

Expand full comment
Rana Dexsin's avatar

Why not both?

Expand full comment
Vittu Perkele's avatar

The conceptions of utopia presented in this article fail to realize one crucial fact: that once the technology is there, we will not only be able to instantiate arbitrary levels of pleasure in already existing minds, but that we will also be able to instantiate arbitrarily many new minds to experience arbitrary levels of pleasure. Since there is so much more matter in the universe not currently composing conscious minds than there is composing conscious minds, the new minds that would be able to be created from previously unconscious matter would completely dwarf the quantity of minds already in existence.

If we consider that pleasure in itself is the good to be maximized (which I believe it is because it is the only state that can be directly experienced as good in itself), then it is not just pleasure in already existing minds that demands to be maximized, but pleasure in general, no matter how it comes about. This implies that the ultimate good is to construct as many new minds as possible engineered to feel maximum, perpetual pleasure without boredom (since boredom is an emotion that they can simply be engineered not to have).

Whatever the pre-existing human or post-human minds decide to do to fill their time in utopia is largely immaterial, as they would become a vanishingly small proportion of all the minds in existence as new minds are created from the matter composing all the rest of the universe. Perhaps these remaining minds can engage in more “existential” pursuits as a sort of hedge in the unlikely case that there is more meaning to existence than absolute happiness maximization. But the important thing is that as many new minds as possible are instantiated in a perpetual state of as much pleasure as possible. This is the ultimate goal that any self-professed utopia should strive towards.

Expand full comment
Victualis's avatar

I disdain your utopia, for it necessitates turning inefficiently engineered beings (like ants, trees, or humans) into raw material for a greater number of efficiently engineered sentiences. Ptooey to your indirect attack on my selfhood and autonomy and continued existence.

Expand full comment
RB's avatar

So the Utopia is dreams on command, and you never wake up. Or you sort of wake up (not sure what the algorithm is for when that happens), but you just dial up another dream and go back to sleep. Personally, I would dial up that euphoric feeling I got when I got my first shot of Demerol and request that it never end. Eternal bliss. All problems solved

Expand full comment
Hock's avatar

> The obvious next question is whether all of this has already happened. I prefer a different formulation of the simulation argument, but I won’t deny this one has its charms

It's comforting in a way to know that I might be in a Deep Utopia right now lol

Expand full comment
Hock's avatar

“We solve AI, then use AI to solve everything else.” Maybe we could extend this mindset to the questions in this article. I mean, LLMs can already wax poetic about any philosophical topic you give them…

Expand full comment
Ben's avatar

Surely in this future we'll be able to tweak our brains to simply not desire endless novelty. We currently experience hedonic adaptation, which makes this utopia seem shallow and meaningless, but we can just remove that.

Expand full comment
Woolery's avatar

Classical Buddhists have been at this for some time.

Expand full comment
Elle Griffin's avatar

Totally.

Expand full comment
justfor thispost's avatar

Struggling with meaning in a meaningless universe? Sounds like a Skill Issue to me.

Simply create meaning from the void, loser.

For real though, I have no meaningful input on this one as whatever flavor of neuro-whatever I am has failed to provide me with enough of a status drive or need for companionship to even appreciate a book like this.

I will simply nod my head at y'all's takes like I know what you are talking about.

Expand full comment
Joel Long's avatar

I'm glad atheists also have their own version of Christians realizing that most definitions of heaven actually don't sound like a pleasant place for an eternity.

Expand full comment
User's avatar
Comment deleted
Oct 17
Comment deleted
Expand full comment
Melvin's avatar

I worry that it would be a toy that's amazing for the first few weeks and then gets dull. A bit like when AI image generators came out, and we were all amazed by their capabilities, and we played with them for a while, and then we got bored generating impressionist paintings of cats in top hats and moved on.

Expand full comment
Vittu Perkele's avatar

I, for one, would welcome an eternity of experiencing the Beatific Vision, and my model of an optimal atheistic reality is one in which the maximum number of possible minds are constantly experiencing something of similar intensity and character.

Expand full comment
Eremolalos's avatar

Two points about the human need for challenges and for work: Playing, as a kid, was the most enjoyable experience I’ve ever had. But a great deal of play revolves around pretend work, and pretend striving for achievement: When I was a preschooler one of my favorite pretends was pulling berries off the bushes and putting them in the area of brush that was my pretend house. I was pretending I was saving up food for the winter. And later I played catch-the-bad-guys, with other kids, and build-a-rocket, and also games of skill, where we all craved to win. And I looked forward to the future when I’d be able to do things of this sort for real — as though I, as a playing child, was in Plato’s cave and only seeing the shadows of the challenges and achievements to come. I was wildly eager for the real deal. It seems to me that we are built to take great satisfaction in working to reach a goal.

I get that in Futureland we could elect to be wireheaded to experience ourselves as working very hard to reach a particularly glorious goal, and succeeding. But what about the chooser: the self that considers the different wireheading options? It seems to me that that self would feel a terrible demoralization and emptiness. I think I might pick the kill-me-now option.

Expand full comment
Eledex's avatar

Spoilers for Worth the Candle

The end of WTC by Alexander Wales has a pretty well-thought-out fictional deep utopia.

Expand full comment
Argos's avatar

What reason is there to think that Deep Utopia, as described by Bostrom, is even physically possible?

Expand full comment
Shlomo's avatar

As per Gödel's theorem, math will never be fully solved, so we can always keep doing math.

Expand full comment
Woolery's avatar

Since no one can agree on what a super utopia looks like, but many agree that our best chance to get there is with increased intelligence and access to as much pertinent information as possible, should we not, considering our relative ignorance, exercise patience in predetermining the utopian endpoint, and focus instead on dialing up our intellectual capacity and understanding of the universe?

It’s like a young idiot deciding to lock in his future financial strategy, spouse choice, political affiliation, etc. in the days leading up to brain enhancement surgery rather than after.

Expand full comment
Snakesnakeseveralsnakes's avatar

A wise man once said: “Happiness is a state of continually solving one's problems…. Unhappiness is caused by being chronically baulked in one's attempts to do that.” A world without problems would be deeply unhappy. But that would itself be a problem that people could solve.

Expand full comment
Donald's avatar

> What if, after we all have IQ one billion, we can just figure out which religion is true? If it’s atheism,

About 120 is usually enough. And yeah, it's atheism.

Expand full comment
Bob Frank's avatar

"A little philosophy inclineth man's mind to atheism; but depth in philosophy bringeth men's minds about to religion."

— Francis Bacon

Expand full comment
Donald's avatar

One dubious quote does not reality make.

It's quite possible that the quote was intended for political cover as opposed to what Bacon actually believed.

It's also plausible that god really was a sensible hypothesis, given the state of evidence at the time. (Or at least, the evidence was weak enough that it would take some fairly fancy rationality skills, and no one had really figured out the rules of rationality.)

Expand full comment
Bob Frank's avatar

Here's the full source: https://www.bartleby.com/lit-hub/reference/of-atheism/

You tell me if this feels like someone who doesn't actually believe it.

Expand full comment
Donald's avatar

I really don't know.

Bacon, and the people he was potentially trying to fool, both had a lot more context about what sounded like a convincing argument in the time period. Bacon was a skilled author and familiar with the pro-god arguments of the day.

Anything convincing enough to make Bacon sound like a true believer to the people of the day will also sound equally convincing to me.

Expand full comment
Bob Frank's avatar

You... don't seem to know anything about Francis Bacon.

He was, by all accounts, a devout Christian and a brilliant scientist, considered the father of the Scientific Method today. Like so many of his intellectual descendants, he believed that the purpose of scientific investigation was to understand what God was doing when he created the world. (As Kepler so prosaically put it, "to think God's thoughts after him.")

Expand full comment
Donald's avatar

Perhaps.

If you're missing the idea of evolution and are comparing the hypotheses of god and randomness as explanations for life, then god seems a plausible hypothesis compared to randomness.

Atheism was a lot less obvious with the evidence available at the time, compared to today's evidence.

Expand full comment
Arrk Mindmaster's avatar

I feel it's missing something, like "and even deeper philosophy showeth man the truth of atheism, yet further depth leads them to religion, and..."

Expand full comment
Mo Nastri's avatar

Feeling a bit contrarian today, so I'll drop these links just in case you want to kill time:

https://www.lesswrong.com/posts/n3Q7F3v6wBLsioqt8/extended-interview-with-zhukeepa-on-religion

"Zhukeepa is a LessWronger who I respect and whose views I'm interested in. In 2018 he wrote the first broadly successful explication of Paul Christiano's research ideas for AI alignment, has spent a lot of time interviewing people in AI about their perspectives, and written some more about neuroscience and agent foundations research. He came first in the 2012 US Math Olympiad, and formerly worked on a startup called AlphaSheets that raised many millions of dollars and then got acquihired by Google.

He has also gone around saying (in my opinion) pretty silly-sounding things like he believes in his steelman of the Second Coming of Christ. He also extols the virtues of various psychedelics, and has done a lot of circling and meditation. As a person who thinks most religions are pretty bad for the world and would like to see them die, and thinks many people trick themselves into false insights with spiritual and psychological practices like those Alex has explored, I was interested in knowing what this meant to him and why he was interested in it, and get a better sense of whether there's any value here or just distraction.

So we sat down for four 2-hour conversations over the course of four weeks, either written or transcribed, and have published them here as an extended LessWrong dialogue."

And if you want to hear from Alex (zhukeepa) himself: https://www.lesswrong.com/posts/X2og6RReKD47vseK8/how-i-started-believing-religion-might-actually-matter-for

Expand full comment
Bob Frank's avatar

Interesting stuff, but still fundamentally mistaken. While recognizing its benefits, Alex is trying to explain away religion from a materialist context, and that's simply incorrect. The sense of enlightenment he's describing having experienced is interesting, and no doubt valuable in its own way, but it's not the benefit that faith offers to the faithful.

Do you remember, a few weeks back, the ACX book review about the paralyzed guy? It went into a very interesting tangent at one point about how people who are born with a disability have never grown the neurological pathways related to the faculties they're unable to use, so they literally do not know what they're missing, whereas the experience is very, very different (and far less optimistic) for people who were born able-bodied but then lost the ability to use some part of their body.

Genuine spiritual experiences are kind of like this, but in reverse. When you experience the Spirit of God touching on your life, on your being, it's like stepping out of Plato's cave. You become aware of things that the people still inside the cave have no basis to understand without experiencing it themselves.

(As I write this, it suddenly occurred to me: Perhaps this is why Philip didn't try to explain his experience with the Messiah to the skeptical Nathanael, instead simply telling him "come and see." (John 1: 45-46) What else could he have done?)

Expand full comment
The Solar Princess's avatar

My first thought upon hearing the thought experiment was "when we achieve this we basically win the game, no need to play it any further, we can just self-destruct".

I'm assuming this would not be a popular position, but I sort of just fundamentally never understood the preferential grounding for the idea of a super-utopia. What do you _want_ and why do you want it? What do you value about life at all, that you want to perpetuate? By what exact measure do you evaluate how much your current position is better or worse than nonexistence?

Expand full comment
Donald's avatar

Humans will pretty easily assign meaning to things that have no meaning whatsoever.

Look at computer games. Within those little virtual worlds, we can do almost anything already. And while limited in various ways, they are pretty fun. And generally they aren't realistic historical simulations.

And excitement without danger is easy. Modern roller coasters are exciting, and if well maintained the risk is very low.

Expand full comment
Auros's avatar

The Talos Principle II canonically ends in the achievement of something like Deep Utopia, and its DLC chapter "Isle of the Blessed" starts to wrestle with the question of what comes next. It takes place at a kind of art exhibition, titled "Freedom From Necessity", and there is a good deal of discussion of what a society does when nobody _needs_ to do anything in particular, because issues of "need" have been solved. (I highly, highly recommend both the original game, and the sequel, and all of the DLC for both. I'd say play Isle of the Blessed last -- technically it's the second of the three DLC chapters, but play Into the Abyss before it. It's more narratively satisfying that way.)

Expand full comment
Nick's avatar

Thank you for letting me know Talos 2 has DLC! I loved that game; it made me feel really optimistic about the future.

Expand full comment
W.P. McNeill's avatar

“We finally made it!” he cried out in the Super Utopia over and over again.

Expand full comment
Matthew Talamini's avatar

Letters don't mean anything to themselves. A machine's purpose isn't anything it does to itself. A law isn't important to itself. Meaning, purpose and importance all have reference to the relation of the thing with something outside itself. Meaning is when an outside observer understands something; purpose is when an outside observer finds something useful; importance is when an outside observer values something.

So the only way not to cheat at meaning, purpose and importance is in relationship to someone who judges you. When we talk about these things in an everyday way, we always know the outside observer the meaning, purpose or importance is addressed to. "I don't know what you mean." "What's the purpose of this component?" "Is it important to have a retaining wall here?"

But when we talk about the meaning, purpose or importance of a life, we're talking about it with reference to ultimate reality. (This is why Wittgenstein considered such talk meaningless; he didn't believe in an ultimate reality for anything like ultimate meaning to exist in reference to.) People (not successful Buddhists, but most people) want their lives to have meaning. Which means we want to be understood by the people around us, but we also want to be understandable in a deep and ultimate sense. We want to be understandable without reference to some particular person, who might misunderstand us. When we talk about a life having purpose, it means that we want to be considered useful to the people around us, but also in a deep and ultimate sense. We want to be useful without reference to some particular person, who might misuse us. When we talk about life having importance, we mean we want to be valuable to the people around us, but we also want to be valuable in a deep and ultimate sense. We want to be valuable without reference to some particular person, who might devalue us. We want to have an absolute meaning, purpose and importance.

You can argue about semantics, but if you forbade the word "meaning", people would find some other way to talk about the universal desire to be deeply understood by ultimate reality, without reference to contingent temporal human beings, who will probably misunderstand.

So everything non-religious is cheating; because the desire for meaning, purpose and importance is the fundamental religious impulse.

Expand full comment
Matthew Talamini's avatar

My MFA thesis novel (unpublished, probably nobody will ever read it) grappled with the problem of a deep utopia. I decided the residents' biggest problem was loneliness. It was impossible to interact with their friends, because when two people are in the same space, and one wants something slightly different (I don't like this rug, make it green) but the other wants the rug red, the superintelligent AI nanobot genies can definitely make that happen. They just instantly, invisibly whisk one person into a different room, with a different rug and a perfect simulacrum of their friend; and then replace them in the original room with a simulacrum to entertain the other. "It is done, master."

Whenever two people's desires conflict (which is most of the time; just think of how people fight over the thermostat!) the only solution for the AI demigods is to separate them. Then they can both have what they want; but they can't have each other, except in simulation. Which gets boring.

So their big problem is figuring out how to have parties that are real parties, with real people in them.

Expand full comment
Woolery's avatar

If people in the deep utopia are like people today, and can’t be changed, I agree and also don’t see how you avoid at least presenting the option for everybody on earth to have a Private Lobby.

But I think you can approach a deep utopia two fundamental ways: Either 1) change existing circumstances to suit human preferences or 2) change human preferences to suit existing circumstances. I could imagine biological engineering that resulted in the perfect alignment of individual objectives/desires in such a way as to lead to a sense of incredible fulfillment for everyone, without the need to change the world around us much at all. Of course getting to that point could mean undertaking an incredibly unsettling journey.

Expand full comment
Matthew Talamini's avatar

Well, I assumed that anybody who was willing to change their own internal preferences would, given everlasting life, eventually slide down the slippery slope to total wireheading. So in the world of the novel, those people exist, hidden away in their vegetative state of perfect bliss. They're invisible to the rest of humanity, because they don't do anything that anybody could interact with, or that I could write about. So the action of the novel follows characters who have refused any kind of wireheading at all.

Expand full comment
Anonymous Dude's avatar

Supposedly Trithemius' Steganographia contained secrets for communicating with angels, but the magic words turned out, when decrypted *four centuries later*, to be encrypted stuff about cryptography--and ciphertexts like "The bearer of this letter is a rogue and a thief. Guard yourself against him. He wants to do something to you."

Money quote: "It's the kind of idea that a computer nerdy sort of person would have nowadays."

Here's what seems to be a copy of the NYT article on it, from 1998:

https://cryptome.org/jya/tri-crack.htm

So Bostrom might have dropped a few nonsense things in there too.

Expand full comment
Negentrope's avatar

Minor pedantic point: Tour de France riders don't use steroids. They dope with EPO or blood.

Expand full comment
Larry Stevens's avatar

On the one hand, beer, video games, and porn. On the other, sports and love.

Expand full comment
Melvin's avatar

Sounds like a solid weekend plan but I'm not sure what you're saying.

Expand full comment
Larry Stevens's avatar

Pessimistic and optimistic views of the post-work world.

Expand full comment
Melvin's avatar

Yeah but which is which?

Expand full comment
Brandon Fishback's avatar

I have strong reservations about Bostrom's ethics and his conception of the future. A couple years ago, he wrote a paper "Sharing the World with Digital Minds" where he takes the idea of the utility monster (a being that could experience much greater levels of happiness than humans) and applies it to a theoretical artificial intelligence capable of pleasure. Then he goes on to say that since a machine could theoretically have much greater total happiness than humans (even just by sheer numbers of copies), we should dedicate most of our resources to them.

Of course, he assumes that we would be so wealthy from their productivity that there would be more than enough to go around, but there is a big problem for us. If you are a utilitarian, you don't show favoritism towards one group or another. You only count total happiness produced. At that point, what's the point of even keeping humans around? There is a strong influence of utilitarians on AI development right now; if they develop a utilitarian superintelligence, we won't experience a deep utopia. We'll all be dead as the universe gets converted to hedonium.

Link to paper (it's fairly short):

https://nickbostrom.com/papers/digital-minds.pdf

Expand full comment
Arrk Mindmaster's avatar

Total happiness isn't necessarily a good thing.

https://www.smbc-comics.com/comic/2012-04-03

To summarize, one individual is the happiest human, and making him happier produces more total happiness than making others happy.

What matters to people is their own happiness, which can be, and usually is, related to other people's happiness.

Expand full comment
Brandon Fishback's avatar

Yeah, that's just the utility monster. I thought it was pretty clear that I'm arguing against utilitarianism.

Expand full comment
Noah Siegel's avatar

FWIW, the answer to 5a on the theology engineering exam is 0. Judaism doesn't hold non-Jews accountable for eating pork. They only have to adhere to the 7 Noahide laws, so as long as the pork wasn't torn from the flesh of a living pig, Stan has committed no sin.

Expand full comment
Melvin's avatar

So what's the advantage of being Jewish then, according to Judaism?

Expand full comment
Downzorz's avatar

"For example, Bostrom thinks a deep utopia would still have sports available as a distraction / source of meaning. I’m not so sure. Consider weight-lifting..."

Weightlifting isn't the right comparison here, it's video games. Even in the limit of virtual worlds constructed for uploaded posthumans, I suspect competitive StarCraft 18 or something would be popular.

Expand full comment
Bob Frank's avatar

> Imagine getting in some kind of VR sim of the 19th century US, where you forget all of your modern knowledge but still have your same personality. Wouldn’t you hope that you independently realized that abolitionism was morally correct and spent at least some of your time advocating for it?

No. Honestly, getting this one right would likely require keeping my modern knowledge, but if I were in this scenario and in some position of influence, I'd try to make sure we did *not* go down the same route of abolition that we actually took.

I promise this actually makes good moral sense.

As abolitionist sentiment swept the world in the mid-19th century, there were only two nations that resolved the issue of slavery by "cold-turkey" abolition: the USA and Russia. In both cases, it caused a near-immediate societal calamity whose scars the nation in question is still dealing with to this day. It would have been far better — if less rewarding to the base impulses that desire immediate gratification, both for the slaves and for the abolitionists — to pursue a more gradual process, with societal status coming by degrees over time to former slaves, hand-in-hand with cultural integration, as other nations did, thereby managing to avoid the catastrophic problems that the USA and Russia faced.

Expand full comment
Phil H's avatar

No, this is very bad. The problem is that your knowledge is not good enough. You're not Hari Seldon. You may believe that on a utilitarian metric your particular historical path would be better. But that knowledge is not secure enough that you should risk very great moral horrors being done just so that your projection can be followed.

The boring truth is that doing the right thing is good.

Incidentally, also, abolition is/was not a "base impulse."

Expand full comment
Bob Frank's avatar

> Incidentally, also, abolition is/was not a "base impulse."

I didn't say that it was; I said that the desire for instant gratification is a base impulse. The problem wasn't doing away with slavery; it was doing it all at once and turning millions of un-integrated people with no experience at being anything but slaves into full citizens instantly, entirely unprepared for the ramifications. This caused immense harm to the former slaves and continues to cause problems for their descendants to this very day.

Expand full comment
Phil H's avatar

OK, now your error has doubled.

(1) Abolitionists were not doing it for "gratification".

(2) It is not the case in general that the desire to do things quickly is base.

I understand the model you're using, the kind of "wisdom" we dole out to young people, telling them that they shouldn't rush; or telling drug addicts that their behaviour is wrong because they are obtaining pleasure quickly, when they should obtain their pleasure slowly like the rest of us. But this is not in fact a truth about the world. In fact, there are lots of things that are better when you do them quickly - like emergency medical care, or eating food when very hungry. There are also lots of things that happen on timescales that are not commensurable with human notions of rushing - like lightning strikes, computer calculations, or ice ages.

Homespun wisdom about how it's always better to wait would not constitute a good reason to fail to grant a human being their full rights as a citizen.

Expand full comment
Bob Frank's avatar

> I understand the model you're using, ... telling drug addicts that their behaviour is wrong because they are obtaining pleasure quickly, when they should obtain their pleasure slowly like the rest of us.

No, you don't understand at all. We tell people that drug abuse is wrong not because the pleasure is fast, but because it's not grounded in anything real, while the harm it does, both to them and to those they interact with, is very real and very devastating.

> there are lots of things that are better when you do them quickly - like emergency medical care

In an extremely limited context, yes, to deal with the urgent circumstances. Try to give someone too much care too quickly, though, and it can kill them.

> or eating food when very hungry.

Eating quickly is literally one of the worst things a starving person can possibly do. Again, it can kill them. (It's called "refeeding syndrome," and professionals who might deal with helping people suffering from starvation have to be specially trained to prevent their charges from eating too much too quickly, because of how counterintuitive this is.)

Expand full comment
Melvin's avatar

That doesn't sound like you're opposed to abolitionism, just the specific approach that the US took.

It's not hard to improve on the specific approach that the US took; anything that avoids a bloody (in both senses) civil war would be fantastic by comparison.

Expand full comment
Maynard Handley's avatar

This seems to miss the single most obvious and important point - how, in this so-called Utopia, do I get to control others?

You think this is a joke? Consider that in our current utopia, most of the misery around us (i.e. "our" class of people) is caused by "someone being wrong on the Internet" - you're unhappy because I'm not Woke, and I'm unhappy because you're an idiot…

This is not frivolous - it gets at the point that most of our feelings are determined by our relationships and how others think, behave, feel. The optimum for what I want may well be incompatible with what you want.

As for books, yes, The Culture, sure. But instead I'd suggest Ada Palmer's Terra Ignota tetralogy, which kinda matches the point I made above — people living in what could be a paradise, except that they come to disagree about one specific principle, in a way that can't be fitted into the "agree to disagree" framework that got them to this utopia; and so it all explodes in an orgy of violence.

Expand full comment
MarkS's avatar

Scott, which formulation of the simulation argument is it that you prefer? Bostrom's classic formulation?

Expand full comment
a real dog's avatar

> only one mention of Permutation City in all comments

Come on guys, this is table stakes. Especially for you Scott!

As a gamer and tabletop RPG enjoyer, I feel pretty secure in answering this - we'll create our own constraints and we'll have a blast doing it, forever.

Consider MMO culture. In WoW, it is unthinkable to approach a dungeon without reading/watching all the boss guides, having a character build optimized to the last 0.5% of DPS by a cabal of autistic savants, and configuring four addons to make all the mechanics obvious enough that one could train a chimpanzee to pilot your character.

In FFXIV, apparently casual guilds engage in the madness known as "blind progression", where they repeatedly facecheck bosses until they figure the fight out by trial and error. I'm not much of an MMO enjoyer, but I know which of these appeals to me. The Everest comparison is a microcosm of this, but it happens in everything humans do for fun.

My personal experiment is nearly 400 hours sunk into Grim Dawn, an action RPG with a somewhat slower and more cerebral pace than the genre's standard, where I have led several dozen characters to their deaths in hardcore mode - if they die, they die, and I lose many hours of progress, though the treasures they found will help their successors. I could play it softcore, but that takes all the fun out of it! The risk of losing a character forces me to engage with all the game's mechanical depth instead of blindly yoloing into certain death. Comparisons to post-human fun are left as an exercise for the reader.

Expand full comment
MicaiahC's avatar

Okay to defend blind prog in xiv for a bit:

1. You have to explicitly opt into it. Either you discuss it with the people you are raiding with or it's displayed on your party finder request, since obviously if some people know the mechanics, discussing it with the blind people would make them no longer blind.

2. The point of blind prog is so that you solve boss mechanics on your own. Some fights explicitly are puzzle fights, where (to make up an example) you encounter unsurvivable acid rain 8 minutes into the fight, but at 7 minutes you have the earth and water element buffs available, so you combine them to make a tree that will shield you from the rain. The puzzle aspect is entirely lost if you read guides. Oftentimes these puzzle elements involve either common sense or lore, so you feel rewarded for solving them. After you wipe to a mechanic, you always try and solve it together with your team, not repull and hope everything goes fine. You might need 2-3 more pulls to hammer out details of your strategy or to figure out what variations of the mechanic you can see, but you are not just face checking. There are also a minority of genuinely interesting strategy considerations, where a tank may expend way more defensive resources than necessary to weather a tank buster, so they can save their invuln for the progression point.

3. FFXIV has certain extremely cinematic set pieces that only happen in the hard versions of the raid, and that obviously get spoiled by guides. There's something really special about making it deep into a fight: you've seen every mechanic from the easy mode version of the fight, then at 50% HP the boss fuses with another boss amid blinding lens flares, and it's very exciting, both because of the spectacle and because now you're thinking of all the new possibilities there could be for fight mechanics.

4. In practice I don't think there's much difference in time spent on the fight that's not on direct puzzle solving or strategy formation. If you're not hardcore you're likely going to wipe to a mechanic the first couple of times you get there anyway, due to the knowledge/execution gap, and if you are hardcore, you're going to solve the mechanics very quickly as you understand the language of raid mechanics. There is definitely a band of "competent at execution, but not great at learning" skill level where this is very false, but those people aren't going to be joining in on blind prog anyway.

One note that I'll have, if you haven't played xiv, is that gearing does not make that much difference at clearing success. Boss and party damage are basically never the limiting factors on clearing, but someone having their attention slip for half a second during a frantic mechanic would be. In the context of WoW (based on hearsay), where getting the first clear earlier has dramatically snowballing effects on later clears from gear, guides make perfect sense, because you want your first clear to be like 5% of your total time on the fight. In the context of XIV, where getting to the end of the fight is the reward in and of itself, and you're mostly not reclearing as a casual group, 80% of your total time will be on the first clear (even if you are relearning for gear), so it makes sense to trade off time spent for excitement.

Expand full comment
anomie's avatar

> One note that I'll have, if you haven't played xiv, is that gearing does not make that much difference at clearing success.

Okay that's just wrong, at least for savage. Most raids are generally lenient enough with their mechanics that you can make a LOT of mistakes and still make it to enrage, and the most punishing mechanics are generally placed at the very start of the fight to ensure that they're easy to practice. But even if you are able to recover, you're still going to get screwed by enrage if you die too many times because of all the dps you lose. Better gear means you can afford to make more mistakes while still having enough dps to clear (and also afford to mess up your rotations as well) while also making life a lot easier for your healer and tank because people won't just die in one or two hits. There's a world of difference between having gear from the previous patch and having the time-gated tome gear from the current patch.

Expand full comment
MicaiahC's avatar

I was talking to a person who played WoW, and has WoW as their primary point of reference, where gearing up can shorten a fight by as much as 20-30%.

Obviously WoW isn't identical to FFXIV, but if we're talking the Part 1 door boss of P8S, and we naively applied WoW fight shortening to it, you're going to skip out entirely on the second Hippokampos / Gorgon Reforged reflection phase! Not just the second of the two mechanics, BOTH of them. I think you'd have to be unsynced and 10 levels above to have that level of shortening. I dunno which tiers you're raiding, but I can repeat this analysis for a bunch of fights, and I bet you'd be skipping at least one extremely hard mechanic with that degree of increase in damage.

And if we're talking recovery situations, WoW can have several people just entirely sit out the raid if several people are geared, granted WoW doesn't have the ridiculously promiscuous XIV rezzes. I don't think there are any fights on tier where this would be anywhere close to possible for a party that isn't parsing at the 90th or 99th percentile.

Obviously you're right that gear can turn marginal enrage losses to marginal enrage wins with deaths, but the sheer amount of body checks as well as the punishing decrease to damage means that the band of skill needed for passing is pretty narrow (even if it will decrease the time needed to clear exponentially, just because of how skill interacts with completion time generally). Like yeah, if you have a person who's failing a particular mechanic 80% of the time, you can turn that into a win, but the marginal intervention that's most effective is still increasing success at the mechanic, rather than waiting several weeks for everyone to get tomestone gear.

This is not to say skill doesn't matter in WoW, since the skill ceiling for everything is much higher there, but gear makes up for a much wider band of skill shortfall in WoW than in XIV.

Expand full comment
a real dog's avatar

You don't have to convince me, this all sounds fantastic! I'd totally play FFXIV if not for my distaste for linear quest-driven MMOs, I'm much more of a sandbox person. I brought it up because it's a really good example of people making their own fun, when they could instead minmax it and have a worse time.

Expand full comment
Kalimac's avatar

I lost the thread here at the beginning of section 2. I remember a story in an old "National Lampoon" in which scientists invent what essentially could have been described as the holodeck from TNG if this hadn't been published before TNG was on the air. They start eagerly describing all the wonderful applications for this technology, along the lines of what Scott calls "good wireheading." The venture capitalist who's funding them shakes his head. "Those are all very nice," he says, "but they're not where the money is. That'll be porn."

Expand full comment
Anon's avatar

Why would metagames get any less interesting in deep utopia? If you have games where there are other goals that coexist with winning, such as aesthetic goals, and you do not know your opponent's strategy or goals, that is going to continue to be interesting no matter how much intelligence or public data you have. For example, I do not see why the appeal of Magic: The Gathering (or a slightly more complex variant) would go away in deep utopia.

If meaning requires accomplishment, there are plenty of activities today (like Lego modeling kits) that only test your patience and not your skill, and do not extend your knowledge, that people still find meaningful and satisfying. Plus you get to see all the insane stuff that people manage to choose to accomplish with their near-limitless power. Even if the power is near-limitless, it is not limitless, and people would still compete to push its boundaries. This could be finding tricks to control ever more unstable chaotic systems, or something as simple as building the tallest tower.

Sorry for the typos - my tablet is typing at 1 chr/s and editing is not worthwhile

Expand full comment
Andy Jones's avatar

Point of minor interest - Feodor the Fox is likely named after Nikolai Fyodorov, a 19th-century precursor to transhumanism:

https://en.wikipedia.org/wiki/Nikolai_Fyodorov_(philosopher)

> Fyodorov gave science a place next to art and religion in the Common Task of uniting humanity, including the dead, who must in the future be reunited with the living. He held that "we can become immortal and godlike through rational efforts and that our moral obligation is to create a heaven to be shared by all who ever lived."

I only know this because _The Quantum Thief_ scifi series has a whole Fedorovist movement dedicated to uploading.

Expand full comment
MM's avatar

"Compare to the fear that if intersex people are allowed to enter women’s sports today, cis women won’t be able to keep up."

The objection that people have is not to the very tiny number of intersex people entering sports as such.

The objection is to males, biological males, *who are not intersex* (at most they wear a dress and a wig, or perhaps they go on testosterone blockers for a short while), competing in the female division.

Generally because they previously competed in the male division (or rather the *open* division, because that's what it really is), and could not do well enough for their egos.

For every intersex person who caused a controversy, we now have a thousand males attempting this.

Expand full comment
anomie's avatar

> The objection that people have is not to the very tiny number of intersex people entering sports as such.

And why is that? They result in the exact same problem as trans people. We have women's sports so that biologically standard women have an actual meaningful chance to compete. We can't have intersex people compromising that, can we?

Expand full comment
MM's avatar

See my last sentence.

Intersex is one of the wedges that were used to allow males to compete in female divisions. Which division would be "fair" to them?

That "male" divisions are actually open to all competitors was not considered, or was ignored. Admittedly I doubt any intersex person actually has a chance at winning a world event in the open division, but that's true of almost every person on Earth anyway.

Expand full comment
anomie's avatar

And the question is, why wasn't there more public outcry when it was just intersex people causing problems? Just because they were few in number doesn't mean they weren't the ones getting results. Not to mention all the women with medically significant hormonal imbalances. But no, it only became a talking point for conservatives when transgender people got involved. These people don't give a shit about women's sports, they're just using this as an excuse to rail against trans people.

Expand full comment
MM's avatar
Oct 20 (edited)

I happened to hear about Caster Semenya while it was still happening, so I actually knew about exactly *one* intersex case in sports.

Now, I don't follow sports much so perhaps there were more.

Currently there are a lot of cases involving trans people every year, including actual major injuries sustained by women competing against them.

It is exactly the same problem that sports had when the USSR and the other communist countries were dominating sports and the Olympics specifically due to steroids. It's cheating.

I didn't agree with the Semenya decision. The "men's" division is the *open* division, and all are allowed. Of course if Semenya had competed in the open division, there would likely have been no medals to have a controversy over.

Expand full comment
Justin D's avatar

Intelligence ends up creating utopia. Utopia is boring so intelligence artificially limits itself, creating a new universe to experience, with struggle and strife. In this new universe, intelligence ends up creating utopia. Utopia is boring....

Expand full comment
RandomHandle's avatar

I don't believe humans (unless we evolve into creatures with different attitudes) can find meaning after we've already figured out and solved everything in the universe, unless we still have real struggles or conflicts. I imagine a deep utopia will always have some members fighting to tear down the utopia in order to return meaning, and the existence of this conflict will provide meaning to everyone else.

I also think this is purely academic, because the universe is far too vast and complex for us to ever fully figure out, even with billions and billions of years.

However, I wanted to point out something that I thought was obviously missing from Scott's point about sports: The existence of e-sports. If humans are limiting how much we hack our own intelligences, then e-sports would still have results that are non-trivially unknown.

Expand full comment
thewowzer's avatar

There are a couple typos in the quote about whether childhood is more meaningful:

End of paragraph 4 - "our (or) already in the ground."

End of paragraph 5 - "coulda ccount (could account) for that."

Expand full comment
Interrobang's avatar

There are other typos throughout as well. This article could use an edit.

Expand full comment
Doug S.'s avatar

If my life is a sim that exists in order to let a god experience being me, the god playing me must be a masochist. Then again, *I* also happen to be a gaming masochist that prefers games that aren't afraid to let me fail, so maybe my life actually makes perfect sense?

Expand full comment
John Greer's avatar

Time to watch Vanilla Sky

Expand full comment
Conor Murray's avatar

He mentions Peer from Permutation City, which has some of this fictional exploration. As does Diaspora, its successor. Also "The Egg" by Weir seems like a good local north star: integrate all that experience, then see what "we" think.

Expand full comment
Vakus Drake's avatar

I think you're correct that simulations will become the primary form of entertainment.

Plenty of people have brought up the Culture, but honestly I think it's pretty suboptimal and most of its humanlike minds miss out on so much.

In an actually perfect utopia I'd expect humanlike minds to first experience every kind of simulated adventure, with the NPCs in the simulations being a mix of mostly characters played by an AGI DM, and a few newly created minds (always created so they're happy, and glad to have been made).

Then once people can no longer be entertained by this (after probably a *very* long time) they would grow up more, so they can now appreciate totally new things. In the same way that you can already appreciate things as an adult which you couldn't as a young child.

So I think any outcome where the humans don't typically eventually grow up into being superintelligences themselves is a failed utopia.

Expand full comment
Peter Defeel's avatar

Humans are restricted by biology. Uploaded humans aren’t human.

Expand full comment
Vakus Drake's avatar

If somebody gradually replaced their neurons with nano machines (maintaining continuity of experience) would you consider them human at the end?

Of course you could go the biological enhancement route as well, it'd just be less efficient. So would you consider someone human if they have a meat brain the size of a building?

More importantly however, does any of this matter morally? Do you think enhancing your mind ought to make you less morally significant than baseline humans?

Expand full comment
michael michalchik's avatar

My utopia is converting all matter into paperclips.

Expand full comment
michael michalchik's avatar

I don't know if this is covered in the book, but this framing seems naive about what meaning is. It seems limited to pleasure, fulfillment of desire, and striving. There are also things like the satisfaction of accomplishment: I'm pretty damn happy and impressed that we managed to wipe out smallpox and go to the Moon, despite the fact that I didn't contribute anything to that. In the absolute Utopia, there will be an infinite amount to appreciate and be satisfied with. Additionally, I just like learning things, even if they're not useful, not novel, and not something somebody else doesn't already understand better.

Then there's also self-expression. Art or invention has meaning in the creation itself, whether or not somebody has done something better, whether or not there are other people to admire it. For example, I'm a pyrotechnics and science hobbyist, and I really like devising and performing demonstrations of different physical and chemical principles, even if I don't get outside validation for them, and even if I know or suspect that somebody else probably did it better at some point. Still, given the combinatorial explosion, it's highly likely that most pieces of creation are original in some way or another, and you could even try to compete against everything that's been done before. Existing in an alterable virtual simulation or a nanobot-permeated world doesn't mean you wouldn't like using it as a medium of expression and creation.

Then there's also exploration: there's lots of universe out there, and lots of synthetic universes that can be created as the implications of algorithms and axioms with vast, effectively infinite consequences. And then there's just socialization, connecting to other people, people you can't control, who have their own agency that's just as deep and valid as yours, and the satisfaction of both conflict and finding common ground.

What I wouldn't do, is wire head. And it has nothing to do with it being cheap or tawdry. I think having all your desires and needs brought to emotional completion is a form of annihilation. You've solved yourself for x and you're just a constant no matter what sort of thing you started out as. The dynamism comes from discovering who you are in the process of using the complete power of utopia to control what you experience.

Expand full comment
DeeDee's avatar

"Your success in weight-lifting seems like a pretty straightforward combination of your biology and your training. Weight-lifting retains its excitement because we don’t fully understand either. There’s still a chance that any random guy could turn out to have a hidden weight-lifting talent. Or that you could discover the perfect regimen that lets you make gains beyond what the rest of the world thinks possible."

For what it is worth, I think this is a total misreading of why people enjoy lifting weights! In the amateur gym community there is a general acceptance that you have physical limits and probably can't do much better than follow a decent program that someone else invented, tweaking it gradually over time to fit your idiosyncrasies. Lots of phrases get thrown around like "genetic potential", etc. I think for most people, the enjoyment comes from the same source as video games: the simple joy of numbers going up, the experience of just plain being more fit, and the enjoyment of reaching towards the best version of yourself that you can be. These would be unchanged in the world you describe, where we have developed the perfect training regimen, and we can identify exactly the right course of action at any given time. In fact, for most lifters, this would be an incredible gift. It'd make it a lot simpler and easier to get to the _good_ part.

Expand full comment
VNodosaurus's avatar

So apparently when discussing meaning in a solved universe, the book ignores two of the three obvious answers to what we'd do with immortality (the included one being games/sports)? That's... something. And inspires some concern about what kind of "utopia" Bostrom and his audience are trying to construct.

The second obvious activity is art - not just art appreciation but, you know, creation. (Meaning not just visual art, but also music, writing, filmmaking, various media that we haven't yet invented....) Like, writing an infinite book series seems like a natural answer to what I'd do with infinite time. For finite books, at some point you run into the Library of Babel problem, but then again you can always invent a new language with higher information density, or the equivalent for other media.... Instead it seems that Bostrom's vision is one where people are consuming art all the time, but all of that art is being *created* by super-intelligent and presumably non-sentient AI. That vision... explains a lot about the current AI industry and why we automated art before plumbing... but put plainly it's dystopian.

And thirdly, as other commenters here mentioned, Bostrom apparently ignores community. Even just friendly socialization. People of anything near similar intelligence are always going to be capable of surprising each other. And then you have the downsides as well, politics, interpersonal drama, and so on. Recently I've been reading a lot of Silmarillion fanfics, and there's quite a few that take place in post-canon Valinor & environs. It's a post-scarcity society, everyone there is fully immortal (until the ending of the world), and yet there's still plenty of room to tell stories. Because it's just human (and elvish) nature to have issues with each other sometimes; because no material abundance can stop forgiveness from being a scarce resource.... And putting everyone into experience pods that do not actually allow interacting with other people - well, that's even *worse* than the aforementioned vision of art.

And all of that aside, there's also the unanswered question of how you define a deep utopia. My view is that the answer is pretty straightforward: "the last enemy to be defeated is death". Compared to an infinite existence, any sort of finite suffering is really no big deal.... But even in an eternal universe (a big assumption!) you may well have physical limitations. If the universe is finite in space/energy, the question of how to divide it is unavoidable, and if it's infinite, then expansion is an eternal undertaking.

Expand full comment
medjed miao's avatar

you can tell bostrom is not familiar with theories of play because the fun (meaning) in a game is in the obstacles the player voluntarily takes on (this choice itself conditioned on the context the player exists in, 'outside' the game)

soccer would not be fun if you could carry the ball into the net, chess would not be fun if white could checkmate on turn 1, super mario would not be fun if mario spawned next to the flag, and so forth

'deep utopias' as he defines them likely can't exist, it is a linguistic construct (common in philosophy) like '1=2', you can see it, think about it, even apply formal operations to it, but there is no coherent path from the world as exists to the '1=2' world as fantasized

Expand full comment
User's avatar
Comment deleted
Oct 19
Comment deleted
Expand full comment
medjed miao's avatar

they can! my point was more a critique of wireheading and what a 'meaningful life' looks like and what is the line between 'cheating' and 'authentic'

wireheading yourself to climb everest for the first time over and over again would not be _fun_ (at least, for the you who is deciding to simulate everest, not the you that's wireheaded), and bostrom has failed to show he grasps this, or scott did not convey it properly

'wirehead deep utopia' is, in a sense, an oxymoron, if there's no challenge, there's no meaning, there's a certain level of 'effortless bliss' where you might as well kill yourself, because wireheading re-experiencing your first love in perpetuity, being a living brain in a vat of acid or being dead, lose distinction

Expand full comment
Victualis's avatar

Agreed, the lively voice of Bernard Suits was sadly missing here.

Expand full comment
Peter Defeel's avatar

Yes we have to earn our happiness in most cases.

Expand full comment
N Luchs's avatar

A few typos:

"coulda ccount for that."

"our already in the ground."

"rules within bounded parameters There is a sameness"

"While one might hold that that"

"according ot a fixed set"

(feel free to delete after reading)

Expand full comment
Victualis's avatar

At this point I assume idiosyncratic typos flag that the text is human-made.

Expand full comment
Cry6Aa's avatar

Others have already chimed in on this, but you can't truly get away from scarcity. If nothing else, there's only a finite amount of real estate next to the really nice beaches.

Abundance also changes how we view goods and acquiring them - in a world of food scarcity and manual labour, being voluptuous and pale is sexy. In one with fast food and office work, the tanned-and-toned look is fashionable. The point is that the signalling is costly and thus indicates real status - this underpins all fashion.

In a world where the only scarcity is in atoms, real estate and authenticity, I'd expect beryllium jewelry, beachfront properties and apprenticeships under great masters to be highly prized luxuries. Which doesn't sound that bad, to be honest.

Expand full comment
Dweomite's avatar

I'm reminded of a novella called Perfect State by Brandon Sanderson, set in a world where each individual human lives in their own virtual reality, custom-tailored to give them their personal best life. (Though there are also shared zones where you can interact with other real people.)

I'm also reminded of another story where someone is given godlike power and tries to design a paradise, and they create several different "heavens" with different rules that people can move between. In the "higher heavens", you can just get whatever you want. In the "middle heavens", your basic needs are all met, but you can't get access to divinely-generated intellectual or cultural goods like books or paintings, so you can create your own art and show it off to the other residents and they might actually care about your stuff because you're not competing against god. In the "lower heavens" you get an experience a lot like earth, where you have to work for stuff and people can do bad things to each other, and you're only supposed to enter or leave at special gateways (to preserve immersion), but there's an escape clause where anyone who's desperate enough can leave instantly at any time--at the cost of never being allowed to go back (since you "broke character"). There's also some "specialty" heavens that cater to special interests (they mentioned one for master thieves).

The preceding paragraph would be sort of a spoiler if you knew what story it was from, so I'll put the attribution in rot13: Guvf vf sebz gur rcvybthrf ng gur raq bs Jbegu gur Pnaqyr ol Nyrknaqre Jnyrf.

Re: Games: Even if athletics is "solved" in a way where we can't have "sports", maybe we can still have strategy games and logic puzzles? It currently seems like humans can enjoy strategy games and puzzles that were designed by people of equal or even somewhat lesser intelligence; would this generalize to godlike levels of intelligence? (All strategy games are solvable in principle, but presumably you can't get *literally* unlimited compute, even in deep utopia.) I find myself inclined to think it should keep working, although I could also imagine a scenario where we get some key meta-knowledge about how we think about games and that somehow ruins it.

Expand full comment
Ernest N. Prabhakar, PhD's avatar

For me, theism is identical to the belief we are already in an optimal experience machine. That can be seen as an argument for or against theism…

Expand full comment
machine_spirit's avatar

A long time ago you wrote a review https://web.archive.org/web/20121203163323/http://squid314.livejournal.com/340809.html of a book about the Comanche, and how their forager life was much happier and more meaningful compared to the miserable settler life of most Americans at the time. There you also suggested that Utopia might look like something similar, minus the disease/suffering/violence.

I wonder, do you still think this, or have you changed your mind about the desirability of the forager lifestyle?

Expand full comment
spinantro's avatar

This deep utopia seems remarkably small. Once life on earth is solved, why not become planetary scale hiveminds, colonize the milky way, check out some aliens? Once that is solved, become galactic scale hiveminds, colonize the whole universe, and maybe ultimately try to figure out whether we'll have a heat death or a big freeze or whatever, and try to somehow evade that fate. Or else come to grips with our ultimate mortality. But at the very least I can't see how there would be "no problems" as long as you can just go one level more grandiose and have genuine and novel problems to contend with.

Expand full comment
spinantro's avatar

On another note, I think people get too hung up on simulations in these type of discussions. For me it's very simple: I would not enjoy a simulation while knowing that it's fake. I would not consent to having the knowledge that it's fake (along with at least some memories of my previous lifetime) erased from my mind, because it would be too great of a discontinuity with my "real" self, so that it would be, essentially, quite similar to dying.

Expand full comment
Kai Teorn's avatar

If some kind of cheating is inevitable, I think we will converge on epistemic cheating (artificially forgetting what you know so you can discover it again), not existential cheating (artificially putting your existence or wellbeing in danger so you can overcome it). Existential cheating is rooted in ego and fear of death, and I'm having trouble imagining a real utopia where these factors remain as important to people as they are now.

Epistemic cheating, on the other hand, is 100% cheating only if, after forgetting, you perfectly retrace your previous learning journey step by step. I think this is impossible even for a billion-IQ megabrain: the combinatoric universe is much vaster than the physical one, so you're bound to tread a different path through knowledge each time you go. And this, in turn, holds the promise of some _real_ new discoveries - new ideas have always been some new ways of combining old existing ideas. So, by artificially forgetting and relearning, you can not only get a fresh look at what your civilization already knows, but actually discover something that has eluded everyone else.

In fact, I think that constant relearning of the stuff from scratch, which we now do with every new generation of humans born, is a major driver of progress in our culture. If utopia solves immortality and this constant churn of generations ceases, we will need a replacement - which might be some form of epistemic cheating.

This ties in with the promise of childhood: forgetting all/most of what you know basically turns you into a child. I have no problems imagining utopia where a significant proportion of sentients are in a periodic, recurring, state of complete or partial childhood. Yay to that colorful toy xylophone!

More than ten years ago I wrote my own "deep utopia" book along these lines. It is even more abstruse than Bostrom's, and similarly deficient in human-interest storytelling (except for the intro). It is also naive and unabashedly weird. Still, I'll leave the link here: http://everday.wikidot.com/. It can be read in any order, but the parts relevant to this discussion are: deep sleep (the forgetting/rejuvenation process in humans), explay (a technique used by non-human minds), world sleep (going really meta with the idea).

Expand full comment
niplav's avatar

This is a really interesting line of investigation, which will surely keep us occupied if we don't all die beforehand.

Since there will be minds which are going to be much better than me at figuring out what my utopia would look like, I don't find it crazy to just hand over that problem to the superintelligent AIs too—they can find the optimal tradeoff between real sacrifice and enjoyment, and we become the perfect receptacles for value (this might not exactly be wireheading, but something more refined).

Or we do carve out certain realms of thought/action that only unaugmented/deaugmented humans can pursue—nature reserves.

There is the hope that invention/thought/action is unboundedly deep, and/or (even more strongly) has enough nooks & crevices that many different minds of many different sizes can make genuine contributions. I live in a world with many minds more competent and intelligent than me, yet I still make original contributions of my own, and it seems plausible that a highly augmented version of myself could branch out into far more domains, so that even trillions of augmented beings couldn't explore them all. I think this is what Deutschians hope—it always continues, there are always new & fresh ideas to be had which birth other new & fresh ideas, realms of cognition, hierarchies of influence over the multiverse.

(This does require significant modification to "myself", which would be fine.)

Lastly, there are domains like math which are very clearly infinitely deep & rich, and which (with finite means) even highly augmented minds could never exhaust. I think ethics might be like that too, so a meaningful activity in deep utopia would be discussing what to do now that deep utopia has been achieved. (Although that might eventually be determinable through some impossibility theorems bounding possible solutions.)

I don't feel very worried about it, though. Let's get there first, then we have all the time in the world for deliberation.

Expand full comment
Kenny Easwaran's avatar

This is wild and interesting. It suddenly makes me think a lot more about art than I usually do, and particularly the discussions that many early 20th century artists had about the role of constraints in giving meaning to art.

Arnold Schoenberg famously invented his entirely artificial 12 tone system (you have to pick some permutation of the 12 pitch classes at the beginning, and then you have to play them all in order before any of them is repeated - though it's ok if one instrument is playing them forwards and another instrument is playing them backwards or upside-down, or in transposition, and they can all do it with whatever tempos and rhythms and so on make it interesting as music) to solve the problem he felt after he had "emancipated the dissonance".

But also, I think you might find it really interesting, after reading this, to read Thi Nguyen's book, "Games: Agency as Art". A lot of it touches closely on these questions of what gives the artificial rules of a game value, and how silly games can be fun, but Calvinball really can't, and how games are great as ways to imaginatively experience other ways of living, but it might be pernicious if we try to turn other parts of living into a game.

Expand full comment
dlkf's avatar

I don't think a deep utopia is possible. Scott describes a deep utopia as a world where "there were literally no problems." In a footnote, he writes that "the book only briefly touches on ideas about necessarily scarce goods." And based on the review, it looks like interpersonal problems are absent entirely. But these are the most important problems of all, and we can't make them go away without cheating.

Consider Scott's description of a stereotypical British aristocrat's day:

> He wakes up, reads the morning paper, has some tea. He goes on a walk with his dog. He gets home and plays snooker or cricket or crumpets (unless some of those aren’t games; I’m not really up on my Britishisms). He reads a Jane Austen novel on his comfy chair by the fire. Then he goes to bed.

He omits the most important appointment of the day, namely the trip to the Drones Club, where the aristocrat will hear about Archibald's unrequited love for Marjorie, Freddie's wife's disapproval of Bertrand's tie, etc etc.

The only way we can make these problems go away is to calibrate everyone's wants to be mutually compatible. This is no better than wireheading – we will lose too much of what it is to be human. As long as we share the world, we will always have problems. A shallow utopia is the best we'll ever do. And it will always have meaning, because there will always be someone to reject our advances or disapprove of our tie.

Expand full comment
Radu Floricica's avatar

I feel like there's a chronic underestimation of complexity here. Weight-lifting with extra help will become... better and more interesting weight-lifting. Sure, in theory there is a point where it's all trivial, but to assume that it's more than a theoretical perspective is just missing the scope difference. Going to the store to buy milk and doing the Appalachian trail are both walking. One might say cars make it all the same, but that's not true - try driving from Alaska to Patagonia if you want a challenge. Or from London to Vladivostok if you think the Darien Gap is unfair. And you might say that planes make it all the same: sure, how was the weather on the Moon this weekend? Or Mars?

Digging ditches is the lowest skill activity that I can think of. And yet, a few days ago I spent a few minutes of my life watching a video of an excavator... digging a ditch. It was a work of art: the skill, the precision, the versatility, the array of interchangeable tools he was using. It had me dreaming of a future where we have similarly skilled AIs on every backhoe. And this is digging ditches.

Just because there might be a theoretical solution to the n+1 step doesn't allow you to say "and thus, by induction, we have proven all reality to be trivial and solved".

Expand full comment
ARD62's avatar

One clue may lie in the world of chess. There is now so much “theory” (the database of past games, named openings and responses and counter-responses) that they invented new kinds of chess like Rapid and Blitz that don’t give you enough time to think. People still play and value “classical” chess but there is a lot of grumbling that amounts to complaints that the first couple dozen moves and many endgames are solved.

This seems related to utopia. Somehow, we will keep making the game harder as our capabilities improve. The cause of our suffering is desire.

Expand full comment
tcheasdfjkl's avatar

For sports, maybe the governing body for each sport designs some Standard Regulation Bodies, and you have to slip your consciousness into a Standard Regulation Body for competitions (and presumably training) and compete with others using the same Standard Regulation Body, thus competing mostly on how you use that body. Weightlifting might not survive this, but team sports, rock climbing, obstacle courses, ice skating, tennis, anything where there's a significant mental or creative component might still be interesting.

I think *making* art, especially collaboratively with other people, would yield a great deal of meaning for a pretty long time at least. It takes less long to *see* every possible piece of art than to learn to *make* every possible piece of art, especially if you're adding other people to the mix. Consider that people in the real world get plenty of enjoyment and meaning e.g. singing in a choir that performs stuff that other people wrote and other other people have already sung better than this choir can, and it *still* feels good to learn to get the song just right with *this* group of people.

I'm not sure how to think about the meaning and value that comes from making *new* art, in a world where AI can just make better art. I think it's probably still valuable? and I guess it's probably possible to run out of possible art eventually, but also I am not actually sure about that, given how, like, Western music runs on a seemingly pretty small set of possible notes and chords and stuff and yet there keep being new songs which only sometimes feel like remixes of something preexisting? and in a utopia we'd have a bunch of new senses to perceive art in, and you could *combine* the different new senses with each other and with preexisting stuff, etc.

(I do think that maybe in a truly post-scarcity universe it might be worthwhile at least in some sub-universes to, like, artificially limit AI art production in at least some genres, to let humans have control of those? but idk)

I guess maybe it's a problem that by stipulation eventually we'd have no more technological progress because we already know everything and have already come up with all possible ways of synthesizing our knowledge into cool new ideas. Maybe I'm also not sure how much I believe that?

Also the main area that I think I just don't believe you can fully solve in any utopia is interpersonal stuff. Maybe your universe is really good at matchmaking such that you'll mostly meet people you are very compatible with, but unless people themselves are ultra-modified as well you'll probably still have competing desires sometimes and need to negotiate those, and maybe the most compatible people for you in the universe are still not perfectly compatible, and maybe if you have a fight with a friend your matchmaker AI could help you find new friends but you with your human brain care about *this* friend and want to fix it, and maybe your advisor AI can help advise you on how to do that but you do still have to *do* it and it matters to you... (idk maybe with enough time and modifications we will become beings of perfect wisdom that don't fight with their friends? but I think I also find that hard to believe, again because I basically expect modifications and enhancements to increase the surface area of possible complexities that interact interestingly with each other)

Expand full comment
Gabriel Conroy's avatar

I've read the review, but not the book, and I'll probably never read the book. So discount this comment accordingly:

Maybe Bostrom's book is an argument against utopia? Maybe the fox story is a parable, in the sense that our (humans') world is vastly more utopian than that of foxes and pigs, with our higher IQ and higher technologies and our ability to stave off death longer.

Expand full comment
Jack's avatar

> I also appreciated Bostrom’s prose. Yes, a lot of it was typical analytic philosophy “let’s spend five pages analyzing the subtle differences between ‘meaning’ and ‘purpose’”. But when he wants to write well, he writes well.

I'm confused by this aside. The implied criticism of Bostrom and of analytic philosophy seems strange coming from someone who values clarity of thought and communication as highly as you do. Is the difference between meaning and purpose not a worthwhile question? Why is "writing well" contrasted with this sort of discussion?

A common understanding of concepts such as meaning and purpose is as essential for cooperation as it is for intellectual progress. And even comparatively frivolous semantic debate is interesting and fun:

https://www.etymonline.com/columns/post/is-a-sandwich-a-taco

And in this case, he's trying to clarify terms that are highly relevant to the central question of his book.

Expand full comment
Dweomite's avatar

The "theological engineering exam" reminded me of this:

"We’ve answered the old questions. True love does exist, but only under laboratory conditions. There’s no statistical difference between hot and cold revenge. We’ve measured the moral arc of the universe: turns out it doesn’t bend toward justice after all. But that’s okay, because it also turns out that everyone’s a hero once you adjust for confounding variables."

- Godslayers, by Trollmore

Expand full comment
Hilary White's avatar

We won't have "solved all other problems" because the biggest problem we have is us. Utopian dreams all presuppose an unfallen Man. Or at least a Man who can be perfected by artificial means, according to someone's personal Bright Ideas about how Man ought to be - an idea that has caused more suffering since the opening of the 20th century than any other in human history. No Utopia ever proposed took our fallen nature into account, and no society is perfectible that does not contain perfect men. Such a thing is, therefore, simply a fantasy, and every time someone has tried to apply that fantasy by force, unimaginable horrors have resulted.

Expand full comment
Yadidya (YDYDY)'s avatar

I assume you are proposing a religious solution? I've offered mine above - and in real life too.

Ctrl+F "ydydy" to see it.

I'm going to check out your page as well.

Expand full comment
Yonatan's avatar

There's no reason to believe that human nature will fundamentally and radically change just because of technological or economic changes.

In a post-scarcity world, unmoored from traditional norms and culture, most humans would spend their time pursuing either status or hedonistic pleasure to their long-term detriment.

Whenever there are disparate life outcomes, there is inequality. Inequality produces mimetic desire and jealousy.

Consequently, at the communal level we get movements like communism, public criticism of the rich and successful, the wealth tax, rioting like the BLM riots, etc., and the attempt to legislate away inequality.

Individually, people are unhappier when exposed to more successful people, however they define success.

Even when people's lives are objectively better than they were previously or historically, jealousy will make them unhappy.

Societies with greater inequality have overall greater quality of life. Nonetheless, many countries & communities choose to limit inequality despite the overall negative consequences.

The ridiculous, deeply flawed sci-fi show The Orville, which couldn't decide whether it was mocking or imitating Star Trek, was a great presentation of these phenomena.

The Orville took place in a post-scarcity universe, without any material want, where the primary reason for pursuing any goal was status.

So the pressure to win status games was immense without the relief valve/excuse of material needs.

Westerners currently live in a world with technology and prosperity unimaginable to earlier humans, even in their most Messianic visions. Nonetheless, many if not most Westerners are not significantly happier or lead more meaningful lives.

I fear that the only way out would be to breed better humans, as the humans coded for self-destructive hedonism or status-seeking die out and are outbred by humans who can handle a Messianic world.

Similar to how the ancestors of Western Europeans who were prone to alcoholism tended to die out, while those who could handle alcohol or created alcoholism-repressing cultures survived and produced children or cultures that could handle ambient alcohol.

In sum: The problem isn't the world. The problem is people.

Expand full comment
NBIndy's avatar

Rod Serling already went there. His take: Deep utopia would be a literal hell. https://en.wikipedia.org/wiki/A_Nice_Place_to_Visit

Expand full comment
Daniel Kokotajlo's avatar

I initially didn't like the Feodor the Fox story but by the end I loved it.

*Spoilers below*

I think it's basically autobiographical. It is an allegory for what it is like to be Nick Bostrom growing up intellectually in the world, asking big questions about where all this is headed and what is to be done about it, and ultimately being involved in various movements (transhumanism, rationalism, effective altruism, AI safety, OpenAI...) that were partly inspired by his thinking. This is how I explain e.g. the bit about the board games and the ending with his charismatic cousin-fox. I resonated with lots of it myself.

Expand full comment
Yadidya (YDYDY)'s avatar

Great Scott! I enjoyed the promising beginning more than the subsequent hand wringing.

Bliss is Bliss.

Even "...but cheating?" is just silly local prejudice based on the fact that - to date - utopian bliss (in multi or singular or inconceivablly faceted state/s) hasn't yet existed so we've taken coward's comfort in the glories of the kampf.

But you couldn't leave simple נהמא דכיסופא (Nehama Dichisufa - "Bread of Shame") hand-waving solutions alone and went all talmudistic with what-ifs.

Bliss is bliss, it's gantz geshmak (thoroughly delightful), we'll enjoy it l'oilum va'ed (forever and ever), and don't worry about it!

What I'd worry MORE about is GETTING EVEN CLOSE to relative utopia before we die, either alone or as a species (T minus 50y, or so they say).

Middle East Diplomacy is not looking rosy: https://youtu.be/yOFGF7PqzLY

We're still all murderers: https://youtu.be/pp7OL1kQ5-M

The Jubilee Year remains unpracticed or even understood: https://youtu.be/Of55eQ1j4h4

The class running the show is more corrupt than the pharaohs: https://youtu.be/BzHYd2Uar6s

The commonfolk critiquing them are just jealous, and equally as low: https://youtu.be/2fYRLbyUPCw

But as a skeptic of certitude I am happily free to be a believer (provided the belief seems pleasant or useful and does no harm): https://youtu.be/zGx_nFV8MxM

Expand full comment
pol llovet's avatar

Wait, do people really actually believe it's possible to run out of novelty?

Expand full comment
Eneasz Brodski's avatar

I kinda feel like this 468-page tome could have been a 5,000-word short story published in 2017 and nominated for the Nebula award. This would have gotten across all the same points in less time, been more entertaining, and addressed Scott's concern about why this isn't being used as a fiction setting. The one downside is that it so perfectly says everything in 5,000 words that one is left wanting more but not sure what's left to explore. http://strangehorizons.com/fiction/utopia-lol/

Expand full comment
Vlad Gheorghe's avatar

Stephen Wolfram paints a universe full of pockets of computational irreducibility. Seems like it could be fun or interesting even for gods and superintelligences.

Expand full comment
B.C. Kowalski's avatar

Maybe this is a half-baked idea, but perhaps this utopia could use an NFT function to introduce experience scarcity. Certain experiences would have a varying degree of scarcity, thereby introducing meaning to them. Not sure what the mechanism would be for obtaining them, though, since no problems need solving.
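
For what it's worth, a minimal sketch of the scarcity bookkeeping such a scheme would need, assuming nothing blockchain-specific at all - just an in-memory registry. The `ExperienceToken` and `ScarcityRegistry` names and the supply numbers are made-up illustrations, not anything from the book or this thread.

```python
# Hypothetical sketch: artificially scarce "experience tokens".
# Only the core scarcity bookkeeping; no blockchain machinery.

from dataclasses import dataclass, field

@dataclass
class ExperienceToken:
    experience_id: str   # which experience this token grants
    serial: int          # which copy out of the fixed supply

@dataclass
class ScarcityRegistry:
    supply_limits: dict[str, int]                       # experience_id -> max tokens ever mintable
    minted: dict[str, int] = field(default_factory=dict)

    def mint(self, experience_id: str) -> ExperienceToken:
        """Mint one token only if the fixed supply hasn't been exhausted."""
        cap = self.supply_limits.get(experience_id, 0)
        count = self.minted.get(experience_id, 0)
        if count >= cap:
            raise ValueError(f"{experience_id!r} is sold out: all {cap} copies exist")
        self.minted[experience_id] = count + 1
        return ExperienceToken(experience_id, serial=count + 1)

# Usage: only three beings will ever hold this experience.
registry = ScarcityRegistry({"solo dive into Europa's ocean": 3})
token = registry.mint("solo dive into Europa's ocean")
```

The open question in the comment - how tokens would be allocated in a world where nobody needs to earn anything - is exactly the part this sketch leaves out.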

Expand full comment
Norbertine Jungleforce's avatar

Iain M Banks

Expand full comment
alzy's avatar

The frame story was set in Utopia. Firafix and friends spend their days attending philosophy lectures and going to the hot springs, while in their full fur suits / animal bodies, and they're clearly having a blast. Possibly also while honoring their ancestors.

Expand full comment
alzy's avatar

Although the Feodor story is a bit of a head-scratcher. The way he ended it seemed very strange.

Expand full comment
Vince's avatar

I think your thoughts on sports aren't quite right. The whole point of things like banning steroids and the tumult over the trans issue in sports is to keep everyone at the same physical level. In Utopia, all participants could be made to have exactly equal physical skills, which means the difference between players and teams is purely in their *judgement*. And I think that's the true Platonic ideal of sports - say for tennis, two equally matched players who have identical physical skills, so every single point comes down to the instincts and judgement of each player. For team sports, it's the same but with the addition of teamwork and coordination. The workout and conditioning part of sports is important, but it's not the most important thing.

The most important part is staring across the field at an opponent, knowing that the next second you’re going to have to make important decisions that will decide the fate of the game, and you don’t know how you’ll respond, or how they’ll respond, or how your teammates will respond. The most sublime moments of sports are when you make that decision, pass the ball without looking to where you *know* your teammates will be - and they are.

Expand full comment
Hafizh Afkar Makmur's avatar

When I started reading this, I didn't expect it to move my prior on simulation theory further, but here we are.

Regarding the helicopter to Everest: it was done nearly 20 years ago with super limited capability - it couldn't even take passengers - and it hasn't been replicated since. I'm surprised the claim is repeated here without reservation.

These days I can only derive enjoyment from games where I estimate I have a non-trivial chance of losing. If that chance is too small or too large, I'll try to rig/mod the game so the win chance becomes what I feel is right. It's not perfect - whenever I lose it still actually hurts. But if I don't lose every now and then, my victory feels hollow.
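
For what it's worth, a minimal sketch of that kind of rigging, assuming the game (or a mod) exposes a single difficulty knob; the 40% target, window size, and step size are made-up illustrations of "what I feel is right", not anything the commenter specified.

```python
# Hypothetical sketch: nudge a game's difficulty so the player's recent
# win rate drifts toward a chosen target, keeping losses real but not constant.

from collections import deque

TARGET_WIN_RATE = 0.4   # the "feels right" win chance - purely illustrative
WINDOW = 20             # how many recent games to consider
STEP = 0.05             # how aggressively to adjust the knob

recent_results = deque(maxlen=WINDOW)   # True = win, False = loss
difficulty = 1.0                        # whatever knob the game/mod exposes

def record_game(won: bool) -> float:
    """Record a result and return the adjusted difficulty for the next game."""
    global difficulty
    recent_results.append(won)
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate > TARGET_WIN_RATE:
        difficulty += STEP                         # winning too often: make it harder
    elif win_rate < TARGET_WIN_RATE:
        difficulty = max(0.1, difficulty - STEP)   # losing too often: ease off
    return difficulty
```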

I wonder how this utopia would answer that kind of masochistic desire, other than wireheading it away entirely. Seems like we'll be back to simulation theory.

Expand full comment