635 Comments

Most people have a very narrow view of the world, by necessity. They are also trained to be on the lookout for scams, also by necessity. Once you realize these two facts, the world of persuasion changes immensely. Even very obvious moral, philosophical, or practical truths will hit a lot of resistance. You need to develop reasoning and arguments that are relevant and understandable to the people you need to convince.

"White people are evil" is never going to be a selling point to white people. If you need white people's buy in to make a plan work, you need a different approach. If you can move your plan forward without the support of a group, then perhaps you should reconsider the position that "This outgroup who I consider evil is in control of everything and preventing other people from getting ahead."

Expand full comment

> "White people are evil" is never going to be a selling point to white people.

You underestimate how narcissistically self-obsessed some white people are.

Expand full comment

Of course, and we have seen plenty of evidence that some white people do in fact express that opinion.

We're missing two things though.

1. The population of white people who accept "we are evil" terminology is much smaller than the population of white people who reject it, in part or full. What percentage of the population would accept a less confrontational message is the most interesting question here.

2. People can say whatever they want, often in self-serving ways. Measure how many ultra-progressive white parents in NYC have sent their kids to public schools instead of private over the last 10 years (about the maximum timeframe of the current progressive push), and I think you'll find that the actual actions of these people are not much different than they were before.

Expand full comment

Hypothetically, what's going on is white people really saying "White people are evil, but those *other* white people are more evil."

Expand full comment

I think there's some good evidence that there's a group of white people that say white people are evil, but do not seem to include themselves at all. Freddie deBoer has had several articles looking at specific examples.

It's a lot easier to say "other white people" are a certain way, which is just naked outgroup bias.

Expand full comment

> Ted will ask you to give one of his talks.

As a counterpoint, the top TED talk by views is by waitbutwhy, a blogger whose only Amazon e-book is called "we finally figured out how to put a blog on an e-reader".

Talk: https://youtu.be/arj7oStGLkU

Blog: https://waitbutwhy.com

Expand full comment

Fun fact, the writer of waitbutwhy is rationalist/EA adjacent and wrote one of the most popular intros to AI risk.

Expand full comment

Wow, I found out about Tim & his blog around the time he gave that talk and I had no idea it became the most-watched TED talk.

Expand full comment

Once again happy not to be a utilitarian. Good review!

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

On a scale of 1 to 10...... never mind.

Expand full comment

The repugnant conclusion always reminds me of an old joke from Lore Sjoberg about Gamers reacting to a cookbook:

"I found an awesome loophole! On page 242 it says "Add oregano to taste!" It doesn't say how much oregano, or what sort of taste! You can add as much oregano as you want! I'm going to make my friends eat infinite oregano and they'll have to do it because the recipe says so!"

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> ...happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life. Derek Parfit describes it as “listening to Muzak and eating potatoes”.

Except now, you have to deal with the fact that many-to-most existing people lead lives *worse than death*. Mercy killings of the ill-off become morally compulsory; they may not actively choose it, some may even resist, but only because they're cowards who don't know what's good for them.

Put the zero point too low, and consequentialism demands you tile the universe with shitty lives. Put it too high, and consequentialism demands you cleanse the world of the poor. There is no zero point satisfying to our intuitions on this matter, which is a shame, because it's an *extremely critical* philosophical point for the theory - possibly the *most* critical, for reasons MacAskill makes clear.

Expand full comment

This whole way of thinking (ten billion people, 0.001 happiness) seems absurd to me.

If happiness were as simple as inches, ok, sure. But I don’t think that’s the case. “Some people are happier than others” is a very different thing from “everyone has a happiness number and happiness numbers can be meaningful added linearly.”

Like, if we’re REALLY longtermist and care about human happiness, shouldn’t we do a bunch of research into creating a utility monster AI that just sits around wireheading itself to infinity?

Expand full comment

AIs aren't human, so caring about human happiness doesn't imply caring about utility-monster AI happiness.

I'm not sure there's a simple fix for this that would restore the utility-monster argument without encoding additional strong assumptions; normal humans don't have uncapped happiness, so to mount a "this is insane" response to a "disqualify all 'happy' utility monsters from personhood" argument you need to have Nietzsche-like intuitions regarding the desirability of the Ubermensch.

Expand full comment

Why should I believe that normal humans have capped happiness?

Expand full comment

Normal human brains have a maximum size. This implies a finite number of possible states. Finite sets of finite numbers must have a maximum.

As a stronger claim, activating the reward circuit more than some finite amount will burn it out.

Expand full comment

Is having the reward circuit activated the same thing as happiness?

Expand full comment

It's pretty close, and even if additional stipulations are added, the same capped-activity issue tends to crop up.

Expand full comment

I think the argument goes through regardless of the function which is defining happiness. If it maps states to values, then there are a finite number of states and therefore a finite number of happiness values.

Expand full comment

> AIs aren't human, so caring about human happiness doesn't imply caring about utility-monster AI happiness.

I don't see what makes humans special. If there's a person / sentient entity, there's a sentient entity. Ofc. AI doesn't imply personhood, but I assume Mark meant AI which is a person and not just some utility function.

Expand full comment

Lots of people explicitly only care about humans (e.g. https://slatestarcodex.com/2019/05/01/update-to-partial-retraction-of-animal-value-and-neuron-number/).

You can also go into social contract theory, which says that morality is basically a contract between beings to achieve better results for all parties (compared to the state of nature) and at least some large chunk of possible minds are not useful to give concessions to because they are not capable of negotiating in good faith and giving concessions in return. This doesn't rule out all at-least-human-equivalent AIs (e.g. a 1:1 upload of some actual non-sociopath human would probably pass), but it rules out a huge chunk of them.

More basically, I was pointing out that the argument was formally invalid insofar as it assumed something not stated. One can argue the assumption should be granted, but it should still be noted.

Expand full comment

The point is that we *don’t* just believe that some people are happier than others. At least sometimes, we believe that it’s overall better to make one person better off and another person worse off. You need a lot more structure to get this as precise as real numbers, but even just believing that some trade-offs are improvements, and that improvement is transitive, gets you much of the way there (though I don’t necessarily believe you automatically get a repugnant conclusion).

Expand full comment

> At least sometimes, we believe that it’s overall better to make one person better off and another person worse off.

Can you say more about this? I get that if you can lower A’s happiness by 0.1 and raise B’s happiness by 100, the total happiness of that pair is on net higher. But this is a tautology dressed up as a belief; it’s an insight that’s valid so long as we accept some premises that I’m rejecting here: namely, the premises that it’s possible to accurately measure and compare happiness in ways that allow for meaningful numerical comparisons, and that it’s possible to “make someone less happy” without a bunch of other side effects.

Expand full comment

You’re assuming the conclusion in that example. There are fundamental claims like “fixing someone’s broken arm is worth it even if you step on someone’s toe on the way”, and this claim together with many others is what we systematize when we say that a broken arm is worth 100 utils and a stepped toe is only worth 1. It’s exactly the same as with any other measurement - when we say that the mass of a brick is 500 grams what we mean is that the brick will tip the scales against some objects but not others. In reality, you can never move a brick without moving many other objects, but the numerical representation of mass is a convenient tool for summarizing all these facts, even though the fundamental thing doesn’t involve numbers at all.

Expand full comment

How are you choosing how many utils you assign to each item? I understand that if we _could_ measure things like pain and goodness directly then sure, we’d want to maximize those things. But I don’t think we can measure these things. Is there something I’m missing?

Expand full comment

This is absolutely the hard part. In orthodox decision theory, you get a scale for an individual by seeing what chancy options they prefer to others, and designating one particular difference arbitrarily as a unit, and then saying that something else is worth n units to them if they are willing to trade a 1/nk probability of that thing against a 1/k probability of the unit (and you have to measure their probabilities simultaneously - you don’t get to use objective probabilities). To get interpersonal comparisons, there are good arguments that it is impossible, but there are also arguments that actual people are similar enough in their preferences that we can use some standard good as the unit to compare different people’s preferences.
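
A minimal sketch of that elicitation idea, assuming we can repeatedly ask an agent which of two lotteries it prefers; the simulated agent and all names here are illustrative, not a standard procedure or library:

```python
# Sketch only: find how many "units" a target good is worth to an agent by
# searching for the point of indifference between a 1/(n*k) chance of the
# target and a 1/k chance of the unit good, as described above.

def agent_prefers_target(n: float, k: float, hidden_value: float) -> bool:
    """Stand-in for asking the agent. This toy agent maximizes expected
    utility, with the target secretly worth `hidden_value` units."""
    return hidden_value / (n * k) > 1.0 / k

def elicit_units(k: float = 10.0, hidden_value: float = 37.5,
                 lo: float = 1e-3, hi: float = 1e6, iters: int = 60) -> float:
    """Bisect on n until the agent is indifferent between the two lotteries."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if agent_prefers_target(mid, k, hidden_value):
            lo = mid   # target lottery still preferred: n is too low
        else:
            hi = mid   # unit lottery preferred: n is too high
    return (lo + hi) / 2

print(round(elicit_units(), 2))  # ~37.5: recovers the hidden value in "units"
```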

In principle, there’s nothing different about this than measuring anything else we measure with numbers, but in practice there are few general laws we can use to simplify things the way we do with masses and lengths.

Expand full comment

I think that we agree on two important points here.

1. Some things are better than other things

2. That it is not actually possible in practice to assign meaningful numeric values to how good everything is.

The main point of difference is that having acknowledged 2 you seem to want to continue to reason as if it is possible, whereas I would rather stop trying.

What is my preferred solution? Humility and a bias towards inaction. I will make blatantly obvious trade-offs like stepping on a toe to save a broken arm, but not non-obvious ones. I won't step on a thousand toes to save a broken arm, nor break an arm to save a thousand trodden toes. I will let nature take its course. On a personal level this looks like minding your own business, on a societal level it looks like libertarianism.

Expand full comment

I think the problem here is that many things that can be done at the level of laws or policies end up with harming some people to help others, and the only way to evaluate them seems like it's trying to figure out whether that tradeoff is worthwhile. If there is a policy that will take $1 from every American and then give $10 million to one guy, is that a good or bad policy? How can we tell?

Expand full comment

I think including thermodynamics or conservation of mass is sufficient to fend off the repugnant conclusion. So long as creating new people has material opportunity costs, the proof fails at the "...without harming anyone who already exists" step.

As a third option, even if a repeatable exception to basically everything we know about physics were discovered, defining "happiness" in such a way that it includes economic concepts such as comparative advantage, gains from trade, and deadweight losses, could make those perfect, frictionless transfers of utility between arbitrary individuals increasingly implausible at larger scales - or, if transaction costs are assumed to be arbitrarily low, adding new meaningfully distinct people to the system will inherently provide marginal benefits proportional to the number of people who already exist (thus combinatorially accelerating as it scales up), due to new trade opportunities.

Either way, some of the sneaky spherical-cow assumptions start to require extraordinary justification, without which the repugnant conclusion falls apart.

Expand full comment

To me, this is a fundamental problem with utilitarianism. Assigning utils is generally a matter of gross estimation and intuitive assignments about how others would feel about things. We can randomly assign 1 util, 10 utils, or 1,000,000 utils to whatever activity or life we want. If you try to do the math with such shaky numbers, you can make the conclusion be anything you want.

Expand full comment

Being more specific, let's look at animal welfare. Some people assign 0 moral weight to animal suffering. Others say it's a fraction of humans (so if a human is worth 1, maybe a smart animal is worth 0.25 and a dumb animal is worth 0.01), or some say that it's equal to humans. How can you do any math and plan any future action using numbers with such wildly varying levels? Are insects worth 0.0001 of a human? If so, then they are of more moral worth than all humans combined. If you manually adjust down your numbers because the conclusions seem off, then the original numbers and any calculation used to achieve them was just window dressing on our intuitions. If we're just leaning on intuition, then utilitarianism itself is pointless.
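
To spell out the arithmetic behind the insect example (using the commonly cited order-of-magnitude estimate of roughly 10^19 insects alive at any time, which is an assumption rather than a precise count):

```latex
% rough arithmetic only; 10^{19} insects is an order-of-magnitude estimate
\underbrace{10^{19}}_{\text{insects}} \times 0.0001 \;=\; 10^{15}
\;\gg\; \underbrace{8 \times 10^{9}}_{\text{humans}} \times 1
```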

Expand full comment

That’s not a problem with utilitarianism - it’s a problem with simplistic attempts to think you are *applying* utilitarianism.

Expand full comment

What's the purpose of a thought experiment if the results aren't actionable? When someone calculates out the utils of a proposed moral choice, they aren't just asking an abstract question, they are searching for a means of making life choices. If I can assign 3.4 utils to something, but someone else assigns 5.9, then are we doing anything more than trying to express our moral intuitions? Utilitarianism is supposed to be a rejection of using moral intuition, but there's too little consistency to see it.

Expand full comment

Instead of asking the hard, intractable question, “what does good mean”, utilitarianism punts, and says “okay, assume there is some meaningful good for people, you know, like saving them from drowning. Shouldn’t we multiply good by numbers of people?”

In other words, we don’t need to think at all about what good means for an individual; we can just take that as a given and use multiplication to compare different amounts. It amounts to little other than a belief in equality as good, but without coming out and saying so.

Expand full comment

Utilitarianism *denies* that equality is good. It says that everyone’s good counts equally, which in some circumstances means we should work *against* equality.

Expand full comment

You can perform calculations where a single good is multiplied by a number of people, but you can't make non-arbitrary comparisons between different goods. So it looks like utilitarianism is only usable in well-behaved special cases.

Expand full comment

Utilitarianism doesn’t tell you how to decide what to do - it systematizes claims about what is right. Just because someone accepts relativity theory as correct doesn’t mean that they should do relativistic calculations when trying to catch a baseball. There are some circumstances where computing power and measurement are good enough that applying the theory is useful, but there’s no reason to think those are ordinary situations (or that the situations in an Einsteinian thought experiment are ever likely to be actual).

Expand full comment

Utilitarianism systemizes claims about what is right, based upon certain assumptions about what is right.

Do you think those assumptions are questionable?

Expand full comment

> To me, this is a fundamental problem with utilitarianism. Assigning utils is generally a matter of gross estimation and intuitive assignments about how others would feel about things. We can randomly assign 1 util, 10 utils, or 1,000,000 utils to whatever activity or life we want. If you try to do the math with such shaky numbers, you can make the conclusion be anything you want.

If you don't bother to try to estimate this _somehow_, you're maximally wrong or happened to guess correctly.

Expand full comment

Or you could reject the premise that it’s meaningful to quantify good on a per-person basis.

Expand full comment

Von Neumann showed us how to make happiness numbers meaningful. If I am indifferent between leading life A and a coin toss between life B and death, B has twice the happiness (aka utility) of A. The implication for maximizing total utility ...
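
Spelled out, with death normalized to utility 0 (the convention the coin-toss definition implicitly assumes):

```latex
u(A) \;=\; \tfrac{1}{2}\,u(B) + \tfrac{1}{2}\,u(\mathrm{death}),
\qquad u(\mathrm{death}) = 0
\;\Longrightarrow\; u(B) = 2\,u(A)
```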

Expand full comment

I can imagine a wireheading addict might prefer death to non-wireheading. So from his point of view, non-wireheading lives have negative utility. If different people have different hedonic set points / preferences then you can't really talk of B having twice the utility of A without reference to a particular observer frame. Utility is as subjective as the subjective theory of value that determines market prices.

Expand full comment

If you measure someone's utility by revealed preference, anyone who hasn't yet committed suicide and isn't currently locked up on suicide watch somewhere is deciding that their life is, in fact, worth living.

This seems like about the right way to measure what's best for people most of the time--if you choose X over Y, you must think X is better than Y. But of course we run into many places where people make choices that seem terrible for them, ranging from "I'd rather have today's heroin fix than eat" to "I'd rather sleep with this sexy tattooed bad girl than keep my marriage intact." We even put some people into the category of "can't make their own decisions"--children, the senile, the seriously mentally ill, the seriously intellectually disabled, etc.

And in many cases, the utilitarian decisions involve making tradeoffs between people that can't be addressed in this way, or can't without breaking a lot of what people want to use that reasoning to do. (By revealed preference, I spend a lot more on Starbucks than I do on feeding starving children in the third world, so that must be the correct utility calculation, right?)

Expand full comment

What makes death "good for them"? Is their Darwinian fitness increased if you kill them?

Expand full comment

The claim is that their well-being is increased. No need to bring Darwinian fitness into this. (What is well-being? That’s the difficult question here. I think the most plausible answer is something like “getting what they want”. In order to have negative well-being, your life has to consist of enough things hurting you and going in ways that make you unhappy that just not existing would be better.)

Expand full comment

I was responding to someone who claimed these people "don't know what's good for them", which sounds different from your "getting what they want". So a preference utilitarian would not kill such people, because by their own actions they appear to prefer living regardless of whether it's "good for them" by Crotchety Crank's standards.

Expand full comment

I feel like preference-satisfaction based theories of wellbeing interact weirdly with population ethics. Like, creating people who strongly wish to not exist but have strong preference-satisfaction scores in other respects seems bad. And if you try to restrict well-being to preferences 'about oneself' or something like that to avoid this implication, then you end up denying people all kinds of tradeoffs they might want to make (e.g. someone giving up some personal gain to save an old tree they care about).

Expand full comment

As a datapoint, my dad spent the last six months of his life in constant and increasing pain, with a terminal cancer diagnosis. He had no dependents, was an atheist, and kept a loaded handgun in his nightstand. And yet, he didn't commit suicide. I don't think anyone he cared about would have *blamed* him for committing suicide, and he told me he'd considered it, but he just didn't feel right about doing it.

Would he have increased his well being by doing so? I mean, from his own actions, he presumably didn't think so. But it's not a stretch to imagine someone doing a utilitarian calculation and saying "this guy's life is clearly in negative utility territory, let's put him out of his misery."

Expand full comment

I was specifically responding to the excerpt from Scott where he discusses what a life at utility 0.001–just barely better than nonexistence—looks like. He suggested it might be a merely *drab* life, mediocre music and bland food—but then anything worse is worse than nonexistence, hence the death of anyone in a worse position than that improves the universe.

I take it you disagree with the idea that a life barely worth living looks like that, and you think the zero utility point is a lot lower. But then you have to confront the “repugnant conclusion” in its harshest form—the dark conclusion Scott was hoping to dodge.

Expand full comment

This isn't a problem with consequentialism. You can make judgements about which worlds are better than others without making claims about what you ought to do. You've just pointed out a trouble with rating how good worlds are.

Expand full comment

Utilitarianism is a moral theory - a theory about what you "ought to do" - which explicitly works by aiming for the future which has the "greatest happiness for the greatest number". If there is "a trouble with rating how good worlds are", then for a utilitarian, that will directly cause troublesome "claims about what you ought to do".

On the other hand, if "claims about what you ought to do" *aren't* based on "judgements about which worlds are better than others", you've dodged the worry. But then you're probably not a consequentialist, and certainly not a utilitarian.

Expand full comment

I generally solve that by accepting that people should have no restraints on ending their own life whenever they wish, and then setting the zero point at wherever would cause them to never do that even if there were zero stigma (which is a fairly high point).

Expand full comment

Feel free to define it that way; but if you’re a utilitarian, you probably shouldn’t. If you do, anyone under that “fairly high point” is making a negative contribution to universal utility, and the world is better off with them gone. I don’t think that “solves the problem” at all, it epitomizes it.

Expand full comment

Valid, I do somewhat eschew utilitarianism for voluntarism here, in that something can be neither mandatory nor forbidden, and the very nature of it being neither holds at least enough inherent value to cover for the negative impact on people who made the wrong choice.

Expand full comment

Even if a person is below your fairly high point of utility at a given moment, they may still realistically expect to make a net contribution to universal utility by means of, e.g. research, charitable work, influencing the behavior of others, etc. no? Or, they may have a realistic expectation that at some future point their situation may change for the better. It's not like anyone who happens to fall under the threshold should at that instant be offed, right?

Expand full comment

Lots of people who attempt suicide and fail end up being very thankful that they failed, later on.

Indeed, it seems to me that people who commit suicide because their life is objectively terrible are *way* less common than people who commit suicide because their brain is malfunctioning in a way that makes them miserable enough in the moment to commit suicide. This famously includes many people who were apparently happy and functional a few days or weeks earlier, many people who objectively seem to have good lives, many people who appear to have a great deal to live for, etc.

I mean, you can make the revealed preference argument here, that the guy whose uncontrolled bipolar disorder finally led him to kill himself must simply have been making a rational decision to end his own unbearable suffering, but that's certainly not how we handle things in our society, and also it's very common that the person who tries and fails is later very glad they failed, or that someone who is feeling suicidal explicitly takes steps to make sure they don't kill themselves (seeking treatment or hospitalization, asking relatives to take guns and poisons out of the home, etc.)

Expand full comment

There are four issues with your arguments that come to me directly.

First: uncertainty. As we cannot be certain how happy someone else is, it is better to give them the benefit of the doubt, because killing them is a very permanent solution that we cannot undo. It is good to keep optionality.

Second: if we actually lived in such a world, people would be less happy. This kind of world would be terrifying to live in and make everyone miserable, thus starting a death spiral. That doesn't sound very utilitarian to me.

Third: the instrumental utility of people in making other people happy. Some people might be just below the recommended level of happiness, but their existence brings so much happiness to others that they should keep existing.

Fourth: it would limit the number of future people that can be happy (which I guess kind of falls under instrumental).

Expand full comment

The particular people who happen to be left after the apocalypse are considerably more important than the availability of easily exploitable coal deposits.

People in many countries around the world are struggling to achieve industrialisation today despite a relative abundance of coal (available either to mine themselves or on the global market), plus immediate access to almost all scientific and technological information ever produced including literal blueprints for industrial equipment. That these people would suddenly be able to create industry after the apocalypse with no internet and no foreign trade, even with all the coal in the world readily available to them, is a loopy idea.

Medieval England was wealthier per capita than over a dozen countries are today in real terms, all without the benefit of integrated global markets, the internet and industrialization already having been achieved somewhere else.

I of course do not expect MacAskill to have written this in his book even if he recognized it to be true.

Expand full comment

Why does it have to be suddenly? From the point of view of "quintillions" and "galactic superclusters", a difference of say 50 thousand years basically doesn't matter, but it would be enough time for evolution to do its thing in regards to any thoughtcrime concerns you might have.

Expand full comment

It’s much easier to get industrialization started if you have access to coal and there isn’t already someone else who owns that coal and is taking it.

Also, you’ll have to give an example of a place that isn’t industrialized - I’m not aware of any other than some small bits of the Amazon and the Sentinel islands.

Expand full comment

The set of people alive a few decades after the apocalypse will be substantially different from the population before it, both in terms of culture and to a lesser extent in terms of genes.

Expand full comment

Also, in recovering from an apocalypse that doesn't blast us back to the stone age, we will start with a bunch of advantages our ancestors lacked--phonetic writing, Arabic numerals including 0 and a decimal point, the germ theory of disease, the concept of experiment as a way to learn things, the basic idea of there being natural laws that can be understood and exploited, etc. Even if our successors have forgotten most other stuff, just those things will make everything go a hell of a lot faster.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Gentle mention: you missed Sean Carroll, via his Mindscape podcast, as a recent interviewer https://www.preposterousuniverse.com/podcast/2022/08/15/207-william-macaskill-on-maximizing-good-in-the-present-and-future/

Expand full comment

The repugnant conclusion seems unintuitive to me, specifically because it fails to consider the shape of the population-happiness tradeoff curve.

If you imagine this curve being concave down, then normal moral intuitions seem to apply: a large population that isn’t quite at carrying capacity is better than a much smaller, slightly happier population.

It’s really the concave up case that is unintuitive: where your options are a small happy population or a huge miserable one. But there’s no clear reason to my mind to imagine this is the case. People’s utility of consumption seems to plateau relatively sharply, suggesting that a smaller society really wouldn’t unlock tons of happiness, and that a giga-society where people still had net positive lives might not actually be many more people than the current 7 billion.

I don’t want to deny that it’s unintuitive that 20 billion people at happiness 10 really do outperform 1 billion at happiness 90, but I posit that it’s mostly unintuitive because it’d so rarely be just those two options.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

The part where you equalize happiness across the population is the problem; once you forbid that, the whole thing falls apart and you have sane ethics again.

I'll leave the political implications as an exercise to the reader.

Expand full comment

The Repugnant Conclusion seems to me to be a paradox, and the point of a paradox is to demonstrate that your reasoning went wrong somewhere and you need to retrace your steps to figure out where your mistake was.

Like the Grandfather Paradox is not a cookbook for cool science experiments. It strongly suggests time travel is impossible. And whaddayaknow, time travel turns out to be impossible.

Expand full comment

Stable time-loops aren't forbidden via our current understanding of physics. And the RC isn't a "paradox".

Expand full comment

Whether they're permitted by GR and whether they're physically real are different questions, and I don't think a stable time loop is what anyone means by "time travel" anyway.

I didn't say RC was a formal paradox, but it has a similar flavor and I think we should consider it in the same way.

Expand full comment

Well, the RC assumes you can measure that effect: that there is a measurable unit of utility of 1 and a measurable utility of 100. I don’t know if that’s the full range, but that’s what I see in the literature. Negative utility exists too, of course.

A good argument against a large population with positive 1 is that it doesn’t take much to turn that society into a large population with negative 1; that’s a loss of 2 units of utility per person, which might just mean that everybody is almost as miserable as before but hungrier, or some other effect. We go from an overall utility of plus 1 billion to negative 1 billion pretty quickly (the standard scenario being 1 billion people with a utility of 1). On the other hand, the contrasted 1 million people with a utility of 100, also seeing the same unit drop, go to 98 million positive utility points. The RC isn’t robust.

Expand full comment

There are strong reasons to prefer a Schelling fence option of, say, 30 utils (randomly making this up). So we can happily make the decision to add people and reduce happiness for a while, but would stop well before the bare minimum scenarios (1, or .001 from Scott).

The reason? As you say, it's easy for people to drop a small amount, and now you've created mass unhappiness. You need a buffer to account for bad times - like a famine or flood, individual heartache, etc.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Important to distinguish between the so-called "Repugnant Conclusion" itself ("a much larger, less happy population is better, at least for sufficient values of 'much' and utility remaining above some zero point") and the argument that this conclusion is in fact repugnant and should be rejected. They are, after all, opposites.

This mostly seems to be an argument that the latter is wrong and hence the "RC" is actually true.

Expand full comment

2 things: First, the "number of atoms" limit annoyed me when I saw it, since we can obviously get value from moving atoms around (sometimes even back to the same place!), so the possibilities of value-production are *much* higher than the constraints outlined.

Secondly, stealing my own comment from a related reddit thread on MacAskill: "The thing I took away from [his profile in the New Yorker] is that contrary to "near-termist" views, longtermism has no effective feedback mechanism for when it's gone off the rails.

As covered in the review of The Antipolitics Machine, even neartermist interventions can go off the rails. Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets! But at least we can pick up on these mistakes after a couple of years, and course-correct or reprioritise.

With longtermist views, there is no feedback mechanism on unforeseen externalities, mistaken assumptions, etc. All you get at best is deontological assessments like "hmmm, they seem to be spending money on nice offices instead of doing the work", as covered in the article, or maybe "holy crap they're speeding up where we want them to slow down!" The need for epistemic humility in light of exceedingly poor feedback mechanisms calls for a deprioritisation of longtermist concerns compared to the general feel of what is currently communicated from the community."

Expand full comment

On the first point, you might appreciate our deep-dive into the logic here: https://philpapers.org/rec/MANWIT-6 (The second two-thirds of the paper is about refuting infinities, but the first third lays out why it takes pretty unreasonable assumptions to assume continuing growth.)

Expand full comment

I think this *should* be adequately dealt with by considering probabilities (not just probabilities of various underlying states of the world, but probabilities that actions now will have unforeseen effects). I do think this means that for most acts, considering the long term effects isn’t relevant. I think longtermists refer to this as “cluelessness”.

Expand full comment

How do you estimate the probability of something unknown? Like, can you tell me what the biggest concern of 2025 is going to be? We can guess about things we might be worried about today, but even something three years away is completely unknown to us. In 2019 very few people would have guessed that the following year's biggest concern was going to be a variation of the common cold.

Expand full comment

Sorry, this is silly and wrong. We estimate probabilities of unknown things all the time, and in mid-2019, everyone well calibrated who was paying attention would have said the probability of a novel respiratory pandemic was in the range of 1-5%, because base rates are a thing. Yes, the question of longer term prediction is harder, and over long enough time frames cluelessness becomes a critical issue. But 3 years isn't long enough. So of course we can't say with confidence, much less certainty, what the biggest concern of 2025 will be, but we can say many things about the distribution of likely things which would be big news - plausible flashpoints for wars, disasters, technological changes, risks from AI, economic crises that could occur, and so on.

Expand full comment

So, to the point of the book being reviewed, tell me how to actualize a number of small potentials? Your list is the kinds of things people expect to be big news, and I guess reasonably likely (if only because they are huge, vague phrases like "disasters"). How do you actualize the generic phrase "technological changes" as a flashpoint for action a year in advance? Should we prepare for a tsunami hitting Indonesia, or an earthquake in California? Some of our preparations might carry over, but a lot would not. What if instead "disaster" ended up being flooding in Russia?

The absolute smart money for "biggest item of 2020" in 2019 was the US Presidential Election. I would not be upset at anyone in 2019 who predicted that to be the biggest story. They would have been wrong. Anyone predicting a general and vague "respiratory pandemic" would still have had no idea what kind, where and how it would start and spread, and how to handle it. Someone saying "vaccine" would have been on a decent path, but that wasn't possible to do until the strain was isolated, after which we had a working vaccine in like, two days. If our best predictions can fail monumentally with less than a year of lead time, how much time and money are we willing to spend on figuring out and preparing for the big story of 2025? If we had a pot of $100 billion to spend purely on preparing for the biggest issue of 2025, how should we spend it?

Expand full comment

I'll also note that the second biggest story of 2020 was still not the election, but something absolutely *nobody* predicted - the BLM protests following George Floyd's death. Arguably "black man killed by police, unrest ensues" was something that could have some level of predictive value, but would anyone have bet strongly on the level of unrest? What preparations could or should we have made in advance of Floyd's death to prepare for that situation?

Expand full comment

First, preparedness spending is a thing, and risk mitigation is an entire field that governments spend time on. For example, the US had a pandemic response plan under Obama, but unfortunately Trump got rid of the office that was supposed to lead it, the global health security team on the National Security Council. That's not a failure of preparation, but rather an idiotic dismissal of preparation that had already occurred, without replacement. Not so long ago, local disasters like floods led to starvation, instead of emergency response from FEMA. Thankfully, the US has agencies that respond.

And there are lots of things to spend money on that would yield preparedness benefits. So even if we could with high confidence predict the single biggest event in 2025, it would be strange to the point of absurdity to only prepare for the single biggest event, instead of mitigating a variety of threats, which, again, is what governments and disaster planning experts already do.

But if you want to know what I'd spend $100 billion on, it would mostly go to the American Pandemic Preparedness Plan, which was planned for $65.3 billion, and somehow wasn't funded yet - because unless we do something, COVID-19 won't be the last pandemic.

Expand full comment

> Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets!

Turns out this is not actually a problem: https://www.vox.com/future-perfect/2018/10/18/17984040/bednets-tools-fight-mosquitoes-malaria-myths-fishing

Expand full comment

“suppose the current GDP growth rate is 2%/year. At that rate, the world ten thousand years from now will be only 10^86 times richer. But if you increase the growth rate to 3%, then it will be a whole 10^128 times richer! Okay, never mind, this is a stupid argument. There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.”

This is a common economic fallacy. Growth is not necessarily correlated with resource production. For example, if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

As another example, consider the smartphone. A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass. This is because the utility of the smartphone, as well as the complicated processes needed to manufacture it, combine to create a price far higher than the simple shovel.

So yes, we could become 10^86 times richer using only 10^67 atoms. You simply have to assume that we become 10^19 times better at putting atoms into useful shapes. Frankly, the latter possibility seems far more likely than that humanity ever fully exploits even a fraction of atoms in the observable universe.
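
For what it's worth, the arithmetic in the quoted passage and in the 10^19 claim checks out; a quick back-of-the-envelope check (illustrative only):

```python
import math

# Compound growth over 10,000 years, expressed as powers of ten.
print(round(10_000 * math.log10(1.02)))  # ~86  -> "10^86 times richer" at 2%/yr
print(round(10_000 * math.log10(1.03)))  # ~128 -> "10^128 times richer" at 3%/yr

# With ~10^67 atoms in the lightcone, becoming 10^86 times richer means
# getting about 10^(86-67) = 10^19 times more value out of each atom.
print(86 - 67)  # 19
```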

Expand full comment

I think you're wrong on the merits about becoming "10^19 times better at putting atoms into useful shapes," if only because it implies really infeasible things about preferences. There are also fundamental limits, and while 10^19 doesn't get close to them, I think they show that the practical limits are likely to be a real constraint.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Certainly, and I don’t mean to imply that I think getting 10^19 times better is an ordinary occurrence.

However, something close to that large has occurred in the past 70 years. The cost of computer memory has gotten around 10^14 times cheaper (source: https://hblok.net/storage/) and the size of a single bit of storage has decreased by a similar amount.

My main point is merely that economic growth is not at all inherently limited by the amount of resources available.

Nor, in fact, do physical limits necessarily impede growth, though they do stop growth in certain areas. Large diesel combustion engines, for example, are quite close to the maximum theoretical thermodynamic efficiency. Computer chips will also soon reach physical limitations caused by the size of atoms and the difficulty of preventing electrons from probabilistically flipping bits.

But who is to say that we do not simply switch to quantum computers, which have different physical limitations? Diesel engines could be replaced with small-scale fission reactors, or even fusion.

Are these potentially unrealistic? Yes. But note that the relevant question is not “will we run out of resources” (maybe) or “will we run into physical limits” (definitely), but whether we will always be able to find new, more useful technologies with ever-greater physical limitations indefinitely, and if not, when we will run out.

If you have the answer to that question, please write a book about it.

Expand full comment

> but whether we will always be able to find new, more useful technologies with ever-greater physical limitations indefinitely, and if not, when we will run out.

Assuming our current understanding of science is more or less true, the answer is "no". There's a limit on how much energy generation, computation, and other useful work you can pack into a cubic centimeter of space. Ultimately, E=mc^2, but you will hit other physical limitations long before that.

One obvious retort to this is, "aha, but obviously we will discover entirely new scientific principles that will allow for impossible things to happen". That is a valid point, but if you believe that, then you are no longer engaged in predicting the future; you're not even engaged in wild speculation; rather, you're just writing science fantasy.

Expand full comment

No. It is not fantasy to claim that we will discover new technologies which will allow us to do things we currently do not think are possible. It is obvious—unless you believe our understanding of physics is complete.

Will this allow for infinite growth? Almost certainly not. But to answer the question affirmatively or negatively is to engage in an equally useless act of speculation.

You don’t know what future science entails, and neither do I. It is sufficient for my point, however, that such growth merely be possible, and have occurred in the past. Both are true.

Expand full comment

> Will this allow for infinite growth? Almost certainly not. But to answer the question affirmatively or negatively is to engage in an equally useless act of speculation.

Well, we have some of that "useless speculation" here, which I think responds to a bunch of your points, and addresses the claim about not knowing if physics is correct in section 4.1.1 - https://philpapers.org/archive/MANWIT-6.pdf

And in any case, we're not looking for refutations to infinite value for the purposes of this discussion, we're looking at potential growth over "merely" the next 10,000 or 100,000 years.

Expand full comment

“We restrict our interests to a single universe that obeys the laws of physics as currently (partially) understood. In this understanding, the light-speed limit is absolute, quantum physics can be interpreted without multiverses, and thermodynamic limits are unavoidable.”

Limiting yourself to physics as it is currently understood sort of ignores the whole point of discussing future scientific breakthroughs. Indeed, some current-day theoretical research into what has been termed "Alcubierre Drives" allows for faster-than-light travel within the bounds of Einstein.

Indeed, in the section you reference, the authors essentially make a half-assed argument that scientific progress builds on itself (until it doesn’t, see: Ptolemy and the Copernican Revolution) before admitting that you cannot actually disprove the possibility of infinities.

Regardless, I agree that we can make some reasonable estimates for the near and medium term. But anybody who thinks they can tell you what the hard limits of human achievement will be after a million more years of human science (or even 500) has let their ego get ahead of their intelligence.

Expand full comment

As I said, you are free to imagine new discoveries that violate laws of physics as we know them today -- speed of light, conservation of energy, conservation of momentum, and so on. I fully agree with you that such things are possible, just like ghosts or gremlins are possible. However, the problem is that our current model of reality is not just a guess; rather, it appears to fit what actually exists quite well. So well, in fact, that we can build complex devices using this model. Every time you use a computer, you affirm that our current understanding of physics is likely true.

So, you can't have it both ways -- either you assume that our current scientific knowledge is just wildly off-base; or you assume that it is valid enough for you to make reasonable predictions about the future. You can't just throw up your hands and say, for example, "well, it sure would seem like speed of light is a constant, but we could be wrong, so we'll definitely have FTL travel one day".

Expand full comment

What is so special about a computer that the same argument couldn't apply to a mechanical clock? Someone in the 18th century waving their timepiece around and claiming that it proves that human science is pretty much settled wouldn't be very sensible.

Expand full comment

> However, something close to that large has occurred in the past 70 years. The cost of computer memory has gotten around 10^14 times cheaper (source: https://hblok.net/storage/) and the size of a single bit of storage has decreased by a similar amount.

Wikipedia says "The first magnetic tape drive, the Univac Uniservo, recorded at the density of 128 bit/in on a half-inch magnetic tape, resulting in the areal density of 256 bit/in2.[6]" That was in 1951. The current numbers are obviously far better - "since then, the increase in density has matched Moore's Law, reaching 1 Tbit/in2 in 2014.[2] In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in2,[3] more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in2 in the near future.[3][4]"

Assuming we're at 5 Tbit/in2 in 2021, that's 70 years and an increase of roughly 20,000,000,000x, so we see a growth rate of about 40% per year. Continue that for another century or two and we're talking about black hole bounds on total information - it can't actually continue.
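
Redoing that arithmetic explicitly (rough numbers taken from the quote; the 200-year projection is purely illustrative):

```python
start_density = 256      # bit/in^2, Univac Uniservo, 1951 (from the quote)
end_density = 5e12       # bit/in^2, the ~5 Tbit/in^2 "feasible" figure, ~2021
years = 70

ratio = end_density / start_density        # ~2e10, i.e. roughly 20 billion x
annual_growth = ratio ** (1 / years) - 1   # ~0.40, i.e. about 40% per year
print(f"{ratio:.1e}  {annual_growth:.0%}")

# Projecting the same rate forward two more centuries gives ~10^42 bit/in^2,
# approaching the kind of physical information-density limits the comment
# alludes to - the point being that the trend can't continue indefinitely.
print(f"{end_density * (1 + annual_growth) ** 200:.1e}")
```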

Expand full comment

It doesn’t have to continue for my point to be correct.

Digital computers store knowledge more densely than books. Imagine making the argument "it would be impossible to store billions of people's daily thoughts, we would run out of paper and places to put it" in 1530. The printing press is still a relatively new technology, and it allows for vastly faster and cheaper production of books than previously. Indeed, by the 20th century, industrial printing presses rivaled anything ever created by hand for a fraction of the cost.

Certainly, you would be technically correct to argue that there is a limit. Using the existing technology of paper, creating something like Facebook would be impossible. But you’ve simply dodged the question.

The fact that technologies such as paper or digital computers have physical limits is not particularly important.

The point is that they can grow exponentially, reach their physical limits, and then be replaced by a new technology which can also grow exponentially. There’s no way to prove that this cycle cannot continue ad infinitum, unless you already possess perfect knowledge of physics.

Do I consider such infinite growth likely? No. But this is my intuition about the laws of physics, and that is a dangerous space for laypeople to speculate in.

Expand full comment

This isn't an argument about the medium, it's an argument about the physics. Information is physical, possible density of accessible information is limited, and with exponential growth, those limits are reached within centuries, not millennia.

And if you're conditioning on our current understanding of physics being wrong *in specific ways that allow infinite value*, that greatly limits the worlds in which your objection matters.

Expand full comment

If it turned out that creating new universes is possible, does that undermine your argument? I think it is respectable physics to suggest that new universe generation is a natural event that has occurred many times (whatever that means in this context) and that it is not beyond the realms of physics to assume that eventually this natural process could be harnessed to make new ones on demand.

Expand full comment

A decent shovel is around $40. 40x200 = 8000.

Expand full comment

>if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

This seems to me like a highly specific and a very fragile claim to base your objection on.

Quantum computers are not obviously better than any other kind of computers in general purpose tasks as far as I know (which is admittedly not much, feel free to school me if you know better); most experts I read are tired of the trope of using a quantum computer as a generic stand-in for a super duper magic computer. They say QCs will only substantially improve our cryptography and chemical/micro-biological simulation abilities. You can say that the latter task is relevant to simulating humans, but that would be asserting things about the yet-uninvented brain simulation discipline: it may very well turn out that simulating human brains is really just matrix multiplication, and then QCs would be no better, and perhaps worse, than conventional parallel supercomputers.

The second part of the claim is in essence an assertion about how 'gullible' the human brain is: how few resources it takes to convince it that it's actually experiencing a much bigger reality. On its face, it seems true enough: a modern open-world game like, say, GTA V simulates a city-sized reality with the resources of a lap-sized computer. So even if the computers scaled to a room-sized data center to enable an extremely realistic and much more immersive simulation, you still have a room-to-city ratio, which is actually much more generous than the each-person-gets-an-earth ratio (earth, 510.1 million km^2 ~=~ 5*10^14 m^2 in area, can fit 10^12 computing rooms each 500 m^2 in area, so about 100 such rooms for each of a 10-billion population. Vastly more efficiency can be squeezed by using volumes instead of areas and/or more dense computing matter and/or more efficient simulation algorithms and/or etc etc etc.). If 95 room-computers were devoted to simulating the current part of the world your brain is focused on and 5 were used to simulate a less faithful ambient world around that, I guess it sounds about right that you can give an HD earth to each human while using less than an earth's worth of resources.
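
Checking the area arithmetic above (same toy assumptions as the comment: surface area only, 500 m^2 rooms, 10 billion people):

```python
earth_area_m2 = 5.1e14   # ~510.1 million km^2, as in the comment
room_area_m2 = 500
population = 1e10        # 10 billion

rooms = earth_area_m2 / room_area_m2       # ~1e12 rooms
print(f"{rooms:.1e}", rooms / population)  # ~1.0e+12 rooms, ~100 per person
```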

BUT, those humans will live forever, and computers need energy. More than energy, computers *hate* heat, vehemently and desperately. As the scientist and futurist Isaac Arthur always remarks, heat is the real enemy opposing any kind of order-building activity. Heat is Entropy's handmaid. So, I won't be surprised if, after you factor in heat dissipation and energy supply for about a billion years per human (a reasonable definition of 'forever'), it turns out you actually need a Solar System's worth of space and material to house your simulation-blessed civilization. Still not bad, but hardly "an earth per human without using all of real earth" good.

>A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass.

You're ignoring the sheer unimaginable mass and energy that went into the whole ecosystem of processes that made the smartphone possible. Cutting edge chip fabrication facilities cost in the range of 1 billion dollars each; how many shovels *is* that? And that's just the SoC at the very deep core of your phone. How many tons of matter and millions of megajoules and billions of dollars is the mining industry that extracted the lithium in the batteries and the smelting industry which crafted the glass in its screen? How much of those again in the shipping industry that got its myriad parts and components from all over the world and into one Asian factory somewhere and then transported the final phone to the myriad shores where it's needed?

Once you do all of that, congratulations, you just created a dumb piece of intricately-put-together matter that has the *potential* to be useful, but is actually dumb as a brick until it's loaded with a bunch of kinky patterns of voltages called Software. And once you get to Software, oh boy. How many people and how much energy and how many billions went into the Linux kernel? The Android OS in general? The absolute mess that is the app ecosystem?

Now, I can expect one of your objections: that all of this is a one-time cost per phone, and that once you have the phone and all its software you can finally use it to generate value to pay back all that you had put in and more. But are you sure? Smartphones are not eternal, you know; I bet the average lasts about 3 to 5 years. I bought mine in late 2018 and I feel like an ancient fossil keeping it while all of those around me switched 2 or 3 times in this period. Are you sure that in this awfully short lifetime, smartphones actually generate enough (happiness, energy, utility, whatever) to pay back all that went into them? The Linux kernel (like all complex software) is full of bugs; does the increasing amount of effort and money and brainpower spent developing and maintaining it justify the amount of value it actually generates? When, finally, Linux is discarded in favor of some hot new thing (like all complex software), will it net out positive or negative on average?

And this is really a pet peeve of mine. People are so worshipful of the utter mess that is modern civilization, so quick to marvel at its amusing gadgets, so inclined to pat themselves on the back for creating it. And, sure, modern civilization is *different*; I can do things with my smartphone that nobody prior to its invention could imagine, not in this shape and form. But "different" is not "efficient": walking on your hands is very different, and also very dumb and inefficient if you're an average human. Are you actually *sure*, like bet-your-life-sure, that modern civilization is actually net positive or even break-even, and not just a party trick that looks cool but is actually dumb and inefficient when you start measuring?

Expand full comment

Could be, a lot of it. Windows, toilet paper, backhoes, trains, etc. And especially sanitation and antibiotics, vaccines, transfusions, etc. Keepers. Massive marketing of junk food and the junk viewing on media, no matter how much the critics love Breaking Bad, nah.

Expand full comment

The problem with this sort of argument is that if you devalue the things needed for survival to 0.00001% of the economy then someone can buy them for $5000 and now they own you.

Even granting a quantum computer that everyone lives in (which is a philosophical trap and you should feel bad for bringing it up), someone's gotta keep it running. If there are trillions of people living in it, protecting it is extremely important. I don't know what the percentage should be, but if you value survival needs at, say, 20% of the economy, that places a limit on how valuable the other 80% can be.

Expand full comment

I always used to make arguments against the repugnant conclusion by saying step C (equalising happiness) was smuggling in communism, or the abolition of Art and Science, etc.

I still think it shows some weird unconscious modern axioms that the step "now equalise everything between people" is seen as uncontroversial and most proofs spend little time on it.

However, I think I'm going to follow OP's suggestion and just tell this nonsense to bugger off.

Expand full comment
author

I think if you have specific people, you can argue they shouldn't be equal, but since we're at an abstract level where none of the people have any qualities, it seems weird to actively prefer that a randomly chosen subset be happier than some other randomly chosen subset, even at the cost of making average global utility lower.

Expand full comment

Keeping an assumption of all happiness examples being net positive here.

I still think there's a difference between a life of suffering mollified by treats and a genuinely good life. I can see the real-life logic of this gradually removing good lives in pursuit of 20% fewer botfly infestations. The "true equality is slime" meme.

But I have a nice home and family, so I think I probably fear being "equalised."

Expand full comment

The preference only makes sense when comparing with the world where the unhappier people don't exist. The comparison is only important if we consider a transition between the worlds.

If the happier people are not supposed to be affected by the new people appearing, then they are not supposed to be affected. Otherwise they have good arguments to resist their appearance. As you said this is bait-and-switch.

Expand full comment

When doing math, I might say that at a certain level of approximation, it's fine to round to whole numbers, and it often is. But if it happens that I'm adding 0.1 (rounded down to 0) a million times, the approximation error has swamped the measurement.

The abstraction involved in the repugnant conclusion -- the assumptions about how humans are happy or unhappy, and about how experiencing a mix of joy and suffering maps to abstract quals -- results in the same type of failure mode.
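(A minimal sketch of that rounding point, with made-up numbers rather than anything from the textbook example:)

```python
# A million values of 0.1, each individually "safe" to round down to 0.
values = [0.1] * 1_000_000

exact_total = sum(values)                      # ~100,000, give or take float noise
rounded_total = sum(round(v) for v in values)  # 0: the per-item approximation ate everything

print(exact_total, rounded_total)
```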

Expand full comment

> we're at an abstract level where none of the people have any qualities

This is exactly why I very much enjoyed how you walked away from the mugging in the review and refused to play the philosophy game. None of this is real, it's just words. First show me the button that can create 5 billion people with happiness 80, and then I'll start worrying about what it means to press it.

(And I'll probably conclude that a world where "happiness" reduces to one number that can be exactly measured is so weird and scary and different from what I'm used to that I don't want any part in bringing it about.)

Expand full comment

Alas the author is not Scott, it's part of the book review competition. Scott remains in danger of being mugged! (maybe)

Though I don't know if it's totally alarmist to suggest the repugnant conclusion might meme people into destroying civilisation in pursuit of "total aggregate happiness."

A version of it is nearly making me have a fourth kid so...

Expand full comment

> Alas the author is not Scott, it's part of the book review competition.

Huh? I thought the title for those started with "Your Book Review", whereas this one is just "Book Review". Maybe it's mistitled, but I thought it read very much like Scott :)

(It's plenty confusing though, I remember last year, Scott posted a great review of Arabian Nights in the middle of the book review contest, many people liked it and commented they were planning to vote for it in the contest, until they realized, oh wait, it's "Book Review: ...", not "*Your* Book Review: ...".)

Expand full comment

You're right and it's happened again.

Expand full comment

I think this is exactly on point. Because if there's a button, we're talking about actual people again, aren't we? You've destroyed billions of people in exchange for more billions of people. If not, then the premise that you can "create another 10 billion people who won't have an effect on the first 5 billion" is wrong. There's a bizarrely-omnipotent/omniscient Decider able to magic people into existence. That decider is going to magic away 5 points of happiness in your life, whatever that means.

The mugging comes when you transfer a thought experiment riddled with unrealistic assumptions into the realistic world:

"Imagine you know they won't have an impact on the other people", except that's not possible.

"Okay, but imagine you know they'll be blissfully happy or abjectly miserable or whatever I need to make this thought experiment work." You can't know that.

"Imagine you could, though!" Okay, I've imagined a world that doesn't exist. What do you want me to do with it?

"Now apply what you've learned to the world that does exist." That's not how this works. You asked me to image an impossible world, now you want me to, what, imagine the impossible world applies to the real one? Nope.

Expand full comment

Unless one considers inequality to be a fundamental part of, perhaps even the, human production function, in which case at least _some_ inequality is very desirable indeed.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I can understand *instrumentally* valuing some level of inequality, if inequality is necessary to produce more of what we terminally value (whether that's total wellbeing or average wellbeing or something else entirely). It seems to me like this is what you're saying: inequality is valuable *because* it's necessary to the human production function. It's valuable because of its consequences, but not *terminally* valuable in itself.

But I don't think instrumentally valuing inequality is enough to escape the repugnant conclusion.

Here's how I'd operationalize what you're saying (by my understanding, possibly incorrectly): "World B, due to the benefits of inequality, will at some later point be better off than World C." But now this is a different thought experiment; we've stopped engaging with worlds A, B, and C as presented above and decided to compare some other worlds B-prime and C-prime instead. (Also, by virtue of what is B-prime better than C-prime? Total happiness, average happiness, both, something else?)

Another way to get at the same point: we can interpret the values of A, B, and C as being somehow integrated over all time, factoring in all the value that those worlds will create. Taking this view, valuing B over C seems to require valuing inequality *as such*, absent of any consequences (since the consequences are already fully accounted for, and C ended up with more total and average happiness). And if you can no longer justify your preference for greater inequality based on its consequences, it seems hard to justify at all.

edit: see also this comment below; the argument just relies on a tiny nudge towards equality (as long as it makes the happier population slightly less happy) -- you don't need to go all the way to full equality.

https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future/comment/8566058

Expand full comment

I don't think I would value *inequality* for its own sake, but I do think I place nonzero utility on "how happy anyone is allowed to be"; that is, going from "the happiest person in the universe is at 100" to "the happiest person in the universe is at 90" is intrinsically negative to some degree. It would then follow that "level out everyone's happiness" is not an automatic good. Intuitively I'd be most inclined to support it when the least happy people are most miserable (as opposed to "good but merely less good"), so it would also follow that the worlds that need "only a tiny nudge" to equality are exactly the ones where I'd be most likely to reject the proposal of levelling everyone off.

Expand full comment

Isn't that a sign that "let us imagine an abstract population A, where N more people with utility = previous utility minus delta appear as if by magic" is maybe not a useful way to think about real-life ethics?

Expand full comment

Changing happiness would take effort, the effort may (would) make more people unhappier, and there is no guarantee that the end point would make more (or any) people happier.

Removing this is like working at the "assume a spherical chicken of uniform density" level of philosophy - it's a way to spend an afternoon, but not a way to decide what to devote your life to.

I have not thought it through, but the "repugnant conclusion" probably fails on this as well.

Expand full comment

Yes. I think it’s no more problematic than the thought experiment about a utilitarian doctor who can cut up one healthy innocent person to provide organs to save five lives. Sounds horrific, but at least in part it’s because we really can’t imagine the thought experiment where you’ve got such certainty that this will work, with no other ill effects.

Expand full comment

More so, I think we intuitively know many other ill effects (such as people actively avoiding doctors!), and reject the theory before we even really consider it.

Eugenics seemed right to a large group of people, including many of the brightest minds of their time. We now fully reject most of what they said, and it's not because they weren't thoughtful. It's because of the practical application (as demonstrated most effectively by Hitler), and how obviously wrong it appeared to pretty much everyone. A real life application of "smart people should breed more and dumb people less" gets you a Hitler. Unintended consequences, for sure, but real.

Expand full comment

Well, Hitler did his level best to wipe out the highest-IQ population in Europe, which doesn't look like a central example of eugenics. Did the Nazis do anything to try to impose "smart people breed more dumb people breed less?" Other than forcibly sterilizing or murdering seriously disabled people, I've never heard of anything. Were they paying a bounty for German scientists or doctors to have more kids or something?

A bunch of the countries Hitler was at war with also had eugenics programs. And the reason they were terrible wasn't death camps or murder factories, it was the individual-level awfulness of having some bureaucrat or judge have the power to declare that "three generations of imbeciles are enough" and have you forcibly sterilized. In the US, those decisions were often partly based on race (because it was a deeply racist society), but it's not like that kind of coercive eugenics would be okay if it *weren't* applied in a racially biased manner. A bunch of Scandinavian countries also had eugenics programs, which presumably were all blonde haired blue eyed people sterilizing other blonde haired blue eyed people. I doubt this made it any less nasty.

Expand full comment

I largely agree with "Step C is probably the problem".

My read is this: The problem is the unspoken assumption that the "goodness" of a world can be taken by linearly summing the "goodness" of each life within it.

How certain are we that this is a realistic assumption? Why *should* it be true? There are an infinity of ways to summarize a population of individual numbers, and choosing one is a fraught task even in mundane real-world statistics.

Our judgment on a world should not contain *any* delight about its highest highs, its greatest heroes? Nor any unique appreciation for the valor exhibited by those at the lowest lows?

Given how aesthetic human morality is--the more I think about it, the less I think that "just sum up all the numbers" is an obviously right or good answer. I think it's very likely that redistributing all the utility in World B to get to World C can, at least potentially, make that world *worse*.

Has some downer implications for the pithy phrase "shut up and multiply", but we can stand to lose a catchphrase or two. Doesn't even mean utilitarianism is toast. Just have to be more careful about how many conclusions you draw and the confidence with which you claim them.

Expand full comment

You get to the same result even if it's "lower the original people's wellbeing by 0.0001% to raise the new people's wellbeing up (either just a bit, or all the way to this level)." It doesn't require full equality, just that there is some reasonable tradeoff between the wellbeing of the original people and the new people. I think most people would agree that the annoyance of having to retie a shoelace is worth others having a great and fulfilling day, and once you accept any tradeoff in that regard the repugnant conclusion is back. A toy simulation of that ratchet, with entirely invented numbers, is sketched below: repeated "tiny cost, big gain" steps push total utility up while dragging the average toward barely-positive.
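```python
# Toy model of the repugnant-conclusion ratchet; all numbers are invented.
count, level = 5_000_000_000, 100.0   # world A: 5 billion people at wellbeing 100

for step in range(30):
    # Step B: add an equal number of new people at half the current level.
    # Nobody who already exists is made worse off yet.
    new_count, new_level = count, level / 2
    # Step C: equalise everyone at just above the combined average --
    # higher total *and* higher average than the unequal world B.
    level = (count * level + new_count * new_level) / (count + new_count) + 0.01
    count += new_count

print(f"{count:.2e} people at wellbeing {level:.2f}")
# Population has exploded while per-person wellbeing slides toward "barely worth living".
```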

Expand full comment

This guy is trying to seagull my eyes 100%.

Expand full comment

Ahaha, I'm afraid the seagulls always do start with "you have so many chips, just one please!" and eventually they end up with your eyeballs.

Expand full comment

More seriously, if the argument can derive something huge with the addition of a very minor tradeoff, then it feels like it should be able to derive it without it OR there's something wrong with the argument (not that I can find it)

Expand full comment

To quote Sir Boyle Roche:

Opposing a grant for some public works:

“What, Mr. Speaker, and so we are to beggar ourselves for the fear of vexing posterity! Now, I would ask the honourable gentleman, and this still more honourable house, why we should put ourselves out of the way to do anything for posterity; for what has posterity done for us? I apprehend gentlemen have entirely mistaken my words. I assure the house that by posterity I do not mean my ancestors, but those who are to come immediately after them.”

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Despite what most people would agree on, perhaps all such exchanges should be Pareto improvements, and that's the only way out of the conclusion.

I wasn't really expecting alt-right turbolibertarianism to be the only consistent political framework yet here we are.

Expand full comment

Isn't that the good old-fashioned reductio ad absurdum? If we start off with "having to retie your shoelaces is a trivial annoyance" and then it is supposed to bring us to "and now you have to bring into being a gazillion people who live on moss and rainwater and sleep on beds of nails", then surely you can say "that is absurd, and I am getting off this bus right here at this stop". Yeah, if I ride all the way to the end of the line, the Repugnant Conclusion is the last stop, but in that scenario, it's the last stop where the bus careers over the edge of the cliff. Nobody is forced to sit in the bus all the way to that ending, and nobody can be forced to do so.

Expand full comment

I think the general problem with this bus ride is that Rationalists tend to assume that all relevant parameters scale linearly. Except for AI, which grows exponentially. But basically, these are the only two options. Therefore, given two points such as "I'm having a normal day" and "I was having a normal day but now my shoelaces are untied", Rationalists will immediately draw a straight line through these points directly to "a gazillion seagulls feasting on a gazillion eyeballs". In reality, however, pretty much nothing in nature works this way -- and certainly not human psychology.

Expand full comment

Reductio ad absurdum is a valid form of proof, though! So some point in the chain of inference must be incorrect if you agree with the premise but disagree with the conclusion. It's disturbing if you can't point out where the error is, because that may mean you are wrong and the conclusion holds (or the error is just hidden well).

Expand full comment

Step C isn't just "equalise everything", it's "equalise everything *and then make everything better for everyone*". B is "X people at level 100, X people at level 60"; C is "2X people at level 95".

You have to view equality not just as neutral but as *actively bad* to reject the RC on that basis.

Expand full comment

I thought it was obvious that equality was actively bad?

We literally wouldn't exist without inequality because absent inequalities there is no sexual competition, therefore no evolution and no us.

Expand full comment

Evolution isn’t logically necessary to produce us. If you’re going to worry about practical difficulties like that, there are so many earlier places to stop the argument. The argument is about imagining if some things *were* possible, and people were as well off as described even after accounting for all physical problems, whether they would be better.

Expand full comment

Our existence is fantastically improbable without evolution.

Expand full comment

People also assume that equalizing happiness is actually possible. In financial terms it at least facially looks so - you can take money from one person and give it to another. But how do you equalize other aspects of happiness? What do you do for people whose primary happiness motivations are based on other factors such as health or status? What if making one person happy requires making others unhappy?

Expand full comment

"There are only 10^67 atoms in our lightcone"

Are there really? That doesn't seem right. There are about 10^57 atoms in the sun

https://www.quora.com/How-many-atoms-fit-in-the-sun

So 10^67 atoms is what we'd get if there were about ten billion stars of equal average size in our light cone. This seems, at least, inconsistent with the supposition that we might colonize the Virgo Supercluster (population: about a trillion stars.)

Expand full comment
author

I think this is implying "light cone within the 10,000 years mentioned by the example".

Expand full comment

Yes, though in our paper - https://philpapers.org/rec/MANWIT-6 - we used 100,000 light years, which seems like a better number given that it basically encompasses the Milky Way, which is the fundamental limit on human expansion via interstellar colonization over the next 100,000 years and, due to spacing between galaxies, is a moderately strong limit even over the next ten million years. (Modulo some of the smaller satellite galaxies, which might expand this to 150,000 light years, albeit with a much smaller corresponding increase in mass.)

Expand full comment

Conditional on the child's existence, it's better for them to be healthy than neutral, but you can't condition on that if you're trying to decide whether to create them.

If our options are "sick child", "neutral child", and "do nothing", it's reasonable to say that creating the neutral child and doing nothing are morally equal for the purposes of this comparison; but if we also have the option "healthy child", then in that comparison we might treat doing nothing as equal to creating the healthy child. That might sound inconsistent, but the actual rule here is that doing nothing is equal to the best positive-or-neutral child creation option (whatever that might be), and better than any negative one.

For an example of other choices that work kind of like this - imagine you have two options: play Civilization and lose, or go to a moderately interesting museum. It's hard to say that one of these options is better than the other, so you might as well treat them as equal. But now suppose that you also have the option of playing Civ and winning. That's presumably more fun than losing, but it's still not clearly better than the museum, so now "play Civ and win" and "museum" are equal, while "play Civ and lose" is eliminated as an inferior choice.

Expand full comment

How close would you say your conceptualization of this is to the idea of Pareto optimality?

Expand full comment

There's a lot of silly questions like this related to the transitivity of preferences. Two example from a behavioral economics textbook that I use (https://www.amazon.com/Course-Behavioral-Economics-Erik-Angner/dp/1352010801/):

Suppose you are indifferent between a vacation in Florida versus a vacation in California. Now someone offers you that same vacation in Florida, plus an apple. Strictly better than no apple, so transitivity says you should also strictly prefer that to the California vacation.

Or suppose you have a line of 1000 cups of tea, each with one more grain of sugar than the last. You can't tell the difference between cups next to each other so you're indifferent between them, but the last cup is clearly sweeter than the first. According to transitivity, you should also be indifferent between the first and last.

In reality, people seem to only register differences in utility of certain relative amounts. Trying to make arguments appealing to both transitivity and tiny differences in utility is basically philosophical mugging.
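(A minimal sketch of the tea-cup case; the 50-grain threshold is invented purely for illustration:)

```python
# 1000 cups of tea; cup i has i grains of sugar. Suppose a taster cannot
# tell two cups apart unless they differ by at least 50 grains.
JND = 50  # "just-noticeable difference", an invented number

def indifferent(a: int, b: int) -> bool:
    return abs(a - b) < JND

cups = list(range(1000))

print(all(indifferent(i, i + 1) for i in cups[:-1]))  # True: every adjacent pair "tastes the same"
print(indifferent(cups[0], cups[-1]))                 # False: the first and last cups clearly differ
# Sub-threshold "indifference" isn't transitive, so chaining it 999 times
# tells you nothing about the endpoints.
```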

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> MacAskill introduces long-termism with the Broken Bottle hypothetical: you are hiking in the forest and you drop a bottle. It breaks into sharp glass shards. You expect a barefoot child to run down the trail and injure herself. Should you pick up the shards? What if the trail is rarely used, and it would be a whole year before the expected injury? What if it is very rarely used, and it would be a millennium?

This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass quickly becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never run even a mile unshod.

Expand full comment

I thought of this too but this is not the point of the argument. Just pretend it's a hypothetical universe with hypothetically forever sharp glass, otherwise barefoot-welcoming trail and lots of unshod children with tender feet. It's easy enough to construct a mental universe in which this dilemma works.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

It's not the point, no, but my point is: if the hypothetical underlying this entire book is so ignorant of reality, how much should we trust the author?

Expand full comment

It’s important to improve thought experiments to appropriately consider physical possibilities and our intuitions. But with a philosophy book or a math book, you shouldn’t be *trusting* the author - the author is just stepping you through reasoning and you should be deciding whether you trust *yourself*.

Expand full comment

I wouldn't characterize this book as a work of moral philosophy, it makes too many empirical claims.

If you're promoting "long-termism", you need to demonstrate the ability to reason about long-term outcomes. Demonstrating a comical inability to do so in a toy case you've constructed as your ideal hypothetical points to either your own incompetence or the futility of the project. In either case, it casts doubt on his overall argument even if the reasoning is valid.

Expand full comment

Beautifully and succinctly expressed. Thank you!

Expand full comment

I trust myself that when I see "oh no, the cute moppet argument!", I decide the author of same is full of beans and I don't accept any arguments they are trying to sell me, look at this lovely bridge, on special offer today, wouldn't anyone be proud to own it?

The Drowning Child and the Barefoot Child are both recipients of the Darwin Awards and we should be glad to remove such stupidity so early from the gene pool!

Expand full comment

I laughed out loud, thank you

Expand full comment

My standard response to the Drowning Child is that on the first day that it happens, I might well save the kid and think nothing more of it.

If I find myself wading in to save the kid again the very next day, then *someone* - the child, its caretakers, or whoever is responsible for the body of water; possibly all three - is subsequently getting thrashed within an inch of their life, so they know better in the future.

On the third day, I'm taking the bus.

Expand full comment

Mine is that I'm wearing the expensive suit because I'm on my way to a job interview, and by jumping into a torrential wall of water, I die; my family is made homeless, and as my own daughter lies dying of starvation, phthisis and exposure, she gasps out "what idiot jumps into a flash flood anyway?"

Expand full comment

But thought experiments of this sort are not meant to be realistic at all (trolley problem, anyone?) In general it's a reasonable heuristic (don't trust an author who commits basic blunders) but I don't think it applies in this case.

Kenny Easwaran makes a valid point below as well. You don't need to trust philosophers, you only need to examine the arguments they're laying out.

Expand full comment

You still need for your thought experiment to map onto reality somehow, otherwise you're merely counting angels dancing on pinheads.

The trolley problem is not a situation anyone is likely to find themselves in at any point in their life, but it's not like it is fundamentally *unrealistic*. It *could* happen to you.

More importantly, the trolley problem is merely one example of the sort of problems that real people *do*, in fact, face quite regularly. Take the current war in Ukraine - there are civilians trapped in a location that is subject to heavy enemy bombardment; do you attempt to evacuate them, with a significant chance of failure and loss of your rescue party, or do you abandon them to their fate?

However, there are numerous thought experiments that are much harder to map onto any sort of real-life scenario. I'd say that asking "so, how does this cash out?" is well-advised before you even begin to evaluate the argument, because most thought experiments are, frankly, pretty bad.

Expand full comment

To put the point differently, if the author uses a simplified picture of the world for his hypotheticals without realizing it he may be badly underestimating the difficulty, in a complicated world, of knowing the long term consequences of current choices.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

That's a valid criticism. However, I feel that there are two questions here:

1) What is the desired long-term outcome of our current choices, from a moral standpoint?

2) How do we go about achieving the desired long-term outcome, given the difficulty of predicting long-term consequences of current choices?

Those are separate questions and you can't criticize the author for failing to adequately address question 2 if he's focusing on addressing question 1 (which I believe he is when he conducts the glass shard thought experiment).

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I don't think the hypothetical addresses only question 1. The answer to question 1 here is simple: we would rather that our actions not maim any children, now, or in the future.

It's in the attempt to predict the long-term consequences of a simple action (leaving glass shards on a trail, given that a barefoot child will be running down it in the future), which is strictly part of question 2, that MacAskill trips over himself. In a hypothetical he set up to be the simplest possible case where we can obviously predict the consequences of our actions, he fails to correctly predict the consequences of the action.

He picked this hypothetical! As his obvious case! This is his drowning child, it's like if Peter Singer intentionally picked Aquaman as the drowning child, or if the original trolley problem included a third track that didn't require killing anyone.

Expand full comment

I don't think question 1 is so simple. Of course ideally we would not injure any children at any point in time. The question is, is there a moral discount applied to children in the distant future? Is hurting a child a million years from now exactly as bad as hurting a child today? Or better? Or worse?

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

The answer to 2) is easy: do nothing.

Technological progress has happened and is happening. We have reason to expect it to continue to happen. People in the future will be inconceivably wealthier than us, and even more capable of solving their problems and fixing things, given that they will have many more examples of how (and how not) to do it than we do.

We don't owe them anything. They owe us, the bastards.

Given this, the answer to 1 is easy also. What is hard for us is easy for them; and we should not strive to control other people's lives.

Expand full comment

"Pretend I have a hypothetical which justifies my conclusion" is not good policy when engaging with a difficult conclusion. (Here "you" isn't you, it's MacAskill)

Expand full comment

But it's not that. The conclusion here is not "the shards will hurt a kid a million years from now". The conclusion is "IF we knew the shards would hurt a kid a million years from now, we should care about it as much as about hurting the kid today." For an "If X, then Y" statement to be true, the X part need not be true at all.

Expand full comment

But MacAskill is failing to construct a good hypothetical. People are sticking on this not because they can't imagine a hypothetical that supports the point, but because "he can't even make one" is a point in favor of "long termists don't realize how much they're misunderstanding and how wrong their interventions might be".

Expand full comment

I don't disagree on this. I disagree on a different point but I don't know how to put it more clearly than I already have. :(

Expand full comment

I think I got it from the other reply chain, no worries

Expand full comment

Ok, replace it with dropping some rust resistant nails.

Expand full comment

Or why not bear traps? I'm in agreement with Mentat Saboteur on this: such hypotheticals are not meant to bear any resemblance to reality, but rather to set up an appeal to the emotions (ironic, considering that the people creating them would say they are using reason and logic).

After all, why make it a *barefoot* *child* running up the trail? Why not an adult? Are we supposed to care less about a six foot four beefy hairy guy in bare feet getting cut on glass?

For maximum appeal to moderns, drop the child and make it a cute widdle puppy or kitty. People care more about their animals today.

Expand full comment

I was thinking the glass would just get covered with dirt over time. Layers accumulating over time is pretty basic geological history.

Expand full comment

My reaction to the glass shards example is that I feel intuitively less concerned about someone stepping on it in a hundred years than tomorrow and I'm someone who finds caring about future people reasonably intuitive in general. A small harm in a world a hundred years in the future that I barely comprehend just seems less immediate than it happening tomorrow. I care about the future people existing and their big picture happiness, but caring about them stepping on glass seems like micro-managing the future in a way that doesn't seem worth it.

So for me that example does the opposite of what it's intended to.

Expand full comment

It's inadvertently a great hypothetical because it shows how many wrong assumptions can get folded into long-termism.

Expand full comment

I'm not disagreeing with this part at all.

It's a decent hypothetical for answering the first question I referred to earlier ("should we care today about a better tomorrow?", where tomorrow stands for "hundred/thousand/million years from now) and a very flawed one for answering the second one ("what should we do today for a better tomorrow?")

Expand full comment

Fair.

I personally don't think it provides much value for the first question beyond what's already common: we care about the future, but at a discount because of how little we understand the future. It's good not to break the future, but beyond that, it's in many ways not our place to manage it.

In this way I think the second question is intertwined with the first one, and the hypothetical doesn't move me, at least.

Expand full comment

The real-world version of this is the use of landmines and cluster munitions in war, right? Kids born a generation after the end of the war occasionally lose a leg stepping on an old mine. How much should you be willing to give up in military success today in order to avoid some kids a generation from now losing legs?

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

But I think it's completely uncontroversial among regular non-EA people that landmines aren't worth it; if you think they are worth it, you are probably a self-interested belligerent in an actual war, and nobody else (including the belligerents in other wars) wants you to do it. Even the US, a place of questionable military practices, hasn't used landmines in 30 years.

The point of which is to say, EA isn't bringing anything new to the table in examples like these.

Expand full comment
(Banned)Aug 25, 2022·edited Aug 25, 2022

This whole discussion is asinine. The vast majority of people do not know or care that glass erodes over these time frames, therefore the hypothetical is for all intents and purposes identical to one made in a world where glass does not erode that quickly.

This whole argument is barely one pip above the level of saying that somebody made a grammatical error and so technically their argument is incoherent and the making of this error means their reasoning should not be trusted.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

When I think of happiness 0.01, I don't think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.

If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn't seem so repugnant. If you take a page from the negative utilitarians' book (without subscribing fully to them), you can weight the negatives of pain higher than the positives of pleasure, and say that neutral needs many times more pleasure than pain because pain is more bad than pleasure is good.

Another way to put it is that a life of 0.01 happiness is a life you must actually decide you'd want to live, in addition to your own life, if you had the choice to. If your intuition tells you that you wouldn't want to live it, then its value is not truly >0, and you must shift the scale. Then, once your intuition tells you that this is a life you'd marginally prefer to get to experience yourself, then the repugnant conclusion no longer seems repugnant.

Expand full comment

Came here to post this. Additionally, .01 can be characterized as “muzak and potatoes” but could easily also be lots of highs and lows that add up to a life the people living it are glad they have: heartbreak, turning that heartbreak into art, and so on. So a .01x1e100 world could be very vibrant and interesting, not just “gray.” It would contain more awesome experiences, insights, and cultural diversity than the 5 billion person flourishing world, and no one would feel as though their participation in the enterprise wasn’t worth it.

A more realistic concern is that a galaxy of humans or a Dyson sphere full of ems or whatever that *didn't* put a lot of effort and coordination and whatever into ensuring everyone's flourishing would almost certainly feature a lot of extreme suffering and lives you wouldn't want to have lived. I consider this pretty relevant to the considerations of whether you'd prefer singletons vs competitive equilibria.

Expand full comment

This is probably a lot of the problem with the repugnant conclusion - we really don’t know how to visualize it. (I sometimes say “the repugnant conclusion is that millions of people should be allowed to live in skyscrapers in San Francisco without cars”.)

Expand full comment

RE: muzak and potatoes, it does depend what the muzak is (e.g. continuous loops of John Ritter's Christmas treacle would indeed be hell on earth). As an Irish person, I cannot agree that potatoes are a sign of badness 😁

Expand full comment

It’s not meant to be bad - it’s meant to be just fine.

Expand full comment

Yeah this is pretty close to what I wanted to say. I would consider the thresholds of quality-of-life for a life to be "worth starting" and "worth continuing" to actually be pretty far apart (with the former being much higher than the latter - e.g. for my own life, I very very strongly want not to die, but I'm kind of on the fence as to whether or not I'm glad I was born), and a main thing that makes the repugnant conclusion seem repugnant is people assuming that the former must be as low as the latter.

Expand full comment

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

Any view that takes the average into account falls into the Aliens on Alpha Centauri problem, where if there are a quadrillion aliens living near Alpha Centauri, universal average utility is mostly determined by them, so whether it's good or bad to create new people depends mostly on how happy or miserable they are, even if we never interact with them. If those aliens are miserable, a 0.001 human life is raising the average, so we still basically get the Repugnant Conclusion; if they're living lives of bliss, then even the best human life brings down the average and we shouldn't create it.
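(With invented figures, the problem is just that the distant population dominates the denominator:)

```python
# Invented numbers: a quadrillion aliens pin any universe-wide average.
aliens = 10**15
humans = 8 * 10**9
human_avg = 50.0   # assumed average wellbeing of existing humans

def raises_average(candidate_life: float, alien_happiness: float) -> bool:
    average = (aliens * alien_happiness + humans * human_avg) / (aliens + humans)
    return candidate_life > average

print(raises_average(0.001, alien_happiness=-10.0))  # True: a barely-positive life "improves" the universe
print(raises_average(99.0, alien_happiness=500.0))   # False: even a superb human life drags the average down
```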

Expand full comment

Under the assumption that your ethical system is isotropic across alienness and physical distance, yes…

Expand full comment

Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

Some religions do, but I'd be surprised to find a modern atheist philosopher among them. But if you accept the premise that preventing the existence of a future person is as bad as killing an existing person...

Expand full comment
author

The Repugnant Conclusion doesn't imply that preventing the existence of a future person is as bad as killing an existing person!

I think if you accept the Conclusion, then (assuming your children will have better than zero lives and not contribute to some kind of resource-shortage) having children becomes a morally good thing to do, but not necessarily better than donating to charity, being a vegetarian, voting for the right side in elections, or anything else that most people consider nice but not obligatory.

Expand full comment

Clear, thanks!

Expand full comment

Scott's answer is somewhere on the continuum of "taking utilitarianism seriously" other than 100% (like most Rationalists - EY said 75% on Twitter recently https://twitter.com/ESYudkowsky/status/1497157447219232768). At 100% utilitarian seriousness, all good things are obligatory; the system is a *notoriously*-harsh mistress in that regard.

However, if having kids is for some reason costly* for you to do, even utilitarianisms that embrace the RC say it may not be the best life choice *for you*. If, say, having kids would cause you to not prevent an X-risk, then having kids is not the most good you can do.

*Has to be unusually costly, though, otherwise your utilitarianism is itself an X-risk by telling all people to not have kids.

Expand full comment

> At 100% utilitarian seriousness, all good things are obligatory; the system is a notoriously-harsh mistress in that regard.

I think that claiming this is "utilitarian seriousness" is ignoring the fact that there are plenty of non-utilitarian moral maximalists out there, most notably Jesus Christ. I don't think moral maximalism and utilitarianism are actually the same thing, despite how often they're conflated.

Expand full comment

There are a bunch of moral systems that even at 100% seriousness don't prescribe an exact sequence of actions, and a bunch more that have clearly-delineated "tiers" of obligations (e.g. Kant's imperfect duty). Utilitarianism has no obvious line between the good and the obligatory.

Expand full comment

Yes, and deontology also has no obvious line between the good and the obligatory, nor does virtue ethics, because you must add that line yourself, as Kant did in his specific form of deontology.

Expand full comment

>>>nice but not obligatory

This covers a lot of territory, even without the 'most people' qualifier.

Are there any positive things that are considered nice *and* obligatory? By EAs if not most people?

(Also, I think that whether those things are even considered nice depends a lot on ones circle.)

Expand full comment

Isn't the whole point of EA that YES, it is morally obligatory to increase overall utility by all the means available to you?

Expand full comment

It's more like, as long as you're acting to increase utility, you might as well do it in the most efficient way.

Expand full comment

Only if the well being of those children will be greater than the cost in well being to others.

Expand full comment

Not everyone who accepts it is a utilitarian. I mentioned Michael Huemer, for example.

Expand full comment

The suppositions of misery - whether impoverished nations or sick children- to me always seem to leave aside an important possibility of improvement.

The nation could discover a rare earth mineral. A medical breakthrough could change the course of the lives of the children. A social habit could change.

In fact, while the last half millennium has been Something Else, and Past Performance Is No Guarantee of Future Returns, it does seem that future improvements are, if not most likely, at least a highly possible outcome that needs consideration.

(Been a while since a post has contained such a density of scissor topics.)

Expand full comment

"they decided to burn “long-termism” into the collective consciousness, and they sure succeeded."

If the goal is "one-tenth the penetration of anti-racism" or some such, that at best remains unclear. It's worth dwelling on your identity as an EA + pre-orderer here and realizing that very few media campaigns have ever been targeted so carefully at "people like you." Someone on Facebook asked if anyone could remember a book getting more coverage and I think this response would hold up under investigation:

"Many biographies/autobiographies of powerful people; stuff by Malcom Gladwell, Tai-Nehisi Coates, Freakonomics, The Secret… worth remembering that this is a rare coincidence where you sit impossibly central in the book's target demo. Like if you were a career ANC member, A Long Walk to Freedom would have been everywhere for you at one point"

Expand full comment
author

One tenth the penetration of anti-racism would be amazing. I don't think long-termism is anywhere near that amount yet but I think it's done very well with the resources available to it. This is like your product running an ad campaign, seeing sales dectuple, and complaining that it's still not as well-known as Coca-Cola.

Expand full comment

Extremely fair. My bar for "burnt into the collective consciousness" is just higher than yours. Even on your terms, I still think it's too early to tell whether this has made much of an impression on people outside of EA. There should be predictions about changes in GWWC pledge growth rates, EA Forum engagement, 80k newsletter subs. My prior is two weeks of double the baseline, decaying back to 5-10% above baseline by November. It's big and you've warded off stagnation, but you're not on a clear path to wide relevance.

Expand full comment