636 Comments
Comment deleted
Expand full comment

Most people have a very narrow view of the world, by necessity. They are also trained to be on the lookout for scams, also by necessity. Once you realize these two facts, the world of persuasion changes immensely. Even very obvious moral, philosophical, or practical truths will hit a lot of resistance. You need to develop reasoning and arguments that are relevant and understandable to the people you need to convince.

"White people are evil" is never going to be a selling point to white people. If you need white people's buy in to make a plan work, you need a different approach. If you can move your plan forward without the support of a group, then perhaps you should reconsider the position that "This outgroup who I consider evil is in control of everything and preventing other people from getting ahead."

Expand full comment

> "White people are evil" is never going to be a selling point to white people.

You underestimate how narcissistically self-obsessed some white people are.

Expand full comment

Of course, and we have seen plenty of evidence that some white people do in fact express that opinion.

We're missing two things though.

1. The population of white people who accept "we are evil" terminology is much smaller than the population of white people who reject it, in part or full. What percentage of the population would accept a less confrontational message is the most interesting question here.

2. People can say whatever they want, often in self-serving ways. Measure how many ultra-progressive white parents in NYC have sent their kids to public schools instead of private over the last 10 years (about the maximum timeframe of the current progressive push), and I think you'll find that the actual actions of these people are not much different than they were before.

Expand full comment

Hypothetically, what's going on is white people really saying "White people are evil, but those *other* white people are more evil."

Expand full comment

I think there's some good evidence that there's a group of white people that say white people are evil, but do not seem to include themselves at all. Freddie deBoer has had several articles looking at specific examples.

It's a lot easier to say "other white people" are a certain way, which is just naked outgroup bias.

Expand full comment

> Ted will ask you to give one of his talks.

As a counterpoint, the top TED talk by views is by waitbutwhy, a blogger whose only Amazon e-book is called "we finally figured out how to put a blog on an e-reader".

Talk: https://youtu.be/arj7oStGLkU

Blog: https://waitbutwhy.com

Expand full comment

Fun fact, the writer of waitbutwhy is rationalist/EA adjacent and wrote one of the most popular intros to AI risk.

Expand full comment

Wow, I found out about Tim & his blog around the time he gave that talk and I had no idea it became the most-watched TED talk.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Once again happy not to be a utilitarian. Good review!

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

On a scale of 1 to 10...... never mind.

Expand full comment
founding

The repugnant conclusion always reminds me of an old joke from Lore Sjoberg about Gamers reacting to a cookbook:

"I found an awesome loophole! On page 242 it says "Add oregano to taste!" It doesn't say how much oregano, or what sort of taste! You can add as much oregano as you want! I'm going to make my friends eat infinite oregano and they'll have to do it because the recipe says so!"

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> ...happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life. Derek Parfit describes it as “listening to Muzak and eating potatoes”.

Except now, you have to deal with the fact that many-to-most existing people lead lives *worse than death*. Mercy killings of the ill-off become morally compulsory; they may not actively choose it, some may even resist, but only because they're cowards who don't know what's good for them.

Put the zero point too low, and consequentialism demands you tile the universe with shitty lives. Put it too high, and consequentialism demands you cleanse the world of the poor. There is no zero point that satisfies our intuitions on this matter, which is a shame, because it's an *extremely critical* philosophical point for the theory - possibly the *most* critical, for reasons MacAskill makes clear.

Expand full comment

This whole way of thinking (ten billion people, 0.001 happiness) seems absurd to me.

If happiness were as simple as inches, ok, sure. But I don’t think that’s the case. “Some people are happier than others” is a very different thing from “everyone has a happiness number and happiness numbers can be meaningfully added linearly.”

Like, if we’re REALLY longtermist and care about human happiness, shouldn’t we do a bunch of research into creating a utility monster AI that just sits around wireheading itself to infinity?

Expand full comment

AIs aren't human, so caring about human happiness doesn't imply caring about utility-monster AI happiness.

I'm not sure there's a simple fix for this that would restore the utility-monster argument without encoding additional strong assumptions; normal humans don't have uncapped happiness, so to mount a "this is insane" objection to a "disqualify all 'happy' utility monsters from personhood" argument, you need Nietzsche-like intuitions regarding the desirability of the Ubermensch.

Expand full comment

Why should I believe that normal humans have capped happiness?

Expand full comment

Normal human brains have a maximum size. This implies a finite number of possible states. Finite sets of finite numbers must have a maximum.

As a stronger claim, activating the reward circuit more than some finite amount will burn it out.

Expand full comment

Is having the reward circuit activated the same thing as happiness?

Expand full comment

It's pretty close, and even if there are additional stipulations added the same capped-activity issue tends to crop up.

Expand full comment

I think the argument goes through regardless of the function which is defining happiness. If it maps states to values, then there are a finite number of states and therefore a finite number of happiness values.

Expand full comment

> AIs aren't human, so caring about human happiness doesn't imply caring about utility-monster AI happiness.

I don't see what makes humans special. If there's a person / sentient entity, there's a sentient entity. Ofc. AI doesn't imply personhood, but I assume Mark meant AI which is a person and not just some utility function.

Expand full comment

Lots of people explicitly only care about humans (e.g. https://slatestarcodex.com/2019/05/01/update-to-partial-retraction-of-animal-value-and-neuron-number/).

You can also go into social contract theory, which says that morality is basically a contract between beings to achieve better results for all parties (compared to the state of nature) and at least some large chunk of possible minds are not useful to give concessions to because they are not capable of negotiating in good faith and giving concessions in return. This doesn't rule out all at-least-human-equivalent AIs (e.g. a 1:1 upload of some actual non-sociopath human would probably pass), but it rules out a huge chunk of them.

More basically, I was pointing out that the argument was formally invalid insofar as it assumed something not stated. One can argue the assumption should be granted, but it should still be noted.

Expand full comment

The point is that we *don’t* just believe that some people are happier than others. At least sometimes, we believe that it’s overall better to make one person better off and another person worse off. You need a lot more structure to get this as precise as real numbers, but even just believing that some trade-offs are improvements, and that improvement is transitive, gets you much of the way there (though I don’t necessarily believe you automatically get a repugnant conclusion).

Expand full comment

> At least sometimes, we believe that it’s overall better to make one person better off and another person worse off.

Can you say more about this? I get that if you can lower A’s happiness by 0.1 and raise B’s happiness by 100, the total happiness of that pair is on net higher. But this is a tautology dressed up as a belief; it’s an insight that’s valid so long as we accept some premises that I’m rejecting here: namely, the premises that it’s possible to accurately measure and compare happiness in ways that allow for meaningful numerical comparisons, and that it’s possible to “make someone less happy” without a bunch of other side effects.

Expand full comment

You’re assuming the conclusion in that example. There are fundamental claims like “fixing someone’s broken arm is worth it even if you step on someone’s toe on the way”, and this claim together with many others is what we systematize when we say that a broken arm is worth 100 utils and a stepped toe is only worth 1. It’s exactly the same as with any other measurement - when we say that the mass of a brick is 500 grams what we mean is that the brick will tip the scales against some objects but not others. In reality, you can never move a brick without moving many other objects, but the numerical representation of mass is a convenient tool for summarizing all these facts, even though the fundamental thing doesn’t involve numbers at all.

Expand full comment

How are you choosing how many utils you assign to each item? I understand that if we _could_ measure things like pain and goodness directly then sure, we’d want to maximize those things. But I don’t think we can measure these things. Is there something I’m missing?

Expand full comment

This is absolutely the hard part. In orthodox decision theory, you get a scale for an individual by seeing what chancy options they prefer to others, and designating one particular difference arbitrarily as a unit, and then saying that something else is worth n units to them if they are willing to trade a 1/nk probability of that thing against a 1/k probability of the unit (and you have to measure their probabilities simultaneously - you don’t get to use objective probabilities). To get interpersonal comparisons, there are good arguments that it is impossible, but there are also arguments that actual people are similar enough in their preferences that we can use some standard good as the unit to compare different people’s preferences.

In principle, there’s nothing different about this than measuring anything else we measure with numbers, but in practice there are few general laws we can use to simplify things the way we do with masses and lengths.
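
To make the lottery procedure above concrete, here is a minimal sketch in Python; the indifference probabilities are made up, since in practice they would have to be elicited from the person's actual choices between gambles:

```python
# Sketch of the lottery-based scale described above (assumes we can observe the
# probabilities at which the person is indifferent; the numbers here are illustrative).
def worth_in_units(p_thing, p_unit):
    """If someone is indifferent between a p_thing chance of the thing and a
    p_unit chance of the unit good, the thing is worth p_unit / p_thing units."""
    return p_unit / p_thing

# Indifferent between a 1/(5*10) chance of the thing and a 1/10 chance of the unit:
print(worth_in_units(1 / 50, 1 / 10))  # 5.0 -> worth 5 units on this person's scale
```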

Expand full comment

I think that we agree on two important points here.

1. Some things are better than other things

2. It is not actually possible in practice to assign meaningful numeric values to how good everything is.

The main point of difference is that having acknowledged 2 you seem to want to continue to reason as if it is possible, whereas I would rather stop trying.

What is my preferred solution? Humility and a bias towards inaction. I will make blatantly obvious trade-offs like stepping on a toe to save a broken arm, but not non-obvious ones. I won't step on a thousand toes to save a broken arm, nor break an arm to save a thousand trodden toes. I will let nature take its course. On a personal level this looks like minding your own business; on a societal level it looks like libertarianism.

Expand full comment

I think the problem here is that many things that can be done at the level of laws or policies end up with harming some people to help others, and the only way to evaluate them seems like it's trying to figure out whether that tradeoff is worthwhile. If there is a policy that will take $1 from every American and then give $10 million to one guy, is that a good or bad policy? How can we tell?

Expand full comment

I think including thermodynamics or conservation of mass is sufficient to fend off the repugnant conclusion. So long as creating new people has material opportunity costs, the proof fails at the "...without harming anyone who already exists" step.

As a third option, even if a repeatable exception to basically everything we know about physics were discovered, defining "happiness" in such a way that it includes economic concepts such as comparative advantage, gains from trade, and deadweight losses, could make those perfect, frictionless transfers of utility between arbitrary individuals increasingly implausible at larger scales - or, if transaction costs are assumed to be arbitrarily low, adding new meaningfully distinct people to the system will inherently provide marginal benefits proportional to the number of people who already exist (thus combinatorially accelerating as it scales up), due to new trade opportunities.

Either way, some of the sneaky spherical-cow assumptions start to require extraordinary justification, without which the repugnant conclusion falls apart.

Expand full comment

To me, this is a fundamental problem with utilitarianism. Assigning utils is generally a matter of gross estimation and intuitive assignments about how others would feel about things. We can randomly assign 1 util, 10 utils, or 1,000,000 utils to whatever activity or life we want. If you try to do the math with such shaky numbers, you can make the conclusion be anything you want.

Expand full comment

Being more specific, let's look at animal welfare. Some people assign 0 moral weight to animal suffering. Others say it's a fraction of humans (so if a human is worth 1, maybe a smart animal is worth 0.25 and a dumb animal is worth 0.01), or that it's equal to humans. How can you do any math and plan any future action using numbers with such wildly varying levels? Are insects worth 0.0001 of a human? If so, then they are of more moral worth than all humans combined. If you manually adjust down your numbers because the conclusions seem off, then the original numbers and any calculation used to achieve them were just window dressing on our intuitions. If we're just leaning on intuition, then utilitarianism itself is pointless.
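
For a sense of scale, here's the rough arithmetic behind the insect example (the ~10^19 global insect count is a commonly cited order-of-magnitude figure, used only for illustration):

```python
# Rough arithmetic behind the insect example above; the 1e19 count is an
# order-of-magnitude assumption, not a measured figure.
insects, humans = 1e19, 8e9
weight = 0.0001  # the hypothetical "0.0001 of a human" moral weight
print(insects * weight, "vs", humans)  # 1e15 vs 8e9: insects dominate by ~5 orders of magnitude
```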

Expand full comment

That’s not a problem with utilitarianism - it’s a problem with simplistic attempts to think you are *applying* utilitarianism.

Expand full comment

What's the purpose of a thought experiment if the results aren't actionable? When someone calculates out the utils of a proposed moral choice, they aren't just asking an abstract, they are searching for a means of making life choices. If I can assign a 3.4 util to something, but someone else assigns a 5.9, then are we doing anything more than trying to express our moral intuitions? Utilitarianism is supposed to be a rejection of using moral intuition, but there's too little consistency to see it.

Expand full comment

Instead of asking the hard, intractable question, “what does good mean”, utilitarianism punts and says, “ok, assume there is some meaningful good for people, you know, like saving them from drowning. Shouldn’t we multiply good by numbers of people?”

In other words, we don’t need to think at all about what good means for an individual; we can just take that as given and use multiplication to compare different amounts. It amounts to little other than a belief in equality as good, but without coming out and saying so.

Expand full comment

Utilitarianism *denies* that equality is good. It says that everyone’s good counts equally, which in some circumstances means we should work *against* equality.

Expand full comment

You can perform calculations where a single good is multiplied by a number of people, but you can't make non-arbitrary comparisons between different goods. So it looks like utilitarianism is only usable in well-behaved special cases.

Expand full comment

Utilitarianism doesn’t tell you how to decide what to do - it systematizes claims about what is right. Just because someone accepts relativity theory as correct doesn’t mean that they should do relativistic calculations when trying to catch a baseball. There are some circumstances where computing power and measurement are good enough that applying the theory is useful, but there’s no reason to think those are ordinary situations (or that the situations in an Einsteinian thought experiment are ever likely to be actual).

Expand full comment

Utilitarianism systemizes claims about what is right, based upon certain assumptions about what is right.

Do you think those assumptions are questionable?

Expand full comment

> To me, this is a fundamental problem with utilitarianism. Assigning utils is generally a matter of gross estimation and intuitive assignments about how others would feel about things. We can randomly assign 1 util, 10 utils, or 1,000,000 utils to whatever activity or life we want. If you try to do the math with such shaky numbers, you can make the conclusion be anything you want.

If you don't bother to try to estimate this _somehow_, you're either maximally wrong or you just happened to guess correctly.

Expand full comment

Or you could reject the premise that it’s meaningful to quantify good on a per-person basis.

Expand full comment

Von Neumann showed us how to make happiness numbers meaningful. If I am indifferent between leading life A and a coin toss between life B and death, B has twice the happiness (aka utility) of A. The implication for maximizing total utility ...
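
A quick sketch of that scaling, with the utilities anchored (arbitrarily) at U(death) = 0 and the unit chosen as U(A) = 1:

```python
# Indifference between "A for sure" and "coin toss between B and death" means
#   U(A) = 0.5 * U(B) + 0.5 * U(death).
U_death, U_A = 0.0, 1.0          # arbitrary zero point and unit
U_B = (U_A - 0.5 * U_death) / 0.5
print(U_B)  # 2.0 -> B has twice A's utility on this scale
```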

Expand full comment

I can imagine a wireheading addict might prefer death to non-wireheading. So from his point of view, non-wireheading lives have negative utility. If different people have different hedonic set points / preferences then you can't really talk of B having twice the utility of A without reference to a particular observer frame. Utility is as subjective as the subjective theory of value that determines market prices.

Expand full comment

If you measure someone's utility by revealed preference, anyone who hasn't yet committed suicide and isn't currently locked up on suicide watch somewhere is deciding that their life is, in fact, worth living.

This seems like about the right way to measure what's best for people most of the time--if you choose X over Y, you must think X is better than Y. But of course we run into many places where people make choices that seem terrible for them, ranging from "I'd rather have today's heroin fix than eat" to "I'd rather sleep with this sexy tattooed bad girl than keep my marriage intact." We even put some people into the category of "can't make their own decisions"--children, the senile, the seriously mentally ill, the seriously intellectually disabled, etc.

And in many cases, the utilitarian decisions involve making tradeoffs between people that can't be addressed in this way, or can't without breaking a lot of what people want to use that reasoning to do. (By revealed preference, I spend a lot more on Starbucks than I do on feeding starving children in the third world, so that must be the correct utility calculation, right?)

Expand full comment

What makes death "good for them"? Is their Darwinian fitness increased if you kill them?

Expand full comment

The claim is that their well-being is increased. No need to bring Darwinian fitness into this. (What is well-being? That’s the difficult question here. I think the most plausible answer is something like “getting what they want”. In order to have negative well-being, your life has to consist of enough things hurting you and going in ways that make you unhappy that just not existing would be better.)

Expand full comment

I was responding to someone who claimed these people "don't know what's good for them", which sounds different from your "getting what they want". So a preference utilitarian would not kill such people, because by their own actions they appear to prefer living regardless of whether it's "good for them" by Crotchety Crank's standards.

Expand full comment

I feel like preference-satisfaction based theories of wellbeing interact weirdly with population ethics. Like, creating people who strongly wish to not exist but have strong preference-satisfaction scores in other respects seems bad. And if you try to restrict well-being to preferences 'about oneself' or something like that to avoid this implication, then you end up denying people all kinds of tradeoffs they might want to make (e.g. someone giving up some personal gain to save an old tree they care about).

Expand full comment

As a datapoint, my dad spent the last six months of his life in constant and increasing pain, with a terminal cancer diagnosis. He had no dependents, was an atheist, and kept a loaded handgun in his nightstand. And yet, he didn't commit suicide. I don't think anyone he cared about would have *blamed* him for committing suicide, and he told me he'd considered it, but he just didn't feel right about doing it.

Would he have increased his well being by doing so? I mean, from his own actions, he presumably didn't think so. But it's not a stretch to imagine someone doing a utilitarian calculation and saying "this guy's life is clearly in negative utility territory, let's put him out of his misery."

Expand full comment

I was specifically responding to the excerpt from Scott where he discusses what a life at utility 0.001–just barely better than nonexistence—looks like. He suggested it might be a merely *drab* life, mediocre music and bland food—but then anything worse is worse than nonexistence, hence the death of anyone in a worse position than that improves the universe.

I take it you disagree with the idea that a life barely worth living looks like that, and you think the zero utility point is a lot lower. But then you have to confront the “repugnant conclusion” in its harshest form—the dark conclusion Scott was hoping to dodge.

Expand full comment

This isn't a problem with consequentialism. You can make judgements about which worlds are better than others without making claims about what you ought to do. You've just pointed out a trouble with rating how good worlds are.

Expand full comment

Utilitarianism is a moral theory - a theory about what you "ought to do" - which explicitly works by aiming for the future which has the "greatest happiness for the greatest number". If there is "a trouble with rating how good worlds are", then for a utilitarian, that will directly cause troublesome "claims about what you ought to do".

On the other hand, if "claims about what you ought to do" *aren't* based on "judgements about which worlds are better than others", you've dodged the worry. But then you're probably not a consequentialist, and certainly not a utilitarian.

Expand full comment
founding

I generally solve that by accepting that people should have no restraints on ending their own life whenever they wish, and then setting the zero point at wherever would cause them to never do that even if there were zero stigma (which is a fairly high point).

Expand full comment

Feel free to define it that way; but if you’re a utilitarian, you probably shouldn’t. If you do, anyone under that “fairly high point” is making a negative contribution to universal utility, and the world is better off with them gone. I don’t think that “solves the problem” at all, it epitomizes it.

Expand full comment
founding

Valid, I do somewhat eschew utilitarianism for voluntarism here, in that something can be neither mandatory nor forbidden, and the very nature of it being neither holds at least enough inherent value to cover for the negative impact on people who made the wrong choice.

Expand full comment

Even if a person is below your fairly high point of utility at a given moment, they may still realistically expect to make a net contribution to universal utility by means of, e.g. research, charitable work, influencing the behavior of others, etc. no? Or, they may have a realistic expectation that at some future point their situation may change for the better. It's not like anyone who happens to fall under the threshold should at that instant be offed, right?

Expand full comment

Lots of people who attempt suicide and fail end up being very thankful that they failed, later on.

Indeed, it seems to me that people who commit suicide because their life is objectively terrible are *way* less common than people who commit suicide because their brain is malfunctioning in a way that makes them miserable enough in the moment to commit suicide. This famously includes many people who were apparently happy and functional a few days or weeks earlier, many people who objectively seem to have good lives, many people who appear to have a great deal to live for, etc.

I mean, you can make the revealed preference argument here, i.e. that the guy whose uncontrolled bipolar disorder finally led him to kill himself must simply have been making a rational decision to end his own unbearable suffering, but that's certainly not how we handle things in our society, and also it's very common that the person who tries and fails is later very glad they failed, or that someone who is feeling suicidal explicitly takes steps to make sure they don't kill themselves (seeking treatment or hospitalization, asking relatives to take guns and poisons out of the home, etc.)

Expand full comment

There are four issues with your argument that come to mind immediately.

First: uncertainty. As we cannot be certain how happy someone else is, it is better to give them the benefit of the doubt, because killing them is a very permanent solution that we cannot undo. It's good to keep optionality.

Second: if we actually lived in such a world, people would be less happy. This kind of world would be terrifying to live in and make everyone miserable, thus starting a death spiral. That doesn't sound very utilitarian to me.

Third: Instrumental Utility of people to make other people happy. Some people might be just below the recommended level of happiness, but their existence brings so much happiness to others that they should keep existing.

Fourth: killing them would limit the number of future people that can be happy (which I guess kind of falls under instrumental).

Expand full comment

The particular people who happen to be left after the apocalypse are considerably more important than the availability of easily exploitable coal deposits.

People in many countries around the world are struggling to achieve industrialisation today despite a relative abundance of coal (available either to mine themselves or on the global market), plus immediate access to almost all scientific and technological information ever produced including literal blueprints for industrial equipment. That these people would suddenly be able to create industry after the apocalypse with no internet and no foreign trade, even with all the coal in the world readily available to them, is a loopy idea.

Medieval England was wealthier per capita than over a dozen countries are today in real terms, all without the benefit of integrated global markets, the internet and industrialization already having been achieved somewhere else.

I of course do not expect MacAskill to have written this in his book even if he recognized it to be true.

Expand full comment

Why does it have to be suddenly? From the point of view of "quintillions" and "galactic superclusters", a difference of say 50 thousand years basically doesn't matter, but it would be enough time for evolution to do its thing in regards to any thoughtcrime concerns you might have.

Expand full comment

It’s much easier to get industrialization started if you have access to coal and there isn’t already someone else who owns that coal and is taking it.

Also, you’ll have to give an example of a place that isn’t industrialized - I’m not aware of any other than some small bits of the Amazon and the Sentinel islands.

Expand full comment

The set of people alive a few decades after the apocalypse will be substantially different from the population before it, both in terms of culture and to a lesser extent in terms of genes.

Expand full comment

Also, in recovering from an apocalypse that doesn't blast us back to the stone age, we will start with a bunch of advantages our ancestors lacked--phonetic writing, Arabic numerals including 0 and a decimal point, the germ theory of disease, the concept of experiment as a way to learn things, the basic idea of there being natural laws that can be understood and exploited, etc. Even if our successors have forgotten most other stuff, just those things will make everything go a hell of a lot faster.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Gentle mention: you missed Sean Carroll, via his Mindscape Podcast, as a recent interviewer https://www.preposterousuniverse.com/podcast/2022/08/15/207-william-macaskill-on-maximizing-good-in-the-present-and-future/

Expand full comment

The repugnant conclusion seems unintuitive to me, specifically because it fails to consider the shape of the population-happiness tradeoff curve.

If you imagine this curve being concave down, then normal moral intuitions seem to apply: a large population that isn’t quite at carrying capacity is better than a much smaller, slightly happier population.

It’s really the concave up case that is unintuitive: where your options are a small happy population or a huge miserable one. But there’s no clear reason to my mind to imagine this is the case. People’s utility of consumption seems to plateau relatively sharply, suggesting that a smaller society really wouldn’t unlock tons of happiness, and that a giga-society where people still had net positive lives might not actually contain many more people than the current 7 billion.

I don’t want to deny that it’s unintuitive that 20 billion people at happiness 10 really do outperform 1 billion at happiness 90, but I posit that it’s mostly unintuitive because it’d so rarely be just those two options.
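
As a toy illustration of why the curve's shape matters (the specific functional form here is my own assumption, not anything from the book): split a fixed pool of resources evenly among N people, give each person log utility of consumption minus a subsistence offset, and total welfare peaks at an intermediate population rather than at bare-subsistence carrying capacity.

```python
import math

# Toy model: fixed resources R shared evenly among N people; per-person happiness is
# log(consumption) minus an offset, so it is concave in consumption and crosses zero
# at the "carrying capacity" N = R/e (~368 here). Total welfare peaks around N = R/e^2
# (~135 here), well short of bare subsistence.
R, offset = 1000.0, 1.0
for N in [10, 50, 135, 200, 300, 368]:
    happiness = math.log(R / N) - offset
    print(N, round(happiness, 2), round(N * happiness, 1))
```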

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

The part where you equalize happiness across the population is the problem; once you forbid that, the whole thing falls apart and you have sane ethics again.

I'll leave the political implications as an exercise to the reader.

Expand full comment

The Repugnant Conclusion seems to me to be a paradox, and the point of a paradox is to demonstrate that your reasoning went wrong somewhere and you need to retrace your steps to figure out where your mistake was.

Like the Grandfather Paradox is not a cookbook for cool science experiments. It strongly suggests time travel is impossible. And whaddayaknow, time travel turns out to be impossible.

Expand full comment

Stable time-loops aren't forbidden via our current understanding of physics. And the RC isn't a "paradox".

Expand full comment

Whether they're permitted by GR and whether they're physically real are different questions, and I don't think a stable time loop is what anyone means by "time travel" anyway.

I didn't say RC was a formal paradox, but it has a similar flavor and I think we should consider it in the same way.

Expand full comment

Well, the RC assumes you can measure that effect: that there is a measurable unit of utility of 1 and a measurable utility of 100. I don't know if that's the full range, but that's what I see in the literature. Negative utility exists too, of course.

A good argument against a large population with positive 1 is that it doesn't take much to turn that society into a large population with negative 1; that's a loss of 2 units of utility per person, which might just mean that everybody is almost as miserable as before but hungrier, or some other effect. We go from an overall utility of plus 1 billion to a negative 1 billion pretty quickly (the standard scenario being 1 billion people with a utility of 1). On the other hand, the contrasted 1 million people with a utility of 100, also seeing the same drop, go to 98 million positive utility points. The RC isn't robust.
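
A quick check of that arithmetic - a uniform 2-unit drop flips the huge low-utility population from strongly positive to strongly negative, while the small high-utility population barely notices:

```python
big_pop, big_u = 1_000_000_000, 1      # the "repugnant" scenario: many people at utility 1
small_pop, small_u = 1_000_000, 100    # the contrasted scenario: few people at utility 100
drop = 2
print(big_pop * big_u, "->", big_pop * (big_u - drop))          # +1 billion -> -1 billion
print(small_pop * small_u, "->", small_pop * (small_u - drop))  # 100 million -> 98 million
```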

Expand full comment

There are strong reasons to prefer a Schelling fence option of, say (randomly making this up), 30 utils. So we can happily make the decision to add people and reduce happiness for a while, but would stop well before the bare minimum scenarios (1, or .001 from Scott).

The reason? As you say, it's easy for people to drop a small amount, and now you've created mass unhappiness. You need a buffer to account for bad times - like a famine or flood, individual heartache, etc.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Important to distinguish between the so-called "Repugnant Conclusion" itself ("a much larger, less happy population is better, at least for sufficient values of 'much' and utility remaining above some zero point") and the argument that this conclusion is in fact repugnant and should be rejected. They are, after all, opposites.

This mostly seems to be an argument that the latter is wrong and hence the "RC" is actually true.

Expand full comment

2 things: First, the "number of atoms" limit annoyed me when I saw it, since we can obviously get value from moving atoms around (sometimes even back to the same place!), so the possibilities of value-production are *much* higher than the constraints outlined.

Secondly, stealing my own comment from a related reddit thread on MacAskill: "The thing I took away from [his profile in the New Yorker] is that contrary to "near-termist" views, longtermism has no effective feedback mechanism for when it's gone off the rails.

As covered in the review of The Anti-Politics Machine, even neartermist interventions can go off the rails. Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets! But at least we can pick up on these mistakes after a couple of years, and course correct or reprioritise.

With longtermist views, there is no feedback mechanism on unforeseen externalities, mistaken assumptions, etc. All you get at best is deontological assessments like "hmmm, they seem to be spending money on nice offices instead of doing the work", as covered in the article, or maybe "holy crap, they're speeding up where we want them to slow down!" The need for epistemic humility in light of exceedingly poor feedback mechanisms calls for a deprioritisation of longtermist concerns compared to the current general feel of what is communicated from the community."

Expand full comment

On the first point, you might appreciate our deep-dive into the logic here: https://philpapers.org/rec/MANWIT-6 (The second two-thirds of the paper is about refuting infinities, but the first third lays out why it takes pretty unreasonable assumptions to assume continuing growth.)

Expand full comment

I think this *should* be adequately dealt with by considering probabilities (not just probabilities of various underlying states of the world, but probabilities that actions now will have unforeseen effects). I do think this means that for most acts, considering the long term effects isn’t relevant. I think longtermists refer to this as “cluelessness”.

Expand full comment

How do you estimate the probability of something unknown? Like, can you tell me what the biggest concern of 2025 is going to be? We can guess about things we might be worried about today, but even something three years away is completely unknown to us. In 2019 very few people would have guessed that the following year's biggest concern was going to be a variation of the common cold.

Expand full comment

Sorry, this is silly and wrong. We estimate probabilities of unknown things all the time, and in mid-2019, everyone well calibrated who was paying attention would have said the probability of a novel respiratory pandemic was in the range of 1-5%, because base rates are a thing. Yes, the question of longer-term prediction is harder, and over long enough time frames cluelessness becomes a critical issue. But 3 years isn't long enough. So of course we can't say with confidence, much less certainty, what the biggest concern of 2025 will be, but we can say many things about the distribution of likely things which would be big news - plausible flashpoints for wars, disasters, technological changes, risks from AI, economic crises that could occur, and so on.

Expand full comment

So, to the point of the book being reviewed, tell me how to actualize a number of small potentials. Your list is the kinds of things people expect to be big news, and I guess reasonably likely (if only because they are couched in huge, vague phrases like "disasters"). How do you actualize the generic phrase "technological changes" as a flashpoint for action a year in advance? Should we prepare for a tsunami hitting Indonesia, or an earthquake in California? Some of our preparations might carry over, but a lot would not. What if instead "disaster" ended up being flooding in Russia?

The absolute smart money for "biggest item of 2020" in 2019 was the US Presidential Election. I would not be upset at anyone in 2019 who predicted that to be the biggest story. They would have been wrong. Anyone predicting a general and vague "respiratory pandemic" would still have had no idea what kind, where and how it would start and spread, and how to handle it. Someone saying "vaccine" would have been on a decent path, but that wasn't possible to do until the strain was isolated, after which we had a working vaccine in like, two days. If our best predictions can fail monumentally with less than a year of lead time, how much time and money are we willing to spend on figuring out and preparing for the big story of 2025? If we had a pot of $100 billion to spend purely on preparing for the biggest issue of 2025, how should we spend it?

Expand full comment

I'll also note that the second biggest story of 2020 was still not the election, but something absolutely *nobody* predicted - the BLM protests following George Floyd's death. Arguably "black man killed by police, unrest ensues" was something that could have some level of predictive value, but would anyone have bet strongly on the level of unrest? What preparations could or should we have made in advance of Floyd's death to prepare for that situation?

Expand full comment

First, preparedness spending is a thing, and risk mitigation is an entire field that governments spend time on. For example, the US had a pandemic response plan under Obama, but unfortunately Trump got rid of the office that was supposed to lead it, the global health security team on the National Security Council. That's not a failure of preparation, but rather an idiotic dismissal of preparation that had already occurred, without replacement. Not so long ago, local disasters like floods led to starvation, instead of emergency response from FEMA. Thankfully, the US has agencies that respond.

And there are lots of things to spend money on that would yield preparedness benefits. So even if we could with high confidence predict the single biggest event in 2025, it would be strange to the point of absurdity to only prepare for the single biggest event, instead of mitigating a variety of threats, which, again, is what governments and disaster planning experts already do.

But if you want to know what I'd spend $100 billion on, it would mostly go to the American Pandemic Preparedness Plan, which was planned for $65.3 billion, and somehow wasn't funded yet - because unless we do something, COVID-19 won't be the last pandemic.

Expand full comment

> Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets!

Turns out this is not actually a problem: https://www.vox.com/future-perfect/2018/10/18/17984040/bednets-tools-fight-mosquitoes-malaria-myths-fishing

Expand full comment

“suppose the current GDP growth rate is 2%/year. At that rate, the world ten thousand years from now will be only 10^86 times richer. But if you increase the growth rate to 3%, then it will be a whole 10^128 times richer! Okay, never mind, this is a stupid argument. There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.”

This is a common economic fallacy. Growth is not necessarily tied to resource consumption. For example, if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

As another example, consider the smartphone. A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass. This is because the utility of the smartphone, as well as the complicated processes needed to manufacture it, combine to create a price far higher than the simple shovel.

So yes, we could become 10^86 times richer using only 10^67 atoms. You simply have to assume that we become 10^19 times better at putting atoms into useful shapes. Frankly, the latter possibility seems far more likely than that humanity ever fully exploits even a fraction of atoms in the observable universe.
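
For what it's worth, the quoted figures and the 10^19 gap do check out:

```python
import math

# Orders of magnitude from compounding 2% or 3% growth for 10,000 years,
# and the gap between the 10^86 figure and the 10^67 atoms in our lightcone.
for rate in (0.02, 0.03):
    print(rate, round(10_000 * math.log10(1 + rate)))  # ~86 and ~128
print(86 - 67)  # 19 -> the growth would have to come from using the same atoms ~1e19 times better
```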

Expand full comment

I think you're wrong on the merits about becoming "10^19 times better at putting atoms into useful shapes," if only because it implies really infeasible things about preferences. There are also fundamental limits, and while 10^19 doesn't get close to them, I think they show that the practical limits are likely to be a real constraint.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Certainly, and I don’t mean to imply that I think getting 10^19 times better is an ordinary occurrence.

However, something close to that large has occurred in the past 70 years. The cost of computer memory has gotten around 10^14 times cheaper (source: https://hblok.net/storage/) and the size of a single bit of storage has decreased by a similar amount.

My main point is merely that economic growth is not at all inherently limited by the amount of resources available.

Nor, in fact, do physical limits necessarily impede growth, though they do stop growth in certain areas. Large diesel combustion engines, for example, are quite close to the maximum theoretical thermodynamic efficiency. Computer chips will also soon reach physical limitations caused by the size of atoms and their ability to prevent electrons from probabilistically flipping bits.

But who is to say that we do not simply switch to quantum computers, which have different physical limitations? Diesel engines could be replaced with small-scale fission reactors, or even fusion.

Are these potentially unrealistic? Yes. But note that the relevant question is not “will we run out of resources” (maybe) or “will we run into physical limits” (definitely), but whether we will always be able to find new, more useful technologies with ever-higher physical limits indefinitely, and if not, when we will run out.

If you have the answer to that question, please write a book about it.

Expand full comment

> but whether we will always be able to find new, more useful technologies with ever-greater physical limitations indefinitely, and if not, when we will run out.

Assuming our current understanding of science is more or less true, the answer is "no". There's a limit on how much energy generation, computation, and other useful work you can pack into a cubic centimeter of space. Ultimately, E=mc^2, but you will hit other physical limitations long before that.

One obvious retort to this is, "aha, but obviously we will discover entirely new scientific principles that will allow for impossible things to happen". That is a valid point, but if you believe that, then you are no longer engaged in predicting the future; you're not even engaged in wild speculation; rather, you're just writing science fantasy.

Expand full comment

No. It is not fantasy to claim that we will discover new technologies which will allow us to do things we currently do not think are possible. It is obvious—unless you believe our understanding of physics is complete.

Will this allow for infinite growth? Almost certainly not. But to answer the question affirmatively or negatively is to engage in an equally useless act of speculation.

You don’t know what future science entails, and neither do I. It is sufficient for my point, however, that such growth merely be possible, and have occurred in the past. Both are true.

Expand full comment

> Will this allow for infinite growth? Almost certainly not. But to answer the question affirmatively or negatively is to engage in an equally useless act of speculation.

Well, we have some of that "useless speculation" here, which I think responds to a bunch of your points, and address the claim about not knowing if physics is correct in section 4.1.1 - https://philpapers.org/archive/MANWIT-6.pdf

And in any case, we're not looking for refutations to infinite value for the purposes of this discussion, we're looking at potential growth over "merely" the next 10,000 or 100,000 years.

Expand full comment

“We restrict our interests to a single universe that obeys the laws of physics as currently (partially) understood. In this understanding, the light-speed limit is absolute, quantum physics can be interpreted without multiverses, and thermodynamic limits are unavoidable.”

Limiting yourself to physics as it is currently understood sort of ignores the whole point of discussing future scientific breakthroughs. Indeed, some current-day theoretical research into what has been termed “Alcubierre Drives” allows for faster-than-light travel within the bounds of Einstein.

Indeed, in the section you reference, the authors essentially make a half-assed argument that scientific progress builds on itself (until it doesn’t, see: Ptolemy and the Copernican Revolution) before admitting that you cannot actually disprove the possibility of infinities.

Regardless, I agree that we can make some reasonable estimates for the near and medium term. But anybody who thinks they can tell you what the hard limits of human achievement will be after a million more years of human science (or even 500) has let their ego get ahead of their intelligence.

Expand full comment

As I said, you are free to imagine new discoveries that violate laws of physics as we know them today -- speed of light, conservation of energy, conservation of momentum, and so on. I fully agree with you that such things are possible, just like ghosts or gremlins are possible. However, the problem is that our current model of reality is not just a guess; rather, it appears to fit what actually exists quite well. So well, in fact, that we can build complex devices using this model. Every time you use a computer, you affirm that our current understanding of physics is likely true.

So, you can't have it both ways -- either you assume that our current scientific knowledge is just wildly off-base; or you assume that it is valid enough for you to make reasonable predictions about the future. You can't just throw up your hands and say, for example, "well, it sure would seem like speed of light is a constant, but we could be wrong, so we'll definitely have FTL travel one day".

Expand full comment

What is so special about a computer that the same argument couldn't apply to a mechanical clock? Someone in the 18th century waving their timepiece around and claiming that it proves that human science is pretty much settled wouldn't be very sensible.

Expand full comment

> However, something close to that large has occurred in the past 70 years. The cost of computer memory has gotten around 10^14 times cheaper (source: https://hblok.net/storage/) and the size of a single bit of storage has decreased by a similar amount.

Wikipedia says "The first magnetic tape drive, the Univac Uniservo, recorded at the density of 128 bit/in on a half-inch magnetic tape, resulting in the areal density of 256 bit/in2.[6]" That was in 1951. The current numbers are obviously far better - "since then, the increase in density has matched Moore's Law, reaching 1 Tbit/in2 in 2014.[2] In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in2,[3] more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in2 in the near future.[3][4]"

Assuming we're at 5 Tbit/in2 in 2021, that's 70 years and an increase of roughly 2×10^10x, so we see a growth rate of about 40% per year. Continue that for another century or two and we're talking about black hole bounds on total information - it can't actually continue.
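
A quick recomputation of that rate, plus a naive extrapolation (the 200-year horizon is arbitrary, just to show how fast the exponential runs away):

```python
import math

# From ~256 bit/in^2 (Uniservo, 1951) to ~5 Tbit/in^2 (2021).
start, end, years = 256, 5e12, 70
ratio = end / start                 # ~2e10-fold increase
annual = ratio ** (1 / years) - 1   # ~0.40, i.e. roughly 40% per year
print(f"{ratio:.1e}", f"{annual:.0%}")
# Naive extrapolation two more centuries at the same rate (an assumption, not a forecast):
print(f"{end * (1 + annual) ** 200:.1e}")  # ~1e42 bit/in^2 - an absurd density
```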

Expand full comment

It doesn’t have to continue for my point to be correct.

Digital computers store knowledge more densely than books. Imagine making the argument “it would be impossible to store billions of people’s daily thoughts, we would run out of paper and places to put it” in 1530, when the printing press was still a relatively new technology that allowed for vastly faster and cheaper production of books than anything before it. Indeed, by the 20th century, industrial printing presses rivaled anything ever created by hand, at a fraction of the cost.

Certainly, you would be technically correct to argue that there is a limit. Using the existing technology of paper, creating something like Facebook would be impossible. But you’ve simply dodged the question.

The fact that technologies such as paper or digital computers have physical limits is not particularly important.

The point is that they can grow exponentially, reach their physical limits, and then be replaced by a new technology which can also grow exponentially. There’s no way to prove that this cycle cannot continue ad infinitum, unless you already possess perfect knowledge of physics.

Do I consider such infinite growth likely? No. But this is my intuition about the laws of physics, and that is a dangerous space for laypeople to speculate.

Expand full comment

This isn't an argument about the medium, it's an argument about the physics. Information is physical, possible density of accessible information is limited, and with exponential growth, those limits are reached within centuries, not millennia.

And if you're conditioning on our current understanding of physics being wrong *in specific ways that allow infinite value*, that greatly limits the worlds in which your objection matters.

Expand full comment

If it turned out that creating new universes is possible, does that undermine your argument? I think it is respectable physics to suggest that new universe generation is a natural event that has occurred many times (whatever that means in this context) and that it is not beyond the realms of physics to assume that eventually this natural process could be harnessed to make new ones on demand.

Expand full comment

A decent shovel is around $40. 40x200 = 8000.

Expand full comment

>if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

This seems to me like a highly specific and a very fragile claim to base your objection on.

Quantum computers are not obviously better than any other kind of computer at general-purpose tasks, as far as I know (which is admittedly not much, feel free to school me if you know better). Most experts I read are tired of the trope of using a quantum computer as a generic stand-in for a super duper magic computer; they say QCs will only substantially improve our cryptography and chemical/micro-biological simulation abilities. You can say that the latter task is relevant to simulating humans, but that would be asserting things about the yet-uninvented brain simulation discipline: it may very well turn out that simulating human brains is really just matrix multiplication, and then QCs would be no better, and perhaps worse, than conventional parallel supercomputers.

The second part of the claim is in essence an assertion about how 'gullible' the human brain is, how few resources it takes to convince it that it's actually experiencing a much bigger reality. On its face, it seems true enough: a modern open-world game like, say, GTA V simulates a city-sized reality with the resources of a lap-sized computer. So even if the computers scaled to a room-sized data center to enable an extremely realistic and much more immersive simulation, you still have a room-to-city ratio, which is actually much more generous than the each-person-gets-an-earth ratio (earth, 510.1 million km^2 ~=~ 5*10^14 m^2 in area, can fit 10^12 computing rooms each 500 m^2 in area, so about 100 such rooms for each member of a 10-billion population; vastly more efficiency can be squeezed out by using volumes instead of areas and/or denser computing matter and/or more efficient simulation algorithms and/or etc. etc.). If 95 room-computers were devoted to simulating the part of the world your brain is currently focused on and 5 were used to simulate a less faithful ambient world around that, I guess it sounds about right that you can give an HD earth to each human while using less than an earth's worth of resources.

BUT, those humans will live forever, and computers need energy. More than energy, computers *hate* heat, vehemently and desperately. As the scientist and futurist Isaac Arthur always remarks, heat is the real enemy opposing any kind of order-building activity. Heat is Entropy's handmaid. So, I won't be surprised if, after you factor in heat dissipation and energy supply for about a billion years per human (a reasonable definition of 'forever'), it turns out you actually need a Solar System's worth of space and material to house your simulation-blessed civilization. Still not bad, but hardly "an earth per human without using a whole real earth" good.

>A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass.

You're ignoring the sheer unimaginable mass and energy that went into the whole ecosystem of processes that made the smartphone possible. Cutting-edge chip fabrication facilities cost in the range of 1 billion dollars each; how many shovels *is* that? And that's just the SoC at the very core of your phone. How many tons of matter, millions of megajoules, and billions of dollars make up the mining industry that extracted the lithium in the batteries and the smelting industry that crafted the glass in its screen? How much of that, again, went into the shipping industry that got its myriad parts and components from all over the world into one Asian factory somewhere and then transported the final phone to the myriad shores where it's needed?

Once you do all of that, congratulations: you've created a dumb piece of intricately-put-together matter that has the *potential* to be useful, but is actually dumb as a brick until it's loaded with a bunch of kinky patterns of voltages called Software. And once you get to Software, oh boy. How many people, how much energy, and how many billions went into the Linux kernel? The Android OS in general? The absolute mess that is the app ecosystem?

Now, I can anticipate one of your objections: that all of this is a one-time cost per phone, and that once you have the phone and all its software you can finally use it to generate value to pay back all you had put in and more. But are you sure? Smartphones are not eternal, you know; I bet the average lasts about 3 to 5 years. I bought mine in late 2018 and I feel like an ancient fossil keeping it while everyone around me has switched 2 or 3 times in this period. Are you sure that in this awfully short lifetime, smartphones actually generate enough (happiness, energy, utility, whatever) to pay back all that went into them? The Linux kernel (like all complex software) is full of bugs; does the increasing amount of effort and money and brainpower spent developing and maintaining it justify the amount of value it actually generates? When, finally, Linux is discarded in favor of some hot new thing (like all complex software eventually is), will it have been net positive or negative on average?

And this is really a pet peeve of mine. People are so worshipful of the utter mess that is modern civilization, so quick to marvel at its amusing gadgets, so inclined to pat themselves on the back for creating it. And, sure, modern civilization is *different*; I can do things with my smartphone that nobody prior to its invention could imagine, not in this shape and form. But "different" is not "efficient": walking on your hands is very different, and also very dumb and inefficient if you're an average human. Are you actually *sure*, like bet-your-life-sure, that modern civilization is actually net positive or even break-even, and not just a party trick that looks cool but is actually dumb and inefficient when you start measuring?

Expand full comment

Could be, a lot of it. Windows, toilet paper, backhoes, trains, etc. And especially sanitation and antibiotics, vaccines, transfusions, etc. Keepers. Massive marketing of junk food and the junk viewing on media, no matter how much the critics love Breaking Bad? Nah.

Expand full comment

The problem with this sort of argument is that if you devalue the things needed for survival to 0.00001% of the economy then someone can buy them for $5000 and now they own you.

Even granting a quantum computer that everyone lives in (which is a philosophical trap and you should feel bad for bringing it up), someone's gotta keep it running. If there are trillions of people living in it, protecting it is extremely important. I don't know what the percentage should be, but if you value survival needs at, say, 20% of the economy, that places a limit on how valuable the other 80% can be.

Expand full comment

I always used to make arguments against the repugnant conclusion by saying step C (equalising happiness) was smuggling in communism, or the abolition of Art and Science, etc.

I still think it shows some weird unconscious modern axioms that the step "now equalise everything between people" is seen as uncontroversial and most proofs spend little time on it.

However, I think I'm going to follow OP's suggestion and just tell this nonsense to bugger off.

Expand full comment
author

I think if you have specific people, you can argue they shouldn't be equal, but that since we're at an abstract level where none of the people have any qualities, it seems weird to actively prefer that a randomly chosen subset be happier than some other randomly chosen subset, even at the cost of making average global utility lower.

Expand full comment

Keeping an assumption of all happiness examples being net positive here.

I still think there's a difference between a life of suffering mollified by treats and a genuinely good life. I can see the real life logic of this gradually removing good lives in pursuit of 20% less botfly infestations. The "true equality is slime" meme.

But I have a nice home and family, so I think I probably fear being "equalised."

Expand full comment

The preference only makes sense when comparing with the world where the unhappier people don't exist. The comparison is only important if we consider a transition between the worlds.

If the happier people are not supposed to be affected by the new people appearing, then they are not supposed to be affected. Otherwise they have good arguments to resist their appearance. As you said this is bait-and-switch.

Expand full comment

When doing math, I might say that at a certain level of approximation, it's fine to round to whole numbers, and it often is. But if it happens that I'm adding 0.1 (rounded down to 0) a million times, the approximation error has swamped the measurement.

The abstraction involved in the repugnant conclusion -- the assumptions about how humans are happy or unhappy, and about how the experience of a mix of joy and suffering maps onto abstract units of utility -- results in the same type of failure mode.
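(A minimal numeric sketch of that failure mode, assuming the rounding is applied to each term before summing; the figures are invented for illustration:)

```python
# Each individual 0.1 rounds to 0, but the aggregate error is the whole quantity.
values = [0.1] * 1_000_000

true_sum = sum(values)                        # ~100000.0
rounded_sum = sum(round(v) for v in values)   # 0 -- every term was rounded away

print(true_sum, rounded_sum)
```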

Expand full comment

> we're at an abstract level where none of the people have any qualities

This is exactly why I very much enjoyed how you walked away from the mugging in the review and refused to play the philosophy game. None of this is real, it's just words. First show me the button that can create 5 billion people with happiness 80, and then I'll start worrying about what it means to press it.

(And I'll probably conclude that a world where "happiness" reduces to one number that can be exactly measured is so weird and scary and different from what I'm used to that I don't want any part in bringing it about.)

Expand full comment

Alas the author is not Scott, it's part of the book review competition. Scott remains in danger of being mugged! (maybe)

Though I don't know if it's totally alarmist to suggest the repugnant conclusion might meme people into destroying civilisation in pursuit of "total aggregate happiness."

A version of it is nearly making me have a fourth kid so...

Expand full comment

> Alas the author is not Scott, it's part of the book review competition.

Huh? I thought the title for those started with "Your Book Review", whereas this one is just "Book Review". Maybe it's mistitled, but I thought it read very much like Scott :)

(It's plenty confusing though, I remember last year, Scott posted a great review of Arabian Nights in the middle of the book review contest, many people liked it and commented they were planning to vote for it in the contest, until they realized, oh wait, it's "Book Review: ...", not "*Your* Book Review: ...".)

Expand full comment

You're right and it's happened again.

Expand full comment

I think this is exactly on point. Because if there's a button, we're talking about actual people again, aren't we? You've destroyed billions of people in exchange for more billions of people. If not, then the premise that you can "create another 10 billion people who won't have an effect on the first 5 billion" is wrong. There's a bizarrely-omnipotent/omniscient Decider able to magic people into existence. That decider is going to magic away 5 points of happiness in your life, whatever that means.

The mugging comes when you transfer a thought experiment riddled with unrealistic assumptions into the realistic world:

"Imagine you know they won't have an impact on the other people", except that's not possible.

"Okay, but imagine you know they'll be blissfully happy or abjectly miserable or whatever I need to make this thought experiment work." You can't know that.

"Imagine you could, though!" Okay, I've imagined a world that doesn't exist. What do you want me to do with it?

"Now apply what you've learned to the world that does exist." That's not how this works. You asked me to image an impossible world, now you want me to, what, imagine the impossible world applies to the real one? Nope.

Expand full comment

Unless one considers inequality to be a fundamental part of, perhaps even the, human production function, in which case at least _some_ inequality is very desirable indeed.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I can understand *instrumentally* valuing some level of inequality, if inequality is necessary to produce more of what we terminally value (whether that's total wellbeing or average wellbeing or something else entirely). It seems to me like this is what you're saying: inequality is valuable *because* it's necessary to the human production function. It's valuable because of its consequences, but not *terminally* valuable in itself.

But I don't think instrumentally valuing inequality is enough to escape the repugnant conclusion.

Here's how I'd operationalize what you're saying (by my understanding, possibly incorrectly): "World B, due to the benefits of inequality, will at some later point be better off than World C." But now this is a different thought experiment; we've stopped engaging with worlds A, B, and C as presented above and decided to compare some other worlds B-prime and C-prime instead. (Also, by virtue of what is B-prime better than C-prime? Total happiness, average happiness, both, something else?)

Another way to get at the same point: we can interpret the values of A, B, and C as being somehow integrated over all time, factoring in all the value that those worlds will create. Taking this view, valuing B over C seems to require valuing inequality *as such*, absent of any consequences (since the consequences are already fully accounted for, and C ended up with more total and average happiness). And if you can no longer justify your preference for greater inequality based on its consequences, it seems hard to justify at all.

edit: see also this comment below; the argument just relies on a tiny nudge towards equality (as long as it makes the happier population slightly less happy) -- you don't need to go all the way to full equality.

https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future/comment/8566058

Expand full comment

I don't think I would value *inequality* for its own sake, but I do think I place nonzero utility on "how happy anyone is allowed to be"; that is, going from "the happiest person in the universe is at 100" to "the happiest person in the universe is at 90" is intrinsically negative to some degree. It would then follow that "level out everyone's happiness" is not an automatic good. Intuitively I'd be most inclined to support it when the least happy people are most miserable (as opposed to "good but merely less good"), so it would also follow that the worlds that need "only a tiny nudge" to equality are exactly the ones where I'd be most likely to reject the proposal of levelling everyone off.

Expand full comment

Isn't that a sign that "let us imagine an abstract population A, where N more people with utility = previous utility minus delta appears as if by magic" is not maybe a useful way to think about real-life ethics?

Expand full comment

Changing happiness would take effort, the effort may (would) make more people unhappier, and there is no guarantee that the end point would make more (or any) people happier.

Removing this is like working at the "assume a spherical chicken of uniform density" level of philosophy - it's a way to spend an afternoon, but not a way to decide what to devote your life to.

I have not thought it through, but the "repugnant conclusion" probably fails on this as well.

Expand full comment

Yes. I think it’s no more problematic than the thought experiment about a utilitarian doctor who can cut up one healthy innocent person to provide organs to save five lives. Sounds horrific, but at least in part it’s because we really can’t imagine the thought experiment where you’ve got such certainty that this will work, with no other ill effects.

Expand full comment

More so, I think we intuitively know many other ill effects (such as people actively avoiding doctors!), and reject the theory before we even really consider it.

Eugenics seemed right to a large group of people, including many of the brightest minds of their time. We now fully reject most of what they said, and it's not because they weren't thoughtful. It's because of the practical application (as demonstrated most effectively by Hitler), and how obviously wrong it appeared to pretty much everyone. A real life application of "smart people should breed more and dumb people less" gets you a Hitler. Unintended consequences, for sure, but real.

Expand full comment

Well, Hitler did his level best to wipe out the highest-IQ population in Europe, which doesn't look like a central example of eugenics. Did the Nazis do anything to try to impose "smart people breed more dumb people breed less?" Other than forcibly sterilizing or murdering seriously disabled people, I've never heard of anything. Were they paying a bounty for German scientists or doctors to have more kids or something?

A bunch of the countries Hitler was at war with also had eugenics programs. And the reason they were terrible wasn't death camps or murder factories, it was the individual-level awfulness of having some bureaucrat or judge have the power to declare that "three generations of imbeciles are enough" and have you forcibly sterilized. In the US, those decisions were often partly based on race (because it was a deeply racist society), but it's not like that kind of coercive eugenics would be okay if it *weren't* applied in a racially biased manner. A bunch of Scandinavian countries also had eugenics programs, which presumably were all blonde haired blue eyed people sterilizing other blonde haired blue eyed people. I doubt this made it any less nasty.

Expand full comment

I largely agree with "Step C is probably the problem".

My read is this: The problem is the unspoken assumption that the "goodness" of a world can be taken by linearly summing the "goodness" of each life within it.

How certain are we that this is a realistic assumption? Why *should* it be true? There are an infinity of ways to summarize a population of individual numbers, and choosing one is a fraught task even in mundane real-world statistics.

Our judgment on a world should not contain *any* delight about its highest highs, its greatest heroes? Nor any unique appreciation for the valor exhibited by those at the lowest lows?

Given how aesthetic human morality is--the more I think about it, the less I think that "just sum up all the numbers" is an obviously right or good answer. I think it's very likely that redistributing all the utility in World B to get to World C can, at least potentially, make that world *worse*.

Has some downer implications for the pithy phrase "shut up and multiply", but we can stand to lose a catchphrase or two. Doesn't even mean utilitarianism is toast. Just have to be more careful about how many conclusions you draw and the confidence with which you claim them.

Expand full comment

You get to the same result even if it's "lower the original people's wellbeing by 0.0001% to raise the new people's wellbeing up (either just a bit, or all the way to this level)." It doesn't require full equality, just that there is any reasonable tradeoff between the wellbeing of the original people and the new people. I think most people would agree that the annoyance of having to retie a shoelace is worth others having a great and fulfilling day, and once you accept any tradeoff in that regard the repugnant conclusion is back.
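(A toy simulation of that iterated tradeoff, with made-up population and happiness numbers, just to show which way things move once any such tradeoff is accepted; this is my own sketch, not anything from the book:)

```python
# Track (count, happiness) groups instead of individuals so it stays cheap to run.
groups = [(5_000, 100.0)]   # stand-in for the original, very happy population

for step in range(20):
    # shave a sliver off everyone already alive...
    groups = [(n, h * (1 - 0.000001)) for n, h in groups]
    # ...in exchange for adding as many new people again, at happiness 1
    current_size = sum(n for n, _ in groups)
    groups.append((current_size, 1.0))

total = sum(n * h for n, h in groups)
size = sum(n for n, _ in groups)
print(total, total / size)   # total happiness soars; the average sinks toward 1
```

Accept the first shave, and there is no principled place inside that loop to stop.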

Expand full comment

This guy is trying to seagull my eyes 100%.

Expand full comment

Ahaha, I'm afraid the seagulls always do start with "you have so many chips, just one please!" and eventually they end up with your eyeballs.

Expand full comment

More seriously, if the argument can derive something huge with the addition of a very minor tradeoff, then it feels like it should be able to derive it without it OR there's something wrong with the argument (not that I can find it)

Expand full comment

To quote Sir Boyle Roche:

Opposing a grant for some public works:

“What, Mr. Speaker, and so we are to beggar ourselves for the fear of vexing posterity! Now, I would ask the honourable gentleman, and this still more honourable house, why we should put ourselves out of the way to do anything for posterity; for what has posterity done for us? I apprehend gentlemen have entirely mistaken my words. I assure the house that by posterity I do not mean my ancestors, but those who are to come immediately after them.”

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Despite what most people would agree on, perhaps all such exchanges should be Pareto improvements, and that's the only way out of the conclusion.

I wasn't really expecting alt-right turbolibertarianism to be the only consistent political framework yet here we are.

Expand full comment

Isn't that the good old-fashioned reductio ad absurdum? If we start off with "having to retie your shoelaces is a trivial annoyance" and then it is supposed to bring us to "and now you have to bring into being a gazillion people who live on moss and rainwater and sleep on beds of nails", then surely you can say "that is absurd, and I am getting off this bus right here at this stop". Yeah, if I ride all the way to the end of the line, the Repugnant Conclusion is the last stop, but in that scenario, it's the last stop where the bus careers over the edge of the cliff. Nobody is forced to sit in the bus all the way to that ending, and nobody can be forced to do so.

Expand full comment

I think the general problem with this bus ride is that Rationalists tend to assume that all relevant parameters scale linearly. Except for AI, which grows exponentially. But basically, these are the only two options. Therefore, given two points such as "I'm having a normal day" and "I was having a normal day but now my shoelaces are untied", Rationalists will immediately draw a straight line through these points directly to "a gazillion seagulls feasting on a gazillion eyeballs". In reality, however, pretty much nothing in nature works this way -- and certainly not human psychology.

Expand full comment

Reductio ad absurdum is a valid form of proof, though! So some point of the chain of inference must be incorrect if you agree with the premise but disagree with the conclusion. It's disturbing if you can't point out where the error is, because that may mean you are wrong and the conclusion holds (or the error is just hidden well).

Expand full comment

Step C isn't just "equalise everything", it's "equalise everything *and then make everything better for everyone*". B is "X people at level 100, X people at level 60"; C is "2X people at level 95".

You have to view equality not just as neutral but as *actively bad* to reject the RC on that basis.

Expand full comment

I thought it was obvious that equality was actively bad?

We literally wouldn't exist without inequality because absent inequalities there is no sexual competition, therefore no evolution and no us.

Expand full comment

Evolution isn’t logically necessary to produce us. If you’re going to worry about practical difficulties like that, there are so many earlier places to stop the argument. The argument is about imagining if some things *were* possible, and people were as well off as described even after accounting for all physical problems, whether they would be better.

Expand full comment

Our existence is fantastically improbable without evolution.

Expand full comment

People also assume that equalizing happiness is actually possible. In financial terms it at least facially looks so - you can take money from one person and give it to another. But how do you equalize other aspects of happiness? What do you do for people whose primary happiness motivations are based on other factors such as health or status? What if making one person happy requires making others unhappy?

Expand full comment

"There are only 10^67 atoms in our lightcone"

Are there really? That doesn't seem right. There are about 10^57 atoms in the sun

https://www.quora.com/How-many-atoms-fit-in-the-sun

So 10^67 atoms is what we'd get if there were about ten billion stars of equal average size in our light cone. This seems, at least, inconsistent with the supposition that we might colonize the Virgo Supercluster (population: about a trillion stars.)

Expand full comment
author

I think this is implying "light cone within the 10,000 years mentioned by the example".

Expand full comment

Yes, though in our paper - https://philpapers.org/rec/MANWIT-6 - we used 100,000 light years, which seems like a better number given that it basically encompasses the Milky Way, which is the fundamental limit on human expansion via interstellar colonization over the next 100,000 years, and due to spacing between galaxies, is a moderately strong limit even over the next ten million years. (Modulo some of the smaller satellite galaxies, which might expand this to 150,000 light years, albeit with a much smaller corresponding increase in mass.)

Expand full comment

Conditional on the child's existence, it's better for them to be healthy than neutral, but you can't condition on that if you're trying to decide whether to create them.

If our options are "sick child", "neutral child", and "do nothing", it's reasonable to say that creating the neutral child and doing nothing are morally equal for the purposes of this comparison; but if we also have the option "healthy child", then in that comparison we might treat doing nothing as equal to creating the healthy child. That might sound inconsistent, but the actual rule here is that doing nothing is equal to the best positive-or-neutral child creation option (whatever that might be), and better than any negative one.

For an example of other choices that work kind of like this - imagine you have two options: play Civilization and lose, or go to a moderately interesting museum. It's hard to say that one of these options is better than the other, so you might as well treat them as equal. But now suppose that you also have the option of playing Civ and winning. That's presumably more fun than losing, but it's still not clearly better than the museum, so now "play Civ and win" and "museum" are equal, while "play Civ and lose" is eliminated as an inferior choice.
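(One way to make that rule concrete, as a rough sketch; the function name and the numbers are mine, not the commenter's:)

```python
def preference_vs_doing_nothing(option_value, all_option_values):
    """Rank one child-creation option against the baseline of doing nothing."""
    # "Doing nothing" is pegged to the best non-negative creation option available.
    best_non_negative = max((v for v in all_option_values if v >= 0), default=0)
    if option_value < 0:
        return "doing nothing is better"
    if option_value == best_non_negative:
        return "equal to doing nothing"
    return "dominated by a better creation option"

# With options {sick: -5, neutral: 0, healthy: 70}, only "healthy" now ties
# with doing nothing; "neutral" is no longer treated as equal to it.
print(preference_vs_doing_nothing(0, [-5, 0, 70]))
print(preference_vs_doing_nothing(70, [-5, 0, 70]))
```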

Expand full comment

How close would you say your conceptualization of this is to the idea of Pareto optimality?

Expand full comment

There's a lot of silly questions like this related to the transitivity of preferences. Two example from a behavioral economics textbook that I use (https://www.amazon.com/Course-Behavioral-Economics-Erik-Angner/dp/1352010801/):

Suppose you are indifferent between a vacation in Florida versus a vacation in California. Now someone offers you that same vacation in Florida, plus an apple. Strictly better than no apple, so transitivity says you should also strictly prefer that to the California vacation.

Or suppose you have a line of 1000 cups of tea, each with one more grain of sugar than the last. You can't tell the difference between cups next to each other so you're indifferent between them, but the last cup is clearly sweeter than the first. According to transitivity, you should also be indifferent between the first and last.

In reality, people seem to only register differences in utility of certain relative amounts. Trying to make arguments appealing to both transitivity and tiny differences in utility is basically philosophical mugging.
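(A small sketch of the tea example, with an invented perception threshold, showing why chaining the pairwise indifferences is exactly that kind of mugging:)

```python
THRESHOLD = 10  # grains of sugar needed before two cups taste different (made up)

def noticeably_sweeter(cup_a, cup_b):
    return (cup_b - cup_a) > THRESHOLD

cups = list(range(1000))   # cup i contains i grains of sugar

adjacent_pairs_differ = any(noticeably_sweeter(cups[i], cups[i + 1]) for i in range(999))
first_vs_last_differ = noticeably_sweeter(cups[0], cups[-1])

print(adjacent_pairs_differ)   # False: every adjacent pair registers as "the same"
print(first_vs_last_differ)    # True: so indifference here simply isn't transitive
```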

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> MacAskill introduces long-termism with the Broken Bottle hypothetical: you are hiking in the forest and you drop a bottle. It breaks into sharp glass shards. You expect a barefoot child to run down the trail and injure herself. Should you pick up the shards? What if the trail is rarely used, and it would be a whole year before the expected injury? What if it is very rarely used, and it would be a millennium?

This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass quickly becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never run even a mile unshod.

Expand full comment

I thought of this too but this is not the point of the argument. Just pretend it's a hypothetical universe with hypothetically forever sharp glass, otherwise barefoot-welcoming trail and lots of unshod children with tender feet. It's easy enough to construct a mental universe in which this dilemma works.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

It's not the point, no, but my point is: if the hypothetical underlying this entire book is so ignorant of reality, how much should we trust the author?

Expand full comment

It’s important to improve thought experiments to appropriately consider physical possibilities and our intuitions. But with a philosophy book or a math book, you shouldn’t be *trusting* the author - the author is just stepping you through reasoning and you should be deciding whether you trust *yourself*.

Expand full comment

I wouldn't characterize this book as a work of moral philosophy, it makes too many empirical claims.

If you're promoting "long-termism" you need to demonstrate the ability to reason about long-term outcomes. Demonstrating a comical inability to do so in a toy case you've constructed as your ideal hypothetical points to either your own incompetence or the futility of the project. In either case, it casts doubt on his overall argument even if the reasoning is valid.

Expand full comment

Beautifully and succinctly expressed. Thank you!

Expand full comment

I trust myself that when I see "oh no, the cute moppet argument!", I decide the author of same is full of beans and I don't accept any arguments they are trying to sell me, look at this lovely bridge, on special offer today, wouldn't anyone be proud to own it?

The Drowning Child and the Barefoot Child are both recipients of the Darwin Awards and we should be glad to remove such stupidity so early from the gene pool!

Expand full comment

I laughed out loud, thank you

Expand full comment

My standard response to the Drowning Child is that on the first day that it happens, I might well save the kid and think nothing more of it.

If I find myself wading in to save the kid again the very next day, then *someone* - the child, its caretakers, or whoever is responsible for the body of water; possibly all three - is subsequently getting thrashed within an inch of their life, so they know better in the future.

On the third day, I'm taking the bus.

Expand full comment

Mine is that I'm wearing the expensive suit because I'm on my way to a job interview, and by jumping into a torrential wall of water, I die; my family is made homeless, and as my own daughter lies dying of starvation, phthisis and exposure, she gasps out "what idiot jumps into a flash flood anyway?"

Expand full comment

But thought experiments of this sort are not meant to be realistic at all (trolley problem, anyone?) In general it's a reasonable heuristic (don't trust an author who commits basic blunders) but I don't think it applies in this case.

Kenny Easwaran makes a valid point below as well. You don't need to trust philosophers, you only need to examine the arguments they're laying out.

Expand full comment

You still need for your thought experiment to map onto reality somehow, otherwise you're merely counting angels dancing on pinheads.

The trolley problem is not a situation anyone is likely to find themselves in at any point in their life, but it's not like it is fundamentally *unrealistic*. It *could* happen to you.

More importantly, the trolley problem is merely one example of the sort of problems that real people *do*, in fact, face quite regularly. Take the current war in Ukraine - there are civilians trapped in a location that is subject to heavy enemy bombardment; do you attempt to evacuate them, with a significant chance of failure and loss of your rescue party, or do you abandon them to their fate?

However, there are numerous thought experiments that are much harder to map onto any sort of real-life scenario. I'd say that asking "so, how does this cash out?" is well-advised before you even begin to evaluate the argument, because most thought experiments are, frankly, pretty bad.

Expand full comment

To put the point differently, if the author uses a simplified picture of the world for his hypotheticals without realizing it he may be badly underestimating the difficulty, in a complicated world, of knowing the long term consequences of current choices.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

That's a valid criticism. However, I feel that there are two questions here:

1) What is the desired long-term outcome of our current choices, from a moral standpoint?

2) How do we go about achieving the desired long-term outcome, given the difficulty of predicting long-term consequences of current choices?

Those are separate questions and you can't criticize the author for failing to adequately address question 2 if he's focusing on addressing question 1 (which I believe he is when he conducts the glass shard thought experiment).

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I don't think the hypothetical addresses only question 1. The answer to question 1 here is simple: we would rather that our actions not maim any children, now, or in the future.

It's the attempt to predict the long-term consequences of a simple action (leaving glass shards on a trail, given that a barefoot child will be running down it in the future), which is strictly part of question 2, that MacAskill trips over himself. In a hypothetical he set up to be the simplest possible case where we can obviously predict the consequences of our actions, he fails to correctly predict the consequences of actions.

He picked this hypothetical! As his obvious case! This is his drowning child, it's like if Peter Singer intentionally picked Aquaman as the drowning child, or if the original trolley problem included a third track that didn't require killing anyone.

Expand full comment

I don't think question 1 is so simple. Of course ideally we would not injure any children at any point in time. The question is, is there a moral discount applied to children in distant future. Is hurting a child a million years from now exactly as bad as hurting a child today? Or better? Or worse?

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

The answer to 2) is easy: do nothing.

Technological progress has happened and is happening. We have reason to expect it to continue to happen. People in the future will be inconceivably wealthier than us, and even more capable of solving their problems and fixing things, given that they will have many more examples of how (and how not) to do it than we do.

We don't owe them anything. They owe us, the bastards.

Given this, the answer to 1 is easy also. What is hard for us is easy for them; and we should not strive to control other people's lives.

Expand full comment

"Pretend I have a hypothetical which justifies my conclusion" is not good policy when engaging with a difficult conclusion. (Here "you" isn't you, it's MacAskill)

Expand full comment

But it's not that. The conclusion here is not "the shards will hurt a kid a million years from now". The conclusion is "IF we knew the shards would hurt a kid a million years from now, we should care about it as much as about hurting the kid today." For an "If X, then Y" statement to be true, the X part need not be true at all.

Expand full comment

But MacAskill is failing to construct a good hypothetical. People are sticking on this not because they can't imagine a hypothetical that supports the point, but because "he can't even make one" is a point in favor of "long termists don't realize how much they're misunderstanding and how wrong their interventions might be".

Expand full comment

I don't disagree on this. I disagree on a different point but I don't know how to put it more clearly than I already have. :(

Expand full comment

I think I got it from the other reply chain, no worries

Expand full comment

Ok, replace it with dropping some rust resistant nails.

Expand full comment

Or why not bear traps? I'm in agreement with Mentat Saboteur on this: such hypotheticals are not meant to bear any resemblance to reality, but rather to set up an appeal to the emotions (ironic, considering that the people creating them would say they are using reason and logic).

After all, why make it a *barefoot* *child* running up the trail? Why not an adult? Are we supposed to care less about a six foot four beefy hairy guy in bare feet getting cut on glass?

For maximum appeal to moderns, drop the child and make it a cute widdle puppy or kitty. People care more about their animals today.

Expand full comment

I was thinking the glass would just get covered with dirt over time. Layers accumulating over time is pretty basic geological history.

Expand full comment

My reaction to the glass shards example is that I feel intuitively less concerned about someone stepping on it in a hundred years than tomorrow and I'm someone who finds caring about future people reasonably intuitive in general. A small harm in a world a hundred years in the future that I barely comprehend just seems less immediate than it happening tomorrow. I care about the future people existing and their big picture happiness, but caring about them stepping on glass seems like micro-managing the future in a way that doesn't seem worth it.

So for me that example does the opposite of what it's intended to.

Expand full comment

It's inadvertently a great hypothetical because it shows how many wrong assumptions can get folded into long-termism.

Expand full comment

I'm not disagreeing with this part at all.

It's a decent hypothetical for answering the first question I referred to earlier ("should we care today about a better tomorrow?", where tomorrow stands for "hundred/thousand/million years from now) and a very flawed one for answering the second one ("what should we do today for a better tomorrow?")

Expand full comment

Fair.

I personally don't think it provides much value for the first question beyond what's already common: we care about the future, but at a discount because of how little we understand the future. It's good not to break the future, but beyond that, it's in many ways not our place to manage it.

In this way I think the second question is intertwined with the first one, and the hypothetical doesn't move me, at least.

Expand full comment

The real-world version of this is the use of landmines and cluster munitions in war, right? Kids born a generation after the end of the war occasionally lose a leg stepping on an old mine. How much should you be willing to give up in military success today in order to avoid some kids a generation from now losing legs?

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

But I think it's completely uncontroversial among regular non-EA people that landmines aren't worth it; if you think they are worth it, you are probably a self interested belligerent in an actual war, and everyone (including those in other wars) doesn't want you to do it. Even the US, place of questionable military practices, hasn't used landmines in 30 years.

The point of which is to say, EA isn't bringing anything new to the table in examples like these.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

This whole discussion is asinine. The vast majority of people do not know or care that glass erodes over these time frames, therefore the hypothetical is for all intents and purposes identical to one made in a world where glass does not erode that quickly.

This whole argument is barely one pip above the level of saying that somebody made a grammatical error and so technically their argument is incoherent and the making of this error means their reasoning should not be trusted.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

When I think of happiness 0.01, I don't think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.

If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn't seem so repugnant. If you take a page from the negative utilitarians' book (without subscribing fully to them), you can weight the negatives of pain higher than the positives of pleasure, and say that neutral needs many times more pleasure than pain because pain is more bad than pleasure is good.

Another way to put it is that a life of 0.01 happiness is a life you must actually decide you'd want to live, in addition to your own life, if you had the choice to. If your intuition tells you that you wouldn't want to live it, then its value is not truly >0, and you must shift the scale. Then, once your intuition tells you that this is a life you'd marginally prefer to get to experience yourself, then the repugnant conclusion no longer seems repugnant.

Expand full comment

Came here to post this. Additionally, .01 can be characterized as “muzak and potatoes” but could easily also be lots of highs and lows that add up to a life the people living it are glad they have: heartbreak, turning that heartbreak into art, and so on. So a .01x1e100 world could be very vibrant and interesting, not just “gray.” It would contain more awesome experiences, insights, and cultural diversity than the 5 billion person flourishing world, and no one would feel as though their participation in the enterprise wasn’t worth it.

A more realistic concern is that a galaxy of humans or a Dyson sphere full of ems or whatever that *didn’t* put a lot of effort and coordination and whatever into ensuring everyone’s flourishing would almost certainly feature a lot of extreme suffering and lives you wouldn’t want to have lived. I consider this pretty relevant to the considerations of whether you’d prefer singletons vs competitive equilibria.

Expand full comment

This is probably a lot of the problem with the repugnant conclusion - we really don’t know how to visualize it. (I sometimes say “the repugnant conclusion is that millions of people should be allowed to live in skyscrapers in San Francisco without cars”.)

Expand full comment

RE: muzak and potatoes, it does depend what the muzak is (e.g. continuous loops of John Ritter's Christmas treacle would indeed be hell on earth). As an Irish person, I cannot agree that potatoes are a sign of badness 😁

Expand full comment

It’s not meant to be bad - it’s meant to be just fine.

Expand full comment

Yeah this is pretty close to what I wanted to say. I would consider the thresholds of quality-of-life for a life to be "worth starting" and "worth continuing" to actually be pretty far apart (with the former being much higher than the latter - e.g. for my own life, I very very strongly want not to die, but I'm kind of on the fence as to whether or not I'm glad I was born), and a main thing that makes the repugnant conclusion seem repugnant is people assuming that the former must be as low as the latter.

Expand full comment

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

Any view that takes the average into account falls into the Aliens on Alpha Centauri problem, where if there are a quadrillion aliens living near Alpha Centauri, universal average utility is mostly determined by them, so whether it's good or bad to create new people depends mostly on how happy or miserable they are, even if we never interact with them. If those aliens are miserable, a 0.001 human life is raising the average, so we still basically get the Repugnant Conclusion; if they're living lives of bliss, then even the best human life brings down the average and we shouldn't create it.
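(A quick numeric illustration, with every figure invented; the only real point is that the universal average moves toward whatever you add, and the aliens almost entirely determine where that average sits:)

```python
from fractions import Fraction  # exact arithmetic, so the tiny shift isn't lost to rounding

aliens, humans = 10**15, 8 * 10**9
human_happiness = Fraction(1, 2)
new_life = Fraction(3, 10)       # a modest but clearly positive human life

for alien_happiness in (Fraction(9, 10), Fraction(1, 10)):
    baseline = (aliens * alien_happiness + humans * human_happiness) / (aliens + humans)
    verdict = "raises" if new_life > baseline else "lowers"
    print(f"aliens at {float(alien_happiness)}: the same new life {verdict} the universal average")
```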

Expand full comment

Under the assumption that your ethical system is isotropic across alienness and physical distance, yes…

Expand full comment

Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

Some religions do, but I'd be surprised to find a modern atheist philosopher among them. But if you accept the premise that preventing the existence of a future person is as bad as killing an existing person..

Expand full comment
author

The Repugnant Conclusion doesn't imply that preventing the existence of a future person is as bad as killing an existing person!

I think if you accept the Conclusion, then (assuming your children will have better than zero lives and not contribute to some kind of resource-shortage) having children becomes a morally good thing to do, but not necessarily better than donating to charity, being a vegetarian, voting for the right side in elections, or anything else that most people consider nice but not obligatory.

Expand full comment

Clear, thanks!

Expand full comment

Scott's answer is somewhere on the continuum of "taking utilitarianism seriously" other than 100% (like most Rationalists - EY said 75% on Twitter recently https://twitter.com/ESYudkowsky/status/1497157447219232768). At 100% utilitarian seriousness, all good things are obligatory; the system is a *notoriously*-harsh mistress in that regard.

However, if having kids is for some reason costly* for you to do, even utilitarianisms that embrace the RC say it may not be the best life choice *for you*. If, say, having kids would cause you to not prevent an X-risk, then having kids is not the most good you can do.

*Has to be unusually costly, though, otherwise your utilitarianism is itself an X-risk by telling all people to not have kids.

Expand full comment

> At 100% utilitarian seriousness, all good things are obligatory; the system is a notoriously-harsh mistress in that regard.

I think that claiming this is "utilitarian seriousness" is ignoring the fact that there are plenty of non-utilitarian moral maximalists out there, most notably Jesus Christ. I don't think moral maximalism and utilitarianism are actually the same thing, despite how often they're conflated.

Expand full comment

There are a bunch of moral systems that even at 100% seriousness don't prescribe an exact sequence of actions, and a bunch more that have clearly-delineated "tiers" of obligations (e.g. Kant's imperfect duty). Utilitarianism has no obvious line between the good and the obligatory.

Expand full comment

Yes, and deontology also has no obvious line between the good and the obligatory, nor does virtue ethics, because you must add that line yourself, as Kant did in his specific form of deontology.

Expand full comment

>>>nice but not obligatory

This covers a lot of territory, even without the 'most people' qualifier.

Are there any positive things that are considered nice *and* obligatory? By EAs if not most people?

(Also, I think that whether those things are even considered nice depends a lot on ones circle.)

Expand full comment

Isn't the whole point of EA that YES, it is morally obligatory to increase overall utility by all the means available to you?

Expand full comment

It's more like, as long as you're acting to increase utility, you might as well do it in the most efficient way.

Expand full comment

Only if the well being of those children will be greater than the cost in well being to others.

Expand full comment

Not everyone who accepts it is a utilitarian. I mentioned Michael Huemer, for example.

Expand full comment

The suppositions of misery - whether impoverished nations or sick children- to me always seem to leave aside an important possibility of improvement.

The nation could discover a rare earth mineral. A medical breakthrough could change the course of the lives of the children. A social habit could change.

In fact, while the last half millennium has been Something Else, and Past Performance Is No Guarantee of Future Returns, it does seem that future improvements are, if not most likely, at least a highly possible outcome that needs consideration.

(Been a while since a post has contained such a density of scissor topics.)

Expand full comment

"they decided to burn “long-termism” into the collective consciousness, and they sure succeeded."

If the goal is "one-tenth the penetration of anti-racism" or some such, that at best remains unclear. It's worth dwelling on your identity as an EA + pre-orderer here and realizing that very few media campaigns have ever been targeted so carefully at "people like you." Someone on Facebook asked if anyone could remember a book getting more coverage and I think this response would hold up under investigation:

"Many biographies/autobiographies of powerful people; stuff by Malcom Gladwell, Tai-Nehisi Coates, Freakonomics, The Secret… worth remembering that this is a rare coincidence where you sit impossibly central in the book's target demo. Like if you were a career ANC member, A Long Walk to Freedom would have been everywhere for you at one point"

Expand full comment
author

One tenth the penetration of anti-racism would be amazing. I don't think long-termism is anywhere near that amount yet but I think it's done very well with the resources available to it. This is like your product running an ad campaign, seeing sales dectuple, and complaining that it's still not as well-known as Coca-Cola.

Expand full comment

Extremely fair. My bar for "burnt into the collective consciousness" is just higher than yours. Even on your terms, I still think it's too early to tell whether this has made much of an impression on people outside of EA. There should be predictions about changes in GWWC pledge growth rates, EA Forum engagement, 80k newsletter subs. My prior is two weeks of double the baseline, decaying back to 5-10% above baseline by November. It's big and you've warded off stagnation, but you're not on a clear path to wide relevance.

Expand full comment

It seems anti-racism is directed at normies while EA is directed at influential people. Certainly 80,000 Hours doesn't care much if 90 IQ Minimum Wage Joe optimizes his contribution, which kinda counts as a point against EA as a robust philosophy.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> It's worth dwelling on your identity as an EA + pre-orderer here and realizing that very few media campaigns have ever been targeted so careful at "people like you."

FWIW, I'm a long time reader of SSC/ACX and a "rationalist" and this is the first I've heard of the book. So it doesn't appear to have even penetrated much to EA-adjacent demos.

Expand full comment

Slavery is very much still with us. It is actually legal in several African countries, and de facto legal in several others, as well as in various Middle Eastern locations. That is to say nothing about the domestic bondage live-in servants are subjected to across much of south-east Asia, and covertly in various places across the U.S. and Europe, as well as sex trafficking. The world is a stubborn and complicated thing, and doesn't work as cleanly as thought experiments and 40,000-foot overviews would suggest.

Expand full comment
author

See the paragraph saying "Since then there has been some involuntary labor in prisons and gulags, but nothing like the system of forced labor that covered most of the world in the early 1800s. And although we may compare some modern institutions to slavery, it seems almost inconceivable that slavery as open and widespread as the 19th century norm could recur without a total change of everything in society." Everything is always complicated but I think it's fair to point out the end of large-scale race-based explicit slavery in the West as a specific accomplishment.

Expand full comment

https://en.wikipedia.org/wiki/Slavery_in_the_21st_century

They're estimating contemporary slavery at 38 to 48 million people, and much of it isn't by governments in some formal legal way. I think you're understating the problem.

I agree that ending almost all legally permitted slavery was a huge accomplishment and the world would be a much worse place if it hadn't been done.

Expand full comment

So, my question would be, did it? End almost all legally permitted slavery.

Africa wasn't changed. The Mid East wasn't changed. India felt some change, but not universal/on the local level. China and East Asia probably fell between Mideast and India.

I'm all for American exceptionalism and centering the West, but there are limits.

Expand full comment

On top of that, is there any law of nature that prevents slavery from being re-legalized tomorrow ? I think not, sadly...

Expand full comment

While I agree with that in principle, I think that phrasing is moving the goalposts a lot. It wouldn't be a “law of nature” that prevents it, just some ongoing state of the world, and it wouldn't have to be 100% absolute, just have really long temporal leverage.

Of course, _how_ long “long” is can then factor into how things get weighted…

Expand full comment

My point is that, given all the upheavals that humanity has gone through by now, a return of slavery is not all that hard to imagine. Wilder things have happened in the past. True, official slavery would likely remain prohibited as long as Western civilization survives -- but there are no guarantees of that, either...

Expand full comment

Not "the world," rather, "the publicly-legible parts of the WEIRD world". Slavery is still with us.

Expand full comment

Sadly not, yeah. For one thing, we returned to democracy, and that once seemed equally crazy.

Expand full comment

Who is this "we", 2022-WEIRDo?

Expand full comment

Africa was mostly colonized, and ending the slave trade was a common justification for said colonization. India (including what is now Pakistan & Bangladesh) was fully colonized by Britain, the big anti-slavery power cited above.

Expand full comment

"Contemporary slavery" is often confused with the chattel slavery of the past. Presumably there were many people during the period when slavery was legal who, while not legally slaves, would be classed as slaves if they lived today.

Expand full comment

I'm not sure what you're arguing. Maybe that the number of slaves in the past were underestimated?

Expand full comment

By the definitions used to measure "modern slavery", yes.

Expand full comment

Do you have a source, preferably with some numbers about how many people meet various conditions which are considered slavery?

I looked around a little, and shift work wasn't mentioned. Forced marriage was considered slavery by at least some people, but I would expect the numbers to be higher for that.

Expand full comment

These estimates are terrible, they count shift work as slavery

Expand full comment

That seems unlikely, since I expect there are a lot more shift workers than the low tens of millions. What's your source?

Expand full comment

From one of the reports on the wiki page: “Withholding of wages, or the threat that this would be done, was the most common means of coercion, experienced by almost a quarter of people (24 per cent).” This is terrible, but it should not be counted as slavery; they count it anyway. https://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/publication/wcms_575479.pdf

I read this wiki page and its links some time ago and remember there was a map which showed the north of Canada as very big on slavery. But really it's just where people mostly work shifts, and people who signed half-year contracts are more vulnerable to this type of coercion than people who work for a monthly salary.

Expand full comment

One possibility to consider is radical value changes.

Past people were very different from us today, and future people will probably be different from present humans. They will look weird.

To prevent radical value changes in the future requires global coordination that we presently don't have.

Expand full comment

Why would you want to prevent radical value changes in the future? Unless you mean value changes that decreases utility. But that just collapses to the same utilitarian argument.

Expand full comment

The Eli Lifland post linked assumes 10% AI x-risk this century.

Expand full comment

Informative article. Thank you. I'm gonna steal your paragraph "if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know."

Expand full comment

Nobody seems to know how to solve the AI alignment problem though. Which raises the question as to why we should be at all debating about the repugnant conclusion or some of these other factors when it isn't a very likely possibility that we will be around to implement it.

Expand full comment

I'm advocating a solution to the alignment problem. www.deusmechanicus.org

Expand full comment

I clicked this link because I suspected the reference. I was not disappointed. Well memed, friend, well memed.

"I began to crave the strength and certainty of paperclips"

Expand full comment

It's necessary so that in case we get a suffering-maximizer version of Roko's Basilisk and end up with a few trillion people in endless torment we'll be able to chuckle between screams about how flexible the word "repugnant" used to be.

Expand full comment

> MacAskill must take Lifland’s side here. Even though long-termism and near-termism are often allied, he must think that there are some important questions where they disagree, questions simple enough that the average person might encounter them in their ordinary life.

I think there's a really simple argument for pushing longtermism that doesn't involve this at all - the default behavior of humanity is so very short-term that pushing in the direction of considering long-term issues is critical.

For example, AI risk. As I've argued before, many AI-risk skeptics have the view that we're decades away from AGI, so we don't need to worry, whereas many AI-safety researchers have the view that we might have as little as a few decades until AGI. Is 30 years "long-term"? Well, in the current view of countries, companies, and most people, it's unimaginably far away for planning. If MacAskill suggesting that we should care about the long-term future gets people to discuss AI-risk, and I think we'd all agree it has, then we're all better off for it.

Ditto seeing how little action climate change receives, for all the attention it gets. And the same for pandemic prevention. It's even worse for nuclear war prevention, or food supply security, which don't even get attention. And to be clear, all of these seem like they are obviously under-resourced with a discount rate of 2%, rather than MacAskill's suggested 0%. I'd argue this is true for the neglected issues even if we were discounting at 5%, where the 30-year future is only worth about a quarter as much as the present - though the case for economic reactions to climate change like imposing a tax of $500/ton CO2, which I think is probably justified using a more reasonable discount rate, is harmed.
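(A back-of-the-envelope check on those discount factors; the rates are the ones named above, and the 30-year horizon is taken from the AGI example:)

```python
for rate in (0.00, 0.02, 0.05):
    factor = 1 / (1 + rate) ** 30
    print(f"{rate:.0%} discount rate: welfare 30 years out is weighted at {factor:.2f} of today's")

# 0% -> 1.00, 2% -> ~0.55, 5% -> ~0.23 (the "about a quarter" mentioned above)
```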

Expand full comment

Dwarkesh Patel has a series of pretty good posts related to unintuitive predictions of growth.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Everyone talks about the Repugnant Conclusion, but nobody talks about the Glorious Conclusion: instead of adding slightly-less-happy people and then equalizing, you can add slightly-more-happy people and then equalize. The second option is obviously better than the first. The obvious end point of this is infinite people who are infinitely happy. So that's the true moral end point of total utilitarianism.

Why does no one talk about this? Because no one believes that you can actually in the real world create people with arbitrarily high happiness. Whereas we actually know how to create people with low levels of happiness.

But then the Repugnant Conclusion depends on having at least some realistic assumptions about what's possible and what's not. Why not go all the way and add all the missing realism?

Creating unhappy people costs money. Money that could have been spent on making existing people happier. This is a tradeoff and it probably has an optimal point that is neither of the two extremes of having only one ultra-happy person or having a quadrillion suicidal people.

Expand full comment

Whoa.

Someone should write this up as a journal article.

Expand full comment

Seems legit, well done.

Expand full comment

The whole point of the thought experiment is the tradeoff.

Expand full comment

No it's not. See the entry in the Stanford Encyclopedia of Philosophy. No mention of the word 'tradeoff' nor anything similar. The argument is that there is always a better population that is composed of people with lives barely worth living. There's no discussion on the possibility of some middle ground that's better than both, or whether it's actually possible to create that better population.

Indeed, if all people got out of the Repugnant Conclusion argument was 'utilitarianism asks us to consider the tradeoff between making existing people happier and creating new people', then no one would pay attention to it because it is obvious.

The only reason we talk about this is because of the implied assumption that you can always add more barely-happy people without any cost to the existing people, which leads to the conclusion that the best population is the one where everyone's life is barely worth living.

Expand full comment

The idea of whether it would be better if the additional people were happier than average is obvious enough that no one would bother investigating it. Similarly, the whole point of the trolley problem is that you can trade a smaller number of lives for a larger number. If you could send a trolley onto a track that wouldn't kill anybody, that would obviously be better, so it wouldn't work as a thought experiment. Or consider the question of whether it would be ethical for a hospital to kill someone in order to save many lives via organ donations. If you could avoid killing them and save the same lives, there isn't the tradeoff that makes the question interesting.

Expand full comment

Increasing happiness for people costs money, money that could be spent on creating other people instead. It's like a couple having one kid and giving him all he needs, or ten kids who only survive on potatoes. This trade-off is realistic; it's how things work in real life.

Expand full comment

A couple of observations I have about the EA movement in general...

It seems to me that those people made rich by a nation or region's hegemon status feel strongly drawn to develop theories of "how the world should be" - how to make things better, or give a better world to our children.

I think it all looks good on the surface. And of course wealth gives us the free time to introspect upon these things. But underneath, I think there's a lot of colonialism in there. It's like the group psyche of the well-off middle-classes seeks to both expunge its own sense of guilt for how hegemon status was achieved, and to reinforce its level of cultural control through developing "improvements" that benefit other races, whilst still preserving hegemony.

Expand full comment

Yeh. I mean, one of the things that isn't discussed here is how to stop the starvation in Yemen, which would be a quick fix for Americans.

Expand full comment

How exactly does that quick fix work?

Expand full comment

Ask the Saudis to stop the blockade. Honestly.

Expand full comment

The US government can't even get the Saudis to ramp up oil production, something that's actively in its interest (for individual Americans, for the people currently in charge, and for the state as a whole): https://nypost.com/2022/07/18/oil-spikes-as-biden-fails-to-win-saudi-pledge-to-boost-output/ . How shall they influence them to stop murdering Yemenis?

Hegemony isn't what it used to be.

Expand full comment

In a world where there's a hegemon, who should consider the future shape of the world? The people who are powerless because they're not the hegemon?

Expand full comment

Was Mrs. Jellyby a 19C EA?

Expand full comment

I wouldn't consider the Western middle classes to be especially powerful personally.

Expand full comment

Yeah, the stink of paternalism reaches all the way around the world.

Let people make their own choices, even if you don't agree with them.

Expand full comment

I feel like it's just something people involved in EA can check out inside themselves - is there a hidden agenda in my giving?

Money is handy. But only awareness transforms.

Expand full comment

Nice list of publications where WWOTF was featured! Let's not forget all the videos.

Kurzgesagt: https://youtu.be/W93XyXHI8Nw

Primer: https://youtu.be/r6sa_fWQB_4

Ali Abdaal: https://youtu.be/Zi5gD9Mh29A

Rational Animations: https://youtu.be/_uV3wP5z51U

Expand full comment

It's interesting that towards the end of his career, Derek Parfit embraced Kantianism and tried to prove in his final book that it leads to the same conclusions as utilitarianism. It seems to me that the paradoxes in "Reasons and Persons" should point us in the opposite direction.

Kantians and utilitarians disagree on first-order issues but they start from similar metaethical premises. They think that most moral questions have an objectively correct answer, and that the answer is normally one that identifies some type of duty: either a duty to maximize aggregate well-being, or a duty to respect individual rights.

If you're an evolutionary naturalist you shouldn't believe those things. You should believe that our moral intuitions were shaped by a Darwinian process that maximized selective fitness. This implies that they weren't designed to produce truth-tracking beliefs about what's right or wrong, and it strongly suggests (I don't think it's a logical implication) that there *aren't* any objective truths about right and wrong.

Under those circumstances it's predictable that our intuitions will break down in radically hypothetical situations, like decisions about how many people should exist. Now that human beings have the power to make those decisions, we've got to reach some sort of conclusion. But it would be helpful to start by giving up on ethical objectivism.

Expand full comment

I agree there are no objective moral truths (even though I've been linking to Huemer, who is a moral realist).

Expand full comment

What would you suggest replacing ethical objectivism with? The only alternative that seems logically consistent in that case would be Egoism, but it sure isn't popular.

Expand full comment

In practice? Social contract, which is what humanity has essentially been doing for most of its existence. It tends to boil down to some variation on the Golden Rule, because the ultimate goal is to stop other people from doing bad things to *you* (and, therefore, you refrain from doing those same bad things to them).

Unsurprisingly, it works rather well, because the alternative is war of all against all. Even the "international community" of states - which is the closest thing we have to pure anarchy - at least pretends to act like this, because having everyone more-or-less follow the agreed upon rules is typically better than states attempting to subjugate or destroy one another (the latter of which is something we've gotten frightfully good at over the years).

Expand full comment

The social contract as you've described it here does not seem different to me than Egoism. Egoism says, roughly, that you should take actions that are in your self-interest. Inasmuch as you choose to abide by a social contract because it will "stop other people from doing bad things to *you*" you are acting as an Egoist. Which is sensible to me in a hypothetical where morality is not objective.

Of course, it also follows that if you want to defect against others, and you believe you'll get away with it without punishment, you should do so. Or, rather, there is no reason not to.

Expand full comment

We learned to lie and to spot lying long before we came down from the trees. We can't reasonably know when the first camouflage evolved, but I suspect it might have been present way back in the Cambrian Explosion.

You can look at morality as a commitment strategy: "I will not defect, even if it benefits me at the time, in order to avoid a descent into a defection cascade in the future".

Or you can take the easy road, and observe that the etymology of "morality" goes back to the Latin word for "custom". When in Rome, do as the Romans do.

Expand full comment

You are giving reasons why an Egoist would conform to the social contract in general, but it's still Egoism. If we have a disagreement, it's not about the utility of a social contract.

Expand full comment

I love this comment!

Can we connect?

Also into EA / AI risk and looking for alternatives to utilitarianism. Of late I've been interested in social contracts and commitments.

Samuel.da.shadrach@gmail.com

Expand full comment

An alternative to both ethical objectivism and ethical relativism is open individualism. It works especially well as a foundation for utilitarian-like theories.

Expand full comment

This seems a good place to briefly vent about this slightly maddening topic and an atomistic tendency of thought that is in my opinion not helpful in moral reasoning.

For example, these thought experiments about 'neutral children' with 'neutral lives' and no costs or impacts are not getting to the root of any dilemma. Instead, they are stripping away everything that makes population dilemmas true dilemmas.

In actual cases, you have to look at the whole picture not just the principles. Is it better to have a million extra people? Maybe? Is it better to have them if it means razing x acres of rainforest to make room for them? Maybe not? It will rarely be simple. And it won't be simple even if there are 10^whatever of us, either. Will it be better then to expand into the last remaining spiral arm galaxy or will it be better to leave it as a cosmic nature park, or unplundered resource for our even longer term future? Who knows?

I also think a holistic approach exposes a lot of the unduly human-experience-centred thinking that is rife in this whole scene. I think many people care about wild species and even wild landscapes – not just their experience of them, but the existence of them period. Should we therefore endeavour to multiply every species as far as we can to prevent the possibility of their wipeout? No, because all things are trade-offs.

The world is too complicated for singly held principles.

Expand full comment

Ok, and? So, thought experiments are less than perfectly illuminating, but what do you propose to do instead, other than saying that everything is complicated and going off to meditate and ponder upon the mysteries of life and the universe?

Expand full comment

I didn't say these thought experiments are less than perfectly illuminating. Actually I think they're often used in a way that is *worse* than useless. (I restrict this to underspecified moral thought experiments, I do think thought experiments can be helpful in other areas of philosophy.)

What else to do instead is a big question, but philosophically I would think some people could benefit from learning to content themselves with a messier view of morality, and from moving on from first-principles thinking to debate and action that looks a bit more like politics than formal logic.

Expand full comment

Well, the way I see it, the reason that we have this unique high-growth situation is that we were able to move from more politics-style discourse to a more formal-logic one in some domains. It's plausible that things like ethics are just too messy for a first-principles-centric approach, but it seems to me that we haven't been doing it for long enough to confirm this, whereas the politics approach is basically the historic default, which has conclusively demonstrated its utter intractability.

Expand full comment

I guess anything is worth a try. My best guess, though, is that moral reasoning is an intricate expression of boos and hurrahs, so however much it's codified and worked on, the conclusions we each reach will still, ultimately, supervene on some irreducible preferences (some will consider the wild forest plus a billion people to be most valuable, others put more worth on 2 billion people and no forest). It's well worth drawing out and reflecting on the conflicting preferences, but it doesn't necessarily mean they'll go away.

A formal logicky approach might work in some domains but it might be cargo culty to think that success in some means it will succeed in all.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Yays and boos clearly play a major role in how moral reasoning in actual humans functions, but that doesn't mean that we can't determine some unifying principles for where they come from and how they get inside people. I'd be surprised if it's too controversial in mainstream philosophy to say that, for example, game-theoretical considerations play a significant part, or that cultural inculcation is a prominent mechanism.

Expand full comment

Sure. Those things are definitely at play. I'm all for that kind of meta ethics. It's interesting (and an empirical question) to consider to what extent our yay/boo reflexes change as a result of reasoning and conscious deliberation, and to what extent we are endowed with them by culture and experience.

(No doubt in a few centuries we'll have figured out how to infallibly raise children such that their instincts automatically line up with utilitarian calculus, making all moral debates much simpler and more efficient.)

Expand full comment

The argument that we should aim to reduce unhappiness rather than maximise happiness has always been more persuasive to me. Happiness is something we can hardly define in real life, but people will certainly squeal when they are unhappy! Plus, in negative utilitarianism you get to argue about whether blowing up the world is a good idea or not, which is a much more entertaining discussion than whether we should pack it full of people like sardines.

Expand full comment

People act both to decrease their unhappiness and increase their happiness. We can model it all with ordinal preferences without actually needing to set a zero-point demarcating negative from positive.

Expand full comment

I would like to register at this juncture that the Benevolent World Exploder is my absolute all time favourite name for a philosophical argument and band.

Expand full comment

I agree

I'm not so repelled by the benevolent world exploder, but maybe you could add a deontist clause that says that you can't kill people either, and then you have regular antinatalism

Or that clause could come from risk of bias, right? (https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans)

Expand full comment

This stuff is silly and just highlights how the EA people don't understand the fundamental nature of *morality.* Morality doesn't scale - and that's by design. Morals are a set of rules for a particular group of people in a particular time and place. They aren't abstract mathematical rules that apply to everyone, everywhere, at all times and in all places.

Expand full comment

To be fair it's not like EA people invented utilitarianism. You're describing a view of morality (deontology + relativism?) which some people hold and other people don't. It's far from a settled debate.

Expand full comment

Call me a contrarian if you want, but I don't think that I have a 1% chance of affecting the future. I have about a 0.000025% chance of affecting Los Angeles, and that's me being optimistic. Maybe someone like Xi Jinping, who can command the labor of billions, could pull it off; but even then, a whole 1% seems a bit too high, unless he wanted to just destroy the future with nukes. Wholesale destruction aside, the best that even the most powerful dictator can do is gently steer the future, and I doubt that his contribution could rise to a whole percentage point.

Expand full comment

I think that you can affect the future quite easily, and quite substantially, you just can't do it in any predictable way. Your attempted good deeds have equal probability of turning out to have good or bad effects in the long run, your attempted bad deeds likewise, and the most significant thing you ever do will probably be the time you accidentally scared a butterfly in Central Park setting off a chain of events which will eventually lead to the defeat of the trans-Jovian hivemind by Antipope Pius XX at the fifth and final battle of the Cydonian jello rift.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

The butterfly is a neat conversational hypothetical but if you do the math, it's vanishingly unlikely that any specific butterfly (or action) has an outsized effect in chaotic systems. The systems are sensitive to initial conditions *in the aggregate*, but they're not sensitive to every single condition; in situations where there are numerous or uncountable conditions, most conditions have an effect which peters out to zero after a limited time.

Expand full comment

I don't think the idea is that you could increase the EV of the future by 1% all by yourself. I think it's that maybe we could all do that collectively, for example if we spent 1% of GDP on it, or something.

Expand full comment

What about going on a killing spree of people who are a net-negative to society?

Expand full comment

Well, he *did* mention Los Angeles; perhaps you're both thinking about https://en.wikipedia.org/wiki/Falling_Down

Expand full comment

The Surfline.com guys affected Los Angeles. Pretty noble work.

Expand full comment

What if he invents superintelligent AI and achieves world domination?

Expand full comment

I think an extremely important reason to prioritise animal welfare is AI risk. A learning AI would likely base at least some of its learning on our moral intuitions. And we would be pretty close to animals for a super intelligent AI. How we treat animals might affect how AIs treat us!

Expand full comment

The AI is either evil or it isn't. If it is evil, it's not going to care about how we treated animals. If it isn't, it's not going to kill humans.

Also, it's not going to exist, so there's even less to worry about.

Expand full comment

You're missing the part where I mention that how we treat animals may play a part in whether the AI is evil or not. Machine learning depends on the extant universe of things to train itself.

Expand full comment

If the machine is learning from us then it will think that animals are tasty meat.

Expand full comment

We are either evil or we are not. If we are evil, then we deserve to suffer, and if we are not evil, then a not evil AI will treat us just the way we treat factory farmed animals.

Expand full comment

The AI is like an avenging daemon, isn't it?

Expand full comment

No, I was trying to point out that treating everything as a binary between "evil" and "not evil" lets you get too-convenient conclusions that support whatever you want.

Expand full comment

I didn't treat “everything” as a binary between “evil” and “not evil”. I treated one thing, the AI, which is speculated to do evil things, as evil. Unless we count total human genocide as a moral good.

Why the AI would be a radical vegan is not clear anyway.

Expand full comment

"How we treat animals might affect how AIs treat us!"

The revival of baby-farming? Only different this time!

https://en.wikipedia.org/wiki/Baby_farming

Expand full comment

Imagine everybody adopts veganism, and even sweeps the ground in front of them (like some Jains) to avoid walking over bugs. And, somehow, no more development of land, meaning no more deforestation etc. Even if all these highly unlikely things happen, people will almost certainly still try to kill mosquitoes, midges, black flies, other biting insects, and parasites such as lice.

Doesn't this make it nearly inevitable that any AI which trains itself on data about all human-animal interactions will see humans treating certain animals as having no moral value and thus to be disposed of in the most convenient way?

Expand full comment

Where’s the AI’s morality coming from? Why isn’t it influenced by human morality on this, which is largely pro-meat?

Expand full comment

The main differences between human and animal minds are qualitative, not quantitative. In order for an AI to stand in relation to us as we do to animals, we would have to be fundamentally incapable of comprehending the factors that drive its behaviour. In the (IMO, unlikely) event that such an AI were to be created, it would be completely pointless to try and influence its ethics in any way. It would be like a dog trying to debate philosophy with its owner.

Expand full comment

This is a reason to give animal welfare 0 priority and assign 0 value to animal lives.

Assuming a powerful AI existed, if it viewed animals as even being on the same scale as humans when considering importance, humanity would be shafted.

Expand full comment

I guess I’m the first Scottish person to read this, so let me formally object to MacAskill being described as an ‘English writer’, on behalf of our nation

Expand full comment

Charitably, he’s an English writer like he’s a book writer.

But realistically, yes, even Englishman Alexander is confused about Britain :(

Expand full comment

That's certainly one interpretation but you know we're sensitive about these things!

Expand full comment

Is he Scottish? His last name is from marriage.

Expand full comment

He is Glaswegian, the very best kind of Scottish.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Quite right - he should be "a British writer" when he's doing well and "a Scottish writer" when he's failing, as is traditional.

Expand full comment

>There are only 10^67 atoms in our lightcone

Meh, I wouldn't give up quite that fast. Sometimes I think about fun schemes to try if the electroweak vacuum turns out to be metastable (which last I heard, it probably is). And there's a chance more stuff might crop up once we crack quantum gravity.

Also, only a 1% chance of affecting the vast future, really? I suspect that's underselling it. Right now, everything from human extinction to a paradise populated by considerably more than a nonillion people looks possible to me, and which one we get probably depends very strongly on actions taken within this century.

Expand full comment

>"But the future (hopefully) has more people than the present. MacAskill frames this as: if humanity stays at the same population, but exists for another 500 million years, the future will contain about 50,000,000,000,000,000 (50 quadrillion) people. For some reason he stops there, but we don’t have to: if humanity colonizes the whole Virgo Supercluster and lasts a billion years, there could be as many as 100,000,000,000,000,000,000,000,000,000,000 (100 nonillion) people."

The main threat we face may be the reverse:

https://quillette.com/2022/08/20/the-unexpected-future/

Expand full comment

Not an x-risk, can be ignored. Natural selection obviously works against it. In the worst case civilization will just genocide its old and carry on.

Expand full comment

An excellent response to justify dealing with theoretical problems rather than real ones.

Expand full comment

Regarding section IV. and Counterfactual Mugging:

You assume that there is no competition for resources (not possible) and that the happiness of people is not an interaction (which I think is wrong). Happiness is a relative term, and even that is a 'resource'. If there is one person with happiness 80 and all of a sudden another appears with happiness 100, that 80 may go down to 60 just because the 100 appears. Or it may go up to 90 if they hook up. You are much happier being middle class in Africa surrounded by poorer people than being poor in the US surrounded by richer people.

What I want to say is that simple utility functions don't work except in academic papers or when paying students to switch coffee mugs with pens.

Expand full comment

I assume this is baked into the calculation.

Also, you're an asshole if you're unhappy because someone, somewhere else is happier than you.

Expand full comment

Sadly it's not baked into the calculation. The calculation was extremely clear cut and simple; Scott reproduced it in the review. The example was just kinda half baked in the first place.

Expand full comment

Yeah. It's kind of a shame that that level of simplistic argument made it into a widely notable popular philosophy book. "Assume people are perfectly round spheres with independence, and assign each one a number. Now, it follows that..."

Expand full comment

I would love to read somewhere a more detailed analysis of the "drowning child" thought experiment. Is it actually valid to extrapolate from one's moral obligations in unforeseen emergency scenarios, to policies for dealing with recurring, predictable, structural problems? If so, can we show that rigorously? If not, why not?

Expand full comment

The best I know of is Garrett Cullity's The Moral Demands of Affluence. He ultimately shows that this "iterative argument" is wrong. (What happens if you see another child moments after saving the first? Surely you can't let them die just because you already saved one. Repeat ad infinitum...)

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Thanks for the tip!

> What happens if you see another child moments after saving the first, surely you can't let them die just because you already saved one. Repeat ad infinitum....

Well, at that point it becomes rather difficult to keep the thought experiment even slightly realistic. (As often happens when you insert the phrase "ad infinitum" into a thought experiment.)

Where is this infinite supply of drowning children coming from? Maybe I am living at a river downstream from a large and densely-populated city, and it happens on a very regular basis that some kid falls in and comes floating past my house. Have I implicitly signed up to become a full-time volunteer lifeguard by living in this house? If I go on a two-week vacation, knowing that statistically there will be a couple of kids falling into the river during that time, am I responsible for their fate? What's up with their parents, are they just telling their kids "go ahead and play near the river if you want, don't worry about falling in, there's a nice guy living downstream who will fish you out for free"? Can I give the city government an ultimatum, saying "look, I'm not going to keep doing this forever, if you don't put up some fencing or something, there will come a time when I'm just going to look the other way"?

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

You don't need an infinite supply of children and the problem isn't that it is a thought experiment. Remember that the thought experiment is deployed as a reason for why you should give money.

Cullity's argument is that if you give $10 there are still children who will die. And surely you can give $11. Eventually you run out of money and live under a bridge but surely giving your last $1 is better than letting a child die? So, if you accept Singer's analogy, everyone should donate every cent above what is needed to support the bare minimum existence. Because it is hard to see how you can draw the line anywhere else if you've already accepted the drowning child metaphor.

Cullity actually goes in way, way more depth. Honestly his book felt more like a mathematical proof than most other philosophy books I've read.

His conclusion is that, no matter how you look at it, this metaphor fails. He actually spends the majority of the book trying to build up the strongest steelman version of this he can before showing it still can't be saved. It was one reason I quite liked the book even though it was a very technical and difficult read. (It is not remotely intended for a popular reading audience.)

Expand full comment

If you want to be a good swimmer, then you must train some number of hours. Perhaps ten hours a week. But if you trained eleven hours a week, couldn't you be an even better swimmer? Surely it would help. So you should train eleven hours a week. Continue until you reach the point where the marginal value of another hour of training is zero or negative, and now you're spending every waking hour, beyond what's physiologically and psychologically necessary, training for more swimming. This would suck. Therefore, if you want to be a good swimmer, you must be living at the bare minimum.

Now replace "good swimmer" with "good person" and "train some amount of hours" with "work some amount of hours to give to charity."

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I've already written about this elsewhere in this thread, but no it's not, precisely because the two situations are non-analogous.

To piggyback on the discussion of Cullity: you don't need a steady supply of children falling from Mars. You just need the same child to be drowning tomorrow.

If your first question isn't "Where the hell are the kid's parents?", it should be.

If a child keeps falling into a river, the correct solution isn't to be pulling it out every time it does - it's to ensure it stops falling into the river.

Somehow I doubt Singer would advocate a return of colonialism (if he did, I might have more respect for his arguments), which is much more likely to be a solution to the problem of Third World poverty than charitable giving by individuals. It might not be the *optimal* solution, given other problems, but at least it addresses the actual issue at hand, which is primarily poor governance.

Expand full comment

As I see it, at this point all the longtermism debate is about resolving the philosophical issues caused by assuming utilitarianism. It's probably a worthwhile idea to explore this, but I don't understand why it is important in practice at all. Isn't the one main idea behind EA to use utilitarianism as much as possible, but avoid the repugnancies by responding to Pascal's muggings with "no thank you, I'm good"? Practical longtermism looks morally consistent. I think it's barely different from EA-before-longtermism. x-risks are very important because we care about future people, but the future people are conditional not only on us surviving but also on us growing as a civilization. The latter is pretty much EA\{x-risks}, so we're just left with finding the optimal resource assignment between survival and growth. I imagine survival has significantly diminishing returns past a certain amount of resources, and even astronomical future-people numbers won't make the expected outcome better.

Expand full comment

The future potential people are really not my problem, nor anything I can solve. We definitely want to avoid nuclear war (which remains the biggest threat), but that's in part because it affects us now. Back in 3000 BC they had their own worries and couldn't be expected to also worry about the much richer people of the future. I get that the future might not be richer if technology slows, but there's little the average guy can do about that.

Expand full comment

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

But you cannot rate the worth of different people's lives on a numerical scale, so the whole thing is nonsense from start to finish.

Expand full comment

I feel the Repugnant Conclusion is fine as a conclusion if it's seen as a dynamic system, not a static one. If there's a trillion people with the *potential* to build something magical in the future, that's probably better than 5 billion 100-utilon people. It's the equivalent of (perhaps) the 17th/18th-century world but much, much bigger (which would help increase progress), compared to a much more stagnant world consisting of only the richer parts of the world in 2040.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I didn't preorder the book, mostly because I suspect I've already internalised everything it says, but also because I don't think the philosophical debate over how much we value the future is as interesting or relevant as the practical details.

Regardless of your moral system, if there are concrete examples of things we can do now to avert disaster or cause long-term benefit, I think people will be in favour of doing them - maybe it's a utilitarian obligation, maybe it's just because it seems like the kind of thing a wise and virtuous person would do. The value of future generations maybe factors in when considering trade-offs compared to focusing on present issues but it's a little ridiculous when all the longtermists end up being mostly concerned with things that are likely to happen soon and would be really bad for everyone currently alive.

"We should do more to address climate change", "we should carefully regulate Artificial Intelligence", and "we should invest in pandemic prevention" are all important ideas worthy of being debated in the present on their own merits (obviously not every idea that's suggested will actually help, or be worth the cost), and I think framing them as longtermist issue that require high-level utilitarianism to care about is actively harmful to the longtermist cause.

The best analogy I have is that the longtermists are in a cabin on a ship trying to convince the rest of the crew that the most important thing is to be concerned about people on the potentially vast number of future voyages, then concluding that the best thing we can do is not run into the rocks on the current voyage. The long-term argument feels a little redundant if we think there's a good chance of running aground very soon.

Expand full comment

A lot of this “moral mugging” (great term btw) logic reminds me of a trick seasoned traders sometimes play against out-of-college hires. They offer them the following game, asking how much they’ll pay to play:

You flip a coin. If it’s heads you win $2. If it’s tails the game ends.

If you won “round 1” we flip again in “round 2.” If it’s heads, you win $4. If it’s tails, the game ends and you collect $2. In round 3, it’s $8 or you collect $4. Continue until you flip tails.

The expected value of this game is infinite: 1/2 * 2 + 1/4 * 4 + 1/8 * 8 …

Junior traders thus agree to offer the senior ones large sums to play and… always lose. Because there isn’t infinite money (certainly the senior trader doesn’t have it) and if you max out the payment at basically any number the game’s “true” expected value is incredibly low.
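To make that concrete, here's a minimal sketch of the capped game's exact expected value (the cap values are arbitrary round numbers, purely for illustration):

def capped_expected_value(cap: float, max_rounds: int = 200) -> float:
    # Payout after k consecutive heads (then the ending tails) is min(2**k, cap),
    # with probability 2**-(k+1). Summing 200 rounds is plenty for convergence.
    ev = 0.0
    for k in range(1, max_rounds + 1):
        ev += min(2.0 ** k, cap) / 2.0 ** (k + 1)
    return ev

for cap in (1e3, 1e6, 1e9):
    print(f"payout capped at ${cap:,.0f}: EV is roughly ${capped_expected_value(cap):.2f}")

# payout capped at $1,000: EV is roughly $5.48
# payout capped at $1,000,000: EV is roughly $10.45
# payout capped at $1,000,000,000: EV is roughly $15.43

Even with a billion-dollar bankroll behind it, the "infinite EV" game is worth about fifteen bucks.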

The connection here is that strict “moral models” are deeply brittle, relying on narrow and unrealistic assumptions about the world while often ignoring the significance of uncertainty. Following them as actual guides to behavior, as opposed to idle thought experiments, always strikes me as ill-advised and, frankly, often creepy, as such models have a tendency to be usable to justify just about anything…

Expand full comment

> You flip a coin. If it’s heads you win $2. If it’s tails the game ends.

>

> If you won “round 1” we flip again in “round 2.” If it’s heads, you win $4. If it’s tails, the game ends and you collect $2. In round 3, it’s $8 or you collect $4. Continue until you flip tails.

Ah, the St Petersburg Paradox. Always a favourite.

The trick, of course, is - to reference a classic sketch - "don't ask how much you can make; ask how much you can lose" (in this case, it would rather be "how likely am I to make back my initial investment?", to which the answer is "not very likely at all", and it gets less likely the more you paid to play).

Expand full comment

If someone wants me to accept some kind of variation on the repugnant conclusion, all they have to do is go out and find me one person with happiness 50 and fifty people with happiness 1 so I can check them out for myself.

This is, of course, impossible. People blithely throw numbers around as if they mean something, but it's not possible to meaningfully define a scale, let alone measure people against it. And even if you manage to dream up a numerical scale it doesn't mean you can start applying mathematical operations like sums or averages to them; it's as meaningless as taking the dot product of salami and Cleveland.

The bizarre thing is that everybody fully admits that obviously you _can't_ go around actually assigning numbers to these things, but then they immediately forget this fact and go back to making arguments that rely on them.

You can't even meaningfully define the zero point of your scale -- the point at which life is _just_ worth living. And if you can't meaningfully define that, then the whole thing blows apart, because imagine you made a small error and accidentally created a hundred trillion people with happiness of -0.01 instead of creating a hundred trillion people of happiness +0.01.
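A toy calculation of that last scenario (the hundred trillion people and the ±0.01 are from the hypothetical above; nothing else is being claimed):

population = 1e14                  # a hundred trillion people
for happiness in (0.01, -0.01):    # a tiny mis-calibration around the zero point
    print(f"per-person happiness {happiness:+.2f} -> total utility {population * happiness:+.2e}")

# per-person happiness +0.01 -> total utility +1.00e+12
# per-person happiness -0.01 -> total utility -1.00e+12

The sign of the entire moral verdict flips on an error far smaller than anything we could hope to measure.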

tldr: ethics based on mathematically meaningless combinations of made-up numbers is stupid and everyone should stop doing it.

Expand full comment

My problem with the Repugnant Conclusion is that its conclusions depend on the worlds it describes being actually possible. There might be certain universal rules that govern all complex systems, including social systems. Although we don't currently know what these could be, I believe they are likely to exist and that the world described in the RC would be thereby forbidden. If this is the case the RC argument is premised on an impossibility, equivalent to starting a mathematical proof with a squared circle, and hence its conclusions have no merit in our reality.

Expand full comment

That's an interesting objection. If there are huge numbers of people right on the edge-- I'm assuming that these mostly miserable people are close to death-- then things going wrong might kill a lot of people, or at least move them below the life-barely-worth-living line.

Or there might be a revolution, and not necessarily one that aims at improving living standards for a lot of people.

Expand full comment

Per the review, the book seems to take as a given that poorer == less happy, but the country comparison data I've seen suggests that's not true, or at best a wild oversimplification. Does the book flesh out this argument?

In the absence of this, the repugnant conclusion's logic seems difficult to map to reality.

I continue to like utilitarianism, philosophically, but no definition of "quals" maps to the rich diversity of preference and experience in reality.

Expand full comment

Scott — I'm not sure male pattern baldness should count as a "medical condition". Is having eyes of different colours a condition? Is red hair a condition? Baldness is just a physical trait. Many people find baldness attractive (attractive enough that shaving your hair even if you're not bald is a thing). Any badness relating to being bald is socially-constructed and contingent, and I don't think it should be talked about in at all the same category as lung malformations.

Expand full comment

I think the real point is that “condition” is a general term, and there’s going to be a borderline case however you try to define it.

Expand full comment
author

This is a good argument for letting people biologically self-modify to alter traits which makes them miserable, regardless of whether they're diseases. Not for calling everything that people might want to self-modify "a disease"! A red-haired person might dye their hair because there's a lot of anti-ginger bullying in their community. That surely doesn't mean you should call red hair a "condition". What if there were a procedure by which a POC could make themselves look white to avoid suffering from racism; would that make high melanin levels a "medical condition" in that social context? I just don't see how pattern baldness is different from either of these.

Expand full comment

Avoiding counterfactual mugging has much the flavor of Luria's Siberian peasants and their refusal to engage in hypothetical reasoning.

Expand full comment

Grug----Midwit----Genius

Expand full comment

The best argument I ever read against the Repugnant Conclusion is the idea of a Parliament of Unborn Souls. It goes like this:

If we imagine the echoes of all possible future humans voting among themselves on whether they want to maximize their chance of being born, vs. sacrificing (lowering) their odds in favour of having a better life if they *do* get born — well, the Repugnant Conclusion would be laughed out of the room. The sum total of all possible future people overwhelmingly exceeds the sum total of all people who could possibly be born in the physical world (taking into account all possible sperm/egg combinations). Voting for the Repugnant Conclusion wouldn't meaningfully increase anybody's chances of being born, while it would drastically lower the expected value of life if they do get lucky.

I guess this is an answer to a version of the R.C. that phrases itself in terms of "potential people have moral value and should get to exist", rather than "bringing people into existence has moral value". But the latter as distinct from the former seems, for lack of a better term, bonkers. (I guess this is just Exhibit ∞ in me being a preferentialist and continuing to feel deep down that pure-utilitarians must be weird moon mutants.)

Expand full comment

“Here's a logical rebuttal to your batshit crazy scheme, but do note that I reserve the right to disavow that logic if you find a way to incorporate it into something worse.”

Expand full comment

That just seems wrong? If someone offers me the chance to change a 1% chance of a million dollars into a 2% chance of $900,000 I’m taking it. And that’s all that’s going on here.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I don't think you're grasping the scopes we're talking about.

Per a quick Google, a single release of sperm can contain up to five hundred million different gametes. Multiply that by the number of pairings of fertile people on Earth at any given time, and it will become obvious just how *vast* the population of individual potential future people really is. The population of the "people who could be alive by 2050" wing of the Parliament of Unborn Souls is counted in trillions.

No amount of pro-population-growth policy could possibly make a meaningful dent in that number by 2050. Or ever, really, because each actual, child-producing act of intercourse still involves 499 million paths-not-taken! Everybody could be going at it like rabbits to the limits of biological possibility, and the chance any individual currently-possible 2050-person has of being born would still be way below one in a billion.

We're not talking about a tradeoff of 1% per 100,000 bucks here. We are talking about Pascalian levels of astronomical. We're talking about a 0.6-in-a-trillion chance being increased to, like… a 0.61-in-a-trillion chance. (Those last numbers are off-the-cuff and don't represent a particular Fermi calculation, but you see my point.)

I do not in fact trade a 1% chance at a million dollars for a 1.00000000000000001% chance at $900,000. Let alone a 1.00000000000000001% chance at ten bucks, which seems more directly analogous to the Repugnant Conclusion.
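For what it's worth, a rough Fermi sketch of that scale, with every input a made-up round number rather than anything measured:

potential_people_by_2050 = 2e18    # assumed: order-of-magnitude count of distinct possible gamete combinations
for births_by_2050 in (4e9, 8e9):  # assumed: status quo vs. an implausibly aggressive pro-natalist scenario
    chance = births_by_2050 / potential_people_by_2050
    print(f"{births_by_2050:.0e} births -> any given potential person's chance of being born: about {chance:.0e}")

# 4e+09 births -> any given potential person's chance of being born: about 2e-09
# 8e+09 births -> any given potential person's chance of being born: about 4e-09

Doubling the number of births barely moves the needle for any individual member of the Parliament.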

Expand full comment

None of the numbers we are talking about here are small enough, to be sure, but what matters is the ratios. We do trade one in a billion chances of dying for one in a million chances of substantially less bad things when regulating air travel.

Expand full comment

Nitpick alert. Aoogah! Aoogah!

You should probably knock back your numbers of potential people by a third or something -- the miscarriage rate -- not that it matters for your argument.

Expand full comment

Just to nitpick: Stalin did not say that "1 million deaths: just a statistic" thing. An unnamed Frenchman said it (allegedly, about 100,000 war deaths) and was quoted by the German leftist/satirical writer Kurt Tucholsky in 1925. Statistics were important to Stalin: when statisticians showed him the population numbers for Ukraine et al. after the famine, he had those numbers classified. And the statisticians executed.

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

> And the statisticians executed.

...because Stalin.

Expand full comment

Sorry if this is well trod territory but I'm no philosopher: Doesn't that Parfit thought experiment about the survival of humanity imply a lot about what his views should be on contraception? If the non-existence of an unborn person is morally equivalent to (or, you know, worth any significant percentage of) murdering a living person, then does he consider abortion murder?

Expand full comment

Two things - if aborting one child when you're not ready lets you have another child when you are ready, then it's not murder, it's saving one rather than the other; and part of the harm of murder isn't just the lack of existence of the person, but is specifically the frustration of the already existing goals they have, which an infant or fetus doesn't have.

Expand full comment

Infant? Do you not think that killing an infant is murder?

Expand full comment

another example of intransitive preferences: currently people have to work unpleasant jobs and any new automation is a great change. But if you keep adding automations, eventually humans don't have any non-artificial struggle at all and at that point it seems kinda pointless to me.

Expand full comment

Before reading the rest of this, I want to register this bit:

> Is it morally good to add five billion more people with slightly less constant excruciating suffering (happiness -90) to hell? No, this is obviously bad,

My intuition straightforwardly disagreed with this on first read! It is a good thing to add five billion more people with slightly less constant excruciating suffering to hell, conditional on hell being the universe you start with. It is not a good thing to add them to non-hell, for instance such as by adding them to the world we currently live in.

Expand full comment

That was my general intuition as well, at least assuming there isn't a resource variable relevant to the equation.

Expand full comment

I don't understand that. If I was one of those 5 billion people born into hell with -90 happiness I would much prefer never to exist and I would curse the person that created me.

What's your intuition for what you would experience? Gratefulness that you're not -100 and glad to exist even though every moment is in extreme pain?

Expand full comment

Longtermism (or even mid-termism) has one huge drawback: the advice giver (the philosopher, activist, or, more importantly, politician) will not be there when the results of the advice can be judged.

Like any future trade-off (suffer now for a brighter future), it is inherently scam-like ('A bird in the hand is worth two in the bush' is not meaningless). It's not necessarily a scam, but it needs to be minutely examined, even for short-term advice: Is the adviser accountable in some way for the results? And more importantly, is the adviser in a special position where he would profit from the proposal sooner or more, unlike the average guy who is asked to suffer in the near term in exchange for a longer-term benefit? If that is the case, and the adviser does not suffer at least as much in the short term as an average advisee, it is a scam.

I did not always think like that, but these last decades have been a great lesson; the whole Western world is soooooo fond of this particular scam, it's everywhere. I guess it taps into a deeply embedded Catholic guilt plus futurism.

Expand full comment

I don't think the charcoal thing is a good argument. We can only get about 1 watt of energy for industry from each square meter devoted to forest land. When you have a population constrained by the amount of agricultural land, and you also need wood for tools, houses, and heating in addition to industry, then being limited to the energy you can get from charcoal for industry is going to essentially prohibit an industrial revolution.

Expand full comment

What does “discovering moral truths” mean? Is the author a moral realist?

Expand full comment

Yes.

Expand full comment

Personally, I find the framework of realism/anti-realism to be very unhelpful. Most things are usually real in some senses but not others, instead of being completely real or completely unreal.

Expand full comment

Ugh. This was not a good "book review".

I've come to the conclusion that all of the book reviews so far are pretty bad because they are all too long. After 1200 (maybe 800) words, the payoff should be increasingly better the longer it gets.

This might be reflective of the entire blogging endeavor; there are no newspaper editors telling writers to shorten and tighten it up because space is limited. As a result, the quality is just not high, and we'd be better off finding already published reviews.

As I vaguely recall, the poet John Ciardi remarked: that which is written without hard work is usually read without pleasure.

I suggest that distillation is hard work that makes writing better. Could the book reviewers start doing some hard work? Redo all of these and give us your best 1000 words.

D+, revise and resubmit.

Expand full comment

This is Scott, I’m pretty sure. No disclaimer at the top.

Expand full comment

It doesn't matter to me who wrote it. The book reviewer, whether Scott or someone else, needs an editor to push her or him to distill.

This is not high quality. This is logorrhea. Parrhesia.

Expand full comment

It reads fine to me. Better than most of the other book reviews.

Expand full comment

I guess in taste there is only dispute.

Do you really think that this review would be accepted for publication in the New York Times, the Washington Post, the WSJ, the Chicago Tribune, The Guardian, USA Today, The New York Review of Books, the London Review of Books, the Chicago Review of Books, or The Chronicle of Higher Education?

There are some interesting points, sort of sprinkled in it like pomegranates in an otherwise unremarkable and way too big salad. But come on; let's step up the game.

Expand full comment

It is far more interesting to me than most of the book reviews I have seen in those publications. I don't really know (or see why I should care) whether it would meet their standards, whatever those standards may be.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I would now kind of like to see some of JDK's reviews of famously long books.

"War and Peace - did Tolstoy never hear of an editor? Clearly accustomed to imposing his aristocratic will upon others. Could have been wrapped up in a novella-length volume. This writer needs to consider 'Would USA Today serialise this?'

The d'Artagnan Romances - Dumas père is one of the worst of the penny-a-liners, milking the initial popularity of his characters with interminable sequels. Once was more than enough. Not even worth a paragraph in the Chicago Tribune.

À la recherche du temps perdu - The length of the title alone gives you an inkling of how tediously Proust babbles on with his sentimental memories of the past. What is it with the French and logorrhoea?

A Dance To The Music Of Time - listen, Powell, if you can't say it all in one book, then shut up. We didn't need 12 volumes to tell your story, and I'm sure the London Review of Books would fillet this with a pertinent Marxist critique of your twee Classicism."

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

If I had any respect remaining for The Chronicle of Higher Education or the London Review of Books (and had I started off with any respect at all for a few others on that list), that might be a trenchant criticism. Alas, I have never wept myself to sleep lamenting "No editor for USA Today would take that piece of mine, due to its inordinate verbosity!"

Since I don't bother reading their chiselled jewel-like aperçus, but I do read and enjoy Scott's words words words, the length of this suits me just fine. If any of his pieces bore me, I stop reading them.

Expand full comment

For the long-form essayists, yes. As in the LRB or perhaps the NYRB. It's too long for a daily paper.

Expand full comment

Scott's book reviews aren't intended as book reviews. They just use the book as a jumping off point to talk about whatever Scott started thinking about while reading the book.

Expand full comment

Do you like Scott Alexander's work generally? He isn't exactly concise, but people like it because it provides room for the interesting analogies and examples.

Expand full comment

He has some interesting things to say from time to time.

It looked like this was part of the book review series. (But apparently it wasn't.) There were some moments in the review, but it wasn't that great of a book review, like most of the reviews that we've been asked to judge.

We ought to be honest (as your moniker* suggests): the problem with blogs is that there are no editors to help elevate the essays.

*NB. I didn't know there was a participant with that name. A curious moniker because it hides your identity which might be the opposite of parrhesia.

Expand full comment

I would definitely like an editor.

Yes, in some sense.

Expand full comment

I think for most bloggers:

Yes, I'd like an editor but no to having to actually follow the editor (because: you're not the boss of me.)

Expand full comment

Where do you write?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Not the OP, but I have usually enjoyed Scott's work in the past (and would have voted for Arabian Nights before learning it was not part of the contest), so I am extra disappointed to learn that he is to blame for this review. Especially as it seems to go against some of the stuff he's written in the past. This review also doesn't feel much like Scott's usual style - not many humorous asides or roman numeral headings, for one.

Expand full comment

A lot of book-based works, in both edited media and on blogs, would be better classed as book reflections than book reviews. And among both reviews and reflections, there is a distinction in style and tone: some are deliberately formal, while others, like this, are more like thinking aloud. That schema gives us at least four potentially distinct reader experiences from anything described as a 'book review'.

I think a lot of us read Scott's thinking-aloud pieces because we have doubts about how we come to believe (with varying degrees of probability) what we believe. Scott tries to make his own thinking explicit, so it serves as a comparison point. Sure, he could use an editor sometimes, but too much editing would defeat the usefulness, from my perspective.

Expand full comment

Why do we assume that the morally neutral level of life is objectively defined? I think standards change and depend upon all sorts of things, including the average quality of life across the population. So suppose we consider the blissful planet, with 5 billion people whose happiness is distributed normally with an average of 100 and a variance of 1. Then by their standards, people with happiness of 80 would be way down from what they consider normal (20 standard deviations below the average they see around them!), and they would not even consider adding those people to the population. So the trick here is letting us, from our present time, decide whether to add those 80-happiness people. But it is like asking a slave on an ancient subsistence farm to decide whether to add some poor people to the modern population. By the slave's standard, having access to cheap fast food and a limited work week would make them really happy, and they would obviously be happy for more people like this to exist, but in modern realities those added people might feel really unhappy, have loads of stress, and keep themselves from suicide only by moral conviction. Similarly, it is the people of heaven who should decide which people should be added to heaven, not us here with our abysmally low standards (compared to theirs).

Expand full comment

Alternatively, an objective platonic zero level of happiness and quality of life might exist, but its true value might be really counter-intuitive (as one ought to expect of any universal constant). So an alternative solution to the repugnant conclusion would be that it is correct, but the zero level is something that we perceive as infinite bliss. Then our current lives are actually deeply negative, and we live on out of a personal desire to live, not because it is objectively appropriate.

Expand full comment
founding

I use a discount rate in considering the future. For financial considerations, a discount rate is composed of (1) a risk free return plus (2) a margin for uncertainty of return.

For purposes of considering the future of humanity, I agree that only (2) makes sense. But (2) can be quite small as a % and still make the present value of affecting the distant future quite small. Not because future people are empirically less important than current people, but because of the uncertainty, or contingency, of how we can affect the far off future from investments today.

This line of reasoning leads me to want to focus my own "altruism" on issues affecting the present.

Expand full comment

This, I think, is the appropriate response.

Expand full comment

Ever since learning how to calculate a NPV I've similarly felt a lot less angst about future quintillions of humans

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

A liberal (generally) might respond that, yes, there is uncertainty, but there is also compellingness, as in the case of climate change. The science might be off (either way), but if we focus on that we ignore the scale of risk to the sustainability of our civilization. Just as in the case where we buy insurance, we need to hedge risk with counter-risk investment when the scale is extreme.

A conservative (generally) might respond that, yes, there is uncertainty, but there is also compellingness, as in the case of national debt. The economics might be off (either way), but if we focus on that we ignore the scale of risk to the sustainability of our civilization. Just as in the case where we buy insurance, we need to hedge risk with counter-risk investment when the scale is extreme.

The point is that regardless of your particular orientation or issues, part of considering how much to invest in the future concerns not just the efficacy of the investment, but its relevance to avoiding catastrophic failure.

Obviously (I think), the farther out in time we aim our longtermism, the more it is rendered irrational by the radical unpredictability of the future. But the better we can model large-scale damage from the actual present to a future time of a scale we are in a position to grasp (as we grasp increasingly large-scale time intervals by growing older), the more likely we are to be able--not to micromanage, but rather--to macromanage future contingent threats.

There is also an interesting hybrid case of present/future linkage. A large meteor could extinguish much/all life on earth at any time beyond the travel time of an object we can spot telescopically this evening. How much should we invest in developing a civilization-saving technology to detect early, and destroy at a distance, objects headed our way? One may not appear for several billion years--or ever, but one could be arriving a year from now. The insurance function of this investment is not tied to any timeline, and the uncertainty is absolute until it is ~100% certain, and this will be true for every future generation; how do we assess discount factor (2)? (A non-trivial added factor is to assume that at some point, the cost of tech development will radically drop, because we'll stumble on it without specifically targeted investment--an issue that applies to climate change, but perhaps not to the deficit, unless you consider MMT a technology and believe that it has reduced the cost of a solution to near zero.)

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

It actually applies well in the present, too. One can downscale altruistic work taking place *right now* on the basis of physical distance, cultural distance, and domain inexperience, all of which create huge uncertainty.

For instance, I know *roughly* who is coming to the church soup kitchen next door to my house. Roughly. I could talk to some people for a long time and know better. But I don't know much at all about water distribution in Sierra Leone, and a few books written by people won't teach me enough to give me confidence in the actions I might take to affect that water distribution.

Expand full comment
author

I think everyone does this, but MacAskill would just call it "the future is worth exactly the same amount as the present, but we're less sure of our ability to affect it".

Expand full comment
founding

I think the second statement contradicts the first, at least as far as current efforts to affect the future go. An uncertainty discount (5%) that halves impact every fifteen years would mean that a life saved/improved today is worth about 40 billion lives in 500 years.
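For what it's worth, that arithmetic checks out; here's a rough sketch in Python (purely illustrative, using the 5% figure above):

```python
import math

# Rough check of the uncertainty-discount arithmetic above (illustrative only).
rate = 0.05  # 5% annual uncertainty discount

# Years for the discounted weight to fall by half: (1 + rate)^t = 2
halving_time = math.log(2) / math.log(1 + rate)
print(f"impact halves roughly every {halving_time:.1f} years")   # ~14.2 years

# Relative weight of a life today vs. a life 500 years out
ratio = (1 + rate) ** 500
print(f"one life today ~ {ratio:.2e} lives in 500 years")        # ~3.9e10, i.e. ~40 billion
```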

Expand full comment

Why are we more than a little bit sure of our ability to affect the present? We have lots of evidence that we get this importantly wrong too. Quite a few of the books you review have that theme, where hindsight reveals that an intervention was importantly wrong.

Personally, I'm always amazed that we get right back in there and do it again with confidence.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I want to preach the benefits of the stable-state analysis heuristic, which was called the Kantian categorical imperative in its previous life:

How would society look if an action preferred by your ethical theory were a universal societal rule?

The other version put forward by Kant, "treat a person always as an end, not merely as a means to an end", is also useful, though I am less certain of Kant's claim that it is essentially derived from the first principle.

I find it a much more productive way to think about ethics. Now instead of just thinking "imagine a world with 5 billion people in Hell; what if we can magically add 5 billion more people", you have to consider the actions needed to get from world A to world B.

The various repugnant conclusions become much more implausible. The basic version suggests that everyone should make as many kids as possible, because more utils experienced is always better. I don't think such a society would be workable, if for no other reason than that there are limits to carrying capacity and the society would eventually collapse. It would also make a society where many other moral imperatives become difficult to follow ("do not knowingly create situations where famines are likely", for instance).

And finally, such a calculus also fails by the second criterion, as it views everyone currently alive at any time point T = "currently" more as incubators for the next generations of utils than as ends in themselves (their own experienced utils become overwhelmed by all the potential utils experienced by N >> (large number) future generations).

Naturally the imperatives cannot be exhaustively calculated, but that is just a sign that ethics is an imperfect tool for human life, not that human life is subservient to a method. Hopefully, the rules can be iteratively refined ("get the British government to buy all slaves free if it is possible"). And I think "imperative calculus" would find it good / necessary to help a drowning child / suffering third-world person as long as the method of helping doesn't become dystopian. (Dystopian utilitarianism would allow for "if you can't swim, throw a fat person who can under a tramcar until someone saves the drowning child". I think one of the salient imperatives is, "as many people as possible should learn to swim, help people, and call others to help".)

Expand full comment
author

Suppose you are a government central planner, trying to decide what plan to have. I think the utilitarian view genuinely matches your problem, whereas there isn't really a good way to fit Kant in.

And to some degree we're all doing the central planner thing. Part of what the abolitionists were doing was imagining what a world without slavery would look like, then charting a course to get there. There's some sense where that might be Kantian (ie if everyone became an abolitionist, the world would be better), and some other sense where it isn't (if everyone freed their slaves, abolition would be achieved, but the abolitionist is actively fighting for abolition and not just freeing their personal slaves).

I wrote a bit more about this at https://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

The glass shards example seems to me not to rely on utilitarian or deontological reasoning. It hinges on the observer's emotive reaction, and the conclusion is that one should never do anything that may have consequences we would feel social regret about. The reason to clean up the glass is that we are thinking ethically, and not to clean it will leave us with doubt and a sense of guilt, whether a child comes along or not. That fits deontology (conforms to our sense of duty), utility (increases our happiness with certainty), intuitionism (just seems wrong to move on without doing it), and emotivism (makes us feel better ethically). Utilitarian reasoning would need to rest not just on a chances-of-benefit calculation, but on a cost-benefit calculation--what is the long-term cost of slowing one's journey to clean the glass, delaying and ultimately precluding some "better" use of the time?

The argument about technology seems to assume that wealth and happiness are linked in some quantitative fashion.

The argument concerning wiping out almost all people vs. wiping out all people (and precluding the birth of further future people) seems based on treating potential people as people, not future people as people. If we owe a debt to other people by virtue of being social beings, we should consider part of that debt due to people who are actual, except not yet born. But people who will never be born are not future people. To treat them as equally due a debt is a step even beyond absolutist pro-lifers, who consider the person a human being from the moment of physical conception--now these non-people are due full ethical standing from the point of intellectual conception.

The argument about immigration and culture change seems to me to make no sense. There is no reason to think that changing culture leads to less happiness, or is a negative good on other grounds, even if those who resist are called names. The fact that it causes temporary social discomfort doesn't mean it is not superseded by long-term net social good (which is how our national narrative treats the immigrant wave of the late-19th and early 20th century -- whether it's "accurate" or not, it is certainly a plausible interpretation).

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

My reaction to the Broken Bottle is (1) why is that child running around barefoot outside? what are the parents doing to let them do that, where there are rocks and thorns and all kinds of things that can injure them? (2) if you know a barefoot child is going to come running up the path behind you, it's easier to stop the child *before* they come to the broken glass so you have more time to clear it up or tell the child to go put on shoes, for pete's sake, don't they know there is broken glass on all these trails?

The problem with trying to guilt people about "but that barefoot child will cut themselves on the broken glass" is I didn't break the glass, kids do a lot of stupid things that injure themselves anyway, and I don't feel obliged to risk injuring myself on the glass if I didn't break it in the first place. EDIT: Okay, in the thought experiment, I have a glass bottle which I drop so it breaks. But in the real world, I certainly did not colonise Africa or extract industrial resources by clear-cutting or strip-mining.

I think in the real world, most people would try to clear off broken glass on a trail or the street so that people won't step on it, but that doesn't necessarily mean picking up all the glass - just kicking it to the side might be good enough. And if the glass isn't immediately in the way, leave it alone or for the people whose job it is to clean the streets to deal with it.

Expand full comment

Alternatively, I believe people who consistently walk outside barefoot build up strong calluses. I'm not sure if that's enough to protect against small pieces of broken glass.

Expand full comment

Small, yes. I've definitely gotten bits stuck in my heel after dropping a jar in the kitchen that I didn't notice until they later snagged on a sock.

Expand full comment

Is the repugnant conclusion just the paradox of the heap? Is there a version with a less vague predicate than happiness?

Expand full comment

The Sophists proved via reductio ad absurdum that philosophy is useless, mere rhetorical word tricks, to obfuscate the truth, which is why they were vilified by Plato and his ilk. Don’t feel bad about disagreeing with philosophers. Whatever was of value was extracted in the form of mathematics or science a long time ago.

As for abolitionism, Bartolomé de las Casas (1484-1566) was way ahead of any Quaker or Quakerism itself.

Expand full comment

And slavery had ended in Europe even earlier!

Expand full comment
Aug 23, 2022·edited Aug 24, 2022

In the Politics, Aristotle explicitly references an abolitionist movement in Ancient Greece and attempts to refute some of their arguments. So the basic idea has been around for a very long time, but it took a while to start gaining traction.

Expand full comment

Huh, I was unaware of that.

Expand full comment

This is not really to disagree with you, but as an addition to your point: I would think that Gregory of Nyssa (ca. 335-395) also predates any Quaker, as well as predating Bartolomé de las Casas.

Gregory is mostly known for raising the issue of slavery as an offense to the dignity of individuals who carry the 'Imago Dei'. His sermon is, to my knowledge, the first Christian sermon against slavery as an institution. It is hard to prove whether all later abolitionists took inspiration from Gregory; it ought to be easy to show that they agreed with Gregory in one form or another.

The troublesome part of this: the existence of slavery in the ancient world, and its slow disappearance in Europe, followed by its re-introduction, is a messy history.

Gregory of Nyssa, Bartolome de las Casas, and our Quaker friend all count as heroes of the movement to abolish slavery. It's hard to pick a single hero who began the push towards abolition.

Expand full comment

Agreed. I was reacting to what I perceived as anglocentrism. Las Casas is well-known (heck, there were movies made about him, like *La Controverse de Valladolid* with the always excellent Jean-Pierre Marielle playing him).

Universalism in human rights is probably as ancient as humanity. Alcidamas of Elis, a student of Gorgias in the 4th Century BC is quoted by Aristotle (Rhetoric, 1373b) as saying "God has left all men free; nature has made no man a slave". Aristotle no doubt thought Alcidamas a simpleton (not to mention one of the hated Sophists who exposed philosophy for what it is, a series of fraudulent rhetorical tricks devoid of value or truth), since he himself wrote: "But is there any one thus intended by nature to be a slave, and for whom such a condition is expedient and right, or rather is not all slavery a violation of nature? There is no difficulty in answering this question, on grounds both of reason and of fact. For that some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule." (Politics, I, 5)

And then there is the less than edifying case of Euthyphro, where Socrates berates the eponymous character for daring to bring his father to court for killing a mere slave. While Socrates, Plato and his band of fellow disgruntled aristocrats heap scorn on Euthyphro, he at least seems to have believed a slave had rights, even one who had murdered another one.

Expand full comment

> I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

As soon as I read this, I had to jump down here to remind everyone about Timeless Decision Theory and Newcomb's problem: https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality

Expand full comment

My problem with the Newcomb problem is that I think the reason it's hard is that it is posed to beings who live in our universe, but its setting is in a different universe. If our species had evolved to survive in a universe in which Omega was possible, presumably our thoughts would be guided by a different form of reason suited to a universe that is irrational by the standards of this one. Since I don't think there is such a universe, though, I don't think the problem can be solved in a way that reason (our brand of it) can decisively confirm.

Expand full comment
author

How do you think TDT relates here?

Expand full comment

Well, more the Newcomb's paradox post from the sequences than that specifically. The main argument for 2-boxers is that once Omega has left, whether or not the million dollars is in the box is no longer dependent on your choices, so you should 2-box since otherwise you'd be leaving money on the table. But, since Omega is a perfect oracle, they always end up with just the thousand.

IMHO the reason the 2-boxers are missing out is that they're not really *getting* that a perfect oracle will simulate them perfectly, *including during the time period after Omega has left the boxes*. Omega, being a perfect oracle, can predict if you'd be the kind of person who will 2-box, which *necessarily includes* the scenario where you make yourself into the kind of person that will 1-box until right after Omega has left, at which point you would switch to 2-boxing.

This is why the 2-boxers always miss out on the million: any attempt to get that extra box is an automatic failure. Period. If you want to justify leaving the second box, you can tell yourself "Sometimes you have to spend money to make money, and sometimes that cost is paid with *potential* money".

The argument that is being used to justify having the giant population with near-but-not-quite suicidal living conditions is like the argument for 2-boxing, but iterated until you're just a millionaire while all the 1-boxers are billionaires: yeah this relatively small, but extremely happy, population is great, but you could do a bit better by having this slightly larger but less happy population instead.

TDT says you can just 1-box if you think that will get you the most utility given what you've seen and know. Similarly, if we know that we'll have a morally repugnant outcome down the road by continually trying to go for the slightly better option, we can just decide to not do that, and be happy with what we have.

Expand full comment

I believe in The Copenhagen Interpretation of Ethics which says that when you interact with a problem in any way, you can be blamed for it.

That's why I feel responsible to prevent utilitarians inflicting harm on children in my presence, but I'm indifferent about whatever happens 100 years from now.

Après moi, le déluge.

Expand full comment

<joke>

Now that I've seen you post this message I precommit to harm children until you accept longtermism as your ethics.

</joke>

Expand full comment

Pascal is mugging kids now.

Expand full comment

Worth linking to the original coining of the term:

http://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/

Expand full comment

Honestly, Copenhagen Ethics is the only approach that doesn't descend into gibbering insanity almost instantly.

If you assume that you are morally responsible for things that you did not interact with, then your primary moral obligation is to establish what moral obligations you *do* have. Never mind "what are you doing reading Scott, rather than looking for starving orphans you ought to be feeding?", even if you're feeding starving orphans *right now*, how do you know that there isn't an even more starving orphan just around the corner who will die unless you tend to them immediately, while the starving orphan you're feeding right now can wait?

It's like an infinite set of tracks with infinite trolleys. How do you even know which one you ought to try to stop first? (Since you don't have infinite hands.)

Of course, all the time you're deciding which of the countless extant problems places the greatest moral obligations on you, you're not actually solving *any* problems whatsoever.

Expand full comment

Deontological ethics does not entail the Copenhagen Interpretation.

Expand full comment

That depends on how you formulate your imperatives, doesn't it?

Technically, if an imperative were to produce a contradiction when you try to apply it, it cannot be categorical.

However, given that practical consequentialism shakes out to bastardised deontology (rule utilitarianism), I wouldn't say deontology qua "ethics of duty" is obviously immune to the sort of problems above.

Expand full comment

You said Copenhagen is the ONLY approach that doesn't descend into gibbering insanity. But there are many possible deontological ethics which don't entail Copenhagen.

Expand full comment

You link to an article on counterfactual mugging, but what you describe here is not counterfactual mugging at all. Counterfactual mugging is when someone flips a coin, and on heads rewards you if you would have paid a penalty on tails.

Expand full comment
author

See the paragraph where I link to it, where I say "There’s a moral-philosophy-adjacent thought experiment called the Counterfactual Mugging. It doesn’t feature in What We Owe The Future. But I think about it a lot, because every interaction with moral philosophers feels like a counterfactual mugging. "

Expand full comment

Hm, it looks like you're saying a "counterfactual mugging" is a thing that feels like being led through a sound argument for an absurd conclusion. That doesn't sound right to me. I must be missing something.

Expand full comment

*>tfw forget a book review is written by Scott, not part of Book Review Contest

*I can't decide if it would be incredible or insufferable if Scott hired a publicist. So-And-So, potential author of hypothetical books, currently beta-testing in blog format.

*Octopus factory farming: Sad! Doesn't even taste good compared to (brainless idiot pest species) squid. And that's without factoring in the potential sentience, which really makes my stomach churn on the few occasions I do eat it begrudgingly...

*The Broken Bottle Hypothetical is weird...I feel happy and near-scrupulosity-level-compelled to clean up my own messes. But I harbour a deep resentment for cleaning up the messes of others. It just seems to go against every model of behavioral incentives I have...at some point, "leading by example" becomes "sucker doing others' dirtywork". (Besides that - who *wouldn't* pick up their own litter when out in the wilds? I've never understood that mindset...one doesn't have to be a hippie to have a little basic respect for nature. Also, Real Campers Use Metal, among other things to avoid this exact scenario.)

Like I get the direction the thought experiment is intended to go...but many "broken bottle" behaviours have intrinsic benefits in the here-and-now. Cooking with a gas stove or driving with a gas car are pretty high-utility for the user, even if deleterious on the future. What's the NPV of not picking up broken glass? (Yes, probably making too much hay out of nitpicking a specific example.)

Expand full comment

I'm glad to see Scott share this, even though many in the EA community are uncomfortable criticizing EA in public (I myself am a victim of this - I declined to rate WWOTF on Goodreads for fear of harming the book's reach).

Simply put, WWOTF is philosophically weak. This would be understandable if the book were aimed at influencing the general public, but for the reasons Scott mentions in this post, WWOTF doesn't offer any actionable takeaways different from default EA positions... and certainly won't appeal to the general public.

The problem with all this is that WWOTF's public relations campaign is enormously costly. I don't mean all the money spent on promoting the book, but rather, WWOTF is eating all the positive reputational capital EA accumulated over the last decade.

This was it. This was EA's coming-out party. There will not be another positive PR campaign like this.

The problem with this is that the older conception of EA is something most public intellectuals/news readers think very highly of.

Unfortunately, the version of EA that MacAskill puts forwards is perceived as noxious to most people (see this review for context: https://jabberwocking.com/ea/ - there are tons like it).

It seems like WWOTF's release and promotion doesn't accomplish anything helpful while causing meaningful reputational harm.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

>If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity,

The philosophers have gotten ahead of you on that one. Surprised you haven't already read it, actually.

https://www.iffs.se/media/2264/an-impossibility-theorem-for-welfarist-axiologies-in-ep-2000.pdf

It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence").

This proof probably has something to do with why those 29 philosophers said the Repugnant Conclusion shouldn't be grounds to disqualify a moral accounting - it is known that no coherent system of utilitarian ethics avoids all unintuitive results, and the RC is one of the more palatable candidates (this is where the "it's not actually as bad as it looks, because by definition low positive welfare is still a life actually worth living, and also in reality people of 0.001 welfare eat more than 1/1000 as much as people of 1 welfare so the result of applying RC-logic in the real world isn't infinitesimal individual welfare" arguments come in).

(Also, the most obvious eye-pecking of "making kids that are below average is wrong" is "if everyone follows this, the human race goes extinct, as for any non-empty population of real people there will be someone below average who shouldn't have been born". You also get the Sadistic Conclusion, because you assigned a non-infinitesimal negative value to creating people with positive welfare.)

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

[disclaimer: I'm dumb and I don't really know anything]

Thanks for this review, a nice summary on some of the core points of EA and long termism.

One question troubles me regarding the claim that "the most important thing ever is your choice of career. You should aim to do the maximum good, preferably by earning to give / becoming an influential leader to change policy / becoming a top AI specialist to solve alignment / etc."

These guidelines are explicitly aimed at talented people. I remember 80kh being very open about this in the past; it seems that somewhere along the line they've altered their front page material on it. But obviously these points mostly concern talented people. Most people will not become scientists, high level engineers, leaders or influential activists.

Where does this leave normal people? What should most people do with their time? "Well duh, that which they can best do to advance the greatest good ever." Ok, but what is that for, say, a normie who can learn a profession, but whose profession is relatively boring and doesn't have anything to do with any of the aforementioned noble goals? What is the greatest utility for a person who is ill-equipped even to cognitively grasp long-termism properly? Or for a person who does get the point, but who has no business becoming [an influential effective altruist ideal]? And so on.

Lacking an answer (granted, I haven't spent a very long time looking for one), for the time being the advice to look for the most insanely profitable, successful, extremely bestest way to increase the number of people alive to [a very high number] seems to me lopsided in favor of very talented people, while simply ignoring most people everywhere. In making EA go mainstream, this might matter - maybe?

Expand full comment

You can always send some of the money you earned to effective charities.

Expand full comment

That's a good idea. It's also completely different from what 80 000 Hours claims. Is it that for some people, the choice of career doesn't matter so much after all? Who are these people and how do I know if I'm one of them?

It seems very important to know whether I should

1) immediately begin a desperate attempt to turn my life around in order to maximize a perhaps nearly (but not completely) trivial chance of becoming a leader focused on maximizing future utility (remembering that if a million relatively ordinary people do the same, some of them might succeed, making it worthwhile for everyone to have tried even if their lives were miserable) or if it's enough to

2) make some money, prioritize my mental health and donate some of that money relatively effectively.

Expand full comment

True. But as far as I can tell, any of these options are better than the baseline. And of course you can find the middle ground between the two.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Again, yes - I'm in complete agreement.

Also, that is in complete opposition to 80 000 Hours' perspective:

"If you’re trying to tackle a problem, it’s vital to look for the very best solutions in an area, rather than those that are just ‘good.’

This contrasts with the common attitude that what matters is trying to ‘make a difference,’ or that ‘every little bit helps.’ If some solutions achieve 100 times more per year of effort, then it really matters that we try to find those that make the most difference to the problem."

https://80000hours.org/articles/solutions/#what-do-these-findings-imply

Scott himself has written on how "[if] you try to be good, if you don’t let yourself fiddle with your statistical intuitions until they give you the results you want, sometimes you end up with weird or scary results."

https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

To me that seems like at least Scott is taking the 80 000 Hours' view very seriously. However, I'm struggling to even come to terms with the vastness of what is being said, as I've likely made clear. The bottom line is that I should maybe, likely, sacrifice everything in order to have an infinitesimal chance of influencing something I don't understand in a positive way, and that if I and my family become miserable in the process, that's a rounding error and doesn't exactly count.

Obviously I'm not going to do that. However, knowing that people as smart and as kind as [great EA minds like Scott] think that [people like me] should, kind of feels like crap.

I don't like feeling like crap. So I'm trying to see that maybe there's a way to work around the problem, such as that the given guidelines are excessively lopsided in favor of very talented people (who should obviously and immediately sacrifice everything in order to maximize future utility, and stop doing anything besides that).

I don't think I'm taking this too literally. I'm worried that I and most people are not taking it literally enough.

Expand full comment

Have we considered that there is a middle ground between "future people matter as much as current people" and "future people don't matter at all"? If you want numbers you can use a function that discounts the value the further in the future it is, just like we do for money or simulations, to account for uncertainty.

I imagine people would argue over what the right discount function should be, but this seems better than the alternative. It also lets us factor in the extent to which we are in a better position to find solutions for our near term problems than for far-future problems.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Mostly current mainstream ideologies and actions seem to discount the distant future so hard as to nearly equal "discount to zero". So that seems to be the baseline against which longtermists argue.

Intuitively I'm in favor of discounting due to the wild difficulties in prediction (just how likely is it that policies XYZ lead to high numbers of sentient beings with at least marginally positive utility?). I'm not opposed to trying to estimate stuff, just saying that it's very hard.

However, discounting the future merely for being the future seems crazy. If we know for a certainty that everyone will die in a meteor strike in 2000 years, I don't think we should discount that.

And yet, if we are very serious about immediately working our hardest to maximize the future, we should (I dunno, maybe?) very quickly start making a lot of babies and training them the best we can to achieve the best goals we have in mind. I don't see a lot of longtermists doing that either, although it seems to me a heck of a lot more certain way to start maximizing future utility than writing Harry Potter fiction. (I apologize for the tone)

Expand full comment

I think the problem with this approach is that even a massive discount still leads to the same conclusions, given a large enough set of potential future people. If we're comparing to a future with 100 nonillion people, even a moral discount of 0.000000000001 leaves us in a mugging scenario.
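To put rough numbers on that (a tiny illustrative sketch; 100 nonillion is the Virgo Cluster figure from the review, and the discount factor is the one suggested above):

```python
# Illustrative arithmetic for the "mugging" worry above.
future_people = 1e32      # ~100 nonillion potential future people
moral_discount = 1e-12    # the tiny per-person weight suggested above
present_people = 8e9      # roughly everyone alive today

print(f"{future_people * moral_discount:.0e}")   # 1e+20
print(f"{present_people:.0e}")                   # 8e+09 -- still utterly dwarfed
```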

Personally, I think people trying to plan for thousands of years in the future need some epistemic humility. If they looked at a population of people in 1890 and asked them to guess what their children and grandchildren would be doing, not only would the 1890 people be wildly wrong, but the future-people asking the question would be laughing at them for how inaccurate they were. Predicting world wars, flight, cars, and dozens of other world-shaking changes is simply not possible. We lack the tools to see far enough ahead to make any kind of real predictions. Just look at the 2024 presidential prediction markets. That's two years away, and there's a ton of uncertainty! Who thinks they can predict 1,000 years from now with enough certainty that they can choose a path and try to force people down it against their will, for the greater good of unknown potential future people? They worry about AI and then blissfully do the very things they would be concerned an AI would do - predict a future that we can't see and force current people to suffer for it.

Expand full comment

You can exponentially discount more for every next generation?

Expand full comment

Sure, but to what end? Also, based on what principle? If our goal is for the numbers that result to be intuitively accurate to us, then we can skip all those steps and just go ahead and assign a value to whatever future generation we want. The problem is not in finding the *correct* number, but in realizing that there is no possible way to identify such a number or check if it's correct.

Expand full comment

I think it has something to do with the confidence I can have that people in the next generation will exist and that I can successfully predict their utility from my actions. And yes, in the end it's about our intuitions, but also about consistency between them.

Expand full comment

Yes, there is value in pushing our intuitions to see where they do or do not make sense. The need for epistemic humility comes into play when we're looking at mathematical adjustments to how we value the XXXXth generation out in an attempt to avoid the "mugging" scenario. We can only get mugged when we take our math seriously. We need to realize that the math comes from our near-term intuitions, pushed to absurd levels. There's no solid basis for our near-term intuitions, so why should we give any credence to the math that results from them?

Expand full comment

> We can only get mugged when we take seriously our math.

Also when we are not lobotomized. I think there are less drastic ways to escape mugging.

What we call mugging is the situation where the projected consequences of one intuition contradict another intuition. It means either one of the intuitions is wrong and should be discarded, or we are using the wrong mathematical model to extrapolate the consequences and need to look for a better one. No need to be this dramatic.

Expand full comment

I am not sure if I understood the Repugnant Conclusion thing correctly. Is the setting that we are given two alternative universes: 1 with a small population of very happy individuals, and 1 with a very large population of not so happy individuals? And is the issue that most people would rather ACTUALLY LIVE in the first universe, because then they would be happier themselves?

I can also imagine something about scope neglect, I guess. A large population may be very valuable, and each of those people is unique and special, with their own friends and families, hopes, dreams, etc. But intuitively it sure feels like the difference between 1,000,000 and 10,000,000 people isn't so big; after all, it's more people than I could ever imagine interacting with.

Expand full comment

The Repugnant Conclusion boils down to one thing: if you assume that the only thing that matters is maximising the total utility of the population, then "mere addition" is the simplest and surest way of achieving that. As long as you're adding a person whose life is even a little bit worth living, you've got more utility in your population than if you hadn't done so.

This is true whether you actively reduce the utility/happiness of the existing population or not, so a population consisting overwhelmingly of poor and unhappy (but not so unhappy as to kill themselves) people, with a small minority of very happy people, is - under mere addition - strictly preferable to one that consists solely of the very happy people, simply because the former population is larger.

In practice, the addition of more people is likely to reduce the utility/happiness of the existing population (resources are limited, and - if nothing else - the surrounding sea of abject misery will get to all but the hardest hearts), so the inevitable endpoint - as we continue down this road - is a massive population of terribly unhappy people.

Needless to say, the idea of the ideal world state being masses upon masses of nigh-suicidal people goes against every moral intuition in the majority of the population, which is why the Conclusion is Repugnant.

Expand full comment

Thanks for this explanation! I can see how that would be pretty repugnant for sure although it is counterintuitive to me to consider every piece of utility equal. For near-suicidal hypothetical persons, a +1 would seem to be highly valuable whereas to an already very happy individual a similar welfare increase would not make such a significant difference. For this reason it would be perhaps better to maintain a high average welfare with relatively small spread (variance) rather than just adding near-suicidal individuals. Although thinking of utility as similar to money could be a mistake.

I have never understood how utilitarians can agree on how to measure utility, so it would seem to me like you could just choose a utility definition which would make it so that every time you split n people's worth of utility over n+1 individuals it doesn't actually change that much. Or, to say it more precisely: if you decide that U=0 occurs at a point which is a fairly good life to live, then it makes little difference if you redistribute c/n so that you only have c/(n+1) left. It seems to me like it depends very much on where you put your zero point.

Expand full comment

> For this reason it would be perhaps better to maintain a high average welfare with relatively small spread (variance) rather than just adding near-suicidal individuals.

This approach is exactly how we get to the RC.

Look at Scott's diagrams again. World A consists entirely of maximally happy people. We might add to it a number of happy, but not quite as happy, individuals and get World B. Surely, this world is, at the least, no worse than World A? Everyone who was maximally happy in World A is still around and happy as ever; we've merely added some people who are still very happy, just not maximally happy. There are more happy people in World B than A, so we might even say that World B is better than A?

Okay, now let's split the difference between the happiest and least happy people in World B. We thus get world C, where everyone is, on average, happier than in World B, and the total amount of happiness (assuming happiness is something we can measure) is also higher. Surely world C is better than World B?

World C also contains almost double the happiness that World A does (950 > 500), so it looks like it's better, doesn't it? Only problem is: given a choice of living in World A, where you get to be as happy as you can possibly be, and World C, where you still get to be happy, but not quite as happy, world A looks to be the better choice.

Of course, we don't have to stop there. We can designate our World C as World A', and repeat the exercise, following which we call our new World C' World A" and continue until our ultimate world is filled with a mass of near-suicidal individuals.

At each step of the sequence, the next world will seem "better" than the preceding one, for some value of "better", but it's hard to argue that any of them would be actually better to live in than the original World A (at least without introducing a bunch of additional stipulations).
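Here's a toy version of that iteration (the starting numbers and the "add a less-happy group, then equalize slightly above the new average" rule are my own invented assumptions, just to show the direction of travel):

```python
# Toy iteration of the A -> B -> C move. All numbers are invented for illustration.
pop, happiness = 5_000_000_000, 100.0

for step in range(1, 11):
    pop *= 2                                 # "mere addition": double the population
    avg = (happiness + 0.6 * happiness) / 2  # newcomers arrive at 60% of current happiness
    happiness = 1.05 * avg                   # equalize slightly above the new average
    print(f"step {step}: pop={pop:.1e}, per-person={happiness:.1f}, total={pop * happiness:.2e}")

# Total "happiness" rises every step while per-person happiness slides
# toward the barely-worth-living floor -- the Repugnant Conclusion in miniature.
```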

Expand full comment

I notice that as soon as we start treating future people as already existing, calculations become messy. Be it anthropic reasoning, which assumes that we are randomly selected from all humans that have ever lived or will ever live, or moral reasoning, which passes the buck of utility to future generations.

I can clearly point to where the error is in such anthropic reasoning. I'm less certain what's wrong with total utilitarianism. There should be some discounting based on the probability of future humans existing, but it's not just that. I guess it just doesn't fit my moral intuition?

Imagine a situation where I know that all my descendants for the next n generations will have terrible lives. Let's say there is some problem which can't be fixed for many years to come. But I also know that at some moment humanity will fix this problem, and thus starting from generation n+1, my descendants will have happy lives. Am I thus morally obliged to create as many descendants as possible? Are my descendants of generation k facing an even harder situation: if they decide not to breed, are they retroactively making me and their relatives from the previous k-1 generations terrible people? Eventually, whatever disutility was accumulated over the n generations of suffering would be outweighed by the utility of generation n+1 and further generations. But what's the point? Why not just let people without this problem reproduce and have happiness in all the generations to come?

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Am I the only one who thinks B is clearly better than A with regard to nuclear war? More or less the same technological development with 10% of the population, so great potential for growth and fewer zero-sum games?

Expand full comment
founding

B->C seems like the more sensible place to get off the repugnant conclusion train than A->B, since that’s the step that actually involves making (some) people worse off.

In your immigration analogy, that corresponds to letting immigrants in but not changing society to accommodate them, which seems much better than not letting immigrants in at all.

Expand full comment

I have a couple of thoughts and I'm not sure which is more likely to start a fight.

1. A sufficiently creative philosopher can construct an ironclad argument for pretty much any conclusion, and which of them you choose is down to your personal aesthetic preferences.

2. The reason abolition of slavery came so late was that for most of human history, being a slave wasn't that bad, relative to being pretty much any other person. Industrialization turned slavery into a practice too reprehensible to survive. Even Aristotle would have looked at the Antebellum South and said hey, that's kinda fucked.

Expand full comment

#2: Rather, the opposite. For most of history the advantage of slavery was that you didn't pay to raise the person, and could make them work without giving them enough food to reproduce themselves. The New World was the one instance where slaves were in a non-Malthusian environment enabling an expanded population. Even at that time, there were places like Caribbean sugar plantations with a high death rate needing replenishment, but the US banned the international slave trade as soon as constitutionally permitted and still had a positive growth rate afterward.

Expand full comment

Depending on what kind of slave you were, sure. A Greek physician-slave probably had it a lot better than public slaves sent off to be worked to death in the Roman mines. And "it's not rape because I have a right to fuck my female and boy slaves" wasn't so great for those on the receiving end. I can't track down where I read it, but an anecdote about some Famous Ancient Philosopher/Scholar of the Classical World who wrote about the formation of the embryo in human pregnancy did so due to observing aborted human foetuses. His sister owned slaves who were professional performers at banquets (musicians and dancers, and expected to provide sexual services to the clients as well) and one of them became pregnant in the line of work. This was a cause of trouble to the sister, since a pregnant slave couldn't work. So her scholar-brother advised her to use a method of bringing about abortion he had read of, by getting the pregnant slave to exercise vigorously. She miscarried, and was freed up to perform (and sexually service) at banquets again, and he got to do some early medical studies.

'Not that bad relative to being pretty much any other person' is indeed very relative.

Expand full comment

But the Romans enslaved people of all races, so that makes it okay.

Expand full comment

The issue I always have with ultralarge-potential-future utilitarian arguments is that the Carter Catastrophe argument can be made the same way from the same premises, and that that argument says that the probability of this ultralarge future is proportionately ultrasmall.

Imagine two black boxes (and this will sound very familiar to anyone who has read *Manifold: Time*). Put one red marble in each of Box A and Box B. Then, put nine black marbles in Box A and nine hundred ninety-nine black marbles in Box B. Then, shuffle the boxes around so that you don't know which is which, pick a box, and start drawing out marbles at random. And then suppose that the third marble you get is the red marble, after two black ones.

If you were asked, with that information and nothing else, whether the box in front of you was Box A or Box B, you'd probably say 'Box A'. Sure, it's possible to pull the red marble out from 999 black ones after just three tries. It *could* happen. But it's a lot less likely than pulling it out from a box with just 9 black marbles.

The biggest projected future mentioned in this book is the one where humanity colonizes the entire Virgo Cluster, and has a total population of 100 nonillion over the course of its entire history. By comparison, roughly 100 billion human beings have ever lived. If the Virgo Cluster future is in fact our actual future, then only 1 thousand billion billionth of all the humans across history have been born yet. But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against. The larger the proposed future, the earlier in its history we'd have to be, and the less likely we would declare that a priori.

If every human who ever lived or ever will live said "I am not in the first 0.01% of humans to be born", 99.99% of them would be right. If we're going by Bayesian reasoning, that's an awfully strong prior to overcome.
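For the marble version, the Bayesian update is easy to spell out (a small sketch; the 50/50 prior over boxes is part of the setup as described):

```python
from fractions import Fraction

# Posterior over boxes after the red marble shows up on draw 3 (two black, then red).
# With draws in uniformly random order, P(red lands exactly on draw k) = 1 / (total marbles).
prior_A = prior_B = Fraction(1, 2)
like_A = Fraction(1, 10)     # Box A: 1 red + 9 black
like_B = Fraction(1, 1000)   # Box B: 1 red + 999 black

post_A = prior_A * like_A / (prior_A * like_A + prior_B * like_B)
print(post_A)                # 100/101 -- about 99% that it's the small box
```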

Expand full comment

Heh I was just comparing the weirdness of longtermism reasoning to the weirdness of anthropics reasoning.

The assumption that you were randomly selected with equal probability from all humans who have ever existed or will ever exist doesn't seem to correspond to the way our universe, with causality running from past to present to future, works. No surprise it leads to weird conclusions.

Expand full comment

Yeh. I have no probability of ever being born in the 19th or 21st century. And a small chance of being born in the 20th. Which happened.

Expand full comment

As minor as quibbles can be, but:

"each new person is happy to exist but doesn’t make anyone else worse off."

Is there a reason this is a "but" instead of an "and"? As if people being happy usually make others worse off?

Expand full comment

A lot of people have zero sum intuitions, such that any additional person is by default a drain on others.

Expand full comment

I've linked to Huemer's In Defence of Repugnance in the comments to another post, but it's so on-topic it makes sense to do so here:

https://philpapers.org/archive/HUEIDO.pdf

As I noted there, Huemer is not a utilitarian but instead a follower of Thomas Reid's philosophy of "common sense".

There really doesn't seem to be any reason to believe it's "neutral to slightly bad to create new people whose lives are positive but below average", which would cause the birth of a child to become bad if some utility monster became extremely happy.

Expand full comment

This is a good article. I found it persuasive when I read it previously. Thank you for the link.

Expand full comment

A post wherein Scott outs himself as one of those people who choose only box B in Newcomb's paradox.

Expand full comment

Maybe the whole post is an elaborate plot to trick GPT-5

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I haven't seriously struggled with repugnant conclusion style arguments before (Mostly I've decided to ignore them to avoid the aforementioned mugging effect), so what I'm about to write is probably old hat. Still, I'd like to hear people's thoughts.

What if you have the following options:

A) 5 billion people, today, at 100% happiness, then the universe ends

B) 10 billion people, today, at 95% happiness, then the universe ends

C) 5 billion people, today, at 97% happiness followed by another 5 billion people at 97% happiness 50 years later, then the universe ends

I think most people would agree that option C is better than option B. If we're thinking in bizarre, long-termist terms anyway, there is likely some sustainable equilibrium level of population such that you can generate 100% happiness for an arbitrary number of person-years. You just might have to have fewer people and wait more years. So let's... do that, instead of mugging ourselves into a Malthusian hellscape.
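Even on a crude total-utility count the preference for C over B holds (a trivial sketch, treating the happiness percentage as a per-person utility weight):

```python
# Total person-happiness for the three options above (illustrative units).
A = 5e9 * 1.00               # 5.0e9
B = 10e9 * 0.95              # 9.5e9
C = 5e9 * 0.97 + 5e9 * 0.97  # 9.7e9 -- beats B without crowding anyone
print(A, B, C)
```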

If you object that the lifetime of the universe is finite, and so the number of person-years in the above scenario is not arbitrarily high, I would respond with something along the lines of "Yeah, sure, but if humanity survives until the heat death of the universe, I'm pretty sure the people alive at that time won't be bummed out that we didn't maximize humanity's total utility. They won't be cursing their ancestors for not having more children. It's not like they'd decide that maximizing total utility was the meaning of life and we fucked it up all along."

Expand full comment

- There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.

Warning: rambling.

Most of the value in the modern economy does not come from extracting resources, but rather from turning those resources into more valuable things. The raw materials for an iPhone are worth ~$1, whereas the product has 1000x that value. There is probably a limit to how much value we can get out of a single atom, but I think we can still get a better multiplier than 1000x!

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Sorry for the nit-picking, but the below doesn't follow from the link:

"Octopi seem unusually smart and thoughtful for animals, some people have just barely started factory farming them in horrible painful ways, and probably there aren’t enough entrenched interests here to resist an effort to stop this."

Link just says "there are no released standards for how the octopuses are going to be kept and raised, nor for how they will be slaughtered."

Now maybe the Spanish octopus farmers will do horrible, Snidely Whiplash moustache-twirling, evil octopus farming. Or maybe they will be constrained under EU animal welfare standards. It's no skin off my nose either way, because I've never eaten octopus and have no intention of ever doing so. But this is what is annoying: trying to force us to accept the conclusion that *of course* it will be 'horrible painful ways' because eating meat (do octopi count as fish?) is evil and wicked and immoral, and factory farming is evil and wicked and immoral, and fish farming is factory farming hence is evil and wicked and immoral.

I don't know how smart octopi are, they seem to be smart in some way, and probably smarter than a cow (a low bar). But here's the thing: I am not yet convinced eating octopi is morally evil. And I know dang well that it's not just the octopus farming this campaign would like to stop, it's fishing for wild octopus and eating them at all.

Let's wait and see if the wicked, bull-fighting, blood-thirsty Spaniards *are* going to torture sweet, cute, innocent, smart, octopi to death before we start calling for the war crimes tribunal, hmmm?

EDIT: And if the "scientists and conservationists" are so outraged about the intelligent octopi, then surely Ms. Tonkins should quit her job at Bristol Aquarium, rather than being complicit in the enslavement of these intelligent and sentient beings? Did any of the octopi consent to being captured and imprisoned in tanks for humans to gawk at? Liberate all those incarcerated octopi into the wild and take the beam out of your own eye first!

Also, how moral are octopi themselves, given the fear that "if there was more than one octopus in a tank - experts say they could start to eat each other"? That seems to mean that the greatest threat to an octopus is another octopus, not a human.

Expand full comment

If you support the notion of impartiality and accept the concept of intelligence explosions, doesn't this take the oomph out of human-centric long-termism?

Aren't there almost certainly other life forms in the universe that will experience intelligence explosions, making whatever happens in our story irrelevant?

Who cares if we can't interact with the regions of space where they are located, as long as they are experiencing lots of positive utils?

Expand full comment

> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”. If you want to complain, you can find me in World A, along with my 4,999,999,999 blissfully happy friends.

The philosopher Massimo Pigliucci on the Rationally Speaking Podcast did something like this once when he was confronted with the vegan troll bit about bestiality. You're against bestiality right? Because it's bad to sexually assault animals? Well, if you think that's bad, then you must definitely be against eating them.

He retorted that he just didn't feel it necessary to be morally consistent. 🤯

Expand full comment

Have people just started using the word "troll" to mean "someone who expresses an opinion that I disagree with"? This is the second time today that I have noticed someone using the word "troll" like that.

Expand full comment

I'm vegan myself, so at least I'm not using it to express disagreement.

Expand full comment

> Is this just Pascalian reasoning, where you name a prize so big that it overwhelms any potential discussion of how likely it is that you can really get the prize? MacAskill carefully avoids doing this explicitly, so much so that he (unconvincingly) denies being a utilitarian at all. Is he doing it implicitly? I think he would make an argument something like Gregory Lewis’ Most Small Probabilities Aren’t Pascalian. This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them. It’s not automatically Pascalian reasoning every time you’re dealing with a high-stakes situation!

Whenever I hear things like "What We Owe The Future" and "dectillions of future humans", I think "ah, the future is a utility monster that we mere 7 billion humans should sacrifice everything to".

The utility monster is a critique of utilitarianism.

Suppose everyone gets about one unit of pleasure from resource unit X. But there exists a person who gets ten billion units of pleasure from unit X. As a utilitarian you should give everything to that person, because it would optimize global pleasure.

In this case, the future is the utility monster because there are so many potential humans to pleasure with existence. Spending any resources on ourselves instead of the future is squandering them. We are the 1%. But actually we are the 0.000001%

Expand full comment

I'm not convinced about this math. How long does humanity as a species have left before we're not talking about humans anymore? Left to genetic drift, we have maybe 2-5 million years. Well short of 500 million years. Maybe we could fight against that with genetic tampering, but probably we'd go the other direction and shorten the lifespan of the species by a few million years. Certainly we wouldn't expect humanity to last the full 500 million years.

"But what if we achieve relativistic travel? Couldn't we then travel among the stars and extend humanity's lifespan artificially?"

This may solve the nominal problem without changing the stakes at all, but it would likely make things worse. If humanity were to use relativistic travel to help extend the Earth-reckoned lifespan of the species, it's not getting more babies from the exchange, because by definition the experience is relative. Those spacefarers don't get an extra ten million years to have children. Instead, they get a bunch of adverse selection conditions never before experienced by humans, in a harsh environment with little margin for error. The perfect recipe for a rapid series of selection events.

In other words, if we send our children to the stars, it's likely that by the time they get back they won't find humans on Earth. But that's fine, because by then the people piloting the returning ships won't be human either. It'll be two different alien species with a common ancestor, meeting on a planet populated by new flora and fauna. The far-flung future isn't just different from the present, it's alien.

Expand full comment

What concerns me about this concept, at least as it has been presented by my peers who are into long-termism, is the accuracy of their predictions. Your actions now have some moral consequence down the line. My question is: how accurate are your predictions, spanning long into the future, that your very rational utilitarian decisions will actually lead to positive outcomes and not negative ones? We are pretty darn bad at even near-term predictions (see Michael Huemer on the experts and predictions problem), so making an explicit commitment to live your life in some particular way because you are confident about how your life will impact humanity and the universe eons into the future just seems silly. In fact, it seems worse than silly: it seems like a load of hubris that is just as likely to be harmful down the line as good, but we will all be dead and no one can call you on it when the consequences occur. Conversely, we are all alive now and have to hear how very moral and virtuous long-termism is today from its practitioners.

Expand full comment

This is your regular reminder that nuclear weapons are not an existential risk and never have been, nuclear winter is mostly made up, and we have the technology to build missile defense systems that would make the results of a nuclear war much less bad (although still bad enough that people will want to avoid having one).

https://www.navalgazing.net/Nuclear-Weapon-Destructiveness

https://www.navalgazing.net/Nuclear-Winter

Expand full comment

Just a few paragraphs in, and I'm thinking to myself "Thank you for reading and reviewing this book, so now I need not waste my time on it." That, in itself, raises this review several positions in the ranking of reviews so far!

Expand full comment

I still think that naively adding hedons - or utils - or whatever you call them nowadays is not the right approach.

Thought experiment: let's say that you are pretty happy, and worth 80 "happiness". Now I participate in an experiment where I'm put to sleep, get cloned n times, and my clones and I are put in identical rooms where we can each enjoy a book of our choosing after waking up. Under classic utilitarianism, the experiment has created 80*n "happiness". Which sounds wrong to me: as long as my clones and I are identical, no happiness has really been created; identical clones have no additional moral value. Generalizing this, additions of happiness should be discounted for similarity with other existing individuals.

Expand full comment

It seems like that would lead to some really weird conclusions.

1. If we make a decision using this moral calculus, we may think we're doing the right thing and making someone happy. But then, if we later discover that there is a clone of them somewhere in the world, or even in some alternate universe, then the moral content of the action has suddenly changed, despite no details about the event itself changing. If we allow "non-local" considerations to affect our moral judgments, then we can do other weird things, like make a clone of someone after murdering them and nullify the wrongness of the act.

2. If there are two identical clones, which one matters and which one doesn't?

3. By corollary, torturing a googol clones of someone is exactly as bad as torturing a single one of them, even if they don't know about each other's existence. If you were being tortured, would you accept "don't worry, there's a clone of you living a happy life somewhere in the world" as a valid moral justification?

Expand full comment

I still don't find the repugnant conclusion repugnant, or even surprising. Either a certain level of existence is better than nonexistence, or it isn't. If it's better, let's get more existence!

I think a lot of people have two thresholds in mind: there's the level of existence at which point it's worth creating a new life, and there's a separate, lower one at which point it's worth ending an existing life. But then it's just treating existing lives differently from potential ones.

The biggest objection, to me, is one I never see people raise, and that's the obligation to have more kids. I only have one, and might have a second, but I easily could have 4 by now, and could probably support much more than that at a reasonably high standard of living, so if I really buy the repugnant conclusion, I should be doing that. But I don't, so update your priors accordingly.

Expand full comment

Thanks so much for highlighting my interview of him Scott!

Expand full comment

With regard to the Repugnant Conclusion, I think that one way out is that the weighting of factors determining the utility is somewhat arbitrary, so one can move the zero line to what one considers an acceptable standard of living.

Suppose I assign -1000 for the lack of access to any of: clean water, adequate food or housing, education, recreation, nature, potential for fulfillment, etc. Now adding people with about zero net utils does not seem too bad. In fact, not adding them just to preserve a few utils for the preexisting population would feel wrong -- like hypothetical billionaires (or Asimov's Solarians) preferring to keep giant estates which could otherwise be suburban districts providing decent living for millions.

What life is considered worth living is very dependent on the society. I gather that ancient Mesopotamians probably did not consider either freedom of speech or antibiotics essential, given that they had (to my knowledge) neither concept. For most people living in the Middle Ages, Famine, War, and Pestilence were immutable facts of life along with Death. From a modern Western point of view, at least two of the horsemen are clearly unacceptable and we work hard to fight the third one. EY's Super Happy People would consider a life containing any involuntary suffering to be morally abhorrent. Perhaps after we fix death, only supervillains would even contemplate creating a sentient being doomed to die.

Of course, this also seems to contradict "Can we solve this by saying that it’s not morally good to create new happy people unless their lives are above a certain quality threshold? No."

--

Also, I get a strong vibe of "Arguments? You can prove anything with arguments." ( https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/ ) here from Scott with regard to philosophical muggings.

--

Finally, in long-term thinking, extinction is hardly the worst case. The worst case would be that, due to value misalignment, some future being would turn the inner part of the light cone going from Sol, 2022 CE into sufferonium -- turning the reachable universe into sentient beings which have negative utility according to our values.

Expand full comment

If we take that util grading system to its logical conclusion, then people with clean water, adequate food, adequate housing, education, and potential for fulfillment, but no access to nature would have lives with negative utility, i.e. lives that are not worth living at all.

Expand full comment

“As far as anyone can tell, the first abolitionist was Benjamin Lay (1682 - 1759), a hunchbacked Quaker dwarf who lived in a cave. He convinced some of his fellow Quakers...”

Now this is just not true. Slavery was largely abolished in mediaeval Europe, and often by Catholics. And the invaders of Britain, the Normans, ended it there. However, the Normans are looked at with hostility, as is Catholicism, in Anglo historiography.

Expand full comment

There are three things that grate on me in this review (or maybe in the book as well; I have yet to read the book). All three have to do with exponentials.

1. The hockey stick chart with world economic growth does not prove that we live in an exceptional time. Indeed, if you take a chart of a simple exponential function y=exp(A*x) between 0 and T, then for any T you can find a value of A such that the chart looks just like that. And yet there is nothing special about that or another value of T.

2. I do not see why economic growth is limited by the number of atoms in the universe. It looks to me similar to thinking in 1800 that economic growth is limited by the number of horses. We are already well past the time when most economic value was generated by tons of steel and megawatts of electricity. Most (90%) of book value in the S&P 500 is already intangible, i.e. not coming from any physical objects but from abstract things such as ideas and knowledge. I do not see why the quantity of ideas or their value relative to other ideas would be limited by the number of atoms in the universe. If anything, I could see an argument that there is a growth limit at the number of sets consisting of such atoms, which is much larger (it is 2^[number of atoms]) and, at our paltry rates of economic growth, is large enough to last us until the heat death of the universe.

3. All these pictures with figures of future people are relevant only in the absence of discounting, aka the value of time. I do not know if the book ignores this issue, but you do not mention it at all in the review. Any calculation comparing payoffs at different times has to make these payoffs somehow commensurate. That's a pretty basic feature of any financial analysis, and I am not sure why it would be absent in utility analysis. When we are comparing a benefit of $10 in 10 years' time to a current cost of $1, it makes no sense to simply take the difference $10-$1. We should multiply the benefit by at least the inflation discount factor exp(-[inflation rate]*10). If we have an option to invest $1 today in some stocks, we should additionally multiply by exp(-[real equity growth rate]*10). When our ability to predict the future results of our actions decays with the time horizon, we should add another exponential factor. This kind of discounting removes a lot of paradoxes and also kills a lot of long-termist conclusions. This argument gets a bit fuzzier if we deal with utilities and not with actual money, but if the annual increase of uncertainty is higher than the annual population growth rate, then the utility of all future generations is actually finite even for an infinite number of exponentially growing generations. So not all small probabilities are Pascalian, but ones deriving from events far in the future definitely are! I do not know if this is discussed in the book, but any long-termism discussion seems to be pretty pointless without it.
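For concreteness, here is a minimal sketch of the arithmetic in point 3; the 3% inflation, 4% real equity growth, and 2% prediction-decay rates are illustrative assumptions, not figures from the book or the review:

```python
import math

# Illustrative, assumed annual rates (not from the book or the review)
inflation = 0.03            # inflation discount
real_equity_growth = 0.04   # opportunity cost of not investing the $1 today
prediction_decay = 0.02     # how fast our ability to predict outcomes decays
years = 10

benefit_future = 10.0  # $10 received in 10 years
cost_today = 1.0       # $1 spent today

# Naive comparison ignores the value of time entirely
naive_net = benefit_future - cost_today

# Discounted comparison: deflate the future benefit by each exponential factor
discount = math.exp(-(inflation + real_equity_growth + prediction_decay) * years)
discounted_net = benefit_future * discount - cost_today

print(naive_net)       # 9.0
print(discounted_net)  # ~3.07: still positive, but far less impressive
```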

Expand full comment

For why not to use discount rates in utility analysis, see https://www.lesswrong.com/posts/AvJeJw52NL9y7RJDJ/against-discount-rates

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Thank you very much for this link. I think Eliezer here is a bit confused about several things. The main argument in favor of discounting is that it is a more realistic way to look at our intertemporal decisions.

Take the first example from that post: "The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me". If we have 1) money in 2008 to provide clean water to 100 families, 2) a reasonably safe investment that will pay off $105 in a year for each $100 invested, and 3) a way to fix the cost of clean water provision for a year at 2008 prices, then we have at least two possible strategies. Strategy A is to provide water to 100 families in 2008 and Strategy B is to provide water to 105 families in 2009. Which of these strategies is better is a complex question which cannot be answered without many additional inputs, but this is not the point. The point of discounting is to realise that we do not have a choice between providing clean water to 100 families in 2008 or to 100 families in 2009, as Eliezer seems to think; the choice is between 100 families in 2008 and 105 families in 2009. Once we realise that, we can fume as much as we want about those 5 extra families that are left without clean water because somebody does not understand discounting.

Whether and which discounting should be applied to the number of lives or torture is much less clear cut and I think this question should be left until we are clear about aggregation of utilities and have a satisfactory resolution to paradoxes like the Repugnant Conclusion, so this is an entirely separate question.

A third question is the author "advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management". This is not about comparing lives today to lives in 100 years, but about efficient management of resources: if our efforts to combat e.g. climate change are successful but slow down economic growth by 0.1% a year for the next 100 years, the loss for humanity will be larger than the upper end of IPCC projections for the cost of global warming. Maybe we should still fight the warming for fairness/diversity/other reasons, but I do not think that one can have a productive discussion about this kind of policy if one does not understand this kind of arithmetic, and this is again a form of discounting.

Expand full comment

I don't think Eliezer is opposed to this kind of discounting: "... as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments"

The type of discounting he finds objectionable is that which says that a person 4,490 years from now is literally worth a googol times less moral consideration than a person now (which would also imply that a person born 4,490 years ago has a googol times more moral value than we do now).

Expand full comment

I think his first example clearly shows that he is not really distinguishing between purely monetary benefit discounting, as with the clean water supply, and lives/utilons discounting. These discounts have a different nature, as the first one is clearly driven by simple financial engineering considerations while the second one is not. In his first example Eliezer is clearly opposed to opportunity-cost considerations that would demonstrate that the value of a single family's water supply in a year's time is less than the value of an immediate single family's water supply.

There is at least one reason to apply a discount in the second situation, and that is what in finance is called "risk adjustment". Even if we put the same value on a human life in 4,490 years as on a human life right now, their values, preferences, and our impact on their environment are much more uncertain, and this uncertainty grows with time. One way to take this uncertainty into account is to apply a discount rate. It is a somewhat crude way, and one can debate whether it should behave exponentially, but it is surely better than not applying any adjustment at all and assuming that we perfectly understand the 4,490-year consequences of our actions.

And, by the way, this risk adjustment argument does not work backwards in time, as there is no uncertainty about the past. Also, without time travel we cannot have any impact on the life of a guy who died 4,490 years ago, so the question of his value is pretty pointless.

Expand full comment

I'm still not really understanding why we should discount the value of future people.

One hypothetical is "we can provide water to 100 families in 2008, or to 100 families in 2009" and the implication is that we should not favor the 2008 families over the 2009 families, because there's no reason their utils are inherently more important.

You pointed out that a more realistic example might be "we can provide water to 100 families in 2008, or to 105 families in 2009." If anything, this seems to be all the more reason to support longtermism, because by investing now we may be able to reap greater benefit later on. You seem to think this is a reason for discounting the value of those 105 families, but I don't understand why they would matter any less, or why they would matter precisely 5% less. It seems more like an ad hoc solution for "I want to find some way to value 100 people today the same as 105 people a year from now, so I'll choose a discount rate of 5% so they even out."

Expand full comment

My argument is not really about the right way to compare the benefits, but about comparing the expenses. We may value the benefit of each next year family the same as each this year family, or may be half as much, or may be double as much, but the same amount of money will still deliver either 100 immediate benefits or 105 deferred. Once we recognise that the choice is between 100 this year vs 105 next year and not 100 vs 100, we can then make a better informed decision, taking into account the drought conditions this year and next, excess mortality caused by bad water this year, uncertainty over whether these families will still live in the same place next year, and many other factors. The decision may end up favouring next year 105 wells over this year 100 wells or some mixture, or 100 this year wells.

For simple binary "this year vs next year" choices it does not matter if we make the adjustment on the benefits or on the expenses. However, real life decisions usually involve complicated schedules of expenses and benefits. Discounting was invented to make consistent comparisons between such schedules. Without discounting we may end up with expense/benefit schedules that can be improved by simple deferral+investing.

I think that generally discounting works against "long-termism" because for long term problems one of the choices is between "spend money today to provide benefit to people living 50 years in the future" vs "invest money in the stock market today and let people in 50 years time decide for themselves how they want to spend it". Discounting shifts the choice somewhat towards the latter, but the altruistic long-termism we are discussing typically favours the former. Imagine if EA movement decided to focus on the latter option - it would have all the fun and philosophical panache of a pension fund board! :)

Expand full comment

Your comment about slavery going away seems to be false, in that there are credible estimates that there are more slaves today than ever:

https://www.nydailynews.com/news/world/slaves-time-human-history-article-1.3506975

Expand full comment

For a good introduction to population ethics (surveying the major options), see: https://www.utilitarianism.net/population-ethics

One thing worth flagging is that MacAskill's book neglects the possibility of parity (or "value blur", as we call it in the section on Critical Range theories, above), which can help block some of the more extreme philosophical arguments (though, as we note, there's no way to capture every common intuition here).

Expand full comment

This is a great article. I recommend. I also recommend Chappell's blog Good Thoughts. Seriously, if you like this stuff, you'll like his blog.

Expand full comment

I'm pretty sure most ACX readers would agree that the fact that humans cannot psychologically comprehend the differences between very large numbers causes a lot of unnecessary suffering. Therefore, I find it very confusing and epistemically tenuous that the repugnant conclusion, which involves human intuitions with respect to exceptionally large numbers that we know are completely unreliable, is used to reject principles like "more flourishing is good" and "less suffering is bad".

Expand full comment

Now, not nitpicking: Erik Hoel has his fine take on the book out. https://erikhoel.substack.com/p/we-owe-the-future-but-why?utm_source=substack&utm_medium=email He offers some help - i.e. arguments - against the 'mugging' ;) - not just flatly refusing the "repugnant conclusion" (as Scott seems to do). In the comment section at Hoel's I liked Mark Baker's comment a lot: "The fundamental error in utilitarianism, and in EA it seems from your description of it, is that it conflates suffering with evil. Suffering is not evil. Suffering is an inherent feature of life. Suffering is information. Without suffering we would all die very quickly, probably by forgetting to eat.

Causing suffering is evil, because it is cruelty.

Ignoring preventable suffering is evil because it is indifference.

But setting yourself up to run the world and dictate how everyone else should live because you believe that you have the calculus to mathematically minimize suffering is also evil because it is tyranny.

Holocausts are evil because they are cruel. Stubbed toes are not evil because they are information. (Put your shoes on!)" - end of quote -

If you read Scott's post first, good for you: Hoel writes less about the book or about how the "repugnant conclusion" is reached. But he had a long, strong post "versus utilitarianism" just last week, so his review is more of a follow-up.

I really do like a lot about EA, and strongly dislike "IA". But I agree with Hoel: "All to say: while in-practice the EA movement gets up to a lot of good and generally promotes good causes, its leaders should stop flirting with sophomoric literalisms like “If human civilization were destroyed but replaced by AIs there would be more of them so the human genocide would be a bad thing only if the wrong values got locked-in.” - end of quote

Expand full comment

Nice review. Definitely some interesting thoughts.

If you recall, I thought that your population article was mistaken because it wasn't accurately weighing potential people. [1] You replied (which I appreciate) to say that you reject the Repugnant Conclusion. You said "I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox." I wrote an article responding to the article, and critiqued possible scaling functions [2].

"If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. This sort of implies that very poor people shouldn’t have kids, but I’m happy to shrug this off by saying it’s a very minor sin and the joy that the child brings the parents more than compensates for the harm against abstract utility. This series of commitments feels basically right to me and I think it prevents muggings."

Some implications of this view:

1. If no people existed, the average would be 0. In which case, you would have the Repugnant Conclusion again.

2. If we set the average value given existing people, it's better to create 1 ever-so-slightly above average person, than tons of ever-so-slightly below average people even if they fully believe their lives are good and worth living.

3. Since the critical value is a function rather than fixed, it will change with the present population. This means that someone whose creation was evaluated as good could later be evaluated as bad without any aspect of their life changing. While creating a human in 1600 could be regarded as morally good then, it's likely that tons of those lives were below average by 2022 standards. This seems to create odd conclusions similar to asking the child their age after they were cut by the broken bottle.

4. The goodness or badness of having a child is heavily dependent on the existence of "persons" on other planets. If these persons have incredibly good lives, it might be immoral to have any humans. If these persons have incredibly bad lives, it might result in something like the repugnant conclusion, because they are below 0 and drag the average down to almost zero if they are numerous enough. If you consider animals "persons", then you could argue they suffer so much and are so numerous that the average is below zero.

5. It would be better (but not good) to introduce millions of tormented people into the world rather than a sufficiently larger number of slightly below average people.

6. Imagine we had population A with 101% average utility and a very large population B with 200% average utility which changes the average to 110%. One population is created 1 second before the other. If A comes first, then B, it's good to have A. If B comes first, then A, it's bad to have A. The mere 1 second delay creates a very different decision, but practically the exact same world. This seems odd from a perspective where only the consequences matter.

[1] https://astralcodexten.substack.com/p/slightly-against-underpopulation/comment/8159506

[2] https://parrhesia.substack.com/p/in-favor-of-underpopulation-worries

Expand full comment

> 1. If no people existed, the average would be 0. In which case, you would have the Repugnant Conclusion again.

Pedantry: if no people existed, total utility would be 0, but average would not, in the same way that 0/0 doesn’t equal 0 and the present king of France isn’t 0 feet tall.

Expand full comment

Fair enough. This makes decision making with 0 people unclear.

Expand full comment

>> Suppose that some catastrophe “merely” kills 99% of humans. Could the rest survive and rebuild civilization? MacAskill thinks yes, partly because of the indomitable human spirit

Oh no. [survivorship bias airplane.jpg]

Expand full comment

Another scenario:

Suppose god offers you the option to flip a coin. If it comes up heads, the future contains N times as many people as the counterfactual future where you don't flip the coin. (Average happiness remains the same.) If it comes up tails, humanity goes extinct this year. A total utilitarian expectation maximizer would have to flip the coin for any value of N over 2. But I think it is very bad to flip the coin for almost any value of N.

Professional gamblers like me act so as to maximize the expectation of logarithm of their bankroll, because this is how you avoid going broke and maximize the long term growth rate of your bankroll. The Kelly criterion is derived from logarithmic utility.

Would it make any sense to use a logarithmic utility function in population ethics? This could:

1. Avoid the extinction coinflip mugging

2. Avoid the repugnant conclusion, because there aren't enough atoms in our lightcone to make into people for the logarithm of the population, multiplied by a low average happiness, to beat the utility of 5 billion happy people.

On the downside it implies you should kill half the population if it will make the remaining people modestly happier.
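A minimal sketch of the coin-flip decision under the two utility functions discussed above; the baseline population and the multiplier N are made-up illustrative numbers, and average happiness is held constant so it drops out:

```python
import math

# Illustrative assumptions (not from the book): a 5-billion-person baseline future,
# and a population multiplier N if the coin lands heads.
baseline_pop = 5e9
N = 100

# Total utilitarianism: utility proportional to population size.
linear_no_flip = baseline_pop
linear_flip = 0.5 * (N * baseline_pop) + 0.5 * 0  # tails = extinction = 0 utility
print(linear_flip > linear_no_flip)  # True for any N > 2: take the bet

# Logarithmic (Kelly-style) utility of population size.
log_no_flip = math.log(baseline_pop)
log_flip = 0.5 * math.log(N * baseline_pop) + 0.5 * float("-inf")  # log(0) -> -infinity
print(log_flip > log_no_flip)  # False: never take the bet
```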

Expand full comment
Aug 23, 2022·edited Aug 24, 2022

This is a very good point. Any concave utility function creates risk aversion. It is reasonable for people to have concave utility functions, and most people who have thought about their utility functions realised that they have pretty concave ones, like your log-utility one. When combined with zillions of future lives, this can easily lead us to conclusions opposite to long-termism. If (1) we indeed live at the time of a critical junction, and if (2) we are a little bit uncertain about the long-term effects of our actions, and if (3) we are sufficiently risk averse, then we need to do as little as possible to affect the lives of future generations.

Expand full comment

I think your reasoning is incorrect because taking long-termist actions doesn't necessarily increase the variance in the set of possible futures. It could just shift the entire distribution upward or downward. It could even decrease volatility, if you're working on X-risk mitigation.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

That is true; maybe what we do decreases the variance. Or maybe it does not. If our understanding of the future is very poor, we are adding another uncorrelated variable to the existing one - this increases the variance. And if our understanding of the future is a bit better but we manage to mobilise serious resources for our goals, we are adding a variable with a small negative correlation but large variance - this might also increase the variance. To "just shift the distribution upwards" you need to believe in a deterministic long-term impact of your actions. This is 1) clearly unrealistic, and 2) people who think it is deterministic are very dangerous and are definitely increasing the variance long term.

Expand full comment

Scott, I would imagine that you - like me - are deeply dissatisfied with simply walking away from the moralist who has made an argument for why you should get your eyes pecked out. It seems to me like you’re essentially saying “you fool! Your abstract rules of logic don’t bind my actions!” - and with this statement the entire rationalist endeavor to build a society that privileges logical argument goes out the window.

Is that a fair summary, or is there a deeper justification in the article I missed?

I’ll take a stab at providing one: the common conception of morality encompasses many different systems, and these sorts of arguments confuse them.

System 1: moral intuitions. These can be understood as a cognate of disgust; they are essentially emotional responses that tell us “you can’t do this, it’s beyond the pale”.

System 2: modeling and reasoning about system 1 (moral intuitions). This is the domain of psychology, and involves experiments to figure out exactly what triggers moral intuitions.

System 3: systemic morality. The attempt to construct rules for action that avoid triggering moral intuitions, and that perhaps maximally trigger some sort of inverse emotion (moral righteousness? Mathematical elegance?). This is the realm of philosophers, with arguments about deontology and utilitarianism. "Mathematics of morality"

The fundamental problem of systemic morality is that our moral intuitions are too complex to model with a logical system. This is pitted against our strong desire to create such a system for many reasons - for its elegance, its righteousness, and for the foundation of society that it could be if it existed.

To bring this idea into focus, imagine another philosophical mugging - but this time plausible. You’ve just left an ice cream shop with your children when a philosopher jumps out of a bush and tells you “I have an argument that will make you hand over your ice cream to me.” You of course object - you’ve just paid for it, and it looks so good - but he says a few words and you hand it over.

What did he say? He walked you through the statistics on contaminants in cream, sugar, and the berries that were likely used to make your ice cream. Then he went into the statistics on worker hygiene and workplace cleanliness, as well as the violation the ice cream shop received two years ago. When he started talking about the health problems caused by sugar and saturated fats you suddenly found you weren’t excited about the ice cream anymore and you handed it over.

Does this mean people shouldn’t eat ice cream? Yeah, it kinda does. But it doesn’t pose any serious philosophical problems for us because we’re not foolish enough to try to systemize our disgust triggers into systems of behavior that we should follow. We can simply recognize the countervailing forces within ourselves, say “I am manifold”, and move on.

I’m not advocating that we should stop trying to systematize our moral intuitions and make them legible within society. Rather I think we should stop expecting these moral systems to work at the extreme margins. They’re deliberately-oversimplified models of something that is extremely complex. We can note where they break down (I.e. diverge from the ground truth of our intuitions) and avoid using them in those situations.

Expand full comment

Could you please elaborate, as I'm not sure which you're advocating: 1) that we avoid using our moral intuitions in contexts such as, say, extremely large numbers (and just accept the intuitively horrible results the math gives us), or 2) that we avoid using logically construed systems of morality in contexts such as, say, extremely large numbers (and then try to work around the moral questions in some other manner)?

Expand full comment

Good question! If we treat moral intuitions and systemic morality as separate systems that can return different results, which one should we go with in which context?

I don’t know. We should probably favor systems in public spheres (e.g. governments) and intuitions in private spheres (e.g. friendships), but not absolutely.

My main point was just that the philosopher was privileging the systemic view and ignoring the right to exert a moral judgment over the result of the system, which I think is essentially the right Scott asserted in refusing to play the philosopher’s game.

Expand full comment

Oh yeah, 100% agreed. I got a little scared for a moment there.

Expand full comment

I am a bit skeptical about the well-definedness of GDP across the gulf of millennia. How do you inflation-adjust between economies so different? I assume that you pick some principal trade goods existing in both economies (e.g. grain) as a baseline. Grain (or the like) was a big deal in the economy in 1 CE and is today (Ukraine notwithstanding) not a big deal in the grand scheme of things in the western world: yearly grain production on the order of 2e9 metric tons, times 211 US$ per ton, equals some 5e11 US$, about 6/1000 of the world GDP of 84e12 US$.

In ancient times, the median day-wage worker may have earned enough grain to keep them alive for a day or two. Today, by spending 10% of the median US income, you could take a bath in 80kg of fresh grain every other day if you were so inclined.

In fact, we should be able to push our GDP advantage over the Roman Empire much further by just spending a few percent of our GDP to subsidize grain or flood the market with cheap low-quality iron nobody wants. Probably a good thing that we do not have intertemporal trade.

Thus, I am not particularly concerned about GDP being limited by the number of atoms in our light cone (which only grows quadratically). A flagship phone from 2022 worth 800 US$ does not contain more atoms (rare earth elements and the like) than a flagship phone from 2017 worth perhaps 150 US$. The fact that a phone built 100 years from now (if that trend continued) might be worth more than our present global GDP (if we established value equivalence using a series of phone generations) does not bother me, nor does the fact that a phone built in 3022 CE might surpass our GDP by 10^whatever. Arbitrary quantities grow at arbitrary speed, film at 11.

Expand full comment

William Nordhaus has done some worthwhile research on GDP-like measures across millennia. One that I liked focused on the cost in labor-hours of a unit of artificial light. This can be assessed from prehistoric times to the present.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I agree with your points about the difficulty of comparing GDP across very different economic conditions, and also about GDP not being limited by the number of atoms. However, I think that your argument about grain prices misses an important aspect. You are valuing grain supply at the marginal peace-time price. Demand for basic things such as grain, electricity, or oil is very inelastic. Consumers derive massive utility from consumption of these things, but get to keep the surplus when prices are normally low. So a large drop in production of any of these basic staples brings a massive disutility which is not upper bounded by the market prices. It is not even upper bounded by post-spike market prices, as these prices may be observed at the wrong points of broken supply chains. So I disagree that grain is not a big deal in the grand scheme of things - it is not, until it is. Oxygen that we breathe is actually free for us all to consume, but the disutility and GDP consequences of fully removing oxygen from the atmosphere would be quite horrendous.

Incidentally, this confusion between utility and marginal price was an important argument in favor of sanctions against Russia and is probably contributing a lot to the current inflation burst in many countries.

Expand full comment

> When we build nuclear waste repositories, we try to build ones that won’t crack in ten thousand years and give our distant descendants weird cancers.

I realize this is seriously discussed by experts, but I'm wondering how it makes sense. It seems like if nuclear waste lasts ten thousand years then it must have a very long half life, so it can't be very radioactive at all?

There's gotta be a flaw in this argument, but I don't know enough about radioactivity to say what it is.

Expand full comment

Thousands of years is not a long half-life. Plutonium-239 has a half-life of about 24,000 years, but you would definitely not want to be exposed to large amounts of it.

Expand full comment

If I understand correctly, the (overly simplistic) version of the Repugnant Conclusion works like this:

Define utility function U = N * H, where N is number of people and H is happiness. Calculate U for a world A with 1 trillion people with happiness 1 (A = 10^12 people*happiness), and a world B with 1 billion people with happiness 100 (B = 10^11 people*happiness). This leads to the conclusion that an overcrowded, unhappy world is better than a less crowded happy one (A > B), the “Repugnant Conclusion.” Thus, we must either throw out the axioms of utilitarianism or accept the slum world.

This seems like a terrible argument to me, especially this part: "MacAskill concludes that there’s no solution besides agreeing to create as many people as possible even though they will all have happiness 0.001." Why is the utility function linear? This "proof" relies on linearity in N and H, which is NOT axiomatic.

You could easily come to a much less repugnant conclusion by defining something nonlinear. For example, let’s say we want utility to still be linear in happiness but penalize overcrowding. Define U = H * N * exp(-N^2/C), where C is some constant. Now the utility function has a nice peak at some number of people. In fact, we can change U to match our intuition of what a better world would look like.
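A minimal sketch of the two aggregation rules above; the choice of C is an arbitrary illustrative assumption:

```python
import math

N_A, H_A = 1e12, 1.0   # World A: 1 trillion people at happiness 1
N_B, H_B = 1e9, 100.0  # World B: 1 billion people at happiness 100

def u_linear(n, h):
    # U = N * H: the linear aggregation that yields the Repugnant Conclusion
    return n * h

def u_crowding(n, h, c=1e20):
    # U = H * N * exp(-N^2 / C): still linear in happiness, but penalizes overcrowding.
    # C is an arbitrary constant controlling where the penalty kicks in.
    return h * n * math.exp(-(n ** 2) / c)

print(u_linear(N_A, H_A) > u_linear(N_B, H_B))      # True: linear utility prefers slum world A
print(u_crowding(N_A, H_A) > u_crowding(N_B, H_B))  # False: the crowding term prefers world B

# For fixed happiness, H * N * exp(-N^2/C) peaks at N = sqrt(C/2) people.
print(math.sqrt(1e20 / 2))  # ~7.1e9
```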

Expand full comment

Cool article, thanks scott-

Expand full comment

"fighting climate change ... building robust international institutions that avoid war and enable good governance."

MacAskill takes it for granted that these are good things to do, but he might be wrong. Climate change could make us worse off in the long run — or better off. Present global temperatures are high relative to the past few thousand years, low relative to the past few hundred million. Robust international institutions might avoid war. They might also prevent beneficial competition among national institutions and so lock us into global stasis.

To make the point more generally, MacAskill seems, judged by the review, to ignore the very serious knowledge problems with deciding what policies will have good effects in the distant future.

Expand full comment

Which is the core of my concern about trying to use essentially made up numbers to derive a mathematical proof of the correct course of action. This is impossible in real life for even very short term decisions (should the US intervene against Russia in support of Ukraine?), so I can't see any plausible way we could even try to do this for very long time periods - such as enough time to get ourselves mugged by quintillions of people's future preferences.

Expand full comment

The Old Testament placed limits on slavery, and the Church increasingly limited it for 1500 years - basically until the money wasn't just good, but suddenly amazingly good and more than half of everyone threw their principles in the ocean, overruling the others. The Quakers deserve a lot of credit, but not all of it.

Expand full comment

Another Phil101 class junior high level question:

If the supposed, much larger future population is capable of stability at least comparable to that of today - which it should be, in order for us to consider aiming to bring it about - wouldn't it be possible or likely that exactly the same longtermism would apply to those people, forcing them to discount their own preferences in order to maximize the utility of a much larger civilization in their far future? If their numbers add up to a rounding error in comparison with the much larger^2 population, it might follow that those people should sacrifice their utility in order to bring about the far future.

And as for the much larger^2 population's longtermist views...

Expand full comment

Yep. There's no reason to assume that our current generation is in any way special in this respect.

Naturally, this implies that every human generation ought to discount its own preferences in favour of future generations, right up until the point humanity goes extinct.

Expand full comment

I think a major point long-termism misses is risk. We discount the future (as in using a discount rate to say how much less we value future money or utility) because we ultimately don’t know what’s going to happen between now and then. A meteor could hit the earth, and then all our fervent long-term investments turned out to be pointless. Or all the other scenarios you could imagine. So the future is worth less than the present, and we should prioritize accordingly. As a rule of thumb, infinite happy people infinitely far in the future don’t matter. That’s not to say we shouldn’t invest in the future, just that we weigh that against a more immediate and certain present.

Practically, this also aligns with Scott’s point that most of the time improving the future is pretty similar to improving the present. Maybe some time soon we can stop torturing ourselves with future infinities and just get back to making things better.

Expand full comment

> Can we solve this by saying you can only create new people if they’re at least as happy as existing people - ie if they raise the average? No. In another mugging, MacAskill proves that if you accept this, then you must accept that it is sometimes better to create suffering people (ie people being tortured whose lives are actively worse than not existing at all) than happy people.

But that's the same as saying that "it's worth suffering in the fight for others' right to die" is problematic in the "zero is almost suicidal" case - a quality threshold just shifts the meaning of zero. If the conclusion is repugnant, then at some scale it's worth creating suffering to avoid it.

Expand full comment

The coal issue seems like a silly distraction. Imagine we evolved on a planet exactly like Earth except there was no coal anywhere. Do you think humanity would stagnate forever at pre-Industrial Revolution technology? A billion years after the emergence of Homo sapiens we're still messing around with muskets and whale oil lamps because we lacked an energy-dense rock to dig out of the ground? Things would surely go slower without coal, but if you're taking a "longtermist" view it seems silly to worry about civilization taking a little longer to rebuild.

Expand full comment

What would be the substitute for coal that's as energy dense?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Why does there need to be a substitute for coal that's as energy dense? You don't think a thriving civilization can make do with less and route around that spot on the tech tree after a few thousand years?

Expand full comment

No. Clearly. And I'm not alone. You just can't get the level of industrialisation that was seen in Britain without coal or something as energy dense as coal. Wood doesn't do it. And they were running out of wood anyway.

Expand full comment

For more on the long history of abolition, The Dawn of Everything [reviewed in the book review contest!] talks about prehistoric California tribes who lived immediately next to each other, some of whom appeared to own slaves and some of whom refused. Oppressing your fellow humans? Refusing to oppress your fellow humans? It's been going on for as long as there have been humans.

And abolition is not a clean line: slave labor still happens in the US, we just call it "prison labor" and look the other way.

As the US has the highest carceral population BY FAR [and, uhh, spoiler: we're not any "safer"...] along with a shocking rise in pre-trial holds since 2000, that seems like the most important "near term" cultural fix on the scale of abolishing slavery. Abolish the carceral state! And if that seems crazy to you, recall that the DOJ's own studies show that prison is not a crime deterrent and imprisoning people likely makes them re-offend more frequently: https://www.ojp.gov/ncjrs/virtual-library/abstracts/imprisonment-and-reoffending-crime-and-justice-review-research

As long as people in the US don't care that marginalized [poor] folks are being oppressed by these systems, we're probably never going to get folks to care about hypothetical Future People.

The discussion around this may obviously be different in, say, Norway.

Expand full comment

The main reason why I am not a utilitarian is that once you start to mix morality and math, you usually end up going off the rails. I think the main problem is the assumption that you can measure things like "utility" and "happiness" precisely, and get reasonable results by multiplying large rewards by small probabilities, or summing over vast numbers of hypothetical people. The error bars get too large, too quickly, for that sort of calculation to be viable.

That being said, if you are going to do math, do it properly. In reinforcement learning, if you attempt to sum over all future rewards from now until the end of time, you get an infinite number. The solution is to apply a time discount gamma, where 0.0 < gamma < 1.0, to any future rewards.

R(s_t) = r_t + gamma * E[ R(s_{t+1}) ]

Or in English, the total reward at time "t" is equal to the immediate benefit at time "t", plus the expected total reward at time "t+1", times gamma. Thus, any benefits that might occur at time t+10 will be discounted by gamma^10. This says that we should care about the future, but hypothetical future rewards are worth exponentially less than present rewards, depending on how far in the future you are looking. So long as future benefits don't grow exponentially faster than the decay rate of gamma, the math stays finite.

Note also that we are talking about future rewards "in expectation", which means dealing with uncertainty. Since the future is hard to predict, any future rewards are further discounted by the probability with which they might happen.

The argument over "short-term" vs "long-term" thinking is just an argument over what value to give gamma.
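A minimal sketch of that recursion; the reward stream and the value of gamma are made-up illustrative numbers:

```python
# A backwards pass computing R(s_t) = r_t + gamma * R(s_{t+1}) over a finite horizon.
def discounted_return(rewards, gamma=0.97):
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

# One util of benefit per year:
print(discounted_return([1.0] * 100))        # ~31.7
print(discounted_return([1.0] * 1_000_000))  # ~33.3: even a vast horizon stays finite
```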

Expand full comment

Note that the US Govt (OMB) sets the discount rate at 7% (0.07) for cost-benefit analyses of policy, which reduces benefits in year 2122 by a factor of roughly 1000. With benefits a century out discounted by a factor of 1000, no one cares much about global warming, and no one cares at all about what happens in 2222. So global warming advocates want to use a smaller discount rate. Thomas Schelling wrote a brilliant paper on this topic, "Intergenerational Discounting" [Energy Policy 23:4-5, 1995], which, among other things, reminds us of the Ramsey equation, where the per capita GDP growth rate is a lower bound for the discount rate, and any reasonable growth rate makes the distant future irrelevant.

The argument behind the Ramsey equation is roughly as follows: We believe it is ethical for wealthy people to give some of their wealth to poor people, and some of us believe it is ethical for the government to compel wealthy people to share some wealth with poor people (e.g. progressive income tax + social safety net). Therefore, it seems unethical for the government to compel poor people to share their meager wealth with wealthy people. However, people 100 years in the future are wealthy compared to people today. (Even if you imagine GDP growth slowing over this period, shrinking population will cause per capita GDP to continue to grow.) As a thought experiment, imagine asking European descendants on the US frontier in 1800 (or indigenous Americans in 1800) to give up burning a fire on one day per week (say, Mondays) because it will allow Americans in the year 2000 to each have 2 more gallons of gasoline per year. Perverse, eh? But this is what much of the climate change argument amounts to.

The British study on climate policy known as the Stern report used a near-zero discount rate to make its arguments for climate activism - fine, but mostly the poor today were helping the unimaginably rich 8000 years from now. Scott shows that MacAskill dodges all this by claiming benefits for the distant future also have net benefit today. However, this makes the whole question uninteresting. We execute policy because it has benefit today, and it happens that a lucky collateral effect is a benefit in the distant future. Easy decision. The only policy decisions deserving of any thought or analysis at all are those that have near-term costs only, balanced by far-term (5 year or 50 year) benefit.
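A quick sanity check of the factor-of-1000 claim above, as a minimal sketch assuming simple annual compounding of the 7% rate:

```python
# Present-value factor of a benefit delivered 100 years out at a 7% annual discount rate
discount_factor = 1.07 ** 100
print(discount_factor)        # ~868, i.e. roughly the factor of 1000 mentioned above
print(1.0 / discount_factor)  # ~0.00115: each future dollar counts for about a tenth of a cent
```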

Expand full comment

Can't we just agree that any analysis that relies on collapsing the complex entirety of human experience into a single number is not even wrong?

Expand full comment

Well, I'll agree with you, so that's a start.

Expand full comment

>Coward.

Agreed. As evidenced later by the neglect of any exploration of real conflicts between longtermism and general utilitarian ethics.

>So it would appear we have moral obligations to people who have not yet been born, and to people in the far future who might be millennia away.

There is so much more work needed before this armchair jump to this statement than the thought experiment provides.

>Stalin said that one death was a tragedy but a million was a statistic, but he was joking.

Was he? I don't think that is clear, from his behavior. I also don't think it is clear he was "wrong" about it. Ethics some might argue (I would argue) is context/perspective dependent. What is the right action for Bob walking down the street is not necessarily the right "action" for the US government.

>Coal

The whole coal conversation is silly. Industrialization is not remotely that path dependent. It might take quite a bit longer without coal, but there is no way that stops anything. Seems like a very bad misreading of history. Industrialization was incredibly rapid, the world seeing more change in 50 years than it had in millennia. If that is instead 500 years because of no coal, what difference does it make? In fact the transition might be smoother and less fraught.

>If only dictators have AI, maybe they can use it to create perfect surveillance states that will never be overthrown.

What is so bad about dictators? Especially ones with AI? When talking about issues this large scale, the exact distribution of political power is the least of our problems.

>Octopus farming

I agree this sounds bad.

>Bertrand Russell was a witch.

Indeed, he is amazing.

>Suppose...

And here would be the first of my two main complaints/responses. This "suppose" is doing a lot of the work here. In reality we discount ethical obligations with spatiotemporal distance from ourselves pretty heavily. One big reason for this is epistemology, it just generally isn't as possible to know and understand the outcomes of your actions when you get much beyond your own senses.

You see this with how difficult effective development aid is, and how bad people are at predicting when fusion will happen, and how their behavior impacts the climate, or the political system. All sorts of areas. Because of this epistemic poverty, we discount "potential people", quite heavily, and that makes perfect sense because we mostly aren't in a good position to know what is good for them especially as you get farther from today.

The longtermist tries to construct some ethical dilemma where they say "surely the child running down the path 10 days from now matters no more than the one running down it 10 years from now". And then once you grant that they jump to the seagulls. But the answer is to just impale yourself on that horn of the dilemma, embrace it.

No, the child 10 years from now is not as important. Someone else might clean up the glass, a flood might bury it, the trail might become disused. Et cetera, et cetera.

We don't have the same epistemic (and hence moral/ethical) standing towards the child 10 years from now, the situations ARE NOT the same.

The funny thing is overall I expect I am generally somewhat of a longtermist myself. I think one of the main focuses of humanity, should be trying to get itself as extinction proof as possible as soon as possible. Which means perhaps ratcheting down on the optimum economic growth/human flourishing slightly, and up on the interstellar colonization and self-sufficiency slider slightly.

But I certainly don't think we should do that on behalf of nebulous future people, but instead based on the inherent value of our thought//culture/civilization and accumulated knowledge. I don't remotely share the intuition that if I know someone is going to have a great life I owe it to them to make it possible.

>did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself?

Anyone who really believes this is far far down an ethical dark alley and needs to find their way back to sanity.

Expand full comment

Good post. Except for the coal. The industrial Revolution needed coal.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Nah, the industrial revolution needed coal to play out the way it did with the rapidity it did. You still have turbines from water and windmills and eventually electricity to move it forward without coal. I guess maybe you could argue it wouldn't have been a "revolution" then, more just slow gradualism, but I don't think there is anything about the economic or intellectual history that indicates a Europe with no coal doesn't still lead science and economics; it just maybe builds up less of a comparative advantage, dominates less, conquers less.

Coal made it all easy, and mass scale. But the discoveries and efficiencies are all important enough that even implemented on much more boutique and restricted scales they would have still led to big change. Maybe "Jutland" is only 30 ships vs 30 instead of 100 vs 100, and happens in 1970 instead of 1916. But in my mind that's hardly an indication coal was "required". There was/is SOOOOO much slack in the system.

And you really only need to get through the 100-150 years to the petrochemical revolution regardless. Maybe that takes 200 or 300. But I cannot imagine it somehow doesn't happen.

Expand full comment

I never understand why so many people care specifically about the survival of humanity. Isn't it enough that many different species survive? Anyway, our distant descendants won't be humans.

Expand full comment

I think you can head off the Repugnant Conclusion fairly easily by deciding that a larger population is not, in itself, a positive.

Expand full comment

In order to get the Repugnant Conclusion, you merely have to accept that adding positive utility to an existing population is good - as long as doing so does not disproportionally diminish existing utility.

The larger population isn't a positive in itself - it is positive because it contains more people living lives (if only barely) worth living. After that it just goes downhill of its own accord, because adding more people is easier than improving already existing lives.

Expand full comment

All these thought experiments seem to contain the hidden assumption that the Copenhagen interpretation of quantum mechanics is the correct one. That we live in a universe with a single future. If instead the Many Worlds interpretation of quantum mechanics is true, you don't really have to worry about silly things like humanity going extinct - that would be practically impossible.

You also wouldn't have to stress over whether we should try to have a world with 50 billion happy people or a world with 500 billion slightly less happy people. Many worlds already guarantees countless future worlds with the whole range of population size and disposition. There will be far more distinct individuals across the branches of the wave function than could ever fit in the Virgo Super Cluster of a singular universe, and that's guaranteed no matter what we do today since there is always another branch of the wave function where we do something different.

If you believe the many worlds interpretation of quantum mechanics is true AND that quantum immortality follows from it.. Well that opens up all kinds of fun possibilities!

Expand full comment

Funnily enough Yudkowsky is, as far as I recall, a fan of the MWI, and he if anyone would understand the implications. Still, he's extremely concerned with x-risk in [t]his particular world.

Expand full comment

I wonder how he squares that. Maybe a hedge against the possibility that MWI is wrong? It still makes sense to be somewhat concerned about x-risk along with any other deadly risk under MWI. It would be a crappy experience to go through it personally or to worry that many future people might. But I maintain that you simply wouldn't have to worry about humanity going extinct as a separate and distinct concern from the wellbeing of yourself and future people.

Expand full comment

Yeah, I guess he just Bayeses his way through it. If MWI is real, it doesn't matter what "he" does, as he will end up making every possible choice in all matters in every possible way, no matter how he subjectively perceives it.

However, that only influences the resulting probability distribution as weighed by MWI's relative probability of being real, and that influences the risk analysis in an "if MWI is real, I don't screw up by still caring about x-risk, and if it isn't real, we're really really screwed" sort of manner.

Expand full comment

Nooo no no no. Many worlds doesn't mean all possible worlds happen with equal frequency. Yudkowsky addresses this explicitly. Possible worlds get generated more often the closer they are to the one that generates them. 'It all adds up to normality'. There are no differences in what you should do morally under many worlds. It just massively multiplies the numbers involved, not the expected values.

Expand full comment

I think that is true for many, but not all moral positions. The idea that extinction of the human race would be very bad on a scale far above and beyond the extinction of 99.999% of the human race just kind of becomes irrelevant under MWI.

Completely adding up to normality would also mean treating the amplitude of various branches of the wave function as some kind of indicator of the moral weight we should assign to that branch, just like we do with possible futures under the Copenhagen interpretation, but that seems questionable to me. If you flip a weighted quantum coin that comes up heads 1/1,000 times, the heads branch amplitude is much smaller than the tails branch amplitude, but you have still only created two distinct sets of observers. You could argue that we should pre-assign higher moral value to the tails branch, but it's definitely less airtight of an argument under MWI than it would be under the Copenhagen interpretation.
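
To make the weighting question concrete: the standard Born-rule scoring of that coin flip would look something like

$$
\mathbb{E}[U] = |\alpha|^{2}\, U_{\text{heads}} + |\beta|^{2}\, U_{\text{tails}}, \qquad |\alpha|^{2} = 0.001, \;\; |\beta|^{2} = 0.999
$$

(my notation, a sketch of the view being questioned), whereas the branch-counting alternative gestured at here would weight the two sets of observers equally, i.e. $\tfrac{1}{2}(U_{\text{heads}} + U_{\text{tails}})$, regardless of amplitude.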

And if you believe Quantum Immortality follows from the MWI, that's REALLY going to change your moral calculations. For example, you could just totally ignore certain kinds of x-risk (like accidentally triggered false vacuum collapse in a particle collider) since everyone would just continue existing without experiencing it no matter how likely it was to happen.

Expand full comment

1) Sorry for my snark; it was unwarranted.

2) My understanding is that the reason extinction is so bad is not that one life for sure is better than a 50% chance of two or zero, but that in the long term anything short of extinction still eventually colonizes the galaxy and reaches massive universe utils.

Expand full comment

Under MWI any populated world is guaranteed to eventually colonize the galaxy and reach massive universe utils, so you still don't really have to worry about extinction.

Another example of different morality under MWI is that instead of trying to maximize the population of happy galactic citizens, you might feel obligated to maximize the population of DISTINCT happy galactic citizens. It's almost certainly not the case, but if we found out that the wave function doesn't branch as much as we think it does, perhaps because very few probabilistic quantum events percolate upward into macroscopic events that differentiate people's experiences, we might want to artificially increase how often that happens - construct machines that roll quantum dice and produce notable macroscopic effects depending on the outcome.

Such a project would only make sense under MWI, not the Copenhagen Interpretation. So now you are set up with a conflict of how EA resources should be allocated depending on which interpretation you favor. Do you try to minimize x-risk or maximize wave function branching? It's probably not the cleanest example, but I suspect further exploration of the concept would uncover better ones.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I'm not sure 'it all adds up to normality' leads logically to 'your so-called choices matter in the grand scheme of things' if one takes MWI very seriously.

If you, say, work in AI risk, the possible worlds where humanity survives 'close' to you only seem to represent a higher frequency because they happen to be close to the particular world you happen to inhabit. Whereas even I can imagine a possible world where even I became an x-risk specialist, and close to that possible world exist possible worlds with a higher rate of human[ity] survival*. I'm uncertain whether those worlds being close or far away from my POV has anything to do with anything.

[*) likely, lower rate, as I would fill the environment with meaningless noise without adding to the knowledge, hindering progress]

Expand full comment

That is, I don't think our apparent choices have any influence on the wavefunction's probability distribution and therefore the distribution of parallel worlds. We seem to make choices and those choices seemingly have effects, but that simply represents the outcomes in those particular worlds. I don't think we can shift the entire distribution to one way or another, just our particular perspective of where we lie in the distribution.

Expand full comment

I agree. While I find long-termism very compelling when reasoning about it in the abstract, I must admit that my much stronger personal motivation for trying to get humanity safely through the most important century is my concern for my loved ones and myself, followed by my concern for my imagined future children, followed by my concern for all the strangers alive today or who will be alive at the time of possible extinction in 5-50 years. People who don't yet exist failing to ever exist matters; it just gets like 5% of my total, despite the numbers being huge. I dunno. I think maybe I have a decreasing valuation of numbers of people. Like, it matters more to me that somebody is alive vs. nobody than that there are lots of theoretical people vs. a few theoretical people. Questions about theoretical moral value are complex, and I don't feel that this has answered them to my satisfaction. I'm not about to let that stop me from trying my hardest to keep humanity from going extinct, though!

Expand full comment

"I dunno. I think maybe I have decreasing valuation of numbers of people."

It's a common thing called "scope insensitivity [bias]". I don't think it's necessarily always a bias, as in this case.

Expand full comment

I'm not sure it's the same thing. I do think that my sense of badness scales with numbers of people suffering, but I'm not sure that my sense of goodness scales with the number of people existing. I get my news from statistical reports, and care about progress in the world based on such, not from anecdotal data, so I'm less prone to scope insensitivity than most people.

Value is subtle and complex, however. For example, I find that I value diversity of thought and experience. A lot of people all falling into 10 very similar life patterns and thought patterns seems less valuable to me than a smaller number of people with 100 life & thought patterns. I could go on and on with such. There's a lot more to this than numbers of people and the average happiness of their lives.

Expand full comment

>the joy that the child brings the parents more than compensates for the harm against abstract utility

On average, children decrease parental happiness, so this isn't particularly exculpatory.

Expand full comment

Did that conclusion survive the replication crisis?

Expand full comment

Yes.

""We assume that children will improve our happiness," Senior tells NPR's Melissa Block. "That's why babies are called bundles of joy. But what's so interesting is that one of the most robust findings in the social sciences — and it's been this way for about 50 years — is that children do not improve their parents' happiness."

https://www.npr.org/2014/01/24/265365876/a-parenting-paradox-how-kids-manage-to-be-all-joy-and-no-fun

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”

I think this kind of book is borderline pseudoscience. Philosophy discovers ideas, science discovers truth. And while MacAskill wants to compel you to believe something is true, in fact he is only doing philosophy.

The real idea of science is not "using our big brains to reason out the truth" or "being rational"; it is, as Feynman once said, that the test of all knowledge is experiment.

We do not believe the odd things special relativity tells us about time simply because there is a chain of logic and we believe anything logic tells us. We believe it because that chain of logic leads to testable, falsifiable conclusions that have been verified by experiment.

Mathematics alone is not science because there is nothing to test. Only when you try to apply it (in a field like physics) do you get testable conclusions. Logic does not derive truth; it simply tells us what conclusions are consistent with a given set of axioms. For example, hyperbolic geometry yields different conclusions than Euclidean geometry. Neither is "right" or "wrong" or "true" or "false": it doesn't even make sense to talk about something being "true" until you can test it against reality.

When MacAskill derives his Repugnant Conclusions and decides that they are True, what is the experiment by which we test that truth? I don't think there is one.

One can argue that we should still believe the conclusion because we believe the axioms, but what is the experiment that tested or derived our axioms? Our intuition? But if our intuition is axiomatic, a conclusion that disagrees with our intuition cannot be correct. The "proof" of such a conclusion may have demonstrated that our intuition is not logically consistent, but that does not help us decide what is true or which of the two intuitions (axiom or conclusion) we should discard.

To the extent that MacAskill's arguments are like mathematics, they are interesting and worth thinking about. But to the extent that they are not like science, we should treat the conclusions derived in the same way we would treat the conclusions of hyperbolic geometry. Not true or false, just interesting.

And I think MacAskill knows this is the case. After all, after a quick google it does not look like he's fathering as many children as he possibly can.

Expand full comment

To tackle the core example here: we don't owe anything to the future child. We owe things only to those that exist. And future children, like future starvation (Malthus) or future fusion (still waiting), aren't real until the moment they are born/discovered. Apologies if I missed it (although LRG's comment touches on it), but the doctrine of Presentism seems to be missing from all these discussions. We are all engaging in a type of moral induction. But induction is a deeply flawed method of knowing the truth. Yes, it might be likely that humanity survives next year. But it might not. We can certainly make bets that certain actions taken now (which do affect presently existing moral agents) are worthwhile. Not because we "owe" anything to future generations, but because we are betting on our continuance and are willing to spend some present value for possible future gain. But all of that calculus is present. And to borrow from David Deutsch, our inductive reasoning about the future is, at heart, prophecy, not prediction. Sure, there may be 500B humans one day. Or AI wipes us all out next Tuesday. The end. What do we owe to those 500B? Nothing. Clearly. Because they don't exist, and may never. So the real debate is about our inductive confidence. Should I be concerned about the child stepping on glass in 10,000 years? Our inductive reasoning falls utterly apart at that level. So no. Should I be concerned about something that's reasonably foreseeable in the near term? Yes. But it's frankly a bet. That it will be beneficial to those who exist at that near future time. Not an obligation. But a moral insurance plan. And there's only so much insurance one should rationally carry for events that may never occur.

Expand full comment

>This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them.

Bullshit. In order to successfully affect 50 quadrillion people, it's not enough to do something that has some kind of effect on the distant future -- it would have to be some act that uniformly improves the lives of every single person on future Earth in a way that can be accurately predicted 500 million years before it happens. That's not just improbable -- that's insane.

Expand full comment

One exception is if you have the power to cause the extinction of mankind. Refraining from doing that has a pretty solid chance of affecting the future of humans from then on. Fortunately, very few of us have a realistic ability to cause the extinction of mankind, though there's some stuff that plausibly could (sufficiently horrible bioweapons, unaligned AI, ecosystem-wrecking warfare (think splattering the surface of the Earth with 10km-wide asteroids), etc.).

Alternatively, if you could work out how to get self-sustaining, economically at least minimally viable colonies of humans off Earth, that might substantially lower our probability of extinction. (Unless they're the ones smacking Earth with 10km-wide asteroids or releasing bioweapons on Earth to win a war of independence or something.)

Expand full comment

Fun review. Much of the logic alluded to seems mushy to me.

Example. Regarding having a child with or without a medical condition, these are two decisions conflated into one. "But we already agreed that having the child with the mild medical condition is morally neutral. So it seems that having the healthy child must be morally good, better than not having a child at all." Does not follow.

Another way to look at it is that once it is decided to have a child (a decision that in and of itself may be morally neutral), the next decision fork is whether the child is known to have a morally unacceptable health disorder, a morally neutral health disorder, or no health disorder at all, whose morality remains to be determined. It is a fallacy that, because decision B (lying between two other decisions A and C along a spectrum of characteristic H) is morally neutral, with morality being characteristic M, decisions on either side of the H spectrum must therefore map onto the M spectrum. It is possible that bearing children with health conditions less "severe" (for the sake of argument) than male pattern baldness might be equally morally neutral. The M spectrum may only run from bad to neutral in this case. There is no law that there must be a positive-outcome option.

Then there is the mugger and the kittens. The decision-maker is loosely represented as the observer. Better outcomes for whom? With the mugger, it is better from the decision-making target's perspective to retain the wallet. From the mugger's perspective, it is better for the target to relinquish it. Regarding drowning kittens, that is undesirable from the kittens' perspective but logically, it is a neutral outcome for the drowner. Do not confuse this observation with sociopathy, please; it is an argument about the logic!

There is so much confusion of categories in these poorly defined arguments that I find them unpersuasive in general.

Expand full comment

I have spent too many hours thinking about questions like the repugnant conclusion, and whether it's better to maximize average or total happiness. I'm still hopelessly confused. It's easy to dismiss all this as pointless philosophizing, but I think if we ever get to a point where we can create large numbers of artificial sentient beings, these questions will have huge moral implications.

I suspect that one reason for present-day confusion around the question is a lack of a mechanistic understanding of how sentience and qualia work, and so our frameworks for thinking about these questions could be off.

For example, one assumption that seems to be baked into these questions is that there is a discrete number of distinct humans/brains/entities that do the experiencing. You could imagine a world where the rate of information transfer between these entities is so much higher that they aren't really distinct from one another anymore. In that world differences in happiness between these entities might be kind of like differences in happiness of different brain regions.

I really hope we'll develop better frameworks for thinking about these questions, and I think that by creating and studying artificial sentient systems that can report on their experiences we should be able to do so.

Expand full comment

(First comment - nothing like a math error to motivate a forum post.)

Trying to follow the critique of the Repugnant Conclusion here:

> "World C (10 billion people with happiness 95). You will not be surprised to hear we can repeat the process to go to 20 billion people with happiness 90, 40 billion with 85, and so on, all the way until we reach (let’s say) a trillion people with happiness 0.01. Remember, on our scale, 0 was completely neutral, neither enjoying nor hating life, not caring whether they live or die. So we have gone from a world of 10 billion extremely happy people to a trillion near-suicidal people, and every step seems logically valid and morally correct."

10 billion people with happiness 95 = 950 billion utilons.

1 trillion people with happiness 0.01 = 10 billion utilons.

Shouldn't you need at least ~100 trillion people (not 1 trillion) with happiness 0.01 before the moral calculus would favor choosing the greater number of less-happy people?
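
For what it's worth, the arithmetic is easy to check. A minimal sketch (the population and happiness numbers are the review's illustrative ones, with happiness treated as additive "utilons"):

```python
world_a = 10e9 * 95    # 10 billion people at happiness 95   -> 9.5e11 utilons
world_z = 1e12 * 0.01  # 1 trillion people at happiness 0.01 -> 1e10 utilons
print(world_a, world_z, world_z > world_a)  # world Z comes out far behind

# Population at happiness 0.01 needed just to match world A:
print(world_a / 0.01)  # 9.5e13, i.e. roughly 95 trillion people
```

So on a straight total-utility reading the trillion-person endpoint does undershoot by roughly two orders of magnitude; the argument only needs the population to keep growing until it crosses that threshold.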

Expand full comment

You can tweak the numbers any way you need to (in this case, I suspect it's simply a mistake). The Repugnant Conclusion assumes that you can keep adding positive utility people ad infinitum.

Regardless of how you set your initial parameters, "giant mass of near-suicidal people" will always be the morally-desirable end state if you're maximising total utility with the addition assumption holding.

Expand full comment

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

I'll supply the obligatory mugging: https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-intuitions-can-often-be-money-pumped

Expand full comment

I think this money pump probably doesn't work. The standard view in the literature is that money pumps need to be able to work given that the offer of the future trades is foreseen. Further, the standard view is that we should use backwards induction reasoning to account for future trades. I don't think the money pump works given those standard assumptions (basically, I think that the standard view is precisely the view you don't discuss when you say "You could also take the local decision rule and try to turn it into a global decision rule by giving it information about what decisions it would make in the future. I'm not sure how you'd make this work but I don't expect great results."). I suspect that you could fix this, but I haven't checked (see Gustafsson and Rabinowicz, "A Simpler, More Compelling Money Pump with Foresight").

I also think that neither trades 1 nor 3 would be mandated on plausible person-affecting views, which involve a sort of greedy neutrality (basically, the value of the additional happy person needn't be heeded but it can be if you want; as a consequence, both trades will turn out to be permissible but not required). A common view in the literature is that non-forcing money pumps don't succeed.

Also, these are just the first two responses that come to mind. I have some further background suspicions about the assumptions going into this money pump (though perhaps these ultimately come to nothing).

Meta: I worry that EAs/rationalists are often a little too fast to assume that philosophers defend dumb views that are obviously false, but it often turns out that the views are more robust than they first seem.

Expand full comment

As mentioned at the top of the post:

> Note that it is primarily about person-affecting intuitions that normal people have, rather than a serious engagement with the population ethics literature, which contains many person-affecting views not subject to the argument in this post.

I think this money pump will apply to Scott's position specifically, not to every person-affecting theory that philosophers have come up with.

> The standard view in the literature is that money pumps need to be able to work given that the offer of the future trades is foreseen.

I think Scott was proposing a local decision rule. (If not then I've misunderstood Scott and my reply might be irrelevant.) When you are evaluating a local decision rule, I don't endorse this principle, and I don't think Scott would either. I think "if you follow this local decision rule you can lose money with certainty" is a good reason to discard that local decision rule. You can of course try to turn it into a global decision rule and that seems like a totally reasonable next direction for Scott to go in.

I happen to think that the similar-sounding global decision rules (that I know of) are also silly but I don't claim to have made the case for that here. Personally the biggest update for me was Arrhenius's impossibility result: https://www.iffs.se/media/2264/an-impossibility-theorem-for-welfarist-axiologies-in-ep-2000.pdf

Expand full comment

The Repugnant Conclusion: "Shut Up, Be Fruitful, and Multiply"

Expand full comment

"Nor was there much abolitionist thinking in the New World before 1700. As far as anyone can tell, the first abolitionist was Benjamin Lay (1682 - 1759), a hunchbacked Quaker dwarf who lived in a cave. He convinced some of his fellow Quakers, the Quakers convinced some other Americans and British, and the British convinced the world."

We should celebrate all of the work the Quakers did to eradicate most of the slavery in the world. But they were not the first abolitionists. The abolitionist movement of the High Middle Ages in Northwestern Europe successfully ended the Viking slave/thrall trade and laid the foundation for the Quakers to build on. There is less evidence for this time period, but we do have enough to get some idea of the movement.

The first evidence we have comes from the Council of Koblenz, in 922, in what is now Germany, which unanimously agreed that selling a Christian into slavery was equivalent to homicide. It doesn't look like this had any legal consequences.

In England, about 10% of the population was enslaved in 1086, when the census now known as the Domesday Book was conducted. Anselm of Canterbury (famous for the ontological argument) convened the Council of London in 1102, which outlawed "that nefarious business by which they were accustomed hitherto to sell men like brute animals". Slavery in England seems to have died out over the next several decades. Slavery was still prominent in Ireland, and Dublin had been the main hub for the Viking slave trade. This Irish slave trade was one of the reasons listed for the Anglo-Norman invasion of Ireland in 1169. Abolition was declared at the Council of Armagh in 1171.

In Norway, we don't know the exact date when slavery was outlawed. The legal code issued by Magnus VI in 1274 discusses former slaves, but not current slaves, which indicates that slavery had been outlawed within the previous generation. Slavery in Sweden was ended by Magnus IV in 1335.

Louis X declared that "France signifies freedom" in 1315, and that any slave who set foot on French soil was automatically freed. Philip V abolished serfdom as well in 1318.

You frequently hear the argument that serfdom was ended in Europe by the Black Death. The decreased population allowed peasants more bargaining power to demand their freedom. This really doesn't match up with the history in France (not an insignificant country): serfdom ended decades before the Black Death of 1347. The abolition occurred during a Great Famine, when overpopulation was a greater concern. I think that this is a point in favor of the more general argument that the abolition of slavery was contingent and the result of moral persuasion, not the inevitable result of economic forces.

In the Mediterranean, Christian states had laws against selling Christians into slavery, dating at least as far back as the Pactum Lotharii of 840 between Venice and the Carolingian Empire. Slaves captured from Muslim states or pagans from Eastern Europe were commonly used by Christians. Similarly, Muslim states in the Mediterranean banned enslaving Muslims, but frequently enslaved Christians and pagans. Since Muslims and Christians were continually raiding each other or engaged in larger wars, there was no shortage of slaves in the Mediterranean.

During the Early Modern Era, the religion-based criteria for slavery evolved into the more familiar race-based criteria for slavery. The countries of northwest Europe participated in the slave trade (a lot) and used slavery extensively in their colonies.

The Quaker-led abolitionist movement of the 1700s was able to build on the earlier abolitionist movement. In Somerset v Stewart in 1772, the slave Somerset, who had been bought in the colonies and brought to England, sued for his freedom. The judge found that there was no precedent for slavery in English common law, or in any act of parliament. This was the first major victory of the modern abolitionist movement, and it relied on the tradition created by the medieval abolitionists.

Expand full comment

The Mercedarians (Order of Our Lady of Ransom) were founded in the 13th century to pay ransoms for Christian slaves:

https://en.wikipedia.org/wiki/Order_of_the_Blessed_Virgin_Mary_of_Mercy

"Between the eighth and the fifteenth centuries, medieval Europe was in a state of intermittent warfare between the Christian kingdoms of southern Europe and the Muslim polities of North Africa, Southern France, Sicily and Moorish portions of Spain. According to James W. Brodman, the threat of capture, whether by pirates or coastal raiders, or during one of the region's intermittent wars, was a continuous threat to residents of Catalonia, Languedoc and the other coastal provinces of medieval Christian Europe. Raids by militias, bands and armies from both sides were an almost annual occurrence.

For over 600 years, these constant armed confrontations produced numerous war prisoners on both sides. Islam's captives were reduced to the state of slaves since they were considered war booty. In the lands of Visigothic Spain, both Christian and Muslim societies had become accustomed to the buying and selling of captives, so much so that tenth-century Andalusian merchants formed caravans to purchase slaves in Eastern Europe. In the thirteenth century, in addition to spices, slaves constituted one of the goods of the flourishing trade between Christian and Muslim ports.

[Peter] Nolasco began ransoming Christian captives in 1203. After fifteen years of work, he and his friends saw that the number of captives was growing day by day. His plan was to establish a well-structured and stable redemptive religious order under the patronage of Blessed Mary.

The Order of the Blessed Virgin Mary of Mercy (or the Order of Merced, O.Merc., Mercedarians, the Order of Captives, or the Order of Our Lady of Ransom) was one of many dozens of associations that sprang up in Europe during the 12th and 13th centuries as institutions of charitable works. The work of the Mercedarians was in ransoming impoverished captive Christians (slaves) held in Muslim hands, especially along the frontier that the Crown of Aragon shared with al-Andalus (Muslim Spain).

The Order of Mercy, an early 13th century popular movement of personal piety organized at first by Nolasco, was concerned with ransoming the ordinary men who had not the means to negotiate their own ransom, the "poor of Christ."

Expand full comment

"For example, it would be much easier to reinvent agriculture the second time around, because we would have seeds of our current highly-optimized crops"

This may not be true. High yield wheat requires the application of synthetic hormones during its growth cycle in order to deactivate the genes that make the stem grow, otherwise the stem gets too long and bendy to support the head, and the plant falls over and rots.

Modern agriculture is highly technical and requires significant industrial infrastructure to support it.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I may be misunderstanding you, but I doubt that your proposed view on population ethics does what you want. (Sorry if this was already discussed.) You say:

> Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. [...] This series of commitments feels basically right to me and I think it prevents muggings.

I'm not sure whether you mean to suggest that creating new (happy) people adds zero value, or that it does add some positive value provided the new people increase average happiness.

In either case, the resulting view does prevent the kind of mugging you get based on the Repugnant Conclusion. But many other muggings remain. For instance:

*If adding new people is at best neutral:*

This commits you to the view that any population that faces even a tiny risk of creating net unhappy people (or worse, future generations with slightly below-average welfare!) in the future should pursue voluntary extinction because there's an expected harm that cannot be offset by anything of positive value. Imagine an amazing utopia, but that for some reason the only options available to its inhabitants are voluntary extinction or a very long future utopia for its descendants that is still great (imagine lives *much* better than any life today) but slightly less awesome than the counterfactual average. Your proposed view implies that it'd be better if this utopian world was cut short, which seems absurd.

If the amount of expected future suffering (or reduction in average wellbeing) is small enough, you may be able to get around this by appealing to contingent claims like "but procreating might make people happy, and maybe this increase in happiness in practice would outweigh the expected reduction in average wellbeing that is at stake". But this response neither works to defeat the previous thought experiment, nor does it (arguably) work in the actual world. In the thought experiment, we can simply stipulate that refusing voluntary extinction does not increase the happiness of the current generation. Put differently, this response can't buy you more than being able to say "future generations by themselves can only make the world worse, but bringing them into existence can sometimes be justified by the current generation's self-interest". My guess is that this is not really the intuition you started out with, and in any case it becomes increasingly tenuous once we consider more extreme scenarios. Imagine a hypothetical Adam and Eve with a crystal ball who experience extreme levels of bliss. They know that if they procreate, a utopian future lasting 3^^^3 years will ensue, in which at each time 3^^^3 people will experience the same levels of bliss minus one barely perceptible pinprick. Suppose Adam and Eve were interested in what they ought to do, morally speaking (rather than just in what's best for them), and they turn to you for advice. Would you really tell them "easy! the key question for you to answer is whether the reduction in future average wellbeing represented by these pinpricks is outweighed by a potential happiness boost you'd get from having sex."?

OK, now maybe you somehow generally object to reasoning involving contrived thought experiments. But arguably we get the implication favoring voluntary extinction in the real world as well. After all, there is a non-tiny risk of creating astronomical amounts of future suffering, e.g. if the transition to a world with transformative AI goes wrong and results in lots of suffering machine workers or whatever. (For those who are skeptical about weird AI futures, consider that even a 'business as usual' world involves some people who experience more suffering than happiness. We don't even need to get into things happening to nonhuman animals ...) This is a significant amount of expected disvalue that, from an impartial perspective, is not plausibly outweighed by the interests of current people to have children. You can of course still maintain that "I don't want x-risks to kill me and everyone I know", but this statement then has morphed from "look, honestly my main motivation to prevent this horrible thing from happening is that I don't want my near and dear to die" to "I don't want my near and dear to die even at a huge cost to the world". This seems hard to square with generally cheering for EA, and seems viable only as an admission of prioritizing one's self-interest over moral considerations (and, to be clear, may be relatable and even exculpable as such) but hardly as an articulation of a moral view.

Does it help if you modify the view to saying it's only bad to add people whose wellbeing is below some fixed & noncontingent "critical threshold" (rather than average wellbeing)? Or only if their wellbeing would be below zero? No. Some of the above arguments still apply, and in others you still get nearly as counterintuitive results by replacing future populations that would slightly reduce the wellbeing average with (a risk of) future populations with slightly below-zero lives. Any view that entails that future generations can make the world only worse, but never better, will have extremely counterintuitive implications of the kind discussed above.

*If adding new people _is_ positive when they increase average wellbeing:* OK, now you're in the game of at least sometimes being able to say "yes, the future is worth saving, and not just for selfish reasons". But now your view is arguably just a worse version of the critical level view, which, as you describe, Will does discuss in the book. For instance, your view implies that it's sometimes better to create an enormous amount of people experiencing extremely severe torture than to create an even larger number of amazingly happy people who unfortunately just so happen to be slightly less happy than the previous even more unfathomably high average. Faced with such implications, I think you should just throw out the role that average wellbeing plays in your view, and adopt a critical level view instead. You still have to deal with the same sort of case, with the previous wellbeing average replaced by the critical level (now we are just dealing with the 'Sadistic Conclusion' discussed in the book), and I would still consider it a fatal objection to your theory but it is at least slightly less absurd than the average variant.

--

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Taking a step back (and bringing back in some distinctions I've elided so far), the key observation anyone needs to grapple with is that a first-pass empirical analysis suggests that almost all potential (dis-)value lies in the future. We can either bite the bullet that this means the distant future is of overwhelming importance, or try to explain away the key observation. So we have four alternatives:

(1) 'Fanatical longtermism:' We accept that, depending on our empirical beliefs, unfathomable amounts of positive future value swamp all other reasons we have for choosing actions.

(2) 'Fanatical world exploder:' We accept that future lives can at best make the world worse, and so that barring appeals to our own happiness gains, we should campaign for voluntary extinction.

(3) 'Epicycles': You try to avoid either of the previous conclusions by adding additional complexity to your view. This can be philosophical: Perhaps the solution is found by appealing to 'deontic considerations' – reasons for actions other than maximizing expected value, or bounded utility functions, or rejecting transitivity, or by saying that some different futures are "on a par" with each other in a way that is subtly different from saying they are equally good and hopefully avoids various problems that would be involved in saying the latter or ... (Academic philosophers have done this for decades, and I think it's fair to say that they have so far basically failed.) Or it can be empirical: perhaps something about aliens or the possibility that we may be in a simulation happens to change the analysis in just the right way such that our everyday intuitions are vindicated after all.

(4) 'Pragmatism': You accept that improving the world from an impartial perspective boils down to improving the far future, but also that there are many other perspectives you consider when choosing actions, both "moral" and self-regarding ones. You don't stress out about not having a fleshed-out philosophical account for how this is working, you just do it – just like you don't stress out about not having a philosophical account for how to decide whether to give 10% or 11% of your income to the global poor or for how to decide whether to cut short your well-earned and sorely needed holiday in order to attend your aunt's wedding.

I'll be snarky in my closing paragraph (which is not especially directed at this review), so let me preface it by saying that it's slightly exaggerated for humorous effect; and by acknowledging that even responses of kinds I somewhat mock can be valuable contributions to a constructive conversation; and by owning that longtermists, including in WWOTF, don't always do a great job at avoiding various misunderstandings when talking to a general audience (or even when talking to each other). Anyway, here it goes:

A lot of the popular response to WWOTF has been one of: people being scared that effective altruists secretly embrace (1); people reiterating that they are fine with embracing (2), contra mainstream longtermism; people discovering a sudden love of strategy (3), hoping and sometimes confidently asserting that some view they've just made up (but which, unbeknownst to them, was already discussed 40 years ago in Reasons and Persons) either undermines longtermism altogether or at least weakens the case for ensuring we survive. (Within the choir there are also some legit, non-consequentialist philosophers who somewhat fairly object to characterizing (3) as adding epicycles to an uncontroversial minimal theory, but who don’t seem to realize, or at least fail to acknowledge, that their alternative view of conceiving of morality either commits them to some kind of weird, legalistic notion of morality that is arguably at least as divorced from common-sense morality as consequentialism; or that within their framework they still have to deal with the dilemma that essentially demands an answer of either type (1), (2), (3), or (4).) Meanwhile, actual longtermist EAs mostly embrace (4) – admittedly in some cases after a fair amount of soul searching and youthful zeal – and are busy working on us not getting all killed (which, yes, doesn't require you to be a longtermist to feel pretty good about).

(Disclaimer: I've been professionally involved with work promoting longtermism, including work aimed at improving and promoting WWOTF specifically. Views are my own.)

Expand full comment

What’s the underlying argument re: why a hypothetical reasonable man should logically care about anyone aside from (i) himself and, possibly, (ii) his own descendants?

That seems to be taken for granted, but not postulating it could lead one to prefer a catastrophe that wipes out 99% of humanity to a car wreck that wipes out him and his children.

Expand full comment

Well, the best argument is that this doesn't seem to match with anyone's intuitions - even the most constrained view of morality usually places some value on friends and family (including parents and siblings). I don't think that moral philosophy works if you don't at least care a little already.

Expand full comment

To be fair, "caring about your friends and family" is easily subsumed under "caring about yourself", given that these people will affect your life in numerous ways.

It doesn't necessarily follow that you should similarly care about strangers on the other side of the world, or generations that will come centuries after you and everyone you knew died.

Expand full comment

I somewhat agree, but I think concern for friends and family can extend beyond your own lifetime - people put thought into what will happen to their children after they die, and there is a great deal of respect given to the idea of being willing to lay down one's life for one's friends. I think there are good reasons to not focus too much on the future (particularly our inability to reliably affect it), but the thing to really emphasise is that concern for future generations is not unique to quirky utilitarians who've done too much moral philosophy.

I acknowledge it is a jump to consider the wellbeing of distant strangers, separated from you by vast distances of space and/or time. I don't expect everyone to find the argument compelling; I just think it's worth pointing out that it's not a massive leap of logic but a series of smaller steps. I don't expect anyone to care as much about these distant people as they do about those close to them; I just don't think it's too far-fetched to have some level of concern.

Expand full comment

Clearly it is not far-fetched, given how many people demonstrate varying levels of concern about these things.

The problem with quirky utilitarians isn't that they care, but rather the tendency to bite off way more than they can chew. Utilitarianism attempts to reduce complex choices in a chaotic world to a simple game of count the beans, and such attempts almost invariably end in tears.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

“I realize this is ‘anti-intellectual’ and ‘defeating the entire point of philosophy’.” This is why Robert Nozick said that the perfect argument is one where, if you admit the truth of the premises and agree that they lead to the conclusion, but still deny the conclusion, then you die. Otherwise people are free to thumb their noses at reason and walk away. Nozick lamented the lack of perfect arguments.

Expand full comment

I always feel like these kinds of moral philosophy arguments that arrive at weird conclusions are some kind of category error. The Repugnant Conclusion to me feels like pointing out that if you divide by zero you can make math do anything. The correct response is to point out that "dividing by zero" means nothing in a real world context (I can't divide my three apples among zero friends and end up with infinite apples), and therefore the funky math results from it are meaningless.

In the same way, trying to redistribute happiness across a population isn't actually a thing you can do. I can't give you some of my happiness and take some of your sadness. Since you can't actually do the things the thought experiment proposes in the real world, it has no applicability to the real world.

Expand full comment

Except that here, there is no division by zero. There really isn't any math necessary; you can demonstrate the repugnant conclusion with pictures of rectangles alone, or with basic descriptions of worlds with a certain number of people at varying levels of happiness.

It also doesn't just seem to be some abstract question that is completely irrelevant in the real world - practically everything we do that could affect the future demography of the world (which is practically everything we do) forces us to consider whether we prefer a smaller number of happy people or a larger number of slightly less-happy people. And while you obviously can't directly redistribute happiness, you can redistribute land and resources, which contribute to happiness.

I also take issue with "Since you can't actually do the things the thought experiment proposes in the real world, it has no applicability to the real world." Tons of thought experiments are completely impossible in the real world, but demonstrate a valuable principle or contradiction in reasoning.

Expand full comment

There seems to be a lot of misunderstanding about what the Repugnant Conclusion is.

The Repugnant Conclusion is this: if we assume that maximising total utility is good and that adding a person with low, but non-zero, lifetime utility increases the total utility of the population, then the ideal population is that which maximises size relative to the carrying capacity of the environment. These people will necessarily be poor, and likely miserable, but as long as they're not actively trying to kill themselves, the Greatest Good has been achieved.

Redistribution of happiness is not necessary at any point. It'll probably happen organically, in any case, because past a certain point adding more people around you tends to reduce quality of life.

We note, in passing, that "let's have more people in the world" is not an impossible, nor even particularly difficult proposition.

Expand full comment

That is also a good point - in each step of the logic of the repugnant conclusion, you're not literally "transforming world A into world B", you're saying "world B is not worse than world A" and using the transitive property to show that the final world is preferable to the original world.

Expand full comment

Exactly.

Expand full comment

It's always interesting how our moral intuitions differ. The first time I heard about the repugnant conclusion, I did not understand why people found it repugnant. It matched my intuition perfectly, even if step B was missing. I've read enough arguments from people that find it repugnant to understand their viewpoint, but both intuition and logical argument make me think it's not repugnant.

Expand full comment

Recently, in comments on EA, you said "Although I am not a perfect doctrinaire utilitarian, I'm pretty close and I feel like I have reached a point where I'm no longer interested in discussion about how even the most basic intuitions of utilitarianism are completely wrong"

and

"sorry if I'm addressing a straw man, but if you asked me "would you rather cure poverty for one million people, or for one million and one people", I am going to say the one million and one people, and I feel like this is true even as numbers get very very high. Although satisficing consequentialism is a useful hack for avoiding some infinity paradoxes, it doesn't really fit how I actually think about ethics"

Here, you say,

"Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice."

All I can say is, it's nice to have you on team Satisficing Consequentialism! At least, however much you're over here. I feel like I should thank William MacAskill.

Expand full comment

A point about the repugnant conclusion: the transition from step B to step C, where we take a heterogeneous population with an average happiness level of 90%, and convert it to a homogeneous population with an average happiness level of 95% - this step is the crux of my problem with it as a thought experiment. This step strikes me as intuitively impossible. Any such step will be governed by some kind of relevant analogy to the second law of thermodynamics - whatever actual process comprises the step from B to C cannot end with a higher average happiness than it started with, unless you add happiness to the system somehow.

But if you have a happiness making machine, then the repugnant conclusion is moot. It becomes merely a question of how best to distribute the happiness machine's output, which makes it just a garden variety utilitarian quandary.

Expand full comment

Also,

"cosigned by twenty-nine philosophers"

I was taught that this was an appeal you were supposed to reject in this very space. Of all the fucking places to make a slick appeal to authority, philosophy? Fuck. That.

Expand full comment

> If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity.

This reminds me of the distinction between parametric and non-parametric approaches towards statistical analysis.

Within parametric statistics you specify a function at the start, and then go on and do lots of interesting statistics conditional on your functional choices. The analysis can become incredibly complex, and people will derive bounds for their errors, uncertainty, and lots of other theoretical properties of their model -- but it all comes back to that initial choice to pick some mathematical function to describe their world.

And that's what I think these philosophers are doing -- although perhaps more implicitly. A lot of their word games are isomorphic to some linear utility with infinite discounting. Okay sure. But why is it a linear function? Why not piece-wise? Why doesn't it decay faster? If they set up some linear utility structure, then extrapolate towards situations we've never seen, they can get arguments for "it's better to have 20 billion slightly miserable people than 5 billion happy people."

The non-parametric approach towards statistics sacrifices the power of defining a function and examining how it works across all cases, often through extrapolation, and instead lets the data speak for itself. It's the more structurally empirical (or Humean) way of approaching this sort of thing. It's what you're doing when you say "just walk out." We've never seen what it looks like to actually, empirically, make a choice to discount infinitely and follow it through, nor have we compared it to the counter-factual (to the extent that this is even a realistic thing to measure). What we have seen is on earth, where you could sort of squint and try to compare Switzerland to certain impoverished regions in the global south, and can use that as a proxy for fewer happy people, vs. more unhappy people. Reasonable people can make that comparison and come to different conclusions, but I find the messy empirical proxy is a lot more informative than this made-up infinite extrapolation.

One thing we can also observe is in our actual reality, there aren't 1.2 billion people with happiness=0.01 in Africa, and 8.6 million people with happiness=100 in Switzerland. Both have pretty dramatic distributions. In these contrived examples, we ignore the fact that there will inevitably be uncertainty and distributions of outcomes. The implicit functions that are proposed don't meaningfully contend with the messiness (uncertainty) of our actual reality, when extrapolating to the far future.

In this sense, taking a non-parametric approach sacrifices the cleanliness and purity of thought that these philosophers enjoy, in favor of a probabilistic, empirically founded approach that is much worse at extrapolation or making crisp statements, but far better at integrating itself with the empirical reality we observe.
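
A toy illustration of the contrast, with entirely made-up data (nothing here comes from the book or the review): a parametric straight-line fit will confidently extrapolate far beyond anything observed, while a non-parametric rule that only interpolates between observed points simply declines to answer out there.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)                # observed range of some "policy" variable
y = np.log1p(x) + rng.normal(0, 0.1, 50)  # the true relationship is not linear

# Parametric: commit to a linear functional form up front, then extrapolate with it.
slope, intercept = np.polyfit(x, y, 1)
print("linear fit at x=1000:", slope * 1000 + intercept)  # a large, confident number

# Non-parametric: predict from nearby observations only; outside the data, punt.
def knn_predict(x_new, k=5):
    if x_new < x.min() or x_new > x.max():
        return None  # refuse to extrapolate beyond what was observed
    nearest = np.argsort(np.abs(x - x_new))[:k]
    return y[nearest].mean()

print("k-NN at x=5:", knn_predict(5))        # sensible, data-driven estimate
print("k-NN at x=1000:", knn_predict(1000))  # None -- no basis for a claim
```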

Expand full comment

Yes. Related: Concerning questions about the far future, there is uncertainty all the way down. Including which, if any, methodological approach we should use if or when we pose such questions.

Expand full comment

One important thing I never see mentioned in discussions of utilitarianism and its assorted dilemmas: time. The utility function evaluated at a specific point in time, which is what most people seem to be talking about, is actually irrelevant. What we actually should care about is its integral over some period of time we care about.

If I'm egoistical, I'll just care for my total personal well-being, over my whole lifespan. Let's say that without life extension, this is just a finite timespan and rather easy to reason about.

But if I'm altruistic and value humanity, I'll care for the total well-being of all people over... all of future time. Which is an infinite timespan! And so it does not actually matter if there are 5 or 50 billion happy people right now, because the integral of the utility function will be infinite anyway. Except for:

1) If humans become extinct, our integral becomes finite. This leads to the conclusion that x-risk is an extremely important problem to tackle.

2) If most people's lives are net negative, the integral will be negative infinity. Which is, literally, hell. So, we should take actions to avoid hellish scenarios, which is basically ensuring people have a noticeably positive quality of life on average.

3) The heat death of the universe will come at some point, and so these infinities are not actually infinite. Which suggests we should maximise our entropy usage by being energy-efficient, and that probably means building Dyson spheres for all stars quickly, etc. But! This is much less of a priority than 1 and 2; we should not rush this at all.

I don't see any dilemmas with this approach. And it seems to match common sense perfectly. Can anyone find flaws in reasoning here?
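
A rough formalization of the proposal (my notation, not anything from the book): the quantity being maximized is

$$
W = \int_{0}^{T} \sum_{i \,\in\, \mathrm{alive}(t)} u_i(t)\, \mathrm{d}t,
$$

where $T$ is the time of extinction or of heat death, whichever comes first. The three points then fall out directly: extinction makes $T$ small, a persistently negative average drives $W$ toward $-\infty$ as $T$ grows, and thermodynamics caps $T$ either way.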

Expand full comment

I think the Repugnant Conclusion is wrong; I'll explain why. If you think there is a flaw in the reasoning below please get in touch.

The moral question 'what is good/bad' requires a subject/subjects capable of experiencing goodness/badness. The existence of the subject must precede the question about what is good/bad for it.

So if you are asked 'which is better, a world with Population A or a World with Population B' you should ask 'for who?' The question doesn't make sense otherwise. In a world without experiencing beings, there is no meaning to the concepts of 'good/bad'.

Eg, a world with 1000 happy Population As is better - for Population A. A world with 1000 happy Population Bs is better - for Population B.

This makes the question of who does/doesn't exist morally important. If Population A exists and Population B doesn't, we should choose the Population A world.

We shouldn't care about beings that don't exist. Because - they don't exist. They are literally nothing. We should only care morally about things that exist. This means we are morally free to choose not to bring beings into the world if we so wish. Eg, don't have the baby/create the orgasmic AI, no problem.

Important note - I'm not saying that only things that exist now are important and that future beings have no moral importance. No.

If we care about existence (which we do), that means we also care about future existence. Future beings that actually will exist are of moral importance. For example, if you do decide to have a baby and you can choose between Baby A, happy baby, and Baby B, sad baby, you should choose Baby A, happy baby.

This is because, from the perspective of a budding creator, neither Baby A nor Baby B exists (yet). So we don't need to prioritise the interests of either A or B. But the creator knows a baby will exist. And happy babies are good. So they are under an obligation to choose Baby A. This is better, from the perspective of the baby who will exist.

I think this idea of thinking about 'future beings who will exist' in potentiae in order to prioritise between them is the problematic bit. But I think it is less problematic than the reasoning used in the Repugnant Conclusion which says 'we should care morally about all beings equally regardless of whether they exist or not.'

I believe my reasoning here is similar/adjacent to something called the 'Person Affecting View' in population ethics, though I'm not 100% sure.

I think one reason why this flaw in the Repugnant Conclusion often goes unnoticed is because we forget that ethical reasoning usually just takes for granted the existence of conscious subjects. So we get confused by an ethical argument that doesn't take that for granted but neglects to mention it.

If this logic is right EA should take note, because currently most EAs seem to accept the Repugnant Conclusion and that's bad PR, because most people really don't like it (as is evident from most of the comments here). And also it's wrong, maybe

Expand full comment

Thank you! I think this captures what I've been struggling to put together. My go to thought experiment was "If the earth suddenly disappeared, would that be good or bad?" and the (obvious to me) answer is "neither, because there is no one around to care about it."

Expand full comment

Yes, this. A lot of this discourse seems to rely on an omniscient observer to make the call on whether a planet of happy or miserable lifeforms is desirable or not. Or - ludicrously - a universe that cares.

Expand full comment

The suit, the muddy pond, and the drowning kid: you save the kid you see in front of you because you know you can. "Saving" an unknown kid on another continent is a much iffier thing. You sell your suit and give the money to some people who *assure* you it'll save a kid. Mmhmm. Only people stupid enough to call themselves effective altruists would give that person money.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Reading “What We Owe The Future” long-termist book, the population ethics section.

There is a series of premises that are assumed.

- That preferences are or should be transitive.

- That a person’s life’s wellbeing is something you could measure on a unidimensional scale.

- That if you wanted to measure the value of a society, the most sensible way to do it would be to look at each individual’s wellbeing measurement and add them up or average them.

To the transitivity point: clearly, in real life even simple preferences might not be transitive. This is because people can have at least two qualities/considerations/dimensions on which they rate decisions, and apply different qualities when deciding between A and B vs. between B and C. Thus, you can't conclude from A>B and B>C that A>C. Here is an example:

Decision | Quality 1 | Quality 2
A        | 50        | 100
B        | N/A       | 50
C        | 100       | 25

If your decision rule is:

1. Choose based on quality 1, if it applies to both options

2. Then, choose based on quality 2.

Then, your preferences would be intransitive and perfectly logical at the same time (that is, you would prefer A>B, B>C and C>A).
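(A minimal sketch of that rule, using the names and numbers from the table above:)

```python
# The two-step rule from above: compare on quality 1 when both options have it,
# otherwise fall back to quality 2.
options = {
    "A": {"q1": 50,   "q2": 100},
    "B": {"q1": None, "q2": 50},
    "C": {"q1": 100,  "q2": 25},
}

def prefer(x, y):
    a, b = options[x], options[y]
    if a["q1"] is not None and b["q1"] is not None:   # rule 1: quality 1, if applicable
        return x if a["q1"] > b["q1"] else y
    return x if a["q2"] > b["q2"] else y              # rule 2: otherwise, quality 2

for pair in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(pair, "->", prefer(*pair))
# Prints A, then B, then C: each option beats the next, so A > B > C > A is a
# genuine cycle produced by a perfectly explicit rule.
```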

I think things in real life are certainly more complex than this simple example, in that I think people do care about 2+ qualities at the very least when it comes to making most (substantive) decisions.

I think the fact that people have to do one thing in a given moment provides the illusion that we must be computing an overall score of the Benefit or Utility of our actions, and then picking the one with the highest utility, but other alternatives are equally or more plausible:

Alternative 1: We have complex decision rules which take into account different properties in different contexts.

Alternative 2: We have a kind of probabilistic wave function-like set of preferences, and in the moment we have to make a decision, we have to collapse the wave function so to speak, by picking one thing. Then, after the fact, if someone asks you why you did what you did, you come up with a post hoc rationalization about why that was the overall best decision (if you do feel that that was the case).

Either way, I think people’s preferences are much more complex than just measuring the Value or Utility or Benefit of things and picking the best one (even the acknowledgement that this measurement of Utility is error-prone and uncertain is not enough. I think the reality has more complexity than this).

__

What about this measurement of wellbeing, at the individual level?

They talk about these formulations:

1. Preference Satisfaction

2. Hedonism

3. Objective List

I think my problem with all of these is that they are at the individual level. When I think about “rating” a person’s life, and what could possibly be relevant, I would think about things like:

- the narrative trajectory, cohesion, etc. of their life. “Is it a good story?”

- the network of relations that they have with people closer and more distant to them, and with their human built environment and natural environment, etc.

- their character, both from a moral standpoint and just a value-neutral personality standpoint.

among other things

If you choose to measure something you will get a measurement of it. So, of course you can ask someone a single question, “Please rate the quality of your life so far (0-10):” etc. But just because you can ask the question and get an answer doesn’t mean that this concept reflects what we think is important about the topic.

To use psychometric terminology, just because we can ask the question doesn’t mean it’s valid or reliable. Perhaps not valid: at face validity level, concurrent validity, predictive validity etc. Perhaps not reliable: if you were to have an external rater, or the same person rate multiple times, etc., how reliable would it be?

(So what do I think would make more sense than this? To ask people more questions and rate more qualities, e.g. the quality of their work environment, their home environment, relationships with their families, relationships with friends, hobbies, beliefs about society, beliefs about the future etc.)

__

What about the concept of rating societies against one another? And using the sum or average of wellbeing of the individuals making it up to evaluate them?

Of course, I think a single number would be way too simple to evaluate the quality of an individual life (as above). But even if you could do this, I don’t think taking the sum or average would be a good measure of the society. Why is that?

I think when you are measuring the quality of a society, it would make sense to look at characteristics that are at the same level of analysis as a society.

By analogy, if you wanted to evaluate the quality of a chair, you wouldn’t first measure the quality of the atoms that make up the chair and then average the quality of the atoms to get the quality of the chair. A chair has many emergent properties, and those emergent properties are what make it a chair, and also what make it a “good” chair. Qualities such as: being arranged in a way that a human can sit in it; the comfort of sitting in it; the typicality of its appearance; the stylishness of its appearance; its durability and ability to be reused; the quality of the materials; the environmental impact of its construction; the treatment of the labour used in making the chair; etc. None of these qualities, or any of the other qualities you might care about, are contained in the individual atoms that make up the chair, even though the chair is nothing except atoms.

In the same way, a society has many emergent properties (obviously, much more complicated ones than those of a chair). Those emergent properties are what make it a society rather than a Matrix-like collection of atomic individuals who have no effect on one another. The emergent qualities of societies are exactly what we would care about if we were trying to measure the qualities of a society: the qualities that come out of the complexities of the actual realities of people’s lives in interaction with one another, with their human and natural environments, etc.

__

So when I put that all together, what do I think about their population ethics arguments:

- I don’t think measuring wellbeing on an individual level on a unidimensional scale makes sense (both because of the “individual level” and “unidimensional” parts).

- Even if I conceded that you could do something like that (measuring individual wellbeing on a unidimensional scale), I don’t think evaluating society on the basis of the sum or average of this measurement would make sense (because of emergent properties being exactly the ones we care about).

- Even if I conceded that you could make some sort of Society Scores reflecting the overall qualities of a Society on the multiple dimensions that matter (which I suppose could be possible, but you wouldn’t do it by averaging or summing individual wellbeing), I don’t think preferences in general (even very small preferences, but especially large preferences like those of an entire society) are necessarily transitive. That is, I think a person can be perfectly logical in preferring A to B, B to C and C to A, as long as they have a slightly more complex decision rule than “rate based on a single quality then pick the one with the most of that quality”. I think it is quite realistic that people would have more complex decision rules when it comes to their preferences, when I think about how complex people are.

Expand full comment

"When slavery was abolished, the shlef price of sugar increased by about 50 percent, costing the British public £21 million over seven years - about 5% of British expenditure at the time" This sounds off to me: for a 50 %increase in the price of a product to cause 5% total expenditure, then the initial expenditure on that good would have to be around 10% of total expenditure. And did amyone really spend 10% of their total consumption in sugar? that is one hell of a sweet tooth....

Expand full comment

£21 million was 5% of GDP, but that was the cost over 7 years, so the annual increase was less than 1%. Let's say a base of 2% of national income spent on sugar each year.

Yes, the British ate a lot of sugar. The article below claims 18 lbs/year in 1800 and 90 lbs/year in 1900. That's 450 calories per day in 1900. Assuming an exponential trend, it would be about 260 calories per day at the time of emancipation. A poor person spending 2% of income to get 5% of calories from the cheapest source makes sense. How the numbers work out at the national level confuses me (260 calories per day is only 5% of intake if people average over 5,000 calories a day). On the one hand, there were a lot of children who ate far less than 5,000 calories. On the other hand, how much of the income was controlled by the wealthy?

https://www.theguardian.com/uk/2007/oct/13/lifeandhealth.britishidentity
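Restating that arithmetic (taking the figures quoted in this thread at face value, and assuming the "£21m is 5%" claim refers to annual expenditure):

```python
# Back-of-the-envelope check of the figures quoted above.
total_extra_cost = 21e6      # GBP, spread over seven years
years = 7
price_increase = 0.50        # sugar prices up ~50% after abolition

# If GBP 21m really was ~5% of (annual) national expenditure:
annual_expenditure = total_extra_cost / 0.05          # ~GBP 420m
annual_extra = total_extra_cost / years               # ~GBP 3m per year
base_sugar_spend = annual_extra / price_increase      # ~GBP 6m per year before the rise

print(f"annual extra cost: {annual_extra / annual_expenditure:.1%} of expenditure")   # ~0.7%
print(f"implied base sugar spend: {base_sugar_spend / annual_expenditure:.1%}")       # ~1.4%
```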

Expand full comment

>(in case you think this is irrelevant to the real world, I sometimes think about this during debates about immigration. Economists make a strong argument that if you let more people into the country, it will make them better off at no cost to you. But once the people are in the country, you have to change the national culture away from your culture/preferences towards their culture/preferences, or else you are an evil racist.)

We have a real-world, large scale example of this in post-apartheid South Africa. After its transition from white minority rule to democracy, the country arguably got worse for that white minority as resources were redistributed. But it certainly got a lot better for everyone else. And yeah, you were probably an evil racist if you opposed that.

In The Four Loves, C.S. Lewis writes on patriotism,

"With this love for the place there goes a love for the way of life; for beer and tea and open fires, trains with compartments in them and an unarmed police force and all the rest of it; for the local dialect and (a shade less) for our native language. As Chesterton says, a man's reasons for not wanting his country to be ruled by foreigners are very like his reasons for not wanting his house to be burned down; because he "could not even begin" to enumerate all the things he would miss."

So I've never understood nationalism, but the above quote is perhaps the best steelman of it that I've seen. It does make me wonder if my experience growing up in vividly multicultural post-Apartheid South Africa might be part of the reason for that; I've never had a monocultural community with which to identify.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

My take on the repugnant conclusion (also certain trolley problem variants) is that for purposes of utility calculations, human lives are not commensurable.

So world B is indeed strictly better than world A, because we're comparing the new, happy humans to their own nonexistence. However, world B and world C cannot (necessarily) be meaningfully compared.

In mathematical terms, there isn't a well-defined order relation on the set of these possible worlds; at most there exists a partial order relation. So you can have some worlds that are obviously better than some other worlds (B > A), but with no guarantee that a given two worlds are commensurable (B ? C).
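One toy way to encode that (my own construction, just to show a partial order in action): treat a world as a list of wellbeing levels, and say one world is at least as good as another only when every life in the second can be matched to a no-worse life in the first, with any leftover lives worth living.

```python
# A partial order on worlds: some pairs get a verdict, others are simply incomparable.
from typing import Optional

def at_least_as_good(x: list, y: list) -> bool:
    """Every life in y must match a no-worse life in x; extra lives in x must be positive."""
    pool = sorted(x, reverse=True)
    for level in sorted(y, reverse=True):
        idx = next((i for i, v in enumerate(pool) if v >= level), None)
        if idx is None:
            return False
        pool.pop(idx)
    return all(v > 0 for v in pool)

def compare(x, y) -> Optional[str]:
    xy, yx = at_least_as_good(x, y), at_least_as_good(y, x)
    if xy and not yx: return "first is better"
    if yx and not xy: return "second is better"
    if xy and yx:     return "equally good"
    return None                          # incomparable: the relation stays silent

A = [100.0] * 5                          # few, very happy
B = [100.0] * 5 + [60.0] * 5             # the same lives plus extra decent ones
C = [0.01] * 1000                        # many, barely-worth-living lives

print(compare(B, A))   # "first is better": B really does dominate A
print(compare(C, B))   # None: C and B aren't comparable, so no repugnant ranking is forced
```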

Expand full comment

Closely-related spoilers for the ending of UNSONG:

God is a utilitarian who accepts the Repugnant Conclusion, so he creates all possible universes where (the sum of all good in the universe) > (the sum of all evil in the universe). Unfortunately for most people in the setting of UNSONG, they live in one of those universes where good and evil are just about balanced. Also, in UNSONG, multiverse theory and the Repugnant Conclusion of utilitarianism are the solution to the Problem of Evil.

Expand full comment

It's not quite that God accepts the Repugnant Conclusion, as he's also instantiating Universes A and B.

The argument relies on you agreeing that it is better to create universe C than to not create it. The repugnancy arises when this seems to lead to the conclusion that universe C should be instantiated *rather than* universe A.

Expand full comment

Yes, God is instantiating universes A and B, but by instantiating universe C as well, God is accepting the Repugnant Conclusion. After all, the repugnant part comes from the fact that it sucks for the trillions of people living in universe C at 0.01 utils (where 100 is the best, -100 is the worst, and 0 is neutral between wanting to live or die). Likewise, it sucks for all of those who spend several centuries in the UNSONG universe's hell, even though that particular universe is net good and God decided to instantiate it.

The repugnancy does not arise from questioning whether to instantiate universe A or C; it comes from deciding that allowing universe C at all is worth it because it is (very slightly) net good.

Expand full comment

Technically, if all you wanted was sum of all good > sum of all evil, plus the highest total amount of good possible, wouldn't you produce every single universe except the one that's purely evil? You would have many that are vastly more evil than good, not just stopping at the balanced one.

Expand full comment

I think that God in UNSONG wanted to have good exceed evil within each universe rather than across the multiverse as a whole. I think your way would be another (eviler) way to accept the Repugnant Conclusion (since universes where evil outstrips good are now also created).

It is a bit surprising that Scott wrote the Repugnant Conclusion as the answer to the Problem of Evil in UNSONG (and here: https://slatestarcodex.com/2015/03/15/answer-to-job/), given that he rejects the Repugnant Conclusion.

Expand full comment

Possible solutions to the Repugnant Conclusion:

If we are maximizing utility, then going from World A to World C is not as good as going from World A to a version of World C where all people have as much individual utility as those in World A. This helps, but it still ranks the regular World C above World A.

To make sure World C does not outrank World A, we would have to say that utility isn't additive. Perhaps true utility is the utility of the person with the lowest utility (ie, like in The Ones Who Walk Away From Omelas, or real-world concerns about income inequality). Or, perhaps utility is additive in small cases (ie, whether to add another child), but not in large cases (in the same way that, under special relativity, velocity is approximately additive at low speeds, but not at all additive when close to the speed of light).
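To sketch the relativity analogy (my own toy rule, not anything standard): combine utilities with a velocity-addition-style formula that behaves like ordinary addition for small values but can never exceed a cap, so piling on marginal lives eventually stops paying off.

```python
# "Relativistic" utility addition: (u + v) / (1 + u*v / CAP**2) is about u + v when
# both are small, but the combined value can never exceed CAP. Purely illustrative.
CAP = 100.0

def combine(u: float, v: float) -> float:
    return (u + v) / (1 + (u * v) / CAP**2)

def aggregate(population):
    total = 0.0
    for u in population:
        total = combine(total, u)
    return total

world_a = [95.0] * 5            # a few excellent lives
world_c = [0.01] * 1_000_000    # a vast number of barely-positive lives

print(round(aggregate(world_a), 3))   # ~100: a handful of great lives is already near the cap
print(round(aggregate(world_c), 3))   # ~100 too: a million marginal lives can approach the cap
                                      # but never exceed it, so World C can at best draw level
                                      # with World A under this rule, never dominate it
```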

Expand full comment

I always wonder if someone has thought that perhaps the "time value of money" concept (https://en.wikipedia.org/wiki/Time_value_of_money) could also apply to future lives? At some point, an almost infinite number of lives (but not quite infinite) could still be worth near zero if it's almost infinitely far into the future.
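Discounting future lives the way finance discounts future cash flows would look something like this (my numbers, purely illustrative):

```python
# Present value of a stream of future lives under a constant discount rate.
# With any positive rate, even an astronomically long future adds up to a finite
# (and heavily front-loaded) total.
def discounted_lives(lives_per_year: float, rate: float, horizon_years: int) -> float:
    total, factor = 0.0, 1.0
    for _ in range(horizon_years):
        factor /= (1 + rate)          # factor quietly underflows to 0 for the very far future
        total += lives_per_year * factor
    return total

print(f"{discounted_lives(10e9, rate=0.03, horizon_years=1_000_000):.3g}")
# ~3.33e11: ten billion lives a year for a million years is "worth" a few hundred
# billion present-equivalent lives, and the first century supplies roughly 95% of that.
```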

I must not be "sophisticated" enough to deeply care about the EA bandwagon, but if I were to plant a tree, I wouldn't worry (like, not worry AT ALL!) about whether that tree will fall on someone 50 years from now when the wood starts to rot.

Expand full comment

Didn't expect this post to be mentioned by the Economist. Does this kind of thing happen often here?

"One crucial moment it charts is the shift in the movement’s centre of gravity from Mr Yudkowsky to Scott Alexander, who currently blogs under the name Astral Codex Ten."

"Also try:

For a whimsical but critical review of Mr MacAskill’s “What We Owe the Future”, see a recent blog post by Mr Alexander at Astral Codex Ten."

https://www.economist.com/the-economist-reads/2022/08/24/what-to-read-to-understand-effective-altruism

Expand full comment

I have two really big difficulties with the population / RC analysis here. 1) Why are we assuming that more people make life worse for everyone? Maybe in some contrived scenario, but not in the world we live in. In our world, as population has increased rapidly over the last 200 years, life has gotten rapidly better for just about everyone. Why do we not assume that more children means more people who are also happier? In that case the moral logic is easy - have more children. My wife and I practice what we preach, as we have 8 children. We are more fulfilled for it, and the total utility of the world is increased.

2) The idea that very poor people would be better off never having existed is appalling and doesn’t match my experience. Do you really believe this? I have spent years among very poor people in slums in a poorer country, and years managing allocation of charitable contributions to needy people in my community in the US. While finances may be tight, and assistance can help, these are humans very much as capable of both joy and despair as anyone living in luxury. I don’t know how you can suggest that they or the world as a whole would somehow be better off without their existence.

Expand full comment

Donating money to a longtermist X-risk Charity sounds like the best way to make sure that the donation never actually helps any actual person.

Expand full comment

ISTM that the main difficulty is in figuring out what would help people far in the future. I mean, imagine a set of very smart and rich Romans in 22 AD talking about how to help the people of 2022 AD have better lives. They could have done things we'd appreciate now (make highly survivable caches of their best writings and artworks and gadgets that we can discover with 2022 technology), but they would not have had any idea how to do anything useful for us.

Far away in the future or far away in distance/culture/environment or far away in terms of a long causal chain to your desired effect all means less chance of your help being helpful.

Expand full comment

Does Arrow's Impossibility Theorem come up in the context of the Repugnant Conclusion?

The Repugnant Conclusion argument sneaks in a dictator (the philosopher) that insists on completeness (we can always compare two worlds) and transitivity (if world C is better than B, which is better than A, then C is better than A) while disregarding the preferences of even a vast majority of the population.

Arrow showed that it's not possible to have completeness and transitivity in social choice without sacrificing a set of reasonable criteria including non-dictatorship and Pareto efficiency (if World A is better than B for everybody, then A is better than B).

I'm not familiar with population ethics – is there an axiomatic approach?

Expand full comment

The part about the billions of people at 0.1 looks to me like a philosophical sleight-of-hand swindle. The issue is the aggregation function — the function that takes a lot of individual happinesses and computes a single value for comparison — and the assumption that happiness and suffering are linear and symmetrical.

But this assumption immediately leads to the conclusion that one person suffering hell is well worth a little more happiness for a lot of people, were we to find an evil god offering the deal. Which we usually do not consider valid.

It is hard to find a satisfactory aggregation function, but my guess is that for most people, it is closer to an infimum than a total or an average. And with that corrected aggregation function, the whole reasoning of part IV collapses.
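A quick sketch of how much the choice of aggregation function matters (toy worlds and numbers of my own):

```python
# Rank three toy worlds under different aggregation functions.
worlds = {
    "A: few, very happy":              [100.0] * 5,
    "C: many, barely positive":        [0.01] * 100_000,
    "D: happy majority, one in hell":  [100.0] * 99 + [-100.0],
}

aggregators = {
    "total":   sum,
    "average": lambda xs: sum(xs) / len(xs),
    "infimum": min,                      # the worst-off person's level
}

for name, agg in aggregators.items():
    ranking = sorted(worlds, key=lambda w: agg(worlds[w]), reverse=True)
    print(f"{name:8s} best-to-worst: {ranking}")
# "total" puts both C and the one-person-in-hell world D above A; "infimum" puts A
# back on top and D at the very bottom, which is the intuition argued for above.
```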

Expand full comment

I have this provocative take: the Nazis may have saved us from the worst of climate change. How? Without World War II, no Manhattan Project; nuclear power would have been developed later, and coal and oil would have taken a greater share.

Does it mean a rational person would have helped Hitler get into power? No! Because it is just a hypothetical. It could have gone the opposite way: maybe without the Manhattan project nuclear power would be less associated with bombs in our collective subconscious and we would have fewer naturolaters demanding we ban it.

My point is that counterfactual history is useless for ethical reasoning, fun as it is in fiction. History is chaotic. Little changes in facts can and will lead to completely different events. Sometimes, just a few centimeters can make a difference, but nobody can say if it would have led to Mars or to “Kennedy Camps”. Also, gametogenesis and fecundation are extremely sensitive to initial conditions.

In short any action today can lead to a better future soon but then to a much worse future later, or the opposite.

This is, I think, the flaw of longtermism and this ethical discussion: when weighing our actions now, their consequences have to be weighed by our ability to predict them, and that converges to 0 very very fast.

Expand full comment

> so your child doesn’t develop a medical, most people would correct the deficiency

Typo

Expand full comment

I may be being incredibly naive here, but isn't it likely that after a (near-)apocalypse that happens in the future, the survivors would be able to make use of physical and/or intellectual remains to bypass coal-powered technology entirely and rebuild using only renewable energy technology? The hard part is discovering and designing alternative ways of harnessing energy, and once that's done there's no need to relive the past in some kind of steampunk era, is there?

Expand full comment

It probably depends how far they got knocked down. But coal has the nice property that it can be dug up by people without complicated industrial tools, and has immediate uses as fuel for heating and cooking and metalworking.

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

Counterfactual mugging, and this whole circus, depend in part on assuming metrics for things that are immeasurable, such as your degree of happiness; things which, even if we grant the thought experiment that a measure exists for an individual, are incommensurable, such as your degree of happiness versus my degree of happiness. Worse again, they depend on applying simple real-number arithmetic to such measurements: not only metrics, but the most bone-headed ones. And they depend in part on removing all context from every situation: manufacturing universals.

Drowning kittens is better (for you, for animals in your neighborhood, etc.) than petting them, _if they are rabid._ Etc. Universals universally do not exist. "Better or worse for whom? When? Materially?"

The fact that in his concrete examples (as reported), MacAskill elides important details and qualifications, or outright gets details wrong (the Industrial Revolution happened *before* steam became relevant, for instance: see Davis Kedrosky), gives me a prior that he is not worth listening to on the philosophy.

Sloppy is sloppy. Universally.

Expand full comment
founding

When Longtermism comes up, very very often the criticism I hear is that the future people you are supposedly trying to help might not exist at all, and I think this point gets conflated with the idea of intrinsic moral discounting of the future. (I mean that it gets conflated by the public, not by Will and other EAs who understand the distinction already.)

Like I will say, "I think my great grandchildren have as much moral worth as I do, even though they don't currently exist", and the critic will respond "but they might not exist at all, even in the future, you need to discount their moral worth by the probability that they will". But what I meant in the first place is "presuming they exist, I think my great grandchildren...", i.e. there is no *additional* discounting coming from the fact that they live in another century and not Now.

Maybe this distinction just seems too simple to be worth marking for people already "in the know", but I think it leads to a lot of confusion about the basic premises of Longtermism when EAs communicate to the public.

Expand full comment

The book was reviewed in The Wall Street Journal today (27 Aug. 2022)

"‘What We Owe the Future’ Review: A Technocrat’s Tomorrow The gospel of ‘Effective Altruism’ is shaping decisions in Silicon Valley and beyond" By Barton Swaim on Aug. 26, 2022.

https://www.wsj.com/articles/what-we-owe-the-future-review-a-technocrats-tomorrow-11661544593

The reviewer did not like the book. Quotations from the review, which like all WSJ.com content is paywalled:

* * *

Skeptical readers, of whom I confess I am one, will find it mildly amusing that a 35-year-old lifelong campus-dweller believes he possesses sufficient knowledge and wisdom to pronounce on the continuance and advancement of Homo sapiens into the next million years. But Mr. MacAskill’s aim, he writes, “is to stimulate further work in this area, not to be definitive in any conclusions about what we should do.” I’m not sure what sort of work his arguments will stimulate, but I can say with a high degree of confidence that “What We Owe the Future” is a preposterous book.

* * *

But it’s Mr. MacAskill’s arguments themselves that dumbfound. Most prominent among their flaws is a consistent failure to anticipate obvious objections to his claims. One of the great threats to civilizational flourishing, in his view, is, of course, climate change. He is able to dismiss all objections to zero-emissions policies by ignoring questions of costs. Questions like this: Will vitiating the economies of Western nations in order to avoid consequences about which we can only speculate hinder our ability to find new ways to mitigate those consequences? And will the resultant economic decline also create social and economic pathologies we haven’t anticipated? Mr. MacAskill, who specializes in asking difficult, often unanswerable, questions about the future, shows little curiosity about the plain ones. One outcome he does foresee, meanwhile, is that China will abide by its pledge to reach zero carbon emissions by 2060—a judgment that, let’s say, doesn't enhance one’s confidence in Mr. MacAskill’s prophetic abilities.

* * *

Books like this very often mask the impracticality of their arguments by assigning agency to a disembodied “we.” Mr. MacAskill does this on nearly every page—and, come to think of it, on the title page: “What We Owe the Future.” “We” should increase technological progress by doing this. “We” can mitigate the risk of disease outbreak by doing that. Often “we” refers to the government, although it’s unclear if he means regulatory agencies or lawmaking bodies. At other times “we” seems to mean educated elites or humanity in general. This gives the book the feel of a late-night dorm-room bull session of an erudite sort. Fun for the participants, perhaps, but useless.

* * *

Mr. MacAskill warns that once the development of artificial intelligence achieves the state known as artificial general intelligence, or AGI—that is, a state in which machines can perform the tasks that humans assign them at least as well as any human—we will be able to “lock in” bad values. So instead of the kind of future that techno-utopians want, we’ll have 1984. Mr. MacAskill’s solution: Rather than try to lock in our own values, we should create a “morally exploratory world” in which there is a “long reflection: a stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life, working out what the most flourishing society would be.” That sounds familiar, does it not? In fact, it sounds a lot like the liberal order that developed in Europe from the 14th century until now; you know, the one that made a place for a certain young Oxford don to work out his ideas on the good life? That one! Yet when Mr. MacAskill casts around for examples of what a morally exploratory world might look like, he cites the special economic zone in Shenzhen, China, created in 1979 by Deng Xiaoping.

* * *

William MacAskill clearly possesses a powerful mind. In an earlier age he would have made a formidable theologian or an omnicompetent journalist. But the questions to which he has dedicated himself in this book are absurd ones. They admit of no realistic answers because they were formulated by people who know little and pretend to know everything. Rarely have I read a book by a reputedly important intellectual more replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere. Never mind what we owe the future; what does an author owe his readers? In this case, an apology.

Expand full comment

I feel like this version of the repugnant conclusion is overly sneaky because it makes use of a different paradox: the slippery slope problem, or the one where a pile of sand eventually turns into a heap of sand (the sorites paradox).

How do you feel about the following less graded version of the repugnant conclusion:

World A: 1bn people, level 100

World B: 1bn people, level 100 + 1bn people, level 1

World C: 2bn people, level 51

Would I choose World B over World A? Would I choose World C over World B? How do I develop an intuition for this?

Might I pretend that I must live the life of every individual in the system? I think I would rather live [2 lives at level 100 + 1 life at level 1] than [3 lives at level 70]. So, maybe I don't prefer world C to world B.

In fact, this sort of decision might even take place within a single individual's life span. Isn't that delayed gratification?

Expand full comment

There's an interesting point here which is "how much can you actually do to fix X?"

There seem to be three groups of AI Ethics/AI Alignment folks:

1. AI Alignment researchers who aren't part of the academic/OpenAI/etc. initiatives, and who largely just wax philosophical about how to fix an AI they do not actually understand. Mathematicians I respect deeply are constantly on about this and are like "look, we have implemented robust controls on AI trading systems before, so claiming that it's impossible doesn't seem to hold up. The economy still fundamentally exists, ergo I win". This strikes me as a pretty strong argument both that AI Safety is being thought about more carefully than people think when it comes to giving AI control of important things *and* that the people who talk a lot about AI Safety aren't actually that well versed in how AI systems actually work.

2. AI Ethics people who are concerned with whether the artificial superintelligence would be invited back to a San Francisco dinner party.

3. AI Researchers who know exactly how AI works, know the stakes, and are just working on increasing capabilities while feeling slightly bad about it.

4. Me, the vaguely educated layman in all of these fields. Why do I feel that way about these groups? It's simple - people like EY make glaring errors when they talk about AI, errors that a well-versed researcher would not have made. Meanwhile, the algorithmic justice league and their ilk are constantly busy writing systems on the basis of "reducing bias" (where "bias" means "any time the latent manifold contains patterns I don't like"). That second one would actually seem more admirable than my cynical reading, if it were not the equivalent of putting out a house fire on top of an in-flight ICBM. The third one comes from extensive interactions with the folks at Eleuther, Anthropic, etc.

In all of these cases, you would have to *change the fundamental social reward curves that make these groups behave as they do* or *select the best one and donate to it*.

However - I have a feeling that the capability of any given organization or movement is hard capped at time X, so at time X, the return on your donation dollar, activist volunteering, or Straussian memeplex-shelfing is going to be capped. Ideally a longtermist would consider when judging "needs of the present" versus "needs of the future" exactly *when* this effect appears. Even if you fully accept "sacrifice all present needs for future needs", there's more than one place where the approach is relevant so it's important to be able to make decisions about whether or not you're hardcapped.

Expand full comment

Not sure if someone has already pointed this out because I don't want to read all the comments, but I have read the book and you made a mistake in recounting the repugnant conclusion argument. When you move from World A to World B, you don't just add more people, you also make all previously existing people happier. So in order to reject the first premise, you can't just say that adding more happy people is neutral, you have to say that it is bad.

Expand full comment
