
While people are having Repugnant Conclusion takes, I might as well throw in a plug for my post from 2016 about "the asymmetry between deprivation and non-deprivation," which is an under-appreciated nuance of the issue IMO:


Aug 25, 2022·edited Aug 25, 2022

I'm honored to be highlighted and get a sarcastic response, although I think that, in the replies to my comment, I defended the connection between the weakness of the thought experiment and its consequences for a philosophy of long-termism.

Also, if water didn't damage suits, wouldn't your obligation to save the drowning child be more obvious rather than less? An extremely selfish person might respond to the Drowning Child thought experiment by saying, "Why no, the value of my suit is higher to me than the value of that child's life, that's my utility function, I'm cool here." Wouldn't the analogous situation be if children couldn't drown (or my mocking of MacAskill with a version of the Drowning Child where the child is Aquaman)?


You're right that anti-communist intuitions don't disprove the abstract argument.

But if you have a friend who's constantly coming up with hypotheticals about sleeping with your beautiful wife, maybe don't leave him alone with her.


I think I have to post my obligatory reminder that consequentialism does not imply utilitarianism, that E-utility functions are terribly ill-defined and it's not clear that they make sense in the first place, and also while I'm at it that decision-theoretic utility functions should be bounded. :P

...you know what, let me just link to my LW post on the subject: https://www.lesswrong.com/posts/DQ4pyHoAKpYutXwSr/underappreciated-points-about-utility-functions-of-both

Aug 25, 2022·edited Aug 25, 2022

I think there is a fourth way in which a theory can differ: by denying that the way the situation is modeled mathematically is correct.

I think this is the case here. My intuition is that if we insist that the value of a human life must be a single number, then at least we should allow that it may change over time - at least in response to the actions of the thought experimenter. For example, if I add a new person to a world already populated with people, then the existing people's lives are altered, too.

But more importantly, I think the model should clearly differentiate between "actions within a world" (like the one above) and "choosing between two worlds" (like: would you prefer a world that starts with that additional person).

For me, the intuitive judgment of a given world comes from imagining myself randomly inhabiting the mind of one of its members.

In this framework, I prefer the Hell with N x -100 + N x -90 people to the Hell with only N x -100, as it gives me a slightly better expected life.
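
Under that framework the comparison is just an average; a tiny sketch with the comment's numbers (N is arbitrary):

```python
# Average utility = a random soul's expected life quality.
# Utilities (-100, -90) are the comment's; N is arbitrary.
def expected_welfare(population):
    return sum(population) / len(population)

N = 1000
bigger_hell = [-100] * N + [-90] * N
smaller_hell = [-100] * N

assert expected_welfare(bigger_hell) == -95.0    # slightly better draw...
assert expected_welfare(smaller_hell) == -100.0  # ...than here
```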

At the same time, in the same framework, if I were already born in the second Hell, I wouldn't want to spawn an additional N x -90 people in it: it would mean, for each of them, the certainty of inhabiting a mind with negative value, which is bad for them; and having to live with the memory of voting for their suffering would push my own utility from -100 to an even more negative number.

Basically, my intuitions are modeling a "pool of souls waiting in a lobby to join a game some players are already playing".

In this moral framework, the Repugnant Conclusion, even if somehow derived, would only mean that if I were in a position to redesign the world from scratch I might do it differently. But since I am already in an existing world, and adding people to it is qualitatively different from switching to a different world, such a conclusion would not force me to breed like crazy. It's a type error.

But this framework never even reaches the RC, because it doesn't prefer the world with lower average life utility at the first step.

(The most important missing feature in various puzzles of the "choose A vs B" kind is specifying whether I'll have the memory of the choice, and of the previous state. I think part of the difficulty in designing an AI's utility function around the state of its off-switch is that we obsess over the end state (switch on vs switch off), forgetting about the path by which we got there - a path burned into the memory of the participants, at least in the atoms of the human supervisor's brain, and thus part of the state. Perhaps utility functions should be over paths, not endpoints.)


I don't understand people who follow the Carter Catastrophe argument all the way to the point of saying "gosh, I wonder what thing I currently consider very unlikely must turn out to be true in order to make conscious life be likely to be wiped out in the near future!" This feels to me like a giant flashing sign saying that your cached thoughts are based on incompatible reasoning and you should fix that before continuing.

If you've done proper Bayesian updates on all of your hypotheses based on all of your evidence, then your posterior should not include contradictions like "it is likely that conscious life will soon be wiped out, but the union of all individual ways this could happen is unlikely".

My personal take on the Carter Catastrophe argument is that it is evidence about the total number of humans who will ever live, but it is only a tiny piece of the available body of evidence, and therefore weighs little compared to everything else. We've got massive amounts of evidence about which possible world we're living in based on our empirical observations about when and how humans are born and what conditions are necessary for this to continue happening; far more bits of evidence than you get from just counting the number of people born before you.
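
For concreteness, a minimal sketch of the Bayesian bookkeeping the comment describes (hypotheses and priors are made up):

```python
# Doomsday-style update. Given T total humans ever, your birth rank n is
# uniform on 1..T, so P(n | T) = 1/T for T >= n. Numbers are illustrative.
def posterior(priors, n):
    """priors: {total_humans_ever: prior_prob} -> normalized posterior."""
    unnorm = {T: p / T for T, p in priors.items() if T >= n}
    z = sum(unnorm.values())
    return {T: v / z for T, v in unnorm.items()}

n = 100e9  # roughly 100 billion humans born so far
post = posterior({200e9: 0.5, 200e12: 0.5}, n)

# The birth-rank likelihood alone favors "doom sooner" 1000:1...
assert abs(post[200e9] / post[200e12] - 1000.0) < 1e-9
# ...which is only ~10 bits of evidence: a comparable likelihood ratio
# from empirical observations about extinction risk swamps it.
```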


Can't the repugnant conclusion also say that "a smaller population with very high positive welfare is better than a very large population with very low but positive welfare, for sufficient values of 'high'"?

I find it difficult to understand the repugnant conclusion because, to me, it barely says anything. If you are a utilitarian you want to maximize utility, period. Maybe one way is to create a civilization of infinitely many people barely wanting to live; maybe another is to create a god-like AI with infinite welfare.

Am I missing something here?

Aug 25, 2022·edited Aug 25, 2022

> Peirce is a tragic figure because he was smart enough to discover that logic disconfirmed his biases, but decided to just shrug it off instead of being genuinely open to change.

Is that actually what he was doing? Wikipedia says

> Peirce shared his father's views [i.e. he was a Union partisan] and liked to use the following syllogism to illustrate the unreliability of traditional forms of logic, if one doesn't keep the meaning of the words, phrases, and sentences consistent throughout an argument:

> All Men are equal in their political rights.

> Negroes are Men.

> Therefore, negroes are equal in political rights to whites.

And at the time, the last statement was factually incorrect! The syllogism fails because the definition of Men changes between steps 1 and 2.


The mediocrity principle should not be invoked by people above a certain level of success.

Scott, you are an outlier in the reference class of humans-alive-in-2022; why shouldn't you also be an outlier in the reference class of all-intelligent-life-to-ever-exist?


I wrote in the previous comments that the repugnant conclusion is highly non-robust. Drop 2 units of utility from a society at +1 utility per capita and you go negative, turning a trillion barely happy people into a trillion unhappy people, with total utility -1T.

Meanwhile, the billion perfectly happy people drop from a total utility of +100B to +98B.
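
The arithmetic here is easy to check directly; a small sketch using the comment's hypothetical populations and utilities:

```python
def total_utility(population, per_capita, shock=0):
    """Total utility after a uniform per-person utility shock."""
    return population * (per_capita - shock)

trillion, billion = 10**12, 10**9

assert total_utility(trillion, 1) == 10**12        # barely-happy world: +1T
assert total_utility(billion, 100) == 100 * 10**9  # very-happy world: +100B

# A uniform 2-point drop:
assert total_utility(trillion, 1, shock=2) == -(10**12)    # collapses to -1T
assert total_utility(billion, 100, shock=2) == 98 * 10**9  # still +98B
```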

I know that this is not my thought experiment to play around with, and theorists might reply: stick to the initial conditions. But I think this is how we should think when we are talking about a future society. It's like designing many more bridges to take ten times as much traffic, where increasing that traffic by 1% collapses every bridge - that isn't sane design.

So if we, or a benign AI, could design an actual society for robustness, and we decided the per capita utility should be able to drop by up to 50 points, then the repugnant conclusion disappears.


The "future people should be exponentially discounted, past people should be given extremely high moral weight" thing just seemed obvious to me; I'm surprised it's controversial among utilitarians. Fortunately there is no actual way to influence the past.


I want to add that some philosophers think they may have beaten the Impossibility Theorems that Magic9Mushroom mentioned. The idea is called "Critical Range Utilitarianism"; it involves some pretty complex modifications to how value itself functions, but seems like it could be workable. It doesn't seem that dissimilar from some ideas Scott has tossed out, actually. Richard Chappell has a good intro to it here: https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories

Even if Critical Range Utilitarianism doesn't pan out, I am not sure the Sadistic Conclusion is as counterintuitive as it is made out to be if you think about it in terms of practical ethics instead of hypothetical possible worlds. From the perspective of these types of utilitarianism, inflicting a mild amount of pain on someone or preventing a moderate amount of happiness is basically the same as creating a new person whose life is just barely not worth living. So another formulation of the Sadistic Conclusion is that it may be better to harm existing people a little rather than bring lives into existence that are worth living, but below some bound or average.

This seems counterintuitive at first, but then if you think about it a little more it's just common sense ethics. People harm themselves in order to avoid having extra kids all the time. They get vasectomies and other operations that can be uncomfortable. They wear condoms and have IUDs inserted. They do this even if they admit that, if they were to think about it, the child they conceive would likely have a life that is worth living by some metrics. So it seems like people act like the Sadistic Conclusion is true.

The Sadistic Conclusion is misnamed anyway. It's not saying that it's good to add miserable lives, just that it might be less bad than the alternatives. You should still try to avoid adding them if possible. (Stuart Armstrong has a nice discussion of this here: https://www.lesswrong.com/posts/ZGSd5K5Jzn6wrKckC/embracing-the-sadistic-conclusion )

Aug 25, 2022·edited Aug 25, 2022

First, the zero of the individual welfare function has almost nothing to do with what people think about the quality of their lives. The numbers are just chosen to match your intuitions. Zero doesn't even mean you must disregard people's will - it only means whatever the aggregate welfare function implies.

And so the second point - you can't choose between the repugnant conclusion and the sadistic conclusion, only between the sadistic conclusion alone and both of them together. The sadistic conclusion is present even when "zero means almost suicidal" - you would prefer some hero to suffer horribly to prevent a billion tortures, or something like that. And that means the sadistic variant is the one that is actually perfectly OK: if the repugnant world makes you sad, you should accept the price of averting it. Even if you don't think that adding people to that world makes it worse, you can just have a threshold range of personal welfare you don't care about - then the sad world of any size would have zero value.

It's not that population ethics is a field where the math is actively out to get you - it all just works out in slightly more complicated models.


There are four categories of answer that always come up in this kind of discussion; these have (sensibly) not been addressed in the highlights, but they're so recurrent that I'm going to highlight them here. In the spirit of the most recent post, I will do this in Q&A format (also in that spirit, some of the "questions" are really comments), and have put them in rough order of sophistication.

Comment #1

Q: Why should I care about other people?

A: Why should you care about yourself?

Comment #2

Q: No one actually cares about others/strangers: people pretend to care about starving orphans, but really only care about their own status/warm feelings.

A: Even if this were true, it would be basically fine. A world in which people get pleasure from making other people's lives better is a much better world than one in which people get pleasure from kicking puppies.

Comment #3 (Answers taken from SA and Godoth in the Tower post, which are much better than my initial response)

Q: All charity is counterproductive/bad/colonialist (implied: there are no exceptions here)

A1: Even organ donation?

A2: If you actually think charity is bad, then you should be working towards ways to end charity (e.g. campaigning against tax-deductibility for charitable donations)

Comment #4

Q: Charity may help in the short run, but ultimately impedes the development of the market mechanisms that have made the biggest difference in improving human welfare over the long run (for a recent example, see China)

A1: Even deworming tablets, which are among the most effective educational interventions, and which then tend to cascade into other things getting better?

A2: I've got some Irish historians I'd like you to meet after they've had a few pints ***

*** During the Irish famine, food continued to be exported from Ireland by the (mainly but not exclusively English) landlords on variations of essentially this argument.


World economic growth could be hyperbolic and on track to hit infinity in a singularity event, but this graph does not show it. The human eye cannot distinguish exponential growth from pre-singularity hyperbolic growth on charts like this. With noise, the two trajectories would be pretty hard to distinguish with any tools.

To really see super-exponential growth we need to look at log-growth rates. If we do that, we can see a new regime of higher (measured) growth post 1950, but it is not obvious at all that growth is accelerating (it actually slowed down by about 1 percentage point from 4.5% to 3.5%) and also a lot of demographic headwinds are now gone. I am not sure I can easily insert a chart here, so you will have to trust me that the data you are using has the growth rate in a moderate decline on a 20-50 year horizon even before coronavirus and not in a rapid acceleration as you would see with a hyperbolic trajectory.
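
A sketch of the log-growth-rate diagnostic, with purely illustrative series (a 3%/year exponential vs. a hyperbolic path with a hypothetical 2050 singularity date):

```python
import math

def log_growth_rates(series):
    """Per-period log growth rates of a positive series."""
    return [math.log(b / a) for a, b in zip(series, series[1:])]

t_sing = 2050                          # hypothetical singularity date
years = list(range(1990, 2040, 10))    # decade steps
hyperbolic = [1.0 / (t_sing - t) for t in years]            # y = C / (t* - t)
exponential = [math.exp(0.03 * (t - 1990)) for t in years]  # 3% per year

r_h = log_growth_rates(hyperbolic)   # ~0.18, 0.22, 0.29, 0.41 - rising
r_e = log_growth_rates(exponential)  # 0.30, 0.30, 0.30, 0.30 - constant

# Levels of both series just look like "a curve going up"; only the
# growth rates separate the two regimes.
assert all(b > a for a, b in zip(r_h, r_h[1:]))
assert all(abs(r - 0.30) < 1e-9 for r in r_e)
```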

So your chart shows that we live in a special time only to the extent that this time is "post-WW2". And even then, the single 1AD-1000AD step on this chart may easily be hiding even more impressive 70-year growth bursts (it probably doesn't, but there is no way to tell from this data alone).


Thanks for including point 9, the 'yes this is philosophy actually' point. Reading that Twitter thread (a) got me grumpy at people being smug asses, and (b) was a bit of a Gell-Mann Amnesia moment – it's healthy to be reminded that the people talking confidently about things in this space (which I usually have to take on trust as I'm no expert in most stuff) can be just as confident exhibiting no idea of what philosophy is (which I do at least have a decent idea of).


I just don't see why I should prefer a world with ten billion people to a world with five billion people, all else being equal (so no arguments about how a larger population would make more scientific discoveries or something, because that's obviously irrelevant to the Repugnant Conclusion).


Concerning Scott's statement: "I don’t think you can do harm to potential people by not causing them to come into existence."

...an approximately 2,500-year-old commentator might be worth quoting:

And I praised the dead who have died long ago, more than the living, those who are alive now; but better than both of these is the one who is not yet, because he does not see the evil work that is done under the sun.

Ecclesiastes, 4:2-3

...admittedly, Ecclesiastes wrote these sentences before science, law and democracy fully geared up. There have been, after all, some rather new things emerging under the sun.


> There are also a few studies that just ask this question directly; apparently 16% of Americans say their lives contain more suffering than happiness, 44% say even, and 40% say more happiness than suffering; nine percent wish they were never born. A replication in India found similar numbers.

It's worth noting that the same survey also asked whether people would want to live their lives again, experiencing everything a second time. In that case, 30% of US respondents (and 19% of Indian respondents) would not live the exact same life again, and 44% of US respondents would.

If you subscribe to a type of utilitarianism where life-worth-livingness is independent of whether a life is lived for the first or second time, I think the big discrepancy here should make you less keen to directly translate these numbers into "how many people are living lives worth creating".

Possible things that might drive the difference:

* It feels very sad to say you wish you weren't born, but less sad to say you don't want to experience life again.

* People might attach some ~aesthetic value to their lives where it's ~beautiful for the universe to contain one copy, but not more for it to contain two.

* People might be religious, and so have all sorts of strange ideas about what the counterfactual to being born is and what the counterfactual to reliving their life is.

(I think the first two of these should make a ~hedonistic utilitarian more inclined to trust the larger number than the smaller number, and the third should make us less inclined to trust the procedure overall.)

Aug 25, 2022·edited Aug 25, 2022

> In fact, I’m not sure what to reject. Most of the simple solutions (eg switch to average utilitarianism) end up somewhere even worse. On the other hand, I know that it’s not impossible to come up with something that satisfies my intuitions, because “just stay at World A” (the 5 billion very happy people) satisfies them just fine.

If "just stay at World A" were your only desideratum, then sure, you could just enshrine that as your morality and call it a day.

But you probably also have other intuitions. The general pattern of what the philosophers are doing here is showing that your intuitions violate transitivity. And if you have a meta-intuition that your morality should satisfy transitivity, then it may well be impossible to come up with something that satisfies both your intuitions and that meta-intuition.

Analogy: Consider someone who prefers oranges > apples > bananas > oranges. Upset that someone led them through a wicked set of trades implying they'd choose an orange over a banana, they declare that there must be an intuitive solution, since they'd be OK with the world where they just kept their banana.
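
The money pump behind that analogy can be made concrete; a toy sketch (the fee and the sequence of offers are made up):

```python
# Cyclic preferences: orange > apple > banana > orange.
prefers = {("orange", "apple"), ("apple", "banana"), ("banana", "orange")}

def will_trade(have, offered):
    """Trade (and pay a small fee) whenever the offer is preferred."""
    return (offered, have) in prefers

holding, cash, fee = "banana", 100, 1
for offered in ["apple", "orange", "banana"] * 3:  # cycle the offers 3 times
    if will_trade(holding, offered):
        holding, cash = offered, cash - fee

# Three loops later they hold the original banana and are 9 units poorer.
assert holding == "banana" and cash == 91
```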


>If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity

Well, if you extend to _infinity_ then you run into the completely different problems of infinite ethics. And even if you just extend to arbitrarily large finite numbers, every system of ethics that uses expected values and doesn't cap utility at some maximum breaks: https://www.lesswrong.com/posts/hbmsW2k9DxED5Z4eJ/impossibility-results-for-unbounded-utilities
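
The classic illustration is the St. Petersburg gamble; a small sketch of how unbounded utilities make expected values diverge, while a bound restores convergence (the cap value is arbitrary):

```python
# Outcome k has probability 2**-k and pays utility 2**k, so every term
# contributes exactly 1 and the expected value diverges with the number
# of allowed outcomes.
def st_petersburg_ev(max_rounds):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))

assert st_petersburg_ev(10) == 10.0
assert st_petersburg_ev(1000) == 1000.0  # no finite value in the limit

# Bounding utility (cap chosen arbitrarily) restores a convergent sum:
def bounded_ev(max_rounds, cap=100.0):
    return sum((0.5 ** k) * min(2.0 ** k, cap) for k in range(1, max_rounds + 1))

assert bounded_ev(1000) < 8.0  # converges (to about 7.56 with cap=100)
```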


I still find it really weird that the Repugnant Conclusion seems wrong enough to make Scott question everything. Suppose we knew that, 150 years from now, a pandemic would kill off 10% of the Earth's population, and if we all sacrificed 5% of our income today, we could prevent it. I believe Scott would support doing that, so he supports harming actually existing people to help potential people (as everyone who would be harmed in 150 years is not currently alive). Is the distinction that death is bad for reasons that go beyond the fact that it ends someone's existence? As in, preventing death is a worthwhile goal, but adding existence isn't?


I don't understand the cancer and hammer example. Isn't killing cancerous cells in a test tube a necessary (but not sufficient) condition for killing cancerous cells in the human body? Why would observing cancer cells being killed in a test tube be a reasonable feedback mechanism?

Aug 25, 2022·edited Aug 25, 2022

Couple of thoughts, before the comment count swells into the hundreds:

I always wondered how many people are selfish and simply not particularly interested in altruism. You know, not necessarily sociopathic and not caring about other people at all, just mostly interested in themselves. Or perhaps mostly interested in their own families.

Watching the decay, resurgence, and repeat decay of the American big city, as well as our attempts to bring democracy to the world (or were we just after oil the whole time?), has made me skeptical of these grand projects of improvement. It seems like well-intentioned liberals/progressives (and oil-grubbing conservatives?) often do more harm than good, outside events have a much bigger effect than anything we can do, and complicated hypotheticals about bringing people into existence with happiness 0.01 are of limited practical value, since there is currently no way to quantify happiness that effectively.


The concept of a civilization of ten billion people whose lives are barely worth living might smuggle in a belief that "civilizations can scale to arbitrary sizes with zero constraints on the distribution of the emotional states of the people in them."

I don't think a 10-billion-person civilization can exist without lots of advanced technology, which means lots of capacity for pissed-off people to cause serious damage. Likewise, I believe hierarchies are inevitable, and some people at the top may feel their lives are awesome.

Claiming everyone has exactly the same emotional valence gets around this issue, but it makes the scenario impossible. It's like asking what happens if you get a mass of hydrogen bigger than the sun together in a space the size of the sun, but with all the atoms under equal pressure. That scenario is impossible for hydrogen atoms. What if the thing you are proposing for people is equally impossible?

Perhaps the repugnant conclusion is only valid if you claim that the emotional states of the people in a world, and the states of that world itself, are purely orthogonal to each other. But what if they aren't?


On having genuine average utilitarian philosophical intuitions.

I'm kind of surprised that you claim you haven't met more people like this. For me, biting the bullet of the repugnant conclusion is much more difficult than the bullet of creating more, less-unhappy people in hell. Keep creating people whose happiness is higher than the average and you eventually end up in a situation where the majority of people are in heaven. This is still not the best of situations as long as hell exists, but it's obviously better than when everybody is in hell. Compared to the repugnant conclusion scenario, the way ethical induction works here seems correct.

Maybe we need some extra assumptions to perfectly capture our moral intuitions - something along the lines of a minimal acceptable expected happiness level for creating new people. But this is a small and easy fix compared to what you would need to do so that total utilitarianism doesn't produce the repugnant conclusion. Or am I missing some other thought experiments that show the craziness of average utilitarianism?

Aug 25, 2022·edited Aug 25, 2022

I am not a utilitarian; it's a quantity-over-quality approach. It obligates you, for example, to care about the lives of everyone equally, which I think is a bit silly because people aren't equal. Personally, the lives of Africans mean very little to me. In my view, they are really a real-life version of a utility monster. No matter how many resources you pour into them, they don't seem to be getting better. Imagine if, for example, we took all that African foreign aid of the past fifty years and invested it in a space program. It's possible we'd be on Mars already! Over long time scales, this might even result in us colonizing another planet, or at least having an off-world presence. O'Neill cylinders and that kind of stuff. Perhaps this would even avert human extinction. We might feel a little silly with a world-ending meteor coming our way in 2034. I bet we'd regret caring so much about Africans then!

On the subject of births, I think it better to have a planet full of maybe two billion highly intelligent and moral people (no necklacing!) than an Earth full of people living with African living standards. If we want to expand our population we can go out and claim the free real estate in space, rather than overcrowding Earth. I think that's a more sustainable solution. Unfortunately this means many people may have to die (uh-oh!) I think a fair way of deciding who has to go would maybe look at who contributes most to the world. I think us Americans have contributed quite a lot so we're safe! As for Nigeria... well I'm not so sure. What have Nigerians contributed? Sorry Nigerians.. there's really only room for two billion of us.


“I’m not saying the Repugnant Conclusion doesn’t matter. I’m saying it’s wrong.” Well, the Repugnant Conclusion is just an instance of Simpson’s Paradox—we’re partitioning the superset of people into subsets each of which has a property contrary to the superset (utility of the superset increases, that of each subset decreases). So the thing that’s wrong about RC is…. Utilitarianism itself. A long time ago in grad school I met Parfit when he came to give a talk on this topic. This was just a few years after Reasons and Persons was published. I wish I had known about Simpson’s Paradox at the time to discuss it with him. My own view is that we have contradictory moral intuitions as a result of distinct evolutionary strategies that built them. So we can’t ever have a completely intuitively satisfying ethics. This is a reason to doubt that the AI alignment problem has a solution; there’s not a uniquely correct moral theory to align with.


> Thanks, now I don’t have to be a long-termist! Heck, if someone can convince me that water doesn’t really damage fancy suits, I won’t have to be an altruist at all!

That depends - are you a long-termist the same way you are a medical professional? Or the way that I am a medical professional, i.e. not at all, though I fully support the continued existence of the medical profession? If you want to perform a Gedankenexperiment in order to persuade others of your point, it costs very little to get it right. So why should we talk about what might happen in the Virgo Supercluster millions of years in the future if someone too poor to afford running shoes can point out flaws in your analogies that are not minor but go to the core of the matter?


I also have the average utilitarian intuition on the hell example.


“Happiness isn’t exactly the same as income, but if we assume they sort of correlate, it’s worth pointing out that someone in the tenth percent of the US income distribution makes about $15,000.”

I thought the relationship between happiness and income was somewhat weak and non-robust. Maybe it’s strong enough before a threshold to justify this analogy?

I’m still somewhat skeptical, since I’ve met a lot of happy immigrants with not great incomes. If you push back and say “how happy were they really?” I would say much happier than a net-negative life.


Point 1 sounds suspiciously like the Lizardman's Constant. Or, more generously, like some concept where a fixed percentage of a society is negative regardless of that society's actual overall position. Imagine running the same survey in Norway and Sudan - would you see the same 10%? My intuition is it wouldn't be far off, but I would be happy to see alternate data!


"3: Regarding MacAskill’s thought experiment intending to show that creating hapy people is net good"

"hapy"? Is that different from "happy"?

"I agree with Blacktrance that this dones’t feel true, but I think this is just because I’m bad at estimating utilities"


"gave the following example of how it could go wrote:"

"wrote" = "wrong"?


On discounting, specifically on: "I think Robin later admitted that his view meant people in the past were much more valuable than people today".

I was completely taken by surprise by this conclusion. Discounting seems absolutely necessary to me. Whenever you do math with sums that (may) add up to infinity, you end up getting nonsense - especially when you try to draw moral conclusions from such sums.

But the only way of discounting that makes sense to me is one that penalizes distance *from myself*. So if I want to compute how valuable a person is, the value gets a penalty term if they are far away *from me*, whether in time or in geographical or cultural distance. So both people in the past and people in the future should be less valuable than people today.

Of course, this means that the value of a human is not an inherent property of that human, but it is a property *between* two humans. The value of a being is no absolute thing, it's always relative to someone else. Which seems very right to me. From HPMOR: "The universe does not care. WE care."


> I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

Really? I've heard it. I've even heard a philosophy professor - a famous one, at that - talk about how this is a major unsolved and undertreated problem in consequentialism. (And he counted himself as a utilitarian, so he wasn't dunking on them.) What's more, MacAskill has to confront this question more directly than others, given his other arguments.

At a basic level - both have the same consequences: one less person. If actions are bad because of the difference between the world where they're taken and the world where they aren't, then these two are equally bad, ceteris paribus.

The usual effort to get around that fact involves appealing to the fact that one person exists already, while the other person doesn't, and only people who do or will exist matter. This leads to straight-up paradoxes when you consider actions which can lead to people existing or not, so more typically, philosophers endorse some "person-affecting principle" where in order to be bad, an action must affect an actual person, and merely possible people don't count. A typical example of this view would be "possible people whose existence is conditional on your action can have their interests discounted".

But note that this is completely incompatible with MacAskill's longtermism. If we don't have to care about "merely possible" people's interests, we don't have to care about the massive, staggering numbers he cites. In fact, given person-affecting principles, it's hard to justify caring about the distant future at all - which is why some people absolutely do bite the bullet and say that, pragmatic concerns aside, killing is equivalent to not creating.

Expand full comment

Is there a name for the position that all we really have to rely on are our moral intuitions, and all arguments and extrapolations and conclusions have to pass muster with our intuitions? I have no problem with the idea that I am perfectly entitled to reject any philosophical conclusion that clashes with my intuitions, purely *because* it clashes with my intuitions, even if I can’t find any specific problem with the chain of axioms.

I don’t think people are good enough at thinking to actually understand the inner clockwork of our empirical values, and there’s no reason we should defer to “derived” values over revealed, intuitive ones. Philosophy is not like engineering, where you build a bicycle out of gears and struts and it all comes together exactly as intended, according to principles you fully understand. Philosophy (of the “Repugnant Conclusion” variety) is like trying to build a bike out of jello, and then telling everyone that you succeeded in building a bike and the bike works great, even though everybody can see that it is a pile of goo.

Expand full comment

I don't share the intuition that the repugnant conclusion is repugnant at all. I very much want to exist. Given how specific *I* is, I get that the chance of me existing is astronomically tiny. It would still be significantly more likely in a world with more people, and I would still want to be *me*, of all possible *me's*, even if just barely above a neutral state. Now, I act selfishly because I don't think the veil of ignorance is a thing we should live our lives by, but when it comes to philosophy about total worlds? Totally

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

One of the best phrasings I ever heard was "don't make happy people; make people happy". It seems very easy to reach all kinds of oddball conclusions once you're free to introduce whatever hypothetical people you feel like and value them as much as actual people.

On the other hand, we do reasonably seem to care about future people - if we didn't, we wouldn't have to bother much about climate change (it would still be reasonable to do something about it, but the priority would be much lower).

So what I personally need is some kind of philosophy about hypothetical people, and some way to contrast them against actual future people. But either way, I don't think we need to concern ourselves about purely abstract hypothetical people.

I don't think it's unreasonable to say that a world with a million people, all happy, is in no way worse than a world with a billion people, all happy. The important thing is that the people who exist are, in fact, happy. Only slightly more abstractly, we want the people who will actually exist in the future to be happy as well. But beyond that? Who cares about hypothetical people?!

Another reasonable thing to interject here is that we shouldn't even make people happy, at least not in some kind of official policy way. Focus on removing sources of unhappiness instead - the happiness will take care of itself, and peoples' free choices are the best way to handle that. Happiness is good; it shouldn't be mandatory. A line of reasoning like this could also explain why it's bad to create people destined to certain net suffering, but not necessarily good (and certainly not obligatory) to create happy people.

Anyway, if we *really* care only about happiness in abstract hypothetical entities, then humans seem like a dead end. Then what we need to focus on is tiling the universe with maximally happy AIs who can replace us at a vastly improved efficiency as far as happiness is concerned (better even than rat brains on heroin!). We need to hurry to the point where we can abolish ourselves, and getting destroyed in the Singularity might be the best possible outcome.

Expand full comment

Obligatory xkcd, re: cancer cells.


Expand full comment
Aug 25, 2022·edited Aug 25, 2022

>>Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

We had a conversation that brushed close to this topic on one of the open threads a while back. It's specific to the context of abortion (the OP posits that utilitarians should be pro-life, because the life-years lost by the fetus and its possible descendants when a pregnancy is ended by abortion outweigh the life-years cost to the mother of carrying a child to term and raising it), but as a few people point out, the logic had a hard time not extending from "preventing people from being unborn because of abortion" to "preventing people from being unborn because of contraceptives or abstinence." I don't think we ever really got to a line of logic that satisfactorily distinguished the two.


Expand full comment

Quote: "The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument."

...Scott, this suggests you are an implicit follower of the philosopher Jonathan Bennett (1930-). In the article "The conscience of Huckleberry Finn" (Philosophy, 49 (1974), pp.123 - 34), he analyses the morals of Huckleberry Finn (as portrayed by Mark Twain), Heinrich Himmler (yes, he had moral principles - very bad ones), and the famous US Calvinist theologian and philosopher Jonathan Edwards (with an even worse moral outlook than Himmler, according to Bennett). His moral take-home point after the analysis is this:

"I imagine that we agree in our rejection of slavery, eternal damnation, genocide, and uncritical patriotic self-abnegation; so we shall agree that Huck Finn, Jonathan Edwards, Heinrich Himmler.... would all have done well to bring certain of their principles under severe pressure from ordinary human sympathies. But then we can say this because we can say that all those are bad moralities, whereas we cannot look at our own moralities and declare them bad. This is not arrogance: it is obviously incoherent for someone to declare the system of moral principles that he accepts to be bad, just as one cannot coherently say of anything that one believes it but it is false.

Still, although I can’t point to any of my beliefs and say “This is false”, I don’t doubt that some of my beliefs are false; and so I should try to remain open to correction. Similarly, I accept every single item of my morality – that is inevitable – but I am sure that my morality could be improved, which is to say that it could undergo changes which I should be glad of once I had made them. So I must try to keep my morality open to revision, exposing it to whatever valid pressures there are – including pressures from my sympathies."

Expand full comment

A related topic is that -- personally -- I think it's better to try to extend people's lives, for example with anti-aging or brain preservation/cryonics, than just letting people involuntarily die and create new people.

I've been very surprised to find that many people in the EA space disagree with this intuition.

For example, Jeff Kauffman's argument against cryonics, which was highly upvoted in an EA forum comment, is basically that cryonics is not worth it because if people involuntarily die it's okay, we can just create new people instead. https://forum.effectivealtruism.org/posts/vqaeCxRS9tc9PoWMq/?commentId=39dsytnzhJS6DRzaE#39dsytnzhJS6DRzaE

As another example, see the comments on this post about brain preservation as a potential EA cause area: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible?commentId=puMSdFecxFsnKmEx5#comments

Expand full comment

I'm glad to see you elaborating on this. I wrote an article about why the population ethic in your last post is not going to work [1].

Your new theory is also going to have problems (“morality prohibits bringing below-zero-happiness people into existence, and says nothing at all about bringing new above-zero-happiness people into existence; we’ll make decisions about those based on how we’re feeling that day and how likely it is to lead to some terrible result down the line”). This doesn't fully evade the RC; it just makes you indifferent between population A (small, happy) and population Z (huge, barely worth living). If we introduce one slightly below-average person, we get the RC again: population A plus one mildly unhappy person vs. population Z would make you choose population Z. It also makes you indifferent between creating an extremely happy and fulfilled child and one whose life is barely worth living.
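The totals behind that comparison are easy to check with made-up numbers (the population sizes and happiness values below are purely illustrative, not from the post):

```python
# Total-utility arithmetic behind the RC step above.
# All numbers here are illustrative assumptions.

def total_happiness(population):
    """Sum of everyone's happiness in a hypothetical world."""
    return sum(population)

A = [95] * 5_000         # population A: small, very happy
Z = [1] * 1_000_000      # population Z: huge, barely worth living

# A total view already prefers Z; the "say nothing about
# above-zero people" view is merely indifferent between them...
print(total_happiness(A))  # 475000
print(total_happiness(Z))  # 1000000

# ...until A gains one mildly unhappy (below-zero) person, which the
# theory does count against A, so it now has grounds to pick Z.
A_plus = A + [-5]
print(total_happiness(A_plus))  # 474995
```

The point is not the exact totals but the asymmetry: the theory is silent about all the above-zero lives, so the single below-zero life is the only thing it scores, and that tips the choice toward Z.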

In my article, I wanted you to more fully embrace intuitionism and accept stuff like weak natural rights. Here you accept intuitionism, which is good to see. Can I get you to accept weak natural rights? Since you are an intuitionist, and since you don't embrace the idea that all actions have to be utility-maximizing, perhaps you could accept the Repugnant Conclusion but reject the need to bring it about, and reject depriving anyone of their rights to attain it. That would be my position. I think the RC is the best of the bad conclusions, but we don't have to bring it about. People aren't morally obligated to reproduce. And we shouldn't make people if it makes everyone's lives get worse.

Thanks for talking about this more. This stuff is very interesting.

[1] https://parrhesia.substack.com/p/scott-alexander-population-ethics

Expand full comment

I think one way out of the RC is to deny that a moral system should be able to give answers to all hypothetical questions but to try to derive a system that would give answers in actually actionable situations. This system would be quite sufficient to guide one in all practical situations.

Under such a moral system, we can still engage in some thought experiments, but the meta-questions such as choosing between two hypothetical worlds would have no meaning.

In other words, even if one accepts (a pretty dubious) idea of aggregating utilities between people, one can still refuse to compare aggregate utilities between hypothetical worlds.

I think this is actually equivalent to a pretty traditional approach to these questions that would have been obvious to any thinker a few hundred years ago. They would have easily recognised who is the only entity that could wish a world with 10Bn near-hell suffering people into existence and would know that a series of complicated transactions with this entity is not going to end in anything good.

Expand full comment

Can we bring Rawls into this conversation, or is that against the rules? After all these universes are created, you get to choose which one to live in, but you don't get to choose who you get to be. Would you rather live in the 5B universe of maximum utility, or the barely-above-neutral universe? Would you even choose the 5B max + 10B 80% utility universe or the first one?

Expand full comment

I have a long-running dislike of comparing future worlds as a means of making decisions, stemming from:

-There are no *inherently* desirable states of affairs.

-I am in a brain with preferences, and I would like to achieve those preferences. Non-survival preferences I categorize as "happiness" throughout this logical chain. Thus some states of affairs are more desirable than others.

-I could do whatever would result in the most happiness to me. This is really hard to analyze and my subjective happiness (the thing that actually matters) is only very loosely correlated with my objective well-being.

-I contextualize this as "defect" and prefer to "cooperate" within my community. The best outcomes come if "my community" means "any human who ever has or will exist".

"Should this also apply to aliens" and "should I care about animals" are outside of scope

-From this I choose to optimize my philosophy for the outcomes that come about for me if everyone has them; there's some muddy stuff here but ultimately I more or less reach the veil of ignorance.

anyway, my takes as a result:

-Making a person exist is morally neutral to that person, but by creating a person you have an obligation to love them ("put them in the category of people who are in your community, i.e. the people whose happiness you treat as equal to your own*"). This doesn't make making a person exist morally neutral overall, because it will affect existing people; but the existing person it will affect most is yourself, so "have kids if you want to, don't if you don't" seems like the best axiom to me at the moment

-Any long-term view must assume future people will take the same long-term view

*Caveat that this says "happiness" and not well-being intentionally, and that placing the happiness of another as equal importance to your own does not mean that the two happinesses should be equal

Expand full comment

Am I missing something from Utilitarian concepts/arguments? They seem based on a fallacy: that there's some immutable H value of happiness. But that's not my experience of how humans think/work at all.

First of all, if you reach a certain point of H, and then stay there, your perception of how much H you have decays (or increases, either way, reverts to a "mean" perception of H). Concretely: if you win the lottery and now are immeasurably richer than you ever thought you could possibly be, you'll be really happy for 6 months, at which point you'll feel pretty average again. If you break your neck and become completely paralysed, you'll be really unhappy for 6 months (okay, maybe 2 years), by which time you'll feel pretty average again.

So actually "Heaven" is where every single day you have H+1 happiness over the previous day. "Hell" is where every single day you have H-1 happiness over the previous day. It doesn't matter whether you start in abject poverty or amazing riches, if your H is monotonically increasing, you're in heaven. Decreasing, hell.

Second of all, I know everyone hates Jordan Peterson because he doesn't toe the leftist line, but he has some good points. One is that "happiness" is not well-measured. If you have all your stuff and things and money, you think you're happy, but after a few months, you might well want to kill yourself. Meanwhile, if someone else has nothing at all, after many months, they might be full of energy and still getting on. Why?

Because having a sense of purpose, meaning, a goal that you're aiming toward, is way more important than having all your comforts and stuff and things.

So again, someone in abject poverty, who feels they have meaning in their life, raising their kids, helping their neighbours, and generally doing "productive" things in their lives will maybe not report being "happy" but if you ask them if they'd rather be dead, they will certainly say no.

On the other hand, someone in the top 0.01% of global wealth, with everything they could ever possibly want, but with no purpose or meaning in their life (whether self-inflicted or whatever) might well commit suicide. Again, not because they're "unhappy" but because... what's the fucking point?

This is one of the reasons I am unconvinced of the utility of, eg, Universal Basic Income. If you have a few hundred million people sitting around with no purpose, they will start acting destructively. Either to themselves or everyone around them. Sometimes it's fun to just wreck shit, especially if you've got nothing else going on.

Do all these utilitarian philosophies say nothing on this, or are all these well-understood, and just no-one ever brings them up in these extremely long, in-depth utilitarian discussions?

Expand full comment

Your response to Petey seems to miss the core point of the argument. Humans have an aversion to ceasing to exist (and a rather strong one at that), so the point at which people are indifferent between continuing to and ceasing to exist is the wrong point to assess for a zero point in utility.

The more relevant question would be along the lines of “would you prefer your current life to having never been created”, though this would still be hard to actually ask people without triggering their feelings of aversion to ceasing to exist (TODO: find a way to survey non-existent people).

Because of these considerations, actual people who are barely-not-suicidal or barely-subclinically depressed are *precisely the wrong* group to use as a reference for small-but-positive utility, due to the strong human dispreference for ceasing to exist throwing the whole comparison off.

Expand full comment

I wonder if part of the answer might be about epistemic humility, and margins of error? You ask

> Start with World P, with 10 billion people, all happiness level 95. Would you like to switch

> to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100?

I think my actual answer is no, because I might be wrong about it. I'd want some margin for error, as well as for day-to-day variation. So in the original argument, I'm unwilling to swap world Q for world P without some margin for error - I'd want world P to have everyone at perhaps 91 or 92 happiness units, comfortably above Q's average of 90. This gives me a margin for everything - error in my theory, errors in measurement, daily variations, and so forth.

In short, I'm unwilling to swap this world at level X with another one at the same level: I'd only swap it for one with a higher than X level, by some margin to be determined when I know how you're doing the measuring, etc.

Expand full comment

I'm not sure if this is really an original thought or not, but does it help to approach this from another angle that more explicitly ties a decision to the creation of suffering vs joy?

For example, imagine we discover an alien species who have two ways of producing offspring. One method is roughly analogous to human reproduction. It creates relatively healthy offspring and most maladies are addressable with modern science; expected utility is average for the species.

The other method creates highly variable offspring. 50% of offspring produced with this method are sickly specimens that die at a young age. 50% of offspring produced with this method are exceptional compared to normal offspring. They are stronger, healthier, more intelligent, are capable of experiencing far more joy, and live longer lives than the normal offspring.

Which method of reproduction should a moral alien choose?

Expand full comment

Gonna make another top-level comment, because this seems to address a holistic set of assumptions across dozens of comments, and I doubt I should be replying to each individually.

Predicting the future is extremely hard. Not least because there's a lot of chaos in the system. You simply can't model very far out into the future for most interesting problems. Especially when the inputs to the system are "whatever each of a few billion people want/need (or think they want/need) and all that changes on a daily basis."

At most I should be considering my offspring that I can see. Children and grandchildren (and if I'm lucky, I suppose my great grandchildren). Beyond that and I'm just gambling on chaos. You don't completely discount future people because you don't care about them. You simply realise there's very little you can do that will actually have the intended effect for future people.

I can give some homeless hobo $100 now, and be fairly certain what will happen with that for the next 20-30 minutes, or if he's a very reliable and well-known hobo, maybe 20-30 days. But I absolutely cannot know, for any reasonable definition of "know" what will happen to that $100 in the next 6-12mo. There are way too many inputs into the system.

So why should I have any faith in my ability to do anything for (or against) someone 100, 500, 1000 years from now? I don't discount future people in my utility calculations because they're any less (or more) important than people today. It's just that... there's nothing I can do for (or against) them. So why consider them at all? Effective altruism is hard enough with a short feedback loop. If the feedback won't come until after you die... is there really hope to get it right?

It seems the height of hubris to think that I will be able to come up with a plan that not only solves for chaotic systems, but also solves for a few billion people who disagree about anything (but I repeat myself), and also overcomes the present limitations on whatever tech/systems/knowledge those future people could possibly have that I don't currently have. How do I come up with the right answer without the benefit of hindsight those people will have 100 years from now?

I think we should let those future people solve their problems themselves. Not because we're selfish, but because we recognise our own limitations. Fix what you see wrong/broken now, today. Once all those problems are solved, then start gambling on the future, I suppose?

Expand full comment

"I’m not sure how moral realist vs. anti-realist I am. The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument."

Does this ever actually happen? What are some moral intuitions that have dissolved under logical argument?

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Rejecting the incremental steps on the road to the repugnant conclusion makes sense if you assume our moral intuitions are driven by, among other things, a large factor of blame avoidance, rather than pure altruism. For example, preferring world A (5 billion 100-happiness people) to worlds B (5 billion 100-happiness + 5 billion 80-happiness people) and C (10 billion 95-happiness people) makes sense if you think of these people as potentially holding you blameworthy for whatever choice you make. If you go from A or B to C, you are counterfactually injuring the people who could have had 100-happiness but instead only have 95-happiness, and this instinctively feels dangerous. If you could choose C, but you choose B instead, then, similarly, you are choosing to make half of the people worse off than the counterfactual. But, if you choose A, the only people you are injuring are those who could have existed but don't, and they won't be around to blame or punish you.

You could try to get around this by saying that the sets of people in each world are pairwise disjoint, or that your choices won't be enacted until long after you're dead and can't be punished for anything, but I don't think the part of our brain that generates moral intuitions is capable of entertaining arbitrary hypotheticals like that.

Maybe if you take blame avoidance into account, you can restore transitivity to your preferences, at the cost of putting yourself at risk of being blamed for not being purely altruistic and capable of rationally overriding your instinctive preferences when a thought experiment calls for it.

Expand full comment

You are discounting the trail running comment, but I actually think it has a valuable insight in it:

Things that seem like they could be very bad now are often much less bad than we anticipated by the time they come around.

I think you sort of got at this point in a post a long time ago that was going back and looking at old catastrophe predictions (I think you focused on acid rain and rainforest deforestation?)

Something that seems really bad now, that will have consequences going far forward into the future, will in reality often have its edges dulled by time and not be as catastrophic as it seemed.

Some of this dulling is a direct consequence of taking action to minimize the harms, that's certainly true, but some of it is just due to effects not always being as strong in the future.

I don't think this insight/idea is a complete refutation of longtermism, but I do think it's a caution to be careful about being overly hyperbolic about future risks.

Expand full comment

I don't think of the Repugnant Conclusion as a paradox to be solved, but as a proof by contradiction that human morality can't be consistent with itself. Doesn't mean you shouldn't try to be moral. It does mean you can't extend to infinity (or very far even), and that paper shows this even more convincingly.

Expand full comment

"just barely about the threshold" --> "just barely *above* the threshold"

Expand full comment

"This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass quickly becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never ran even a mile unshod."

I completely disagree with this. I run with my dog and she has cut her paws on broken glass on the trail. At first I was angry that thoughtless people were leaving broken glass on the trail and I started picking it up whenever I saw it. After a few weeks of picking up broken glass, I kept wondering who the hell keeps breaking glass in the same section of the trail. Then it dawned on me that the glass was coming up out of the ground as the trail is being eroded by rain. I don't know what used to happen there, but there is a lot of glass in the soil from way in the past. And it is still sharp. So in my experience, the sharp edges of glass do not necessarily erode quickly on trails.

Expand full comment

You die and meet god. He's planning to send you back for reincarnation into the world as it is, but he's willing to give you a choice. His wife has created a world where everyone floats effortlessly in a 72 F environment, being bathed constantly in physical, emotional, and spiritual pleasure. Their happiness is maxed out. But that's pretty much all they do. They don't get jobs and go to work. They don't have opioid crises or wars, but they also don't graduate college, make discoveries, build businesses, or have any accomplishments whatsoever. Sure, they have children, but they're so engrossed in the maximum hedonics game (with happiness saturation negative feedback loops in their brains turned off, so every moment is like the first/best moment) that they never realize it.

He also has a brother who created a world where everyone fluctuates around 50% happiness. Sometimes they're happier and sometimes not so much. However, they also get many opportunities to contribute to their society. Their work has meaning, they have children, and get fulfillment. Nobody is totally miserable, and nobody is totally happy, but on net everyone is halfway between neutral and the wife's world of maximum bliss.

The choice you get is not to be reincarnated into one of those other worlds. You don't get to pick and choose who your god is. But you do get to pick whether you want the world to stay as it is, or whether you want your god to model the world after one of these other two worlds. It's a lot of work designing a world, so you can't just imagine something into existence without a template. You have to pick one of these two templates, or keep the world as-is. You're choosing both for yourself, and for everyone who will ever be born.

Expand full comment

> Unless you want to ban people from having kids / require them to do so, you had better get on board with the program of “some things can have nonzero utility but also be optional”.

This doesn't properly follow. It seems having kids is of unknown utility, and so you can easily hold a position of "we shouldn't require or ban this, because we don't know which of those would be right" while still holding a position of "things with nonzero utility shouldn't be optional (if you want to be a good person)."

Not that I think you're wrong about not having a moral obligation to do all positive utility things, but it doesn't seem incoherent to believe the above.

Expand full comment

One issue with this analysis is that it deals in terms of outputs (e.g., happiness), rather than inputs (e.g., resources, social connections, etc.). But societies deal in terms of inputs. No Bureau of Happiness is questioning people to determine whether they have exceeded their allotted happiness.

The quirks mentioned about historical people wanting to preserve their lives, even though people today might not find those lives worth preserving, arise from this issue. There is no invariant happiness function that applies to present and historical people.

This deficiency also applies to the cross-checking based on depression rates. There appears to be some implicit assumption that the measured depression outputs correspond to some deficiency in happiness "inputs." But I suspect that substantial overlap in inputs exists between the depressed and non-depressed populations. So increasing the inputs of the depressed would likely result in little or no change in the rate of depression. The problem is mental.

Accommodation to health changes provides evidence of the disconnect between happiness inputs and happiness. People with chronic diseases or injuries, such as spinal cord injuries, tend to report a far higher quality of life than one would expect, based on their physical condition. Furthermore, there appears to be accommodation over time, as people adjust their expectations to match their realities. Perhaps the depressed population arises from an inability to make such adjustments?

Overall, the argument appears ill-posed. A small number of people consuming a large number of resources per capita is likely no more happy (or at best marginally happier) than a large number of people consuming fewer resources.

Expand full comment

Have you considered the Buddhist perspective on the Repugnant Conclusion?

My moral intuitions say that creating new people is bad, no matter how happy they are. All conscious life is suffering. People who are unusually happy are just suffering less than usual and deluding themselves. Maybe it's another case where "want" and "like" don't quite match up in our brain, and our survival instincts kick in to deprive us of the best possible outcome i.e. nonexistence. If you abuse the notion of a utility function to apply it to existing people, then every existing person has negative utility. Maybe someone with an extremely blessed life approaches zero utility asymptotically from below, but it's still negative.

If this sounds repugnant to you; well, does it sound more or less repugnant than The Conclusion?

(if we want to go from the tongue-in-cheek-Buddhist perspective into an actual Buddhist perspective, we get the outcome that nonexistence is actually very hard to achieve and if you kill yourself you'll just reincarnate; perhaps birthing a child is not morally bad because the number of "souls" in the world doesn't actually increase)

Expand full comment

>Start with World P, with 10 billion people, all happiness level 95. Would you like to switch to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100? If so, why?

I think the main argument would be that the world will naturally fall out of equilibrium, because people are not clones/drones and have differing wants, interests, and abilities.

And as an aside I suspect the top world simply is not actually possible under remotely normal conditions. Too much of human happiness/action/motivation is tied up in positional considerations.

>You’re just choosing half the people at random,

Yeah but in the real world this is never how this works. The people aren't chosen at random. The Kulaks are Kulaks for a reason.

>MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself

I suspect it very much is bad, specifically because of what it does to human motivation in a world with differential human wants/interests and abilities.

Expand full comment

> This doesn’t quite make sense, because you would think the tenth percentile of America and the tenth percentile of India are very different; there could be positional effects going on here, or it could be that India has some other advantages counterbalancing its poverty (better at family/community/religion?) and so tenth-percentile Indians and Americans are about equally happy.

Circumstances don't affect happiness. The phenomenon is well-known under the name "hedonic adaptation"; there's no need to postulate offsetting advantages to Indian life, because advantages and drawbacks weren't relevant in the first place.

Saying people who are below the tenth percentile of happiness should kill themselves is not obviously different from saying people who are below the tenth percentile of hip circumference should kill themselves.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Okay, so I get that you're just trying to evoke the stuck-prior-all-communism-bad version of communism, but:

>Communism wants to take stuff away from people who have it for some specific reason (maybe because they earned it), and (according to its opponents), makes people on average worse off.

I want to point out that this is almost literally the exact same argument that communists make against capitalism:

>[Capitalism] wants to take stuff away from people who have it for some specific reason (maybe because they [did the labor to create it]), and (according to its opponents), makes people on average worse off.

And really, "doing the labor to create something" is just a more concrete version of "earning" something.

This almost makes the original statement bad faith, because it almost-but-not-quite claims communists specifically want to take from people who have earned what they have, when the whole ideology of communism is based around the idea that redistributing wealth away from capitalists is just because they *didn't* earn their (large) share of the profit from their meager (or nonexistent) contributions of labor.

Like, I get it was supposed to be a snarky aside, but come on. That's practically a strawman, it makes the whole argument weaker. There are much better criticisms of communism, you've got plenty to choose from.

Expand full comment

I'm sure this has been thought about, but it seems obvious to me that I should care that my action 10 years ago harmed an 8 year old child, and it doesn't seem like that requires me to prefer a world with 10 billion happy people to a world with 5 billion happy people.

All you need to do is say that, once a person comes into existence, they matter and have moral worth. Before that person comes into existence, they don't.

A corollary of that is that if I expect my actions to affect people who are likely to come into existence in the future, I should care about that, because by the time my actions affect them, they will exist and thus matter. So, if I've saved $200,000 for my future child's college fund, and am trying to have a baby, it would be wrong of me to blow that money in Vegas, because doing so would likely hurt a person who is likely to exist. Deciding not to have a child and then blowing all my money in Vegas, on the other hand, isn't a moral affront to my potential child.

If I'm deciding whether to pick up broken glass (or, you know, broken superglass that never dulls), if I think it's likely that some day a child will run along the path and step on the glass, I need to care regardless of whether the child is born yet. But that doesn't need to imply that it would be a good act to produce more children.

This also feels like it allows for normal moral intuitions about stuff like global warming, while not directly requiring that the survival of the human species be prioritized over the well-being of people alive now or in the near future. We shouldn't make the planet extremely unpleasant to live in for the next 200 years, because there are very likely to be people then and that would suck for them. We shouldn't create swarms of autonomous murderbots that will wake up and start murdering in 150 years, after we're all safely and comfortably dead, because there will likely be people then who won't want to be murdered.

But, if the RAND corporation calculates that there's a 0.1% chance of a nuclear exchange with China causing human extinction, and that a preemptive strike by the US would lower that to 0% but would cause the deaths of 4 billion people, we don't need to go ahead and do the first strike in the interests of the quadrillion future denizens of the galactic empire.

Expand full comment

On the answer to Blacktrance:

>If playing Civ and losing was genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn't feel true, but I think this is just because I'm bad at estimating utilities and they're so close together that they don't register as different to me.

1. I think Blacktrance doesn't mean that both options have the same utility - it's likely a reference to Ruth Chang's idea that options might be "on a par" and that some things might be incommensurable, which helps Chang attack the Repugnant Conclusion (RC): <https://philarchive.org/rec/CHAPIC-4>

2. Even if you don't buy the "incommensurability thesis", if you admit that you're *uncertain* when estimating utilities, I think you might be able to justify a partial "escapist position" - because each step of the argument leading to the RC will be increasingly uncertain. So even though you might concede something like "for any world w with a population n living in bliss, there's a world w' with a population m with barely-worth-living lives", you might be able to deny that any two precise descriptions of worlds satisfy the conditions of w and w'.

Expand full comment

A potential solution to the RC (and maybe utilitarianism as a whole) is to view civilization rather than individual people as the moral patient. In the metaphor of the human body, it is not mass that makes a good person. Having happy, healthy cells everywhere in your body is good, but just adding a bunch of extra otherwise-healthy fat cells until you're morbidly obese is not improving things. Unless you fancy Baron Vladimir Harkonnen's aesthetic. More power to you, I suppose.

Anyway, when I view myself as the civilization I'm a part of and ask "should I grow more people?", the answer doesn't make sense without considering the current state of the civilization. Are there too many people, and there's starvation? Well, that doesn't make me feel like a very healthy civilization, so no, there probably shouldn't be more people right now unless they know how to make more food. Is there lots of work to be done and plenty of resources to do it, and just a lack of people to be doing the work? Then yeah, absolutely there should be more people. When asking questions about what is best for our civilization, what we want for our civilization should be considered, not just what we want for individuals in it.

Expand full comment

These thought experiments seem to confuse the issue by combining two unlike questions to create nonsense. Creating happy vs. unhappy people is a moral question. How many people to create is a practical question. Practical, as in, it matters how many people are needed to ensure continuation of the human species, or how many people are required to make a functioning community, but I don't have any moral intuition that more people is better than fewer outside of these practical questions. I don't see why there would be. Do others have this moral intuition?

The thought experiment would make more sense to me if it were explicitly a question of how much we would be willing to trade off happiness for higher chance of survival, or something like that.

Expand full comment

> In the Repugnant Conclusion, we’re not creating a world, then redistributing resources equally. We’re asking which of two worlds to create. It’s only coincidence that we were thinking of the unequal one first.

This sounds a bit Motte-and-Bailey-ish. Yes, I understand that in the analogy, we are creating a world from scratch. But the overall scenario is about devising policies that we should apply to our current, existing world; and this is where the "equalizing happiness" technique breaks down. Ultimately, either the analogy is not analogous, or you are advocating for seagull-pecking communism.

Expand full comment

> Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend that money saving other people’s children (eg donating it to orphanages, etc).

Wait, you do? Why? Imagine that I'm reasonably rich, have a stable marriage, and both my wife and I have good genes (and we can do genetic screening to ensure that they are passed to our offspring). Our children thus have a high probability of growing up happy and well-adjusted, with a happiness level of, say, 0.9. Why should I instead spend my resources on pulling up orphans from 0.1 to 0.2?

Expand full comment

I also prefer the world of 5k people in epic galaxy-wide civilization with non-feeling robot assistants over the world with just humdrum lives of 80k people. I feel like I haven't seen good discussion of this moral intuition elsewhere. I feel like it could be valuable to try playing a game with friends where you all pretended you were unborn souls in a pre-life waiting room, and had to assemble a collection of 'possible lives in possible universes'. Maybe the rules would be you had to live every life in your final collection, or maybe you would get randomly assigned one of them. But the idea would be to explore how people valued different sets of possible lives. How willing would you be to accept a particular negative life in exchange for also adding a particular positive life to your collection? Where would different people's intuitions set their balance points for different pairings of good/bad? Seems interesting to explore.

Expand full comment

> many AI-risk skeptics have the view that we're decades away from AGI, so we don't need to worry,

I keep seeing this point brought up, but by now, it is starting to sound like a strawman. The key point of AI-risk skeptics is not that AGI is decades away; it's that right now no one knows where to even begin researching AGI (in fact, some people would argue that AGI is impossible, though I don't hold that view). You cannot extrapolate from "we don't even know where to begin" to "so, obviously it's a few decades away"; at least, not if you're being honest.

On top of that, many AI-risk skeptics (myself included) disbelieve in the possibility of the existence of omnipotent (or just arbitrarily powerful) entities, and this includes AIs along with gods and demons and extradimensional wizards. This turns AGI from an unprecedented all-consuming X-risk to merely good old-fashioned localized X-risk, like nuclear bombs and biological warfare. Which means that we should still keep a close eye on it, of course; but there's no need to panic.

Expand full comment

In a successfully established Repugnant Conclusion society, what happens if I take myself hostage? The extreme case is "I'm going to kill myself if someone doesn't give me a cheeseburger right now", but in general, what's the response if the typical person who has been allotted 0.01 utils demands more, declaring that they will 'exit' unless the ambivalence point is set one full util higher?

The typical definition of where the ambivalence point is set declares that this axiomatically won't happen, but that ignores that people within such a society can *see* the resource allocation system and bargain accordingly - collectively so, if useful. There's an instability when the needs that are being met can be responsive to the effort spent meeting them.

(There's a boring answer that the repugnant society could allow such people to select themselves out of the population until the problem goes away, but tailoring your population to fit your society is trivially applicable to any model and if you're ok with that you needn't have gone to as much effort.)

Expand full comment

I believe that there are things that need to be clarified in the Repugnant Conclusion, at least as presented in the Wikipedia article (I have not read the book).

The ordering of happiness implies that we can assign a happiness score between -100 and 100, but that does not mean we can add those scores or take averages. It is not clear to me that the process would be "linear" - perhaps the happiness of the new population converges to e.g. 70 and not 0?

Also, it seems that the process of obtaining a new population with lower happiness can be simplified. Let's start with a population of 1 billion individuals of happiness 100 and another, empty population. We can keep adding individuals of happiness 0.01 to the second population until its "total happiness" is larger than the first one's.

But that only works if happiness can readily be added and averaged (just like in the original argument). Maybe the second population never catches up to the first and converges to something much lower?
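A toy model (my own construction, not from the comment or the book) makes the convergence point concrete: under plain linear totals, enough 0.01-happiness people always overtake the blissful billion, but if each additional life counts geometrically less, the second population's total converges to an asymptote it can never exceed. The decay constant below is arbitrary, chosen only for illustration.

```python
# Toy model: linear "total happiness" vs a bounded aggregation in which
# each extra life counts geometrically less, so the total converges.

def linear_total(happiness, n):
    """Plain total utilitarianism: n people at a given happiness level."""
    return happiness * n

def bounded_total(happiness, n, decay=0.999999999):
    """Sum of happiness * decay**i for i in range(n), in closed form.
    A geometric series: it can never exceed happiness / (1 - decay)."""
    return happiness * (1 - decay ** n) / (1 - decay)

first_linear = linear_total(100, 10**9)  # 1 billion people at happiness 100

# Under linear totals, enough 0.01-happiness people always win eventually:
n_needed = int(first_linear / 0.01) + 1
assert linear_total(0.01, n_needed) > first_linear

# Under the bounded aggregation, the 0.01-happiness population is capped by
# its asymptote, which stays far below the first population's bounded total:
first_bounded = bounded_total(100, 10**9)
second_asymptote = 0.01 / (1 - 0.999999999)
assert second_asymptote < first_bounded
```

Whether any such non-linear aggregation is ethically defensible is exactly the commenter's open question; the sketch only shows that the Repugnant Conclusion's "keep adding people" step depends on linearity.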

Expand full comment

> MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself that you would be willing to make the world worse off on average just to avoid equality.

I don't think that quite covers it. Raising up the lower parts of the population to be equal to the top would satisfy egalitarianism, but would not bother someone whose primary concern was maximizing the level of the top (an inverse Rawlsian?). MacAskill wants to BOTH raise the bottom & reduce the top.

> Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend that money saving other people’s children (eg donating it to orphanages, etc).

Wouldn't that depend on how happy your children are expected to be vs the children you could save? The insight of EA though is that lives are cheaper in the third world, and you can save them for much less than it would typically cost to raise one here.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

8) is resolved very simply. You are not special. So the chance of pulling one particular red ball out of a box of 100 nonillion balls is infinitesimally small. Unless all the balls are red; then the chance is just 1, and every ball is a red ball. The fact that you are here, towards the "start" (we hope) of human history, doesn't tell you anything about where you fall in that distribution, because *anyone* could ask that question.

You are not some special red ball that needs its occurrence explained out of the population of 100 nonillion.
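The "anyone could ask that question" point can be checked with a quick simulation (my own sketch, under the commenter's uniform-sampling framing): whatever the total number of humans turns out to be, a uniformly random birth rank lands in the earliest 5% of history exactly 5% of the time, so being "early" is no more surprising in a short history than in a long one.

```python
import random

def fraction_in_first_5_percent(total_humans, trials=50_000, seed=0):
    """Estimate how often a uniformly random birth rank falls in the
    first 5% of all humans who will ever live."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.randrange(total_humans) < total_humans * 0.05)
    return hits / trials

# ~humans born so far, and two vastly larger hypothetical futures:
for n in (10**11, 10**20, 10**32):
    assert abs(fraction_in_first_5_percent(n) - 0.05) < 0.01
```

This doesn't settle the anthropic debate (self-sampling vs self-indication assumptions cut differently), but it does make the commenter's claim precise.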

Expand full comment

> Several people had this concern but I think the chart isn’t exponential, it’s hyperbolic.

I look at that chart, and think it's logistic.

Mathematically, I do not think it's possible for that chart to have a singularity. I'm certain that if you were to look at the population chart instead, it could not be a function that reaches infinity in a finite amount of time: even if we assume an infinite universe, and however fast you grow the population (even if you keep increasing the growth rate, and the growth rate of the growth rate), you cannot make the jump from "zillions of zillions to the power of zillions of people" to "literally infinitely many people" without infinite time to do it. A population singularity is mathematically just not possible, and so God has cancelled it, I guess: https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/

And if you assume that each of the finitely many people will contribute a finite amount of wealth per year, however large that may be, then GDP cannot become infinite in finite time either.

More practically speaking, if there are only around 10^67 atoms in the universe or something, then that sets an even harder limit on growth.

But back to my claim it's logistic. If you look at the first part of a logistic function or "S curve" then it can sure look like an exponential, but at some point it flattens out and then converges to a finite limit. World population growth is estimated to do that, so I think GDP will follow - even if productivity were to remain constant, which it's apparently not doing (something something is science slowing down, low-hanging fruit, cost disease, moloch etc.)

Just like the curve of a spread of a disease (or a meme) in a population can look exponential at first, but even in the worst case can't reach more than 100% of the population, I think GDP will at some point meet the limits of our finite-resources planet, possibly long before we get to colonise other planets in which case those 10^70 or something potential future people are mostly never going to happen.
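A quick numerical sketch of the three shapes being contrasted (parameter values are arbitrary, chosen only to show the qualitative behavior): the logistic curve tracks an exponential early on and then saturates, while only the hyperbola 1/(t0 - t) has a genuine finite-time singularity.

```python
import math

def exponential(t, r=0.05):
    return math.exp(r * t)

def logistic(t, K=1000.0, r=0.05):
    # S-curve: starts near 1, looks exponential early, flattens toward K
    return K / (1 + (K - 1) * math.exp(-r * t))

def hyperbolic(t, t0=200.0):
    # 1/(t0 - t): blows up as t approaches t0 (finite-time singularity)
    return 1.0 / (t0 - t)

# Early on, logistic growth is nearly indistinguishable from exponential...
assert abs(logistic(10) - exponential(10)) / exponential(10) < 0.02

# ...but the logistic curve saturates at K while the exponential keeps going:
assert logistic(1000) < 1000.0001
assert exponential(1000) > 1e21

# The hyperbola explodes near its pole instead of merely growing fast:
assert hyperbolic(199.999) > 500
```

This is why "looks exponential so far" can't distinguish the three regimes: the data to date is consistent with all of them, and the disagreement is entirely about what happens near the limit.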

Expand full comment

I like the notion that increasing population delivers diminishing marginal returns on both the individual and group level. And I also like the introduction of AI as a way of highlighting the fact that some actions deliver utility by being performed but not necessarily by being experienced or enjoyed. For example; lab work to understand the universe is good but not primarily because it's pleasant. If the goal of life is to have meaning relative to some group, an overly large group could decrease one's own feeling of significance. The live-ability of lives, therefore, may not be entirely disjoint from one another.

And, of course, larger groups are more robust to all kinds of stressors. And many people will elevate survival of the group or species above other considerations like qualia.

If there were some ideal number of human beings such that sensory pleasantness wasn't the primary limiting factor on having a liveable life but meaningfulness was, I could see us agreeing on some kind of equilibrium "best population" without necessarily descending into some kind of physical hellscape.

And as I think was mentioned, "meaningfulness" might be one more counterbalance against a more populous future, arguing in favor of present significance. The other being, as mentioned, uncertainty of future results.

Expand full comment

The repugnant conclusion is irrelevant to policy because to do policy we must reckon with the coercion of the population as it exists now -- not merely the selection of unrelated populations out of a hat. You can make this argument even on utilitarian grounds. Suppose the repugnant conclusion is indeed inevitable, given its stated axioms. Suppose population A is a small population with high utility and Z is a large population with small but positive utility. Suppose u is a function that gives the total utility of a population. Suppose c is a binary function that gives the cost of transitioning one population to have the same number of members as another and the same utility distribution. The repugnant conclusion roughly says u(Z) > u(A). Even so, it could be true that c(A -> Z) > u(Z) - u(A). Intuitively speaking, this is sensible because people's utility is not memoryless. If you take away a bunch of people's money, they will be sadder than if they never had it in the first place, and will engage in various utility-destroying actions. Actually, you might not ever be able to make A's utility distribution exactly identical to Z's without expending crazy effort to eliminate a bunch of small rounding errors, or unintentionally reducing the population.
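The inequality can be made concrete with made-up numbers (all hypothetical, chosen only so the arithmetic is visible):

```python
# Hypothetical numbers, purely illustrative: u(.) is the total utility of a
# population, and transition_cost stands in for c(A -> Z), the utility
# destroyed by coercively reshaping A into Z's size and distribution.

u_A = 1_000.0            # small population, high per-capita utility
u_Z = 1_200.0            # huge population, tiny but positive per-capita utility
transition_cost = 500.0  # disruption, loss aversion, enforcement...

# The Repugnant Conclusion only claims u(Z) > u(A):
assert u_Z > u_A

# But as policy for a population that already exists, the move is net-negative
# whenever c(A -> Z) > u(Z) - u(A):
net_value_of_transition = (u_Z - u_A) - transition_cost
assert net_value_of_transition < 0
```

So even granting the ranking of static worlds, the transition cost can dominate the comparison that policy actually faces.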

As a separate point, there are just critiques of utilitarianism. It's not obvious to me why u(Z) should be equal to \sum_i z_i, where z_i are the utilities of all the individual people in population Z. For example, the cultural and technological heights enabled by the largest z_i should count for something. Maybe, alternatively, the utility should depend on the beholder. As a modern-day human, maybe I don't care so much to perpetuate a society where all the people have been reduced to epsilon-utility drones, but I care very much about having more people like me + replicating society as I know it. On the flip side, maybe there are even paradoxes, where I don't care for societies that are unrecognizably awesome and everyone is happy all the time.

Expand full comment

How do effective altruist initiatives to fight global poverty affect the expected longevity of the political regimes under which the global poor reside?

Expand full comment

It seems like most of the problems with utilitarianism only show up when you start applying powers that no mortal should ever have.

Is the solution some kind of "humble utilitarianism", in which you strive to do the greatest good for the greatest number within ordinary mortal human constraints, while also committing not to do certain things like swapping the universe for a different universe, or creating simulated humans, or killing anyone?

Be nice, give to charity, save children from ponds, don't murder that random dude to redistribute his organs, and definitely don't accept any offers from godlike entities to replace the whole universe with a different universe, because that's not your prerogative.

Expand full comment

I second DangerouslyUnstable's comment that the discounting horizon for long-term effects is MUCH MUCH shorter than for short-term effects, somewhat counterintuitively.

**We cannot trust any long-term calculations at all**

Scott, you actually gave an example of it, implicitly: "adopt utilitarianism, end up with your eyes pecked by seagulls some time down the road".

Other examples include nearly all calamities:

- The pandemic destroyed a lot of inadequate equilibria.

- The Holocaust resulted in Jews having their own state, for the first time in 1900 years.

- The comet that killed dinosaurs helped mammals.

I propose an inverse relation:

Prediction accuracy is inversely proportional to the exponential of the time frame of interest. This is much, much more brutal than exponential discounting, but it matches everyday experience, mostly because of the unknown unknowns.

To say it slightly differently:

**Longtermism gets its eyes pecked by black swans**

Expand full comment

Genuine question: why bother with ethical systems if we test them against our intuition? If we do that, don’t we implicitly assume intuitionism?

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

"But in the end I am kind of a moral nonrealist who is playing at moral realism because it seems to help my intuitions be more coherent."

Well said. In the grand scheme of things I'm a mistake theorist; I think Hume had it right in 1739 that you just can't derive an ought from an is.

But in a certain sense I'm a moral realist; I believe that moral truths genuinely exist in the world *as we perceive it*, because evolution built that into our perceptions. Unfortunately, that means that extensions of our moral intuitions far from our everyday existence often go wrong. Evolution selected for whatever muddle of utilitarianism, deontology, and virtue ethics would give the right answer (in terms of increased fitness) under the conditions its selectees were actually encountering. And of course no version of that muddle can have any ultimate claim to normativity.

Expand full comment

This comment is directed at a different level of the discourse: Am I the only one who missed that the original post was a "book review" not a "your book review"? I only realized upon seeing the Highlights From, today. The whole time I was reading the review I was thinking, "This seems like the best book review in the contest so far, but I wonder if that's only because it has more jokes in it rather than actually being more insightful."

Expand full comment

I don't think the "sadistic conclusion" is uniformly counterintuitive. Consider a large utopia. Either create 1 person living a slightly-below-neutral life, or 100 living a dull grey life. Option 1 seems possibly quite a lot better. Maybe. Even if the 1 person has util -0.00000001 and the 100 each have util +0.00000001, the difference is one dust speck, practically ignorable.

Expand full comment

You are starting to sound an awful lot like a satisficing consequentialist.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Goodness me! I was just discussing this with friends over moderately good wine and some mediocre food last night. I'm a late Boomer, but I see that GenXers have largely moved into positions of corporate power (we've got some Boomers in political power, but a lot of their policy wonks are GenX and Millennials). We old farts agreed that it's up to the next generation to solve their problems. We solved ours—imperfectly for the most part—but ultimately we improved on what was left to us by the early Boomers and the Greatest Generation. The worldwide standard of living is still higher than it's ever been. We're not seeing the regular famines we were seeing in the late 20th Century. We've had a big COVID-19 dislocation, and life expectancies worldwide have dropped somewhat—but if SARS2 follows the pattern of previous plagues, we're seeing the light at the end of the tunnel.

Global warming? The fact is that AGW has been tracking the lowest of the low-end predictive models. We might see a 1-1.5° C increase by 2100. Antarctica froze when average world temps were 6° C higher than they are today. So I don't see my grandchildren or great-grandchildren having to worry about massive sea-level rise for the rest of the 21st Century.

At current rates, world population will be plateauing at about 11 billion people around 2110 to 2120. I can't do anything about that except to tell all the youngsters to use birth control. What can I say? The world will soon be beyond my ability to offer any solutions. Good luck to you youngsters, but I'm optimistic you'll muddle through, just like we did.

Expand full comment

What if we modified humans to make it impossible for them to suffer, thus making the distinction between a +100 world and a -100 world completely meaningless? Both happiness and misery are a mere interpretation of external stimuli by a neural network, so what if we just hack the neural network to always report "happiness" no matter what?

Yeah, I know, this makes for shitty science fiction but this seems like the most obvious conclusion to the entire utilitarian debate in the next few thousand years.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

My answer to the Doomsday Argument prior is simply that we do, in fact, have enough evidence to overcome the prior (cf https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common).

(I do still think extinction soon is plausible, but that's because the piles of other evidence point that way more than it is because of the Doomsday Argument prior.)

Expand full comment

"It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence")."

Animals outnumber us a zillion to one, and most of them suffer most of the time. Therefore Earth probably has negative average utility. If we don't accept the anti-egalitarian conclusion, we should probably be paying Yog-Sothoth to devour Earth and put them out of their misery, since this would both raise the average utility to zero and equalize utility.

The anti-egalitarian conclusion seems like it might be the least bad of the bunch above. Would we really rather a million serfs in the 1500s were slightly better fed, instead of having all that renaissance art and science?

I feel like there's some value in the aggregate capabilities of humanity and this is separate from the value of hedonic states. Utilitarianism is incomplete if it only cares about the latter. If I had to choose between a trillion rats on heroin, or a supercomputer that can solve grand unified theory, I'm choosing the latter.

Expand full comment

I suspect that with the repugnant conclusion something similar to the Maxwell's demon issue with thermodynamics is being used.

Namely, if you don't think about how Maxwell's demon operates, the second law is violated! But if you pay attention to it as a system, careful analysis shows it can't work without information sources which preserve the second law.

Similarly, I suspect the "imagine a world" pump in the repugnant conclusion is hiding a similar situation. Ok, so world Z has trillions of near-suicidal people in it. Why?! Who or what is keeping them from working to improve their lives? What's imposing these conditions? The evil of that system is pretty plain, and surely must count negatively to the total utility in world Z.

Another case: lives aren't just selected into existence. Choices are made about creating them. Abstract selections of worlds a, a+, and z aren't how it works. Again, when you assess the choices involved, who is making them? Is life creation in world Z centralized? Why aren't these trillions of people allowed to exercise their own choices about creating new lives? Are they all in total agreement? What's enforcing that orthodoxy?

In our world, the utility of having more children varies dramatically, and is exhausted very quickly. Heck, having to buckle a third car seat in the middle of a small car for a couple years is apparently more trouble than an extra human life is worth. Birth rates historically are very near replacement levels. Actual people when making choices in real conditions very regularly actually decide to have fewer children than is biologically possible.

I suspect careful assessment of real mechanisms would yield the same flavor of result as for Maxwell's demon: the kind of totalitarian oppression that produces the repugnant-conclusion universe does evil that much, much more than balances out the trillions of barely-worth-living lives of misery of its inhabitants.

Expand full comment

>So maybe we could estimate that the average person in the Repugnant Conclusion would be like an American who makes $15,000

Given that global GDP per capita is about a third lower (roughly $11,000), does that mean that world would be better than the one we live in now?

Expand full comment

Utilitarians seem not to take into account the darker pleasures that constitute “happiness” for many people. Case in point: The well-known US Calvinist theologian and philosopher Jonathan Edwards (1703-58). According to Edwards, God condemns some men to an eternity of unimaginably awful pain, though he arbitrarily spares others – “arbitrarily” because none deserve to be spared:

Natural men are held in the hand of God over the pit of hell; they have deserved the fiery pit, and are already sentenced to it; and God is dreadfully provoked, his anger is as great towards them as to those who are actually suffering the executions of the fierceness of his wrath in hell…; the devil is waiting for them, hell is gaping for them, the flames gather and flash about them, and would fain lay hold on them…: and…there are no means within reach that can be any security to them.

Notice that Edwards says “they have deserved the fiery pit”. Edwards insists that men ought to be condemned to eternal pain; and his position isn’t that this is right because God wants it, but rather that God wants it because it is right. For him, moral standards exist independently of God, and God can be assessed in the light of them.

Of course, Edwards and people like him do not send people to hell; but a question may still arise of whether intuitive human sympathies for others (“moral intuitions”) conflict with a principled moral approval of eternal torment. Didn’t Edwards find it painful to contemplate any fellow human’s being tortured for ever? In his case: Apparently not. Edwards claims that “the saints in glory will…understand how terrible the sufferings of the damned are; yet…will not be sorry for [them].” He bases this on a rather ugly view of what makes people, including people in Paradise, feel happy:

The seeing of the calamities of others tends to heighten the sense of our own enjoyments. When the saints in glory, therefore, shall see the doleful state of the damned, how will this heighten their sense of the blessedness of their own state…When they shall see how miserable others of their fellow-creatures are…; when they shall see the smoke of their torment,…and hear the dolorous shrieks and cries, and consider that they in the mean time are in the most blissful state, and shall surely be in it to all eternity; how will they rejoice!

…some food for thought about "happiness" for present-day utilitarian philosophers here, perhaps.

(The above, with slight modifications, is lifted from the philosopher Jonathan Bennett's classic article "The Conscience of Huckleberry Finn")


Even considering that the metaphor breaks down under further examination, I still disagree that I should go and pick up the glass for hypothetical children. This is because the line of thought rapidly leads to "if I ever get pregnant I'm not allowed to get rid of it".

I think there's a certain threshold of harm to myself vs. the amount and certainty of harm to future people where I'd be willing to do it. Abortion is huge harm to myself vs. huge certain harm to a future person. The glass example is minor harm to me vs. moderate uncertain harm to a future person, and for me the calculus comes out favouring me.

Most people already make this trade-off constantly - every time you choose to fly from the US to Europe rather than swim, you're picking a lower risk of drowning for you vs. a semi-certain moderate-to-large harm for future people.


Isn't there an unstated premise here that you have only one (moral) value? I want a happy life, but I also want an interesting life, and they're two separate values that are allowed to have more complex interactions than just adding them together. I want a universe populated by happy people, but I also want one which is interesting and complex. I feel like this makes it pretty obvious why a static Alaskan town is missing something - it's only addressing one particular value in our value set.


Reading the description of the population Repugnant Conclusion is a reminder of why we rarely end up with philosopher kings. They would still be trying to justify themselves even as their immiserated subjects were stringing them up to the nearest lamp pole.


I really wish this kind of thing were taught to students at a certain age, probably earlier than 15-16. And the point of the lesson should be, pointedly, that you can use your rational mind to check and train your intuitions, but in the end you're very much allowed to throw away reason if it smells fishy. It's another kind of intellectual humility - even if the logic is perfect you can still fail, and your logic is literally never perfect.

There's a point in most people's lives where they're liable to follow a very reasonable train of thought all the way to the Holocaust or the Cultural Revolution. And I don't think the current water supply contains this particular failure mode explicitly yet - probably because popular memeplexes work hard on convincing you that _their_ logic is the correct one, and very few teach general skepticism as an Art in itself.


Okay, so both Average Utilitarianism and Total Utilitarianism have some advantages and disadvantages in capturing our moral intuitions. It seems that an important source of failure modes is the whole line of reasoning about creating/killing people instead of helping people who already exist. I'm not sure how to properly address this right now, but there seems to be some potential there.

On the other hand, as a quick fix, what if we just combine them: U = T + A, where utility has both a total utilitarian component and an average utilitarian one? The Repugnant Conclusion becomes less repugnant. We are ready to sacrifice some utility for the sake of more people, but not up to the point where everybody is barely suicidal. Same thing with killing the least happy people: the total utilitarian component gives us a bar restricting killing people above some level of utility. What relative weight of the total and average components would be correct is obviously up to fine-tuning. What are the possible terrible consequences of this approach?
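As a toy sketch of this blend (all numbers and the weight `k` are my own illustrative choices, not principled values):

```python
def total_utility(n, u):
    # Total utilitarian value of a uniform world: n people at welfare u.
    return n * u

def combined_utility(n, u, k=1e11):
    # U = T + k*A: total welfare plus a weighted average-welfare term.
    # For a uniform world the average is just u.
    return n * u + k * u

# World A: 5 billion people at welfare 100; World Z: 10^15 people at welfare 0.01.
print(total_utility(1e15, 0.01) > total_utility(5e9, 100))       # True: pure totals prefer Z
print(combined_utility(5e9, 100) > combined_utility(1e15, 0.01)) # True: the blend prefers A
```

Whether this actually dodges the other population paradoxes depends entirely on how k compares to the population sizes involved, which is exactly the fine-tuning question raised above.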


Philosophical discussions from 4000 BCE: Urgrah the sage says to Gorgoro the rock basher. "See sticks, some long, some short. Put sticks in piles. One pile many many long sticks, one short stick. Other pile, few long sticks, one short stick. Tie pelt over eyes and spin, then grab stick from one of piles. Maybe you grab short stick, then you probably pulled from small pile of sticks. Now know all caveman must be small pile, because drew short stick to be under the stars now." At this Gorgoro became amazed and delighted at how lucky he was to live at the peak of human civilization.
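Urgrah's parable is just Bayes' rule; a quick sketch with hypothetical pile sizes:

```python
from fractions import Fraction

# Two piles, each with exactly one short stick (sizes are made up).
# Tie pelt over eyes, pick a pile at random, then a stick at random from it.
big, small = 1000, 10
prior = Fraction(1, 2)
p_short_given_big = Fraction(1, big)
p_short_given_small = Fraction(1, small)

posterior_small = (prior * p_short_given_small) / (
    prior * p_short_given_small + prior * p_short_given_big
)
print(float(posterior_small))  # ~0.99: drawing the short stick points to the small pile
```

This is the structure of the Doomsday Argument: being an "early" human (the short stick) is claimed as evidence that the pile of all humans who will ever live is small.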

Aug 26, 2022·edited Aug 26, 2022

> It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'")

I don't actually see much of a problem here. If the threshold for "positive welfare" is high, then the Repugnant Conclusion isn't very repugnant. If it's low, then the Sadistic Conclusion seems reasonable--you would rather have a small number of people suffering than a very large number of people who are each suffering slightly less. If the arguments don't put any sort of bound on what is meant by "larger" or on the difference between the two happiness levels, then these conclusions become even clearer, in my opinion.

(Actually, I'm not sure the "Sadistic Conclusion" is that wrong, regardless of the threshold chosen. If the slightly-worse group must be allowed to be arbitrarily large for the argument to go through, then which is better: one city of extremely happy people, or one galaxy of slightly less happy people?)


My model of David Chapman is that he was semi-serious about "philosophy is bad" and that he wasn't just describing your post!


I think the actual and fairly trivial answer to the "Repugnant Conclusion" is that the total happiness over time will be vastly higher for the non-tiled population every time. If we look at the world, most advances come from the developed world, not the vast bulk of humanity, and moreover, from the top end of the developed world.

This suggests that any "tiling" argument fails because better people actually lead to exponentially higher levels of growth. Indeed, if we look at history, the Industrial Revolution occurred after several centuries of growth that made the average person in parts of the world multiple times better off than subsistence farmers.

This led to insane levels of economic growth and growth of human well-being.

As such, the entire argument immediately fails because it is obvious that you see much better outcomes with better populations - and you see more people that way, too.

Moreover, because our requirements for minimum standards are non-stable and go up over time, this means that what is considered acceptable today is not considered acceptable in the future. This is why India and the US come out around similar levels - what is considered tolerable in India is not tolerable here.

As such, the entire argument is pretty awful and fails on a pretty basic level.

Not to mention the other obvious problems (like the fact that people don't like being miserable so are likely to effect changes you don't want if they are kept in this state).

Well, that and the fact that the ideology inherently says that the most ethical thing we can do now is commit serial eugenics-based genocide because those near-infinite number of future people will be better off if they are all geniuses and have very few criminals in their ranks, and we know from behavioral genetics that these traits are heritable, so because those future people matter SO much, it's obviously horribly unethical for us NOT to set people on the right path.

Right? :V

Aug 26, 2022·edited Aug 26, 2022

The mildest of preferences against inequality can defeat the repugnant conclusion. If everyone is very very very happy, then bringing a child into the world who is merely very happy (let alone neutral) will increase inequality instantly. The badness is not connected with the happiness of the child, but the inequality that you have birthed (ha!) on society.

You can get a formal population ethics that formalises this if you want. For instance, arrange everyone from lowest life utility to highest, and apply a discount rate across this before summing. Then bringing people into existence can be a net negative, even if their own utility is positive. I'm not saying you should use this system, just that there exist perfectly fine formalisations of population ethics that avoid the repugnant conclusion entirely.
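A minimal sketch of such a rank-discounted sum (the 0.9 discount rate and the toy utilities are my own illustrative choices, not a claim about the commenter's preferred formalisation):

```python
def rank_discounted_value(utilities, d=0.9):
    # Weight the i-th lowest person's utility by d**i before summing,
    # so worse-off lives count for more and each extra life is discounted.
    return sum(u * d**i for i, u in enumerate(sorted(utilities)))

happy_world = [10.0] * 5
with_new_child = happy_world + [1.0]  # add one merely-happy person

# Adding a positive-utility life can lower the rank-discounted total:
print(rank_discounted_value(with_new_child) < rank_discounted_value(happy_world))  # True
```

The new child takes the most heavily weighted (lowest) rank while pushing everyone else to more discounted ranks, which is how a positive life can be a net negative here.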

Now, as someone has pointed out, if you avoid the repugnant conclusion, you have to embrace another bad-sounding conclusion. The one I want to embrace is the so-called "sadistic conclusion". I say "so-called", because it isn't really a sadistic conclusion at all. It says, roughly, that sometimes it's better to bring into existence someone whose life isn't worth living than a larger collection of people whose lives are barely worth living.

Sounds sadistic for that poor person whose life isn't worth living - but what the "sadistic" moniker conceals is that it's a *bad* thing to bring that person into existence. It's also bad to bring those other people into existence, for non-total utilitarianism ethics. All that the sadistic conclusion is saying is that, between two bad things, we can make one of them worse by scaling it up.

More details here: http://blog.practicalethics.ox.ac.uk/2014/02/embracing-the-sadistic-conclusion/


This is the point where I should bring up the Very Repugnant Conclusion, that Toby Ord shared with me.

If W is a world with a trillion very very very happy people, then there is a better world W', which has a trillion trillion trillion people getting horribly tortured, and a trillion trillion trillion trillion trillion trillion people living barely passable lives (or numbers of that magnitude; add a few trillions as needed).



Didn't comment on the last one, but the comment that struck me most was the one about the infinite oregano. That is, the problem lies with taking statistical averages to pool together people that are stated to not interact with each other. If you instead take a soup approach and assume a person's value is based on their direct effect on the state of the existing group, the "make more, less happy people" problem goes away.

I guess I'll try to make an example. A baby with no living relatives is given to an AI facility who puts it in a solitary room, and gives it food and water that would inevitably go bad and be thrown away if the baby didn't get it. If the baby lives its whole life in that room, and nothing they do leaves the room, is their life's value based on how happy or depressed they are? Or do we say their life is a complete neutrality? If it's a complete neutrality because they never affect anyone else, the Repugnant Conclusion disappears.


+1 to utilitarianism being fundamentally broken, moral philosophy being inherently incoherent and a waste of time trying to rationalize moral instincts that aren't rational in the first place.

Instead, just help others purely for selfish reasons like because it'll make you feel good, look good to others, or because it helps to maintain a common resource that you/your community benefits from (e.g. social safety nets, charities, Wikipedia, open source projects). That seems much more coherent and in line with our own moral instincts.

Aug 27, 2022·edited Aug 27, 2022

I'm pretty unconvinced by the non anti-utilitarian argument. If people are at exactly the same utility they are far more likely to be having the same experiences. It's fairly plausible that having the same experiences doesn't actually count as different sources of value, e.g. see https://slatestarcodex.com/2015/03/15/answer-to-job/


The response (16) to Mentat Saboteur's explanation of glass shards and barefoot running misses something. Every. Single. Longtermism. Hypothetical. Has an answer like this.

The more you dig into each risk, the more you realise that the risks are not as profound as initially thought. After a while you start to notice a pattern: none of the risks pan out. It doesn't mean that the risks shouldn't be taken seriously; a runner could still cut their foot, a nuclear war could still kill 100 million people; it's just that the risks aren't existential.

Where gaps in knowledge exist fear arises. It's natural to be afraid of the dark, but maybe grab a torch and steelman that shit.


World A (five billion happy people) means "and humanity goes extinct after that". As a data point, my intuition doesn't support that at all, even compared to "colonise the Virgo Cluster and exist for millions of years as a cyberpunk dystopia", if that's the only possible variant.

I'd also support the Repugnant Conclusion if the question is "utterly destroy a miserable megapolis or a happy hamlet".


> If playing Civ and losing was genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn't feel true,

Well, that does feel true to me. (Except insofar as a "game" of Civ where I know ahead of time whether I win or lose is not quite the same as an actual game, but this isn't applicable to children -- children who underwent embryo screening are just as real as ones who didn't.)

> any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence").

The Sadistic Conclusion doesn't *at all* sound obviously wrong to me, not even if I translate it to the first person. (Would I rather my life was extended by thirty years of boredom and drudgery than by one hour of excruciating pain? The hell I would!) So a system which avoided the other three conclusions might still feel okay to me even if it accepted the Sadistic one.

> There’s been some debate about whether we should additionally have an explicit discount rate, where we count future people as genuinely less important than us.

More precisely:

Any faster- (or slower-)than-exponential discounting would be dynamically inconsistent (you would pick A over B if choosing thirty years in advance but you would pick B over A if choosing five minutes in advance *even if you didn't learn anything new in the meantime).

Any exponential discounting would either care about the year 1,102,022 a helluva lot less than about 1,002,022 or care about 3022 nearly as much as about 2022.
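A quick numerical check of that dilemma, with two illustrative annual discount factors:

```python
# An exponential discounter weights year t+n at d**n relative to year t,
# where d is the annual discount factor (both values here are illustrative).
for d in (0.999999, 0.999):
    near = d ** 1000      # year 3022 relative to 2022
    far = d ** 100_000    # year 1,102,022 relative to 1,002,022
    print(f"d={d}: 1000-year weight {near:.3f}, 100,000-year weight {far:.2e}")
```

With d = 0.999999, the year 3022 keeps ~99.9% of 2022's weight (and even +100,000 years keeps ~90%); with d = 0.999, the 100,000-year ratio collapses to roughly 10^-44. No single annual factor both takes the deep future seriously and discounts the near future meaningfully.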

> Hari Seldon writes: ... If every human who ever lived or ever will live said "I am not in the first 0.01% of humans to be born", 99.99% of them would be right. If we're going by Bayesian reasoning, that's an awfully strong prior to overcome.

That argument isn't novel and has been extensively discussed for decades. https://en.wikipedia.org/wiki/Doomsday_argument


I feel that the only place where I disagree with MacAskill is that I don't believe in "sufficiently large" population - more specifically, I believe that people can have degrees of being "the same person" other than 0 or 1. That is to say, once there is a sufficiently large/diverse population, it becomes bad to create someone below average, since you're shifting people's experience-measure in a worse direction. Likewise, it becomes close to pointless to make more people eventually, since they already mostly exist.


One potential solution is to factor in uncertainties and practical concerns. Instead of asking "which of these two worlds do you prefer", realize that in the real world you always start from somewhere and are proposing changes to go somewhere else.

So if you start from a world of five billion perfectly happy people and go to a world of ten billion slightly less happy people, you have to ask "how wide are the error bars on that?" Because for any change to that many people, they're going to be pretty damn wide. So instead of going from 100 happiness to 95 happiness, you might accidentally go from 100 to 75. Which really changes the calculation. Anyone who says that they have more certainty about a proposed change of that magnitude can not be taken seriously.

The math works the same way going the other direction, starting from ten billion going to five billion. The error bars are too wide.

If you start from somewhere and go somewhere else, and you're uncertain about how close you'll get to your goal, it naturally limits the whole process. Even if you go through the cycle once or twice, quadrupling the population, at that point your error bars have only gotten wider, and your gains smaller, so the process naturally stops because it's no longer a clear win. You certainly won't go all the way to a quadrillion people with 0.01 happiness, because it's impossible to make those fine adjustments at those magnitudes.

Someone mentioned higher up that these philosophical preferences are not transitive. If I prefer A over B, and B over C, that doesn't mean I prefer A over C. This is why. It's just math with error bars.


Huh. My take on utilitarianism is actually heavily informed by Unsong.



Translate to utilitarianism: sum utilitarianism, weighted by distinctiveness. If two humans have functionally identical lives of muzak and potatoes, that only counts for one, come up with something original if you want more points. Can you create ten billion meaningfully distinct barely-worth-living lives? If you can, I approve, but I bet you can't. If you simulate a copy of me in hell, that sucks, but I don't much care if the counter next to the simulation says "1 copy" or "9999 copies".

"Meaningfully distinct" is pulling a lot of weight here, and I'm still searching for a mechanism that doesn't incentivize accepting a long string of random numbers into your identity. But this is a descriptive take, not prescriptive, and I think eventually a solution could be found.


A crucial difference between people who operate in philosophy la-la-land vs. reality is that the former naturally gravitate towards models where things are known/predictable for certain, while almost nothing in the real world works like that. In the real world you need an appropriate margin of safety for your decisions, which is why you'd never, ever even consider getting close to that 0.001 happiness level, because there is an excellent chance you undershoot it by, oh I don't know, 50 points. And at least in my book, if I have a 50% chance of undershooting by 50 points and a 50% chance of overshooting by 50 points, I wouldn't take the risk of creating that much suffering by aiming for the neutral point. I'd shoot for a 50 average.


I'm not sure where this fits in, but I would like to raise my personal objection to the "twenty-nine steps that end with your eyes getting pecked out by seagulls" process.

This kind of thing typically starts with presenting an extremely contrived hypothetical (like "you get to choose between creating a world with X people at Y average happiness, or 2X people at Y-5 average happiness" -- I get to choose that? Really? Where's my world-creation button?) Then it asks people questions about their moral intuitions regarding the hypothetical. Then it takes the answers to those questions, and proceeds to turn them into iron-hard mathematical axioms that are used to build up a giant logic train that ends up in seagulls eating your eyes.

The problem here is that intuitive answers to moral questions about extremely contrived hypotheticals are not reliable information. If you ask me some extremely weird moral hypothetical question, my credence in whatever answer I give is not going to be high. Maybe you could get 80% certainty on an answer like that -- honestly, it's probably lower, but it's certainly not higher.

Turning an intuition into an iron-hard mathematical axiom, of course, requires that it be at 100% certainty. So the sleight of hand is in the bit where you take a mushy contingent answer, and turn it into a 100%-certain axiom. If you actually try to do logical deduction using that 80%-certain intuitive guess, then you have to lower your certainty with each logical step you make, just as when making calculations with physical parameters you have to multiply tolerances. Doing this correctly, you end up saying "this philosophical exercise suggests that, at 5% certainty, you should let seagulls peck out your eyes". To which it is obvious that you're justified in answering "so what".
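The certainty decay is just repeated multiplication; a sketch using the 80% figure above:

```python
# If each deductive step preserves only p of your confidence in the chain,
# certainty in the conclusion is p**steps (p = 0.8, the guess from above).
p = 0.8
for steps in (1, 5, 13, 29):
    print(f"{steps:2d} steps -> {p ** steps:.4f} certainty")
```

By the 29th step of the seagull chain you're below 0.2% certainty; even the quoted 5% figure (reached around step 13) is generous.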

(A related technique is, IMO, the best way of dissolving Pascal's Wager/Mugging issues. The point of Pascal's Mugging is to present you with an artificial probability distribution where paying the mugger is positive-EV. But the EV, i.e. the mean of the probability distribution, is not the only relevant quantity; you also need to take variance into account. The variance of the Pascal's Mugging distribution is extreme, and the modal outcome is one where the value of paying the mugger is negative. The trick in Pascal's Mugging is to get you to ignore this variance information, which is very relevant to the decision.)
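A toy mugging distribution (all numbers hypothetical) makes the mean/mode/variance split concrete:

```python
# Pay a $10 fee to the mugger; with probability p you receive a huge
# jackpot, otherwise nothing. All numbers are made up for illustration.
p, jackpot, fee = 1e-6, 1e9, 10.0

mean = p * (jackpot - fee) + (1 - p) * (-fee)  # positive EV (~ +$990)
modal = -fee                                   # but almost every run just loses the fee
variance = p * (jackpot - fee - mean) ** 2 + (1 - p) * (-fee - mean) ** 2

print(f"mean {mean:.0f}, modal outcome {modal:.0f}, std dev {variance ** 0.5:.0f}")
```

The EV is positive, but it is driven entirely by a tail that gives the distribution a standard deviation around a million dollars; the modal outcome is a straight loss, which is exactly the information the mugger wants you to ignore.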

Sep 5, 2022·edited Sep 5, 2022

These Utilitarian thought experiments sound a lot like Scholasticism. How many angels can dance on the head of a pin? Can God make a stone so heavy that he can't lift it?

I don't intend to be mean spirited, but isn't this all a bit... well... juvenile?


"But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against..."

Imagine that every positive integer is conscious. What prior probability should the number 4 place on the proposition "I experience being a single-digit integer?" Why should the conscious entity 4 be surprised that it's a 4? While there may be infinitely many possible integers, there is guaranteed to be a 4, which means that one consciousness is guaranteed to experience the qualia "I am a 4."

Alternate thought experiment: consider two possible worlds. In World A, humanity explodes into nonillions of descendants sometime after 2100 CE. World B is identical to World A, except that the universe vanishes in a cosmic accident on January 1, 2080, and no one after that is born. Now consider Bob, who lives until 2050 in both worlds. His experiences are identical in both worlds. Why should Bob take his experiences as evidence that he lives in World B? Both worlds contain a Bob!

Regardless of what the future looks like, the present contains Bob. He has conscious experiences, and those experiences will always be had by Bob's consciousness, not anyone else's. Why should it surprise the-conscious-entity-experiencing-Bob's-life that it is experiencing Bob's life?

I posit that our current experiences tell us nothing whatsoever about the likelihood of the universe imploding sometime in the future, except insofar as we can trace a causal path from our current observations to future events. Anthropic reasoning doesn't work for estimating future humans.


> I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

As someone whose intuitions see the repugnant conclusion as "not that repugnant", I think "killing somebody with 40 decent years of life left" and "not letting somebody be born with 40 decent years of life left" are at least somewhat close in badness.

My intuitions take "probable persons should be evaluated equivalently to existing persons" quite far, and I haven't had time to work out the math (which is, probably, not easy, and may cause lots of views to change). But I want to file that I do have these intuitions.

Sep 22, 2022·edited Sep 22, 2022

Something that bothers me about population ethics is that it seems like we live in a Big Universe, and the intuitions from hypotheticals like "what if the universe was just 5,000 people", "what if the universe was just a billion people in Hell", "what if the universe was just a high-tech mega-civilization spanning the Virgo Supercluster", etc. seem like they transfer poorly to the world described by modern physics.

(We seem to be embedded in two unrelated "multiverses" - space-time seems to extend much, much further than the visible universe, and the many-worlds interpretation of quantum mechanics suggests many overlapping areas embodying different quantum outcomes. That's without even getting into more speculative ideas.)

Average utilitarianism is intuitively appealing when it comes to smaller toy models like this, but it only works if you know the whole state of the world. If we're extremely uncertain about the state of everyone outside our tiny bubble, it becomes undecidable: we're left radically uncertain about whether any lives we're capable of creating are bringing the average up or dragging it down.

Scope insensitivity also makes these questions awkward. Is the reason we like the glorious high-tech civilizations more than the larger but worse-off ones of the Repugnant Conclusion just that our brains are too small to really represent the number of people in either of them, and we're just comparing individual lives? If so, should we ignore our intuitions here because we know they're irrational, the same way we should in a Monty Hall problem?

Edit: A third issue: we tend to reduce utilitarianism to hedonic utilitarianism in these discussions. But while there are convincing arguments that you have to be some sort of utilitarian or else your preferences are inconsistent and you can fall into endless loops and the like, there's no such argument requiring us to be pure hedonists. Obviously happiness is immensely important, but we can attach utility to other things as well.

For example, if we regard death as intrinsically very bad, then any new mortal life we create needs to be good enough to balance out the fact that they will someday die; but once they are created that cost is sunk, and they only need to be barely worth living to justify not committing suicide. (Although this runs into some interesting problems of how exactly to define death, I basically think it's true.)
