394 Comments
Comment deleted

(some context from https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of/comment/8633798)

Pretty sure that game theoretically you do still run into the drowning child scenario, in the sense that if you could give me my child's life at minimal cost to you and did not, we'll probably have issues (fewer than if you had killed my child, of course). That seems like part of the calculus of coordinating human society.

Comment deleted

It seems like you're choosing which things are rights and violations without appealing to your foundational game theory morality now? I'm not saying you're wrong, just that I don't understand how you've decided where the line is.

If it confers game theoretical benefits to harm someone at a particular level of could've/should've, then doesn't this framework suggest it? Ignore here the can of worms around figuring out if you really could've/should've, or explicitly appeal to it.

No existing countries are bound by this framework, so what they do is irrelevant.

Comment deleted

We're not talking about whether or not we like or believe something here. There is no such thing as a right (positive or negative) outside of some framework that defines them. My understanding was that your set of rights are defined by game-theory, which would not obviously exclude negative rights.

If you're already jumping to the step where you bend your framework to make it useful or acceptable, it's probably fair for me to point out that trying to measure rights-violation-utils is in practice just as silly as trying to measure QALY-utils -- accuracy is going to be extremely rough.

Comment deleted

Comment deleted

Comment deleted

This is a good point. We would struggle to determine the line between very small net positive happiness and objective unhappiness. It could be that most people plod along day after day at -10 or whatever, and some researcher/philosopher would mistake that for 0.1 and say they were fine. Not that we have a way to benchmark or calibrate what either of those numbers would mean in real life.

Comment deleted

You don't moonlight as Reg, the leader of the People's Front of Judea from Monty Python's Life of Brian, do you?

"All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?”


How do you measure happiness? A caveman sitting in the warm summer sunshine, in good health, with a full belly, and their mate and offspring by their side, could have been perfectly happy, in a manner that, if we managed to create a felixometer to measure "brain signals corresponding to happiness", hooked our caveman up to it, measured them, then measured someone living in today's Future World Of Scientific Progress, would score just as highly.

How can we distinguish between our caveman who is warm and happy and our modern person who is warm and happy? Both of them score 100 on the felixometer. How much does Science And Progress contribute there? Yes, of course it contributes, but we can't say that "a single felicity unit on the felixometer" matches up to "ten years of advancement in progress", such that if we had a time machine to jump into and measure the happiness of a future person living in a space colony, they would measure 2,000 units, our current-day modern person would measure 100 units, and our caveman would only measure 1.

"I'm physically and emotionally satisfied, not in pain, not hungry, comfortable, and it's a lovely day in unspoiled nature" is the same for Cave Guy, Modern Guy and Future Guy and will measure the same on our felixometer (even if "physically and emotionally satisfied" includes antibiotics and surgery to treat sources of physical pain for Modern Guy that Cave Guy can't even dream of).

That's why "measures of happiness/contentment" don't tell us very much, and why, as above, "depressed guy in America" and "depressed guy in India" can each be roughly 10% of their respective populations, even if living conditions in America and India are very different.


I see two major differences in what we are calling "happiness" between our society and a pre-modern society. Our society is far more complex and comes with lots of obligations, stress, and necessary knowledge (i.e. you have to go to school for over a decade to meaningfully participate). So that's on the negative side, lowering much of the potential happiness. On the other side, our society has much better insulation against sudden unhappiness: disease, being killed by neighbors or wild animals.

The question seems to be more about average happiness (which would be really hard to determine) or preference for avoiding catastrophe. My wife and I prefer a steady income to a potentially higher but more volatile income. I would prefer today's society (assuming my preferences would carry over) to a caveman's, based on that.


""All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?”"

How could any of this stuff possibly matter except to the extent that people's experienced sense of wellbeing has improved as a result? And if you're saying this stuff *ought* to have improved our sense of wellbeing, well, that's irrelevant. What matters is what has actually happened.


While people are having Repugnant Conclusion takes, I might as well throw in a plug for my post from 2016 about "the asymmetry between deprivation and non-deprivation," which is an under-appreciated nuance of the issue IMO:

https://lapsarianalyst.tumblr.com/post/147212445190/i-was-thinking-today-about-why-total


I definitely appreciate this post and think you're basically right

Though when I read "(Somehow I suspect this will be challenged. I am curious to see what people will say. Maybe humans can be happier than I think.)" my inner Nick Cammarata leaps forth. It seems like some people really do live in a state of perpetual, noticeable joy

(Maybe we should be figuring out how to attain that? I know for him it involved meditation and a sprinkle of psychedelics, but I have no clue how I would do the same)


How can mortal beings who are aware of their own mortality ever be unproblematically happy? With the possible exception of the lobotomised....


I think the real answer is that how happy you are is only tenuously and conditionally related to the facts of your life. There's no a priori answer to "set of facts {X, Y, Z} are true, should that make me feel happy?" If there's some process that can take any normal set of facts and make a person feel good about them, then almost anyone could be happy.


Well, this is "happy" the way a man who falls from a skyscraper can tell himself as he passes floor after floor on the way down: "All in all, I am quite happy with my situation - I haven't had a scratch so far".


I agree with this, and this fact alone should give us great pause about our ability to rationally determine how to best create happiness. It seems to me the best way to create happiness is to create the kind of person who always seems to find the good in things and focus on that. As SA's example suggests (unhappiness in India vs. the US being roughly the same percentage, despite the significant differences in physical reality), there will be people unhappy in almost any circumstance, people who will be happy in almost any circumstance, and people in between.


The US founding fathers wisely stated in the Declaration of Independence that “life, liberty and the pursuit of happiness” are unalienable rights. That turn of phrase is rather clever; notice the difference from stating that “life, liberty and happiness” is what rulers should aim at providing. I pale at the thought of some utilitarian (or a future AGI?) getting into a position of power and insisting on providing me and everybody else with happiness; and I pale even more if his/her/its ambition is expanded to make everybody happy in the untold millennia to come.

The late prime minister of Sweden, Tage Erlander, said something rather similar to the US founding fathers, concerning the Scandinavian welfare states. He stated (quoting from memory) that the role of the state is to provide the population with a solid dance floor. “But the dancing, people must do themselves”.


I'd agree with this — not in the sense that it's literally impossible (you could not care, or believe in an afterlife, etc), but in the sense that *I* cannot imagine it and I think most who aren't extremely bothered by it are in some sense wrong / not really thinking about it / have managed to persuade themselves of the usual coping mechanisms.*

Not that this *has* to be the case — perhaps, say, a true Buddhist or Stoic might have legitimately gotten themselves to a place wherein it doesn't bother them without leaning on afterlives or ignorance — just that I believe most people share my basic values here whether they admit it or not.

*That is, the first two options there are pretty self-explanatory; for the third, think of the stuff you hear normies say to be "deep" and "wise": "It's natural, the circle of life, you see; beautiful, in its way... and you'll live on in your children and the memories they have of you!"

Yeah okay but if you knew you'd die tomorrow unless you took some action, would you be like "nah I won't take that action because I'd prefer the 'children and memories' thing to continued life"? No. It's a consolation prize, not what anyone actually prefers. Not something anyone would get much comfort from if there was a choice.

You can make anything shitty *sound* good — "some people are rich, some people are poor; it's natural, the circle of wealth, beautiful in its way... and you can be rich in spirit!" — but make it more concrete, by posing a choice between "rich" and "poor" or "alive" and "dead", and we see how much the latter options are really worth.


Really interesting post, thanks for sharing.

> we could say that depression just is being close to the zero point, and depression-the-illness is a horrible thing that, unlike other deprivations, drives people’s utilities to near-zero just by itself.

Being depressed felt like everything in life - including continued existence and meeting my basic needs - is too _hard_, in the sense of requiring a lot of effort that I physically cannot expend. Being suicidal felt like I just want what I am feeling to stop, and it's more important than self-preservation because preserving the self will just lead to feeling this forever. I've heard it compared to jumping out of a burning building and I really like this analogy.

Come to think of it, it's the inability to imagine getting better that's the cognitive distortion, the rest of the reasoning is solid. I think we as a culture taboo this kind of thought because you fundamentally cannot estimate your future happiness or suffering.


>I've heard it compared to jumping out of a burning building

David Foster Wallace's expression of this idea is close to my heart: (linked so one can choose whether to read)

https://www.goodreads.com/quotes/200381-the-so-called-psychotically-depressed-person-who-tries-to-kill-herself

Of course it doesn't capture every type of suicidal ideation or behavior, but it's an excruciatingly keen picture of the type that eventually took Wallace and whose shadow looms over several whom I love, as well as, at times, me.


+1 good post from me too. It does seem right that non-linearity of non-deprivations is separate from hedonic treadmill, a point that I haven't considered before.


I don't think it's fair to consider depression the zero point. Humans evolved to want to survive, and our cultures generally reinforce this desire. People who would choose not to exist are, for whatever reason, not thinking normally. There's no good reason to think it's closely related to actual quality of life, and people in chronic pain may have much lower average quality of life than people with depression (which is usually temporary). The bar should be whether you personally would choose to live that person's life in addition to your own or not.


You might have Schopenhauer as an intellectual ally. I think of his argument that if you compare the pleasure a predator feels when eating a prey, with the pain the prey feels while being eaten, it is obvious that pain is more intense than pleasure in the world. Plus, there is much more of it.


Similarly, "The rabbit is running for its life, the fox only for its dinner". I should note I still embrace the Repugnant Conclusion (even while also being a moral non-realist).


I'm honored to be highlighted and get a sarcastic response, although I think I defended the relationship between the weakness of the thought experiment and its consequences for a philosophy of long-termism in the replies to my comment.

Also, if water didn't damage suits, wouldn't your obligation to save the drowning child be more obvious rather than less? An extremely selfish person might respond to the Drowning Child thought experiment by saying, "Why no, the value of my suit is higher to me than the value of that child's life, that's my utility function, I'm cool here." Wouldn't the analogous situation be if children couldn't drown (or my mocking of MacAskill with a version of the Drowning Child where the child is Aquaman)?


I think Scott's sarcastic point is that it would mean that, actually, *you* never want to sacrifice resources to save someone else – the thought experiment that led you *to think that you want it* was flawed.

But yeah, it's more of a joke than something that stands up to scrutiny.


Having occasionally ditched my suit as fast as possible, and having some basic swimming skills including life-saving training: the time it takes to ditch the suit is much less than the time you will save in the water without it, assuming the child is far enough from shore to be in deep water, which is where a drowning person is likely to be. Not to mention the likely effectiveness and safety, for both of you, of your mission.


This. The hypothetical is really badly set up.


Specify that you can tell the water is 4 ft deep and the child is drowning because they are 3 ft tall and can’t swim. They are far enough that you can’t reach from shore, but close enough that you don’t have to swim any great distance. Does it work now? I guess also specify that you are over 4 ft tall so you can just stand.


That's the way I read it; a child is drowning in water an adult can wade in. Otherwise, my inability to swim means I watch the kid die and thank God that at least my suit's okay.


It's also not really an argument for anything resembling utilitarianism - if it was a really nice suit, and the child was horribly abused (and thus very unhappy), utilitarianism is the only serious moral philosophy I'm aware of that advocates not saving the child for the sake of the suit.


It's odd, because the proponents of utilitarianism seem to take it as a positive trait that their philosophy notices that saving a child is good, even if it would ruin a suit. As you say, other major philosophies don't consider saving the suit to be a moral consideration at all. I think I can see how they would look at that as a better system, because someone would consider the suit in an analysis of the situation - but I don't consider that part of the *moral* calculations. You always save the child, whether you're wearing swimming trunks or an expensive dress - that's the moral imperative. That you bear the burden of a ruined suit is a practical consideration that doesn't impact the morals.


It’s also weird if you think about who you should save - a very happy successful businessman or a horribly abused child. Most people’s moral intuition is you save the child, rather than whoever has the nicest life.


If water didn't damage suits, then that analogy can't be extended to spending money on bed nets.


Yes! "Long termism's analogies don't work because the real world details are more complex than they allowed" for is a compelling analogy for "long termism doesn't work because the real world details are more complex than they can calculate for" and deserves a better response than sarcasm.


I get that pedantry is annoying, but the alternative is being wrong.


Your pedantic comment, while not being data, is an illustrative example of how everyone screws up when trying to predict the future. I think that's still an interesting tangent to go down. Is longtermism any more useful than neartermism if we fundamentally can't predict long-term consequences? How many seemingly-watertight arguments are going to be totally wrong in hindsight? Are enough of them going to be right to outweigh the ones that are wrong?


Man, I'm getting put in the pedant box? Can I convince you that I'm something far worse than a pedant?

My real motivation was that it annoys the hell out of me when people make stupid assumptions about the dangers of being barefoot. Moderns think that human feet are soft and weak and incapable of being exposed to anything other than carpet. To be fair, most Western people's feet ARE soft and weak, because they've kept them in cumbersome protective coverings for the majority of their lives. But this is not true, human feet, when properly exercised and conditioned, are actually superior to shoes for most cases. This makes sense if you think about it for 20 seconds - shoes did not exist for the vast majority of human history, and if human feet were so weak, then everyone would have been incapacitated pretty quickly by stepping on something sharp. I recommend protective coverings if you're a construction worker, of course, there are dangers there that didn't exist in the ancestral environment. But every time I'm out hiking or running on sidewalks, some asshole feels the need to ask me if I forgot my shoes, or make some other ignorant comment. This annoys me, and seeing an opportunity to educate an internet community on barefooting, I took the opportunity.

So you see, I am not a pedant, even though everything you say is correct and I agree entirely and that factored into my comment. But my primary motivation was advocating for barefooting, and by being obnoxious, I got Scott to signal boost my comment. Hahaha, my evil plan to defeat the shoe industry and restore strong feet is succeeding!


I object to bare feet on aesthetic grounds


I object to shoes on aesthetic grounds.


I object to your reasoning - you claim that just because we evolved with bare feet, they're better than shoes. But then why did we invent shoes in the first place, and why were they so widely adopted? Do you think that only the edge cases where shoes are better drove that adoption?


> you claim that just because we evolved with bare feet, they're better than shoes.

No, I claim that having gone barefoot and having worn shoes, in my experience bare feet are often superior to shoes, and this is supported by the history of human feet and their obvious evolved purposes.

> But then why did we invent shoes in the first place, and why were they so widely adopted?

Ah yes, humans have never invented and widely adopted anything that is bad for humans, because humans are perfectly reasonable creatures that would never make such mistakes. Cigarettes, cryptocurrencies, and leaded gasoline simply never existed.

> Do you think that only the edge cases where shoes are better drove that adoption?

I do not. Fashion and advertising by shoe companies are much more dominant forces than construction boots and such. The fact that most modern shoes are horribly designed but nearly universally used is a data point to consider.


Ummm. It takes less than a minute to undress, and a water-soaked wool suit will significantly weigh one down (making it more likely I'd be unsuccessful in rescuing the child). I'd discard the suit before I jumped in to save the child.


You're right that anti-communist intuitions don't disprove the abstract argument.

But if you have a friend who's constantly coming up with hypotheticals about sleeping with your beautiful wife, maybe don't leave him alone with her.


I think most people who come up with Repugnant Conclusion hypotheticals are doing so to point out the problems with average utilitarianism, not because they think the universe with a trillion nearly-suicidal people is the best of all possible worlds. That's why they called it the Repugnant Conclusion and not the Awesome Conclusion.


Correction: total utilitarianism, not average. Average utilitarianism doesn't lead to the repugnant conclusion, but it has other problems; see the thought experiment of creating more people in hell who would be a little happier than the ones already there.


...or simply removing the least happy people in any given universe.


That’s just a total misunderstanding of the correct use of the mean though. The RC itself is just simple multiplication.


You realize that they call such a scenario "repugnant", right?


I think I have to post my obligatory reminder that consequentialism does not imply utilitarianism, that E-utility functions are terribly ill-defined and it's not clear that they make sense in the first place, and also while I'm at it that decision-theoretic utility functions should be bounded. :P

...you know what, let me just link to my LW post on the subject: https://www.lesswrong.com/posts/DQ4pyHoAKpYutXwSr/underappreciated-points-about-utility-functions-of-both


Also Scott's old consequentialism FAQ, which describes consequentialism as a sort of template for generating moral systems, of which utilitarianism is one: https://translatedby.com/you/the-consequentalism-faq/original/?page=2 (the original raikoth link no longer works)


This. I continue to be baffled that people think it's worthwhile to endlessly tinker with utilitarianism instead of abandoning it.


The reason seems pretty clear to me. The problem with accepting consequentialism-but-not-utilitarianism is that there are no obvious grounds to value people who are distant from you, or sufficiently different from you, highly enough (or at all), and so no way to rule out various unsavory, but nevertheless empirically appealing, ideas like egoism or fascism outright.


There are no strong arguments motivating universalism, but there generally aren't strong arguments forbidding it.

Also...how sure are we that universalism is true? Lots of people here reject it. Do EAs believe in utilitarianism because they believe in universalism, or vice versa?

The theory that's most hostile to universalism is probably contractarianism, but even that only says that you cannot have obligations to people in far off lands who are not members of your community. A contractarian can still regard telescopic ethics as good but not obligatory. Does it matter so much that universalism should not just happen, but be an obligation?


Well, I'm neither an EA nor a utilitarian, so I'm only speculating, but this community does seem to have a particular affinity for universalism and systematization, and it certainly helped that eloquent thought leaders directed plenty of effort in that general direction.


I agree, because I am a moral non-realist egoist, describing myself as a consequentialist but not a utilitarian. Although my broader consequentialism is a stance toward others so we can come to some kind of contractarian agreement.


A blend of satisficing consequentialism and virtue ethics has started to seem pretty workable to me. Thoughts?


Also https://www.princeton.edu/~ppettit/papers/1984/Satisficing%20Consequentialism.pdf

(Linked for about the eighth time in a month)


FWIW, it's aggregation, not maximization, that I object to. I think any coherent consequentialism needs to be maximizing due to Savage's theorem / the VNM theorem.


Interesting. It seems to me both that maximizing is what leads to all the various inescapable repugnant conclusions, and also that maximizing is the most counter-intuitive part; enormous chunks of life are lived with value happily left on the table.

Which of these doesn't seem convincing / why is maximizing convincing for you?

(FWIW, I think VNM has very unrealistic assumptions. Is there an emotional reason why you find it convincing or realistic or intuition fitting?)


You seem to be using some implicit assumptions that I'm not, but I'm a little unsure what they are, which makes this a little difficult to reply to? For instance, to get the repugnant conclusion, you need a *lot* of things, not just maximization; for instance, you can't get the repugnant conclusion without some sort of aggregation. There's no particular reason that any given non-utilitarian/non-aggregative consequentialism would yield it.

(FWIW, my position is actually not so much "aggregation is *wrong*" as it is "I can't endorse aggregation given the problems it has but I'll admit I don't know what to replace it with". :P )

> Is there an emotional reason why you find it convincing or realistic or intuition fitting?

Minor note -- I'm sure you didn't intend it this way, but this question comes off as fairly rude. Why say "emotional" reason, and thereby include the assumption that my reason can't be a good one? Why not just ask for my reason? Adding "emotional" just changes the question from one of object level discussion to one of Bulverism (of a strange sort). I would recommend against doing that sort of thing. Anyway, I'm going to ignore that and just state the reason.

Now the thing here is once again that it's a bit difficult to reply to your comment as I'm a little unclear as to what assumptions you're using, but it reads to me kind of like you're talking about applying VNM / Savage to humans? Do I have that right?

But, I'm not talking about any sort of utility function (E-utility or decision theoretic utility) as applied to humans. Certainly humans don't satisfy the VNM / Savage assumptions, and as for E-utility, I already said above I'm not willing to accept that notion at all.

As I went through in my linked post, I'm regarding a consequentialist ethics as a sort of agent of its own -- since, after all, it's a set of preferences, and that's all an agent is from a decision-theoretic point of view. And if you were to program it into an AI (not literally possible, but still) then it would be an actual agent carrying out those preferences. So they had better be coherent! If this is going to be the one single set of preferences that constitutes a correct ethics, they can't be vulnerable to Dutch books or have other similar incoherencies. So I think the assumptions of Savage's theorem (mostly) follow pretty naturally. To what extent humans meet those assumptions isn't really relevant.

(Again, apologies if I'm misreading your assumptions, I'm having to infer them here.)

So note that this maximizing is purely about maximizing the decision-theoretic utility of this hypothetical agent; it has no necessary relation to maximizing the E-utility or decision-theoretic utility of any particular people.

This I guess is also my answer to your remark that

> maximizing is the most counter-intuitive part; enormous chunks of life are lived with value happily left on the table

Like, what is the relevance? That reads like a statement about E-utility functions to me, not decision-theoretic utility functions. Remember, a decision-theoretic utility function (when it exists) simply *reflects* the agent's preferences; the agent *is* maximizing it, no matter what they appear to be doing. Of course a big part of the problem here is that humans don't actually *have* decision-theoretic utility functions, but, well, either they don't and the statement isn't relevant, or they do and they are implicitly maximizing it (if you aren't maximizing it, it by definition isn't decision-theoretic utility!). I can only make sense of your statement if I read it as instead being about E-utility, which isn't relevant here as I've already said I'm rejecting aggregation!

Does that answer your questions? Or have I missed something?


My apologies! Emotional has neutral valence and connotations for me. What I meant was, for all that any given argument is rational, we either gravitate to it or away from it by natural preference. A better phrasing would be "what do you think about VNM's prerequisite assumptions?"

"""But, I'm not talking about any sort of utility function (E-utility or decision theoretic utility) as applied to humans. Certainly humans don't satisfy the VNM / Savage assumptions, and as for E-utility, I already said above I'm not willing to accept that notion at all."""

I think this was my confusion, thanks. Actually in general this was a great reply to my vague questions, so thank you!

"""then it would be an actual agent carrying out those preferences. So they had better coherent"""

This might not be necessary or ideal. Maybe not necessary due to complexity; humans are expected to be strictly simpler than AGI, and we can hold inconsistent preferences. Maybe not ideal in the sense that consistency itself may be the vulnerability that leads to radical moral conclusions we want to avoid. It's possible that ambivalence comes from inconsistency, and that it is natural protection against extremism.

"""Like, what is the relevance? That reads like a statement about E-utility functions to me"""

Yes, it was. I suppose the stuff I said above would be my reply to the decision-theoretic maximization case as well.

Thanks!


Uh thanks for the post! That was a nice read.

However, in this case I don't think we even need to get that far. Where the "mugging" happens is in (as others and Scott's article point out) saying that these two are really equivalent:

1) losing X Utils from a person that could potentially exist

2) removing from an actually existing person whatever Y Utils cause some "total aggregate" (however ill-defined) utility to go down by the same amount as in scenario (1)

For me, and others, it isn't equivalent, because there's just no "utility" in (1). I don't care about that scenario at all. And there's no need to take the first step to getting-your-eyes-eaten-by-seagulls by entering the discussion about how to aggregate and compare these two scenarios by accepting they both have some positive utility. I really think it's important to cut the argument at the root.

The only reason someone can get away with saying such a choice is "inconsistent" is because it's the future, and the large numbers involved get people's intuitions mixed up. If I say that somebody has devoted their life to building a weird fertility cult in the deep desert in Mongolia, and has managed to get about 50k people to be born there, all living lives barely above the baseline of acceptability, scavenging for food and constantly falling sick, but not quite wanting to die - we wouldn't say "oh what a hero, look at how many beautiful Utils he's brought into existence".


I think there must be a 4th way in which a theory can differ: by refusing to accept that the way the situation is modeled by the math is correct.

I think this is the case. My intuition is that if we insist that a human life's value must be a single number, then at least we should allow that it may change over time - at least in response to the actions of the thought experimenter. For example, if I add a new person to the world already populated with people then their lives are altered, too.

But more importantly, I think the model should clearly differentiate between "actions within the world" (like the above one) and "choosing between two worlds" (like: would you prefer a world to start with that additional person).

For me the intuitive judgment of a given world comes from trying to imagine me to randomly inhabit one of its members' minds.

In this framework: I prefer the Hell with N x -100 + N x -90 people to the Hell with only N x -100 as it gives me a slightly better expected life.

At the same time, in the same framework, being already born in this second Hell, I wouldn't want to spawn additional N x -90 people in it, as it would mean for each of them the certainty of inhabiting a mind with negative value, and that is bad for them; and also, having to live with my memory of voting for their suffering would make me suffer and change my utility from -100 to an even more negative number.
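A quick sketch of the arithmetic behind that preference, using the comment's own -100/-90 figures (the population size N is arbitrary):

```python
# "Random soul" comparison: which hell gives a randomly assigned inhabitant
# the better expected life? (Welfare numbers taken from the comment above.)
N = 1_000

hell_a = [-100] * N + [-90] * N   # N people at -100 plus N people at -90
hell_b = [-100] * N               # only the N people at -100

expected_a = sum(hell_a) / len(hell_a)   # -95.0
expected_b = sum(hell_b) / len(hell_b)   # -100.0

print(expected_a, expected_b)
# The larger hell wins on expected welfare per soul (-95 vs -100), even though
# it contains strictly more total suffering.
```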

Basically my intuitions are modeling "pool of souls waiting in a lobby to join the game some players are already playing".

In this moral framework the Repugnant Conclusion, even if somehow derived, would only mean that if I were in position to redesign the world from scratch I might do it differently, but as I already am in an already existing world and adding people to it is qualitatively different than switching to a different world such a conclusion would not force me to breed like crazy. It's a type error.

But this framework actually doesn't even reach the RC because it doesn't prefer a world with lower average life utility offered in the first step.

(The most important missing feature in various puzzles of the "choose A vs B" kind is specifying if I'll have the memory of the choice, and memory of the previous state. I think part of the difficulty in designing the AI's utility function around the state of its off-switch is because we obsess over the end state (switch is on vs switch is off), forgetting about the path by which we got to it, which is burned into the memory of participants, at least in the atoms of the brain of the human supervisor, and thus becomes part of the state. I mean perhaps utility functions should be over paths, not endpoints.)


I agree very much with your comment. Thanks for making it. I'm now curious if philosophers have already come up with nasty trap conclusions from extrapolating the utility-functions-over-paths idea. Not sure how to google that though. Maybe someone with a background in philosophy can answer with the terminology needed if this idea is already a well considered one.


The "pool of souls" model kind of presupposes egoism, right? If we optimize for a world that a single soul would prefer to enter, it's not surprising that we would get outcomes inconsistent with total wellbeing.

This also serves as a criticism of Rawlsianism I hadn't considered before. Behind the veil of ignorance, you'd prefer a world with a vast number of moderately suffering people compared to a world with a single greatly suffering person.


Yes, if designing a game I would prefer to make a multiplayer game which sucks and gives -99 fun to everyone rather than a single-player game which sucks at -100.

Yes, I'd prefer to play the first one than the second one.

Yes, if I were a god, I'd prefer to create the first one than the second one.

Also, if already inhabiting the first world I wouldn't even have an option to switch to the other without somehow killing everyone except one person. But, assuming it's somehow possible to remove them in a painless way, giving the survivor bad memories that only shift their mood from -99 to -100, I think I might vote for this swap, as I think nonexistence should be valued at 0, so in expectation such a global euthanasia looks like a good change in the average.

If I were in the second world to begin with, I would not want a swap to the first one, as it brings new people into sure suffering, and I don't feel like doing it; however the format of the puzzle assumes that for some axiomatic reason my utility will rise from -100 to -99 despite my intuition: I think the puzzle is simply stupid.


The question is not whether you would change world 2 to world 1 or vice versa, it's which world is preferable over the other.

So using the models you gave, if you were a god, the question is which world you would choose to create. You say you would choose the first world, which is highly surprising. If you had a choice between a world with one person being tortured who also had a painful speck of dust in their eye, or a world with 100 trillion people being tortured, you would really choose the second one?

On the other hand, if you were a soul choosing which world to enter, you would also choose the first world, which seems defensible on the basis of maximizing your personal expectation of wellbeing. But considering your own expectation of wellbeing above all else essentially amounts to egoism.


I agree one has to be careful to not confuse:

- a god choosing which world to create

- a god choosing to swap one for another (although I am not quite sure why a god would feel it's different if worlds can be paused and started easily without affecting inhabitants; I suspect some gods could)

- being a participant of the world and pondering changing it one way or the other (say by having a child, voting for communist party or killing someone)

I think it's strange that I am accused of "egoism" in a scenario in which I am a god outside of the world, not really participating in it, just watching.

On the one hand, god's happiness is not even a term in the calculation.

On the other hand, the whole puzzle is essentially a question of "which world would the god prefer?" So obviously the answer would please the god - what's wrong with that?

Also it's a bit strange that the format of the puzzle asks me to think like a god but then judges my answer by human standards. Why judge god at all?

I think this whole field is very confused by mixing up who is making decisions, who is evaluating, what are really the options on the table, and what was the path of achieving it.

But anyway, yes, as a god I'd prefer to create the world in which the average person's life is better. And that would maximize the happiness of a god which likes this, by definition of such a god. Now what? How does this translate into, say, government policy? I say: not at all, because this would be an entirely different puzzle, in which the decision maker is either a politician or a voter embedded in the world and the change is occurring within the world. One would also have to specify if one wants to optimize the happiness of the decision maker or of the average citizen or what. If one doesn't specify it and instead shifts the burden of figuring out the goal function to the person being asked the puzzle, then it seems to me to become a metapuzzle of "which kind of goal function would please you the most as a person watching such a country?". Which seems a bit strange, as I don't even live in this country, so why do you care about an outsider's opinion?

If the question instead is "ok, but which goal function should the people inside this world adopt as say part of their constitution?" then I'd still say: to optimize what? My outsider's preference?

This looks recursive to me: people set their goals already having some goals in setting them

Ultimately it bottoms out on some evolutionary instincts and in my case this instinct says that I like to not be the cause of someone's suffering, I like to be healthy and smile and thousands other things, and when joining a game server I want to have high expected value of fun from it. So if I as a human may influence/suggest what god should do to create nice games, I suggest high expected utility as design goal.


I don't understand people who follow the Carter Catastrophe argument all the way to the point of saying "gosh, I wonder what thing I currently consider very unlikely must turn out to be true in order to make conscious life be likely to be wiped out in the near future!" This feels to me like a giant flashing sign saying that your cached thoughts are based on incompatible reasoning and you should fix that before continuing.

If you've done proper Bayesian updates on all of your hypotheses based on all of your evidence, then your posterior should not include contradictions like "it is likely that conscious life will soon be wiped out, but the union of all individual ways this could happen is unlikely".

My personal take on the Carter Catastrophe argument is that it is evidence about the total number of humans who will ever live, but it is only a tiny piece of the available body of evidence, and therefore weighs little compared to everything else. We've got massive amounts of evidence about which possible world we're living in based on our empirical observations about when and how humans are born and what conditions are necessary for this to continue happening; far more bits of evidence than you get from just counting the number of people born before you.
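To put the "tiny piece of evidence" claim in odds form, here is a hedged back-of-the-envelope sketch; every number in it is invented purely for illustration:

```python
# Doomsday-style update, in odds form, with made-up numbers.
n_small = 2e11     # hypothesis A: ~200 billion humans ever exist
n_large = 2e14     # hypothesis B: ~200 trillion humans ever exist

# Under naive self-sampling, P(our birth rank | N humans total) = 1/N,
# so the Bayes factor favouring the small hypothesis is:
bayes_factor_small = n_large / n_small          # 1000.0

# Suppose everything else we know (demography, technology, astronomy...) gives
# prior odds of a million to one in favour of the large future:
prior_odds_large = 1e6

posterior_odds_large = prior_odds_large / bayes_factor_small   # 1000.0
print(posterior_odds_large)
# The birth-rank update is a real but bounded push; it is easily swamped when
# the rest of the evidence points strongly the other way.
```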


A bit like the difference between estimating the odds that the sun is still there tomorrow as about (number of sunrises so far[1]):1 using Laplace's rule of succession, and estimating it as much more than that using our understanding of how stars behave.

[1] If "so far" means "in the history of the earth", maybe the order of magnitude you get that way is actually OK. But if it means "that we have observed", you get a laughably high probability of some solar disaster happening in the next 24 hours.
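For reference, the rule-of-succession arithmetic the footnote gestures at, with round numbers (10,000 years of recorded observation vs. 4.5 billion years of Earth history are my own rough figures):

```python
# Laplace's rule of succession: after n successes and no failures, the
# probability of another success is (n + 1) / (n + 2), i.e. odds of roughly n:1.
def p_failure_tomorrow(n_sunrises: int) -> float:
    return 1 - (n_sunrises + 1) / (n_sunrises + 2)

print(p_failure_tomorrow(10_000 * 365))        # ~2.7e-7: sunrises we have "observed"
print(p_failure_tomorrow(int(4.5e9 * 365)))    # ~6.1e-13: the whole history of the Earth
# The footnote's point: the observed-history figure is laughably high, while the
# whole-Earth-history figure is at least in a defensible ballpark.
```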


This has always seemed like a very silly way of looking at life. What are the odds that you are [First Name][Last Name] who lives at [Address], and has [personal characteristics]? One in 8 billion, you say!? Then you should not assume you are who you think you are, and should instead conclude that you are one of the other 8 billion people. Which one? Who knows, but almost certainly not [single identifiable person you think you are].


can't the repugnant conclusion also say that "a smaller population with very high positive welfare is better than a very large population with very low but positive welfare, for sufficient values of 'high' "?

I find it difficult to understand the repugnant conclusion because to me it barely says anything. If you are a utilitarian you want to maximize utility, period. Maybe one way is to create a civilization with infinite people barely wanting to live, or to create a god-like AI that has infinite welfare.

Am I missing something here?


That would be a utility monster, not the repugnant conclusion, but yes, it's another way that utilitarianism fails on the weird edge cases.


From Michael Huemer, in defence of repugnance (https://philpapers.org/rec/HUEIDO):

...I have sided not only with RC but with its logically stronger brother, the Total Utility Principle. What is the practical import of my conclusion? Should we, in fact, aim at a drab future like world Z, where each of our descendants occupies a single, cramped room and there is just enough gruel to keep them from hunger? Given any plausible view about the actual effects of population growth, the Total Utility Principle supports no such conclusion. Those who worry about population growth believe that, as the population increases, society’s average utility will decline due to crowding, resource shortages, and increasing strain on the natural environment (Ehrlich and Ehrlich 1990)...

...Some people believe that population increases will lead to increases rather than declines in average utility, for the foreseeable future (Simon 1996). Their reasoning is that economic and technological progress will be accelerated as a result of the new people, and that technology will solve any resource or environmental problems that we would otherwise face. On this view, we should strive to increase population indefinitely, and as we do so, we and our descendants will be better and better off....

...To determine the population utility curve with any precision would require detailed empirical research. Nevertheless, the [arguments] suffice to make the point that the Total Utility Principle does not enjoin us, in reality, to pursue the world of cramped apartments and daily gruel. Perhaps its critics will therefore look upon the principle with less revulsion than has hitherto been customary...


The problem is that lots of people (including Scott) don't intuitively like the idea of worlds with lots of not-very-happy people being (potentially) better than worlds with very-happy people.

If you don't have that intuition, then there's indeed no issue.


> Peirce is a tragic figure because he was smart enough to discover that logic disconfirmed his biases, but decided to just shrug it off instead of being genuinely open to change.

Is that actually what he was doing? Wikipedia says

> Peirce shared his father's views [i.e. he was a Union partisan] and liked to use the following syllogism to illustrate the unreliability of traditional forms of logic, if one doesn't keep the meaning of the words, phrases, and sentences consistent throughout an argument:

> All Men are equal in their political rights.

> Negroes are Men.

> Therefore, negroes are equal in political rights to whites.

And at the time, the last statement was factually incorrect! The syllogism fails because the definition of Men changes between steps 1 and 2.

author

Thanks - I've deleted this until I can look into it more.


Far be it from me to criticize Peirce but isn’t the problem that the first premise was false at the time, not that there was an ambiguity in the use of the term “men”? (If “created” had been added before “equal” then the first premise would arguably have been true but in that case the syllogism would no longer have been valid.)


It's just ignoring the difference between moral and legal rights. The phrase, "people in North Korea have the right to free speech" is true if you believe in moral rights and think that's one of them, but false if you're talking about legal rights people can actually enforce against the state.


The mediocrity principle should not be invoked by people above a certain level of success.

Scott, you are an outlier in the reference class of humans-alive-in-2022; why shouldn't you also be an outlier in the reference class of all-intelligent-life-to-ever-exist?


I wrote in the previous comments that the repugnant conclusion is highly non-robust. Drop 2 units of utility per person from a society at +1 utility each and you go negative, turning a trillion barely happy people into a trillion unhappy people, for a total utility of -1T.

Meanwhile the billion perfectly happy people drop from a total utility of +100B to +98B.

I know that this is not my thought experiment to play around with, and theorists might reply: stick to the initial conditions. However I think when we are talking about a future society we should think like this. It's like designing many more bridges to take ten times as much traffic, but if you increase that traffic by 1% all the bridges collapse; this isn't sane design.

So if we, or a benign AI, could design an actual society for robustness, and we decided the per capita utility could collapse by up to 50 points, then the repugnant conclusion disappears.
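Spelled out as a quick sketch (utils are the comment's arbitrary units):

```python
# Robustness comparison from the comment: a 2-util shock to each person.
TRILLION, BILLION = 10**12, 10**9

world_z = TRILLION * 1            # a trillion people at +1 each: total +1T
world_a = BILLION * 100           # a billion people at +100 each: total +100B

world_z_shocked = TRILLION * (1 - 2)    # total -1T: a trillion lives now not worth living
world_a_shocked = BILLION * (100 - 2)   # total +98B: still an extremely happy world

print(world_z, world_a, world_z_shocked, world_a_shocked)
```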


A very illuminating comment, thank you.

It works the other way round as well, though: You only have to add +2 utility to get it back to utopia.

To me it seems that this puts a limit on both ends: You wouldn't want a trillion people society on the brink of neutrality, but also adding more utility to an already very happy society seems increasingly pointless.

This seems to describe the current situation in the world already, to some degree: We are mostly so rich that losing or adding some wealth does not have much impact (e.g. the studies that claim that earning more than 60k does not increase happiness). While at the same time, a war in Europe's "backwaters" suddenly makes fuel and food unaffordable in all kinds of poor societies that were on the brink.


Let me add quickly: This implies that the utility-happiness-function has the steepest slope around neutrality (zero happiness) and flattens out in both directions of happiness.

Of course it also implies an absolute happiness scale.

Given those assumptions, the repugnant conclusion seems to vanish.
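One function with the assumed shape, sketched for illustration (the tanh choice and the specific numbers are mine, not anything from the thread):

```python
import math

# Happiness as a function of underlying welfare: steepest around zero,
# flattening out in both directions. tanh is just one curve with that shape.
def happiness(welfare: float) -> float:
    return math.tanh(welfare)

print(happiness(1) - happiness(-1))      # ~1.52: +2 welfare around neutrality matters a lot
print(happiness(102) - happiness(100))   # ~0.0: the same +2 at the top changes almost nothing
```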


It really doesn’t. All utilities in the example were linear (same slope everywhere). Just one of them had much higher constant slope than the other (1T vs 1B).


The "future people should be exponentially discounted, past people should be given extremely high moral weight" thing just seemed obvious to me; I'm surprised it's controversial among utilitarians. Fortunately there is no actual way to influence the past.


> Fortunately there is no actual way to influence the past.

Why would that be fortunate? If you believe that past people have extremely high moral weight then being able to help them would be an extremely good thing.


To me at least it's fortunate because we don't want people with strong moral convictions that they should enact sweeping change to have access to arbitrarily-high-leverage points in our path. I'm sure there's *some* degrowther somewhere who would love to transmit crop blights to the earliest agricultural centers, or someone who wants to drop rods from god on any European ship that seems like it might reach the Americas.

I don't share the parent commenter's intuitions about the value of past people, but even if I did I would

a) be glad that the extremely valuable past people are protected from the malfeasance of other zealots,

b) appreciate that the less valuable but still extant present and future people (incl. you know, me) won't suddenly be unmade and replaced with a wildly indeterminate number of people of unclear happiness, and

c) probably be relieved that I have an excuse to live a normal life and focus on the living and their near descendants rather than founding a multi-generational cult of *miserable* time agents perpetually battling extinctionists and scraping up spare utilons to bestow upon the most improvable cavemen, steppe nomads, and Zoroastrian peasants.

(Now, would I watch that Netflix show and mourn its undoubtedly premature cancellation? 100%)


Also, as a potentially still-pretty-early-human, you'd be a battleground for the future kiddos.


> The "future people should be exponentially discounted, past people should be given extremely high moral weight" thing just seemed obvious to me

Wow, talk about a difference in moral intuitions. This just seems obviously wrong to me. I know you used the term "obvious", but can you try to elucidate why those of the past have greater moral weight?


I think it's obvious future people should be discounted relative to the present. Past people having more moral weight than present people is then just a consequence of what economists call dynamic consistency (your preferences shouldn't change if you just wait around without getting new information).
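A minimal sketch of how those two claims fit together, under an assumed 3% annual discount rate (the rate is an arbitrary choice of mine):

```python
# Exponential discounting applied symmetrically in time. With weight
# (1 + r)^(-t), future people (t > 0) count for less, and running the same
# formula backwards (t < 0) gives past people weights above 1, which is the
# dynamic-consistency point in the comment above.
DISCOUNT_RATE = 0.03  # arbitrary assumption

def moral_weight(years_from_now: float) -> float:
    return (1 + DISCOUNT_RATE) ** (-years_from_now)

print(moral_weight(100))    # ~0.05: a person a century in the future
print(moral_weight(0))      # 1.0:   a person alive today
print(moral_weight(-100))   # ~19.2: a person a century in the past
```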


Do you have any *prima facie* intuition about moral weight of people of the past? Not that it should override otherwise reasoning, but if you're reasoning from what is "obvious", then we could just as easily start with the "obvious" moral weight of those in the past and deduce the weight of the future people from there.

I'm also curious to probe your intuition that future people should be discounted. What do you make of the shards-of-glass-10-years-later example? If you drop glass on the ground and know that in a decade a class of 8-year-olds will walk over it, you find that has less moral weight than if you know that in a decade a class of 12-year-olds will walk over it?


No, I don't have prima facie intuitions about past people's moral weight. It's purely a product of beliefs about discounting future people and dynamic consistency.

I don't intuit any moral difference between the 8-year-olds and 12-year-olds, which means I'm probably weighting future human-experiences rather than future humans per se, but that doesn't bother me.


I always took for granted that past people are entitled to the same moral consideration as far-away present people, and I once started writing a time travel story based on this. The idea behind it is that it's the duty of well-meaning time travelers to see to it that past persons had lives as good as possible. The problem in the story is that whatever happened in the past had left behind physical evidence, so that if you want your altruistic time traveler plan to have any chance of success, it has to leave behind evidence that's consistent with everything we've found. So the heroes do stuff like evacuating male sailors from a shipwreck to a comfortable island where they live reasonably nice lives. Once they're dead, a second team of time travelers move in and clean up after them. Basically, altruistic time travelers try to make historical lives better than they seem from just the surviving evidence. But then, in the story, they hatch a plan to make WWII less bad than it seemed, and things get pretty crazy. Whatever did happen in that time left lots of evidence that puts huge constraints on how much less terrible WWII could be made, but the protagonists are pretty creative and not allergic to manufacturing misleading evidence, just like in the shipwreck story.


I want to add that some philosophers think they may have beaten the Impossibility Theorems that Magic9Mushroom mentioned. It is an idea called "Critical Range Utilitarianism" that involves some pretty complex modifications to how value itself functions, but seems like it could be workable. It doesn't seem that dissimilar to some ideas Scott has tossed out, actually. Richard Chappell has a good intro to it here: https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories

Even if Critical Range Utilitarianism doesn't pan out, I am not sure the Sadistic Conclusion is as counterintuitive as it is made out to be if you think about it in terms of practical ethics instead of hypothetical possible worlds. From the perspective of these types of utilitarianism, inflicting a mild amount of pain on someone or preventing a moderate amount of happiness is basically the same as creating a new person whose life is just barely not worth living. So another formulation of the Sadistic Conclusion is that it may be better to harm existing people a little rather than bring lives into existence that are worth living, but below some bound or average.

This seems counterintuitive at first, but then if you think about it a little more it's just common sense ethics. People harm themselves in order to avoid having extra kids all the time. They get vasectomies and other operations that can be uncomfortable. They wear condoms and have IUDs inserted. They do this even if they admit that, if they were to think about it, the child they conceive would likely have a life that is worth living by some metrics. So it seems like people act like the Sadistic Conclusion is true.

The Sadistic Conclusion is misnamed anyway. It's not saying that it's good to add miserable lives, just that it might be less bad than the alternatives. You should still try to avoid adding them if possible. (Stuart Armstrong has a nice discussion of this here: https://www.lesswrong.com/posts/ZGSd5K5Jzn6wrKckC/embracing-the-sadistic-conclusion )

Expand full comment

Regarding the sadistic conclusion, it's counterintuitive to think that a world with an extra 10,000 people whose lives are worth living, but kind of "eh" (they can actually be alright, as long as they're below average because of how happy everyone else is) is worse than the same world with an extra 100 people being savagely tortured their entire lives.

Fertility prevention for utilitarian reasons is rare even if it exists - people do it for their own happiness, not to increase global happiness. It's because raising a kid who may well be extremely happy sounds like a massive faff.

Expand full comment

It does seem counterintuitive, but I think that might be a "torture vs dust specks" thing, where the same disutility that sounds reasonable when spread out among a lot of people sounds unreasonable when concentrated in a small number of people. Think about all the different inconveniences and pains that people go through to avoid having children. Now imagine that you could "compensate" for all of that by adding one person whose life is just barely not worth living. That's still the Sadistic Conclusion, but it sounds a lot more reasonable.

It is not my experience that fertility prevention for utilitarian reasons is rare. Most people who avoid having children, or have fewer children than they possibly could, have a moral justification for it, even if it is not explicitly utilitarian. They often say that they think it would be morally wrong to bring a child into the world if they can't give them a really good life. It is also fairly common for people to morally condemn others for having too many kids.

There are certainly people who say "Having a child would make the world a better place, but I don't think I'm obligated to do that." But there is also a lot of moralism where people are condemned for having children whose lives are positive, but suboptimal.

Expand full comment

I don't think the torture-vs-dust-specks case is the same, because dust specks are negative utility. The problem with average utilitarianism is that (in extreme scenarios):

You have a world containing only 1,000 people who exist in constant ecstatic bliss. You can either add 1,000,000,000 people who aren't quite that happy, but are far happier than anyone currently alive in our world and would thank Quetzalcoatl* with their every breath for the blessedness of being able to exist as they do. Or, you can add 10 people being tortured horribly. If any of the average systems are right, you should add the 10 people being tortured horribly, as they bring the average happiness down less (see the arithmetic sketch below).

*the Aztec religion had a reformation in this world, and they now interpret him as wanting them to sacrifice their own hearts metaphorically, in selfless love for all living things
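To make the arithmetic concrete, here is a minimal sketch with assumed utility numbers (100 for the blissful thousand, 90 for the very happy billion, -100 for the tortured ten); the exact figures are placeholders, only the ordering matters:

```python
def average_utility(groups):
    """groups: list of (head-count, per-person utility) pairs."""
    total = sum(n * u for n, u in groups)
    people = sum(n for n, _ in groups)
    return total / people

blissful = (1_000, 100.0)                      # assumed utility numbers
option_a = [blissful, (1_000_000_000, 90.0)]   # add a billion very happy people
option_b = [blissful, (10, -100.0)]            # add ten horribly tortured people

print(average_utility(option_a))  # ~90.00001: the average drops by about 10
print(average_utility(option_b))  # ~98.02: the average barely drops
# Averagism therefore prefers adding the tortured ten, which is the
# counterintuitive verdict described above.
```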

Expand full comment
Aug 30, 2022·edited Aug 30, 2022

Critical Range produces the Anti-Egalitarian Conclusion - any equal population above the lower limit of the critical range can be made "better" in Critical-Range by breaking that equality even if it lowers average and total welfare, because the marginal utility of welfare within the critical range is zero.

Expand full comment

If I understand you correctly, you are saying that, for example, if the critical range is "5" then adding one individual whose utility is 6 and one whose utility is 0 is better than adding two people whose utility is 4.

That is not how Critical Range theory works. In Critical Range theory, the 6,0 population is worse than the 4,4 population, and both are "incomparable" with adding no one at all. In Critical Range theory if you add a mix of people who are within the range and above the range, the population is incomparable, not better, if the average utility of all the new people is below the range.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

First, the zero of the individual welfare function has almost nothing to do with what people think about the quality of their life - these are just numbers you use to match your intuitions. It doesn't even mean you must disregard their will; zero only means whatever the aggregate welfare function implies.

And so the second point - you can't choose between the repugnant conclusion and the sadistic conclusion, only between the sadistic conclusion alone and both of them together. The sadistic conclusion is present even if "zero" means "almost suicidal" - you would still prefer that some hero suffer horribly to prevent a billion tortures, or something like that. And that means the sadistic variant is the one that is actually perfectly okay - if you are sad about the repugnant world, you should accept the price of averting it. Even if you don't think that adding people to that world makes it worse, you can just have a threshold range of personal welfare you don't care about - then the sad world of any size would have zero value.

It's not that population ethics is a field where math is actively trying to get you - it all just works out in slightly more complicated models.

Expand full comment

There are four categories of answer that always come up in this kind of discussion; these have (sensibly) not been addressed in the highlights, but they're so recurrent that I'm going to highlight them here. In the spirit of the most recent post, I will do this in Q&A format (also in that spirit, some of the "questions" are really comments), and I have put them in rough order of sophistication.

Comment #1

Q: Why should I care about other people?

A: Why should you care about yourself?

Comment #2

Q: No one actually cares about others/strangers: people pretend to care about starving orphans, but really only care about their own status/warm feelings.

A: Even if this was true, it would be basically fine. A world in which people get pleasure from making other people's lives better is a much better world than one in which people gain pleasure from kicking puppies.

Comment #3 (Answers taken from SA and Godoth in the Tower post, which are much better than my initial response)

Q: All charity is counterproductive/bad/colonialist (implied: there are no exceptions here)

A1: Even organ donation?

A2: If you actually think charity is bad, then you should be working towards ways to end charity (e.g. campaigning against tax-deductibility for charitable donations)

Comment #4

Q: Charity may help in the short run, but ultimately impedes the development of market mechanisms that have made the biggest difference in improving human welfare over the long run (for recent example, see China)

A1: Even deworming tablets, which are one of the most effective educational interventions, whose benefits then tend to cascade into other things getting better?

A2: I've got some Irish historians I'd like you to meet after they've had a few pints ***

*** During the Irish famine, food continued to be exported from Ireland by the (mainly but not exclusively English) landlords, essentially on variations of this argument.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

>During the Irish famine, food continued to be exported from Ireland by the (mainly but not exclusively English) landlords, essentially on variations of this argument.

I recently learned that while there were exports of high-value calories (wheat, oats), they were offset by imports of more plentiful cheap calories (maize), per pseudoerasmus:

https://pseudoerasmus.com/2015/06/08/markets-famine-stop-treating-amartya-sen-as-the-last-word/

>Of course all of that was too late and took too long to happen. And it would have been much better if all the food produced in Ireland could have been requisitioned and provided to the starving. But it’s also an open question how much the British state in the 1840s was capable of doing that in time, if it had been as activist in its inclination toward famine relief as any modern government today is.

Naturally there was also friction, as maize was relatively unknown in Ireland. (But looking at the graph pseudoerasmus cites, I am not sure whether having net zero exports would have been enough to cover what was needed -- the net exports of wheat and oats in 1846 are much smaller than the maize imports, and the maize wasn't enough.)

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Yes, there are definitely asterisks to the asterisks, and there was some charity. The maize bit is further complicated by the lack of appropriate mills at the time. That said, I do think two core points survive even a detailed look at the nuance:

1) Although exports were lower than imports, more food generally leads to less famine (noting complexities around hoarding and corruption)

2) The Whig government's attitude to charitable relief was heavily influenced by Malthusian and laissez-faire principles

Expand full comment

Same story with the great Bengal famines (1770 and 1943). Moral: even if you believe (religiously?) that market mechanisms optimize happiness as $t\to \infty$, the very contrary can be tremendously true at some critical points in time.

Expand full comment

You know, that was my original example, but it was harder to imagine Bengal historians drunk and rowdy

Expand full comment

The Bengal famine is addressed in the very book pseudoerasmus is reviewing in the link above.

Expand full comment

My comment was meant to refer to "The Whig government's attitude to charitable relief was heavily influenced by Malthusian and laissez-faire principles".

Expand full comment

What Whig government in 1943? Most of 1770 was also under a Tory PM.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Ah, maize. "Peel's Brimstone":

https://www.rte.ie/history/the-great-irish-famine/2020/1117/1178730-the-temporary-relief-commission/

The problem was, you can't treat Indian maize like wheat. It was much harder to grind in mills of the time, and needed special cooking:

"Confronted by widespread crop failure in November 1845, the Prime Minister, Sir Robert Peel, purchased £100,000 worth of maize and cornmeal secretly from America with Baring Brothers initially acting as his agents. The government hoped that they would not "stifle private enterprise" and that their actions would not act as a disincentive to local relief efforts. Due to poor weather conditions, the first shipment did not arrive in Ireland until the beginning of February 1846. The initial shipments were of unground dried kernels, but the few Irish mills in operation were not equipped for milling maize and a long and complicated milling process had to be adopted before the meal could be distributed. In addition, before the cornmeal could be consumed, it had to be "very much" cooked again, or eating it could result in severe bowel complaints. Due to its yellow colour, and initial unpopularity, it became known as "Peel's brimstone".

Good intentions, but when you are secretly importing your relief because you don't want news to get out that the government is doing this, else it would affect The Market, and you are importing an unfamiliar food source that the population don't know how to use while exporting crops for cash, then no matter how good the intentions, the results are bad.

The entire Famine is a very complex matter. Those who suffered worst were the very poorest, but the landlords did not come out unscathed themselves: their badly managed estates, broken up into so many small farms and plots of land, were not profitable (the same logic that led Scottish landlords to the Highland Clearances, since raising deer was more efficient and profitable than keeping tenant farmers on the land), and many of them were so deep in debt that they eventually sold off their estates - see the Encumbered Estates Court: https://en.wikipedia.org/wiki/Encumbered_Estates%27_Court. Indeed, the tangled nature of property ownership in Ireland resulted in several land acts, starting in the 1870s and continuing into the early 20th century, but that's a separate, very large topic of its own.

The Famine was Moloch in operation. Unlike the simple (and understandable, due to anger and the human need to blame *someone*) conspiracy notions of a planned and deliberate genocide, it was rather the culmination of historical processes that needed just one thing to go wrong before it all collapsed. With the potato blight taking on the role of that one thing, the failure of a single food source - which should have been a crisis, but not a disaster - ended up causing death and misery on a huge scale, massive emigration that continued for decades and cratered the national population, and a corrosive memory left behind that ensured bitterness and mistrust and hatred, with the echoes still rippling on to this day.

Expand full comment

I was hoping you would pop by!

Expand full comment

Thank you, I'm back again to get into more trouble (no, not more trouble, I'll be good, I promise!)

Expand full comment

Q: Why should I care about other people?

A: Why should you care about yourself?

AA: I don't claim that I should care for myself in the sense of "should" meaning a moral obligation. So what's your point?

Expand full comment

"Why should I care about other people?"

That can be steelmanned to "why rationally!should I care about utilities that aren't in my utility function".

And "I rationally!should care about my own utility function" is a tautology.

Expand full comment

World economic growth can be hyperbolic and on track to go to infinity in a singularity event, but this graph does not illustrate it. The human eye cannot distinguish between exponential growth and pre-singularity hyperbolic growth on charts like this. With noise, these two trajectories would be pretty hard to distinguish with any tools.

To really see super-exponential growth we need to look at log-growth rates (a quick sketch of this idea follows below). If we do that, we can see a new regime of higher (measured) growth post-1950, but it is not obvious at all that growth is accelerating (it actually slowed down by about 1 percentage point, from 4.5% to 3.5%), and a lot of the demographic tailwinds are now gone. I am not sure I can easily insert a chart here, so you will have to trust me that the data you are using shows the growth rate in a moderate decline on a 20-50 year horizon even before coronavirus, not in the rapid acceleration you would see with a hyperbolic trajectory.

So your chart shows that we live in a special time only to the extent that this time is "post-WW2". And even then, the single 1AD-1000AD step on this chart may easily be hiding even more impressive 70-year growth bursts (it probably does not, but there is no way to tell from this data alone).
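Since a chart can't easily go here, a minimal sketch with synthetic series (not the actual GDP data; the growth rate and singularity date are made-up parameters) of how log-growth rates separate the two trajectories:

```python
import math

# Synthetic illustration: exponential growth has a constant log-growth rate,
# while hyperbolic growth (finite-time singularity) has a log-growth rate
# that keeps rising as the singularity date approaches.
def exponential(t, y0=1.0, r=0.03, t0=1900):
    return y0 * math.exp(r * (t - t0))

def hyperbolic(t, c=1.0, t_sing=2050.0):
    return c / (t_sing - t)  # blows up at t_sing

years = list(range(1900, 2001, 10))
for name, series in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    rates = [
        (math.log(series(t)) - math.log(series(t - 10))) / 10  # per-year log growth
        for t in years[1:]
    ]
    print(name, [round(r, 4) for r in rates])
# The exponential rates are flat (~0.03/yr); the hyperbolic rates climb every
# decade. The levels of the two series can look alike on a chart, but the
# log-growth rates separate them, which is the point above.
```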

Expand full comment

Thanks for including point 9, the 'yes this is philosophy actually' point. Reading that Twitter thread (a) got me grumpy at people being smug asses, and (b) was a bit of a Gell-Mann Amnesia moment – it's healthy to be reminded that the people talking confidently about things in this space (which I usually have to take on trust as I'm no expert in most stuff) can be just as confident exhibiting no idea of what philosophy is (which I do at least have a decent idea of).

Expand full comment

I just don't see why I should prefer a world with ten billion people to a world with five billion people, all else being equal (so no arguments about how a larger population would make more scientific discoveries or something, because that's obviously irrelevant to the Repugnant Conclusion).

Expand full comment

Because if existence is better than nonexistence, then there's twice as many people enjoying the benefits of existence.

Expand full comment

Existence is better than nonexistence for those who *already exist*.

Expand full comment

This does seem to be the point of unresolvable disagreement. I think that getting to exist is strictly better than never having existed, but other people seem to disagree, and it is rather hard to demonstrate one way or the other.

Expand full comment

The implications for the abortion debate are interesting, but neither side seems truly interested in exploring it. I think it calls into question some of the axioms already at work in the debate, so rocking the boat seems like a poor idea.

Expand full comment

Robin Hanson applies total utilitarianism to the abortion debate here: https://www.overcomingbias.com/2020/10/win-win-babies-as-infrastructure.html

Expand full comment

If the non-existent have no value, then should currently existing people live in such a way as to maximize their own utility even if that will make life difficult or even impossible for future people who are yet to exist?

Expand full comment

Is a world with 1 happy person equal to a world with 5 billion?

Expand full comment

While an interesting question, I think the answer you will get will involve unidentified non-hypotheticals from outside the thought experiment. A community that is too small (one person clearly fits) will have no relationships, no families, and will result in zero humans in a short period of time. A more interesting question would be 100 million or some other number large enough to firmly establish human civilization, but less than some other arbitrary and higher number.

Expand full comment

Fair enough. I'll do 100.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

This is not a good thought experiment, because even if you stipulate that they're as happy as can be, people's intuitions will say the 1-person world is worse because it means humanity goes extinct, the person won't have anyone else around to talk to, joke with, or love, and there's no way to keep civilization running with one person. But those things are confounders.

Edit: Mr. Doolittle got there before me.

Expand full comment

Fair enough. I'll do 100 then

Expand full comment

If we are assuming a world with 1 happy person who is capable of doing everything 5 billion people can do, and there hasn't been some sort of mass genocide or something, yes.

Expand full comment

Concerning Scott's statement: "I don’t think you can do harm to potential people by not causing them to come into existence."

...an approx. 2500 year old commentator might be worth quoting:

And I praised the dead who have died long ago, more than the living, those who are alive now; but better than both of these is the one who is not yet, because he does not see the evil work that is done under the sun.

Ecclesiastes, 4:2-3

...admittedly, Ecclesiastes wrote these sentences before science, law and democracy fully geared up. There have been, after all, some rather new things emerging under the sun.

Expand full comment

"The world is so dreadful, it would be better not to have been born. But how many have such luck? Perhaps one in a million?"

-- How is my google-fu failing on this? I feel like it's Russian.

Expand full comment

Sounds very Russian indeed!

Expand full comment

Can’t produce a citation but it reads like it came from the pen of Dostoevsky.

Pretty much the opposite of this from Brothers K tho.

“Do not weep, life is paradise, and we are all in paradise, but we do not want to know it, and if we did want to know it, tomorrow there would be paradise the world over.”

Expand full comment

I think the correct quote ends "Not one in a million".

Expand full comment

There is a quote by way of an old SSC post:

https://slatestarcodex.com/2015/12/24/how-bad-are-things/ :

David Friedman says:

December 26, 2015 at 9:15 pm

“and was stuck wishing I had never existed in the first place.”

“Better not to have been born. But who could be so lucky–not one in a million.”

Leo Rosten, The Joys of Yiddish, by memory so possibly not verbatim.

Not exactly relevant, but a wonderful line.

Expand full comment

> There are also a few studies that just ask this question directly; apparently 16% of Americans say their lives contain more suffering than happiness, 44% say even, and 40% say more happiness than suffering; nine percent wish they were never born. A replication in India found similar numbers.

It's worth noting that the same survey also asked whether people would want to live their lives again, experiencing everything a second time. In that case, 30% of US respondents (and 19% of India respondents) would not live the exact same life again, and 44% of US respondents would.

If you subscribe to a type of utilitarianism where life-worth-livingness is independent of whether a life is lived for the first or second time, I think the big discrepancy here should make you less keen to directly translate these numbers into "how many people are living lives worth creating".

Possible things that might drive the difference:

* It feels very sad to say you wish you weren't born, but less sad to say you don't want to experience life again.

* people might attach some ~aesthetic value to their lives where it's ~beautiful for the universe to contain one copy, but not more for the universe to contain two.

* people might be religious and so have all sorts of strange ideas about what the counterfactual to them being born is and what the counterfactual to them reliving their life is.

(I think the first two of these should make a ~hedonistic utilitarian more inclined to trust the larger number than the smaller number, and the third should make us less inclined to trust the procedure overall.)

Expand full comment

You may also attach some notion of boredom. Even if the repeat-you cannot remember the past life, the current-you contemplating the question can.

Expand full comment

Declining marginal utility of duplicate lives.

Expand full comment

A strange question. Either one remembers it, meaning it's perfectly rational to discount a second experience of it (putting aside the fact that knowing what 'will' happen would change your behavior and change what you experience), or one doesn't remember it, meaning it's hard to call that potential experiencer 'you' - it's not meaningfully different from somebody else being born and experiencing the same life as you have.

Expand full comment

It's fascinating that Indians are so much more likely than Americans to say they would like to repeat their life over again.

As you note, maybe religious beliefs play a role in terms of what the plausible alternatives are seen as being. In Buddhism--and I would assume in Hinduism as well--*any* kind of human life is emphasized to be an amazing stroke of good fortune, compared to the vastly more likely possibility of being reborn as an animal.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

> In fact, I’m not sure what to reject. Most of the simple solutions (eg switch to average utilitarianism) end up somewhere even worse. On the other hand, I know that it’s not impossible to come up with something that satisfies my intuitions, because “just stay at World A” (the 5 billion very happy people) satisfies them just fine.

If "just stay at world A" was your only desiderata, then sure, you could just enshrine that as your morality and call it a day.

But you probably also have other intuitions. And the general pattern of what the philosophers are doing here is showing that your intuitions violate transitivity. And if you have an intuition that your morality should satisfy transitivity, then it might well be impossible to come up with something that satisfies both your intuitions and that meta-intuition.

Analogy: Consider someone who prefers oranges > apples > bananas > oranges. Upset about someone leading them through a wicked set of trades that imply they'd choose an orange over a banana, they declare that there must be an intuitive solution, since they'd be OK with the world where they just kept their banana.

Expand full comment

Knowing that there are relative preferences does not negate the possibility that they may have a true preference among options. Maybe 5 billion is actually ideal (to them, or even objectively). To expand on your analogy, maybe they would have a hard time choosing among apples, oranges, and bananas, but would always say that strawberries are their favorite.

Expand full comment

>If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity

Well, if you extend to _infinity_ then you run into completely different problems with infinite ethics. And even if you just extend to arbitrarily large non-infinite numbers, every system of ethics that uses expected values and doesn't have a "maximum utility possible" cap breaks: https://www.lesswrong.com/posts/hbmsW2k9DxED5Z4eJ/impossibility-results-for-unbounded-utilities

Expand full comment

Trying to use math to contemplate ethics may have some very unintended consequences. The decision to use decimal places or negative numbers, for instance, may have drastic results on final conclusions. If the lowest possible number of utils is 0, and all real experiences are on a positive scale, that will have implications compared to considering negative numbers.

I saw something recently on Facebook showing that 1.01^365 is something like 37.8, while 0.99^365 is about 0.03. In an ethical system, shouldn't we consider 1.01 and 0.99 really, really close? But iterated enough times, one gets smaller and the other gets larger - counterintuitive, but also an artifact of math, not ethics. If we instead used full whole numbers instead of decimals, the resulting numbers of 101 and 99 would have less drastic differences (in that they both go up when iterated, at the very least).
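For what it's worth, the compounding arithmetic is easy to check; this is just a sketch of the scale-choice point, not a claim about the right utility scale:

```python
# The compounding meme, with corrected notation, plus the whole-number contrast:
print(1.01 ** 365)   # ~37.8  -- tiny daily gains compound upward
print(0.99 ** 365)   # ~0.026 -- tiny daily losses compound toward zero
# With whole numbers, both trajectories at least move in the same direction:
print(101 ** 10)     # both grow when iterated...
print(99 ** 10)      # ...though the gap between them still widens
# Whether one trajectory "flourishes" while the other "collapses" depends on
# where zero and one sit on the chosen scale -- the artifact-of-math worry above.
```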

Expand full comment

I still find it really weird that the Repugnant Conclusion seems wrong enough to make Scott question everything. Suppose we knew that, 150 years from now, a pandemic would kill off 10% of the Earth's population, and if we all sacrificed 5% of our income today, we could prevent it. I believe Scott would support doing that, so he supports harming actually existing people to help potential people (as everyone who would be harmed in 150 years is not currently alive). Is the distinction that death is bad for reasons that go beyond the fact that it ends someone's existence? As in, preventing death is a worthwhile goal, but adding existence isn't?

Expand full comment

Ending existence is worse than not creating existence. Non-existent people do not have a preference to be alive, while existent people do have a preference not to die. So preventing death is a worthwhile goal, while creating new humans, especially if it's likely that they are going to die, is not so much.

My intuition tells me that creating a new human life is itself net negative in utility in a universe where death is a thing. This negative utility can be outweighed by other factors, like the preference satisfaction of this human, or of other humans due specifically to this new human, which would not happen counterfactually - but still, the act itself is not a gift to the new human; it's a loan from them.

Expand full comment

Apologies for an intense question, and feel free not to answer, but: do you think you'd be better off having never existed in the first place?

I definitely feel like I'm better off for existing, even though I will die someday, and that's definitely an influence on my moral intuitions. I can understand feeling differently, however.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I don't think I'd be better off having never existed. If I died today I would consider my loan repaid by the preference satisfaction that I experienced and created through my life.

I can definitely imagine people preferring never to have existed. And I expect that even some of the people who claim to prefer existence are in fact wrong, due to already existing and thus not being able to properly engage with the question.

If I knew for a fact that my existence made all other people in the world less happy - the repugnant conclusion scenario - I would prefer not to exist.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I think you're conflating two different meanings of "potential people" here.

In your pandemic scenario, barring some horrible planetwide disaster or genocide happening in-between, those people in the future *will* exist, so we should take their future preferences and happiness into account. (If only because they will be the children and grandchildren of people alive today, and people tend to value the wellbeing of their offspring!)

What Scott is opposed to, is assigning value to *hypothetical* future people. Like, we can choose policy A which will lead to a future cyberpunk dystopia in which a hundred billion people live miserable lives in tiny apartments in a coast-to-coast skyscraper world, or policy B which will lead to an idyllic pastoral world in which two billion happy people all have a nice little house surrounded by lots of nature and clean air. (Just to be clear, we're not going to mass-murder or forcibly sterilise existing people -- policy B will just cause people across the world to voluntarily have fewer children in the coming generations.)

In this case, not only do those people not exist yet, but depending on which policy we choose, they may never come into existence -- they are purely hypothetical at this point. In that case, the hypothetical people in the cyberpunk dystopia do not get a vote; the fact that choosing policy B will cause the non-existence of the cyberpunk people does not carry moral weight.

Expand full comment

I don't buy this distinction between potential people. Suppose we're debating whether to adopt policy X, which would increase the future population by some amount. This would argue that we shouldn't put moral weight on the additional people who will exist on account of policy X, because they're not people who WILL exist absent the policy. But now suppose X is already the policy, and we're debating whether to repeal it. Shouldn't those people have moral weight now, as they're people who will exist? Do we really want a potential person's moral weight to depend on the current policy regime?

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

> This would argue that we shouldn't put moral weight on the additional people who will exist on account of policy X, because they're not people who WILL exist absent the policy.

Correct, that's what I believe, and if I understand Scott correctly then he agrees.

> But now suppose X is already the policy, and we're debating whether to repeal it. Shouldn't those people have moral weight now, as they're people who will exist?

The people who *already* exist today as a result of policy X having been active in the past, certainly have moral weight. The people who *may come into existence in the future* as a result of policy X, do not have moral weight so that is not an argument against repealing policy X.

(Edit: but if we *do* institute / not repeal policy X, then those foreseeable future people do have moral weight, even though they don't exist yet, so we should avoid doing things today which will foreseeably make their future lives worse!)

To get a bit more concrete: in general, poor people in developing countries tend to have lots of kids, while in most rich countries with good social security and highly educated populaces, the population size is stable or shrinking if you don't count immigration. So it would appear that any policy which increases wealth, education etc, is likely to have decreasing birth rates as a side effect. Does that mean that improving wealth and education is bad, because of the potential people it kills?

Expand full comment

If I’m understanding correctly, you seem to be saying that it’s fine to do something which will cause a future person to never have been born in the first place, but bad to do something which will make a future person worse off. I can understand that distinction, and other people here seem to find it intuitive, but it is weird for me; saying I should care about making someone’s life 100 years from now 10% worse, but not care about making them never exist at all, seems odd.

For the concrete case, I think the population decrease is a downside of improving wealth and education. Whether it makes those things bad depends on the specific parameters. I don’t think that any increase in future population justifies any amount of making current generations worse off, but I do think there is some sufficiently large population increase that would justify making current generations’ lives worse, so long as those lives are still worth living. I won’t claim to have an excellent sense of how much weight we should put on each thing, but putting zero weight on potential future people seems extremely weird to me.

Expand full comment

> saying I should care about making someone’s life 100 years from now 10% worse, but not care about making them never exist at all, seems odd

Well, the difference is that if you make somebody's life 10% worse, they will be unhappy about that, which is bad and you shouldn't do it, but if they never exist then there's nobody to experience any unhappiness.

Expand full comment

But there's no one to experience the happiness! I feel like there's a distinction here between "suffering is bad" utilitarians and "happiness is good" utilitarians.

Expand full comment

If I put a nuclear bomb in a major metropolitan area, and set it to go off in 150 years (and there's no longevity treatments, etc, in the meantime; and we're assuming that this bomb will last 150 years in effective function, pedants!!), then in 150 years many people who do not currently exist will be murdered, by me. If, inversely, I disarm a madman's 150 year nuke I have saved many people's lives, even if they do not currently exist.

If I instead start a major philosophy and convince people to have fewer children, then in 150 years there will be fewer people alive, but nobody will have been murdered. If, inversely, I start a major philosophy to convince people to have more children, then more people will exist, but I have not saved anybody's lives.

These seem intuitively to be two clearly different scenarios to me.

Expand full comment

Define "will exist". If people spontaneously decide to all stop having kids, is this preveneting the existence of people who "will" exist? Why are reproductive choices different to policy choices (that affect the existence of future populations)? They "will" only exist based on my current intentions to have children, but that can all change in the same way government policy can change.

"Depending on which policy we choose, they may never come into existence"

Why are individual behaviors different to this (with regards to the value of future people)? Me changing my mind and not having kids will have the exact same impact as a government policy that reduces the number of future people by one, and yet you treat only the latter as dealing with hypothetical people.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

By "will exist" I mean that in the context of whichever policy or thought experiment we're discussing, those people are assumed to exist in all relevant branches of the future.

I was responding to a scenario in which we know that a pandemic will kill 10% of the world population 150 years from now, and we can either invest resources in preparing some vaccine or treatment in advance, or let those future people die. In both branches of the scenario, those future people will come into existence; the question is whether to save their lives or let them die. So for the purpose of the discussion to decide on whether to invest in the treatment, it is assumed that those people will exist, and have moral value.

Edit: I don't think I am treating policy choices differently from individual choices. My difference is between a choice which will cause a hypothetical future person to never come into existence in the first place, versus a choice that will let them come into existence and then kill them, or harm or inconvenience them in some other manner.

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

To me the conclusion to draw from this is that the question of what we owe the future and natalist considerations are separable from one another. If everyone in the world were to wake up tomorrow a happy antinatalist - no intellectual or biological desire for children, and capable of living a perfectly happy and fulfilling life in the knowledge that this generation would be the last - I think I would feel a twinge of sentimental sorrow for the great achievements of human civilization condemned to the void, but otherwise it's not intuitively clear to me that there is anything blameworthy in it.

On the other hand, assuming the perpetuation of the species as a basic preference, it feels natural to include the preferences of future beings in moral considerations.

I doubt any of this could be formalized without arriving at the same disasters that every other attempt to formalize ethics seems to lead to, but apparently my base intuitions distinguish sharply between possible and likely with respect to existence. The idea of assigning a measure to every possible configuration of consciousness across all possible universes feels completely alien to me.

Expand full comment

I don't understand the cancer and hammer example. Isn't killing cancerous cells in a test tube a necessary (but not sufficient) condition for killing cancerous cells in the human body? Why wouldn't observing cancer cells being killed in a test tube be a reasonable feedback mechanism?

Expand full comment

The hammer example seems wrong to me, because it is indeed a feedback mechanism - but the feedback we know to be true is that a hammer hitting your toe causes extreme pain. There's no "use the hammer to test for cancer" analogy. What that feedback mechanism teaches us, and nothing else, is to stop hitting ourselves with a hammer.

Expand full comment

Yeah isn't that the point? The "use the hammer to test for cancer analogy" is "use empiricism to test for far-future utility". The analogy applies because the response is the same: that tool doesn't work here. Use a different tool, even if it doesn't teach us [how to cure cancer/how to max far-future utility] as clearly as [hammer/empiricism] teach us to [stop our toes from feeling pain/maximize present utility].

Expand full comment

Sure, but that adds a layer to the thought process that may not be accurate. A hammer is such a bad tool to test for cancer that it's absurd. Empiricism to review choices and impacts in the future may not be fully usable, but it's literally how we've evaluated choices, ethical positions, and the results of actions throughout history. That's a big part of the study of History - what worked, what didn't work, etc. If we're trying to make an analogy as bad as a hammer for cancer detection, we would be better off with something like reading the bones.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Probably not necessary. For example, any therapy that relies on altering/enhancing your existing immune system won't work in a test tube (unless your immune system is also in the test tube, but maybe it's a holistic enough approach that you can't fit all the necessary bits or even get them to function outside a human/animal).

ETA: Regardless this seems fairly analogous to long-termism. No doubt there are some tentative, approximate feedback mechanisms we can look at; they're just weaker than if we were working on an existing problem directly.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Couple of thoughts, before the comment count swells into the hundreds:

I always wondered how many people are selfish and simply not particularly interested in altruism. You know, not necessarily sociopathic and not caring about other people at all, just mostly interested in themselves. Or perhaps mostly interested in their own families.

Watching the decay, resurgence, and repeat decay of the American big city, as well as our attempts to bring democracy to the world (or were we just after oil the whole time?) has made me skeptical of these grand projects of improvement. It seems like well-intentioned liberals/progressives (and oil-grubbing conservatives?) often do more harm than good, outside events have a much bigger effect than anything we can do, and complicated hypotheticals about bringing people into existence with happiness 0.01 are of limited practical value since there is currently no way to quantify happiness that effectively.

Expand full comment

The concept of a civilization of ten billion people whose lives are barely worth living might be smuggling in something like a belief that says “civilizations can scale to arbitrary sizes with zero constraints on the distribution of emotional states of the people in them.”

I don’t think a 10-billion-person civilization can exist without lots of advanced technology, which means lots of capacity for pissed-off people to cause serious damage. Likewise, I believe hierarchies are inevitably a thing, and some people at the top may feel their lives are awesome.

Claiming everyone has exactly the same emotional valence gets around this issue, but that becomes an impossible scenario. It’s like asking what happens if you get a mass of hydrogen bigger than the sun all together in a space the size of the sun, but with the atoms under equal pressure. This scenario is impossible for hydrogen atoms. What if the thing you are proposing for people is equally impossible?

Perhaps the repugnant conclusion is only valid if you claim that emotional states of people in a world, and states of that world itself, are purely orthogonal to each other. But what if they aren’t?

Expand full comment

Usually, when you have a problem even in a simple toy scenario, it gets even worse in more realistic scenarios with more variables. The repugnant conclusion arises from the mere fact that it is possible to sacrifice the preferences of some people for the existence of more people. It may be the case that some kind of real-world limitation wouldn't let the situation get that repugnant, but the burden of proof is on whoever claims that.

Expand full comment

> it is possible to sacrifice the preferences of some people for the existence of more people

Isn’t that just a problem with standard utilitarianism not having any notion of Individuals having a unique value?

Epstein having child sex slaves is totally fine in utilitarianism as long as the joy his clients get outweighs the unhappiness of the children. If pedophile island helps world leaders perform their duties better, isn't that totally fine under utilitarianism because of their outsized impact?

Expand full comment

Yeh. Or killing the noisy neighbour.

Expand full comment

Yes, but in practice it can't really be proven that world leaders would perform worse enough in the counterfactual to justify child sex. This argument is applicable to pretty much any instance of utilitarianism purportedly leading to an unintuitive conclusion in the real world.

Expand full comment

But we can’t prove anything about any of this. Proving anything about how people will respond, in general, without ruling out unintended consequences, is impossible.

Are cultural restrictions on sexuality bad because they restrict healthy sexual behavior? Or good because they encourage healthier long-term relationships, which lead to social stability and prevent long-term misery?

The answer obviously hinges on different predictive models of long term consequences of immeasurable things. I can imagine either of the above two claims being correct, or even both of them being correct.

So why take utilitarianism seriously at all, except for in more or less obvious cases where any moral system will give more or less the same answer?

Expand full comment

Because empirically moral systems don't give the same answer, even in the ostensibly obvious situations. Like I said in comments to the previous post, "the more radical idea is that saving 20 of those lives is obviously better than a donation of $100k to a museum/university/library/animal shelter near you, which people would likely agree with if asked directly but don't tend to think of on their own". The reason why EA is so important for a certain type of person is that they feel that civilization in general is oblivious to stuff that is trivially true to them.

Expand full comment

> the more radical idea is that saving 20 of those lives is obviously better than a donation of $100k to a museum/university/library/animal shelter near you, which people would likely agree with if asked directly but don't tend to think of on their own

I agree with this claim here, though I think we come at this from very different angles.

My experience tells me that if I want other people to adopt my values, or consider my claims about what is, and is not valuable, I have to do a lot to develop this person’s trust and belief in my credibility. Anything weird or outlandish that I do torpedoes this credibility immediately.

If EA were exclusively focused on causes that have obvious immediate good versus those that have negligible immediate good, I think maybe more people would listen. Once you start talking about AGI risk, I think you probably lose much of the audience that might have been open to donating to deworm the world instead of donating to their university.

Expand full comment

What if our prior on the negative value of child sex was really low? There are current societies where this is true, or at least significantly more true than the modern US. There are tons of ancient societies where this was definitely true - many many historical examples of powerful rulers that had very young sexual partners.

Deontological systems don't suffer from this problem - they can just say that child sex is bad. Utilitarian systems have to grapple with the math, and the math may in fact work out to be positive. I'm reminded of the newspaper opinion article from a well known feminist in the 1990s. I'm going to paraphrase because it was 25 years ago and I don't remember the exact words, but this isn't far from what was actually written: Feminists should be so thankful to Bill Clinton that they should all be willing to get down on their knees and service him sexually. That's a consequentialist argument that may in fact be accurate (I would say it's not, but apparently the author thought it was).

Expand full comment

I don't really get your point. Some of those current/ancient societies likely have deontological systems which just don't say that child sex is bad, so it certainly isn't a foolproof solution. Morality just doesn't have obvious fundamental axioms from which anybody can straightforwardly derive eternal uncontroversial truths, as our failure to discover them despite millennia of trying amply demonstrates, and if some system was unambiguously better than the others it would probably have won an overwhelming majority of appeal by now.

Expand full comment

No. What you are talking about is the ability to sacrifice the preferences of some people for the sake of the preferences of other people, and to consider it a good thing if more preferences get satisfied as a result.

The repugnant conclusion arises from the ability to sacrifice the preferences of some people for the EXISTENCE of more people, who would not exist counterfactually.

Expand full comment

But why are these two different? Is “child sex slavery is fine as long as it makes things better in net” somehow less repugnant than “having lots more, less happy people is better than fewer, happier people”?

Child sex slavery is a totally real thing that already happens. Massive societies where everyone is equal have never happened once in history.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

The thing is, the math for child sex slavery creating more utility than disutility just doesn't work. People really dislike being slaves. And the effectiveness of government officials doesn't really increase from having sex with a child compared to sex with a consenting adult. You just beg the question by assuming that it does and concluding that utilitarianism is wrong.

In cases where the math does work - killing a horrible dictator who is actively making the lives of many people worse - I have no problem at all with following this conclusion. Whereas creating more people and thus making existing people less happy just seems wrong.

Expand full comment

> People really dislike being slaves

What scale are we using here? I get that slavery vs freedom is obviously net negative. Does each additional act of slavery feel additionally bad? How do we know?

> And the effectiveness of government officials doesn't really increase from having sex with a child compared to sex with a consenting adult

How do you know this is true? What if it makes them feel powerful and less afraid, and therefore they act more like a sociopathic surgeon instead of someone who is uncomfortable around blood? It seems at least plausible to me that torturing other persons makes the torturer feel strong and powerful. What is so bad about allowing powerful leaders to torture, say, child rapists, if it makes the leaders feel stronger and less anxious, and even makes rape victims feel a sense of justice?

All moral theories are, I think, just predictive models about how emotions work at scale. I’m pretty sure the questions we are asking, like “what makes a good leader effective” and “what are the limits of how people feel” are likely unanswerable.

Utilitarianism says, if each marginal sex act of a sex slave has positive valence for the politicians that outweighs the additional negative valence for the slave, then it’s net good.

Your objection - which I agree is valid - is to make some claims about the nature of emotional responses of human beings to various situations. I share these objections and think they are sufficient to blow up almost ~all~ conclusions utilitarianism can draw about anything remotely complex, including the repugnant conclusion.

Expand full comment

On having genuine average utilitarian philosophical intuitions.

I'm kind of surprised that you claim you haven't met more people like this. For me, biting the bullet of the repugnant conclusion is much more difficult than biting the bullet of creating more, less-unhappy people in hell. Keep creating people whose happiness is higher than the average and you eventually end up in a situation where the majority of people are in heaven. This is still not the best of situations as long as hell exists, but it's obviously better than when everybody is in hell. Compared to the repugnant conclusion scenario, the way the ethical induction works here seems correct.

Maybe we need some extra assumptions to perfectly capture our moral intuitions. Something along the lines of a minimal acceptable expected happiness level for creating new people. But this is a small and easy fix compared to what you would need to do so that total utilitarianism doesn't produce the repugnant conclusion. Or am I missing some other thought experiments that show the craziness of average utilitarianism?

Expand full comment

"Keep creating people whose happiness is highter than the average and you eventually end up in the situation where majority of people are in heaven."

This isn't accurate. For example, you could start with people at -100 utility, add people at -95 utility, -92.5, -91.25, -90.625, ... Each step increases average utility, but in the limit it approaches -90, and now you're just torturing an arbitrarily high number of people.
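A quick numeric sketch of that sequence (the utility numbers are the ones from the comment above, nothing more):

```python
# Start with one person at -100, then add people at -95, -92.5, -91.25, ...
# (each newcomer halves the remaining gap to -90).
utilities = [-100.0]
new_u = -95.0
for _ in range(30):
    utilities.append(new_u)
    new_u = -90.0 + (new_u + 90.0) / 2  # next person: halfway closer to -90

running_avg = [sum(utilities[:k]) / k for k in range(1, len(utilities) + 1)]
print(round(running_avg[0], 3))   # -100.0
print(round(running_avg[5], 3))   # about -93.3
print(round(running_avg[-1], 3))  # about -90.6: rising, but stuck below -90
# Every addition raises the average, yet the population is an ever-larger
# collection of people with strongly negative welfare.
```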

Expand full comment

You don't even have to do that - if you have people at -100 utility, and then add people at -99.99 utility, then the average will rise with each additional person but never so much as reach -99.99. As long as at least one person has a different utility number, adding another person at the current-highest score will increase the average.

Expand full comment

Oh, that's a good point! Average utilitarianism's push to create people with just barely more utility will be much weaker than total utilitarianism's push toward the repugnant conclusion, but it's still problematic.

Expand full comment

> Keep creating people whose happiness is higher than the average and you eventually end up in a situation where the majority of people are in heaven.

This is not necessarily true. At time step n create someone with happiness -999-1/n.

Average happiness starts at -1000 after step 1 and continuously increases, but it never gets above -999. You're always in an ever-growing hell.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I am not a utilitarian, it's a quantity over quality approach. It, for example, obligates you to care about the lives of everyone equally, which I think is a bit silly because people aren't equal. Personally, the lives of Africans mean very little to me. In my view, they are really a real life version of a utility monster. No matter how many resources you pour into them, they don't seem to be getting better. Imagine if for example we took all that African foreign aid in the past fifty years and invested it in a space program. It's possible we'd be on Mars already! Over long time scales, this might even result in us colonizing another planet, or at least having an off-world presence. O'Neill cylinders and that kind of stuff. Perhaps this would even avert human extinction. We might feel a little silly with a world-ending meteor coming our way in 2034. I bet we'd regret caring so much about Africans then!

On the subject of births, I think it better to have a planet full of maybe two billion highly intelligent and moral people (no necklacing!) than an Earth full of people living with African living standards. If we want to expand our population we can go out and claim the free real estate in space, rather than overcrowding Earth. I think that's a more sustainable solution. Unfortunately this means many people may have to die (uh-oh!) I think a fair way of deciding who has to go would maybe look at who contributes most to the world. I think us Americans have contributed quite a lot so we're safe! As for Nigeria... well I'm not so sure. What have Nigerians contributed? Sorry Nigerians.. there's really only room for two billion of us.

Expand full comment

A good reason not to care about the utility of random Africans: they don't care about my utility, nor would I expect them to. Until random Africans start sitting around thinking about ways to improve my life, I can't see much point in thinking about ways to improve theirs.

From a game-theoretic point of view, it seems reasonable to care about the utility of others just about as much as they care about your utility. If you want to be generous, though, maybe you should care about others' utility twenty percent more than they care about yours. That's still a long way from caring about everyone equally.

Expand full comment

They could have utter disregard for your utility or even straight up hate you, but if say they became more developed (through e.g. aid) and became great scientists, their discoveries could materially improve your experienced wellbeing. Similarly, they could become absolute altruists with respect to your wellbeing but have absolutely nothing to offer you and so they're not going to increase your wellbeing (beyond any satisfaction you get from knowledge of their altruistic attitudes).

Of course, no amount of aid is likely to ever allow them to become great scientists or anything like that, but this example shows why I think people's instrumental value is more important than their attitudes towards you.

Expand full comment

> I am not a utilitarian, it's a quantity over quality approach. It for example, obligates you to care about the lives of everyone equally, which I think is a bit silly because people aren't equal.

Maybe I'm just thinking of 'utilitarianism' in a much more generalized sense than what most people think, but I don't think it's obvious that there is, or must be, 'one true utility function' and that, per that supposed function, 'all lives are equal'.

Expand full comment

“I’m not saying the Repugnant Conclusion doesn’t matter. I’m saying it’s wrong.” Well, the Repugnant Conclusion is just an instance of Simpson’s Paradox—we’re partitioning the superset of people into subsets each of which has a property contrary to the superset (utility of the superset increases, that of each subset decreases). So the thing that’s wrong about RC is…. Utilitarianism itself. A long time ago in grad school I met Parfit when he came to give a talk on this topic. This was just a few years after Reasons and Persons was published. I wish I had known about Simpson’s Paradox at the time to discuss it with him. My own view is that we have contradictory moral intuitions as a result of distinct evolutionary strategies that built them. So we can’t ever have a completely intuitively satisfying ethics. This is a reason to doubt that the AI alignment problem has a solution; there’s not a uniquely correct moral theory to align with.

Expand full comment

> Thanks, now I don’t have to be a long-termist! Heck, if someone can convince me that water doesn’t really damage fancy suits, I won’t have to be an altruist at all!

That depends - are you a long-termist the same way you are a medical professional? Or the way that I am a medical professional, i.e. not at all, but fully supportive of the profession's continued existence? If you want to perform a Gedankenexperiment in order to persuade others of your point, it costs very little to get it right. So why should we talk about what might happen in the Virgo supercluster millions of years in the future, if someone too poor to afford running shoes can point out flaws in your analogies that are not minor but go to the core of the matter?

Expand full comment

I also have the average utilitarian intuition on the hell example.

Expand full comment

If there were one baby in hell tortured by demons by being held over a lake of fire every 10 seconds, would this world be improved by another baby being held over a lake of fire every 11 seconds? That seems incredibly morally wrong.

Expand full comment

For some moral baseline on "Hellish conditions", an iterative process of babies being held over fires slightly less frequently with each iteration is basically the last few thousand years of human existence.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Sounds like a really dumb baseline to me. People thousands of years ago came up with the idea of an eternally punishing afterlife, and it wasn't supposed to just be the same as their ordinary life.

Expand full comment

What if the devil is just an average utilitarian?

As a fallen angel his suffering is far greater than the suffering of mere humans. So he seeks to fill hell with ever more human souls to raise hell's average utility. And what kind of human suffers least in hell? A sinful one, because sinful ones can say "well, I guess I deserve this" while innocent ones will suffer even more from the injustice of it.

Expand full comment

“Happiness isn’t exactly the same as income, but if we assume they sort of correlate, it’s worth pointing out that someone in the tenth percent of the US income distribution makes about $15,000.”

I thought the relationship between happiness and income was somewhat weak and non-robust. Maybe it’s strong enough below a certain income threshold to justify this analogy?

I’m still somewhat skeptical, since I’ve met a lot of happy immigrants with not great incomes. If you push back and say “how happy were they really?” I would say much happier than a net-negative life.

Expand full comment

Point 1 sounds suspiciously like lizardman's constant. Or, more generously, like some baseline share of any society that will report being unhappy regardless of how well that society is actually doing. Imagine running the same survey in Norway and Sudan: would you see the same 10%? My intuition is it wouldn't be far off, but I would be happy to see alternate data!

Expand full comment

A significant portion of our self-reported happiness might be based on our relative social status, which would mean that increasing everyone's wealth/health might not boost self-reported happiness at all.

This wouldn't mean boosting wealth or health isn't good! But if it is, its goodness would be independent of anything captured by self-reported happiness, which would make it unlikely that we can reliably use the numbers Scott settled on as a proxy for the goodness of a life.

Expand full comment

I'm reminded of an SMBC comic https://www.smbc-comics.com/comic/emotion

I think the long term lesson is that it's often better to look at objective measures with some intuitions layered on top (i.e. "people should have more access to food, and maximizing this by turning the world into an ultradictatorship is probably bad") rather than paperclipping utility.

Expand full comment

9% is higher than usual for lizardman's constant, and I don't see that as implausible. Many people I encounter have expressed that sentiment; I expressed it in the past, and I could still express it if I operated with a different ontology about what it means to express your preferences (e.g., "I want to quit my job" does not generally mean you literally want to quit your job).

Expand full comment

"3: Regarding MacAskill’s thought experiment intending to show that creating hapy people is net good"

"hapy" ? Is that different than "happy"

"I agree with Blacktrance that this dones’t feel true, but I think this is just because I’m bad at estimating utilities"

"dones't"?

"gave the following example of how it could go wrote:"

"wrote" = "wrong"?

Expand full comment

On discounting, specifically on: "I think Robin later admitted that his view meant people in the past were much more valuable than people today".

I was completely taken by surprise by this conclusion. Discounting does seem absolutely necessary to me. Whenever you do math with sums that (may) add up to infinity, you end up getting nonsense, especially when you try to draw moral conclusions from such sums.

But the only way of discounting that makes sense to me is one that punishes distance *from myself*. So if I want to compute how valuable a person is, that value gets a penalty term if the person is far away *from me*, whether in temporal, geographical, or cultural distance. So both people in the past and people in the future should be less valuable than people today.

Of course, this means that the value of a human is not an inherent property of that human, but a property *between* two humans. The value of a being is not an absolute thing; it's always relative to someone else. Which seems very right to me. From HPMOR: "The universe does not care. WE care."
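
For concreteness, a minimal sketch of that kind of symmetric discounting (the exponential form, the rate, and the function name are my assumptions, not the commenter's): value is penalized by temporal distance from me in either direction, and geographic or cultural distance could be added as further penalty terms.

```python
import math

# Hypothetical illustration: exponential penalty on distance-in-time from "me",
# applied symmetrically to past and future people. Rate and form are assumed.
def discounted_value(base_value: float, years_from_me: float, rate: float = 0.02) -> float:
    return base_value * math.exp(-rate * abs(years_from_me))

for t in (-1000, -100, 0, 100, 1000):
    print(t, round(discounted_value(1.0, t), 4))
# People 1000 years away (in either direction) count for almost nothing;
# people alive now count fully.
```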

Expand full comment

It's a matter of consistency. If you want to make deals with the future, it will work if there are also mechanisms allowing the past to make deals with you.

Expand full comment

> I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

Really? I've heard it. I've even heard a philosophy professor - a famous one, at that - talk about how this is a major unsolved and undertreated problem in consequentialism. (And he counted himself as a utilitarian, so he wasn't dunking on them.) What's more, MacAskill has to confront this question more directly than others, given his other arguments.

At a basic level - both have the same consequences: one less person. If actions are bad because of the difference between the world where they're taken and the world where they aren't, then these two are equally bad, ceteris paribus.

The usual effort to get around that fact involves appealing to the fact that one person exists already, while the other person doesn't, and only people who do or will exist matter. This leads to straight-up paradoxes when you consider actions which can lead to people existing or not, so more typically, philosophers endorse some "person-affecting principle" where in order to be bad, an action must affect an actual person, and merely possible people don't count. A typical example of this view would be "possible people whose existence is conditional on your action can have their interests discounted".

But note, this is completely incompatible with MacAskill's longtermism. If we don't have to care about "merely possible" people's interests, we don't have to care about the massive, staggering numbers he cites. In fact, given person-affecting principles, it's hard to justify caring about the distant future at all, which is why some people absolutely do bite the bullet and say that, yes, pragmatic concerns aside, killing is equivalent to not creating.

Expand full comment

I know my comments have been intemperate, and I'm sorry. A big part of that is that these philosophical concerns are being treated as "exotic" - and they are, but only in the sense that thought experiments are being used.

The question being examined in the end is very basic, and very important: "when is it wrong to kill someone?" I think you can't get the right answer in utilitarianism, and I think this is a big deal. Seeing this written off as a mere philosophical worry really annoys me.

Expand full comment

I think that indifference between production/elimination is a feature of consequentialist thinking, because what matters is lost utility, not some consideration like natural rights being violated. If given the option between a hermit ceasing to exist and 2 babies being born, it seems clear that the babies being born likely has higher utility. We just have strong intuitions against eliminating people, a commission-omission distinction. Utilitarians use the trolley experiment to suggest this is misplaced - what difference does it make if the trolley's change of track is caused by you? Well, it seems to morally matter to me, although I would still switch it.

Expand full comment

Is there a name for the position that all we really have to rely on are our moral intuitions, and all arguments and extrapolations and conclusions have to pass muster with our intuitions? I have no problem with the idea that I am perfectly entitled to reject any philosophical conclusion that clashes with my intuitions, purely *because* it clashes with my intuitions, even if I can’t find any specific problem with the chain of axioms.

I don’t think people are good enough at thinking to actually understand the inner clockwork of our empirical values, and there’s no reason we should defer to “derived” values over revealed, intuitive ones. Philosophy is not like engineering, where you build a bicycle out of gears and struts and it all comes together exactly as intended, according to principles you fully understand. Philosophy (of the “Repugnant Conclusion” variety) is like trying to build a bike out of jello, and then telling everyone that you succeeded in building a bike and the bike works great, even though everybody can see that it is a pile of goo.

Expand full comment

I think what you describe mostly doesn't need a name, because it's mostly commonly accepted in philosophy. Our moral intuitions are all we ultimately have access to with which to judge a moral framework; this is commonly believed among all but a very particular strange breed of Kantian rationalist. Try looking up "reflective equilibrium", which describes the ordinary process of bringing our intuitions into coherence. It explicitly allows the kind of thing you describe - rejecting conclusions if they are sufficiently unintuitive. Also look up "modus tollens" if you're unfamiliar; it's the reasoning pattern behind proofs by contradiction. "Intuitionism" may also be of interest to you, though it likely refers to something subtly different than what you're going for.

You may be encouraged by recent philosophical interest in "practical philosophy", for example Joshua Knobe doing experiments to uncover what people actually decide in various moral dilemmas; though this might overlap with (or just be properly considered as part of) psychology, and you can feel free to still condescend to philosophy proper. I think most philosophers think this is a good thing to be doing, but still trust that we can make headway through introspection and argument without needing to undertake a project like that.

Expand full comment

The view that when an ethical truth seems apparent to us, that appearance gives us some reason to believe it, is ethical intuitionism. You can read Ethical Intuitionism by Michael Huemer to understand it better. Another book is The Good in the Right by Robert Audi, which I didn't like as much and didn't finish. I believe in weak natural rights, an omission-commission distinction, parental obligations, and so forth.

My stance is that you can lower your confidence in a moral conclusion because it seems counter-intuitive, but you are going to have to accept some conclusions that appear counter-intuitive. It is just going to depend on what seems the most true after evaluating all your intuitions and trying to eliminate distortions. So, for example, the RC intuitions might be distorted by the fact that it's difficult to think about large numbers.

Expand full comment

But intuitions can change, both within a person and certainly across generations. (I used to not think twice about eating meat, but after being vegetarian for several years, it does *feel* kind of wrong. See also: slavery.) Plus, sometimes your intuitions clash with each other. Hence, philosophy is usually guided by our intuitions, but attempts to find a "reflective equilibrium" amongst them, as Crotchety Crank mentions--which may result in accepting conclusions that (initially) clash with your intuitions.

Expand full comment

I don't share the intuition that the repugnant conclusion is repugnant at all. I very much want to exist. Given how specific *I* is, I gather the chance of me existing is astronomically tiny. It would still be significantly more likely in a world with more people, and I would still want to be *me*, for all possible *me's*, even if just barely above a neutral state. Now, I act selfishly because I don't think the veil of ignorance is a thing we should live our lives by, but when it comes to philosophy about total worlds? Totally.

Expand full comment

The chances of you existing are 100%. Unless...

Expand full comment

When considering moral scenarios, it's best not to put yourself in the scenario, because doing so is going to distort your thinking. If I were facing a trolley with five people on the other side, I might not be thinking so clearly.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

One of the best phrasings I ever heard was "don't make happy people; make people happy". It seems very easy to reach all kinds of oddball conclusions once you're free to introduce whatever hypothetical people you feel like and value them as much as actual people.

On the other hand, we do reasonably seem to care about future people - if we didn't, we wouldn't have to bother *overly much* about climate change (it would still be reasonable to do something about it, but the priority would be much lower).

So what I personally need is some kind of philosophy about hypothetical people, and some way to contrast them against actual future people. But either way, I don't think we need to concern ourselves about purely abstract hypothetical people.

I don't think it's unreasonable to say that a world with a million people, all happy, is in no way worse than a world with a billion people, all happy. The important thing is that the people who exist are, in fact, happy. Only slightly more abstractly, we want the people who will actually exist in the future to be happy as well. But beyond that? Who cares about hypothetical people?!

Another reasonable thing to interject here is that we shouldn't even make people happy, at least not in some kind of official policy way. Focus on removing sources of unhappiness instead - the happiness will take care of itself, and peoples' free choices are the best way to handle that. Happiness is good; it shouldn't be mandatory. A line of reasoning like this could also explain why it's bad to create people destined to certain net suffering, but not necessarily good (and certainly not obligatory) to create happy people.

Anyway, if we *really* care only about happiness in abstract hypothetical entities, then humans seem like a dead end. Then what we need to focus on is tiling the universe with maximally happy AIs who can replace us at a vastly improved efficiency as far as happiness is concerned (better even than rat brains on heroin!). We need to hurry to the point where we can abolish ourselves, and getting destroyed in the Singularity might be the best possible outcome.

Expand full comment

Indifference to population size, so long as everyone is happy, is an odd position. Wouldn't this mean that 1 happy person is as good as a million happy people? So 1 happy person is better than 0 happy people, but 1 happy person + 1 happy person is not better than just 1 happy person?

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Why would I care about the happiness of merely abstract hypothetical people? If all the people who exist are happy, then all the people who exist are happy. If a person who exists is unhappy, then that's sad and if there are obstacles to that person's happiness, it would be good if something could be done about it, but the nonexistence of a potentially happy person doesn't seem particularly important. Even if these people decide to happily go extinct, who am I to judge them? They don't hurt any actual person.

Expand full comment

I don't know whether you personally should care about hypothetical people from a self-interested standpoint, but we ought to make moral decisions about them that make the world better.

Are you indifferent to creating happy or sad people? That seems to be what is discussed. It's good to make people happy, everyone agrees. The difficult question is about creating new people.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I think it's objectionable to create (net) unhappy people; I'm neutral towards creating happy people. So if for whatever reason you *must* create people (this is when you know you're in a thought experiment), by all means go for the happy ones. Of course, once they *do* in fact exist, then you get all the normal obligations towards them. "Make the world better" would be about people who exist (and here's where we need some kind of distinction between future people and merely hypothetical people, because actually future people have to matter), or else you should be obligated to press a button that obliterates everyone alive in order to replace them with the same number of new, marginally more happy people.

And self-interest isn't a factor here. When I say "I", feel free to substitute an impersonal "one", or assume perfect universalization.

Expand full comment

Obligatory xkcd, re: cancer cells.

https://xkcd.com/1217/

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

>>Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

We had a conversation that brushed close to this topic on one of the open threads a while back. It's specific to the context of abortion (the OP posits that utilitarians should be pro-life, because the life-years lost to the fetus and its possible descendants when a pregnancy is ended by abortion outweigh the life-years cost to the mother of carrying a child to term and raising it), but as a few people pointed out, the logic had a hard time not extending from "preventing people from being born because of abortion" to "preventing people from being born because of contraceptives or abstinence." I don't think we ever really got to a line of logic that satisfactorily distinguished the two.

https://astralcodexten.substack.com/p/open-thread-233/comment/7805444

Expand full comment

I think that a total utilitarian position which holds that "we ought to maximize utility" and that "potential people have equal moral consideration" (the longtermism assumption) should take seriously the idea that we should massively increase reproduction. One way of doing this is by prohibiting abortion, and this is actually politically feasible. Other alternatives like incentivizing reproduction are likely better, but they are more costly financially. My argument was just to demonstrate that if we stop eliminating people in the womb it would likely increase overall human welfare. A less contentious example would be preventing spontaneous miscarriages.

Expand full comment

A true (total) utilitarian would allow, or even mandate, abortion and contraception of lives not worth living, or if you use the German, Lebensunwertes Leben. They would forbid abortion and contraception of lives that would most likely be worth living. That they argue otherwise has more to do with the background of western utilitarians in progressive ethics, combined with the fact that their friends see abortion as an inalienable right, than with the logic of utilitarianism, which I think is most exemplified today in the hardcore pro-PRC and pro-HBD camps.

Expand full comment

I think that almost all lives are worth living, and even lives that aren't worth living are often economically productive enough to be worth existing anyway. I don't know about full-on coercive eugenics... maybe in some instances (from a total utilitarian view), but that is not my personal view. I don't know what PRC is, but I am definitely pro-eugenics in the form of embryo selection and gene-editing. Actually, I think we should pour tons of money into that, more than we do into education, and then make it widespread through subsidy. We should also probably gene-edit farm animals to suffer less.

Expand full comment

PRC = People's Republic of China

I agree with your views on these matters. However, I'm ultimately pro-life in the sense that I believe abortion is barbaric and inherently contrary to the grain of Western morality - as Justice Alito stated in Dobbs, the recognition of an inalienable right to abortion is corrupting the rest of the US Constitution, and I believe it is corrupting Western ethical reasoning as well. We should be working to eliminate it at least as assiduously as we should work to eliminate meat-eating, even more so because it is killing humans instead of non human animals. My ideal would thus be gamete selection (or full genome synthesis) rather than embryo selection - I do believe humanity should work to eliminate genetic disease and deficiency.

Expand full comment

I actually have those sympathies to some extent. Creating embryos feels kind of wrong to me, but I think it's doing more good ultimately, and those embryos aren't conscious. Later abortions seem very wrong to me though. I'm somewhere in-between. I think a lot of abortions could be eliminated by better use of contraceptives.

Expand full comment

I'm glad to find someone else who supports gene editing but is at very least gravely uncomfortable with creating embryos in order to destroy them.

I'm curious about your comment that abortion could be largely eliminated with better use of contraceptives, though. I agree--but doesn't that directly contradict your total-utilitarian argument against abortion above? From your perspective, is there a moral difference between being aborted (before consciousness emerges, at least), and never being conceived in the first place?

The evidence suggests that countries which ban abortion *don't* have higher fertility for more than a few years afterward--people adjust to use contraception more conscientiously.

Expand full comment

Quote: "The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument."

...Scott, this suggests you are an implicit follower of the philosopher Jonathan Bennett (1930-). In the article "The conscience of Huckleberry Finn" (Philosophy, 49 (1974), pp.123 - 34), he analyses the morals of Huckleberry Finn (as portrayed by Mark Twain), Heinrich Himmler (yes, he had moral principles - very bad ones), and the famous US Calvinist theologian and philosopher Jonathan Edwards (with an even worse moral outlook than Himmler, according to Bennett). His moral take-home point after the analysis is this:

"I imagine that we agree in our rejection of slavery, eternal damnation, genocide, and uncritical patriotic self-abnegation; so we shall agree that Huck Finn, Jonathan Edwards, Heinrich Himmler.... would all have done well to bring certain of their principles under severe pressure from ordinary human sympathies. But then we can say this because we can say that all those are bad moralities, whereas we cannot look at our own moralities and declare them bad. This is not arrogance: it is obviously incoherent for someone to declare the system of moral principles that he accepts to be bad, just as one cannot coherently say of anything that one believes it but it is false.

Still, although I can’t point to any of my beliefs and say “This is false”, I don’t doubt that some of my beliefs are false; and so I should try to remain open to correction. Similarly, I accept every single item of my morality – that is inevitable – but I am sure that my morality could be improved, which is to say that it could undergo changes which I should be glad of once I had made them. So I must try to keep my morality open to revision, exposing it to whatever valid pressures there are – including pressures from my sympathies."

Expand full comment

A related topic is that -- personally -- I think it's better to try to extend people's lives, for example with anti-aging or brain preservation/cryonics, than to let people involuntarily die and create new people instead.

I've been very surprised to find that many people in the EA space disagree with this intuition.

For example, Jeff Kauffman's argument against cryonics, which was highly upvoted in an EA forum comment, is basically that cryonics is not worth it because if people involuntarily die, it's okay; we can just create new people instead. https://forum.effectivealtruism.org/posts/vqaeCxRS9tc9PoWMq/?commentId=39dsytnzhJS6DRzaE#39dsytnzhJS6DRzaE

As another example, see the comments on this post about brain preservation as a potential EA cause area: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible?commentId=puMSdFecxFsnKmEx5#comments

Expand full comment

I'm glad to see you elaborating on this. I wrote an article about why the population ethic in your last post is not going to work [1].

Your new theory is also going to have problems (“morality prohibits bringing below-zero-happiness people into existence, and says nothing at all about bringing new above-zero-happiness people into existence, we’ll make decisions about those based on how we’re feeling that day and how likely it is to lead to some terrible result down the line.”) This doesn't evade the RC fully; it just makes you indifferent between population A (small, happy) and population Z (huge, barely worth living). If we introduce one slightly below-average person, we get the RC again: population A + 1 mildly unhappy person vs. population Z would make you choose population Z. It also makes you indifferent to creating an extremely happy and fulfilled child versus one whose life is barely worth living.

In my article, I wanted you to more fully embrace intuitionism and accept stuff like weak natural rights. Here you accept intuitionism, which is good to see. Can I get you to accept weak natural rights? Since you are an intuitionist, and since you don't embrace the idea that all actions have to be utility-maximizing, perhaps you could accept the Repugnant Conclusion but reject the need to bring it about, and reject depriving anyone of their rights to attain it. That would be my position. I think the RC is the best of the bad conclusions, but we don't have to bring it about. People aren't morally obligated to reproduce. And we shouldn't make people if it makes everyone's lives worse.

Thanks for talking about this more. This stuff is very interesting.

[1] https://parrhesia.substack.com/p/scott-alexander-population-ethics

Expand full comment

I think one way out of the RC is to deny that a moral system should be able to give answers to all hypothetical questions, and instead to try to derive a system that gives answers in actually actionable situations. Such a system would be quite sufficient to guide one in all practical situations.

Under such a moral system, we can still engage in some thought experiments, but the meta-questions such as choosing between two hypothetical worlds would have no meaning.

In other words, even if one accepts (a pretty dubious) idea of aggregating utilities between people, one can still refuse to compare aggregate utilities between hypothetical worlds.

I think this is actually equivalent to a pretty traditional approach to these questions that would have been obvious to any thinker a few hundred years ago. They would have easily recognised who is the only entity that could wish a world with 10Bn near-hell suffering people into existence and would know that a series of complicated transactions with this entity is not going to end in anything good.

Expand full comment

I think the whole concept of "moral systems" is mistaken, and that ethics is a lot more diffuse, fragmented, and ad hoc than can be captured by one or even two systems.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Coming up with a system that gives you answers to "what ought I do in this situation?" is no problem at all. The only question is: why should you trust it to give the right answers?

Or, to put it glibly: why ought you follow the system?

This has an eminently practical dimension: if you assume that there are some situations where your moral system is inapplicable, you need to know what those are, and what you ought to be doing if - against all odds - you find yourself in one.

Consider a system that tells you you ought to do whatever will have the best outcome. Seems sensible, right? How will you know which outcome is best? It's the one that makes the most people happy. Also seems like something worth striving for, no?

Except, of course, I just described utilitarianism, together with the RC and all the other nastinesses, so - not so good.

So, maybe we can do utilitarianism, but without going so far? But there's no such thing as "so far" - there is no single, identifiable step where we go from "let's make people happy" to "let's fill the world to the brim with miserable humans", or from "let's reduce suffering" to "let's blow up the world". The various unpalatable conclusions are baked into the system, and whilst we can get rid of the ones we know about with stipulations like "don't blow up the world", that leaves all the other ones we haven't discovered yet.

Just to be clear: the masses upon masses of near-suicidal people are there merely to prove a point. If we consider a world with fewer, but very happy, people, to be superior to a world with vastly more, but very unhappy, people, we don't need them to *actually* be near-suicidal - just sufficiently less happy than the population we started with.

Keep in mind that "should you have more kids/encourage people to have kids?" isn't some pie-in-the-sky hypothetical. This is exactly the sort of practical question we want our moral system to provide guidance on.

Expand full comment

Actually, no, a system that requires you to go for "the best" outcome does not sound sensible at all. The mere existence of a best outcome imposes pretty strong conditions on the moral system (What is the biggest integer? What is the biggest rational number that does not exceed 5? Why do we think any two outcomes should be comparable, or that the relation "better" should be transitive?), and the ability to determine the best outcome is an even stronger requirement. It is no surprise that the imposition of many strong requirements leads to paradoxes.

I agree with you that the question "should I have more kids" is a practical question that requires an ethical system to give a reasonable, conscious answer to. The funny thing is that I have had to answer that question more than once, have published mathematical papers about the implications of various individual utility functions, and have some amateurish interest in paradoxes like the RC - and I do not think that these kinds of hypotheticals help in any way to answer questions like that. There is simply no reason why the superiority of one hypothetical world to another should translate into the desirability of a specific nudge to our world.

Expand full comment

I think you're reading too much into "best", which is properly understood as "we want our moral system to have good results and not bad results".

Don't make the ontological argument error: it is perfectly possible to talk about "best outcome among those we can think of", without having to resort to some platonic ideal of "bestness". You merely need to ask "would you prefer A or B?"

You are correct that we can avoid the Mere Addition Paradox (and the Repugnant Conclusion) by assuming that "better" isn't transitive. At the end of the day, there's nothing stopping us from saying that B is better than A, C is better than B, and that A is better than C. All I need to do now is figure out a way to turn this into a money pump.
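
For what it's worth, here is a minimal sketch of the standard money-pump construction against exactly that preference cycle (the assets, the fee, and the starting cash are assumed for illustration):

```python
# An agent with intransitive preferences (prefers B to A, C to B, and A to C)
# will pay a small fee for each swap it prefers. After one full cycle it holds
# the asset it started with, strictly poorer.
preferred_swaps = [("A", "B"), ("B", "C"), ("C", "A")]  # prefers the second of each pair

holding, cash, fee = "A", 100.0, 1.0
for current, better in preferred_swaps:
    assert holding == current
    holding, cash = better, cash - fee  # the agent accepts every trade it prefers

print(holding, cash)  # "A", 97.0 - same asset, less money, repeatable forever
```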

> There is simply no reason why superiority of one hypothetical world to another should translate to desirability of a specific nudge of our world.

The hypothetical isn't intended to guide your individual actions one way or another, but rather to stress test the system itself. If the hypothetical comes up with a conclusion that is repugnant to you, it means there's something wrong with your system.

Suppose you - being a utilitarian - want to have kids, so you interrogate your moral system to be sure that it's the right thing to do. You take as assumption that they will have lives worth living (presumably, as a parent you'll do your best to ensure this), and that any reduction of your happiness as a result will be more than offset by the happiness they will experience (plus, you get to be happy from having a kid). The system tells you that this action is morally laudable and you get right to it.

On a different occasion, you observe the possibility of reducing inequality somewhat. Some folks who are among the best off will be slightly worse off, but some folks who are worst off will be significantly better off.

But wait, there's more! Happiness isn't a zero-sum game. By reducing inequality, you end up making everyone just a bit happier, meaning not only that everyone is happier *on average*, but that the population's total happiness goes up as well.

Quizzing your moral system, you find that an action that increases both total and average happiness (utility) is indeed morally laudable, and you get right to it.

Congratulations! You've taken the first steps to making the Repugnant Conclusion a reality.

And just in case you're thinking that you'll know when to quit, I must point out that it feels the same at every step, whether you're starting at an average happiness of 50, 25, or 5. For all you know, the seagulls are already circling.
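
To make the stepwise drift concrete, here is a toy run of the argument above (all the numbers are assumed, not from the post): each round adds new people with lives worth living, then equalizes with a small bonus so that both total and average utility rise at that second step, yet across rounds the average slides steadily toward "barely worth living".

```python
# Round structure: (1) mere addition - new, slightly less happy people, which
# leaves no existing person worse off; (2) equalization plus a small bonus,
# which raises both total and average utility relative to step (1).
population, happiness = 5_000_000_000, 100.0

for step in range(1, 11):
    new_people = population
    new_happiness = happiness * 0.8           # lives still worth living
    total = population * happiness + new_people * new_happiness
    population += new_people
    happiness = (total / population) * 1.02   # equalize, with a small bonus
    print(f"round {step:2d}: population {population:.2e}, average happiness {happiness:5.1f}")

# After 10 rounds the average has fallen from 100 to roughly 42 while the
# population has grown about 1000x - and round 11 feels exactly like round 1.
```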

Expand full comment

I was indeed interpreting "best" in a very literal mathematical sense - my bad.

Money pumps are pretty easy to avoid once one removes the requirement of a total order on scenarios and allows path dependency in the comparisons.

As a matter of actual comparisons, the classical formulation of RC postulates that we are comparing separate hypothetical worlds and are not manipulating utilities of actual beings. You seem to have just switched to the latter and that, for me, kills the paradox completely. I find large scale equalising redistribution grossly unethical on its own, so the whole exercise becomes pretty trivial - we have a chain of three operations, one of which is evil, and we come to a bad result, duh.

Expand full comment

Can we bring Rawls into this conversation, or is that against the rules? After all these universes are created, you get to choose which one to live in, but you don't get to choose who you get to be. Would you rather live in the 5B universe of maximum utility, or the barely-above-neutral universe? Would you even choose the 5B max + 10B 80% utility universe or the first one?

Expand full comment

Variation: you don’t get to choose the universe either. You will become a randomly chosen person in one of the universes. If Universe A has 100 times as many people in it as Universe B, you have a 100x greater chance of ending up in Universe A.
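
A quick back-of-the-envelope version of this variant (the populations and happiness levels are assumed purely for illustration): becoming a uniformly random person across the universes makes your expected happiness the population-weighted average, so a big-but-less-happy universe dominates the lottery.

```python
# Hypothetical numbers: universe name -> (population, happiness per person)
universes = {"A": (100_000, 30.0), "B": (1_000, 90.0)}

total_people = sum(pop for pop, _ in universes.values())
expected_happiness = sum(pop * h for pop, h in universes.values()) / total_people
p_in_A = universes["A"][0] / total_people

print(f"P(you end up in A) = {p_in_A:.3f}")              # ~0.990
print(f"Expected happiness = {expected_happiness:.1f}")  # ~30.6, dragged toward A
```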

Expand full comment

I have a long-running dislike of comparing future worlds as a means of making decisions, stemming from:

-There are no *inherently* desirable states of affairs.

-I am in a brain with preferences, and I would like to achieve those preferences. Non-survival preferences I categorize as "happiness" throughout this logical chain. Thus some states of affairs are more desirable than others.

-I could do whatever would result in the most happiness for me. This is really hard to analyze, and my subjective happiness (the thing that actually matters) is only very loosely correlated with my objective well-being.

-I contextualize this as "defect" and prefer to "cooperate" within my community. The best outcomes come if "my community" means "any human who ever has or will exist".

"Should this also apply to aliens" and "should I care about animals" are outside of scope

-From this I choose to optimize my philosophy for the outcomes that would come about for me if everyone held it; there's some muddy stuff here, but ultimately I more or less reach the veil of ignorance.

anyway, my takes as a result:

-Making a person exist is morally neutral to that person, but by creating a person you take on an obligation to love them ("put them in the category of people who are in your community, i.e. the people whose happiness you treat as equal to your own*"). This doesn't make making a person exist morally neutral overall, because it will affect existing people; but the existing person it will affect most is yourself, so "have kids if you want to, don't if you don't" seems like the best axiom to me at the moment.

-Any long-term view must assume future people will take the same long-term view

*Caveat that this says "happiness" and not well-being intentionally, and that placing the happiness of another as equal importance to your own does not mean that the two happinesses should be equal

Expand full comment

The Repugnant Conclusion, very specifically, is basically impossible for me to agree with, because I can't justify using EDT (the same thing as Defect, really) to care about others.

And using Cooperate, I very strongly do not want it, nor does anyone who actually exists or will ever exist.

If having more life is inherently good, fair enough: I disagree, and disagree conceptually with valuing things inherently (inherent values are necessarily adopted from your circumstances), but if it's in your givens there's no point arguing.

(Generic you, "reader of this comment")

But if it is not, the nonexistent have no preferences because they do not exist, and so I see satisfying them as negligible.

(On the other hand, -future- people do matter, and once it is a given that they exist, satisfying their preferences does matter. "Don't have any more children so we stop having to care about the future and can only optimize for the present" is a take that I only disagree with in the sense that I don't believe that banning children and guaranteeing the extinction of the human race would result in happiness in the present.)

I haven't read the competing bad conclusions paper yet; I will do so.

Expand full comment

Satisfying one's own moral intuitions is, notably, an important part of happiness - but the goal is not to actually get the world to the standard my intuitions find "better"; it is to make my intuitions feel satisfied without self-deception.

I very strongly do not want to do the exact best thing by what my moral intuitions say would be good, because the end result of everyone doing that is a mess of conflicting intuitions fighting each other, lots of martyrdom, and no increase in happiness.

Expand full comment

My intuitions, best I can tell, maybe not comprehensive:

-I trust people to love all aspects of me. Unhappiness caused scales with their proximity to my life, and how many aspects I am uncomfortable sharing. Perceived betrayals of trust are earthshattering. Being around people I do not trust is mitigated by having the feeling that I am -successfully- achieving some sort of goal at their expense

-I have no goals that I don't understand why I'm doing them. Scales with how much they matter to me, and how long I have the goal without making meaningful progress

-I have no goals that I don't have the feelings that I can work towards. Usually this can be dealt with by restructuring my internal understanding though (either to find a way to make progress, or to remove the goal)

-I do not feel that I am in danger

-People I love are achieving their preferences

-People I love continue to exist, and exist in my life

Expand full comment

Am I missing something from Utilitarian concepts/arguments? They seem based on a fallacy: that there's some immutable H value of happiness. But that's not my experience of how humans think/work at all.

First of all, if you reach a certain point of H and then stay there, your perception of how much H you have decays (or increases; either way, it reverts to a "mean" perception of H). Concretely: if you win the lottery and are now immeasurably richer than you ever thought you could possibly be, you'll be really happy for 6 months, at which point you'll feel pretty average again. If you break your neck and become completely paralysed, you'll be really unhappy for 6 months (okay, maybe 2 years), by which time you'll feel pretty average again.

So actually "Heaven" is where every single day you have H+1 happiness over the previous day. "Hell" is where every single day you have H-1 happiness over the previous day. It doesn't matter whether you start in abject poverty or amazing riches, if your H is monotonically increasing, you're in heaven. Decreasing, hell.

Second of all, I know everyone hates Jordan Peterson because he doesn't toe the leftist line, but he has some good points. One is that "happiness" is not well-measured. If you have all your stuff and things and money, you think you're happy, but after a few months, you might well want to kill yourself. Meanwhile, if someone else has nothing at all, after many months, they might be full of energy and still getting on. Why?

Because having a sense of purpose, meaning, a goal that you're aiming toward, is way more important than having all your comforts and stuff and things.

So again, someone in abject poverty, who feels they have meaning in their life, raising their kids, helping their neighbours, and generally doing "productive" things in their lives will maybe not report being "happy" but if you ask them if they'd rather be dead, they will certainly say no.

On the other hand, someone in the top 0.01% of global wealth, with everything they could ever possibly want, but with no purpose or meaning in their life (whether self-inflicted or whatever) might well commit suicide. Again, not because they're "unhappy" but because... what's the fucking point?

This is one of the reasons I am unconvinced of the utility of, eg, Universal Basic Income. If you have a few hundred million people sitting around with no purpose, they will start acting destructively. Either to themselves or everyone around them. Sometimes it's fun to just wreck shit, especially if you've got nothing else going on.

Do all these utilitarian philosophies say nothing on this, or are all these well-understood, and just no-one ever brings them up in these extremely long, in-depth utilitarian discussions?

Expand full comment

To continue to riff on this... why is the Repugnant Conclusion all that repugnant? If everyone has the ability in the hypothetical situation to have meaning, purpose, and "H+1 happiness" every now and then... then everyone would prefer that conclusion.

Are we somehow trying to hold the amount of happiness constant? Does any of that have any real-world application, or is this a thought experiment so divorced from actual human psychology and reality that it's just a fun game, like playing Civ and losing?

In any case, I'm reading these essays and comments, and coming away feeling like everyone's either missing the point, or they've already considered the point and dismissed it, and I'm just behind-the-times.

Expand full comment

I guess I should riff another riff.

This is why having kids is of such positive utility (I think). They provide a sense of purpose and meaning to what might be an otherwise meaningless life. Doesn't matter if you're rich or poor, if you have some kids, you will have a renewed sense of purpose that may have previously been missing.

Altruism is quite handy, since it provides a purpose/meaning for your wealth that otherwise might go to buying meaningless shiny trinkets or comforts. For someone who's data and analytics-focused, effective altruism wins out over straight altruism, since you know you don't want your hard-earned wealth going to useless/meaningless interventions, and are too introspective to just allow yourself to blindly give something to a random person based on a promise.

Again, is this "having meaning/purpose" part of the Utilitarian formula inputs, and I just don't know about it, and no-one ever brings it up? If we're trying to bake it into a single boring "H" variable, I just don't think that'll work. Unfortunately, I suspect measuring a human society will be a multi-variable formula.

Expand full comment

I have a related problem with utilitarianism and the apparent lack of specificity regarding happiness. I believe that people are too internally dishonest to even be able to tell how happy they are, so asking them doesn't reveal anything. I also don't believe we can answer the question with philosophy (i.e. the other end of the spectrum from asking people how they feel). Aristotle's notion of happiness being a thing acting according to its purpose works for a chisel cutting stone being obviously happier than one rusting in a drawer (or for a medieval Schoolman who starts with the premise that the purpose of Man is to love God), but doesn't close the gap for us here.

From that perspective, it's true that giving someone a sense of purpose also gives them a sense of happiness, but is that good? What if he's just lying to himself? What if using kids or bugnets or AI research as a proxy for an unknowable purpose helps the person doing so move away from an 'authentic' encounter with purposelessness and makes the world a little more dishonest and 'bad' as a result? (And yes of course I can turn this argument around -- what if this is a form of dishonesty that excuses my not spending all my money on bug nets for Africa, and that any criticism of altruism -- effective or not -- is making the world worse.)

Expand full comment

Your response to Petey seems to miss the core point of the argument. Humans have an aversion to ceasing to exist (and a rather strong one at that), so the point at which people are indifferent between continuing to and ceasing to exist is the wrong point to assess for a zero point in utility.

The more relevant question would be along the lines of “would you prefer your current life to having never been created”, though this would still be hard to actually ask people without triggering their feelings of aversion to ceasing to exist (TODO: find a way to survey non-existent people).

Because of these considerations, actual people who are barely-not-suicidal or barely-subclinically depressed are *precisely the wrong* group to use as a reference for small-but-positive utility, due to the strong human dispreference for ceasing to exist throwing the whole comparison off.

Expand full comment

I wonder if part of the answer might be about epistemic humility, and margins of error? You ask

> Start with World P, with 10 billion people, all happiness level 95. Would you like to switch

> to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100?

I think my actual answer is no, because I might be wrong about it. I'd want some margin for error, as well as for day-to-day variation. So in the original argument, I'm unwilling to swap world Q for world P without some margin for error - I'd want world P to have everyone at perhaps 91 or 92 happiness units. This gives me a margin for everything - error in my theory, errors in measurement, daily variations, and so forth.

In short, I'm unwilling to swap this world at level X with another one at the same level: I'd only swap it for one with a higher than X level, by some margin to be determined when I know how you're doing the measuring, etc.

Expand full comment

Perhaps you misread? World P has happiness 95, which is greater than 91 or 92. It is both more equal AND has more total utility. Hence why preferring P to Q is called “non-anti-egalitarianism”, "ie you don’t think equality is so bad in and of itself that you would be willing to make the world worse off on average just to avoid equality."

Expand full comment

I'm not sure if this is really an original thought or not, but does it help to approach this from another angle that more explicitly ties a decision to the creation of suffering vs joy?

For example, imagine we discover an alien species who have two ways of producing offspring. One method is roughly analogous to human reproduction. It creates relatively healthy offspring and most maladies are addressable with modern science; expected utility is average for the species.

The other method creates highly variable offspring. 50% of offspring produced with this method are sickly specimens that die at a young age. 50% of offspring produced with this method are exceptional compared to normal offspring. They are stronger, healthier, more intelligent, are capable of experiencing far more joy, and live longer lives than the normal offspring.

Which method of reproduction should a moral alien choose?
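
If it helps, here is one way the expected-utility tally might look under total utilitarianism (all the magnitudes are assumed; the comment leaves them open, and the answer flips depending on what you plug in):

```python
# Method 1: offspring at the species-average utility (assumed value).
avg_utility = 50.0

# Method 2: 50% sickly offspring who die young, 50% exceptional offspring
# (probabilities from the comment, utility values assumed).
p_sickly, u_sickly = 0.5, 5.0
p_exceptional, u_exceptional = 0.5, 120.0

method_1 = avg_utility
method_2 = p_sickly * u_sickly + p_exceptional * u_exceptional

print(f"method 1: {method_1:.1f}, method 2: {method_2:.1f}")  # 50.0 vs 62.5 here
```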

Expand full comment

Gonna make another top-level comment, because this seems to address a holistic set of assumptions across dozens of comments, and I doubt I should be replying to each individually.

Predicting the future is extremely hard. Not least because there's a lot of chaos in the system. You simply can't model very far out into the future for most interesting problems. Especially when the inputs to the system are "whatever each of a few billion people want/need (or think they want/need) and all that changes on a daily basis."

At most I should be considering my offspring that I can see. Children and grandchildren (and if I'm lucky, I suppose my great grandchildren). Beyond that and I'm just gambling on chaos. You don't completely discount future people because you don't care about them. You simply realise there's very little you can do that will actually have the intended effect for future people.

I can give some homeless hobo $100 now, and be fairly certain what will happen with that for the next 20-30 minutes, or if he's a very reliable and well-known hobo, maybe 20-30 days. But I absolutely cannot know, for any reasonable definition of "know", what will happen to that $100 in the next 6-12 months. There are way too many inputs into the system.

So why should I have any faith in my ability to do anything for (or against) someone 100, 500, 1000 years from now? I don't discount future people in my utility calculations because they're any less (or more) important than people today. It's just that... there's nothing I can do for (or against) them. So why consider them at all? Effective altruism is hard enough with a short feedback loop. If the feedback won't come until after you die... is there really hope to get it right?

It seems the height of hubris to think that I will be able to come up with a plan that not only solves for chaotic systems, but also solves for a few billion people who disagree about anything (but I repeat myself), and also overcomes the present limitations on whatever tech/systems/knowledge those future people could possibly have that I don't currently have. How do I come up with the right answer without the benefit of hindsight those people will have 100 years from now?

I think we should let those future people solve their problems themselves. Not because we're selfish, but because we recognise our own limitations. Fix what you see wrong/broken now, today. Once all those problems are solved, then start gambling on the future, I suppose?

Expand full comment
founding

The trivial answer here is that preventing extinction is something "you could do" (broadly speaking - I don't expect extinction risks to be successfully addressed by individuals, rather than by more organized effort), where you don't need to worry overmuch about whether the effect is "real" or whether it's positive or negative (ignoring s-risks, which under my models are extremely unlikely). Assuming you buy one of the arguments for specific sources of extinction risk, the argument from there is simple: the only way out is to solve the problem(s). Not playing the game is losing by default. (This argument is weaker in cases where extinction isn't the default outcome of our current trajectory, and you think that efforts to address it might instead make it more likely. But in worlds where that's not true, well... you can always make your odds worse, but not by much.)

Expand full comment

"I’m not sure how moral realist vs. anti-realist I am. The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument."

Does this ever actually happen? What are some moral intuitions that have dissolved under logical argument?

Expand full comment
author

Wouldn't abolitionism and the general shift away from racism be an example of this?

Expand full comment

Can you conclusively say that these shifts happened under logical argument, as opposed to simply in response to economic incentives?

Expand full comment

This was already addressed (for abolition) in https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future

See the excerpt starting with "At the time of abolition slavery was enormously profitable for the British."

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Rejecting the incremental steps on the road to the repugnant conclusion makes sense if you assume our moral intuitions are driven by, among other things, a large factor of blame avoidance, rather than pure altruism. For example, preferring world A (5 billion 100-happiness people) to worlds B (5 billion 100-happiness + 5 billion 80-happiness people) and C (10 billion 95-happiness people) makes sense if you think of these people as potentially holding you blameworthy for whatever choice you make. If you go from A or B to C, you are counterfactually injuring the people who could have had 100-happiness but instead only have 95-happiness, and this instinctively feels dangerous. If you could choose C, but you choose B instead, then, similarly, you are choosing to make half of the people worse off than the counterfactual. But, if you choose A, the only people you are injuring are those who could have existed but don't, and they won't be around to blame or punish you.

You could try to get around this by saying that the sets of people in each world are pairwise disjoint, or that your choices won't be enacted until long after you're dead and can't be punished for anything, but I don't think the part of our brain that generates moral intuitions is capable of entertaining arbitrary hypotheticals like that.

Maybe if you take blame avoidance into account, you can restore transitivity to your preferences, at the cost of putting yourself at risk of being blamed for not being purely altruistic and capable of rationally overriding your instinctive preferences when a thought experiment calls for it.

Expand full comment

You are discounting the trail running comment, but I actually think it has a valuable insight in it:

Things that seem like they could be very bad now are often much less bad than we anticipated by the time they come around.

I think you sort of got at this point in a post a long time ago that went back and looked at old catastrophe predictions (I think you focused on acid rain and rainforest deforestation?).

Something that seems really bad now, and that will have consequences going far forward into the future, will in reality often have its edges dulled by time and not be as catastrophic as it seemed.

Some of this dulling is a direct consequence of taking action to minimize the harms, that's certainly true, but some of it is just due to effects not always being as strong in the future.

I don't think this insight/idea is a complete refutation of longtermism, but I do think it's a caution to be careful about being overly hyperbolic about future risks.

Expand full comment
founding

I don't think of the Repugnant Conclusion as a paradox to be solved, but as a proof by contradiction that human morality can't be consistent with itself. Doesn't mean you shouldn't try to be moral. It does mean you can't extend to infinity (or very far even), and that paper shows this even more convincingly.

Expand full comment

"just barely about the threshold" --> "just barely *above* the threshold"

Expand full comment

"This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass quickly becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never ran even a mile unshod."

I completely disagree with this. I run with my dog and she has cut her paws on broken glass on the trail. At first I was angry that thoughtless people were leaving broken glass on the trail and I started picking it up whenever I saw it. After a few weeks of picking up broken glass, I kept wondering who the hell keeps breaking glass in the same section of the trail. Then it dawned on me that the glass was coming up out of the ground as the trail is being eroded by rain. I don't know what used to happen there, but there is a lot of glass in the soil from way in the past. And it is still sharp. So in my experience, the sharp edges of glass do not necessarily erode quickly on trails.

Expand full comment

A barely relevant tangent: a dog cutting its feet makes the hypothetical work better. A child cutting their feet teaches them to wear shoes and watch their footing, while a dog doesn't have those options.

Expand full comment

You die and meet god. He's planning to send you back for reincarnation into the world as it is, but he's willing to give you a choice. His wife has created a world where everyone floats effortlessly in a 72 F environment, being bathed constantly in physical, emotional, and spiritual pleasure. Their happiness is maxed out. But that's pretty much all they do. They don't get jobs and go to work. They don't have opioid crises or wars, but they also don't graduate college, make discoveries, build businesses, or have any accomplishments whatsoever. Sure, they have children, but they're so engrossed in the maximum hedonics game (with happiness saturation negative feedback loops in their brains turned off, so every moment is like the first/best moment) that they never realize it.

He also has a brother who created a world where everyone fluctuates around 50% happiness. Sometimes they're happier and sometimes not so much. However, they also get many opportunities to contribute to their society. Their work has meaning, they have children, and get fulfillment. Nobody is totally miserable, and nobody is totally happy, but on net everyone is halfway between neutral and the wife's world of maximum bliss.

The choice you get is not to be reincarnated into one of those other worlds. You don't get to pick and choose who your god is. But you do get to pick whether you want the world to stay as it is, or whether you want your god to model the world after one of these other two worlds. It's a lot of work designing a world, so you can't just imagine something into existence without a template. You have to pick one of these two templates, or keep the world as-is. You're choosing both for yourself, and for everyone who will ever be born.

Expand full comment

Branching off of this--the Church of Jesus Christ of Latter-day Saints (of which I am a part) teaches that one of the central conflicts that led to Lucifer falling from Heaven (or rather being cast out from the pre-mortal existence) was along these lines--

* The Father's plan (championed by Jesus Christ) was that mankind (who existed at that point in spiritual form and had the agency to choose one plan or another!) would come down to Earth to experience mortality and death and sin as part of learning to make moral choices by our own experience. And that Christ would come to provide a pathway to redemption and return to the Father. Not all would choose that path, and everyone would have both happiness and sadness, joy and misery, health and sickness, etc. That's basically your "god's brother" world.

* Lucifer proposed the "plan" that instead, he would take the Father's power and *force* everyone to do right all along. No misery, no sin. Everyone gets the "good ending" because they don't have any other choice. This is basically your "god's wife's world".

I won't go into all the theology here, but the important thing is that we, everyone who lives or has ever lived, *knew both plans and freely chose the Father's plan*. Those that didn't were (eventually) cast out, becoming Satan and his "1/3 of the hosts of heaven" (whether that's a literal or a figurative meaning I take no position on). We knew, to some degree (whether this knowledge is individual or not, we're not told), what we would face. And yet chose that anyway. All of us did. The veil of forgetfulness was placed so we could make choices here undistorted by memory of the past existence, to truly learn and become.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

The point of the exercise is to spark intuition of what people actually value. I think I've seen Scott defend against the complaint that utilitarian philosophy only cares about happiness and misery by claiming that, no, they're interested in anything that corresponds to 'positive utility' versus 'negative utility', and that the happiness/misery shorthand isn't meant to imply that these are the only things people care about. I feel like this is a motte to the happiness/misery bailey. Or maybe utilitarians strive for the nuanced view of flourishing, but too often miss the mark and only talk along the happiness/misery slider?

That's what it has looked like to me over the past three posts. Scott's depictions using this happiness/misery shorthand seem to gloss over all the important bits that really do make life worth living. Things that interact with the concept of 'misery' in complex ways, not just "-1 util for this month of suffering". What about the 'utility' gained because you overcame some difficult situation and came through stronger and more confident?

The discussion around the Repugnant Conclusion in particular seems to do this almost explicitly. Saying "these people are at 100 utils", whatever that means, implies people who are being bathed constantly in mental, physical, and spiritual pleasure. "These other people are at 80 utils, and they have no impact on the other people," implies no contact with the 100 utils people. Everyone is hermetically sealed off from everyone else. Sounds awful.

"You're still not getting it, this is all just shorthand." Really? What about the discussion of "suppose you have people in hell at -100 utils, now you add people at -80 utils". It really does sound like you're only talking about the happiness/misery slider and ignoring everything else. It's like the old joke where a farmer asks a physicist for help and he says, "okay, start with a spherical cow in a vacuum." I feel like the farmer, asking when we're going to get into the details that matter, but I'm being told that we need to understand the spherical cow first.

Okay, but when exclusively focusing on happiness/misery leads us astray, maybe it's not because we haven't fully understood the oversimplified model. When your model stops conforming to your intuitions of reality, update the model. Add back in friction, atmosphere, grassy fields, and cow-shaped mammals.

In utilitarianism, there's a lot of hand-waving in the direction of messy problems like "free will", "meaning and purpose", "meaningful relationships", etc. But it all gets sucked into the proverbial spherical cow shape and waved away as inconvenient. We'll get to that later - you know, the kind of thing that makes life worth living? - after we've figured out happiness/misery.

Part of why the Repugnant Conclusion doesn't work is because you're assuming all these people of 80 util operate on a single slider of happiness/misery, when life is more complex than that. Making determinations about whether a life is worth living based on number of times smiled, or explicitly measurable endorphin levels, glosses over everything that makes life meaningful. Including, yes, the occasional year or so of prolonged suffering.

Maximizing on happiness alone, you're left with what amounts to a description of traditional concepts of 'heaven' that sound tired and boring. "Want to play a harp on a cloud forever?" No thanks, I'm good. "But you'll never get tired of it!" Okay, now I'm actively opposed to this proposition.

Expand full comment

I agree. And as far as I can tell, *any* actual attempt to apply utilitarianism ends up just focusing on the "happiness/misery slider" as you put it. Because otherwise it becomes *even less tractable*.

My (jaded) opinion of philosophical ethics is that there are three types of theories (with overlap).

1. There are theories that don't tell you anything useful. CF the "Good == Yeah; Bad == Boo" type theories that just say "do whatever you want, there is no actual good or bad."

2. There are theories that can't be applied to any real situation because they're intractable/require too much knowledge/are internally inconsistent. CF Kantianism and "real" utilitarianism.

3. There are theories that produce absurd (at best; usually downright evil) statements about what you should do when you apply them. CF utilitarianism as applied, as well as the bits of Kantian theory you can apply.

And the more you try to work through this and avoid those traps, the less predictive power you get and you mostly add epicycles to fit higher-order polynomials to our native intuition.

Have I mentioned that I'm cynical about the value of philosophical ethics?

Expand full comment

I wonder if you're mathematizing your results a bit too heavily. One of the points Scott was trying to make in the last post was that an intuition he didn't have in the past was that far-off people are equally important as close-up people. Utilitarian calculus helped him get to that point, even if some later calculus leads somewhere absurd. That doesn't make the whole of utilitarianism bad/wrong, just incomplete as a system of thought. Something I'm sure Scott and many utilitarians would agree with.

That said, I also like Kant and think he has a lot to offer. No school of moral philosophy is impervious to extreme examples, and if we've learned anything from a world with about 8 billion people on it, there's plenty of room for extremes in reality - not just in hypotheticals about creating worlds or whatever. I try not to toss any of them out entirely, which is part of why I'm not in the utilitarian camp (or any other). Apply Rawls' idea that you don't get to choose who you are, and a universe of 5 billion all-happy people looks better than one with those same happy people plus another group that is slightly less happy, and so on down the line.

I'm okay with learning from the best that moral philosophy has to offer. But I try to be careful to remember that these theories are incomplete and/or incapable of accounting for some very important factors.

Expand full comment

If you just take the "good parts" of what moral philosophy offers...you end up in the same place as ignoring it entirely and going with intuition and tradition. Because that's how you judge what those best parts are (unless you subscribe to a religious ethical tradition). There isn't another way to do things.

And it's not just extreme cases--as soon as you step outside the banal observations and get down to real situations, *every* moral theory I've ever been exposed to is impossible to apply and/or gives absurd results if you take it seriously.

And I'd say that the intuition that far-off people are equally important as close-up people is, frankly, wrong. At least if by "important" we mean "I should focus on helping them just as much." In fact, it's *evil*. Because you can't help far-away people as well as you can help those around you. So by focusing on the far away, you let real people up close suffer. And it inevitably leads to the inanity that is hypotheticals about infinite universes full of just-barely-surviving people (et al). It's mental masturbation.

Expand full comment

I think this explanation proves too much. First, as a religious person myself, I don't accept the assumption that moral intuitions must come from a religion. Indeed, your (our) own church teaches that this moral reasoning is embedded in all people.

I also don't accept that you can't hone those moral intuitions through moral philosophy. I think if you seek in the best books of moral philosophy you'll find some good insights, without requiring them to be perfect before you're willing to learn something. The impulse to cloister in your own religious sphere and avoid learning from people not of your faith is something your own church teaches against. Maybe those inborn moral intuitions can be sharpened with practice.

I certainly wouldn't begrudge a large number of people attempting to improve their moral performance by whatever tools they have available to them, even if I don't think those tools are perfect. That's like saying, "I can't believe you're not out there sinning! Go sin a bit more until you feel like joining my church." Maybe that's not sufficiently charitable, but I think it's only fair to be charitable to those not of your faith who are trying to do good. Seriously, let's not try to impose a monopoly on righteousness, but encourage it wherever we find it.

Finally, I strongly reject the assertion that caring for those far away is *evil*. Your/our own church has a long-standing tradition of helping people in far-away places, and in organizing Relief Society efforts from people in Provo, Utah to make blankets and care packages for people affected by tsunamis and earthquakes around the world. If I take your assertion that this kind of assistance to people you don't know and haven't seen is bad because "by focusing on the far away, you let real people up close suffer", then I have to assume you're saying you oppose the global charitable efforts of your church in favor of only working within your local community.

I'm going to assume you don't agree with that. Let's be honest, we can walk and chew gum at the same time. We can build better communities at the same time we work to build a better world.

Indeed, I find this framing of caring for the far-away troubling, precisely because I think you have something to contribute to the conversation but that your approach makes it difficult for your contributions to be considered and accepted. Consider instead the following approach:

Q: Do far-away people have moral worth?

A: Yes, but the farther away they are from us, the more uncertain our interventions. You want to be an Effective altruist, right? Then you should focus on reducing the uncertainty of your proposed interventions.

Q: But there are so many potential people in the future! Shouldn't we be really concerned about them?

A: Yes, we should be worried about the future, but we should also be worried about whether our interventions today will make a meaningful difference in the future, including concern about the Law of Unintended Consequences. Effective Altruism should be concerned with ensuring results are concrete, not just philosophically driven.

Q: But the philosophy leads to important insights! For example, have you considered [complicated hypothetical with lots of unrealistic caveats that assume prior knowledge of outcomes/interactions that could never happen in real life]?

A: It's important to ensure we're getting a good signal from the philosophy, and not just noise. When our hypotheticals have to include terms like, "and you KNOW what the result will be" it ceases to be useful. Because some things you cannot know, such that asking the wrong hypothetical can lead you to overgeneralize irrational conclusions.

Expand full comment

> Utilitarian calculus helped him get to that point, even if some later calculus leads somewhere absurd.

I don't see how utilitarianism could prove the point without assuming it.

Expand full comment
Aug 28, 2022·edited Aug 28, 2022

As I read it, Scott's intuition began as, "I would help a drowning child, even if it cost me something personally".

The utilitarian calculation was as follows:

Child's life in river next to me = child's life far away

Therefore, I should extend my moral intuition to help more than just the people I see. It's the willingness to spend money on a far away life that wasn't intuitive for him. (YMMV)

I think there's a step between there and EA that doesn't take a few concepts like uncertainty sufficiently into account. It goes something like this:

Proximal children dying: << 1

Distal children dying: >> 1

Children saved per unit cost: distal >> proximal

Therefore most opportunities for doing the most good are likely distal. Somehow I doubt that means Scott would watch a child drown in the river, justifying the decision because he could use the money saved from not ruining his suit on >1 children in Africa.

Expand full comment

So what’s happening in hell then? Satan in this reading isn't that bad.

Expand full comment

Satan's plan couldn't work and he knew it. It was inherently contradictory (for various theological reasons out of scope here). So now he's just miserable and trying to break everyone else so they're miserable too.

Expand full comment

I read it differently from you. As I read it, Satan's plan was not to "force everyone to do good", but just to "destroy the agency of man". That doesn't require force, nor is the 'force everyone to do good' in the Church's canon. By simply not giving commandments, there would be no righteousness. That works fine, but everyone is stuck in a state where they can't be good because they can't be bad (2 Nephi 2).

Why? Because the objection at the heart of Satan's plan is that given agency people will choose wrong, and this must be prevented. What's he up to now? Two things: 1.) he's still arguing that people shouldn't be allowed choice - you should step in and force others to 'choose right' instead of allowing people to choose even when they might choose wrong, and when they choose wrong you should be there to prove how bad that choice was instead of forgiveness and invitations back to choosing good; 2.) he's proving how bad it can be when you allow people to choose.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

I was simplifying. The point I was focusing on is that Satan's plan (which claimed to make everyone happy by denying them the ability to be unhappy, like the "god's wife's world" in the comment I was replying to) was rejected by people who had full knowledge of it. It was more a "well, I believe that something like that actually happened, and most people chose the world with the possibility of both joy and misery and were glad to do so".

His plan was also inherently impossible for many reasons. Which are out of scope here. And I don't think he's just trying to argue people shouldn't be allowed choice--that's one of many tools he uses. Along with trying to convince people that there *is* no right or wrong, that you can and should just do whatever you want. And trying to convince people that, even if there is right and wrong, God loves us so it won't really matter which one you do anyway ("beaten with a few stripes" and all that). As well as just psychological and spiritual torture and other methods. Because, in the end, Satan isn't motivated by ideology. He's motivated, as I read it, by *desire for power*. His real desire is to supplant God. The rest is all just nice speeches he could use to convince people to give him power and follow him. Tactical maneuvers, not core principles.

Expand full comment

Are you calling him a liar?

Expand full comment

> Unless you want to ban people from having kids / require them to do so, you had better get on board with the program of “some things can have nonzero utility but also be optional”.

This doesn't properly follow. It seems having kids is of unknown utility, and so you can easily hold a position of "we shouldn't require or ban this, because we don't know which of those would be right" while still holding a position of "things with nonzero utility shouldn't be optional (if you want to be a good person)."

Not that I think you're wrong about not having a moral obligation to do all positive utility things, but it doesn't seem incoherent to believe the above.

Expand full comment

One issue with this analysis is that it deals in terms of outputs (e.g., happiness), rather than inputs (e.g., resources, social connections, etc.). But societies deal in terms of inputs. No Bureau of Happiness is questioning people to determine whether they have exceeded their allotted happiness.

The quirks mentioned about historical people wanting to preserve their lives, even though people today might not find those lives worth preserving, arise from this issue. There is no invariant happiness function that applies to present and historical people.

This deficiency also applies to the cross-checking based on depression rates. There appears to be some implicit assumption that the measured depression outputs correspond to some deficiency in happiness "inputs." But I suspect that substantial overlap in inputs exists between the depressed and non-depressed populations. So increasing the inputs of the depressed would likely result in little or no change in the rate of depression. The problem is mental.

Accommodation to health changes provides evidence of the disconnect between happiness inputs and happiness. People with chronic diseases or injuries, such as spinal cord injuries, tend to report a far higher quality of life than one would expect, based on their physical condition. Furthermore, there appears to be accommodation over time, as people adjust their expectations to match their realities. Perhaps the depressed population arises from an inability to make such adjustments?

Overall, the argument appears ill-posed. A small number of people consuming a large number of resources per capita is likely no more happy (or at best marginally happier) than a large number of people consuming fewer resources.

Expand full comment

Have you considered the Buddhist perspective on the Repugnant Conclusion?

My moral intuitions say that creating new people is bad, no matter how happy they are. All conscious life is suffering. People who are unusually happy are just suffering less than usual and deluding themselves. Maybe it's another case where "want" and "like" don't quite match up in our brain, and our survival instincts kick in to deprive us of the best possible outcome i.e. nonexistence. If you abuse the notion of a utility function to apply it to existing people, then every existing person has negative utility. Maybe someone with an extremely blessed life approaches zero utility asymptotically from below, but it's still negative.

If this sounds repugnant to you; well, does it sound more or less repugnant than The Conclusion?

(if we want to go from the tongue-in-cheek-Buddhist perspective into an actual Buddhist perspective, we get the outcome that nonexistence is actually very hard to achieve and if you kill yourself you'll just reincarnate; perhaps birthing a child is not morally bad because the number of "souls" in the world doesn't actually increase)

Expand full comment

If the number of souls doesn't increase, then either there is some huge repository of spare souls, or we are going to run out.

Expand full comment

Google tells me the world has about 10^15 ants, so I don't think we need to worry about running out of souls any time soon.

Expand full comment

Sure, but do ants have souls, and is the number of ants going up or down?

Expand full comment

>Start with World P, with 10 billion people, all happiness level 95. Would you like to switch to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100? If so, why?

I think the main argument would be that the world will naturally come out of equilibrium because people are not clones/drones and have differential wants/interests and abilities.

And as an aside I suspect the top world simply is not actually possible under remotely normal conditions. Too much of human happiness/action/motivation is tied up in positional considerations.
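
As a quick arithmetic check (a sketch that just sums the stated numbers, which is of course exactly the aggregation move being disputed): P beats Q on both total and average happiness, so any preference for Q has to rest on something outside the sum, such as the positional and motivational considerations above.

```python
# Minimal sketch: total and average happiness for World P and World Q as quoted above.
p_total = 10e9 * 95                    # 9.5e11
q_total = 5e9 * 80 + 5e9 * 100         # 9.0e11
print(p_total > q_total)               # True: P wins on totals
print(p_total / 10e9, q_total / 10e9)  # 95.0 vs 90.0: P wins on averages too
```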

>You’re just choosing half the people at random,

Yeah but in the real world this is never how this works. The people aren't chosen at random. The Kulaks are Kulaks for a reason.

>MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself

I suspect it very much is bad, specifically because of what it does to human motivation in a world with differential human wants/interests and abilities.

Expand full comment

> This doesn’t quite make sense, because you would think the tenth percentile of America and the tenth percentile of India are very different; there could be positional effects going on here, or it could be that India has some other advantages counterbalancing its poverty (better at family/community/religion?) and so tenth-percentile Indians and Americans are about equally happy.

Circumstances don't affect happiness. The phenomenon is well-known under the name "hedonic adaptation"; there's no need to postulate offsetting advantages to Indian life, because advantages and drawbacks weren't relevant in the first place.

Saying people who are below the tenth percentile of happiness should kill themselves is not obviously different from saying people who are below the tenth percentile of hip circumference should kill themselves.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Okay, so I get that you're just trying to evoke the stuck-prior-all-communism-bad version of communism, but:

>Communism wants to take stuff away from people who have it for some specific reason (maybe because they earned it), and (according to its opponents), makes people on average worse off.

I want to point out that this is almost literally the exact same argument that communists make against capitalism:

>[Capitalism] wants to take stuff away from people who have it for some specific reason (maybe because they [did the labor to create it]), and (according to its opponents), makes people on average worse off.

And really, "doing the labor to create something" is just a more concrete version of "earning" something.

This almost makes the original statement bad faith, because it almost-but-not-quite claims communists specifically want to take from people who have earned what they have, when the whole ideology of communism is based around the idea that redistributing wealth away from capitalists is just because they *didn't* earn their (large) share of the profit from their meager (or nonexistent) contributions of labor.

Like, I get it was supposed to be a snarky aside, but come on. That's practically a strawman, it makes the whole argument weaker. There are much better criticisms of communism, you've got plenty to choose from.

Expand full comment

I'm sure this has been thought about, but it seems obvious to me that I should care that my action 10 years ago harmed an 8 year old child, and it doesn't seem like that requires me to prefer a world with 10 billion happy people to a world with 5 billion happy people.

All you need to do is say that, once a person comes into existence, they matter and have moral worth. Before that person comes into existence, they don't.

A corollary of that is that if I expect my actions to affect people who are likely to come into existence in the future, I should care about that, because by the time my actions affect them, they will exist and thus matter. So, if I've saved $200,000 for my future child's college fund, and am trying to have a baby, it would be wrong of me to blow that money in Vegas because doing so would likely hurt a person who is likely to exist. Deciding not to have a child and then blowing all my money in Vegas, on the other hand, isn't a moral affront to my potential child.

If I'm deciding whether to pick up broken glass (or, you know, broken superglass that never dulls), if I think it's likely that some day a child will run along the path and step on the glass, I need to care regardless of whether the child is born yet. But that doesn't need to imply that it would be a good act to produce more children.

This also feels like it allows for normal moral intuitions about stuff like global warming, while not directly requiring that the survival of the human species be prioritized over the well-being of people alive now or in the near future. We shouldn't make the planet extremely unpleasant to live in for the next 200 years, because there are very likely to be people then and that would suck for them. We shouldn't create swarms of autonomous murderbots that will wake up and start murdering in 150 years, after we're all safely and comfortably dead, because there will likely be people then who won't want to be murdered.

But, if the RAND corporation calculates that there's a 0.1% chance of a nuclear exchange with China causing human extinction, and that a preemptive strike by the US would lower that to 0% but would cause the deaths of 4 billion people, we don't need to go ahead and do the first strike in the interests of the quadrillion future denizens of the galactic empire.

Expand full comment

On the answer to Blacktrance:

>If playing Civ and losing was genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn’t feel true, but I think this is just because I’m bad at estimating utilities and they’re so close together that they don’t register as different to me.

1. I think Blacktrance doesn't mean that both options have the same utility - it's likely a reference to Ruth Chang's idea that options might be "on a par" and that some things might be incommensurable, which helps Chang attack the Repugnant Conclusion (RC): <https://philarchive.org/rec/CHAPIC-4>

2. Even if you don't buy the "incommensurability thesis", if you admit that you're *uncertain* when estimating utilities, I think you might be able to justify a partial "escapist position" - because each step of the argument leading to the RC will be increasingly uncertain. So even though you might concede something like "for any world w with a population n living in bliss there's a world w' with a population m whose lives are barely worth living", you might still be able to deny that any two precisely described worlds satisfy the conditions of w and w'.
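
One crude way to picture point 2 (a sketch with invented numbers; it also assumes the errors at each step are independent, which they need not be):

```python
# If each step in the chain A > A+ > B > B+ > ... is judged correct only with
# probability p, confidence in the conclusion after n steps decays like p**n.
p = 0.9                      # assumed per-step confidence
for n in (1, 5, 10, 20):
    print(n, round(p ** n, 3))
# 1 0.9
# 5 0.59
# 10 0.349
# 20 0.122
```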

Expand full comment

A potential solution to the RC (and maybe utilitarianism as a whole) is to view civilization rather than individual people as the moral patient. In the metaphor of the human body, it is not mass that makes a good person. Having happy, healthy cells everywhere in your body is good, but just adding a bunch of extra otherwise healthy fat cells until you're morbidly obese is not improving things. Unless you fancy Baron Vladimir Harkonnen's aesthetic. More power to you, I suppose. Anyway, when I view myself as the civilization I'm a part of and ask "should I grow more people?", the answer doesn't make sense without considering the current state of the civilization. Are there too many people and there's starvation? Well, that doesn't make me feel like a very healthy civilization, so no, there probably shouldn't be more people right now unless they know how to make more food. Is there lots of work to be done and plenty of resources to do it and just a lack of people to be doing the work? Then yeah, absolutely there should be more people. When asking questions about what is best for our civilization, what we want for our civilization should be considered, not just what we want for individuals in it.

Expand full comment

These thought experiments seem to confuse the issue by combining two unlike questions to create nonsense. Creating happy vs. unhappy people is a moral question. How many people to create is a practical question. Practical, as in, it matters how many people are needed to ensure continuation of the human species, or how many people are required to make a functioning community, but I don't have any moral intuition that more people is better than fewer outside of these practical questions. I don't see why there would be. Do others have this moral intuition?

The thought experiment would make more sense to me if it were explicitly a question of how much we would be willing to trade off happiness for higher chance of survival, or something like that.

Expand full comment

Many people have intuitions that imply it is better for there to be more people. This conclusion can be plausibly derived from the intuitions that an individual's existence has value to that individual (not just to "the human species") and that this value should be taken into account when comparing the goodness of possible worlds.

If you don't share either of these intuitions (or are willing to abandon them upon scrutiny), then rejecting the RC is fairly straightforward. It is more difficult to come up with a consequentialist ethics that preserves these intuitions but rejects the RC.

Expand full comment

I don't follow the connection between my existence having value to me (and presumably the same for others) and valuing the pure number of people. An individual's existence can have no value to them until they exist.

I guess the intuition I'm stating is, if I were to create a world full of equally happy people from thin air, I see no difference between populating it with 1 billion or 100 billion people, except in as far as that makes a practical difference to the people living in that world.

Expand full comment

I agree with your stated intuition here, but there are plenty who disagree and would regard it as intuitively obvious that the world with 100 billion people is preferable. I don't think I can do a good job of representing their views fairly, so I'll leave it to them to explain further if they wish to do so.

If nobody else chimes in here, you might want to take a look at the explanations on this page, which sets out the main viewpoints and objections quite clearly:

https://www.utilitarianism.net/population-ethics

Expand full comment

> In the Repugnant Conclusion, we’re not creating a world, then redistributing resources equally. We’re asking which of two worlds to create. It’s only coincidence that we were thinking of the unequal one first.

This sounds a bit Motte-and-Bailey-ish. Yes, I understand that in the analogy, we are creating a world from scratch. But the overall scenario is about devising policies that we should apply to our current, existing world; and this is where the "equalizing happiness" technique breaks down. Ultimately, either the analogy is not analogous, or you are advocating for seagull-pecking communism.

Expand full comment

Agreed. All of these hypotheticals come across as utterly irrelevant because we *can't* create new worlds ex nihilo. All we can do is make small alterations on the world we are in. The future depends on the past (in complex and uncertain ways). Everything is path dependent.

This goes for policy and culture--as we saw with the fall of Soviet Communism, you can't turn a communist state into a "proper western democracy" simply by substituting the economic and political system of a mature "proper western democracy". And if the US adopted the <insert sphere here> policy of <insert country here>, the results would not be the same. Likely not even *similar*. Humans are not blank slates waiting to be overwritten by policy-makers. History matters, every step of the way. Every twist and turn. So these postulated worlds are sterile, being created without that necessary backdrop.

So my response to all of this (and 99% of most "moral philosophy") is to say "Purple monkey dishwasher. That's as profound as anything said here."

Expand full comment

>Agreed. All of these hypotheticals come across as utterly irrelevant because we *can't* create new worlds ex nihilo.

The point is not "should we do this [create a particular world]?"

It's a thought experiment used to criticize utilitarianism. If consistent application of utilitarian logic/principles (allegedly) leads us to very counterintuitive or generally repulsive conclusions, then this suggests there might be something wrong with utilitarianism that would cause us to inadvertently make morally incorrect choices about things we can actually do.

If utilitarian reasoning is valid, then why would it endorse a type of world many people find undesirable over one they find more desirable? Either our intuitions about what is good/desirable/bad/undesirable are wrong, or something about utilitarianism is wrong (or, we're actually wrong about utilitarianism endorsing the repugnant conclusion).

Expand full comment

> Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend that money saving other people’s children (eg donating it to orphanages, etc).

Wait, you do? Why? Imagine that I'm reasonably rich, have a stable marriage, and both my wife and I have good genes (and we can do genetic screening to ensure that they are passed to our offspring). Our children thus have a high probability of growing up happy and well-adjusted, with a happiness level of, say, 0.9. Why should I instead spend my resources on pulling up orphans from 0.1 to 0.2?

Expand full comment

I also prefer the world of 5k people in epic galaxy-wide civilization with non-feeling robot assistants over the world with just humdrum lives of 80k people. I feel like I haven't seen good discussion of this moral intuition elsewhere. I feel like it could be valuable to try playing a game with friends where you all pretended you were unborn souls in a pre-life waiting room, and had to assemble a collection of 'possible lives in possible universes'. Maybe the rules would be you had to live every life in your final collection, or maybe you would get randomly assigned one of them. But the idea would be to explore how people valued different sets of possible lives. How willing would you be to accept a particular negative life in exchange for also adding a particular positive life to your collection? Where would different people's intuitions set their balance points for different pairings of good/bad? Seems interesting to explore.

Expand full comment

> many AI-risk skeptics have the view that we're decades away from AGI, so we don't need to worry,

I keep seeing this point brought up, but by now, it is starting to sound like a strawman. The key point of AI-risk skeptics is not that AGI is decades away; it's that right now no one knows where to even begin researching AGI (in fact, some people would argue that AGI is impossible, though I don't hold that view). You cannot extrapolate from "we don't even know where to begin" to "so, obviously it's a few decades away"; at least, not if you're being honest.

On top of that, many AI-risk skeptics (myself included) disbelieve in the possibility of the existence of omnipotent (or just arbitrarily powerful) entities, and this includes AIs along with gods and demons and extradimensional wizards. This turns AGI from an unprecedented all-consuming X-risk to merely good old-fashioned localized X-risk, like nuclear bombs and biological warfare. Which means that we should still keep a close eye on it, of course; but there's no need to panic.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Why would AI need to be omnipotent to pose an x-risk? And saying that you disbelieve it doesn't make an argument for it being implausible.

Expand full comment

Well, like I said above, every human technology poses some level of x-risk. Nuclear fission obviously does. So do fossil fuels, genetic engineering, and yes, even AI. But the AI-risk community does not merely claim that AI is dangerous; they claim that AI is so singularly uber-dangerous that stopping it must override all other concerns; and they justify this conclusion by assuming that the AI will be able to achieve essentially omnipotent powers overnight. Yesterday, it was running in a data center fielding tech support calls or whatever; tomorrow, you wake up, and half the Earth had already been converted into computronium.

I believe that the laws of physics prohibit this scenario from happening, regardless of whether you posit the evil genie to be AGI, a demon from Phobos, or a literal evil genie. I wrote down some of my reasoning in my FAQ:

https://www.datasecretslox.com/index.php/topic,2481.0.html

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

Thanks for the link! I tried to argue against fast AI takeoff a while back, but I wasn't able to make my points nearly as succinctly as you have. For the reasons you've stated, I doubt that x-risk-level AI takeoff will ever be plausible.

Personally, though, I'm not quite as bearish on human-level AGI within the next few decades. The problem is, even though it currently looks like it would take many qualitative advances for AI to reach human-level capabilities, I don't think we can discount the possibility that human intelligence is orders of magnitude simpler than it seems.

After all, assuming materialism, there's a steep but finite upper bound to the computation necessary to run a human mind (see the Biological Anchors post). Given how poorly we understand human intelligence, there's a non-negligible possibility that the computations could be reproduced in silicon far more efficiently than in our slow synapses.

Of course, this argument does not answer how difficult it would be to train such an optimized-brain AGI, compared to the hundreds of millions of years it took for vertebrates to evolve into humans. But overall, I wouldn't be more pessimistic than a 2% chance of human-level AGI in my lifetime.

Expand full comment

It's not just about training time (and perhaps I should add another item to the FAQ, since this keeps coming up). To use an extreme example, a heavy boulder has 10x more atoms than a computer; but a rock will never achieve AGI. A skyscraper full of Super Nintendos has lots of transistors, but you can't play Grand Theft Auto IV on them, and they won't achieve AGI, either. You need a specific structure to perform specific tasks, not just raw mass; and right now, no one knows how to configure all those transistors into a suitable substrate for AGI.

Expand full comment

Hmm, that falls in line with my own suspicions, but I'm not sure if it truly follows. Here, I'll use "GI" to refer to any kind of human-level general intelligence. The human brain is an example of GI, and it has lots of specialized structures; this shows that the substrate "carbon-based lifeform + X synapses + Y specialized structure types + Z years of training" (for sufficiently large X, Y, Z) is sufficient for GI. However, you assert that specialized structures are necessary for GI, which seems to me like a much stronger claim. Given that we don't have any other examples of GI substrates, how can we be sure of claims about what is necessary?

As you've pointed out, one thing we can do is look at substrates that are clearly insufficient for GI, to place a lower bound on what is necessary. The heavy boulder cannot store any variable state at all. The skyscraper of Super Nintendos is a bit trickier to conclusively refute, though, assuming each console is powered on and wired up to neighboring consoles. NASA's VAB is the largest skyscraper by volume at 3.665⋅10^6 m^3, so it can fit about a billion SNESes at 3510 cm^3 each (if we remove everything else in the building).

This comes out to 2.33 PiB of RAM (including VRAM), which is either more or less memory than the human brain can store, depending on which pop sci article we look at. The main thing preventing regular GTA IV gameplay would be the abysmal speed of the memory and communication. But no definition of GI that I know of sets a requirement on how fast it must be. So could we actually run a slow AGI on a VAB-sized matrix of Super Nintendos? It depends on how space-efficient it can be in silicon, which is up to speculation.
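
The packing arithmetic above checks out as an order-of-magnitude figure. A minimal sketch (it ignores power, cooling, and cabling, and leaves the memory total out because that depends heavily on which of the console's memories you count):

```python
# Minimal sketch of the console-count arithmetic above.
VAB_VOLUME_M3 = 3.665e6        # NASA Vehicle Assembly Building, as cited above
SNES_VOLUME_CM3 = 3510         # per-console volume, as cited above
consoles = VAB_VOLUME_M3 * 1e6 / SNES_VOLUME_CM3   # convert m^3 to cm^3
print(f"{consoles:.2e}")       # ~1.04e+09, i.e. about a billion consoles
```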

Expand full comment

Right, that was kind of my point. You have (hypothetically) wired up all those Super Nintendos together (ignoring heat dissipation, power requirements, signal interference, etc., but ok), and you're saying, "look, this thing has way more RAM than the human brain, therefore it can achieve GI". But this is equivalent to saying, "look, I gathered up all those massive rocks, wired them together with coat hangers, and they have way more mass than the human brain, therefore they can achieve GI". Mass or transistors or gigabytes of RAM by themselves are not enough; you need to execute a specific kind of computation in order to solve a specific computing problem. We know how to run all kinds of cool computations, such as GTA IV, and we could totally run it on the Very Large SNES array (assuming it were physically possible to construct one, which it likely isn't, but still).

But the problem is that we have no idea how to compute human-level intelligence. It's not a matter of throwing more CPUs at the problem, because we don't have the software to run it. We *do* know that the human brain is wired up completely differently from modern computers, but we don't know enough about how it works to emulate it in software. We have developed some pretty advanced pattern-matching engines, and they can do impressive things, but we know that pattern-matching is *all* they can do -- and general-purpose problem solving requires much more than that... we just don't know what, at the moment. People are working on this problem, of course, but right now the answer is nowhere near to being in sight.

Expand full comment

It's an extraordinary claim, so the burden of proof is on the AGI promoters.

Expand full comment

In a successfully established repugnant-conclusion society, what happens if I take myself hostage? The extreme case is "I'm going to kill myself if someone doesn't give me a cheeseburger right now", but in general what's the response if the typical person who has been allotted 0.01 utils demands more, declaring that they will 'exit' unless the ambivalence point is set one full util higher?

The typical definition of where the ambivalence point is set declares that this axiomatically won't happen, but that ignores that people within such a society can *see* the resource allocation system and bargain accordingly - collectively so, if useful. There's an instability when the needs that are being met can be responsive to the effort spent meeting them.

(There's a boring answer that the repugnant society could allow such people to select themselves out of the population until the problem goes away, but tailoring your population to fit your society is trivially applicable to any model and if you're ok with that you needn't have gone to as much effort.)

Expand full comment

I believe that there are things that need to be clarified in the Repugnant Conclusion, at least in the Wikipedia article (I have not read the book).

The ordering of happiness implies that we can assign a happiness score between -100 and 100; that does not mean we can add them or take averages. It is not clear to me that the process would be "linear" – perhaps the happiness of the new population converges to e.g. 70 and not 0?

Also, it seems that the process of obtaining a new population with lower happiness can be simplified. Let's start with a population of 1 billion individuals of happiness 100 and another empty population. We can keep adding individuals of happiness 0.01 to the second population until its "total happiness" is larger than that of the first one.

But that only works if happiness can be readily added and averaged (just like in the original argument). Maybe the second population never reaches the first one and converges to something much lower?
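
A toy illustration of that "maybe it converges" worry (a sketch in which both aggregation rules are invented purely for illustration):

```python
# Rule 1: plain summation. To overtake 1 billion people at happiness 100
# (total 1e11), you need more than 1e11 / 0.01 = 1e13 people at 0.01.
first_total = 1_000_000_000 * 100
h = 0.01
print(first_total / h)               # 1e13 people suffice under plain summation

# Rule 2: a diminishing-weights aggregation in which the k-th added person
# contributes h * r**k. The aggregate of arbitrarily many 0.01-happiness
# people then converges to h / (1 - r) and never approaches 1e11.
r = 0.999999
print(h / (1 - r))                   # ~10000
```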

Expand full comment

> MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself that you would be willing to make the world worse off on average just to avoid equality.

I don't think that quite covers it. Raising up the lower parts of the population to be equal to the top would satisfy egalitarianism, but would not bother someone whose primary concern was maximizing the level of the top (an inverse Rawlsian?). MacAskill wants to BOTH raise the bottom & reduce the top.

> Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend that money saving other people’s children (eg donating it to orphanages, etc).

Wouldn't that depend on how happy your children are expected to be vs the children you could save? The insight of EA though is that lives are cheaper in the third world, and you can save them for much less than it would typically cost to raise one here.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

8) is resolved very simply. You are not special. So the chance of pulling one particular red ball out of a box of 100 nonillion balls is infinitesimally small. Unless all the balls are red. Then the chance is just 1; every ball is a red ball. The fact that you are here, towards the "start" (we hope) of human history doesn't tell you anything about where you fall in that distribution because *anyone* could ask that question.

You are not some special red ball that needs its occurrence explained out of the population of 100 nonillion.

Expand full comment

> Several people had this concern but I think the chart isn’t exponential, it’s hyperbolic.

I look at that chart, and think it's logistic.

Mathematically, I do not think it's possible for that chart to have a singularity. I'm certain of this if you look at the population chart instead: it cannot be a function that reaches infinity in a finite amount of time, because (even if we were to assume an infinite universe) however fast you grow the population, even if you keep increasing the growth rate and the growth rate of the growth rate, you cannot make the jump from "zillions of zillions to the power zillions of people" to "literally infinitely many people" without infinite time to do it. A population singularity is mathematically just not possible, and so God has cancelled it I guess: https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/

And if you assume that each of the finitely many people will contribute a finite amount of wealth per year, however large that may be, then GDP cannot become infinite in finite time either.

More practically speaking, if there are only around 10^80 atoms in the observable universe or something, then that sets an even harder limit on growth.

But back to my claim it's logistic. If you look at the first part of a logistic function or "S curve" then it can sure look like an exponential, but at some point it flattens out and then converges to a finite limit. World population growth is estimated to do that, so I think GDP will follow - even if productivity were to remain constant, which it's apparently not doing (something something is science slowing down, low-hanging fruit, cost disease, moloch etc.)

Just like the curve of a spread of a disease (or a meme) in a population can look exponential at first, but even in the worst case can't reach more than 100% of the population, I think GDP will at some point meet the limits of our finite-resources planet, possibly long before we get to colonise other planets in which case those 10^70 or something potential future people are mostly never going to happen.
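
For anyone who wants the distinction concrete, here is a minimal sketch (parameters are arbitrary) of the three curve families being argued over: an exponential never hits a finite-time singularity, a hyperbola does, and a logistic looks exponential early but saturates.

```python
import math

def exponential(t):            # unbounded, but finite at every finite t
    return math.exp(0.05 * t)

def hyperbolic(t, T=100):      # blows up ("singularity") as t approaches T
    return 1 / (T - t)

def logistic(t, K=1000):       # looks exponential early, then saturates near K
    return K / (1 + math.exp(-0.1 * (t - 50)))

for t in (10, 50, 90, 99):
    print(t, round(exponential(t), 2), round(hyperbolic(t), 3), round(logistic(t), 1))
```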

Expand full comment

A graph is judged on what it looks like now, not its potential future. It’s a mathematical construct, so atoms don't count. A virus might be exploding exponentially to begin with, but it can't forever; nevertheless, in the exponential phase it's considered exponential.

Expand full comment

I like the notion that increasing population delivers diminishing marginal returns on both the individual and group level. And I also like the introduction of AI as a way of highlighting the fact that some actions deliver utility by being performed but not necessarily by being experienced or enjoyed. For example, lab work to understand the universe is good but not primarily because it's pleasant. If the goal of life is to have meaning relative to some group, an overly large group could decrease one's own feeling of significance. The live-ability of lives, therefore, may not be entirely disjoint from one another.

And, of course, larger groups are more robust to all kinds of stressors. And many people will elevate survival of the group or species above other considerations like qualia.

If there were some ideal number of human beings such that sensory pleasantness wasn't the primary limiting factor on having a liveable life but meaningfulness was, I could see us agreeing on some kind of equilibrium "best population" without necessarily descending into some kind of physical hellscape.

And as I think was mentioned, "meaningfulness" might be one more counterbalance against a more populous future, arguing in favor of present significance. The other being, as mentioned, uncertainty of future results.

Expand full comment

The repugnant conclusion is irrelevant to policy because to do policy we must reckon with the coercion of the population as it exists now -- not merely the selection of unrelated populations out of a hat. You can make this argument even on utilitarian grounds. Suppose the repugnant conclusion is indeed inevitable, given its stated axioms. Suppose population A is a small population with high utility and Z is a large population with small but positive utility. Suppose u is a function that gives the total utility of a population. Suppose c is a binary function that gives the cost, in lost utility, of transitioning one population so that it has the same number of members as another and the same utility distribution. The repugnant conclusion roughly says u(Z) > u(A). Even so, it could be true that c(A -> Z) > u(Z) - u(A). Intuitively speaking, this is sensible because people's utility is not memoryless. If you take away a bunch of people's money, they will be sadder than if they never had it in the first place and engage in various utility-destroying actions. Actually you might not ever be able to even make A's utility distribution exactly identical to Z's without expending crazy effort to eliminate a bunch of small rounding errors or unintentionally reducing the population.
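
A toy version of that inequality, with numbers invented purely for illustration:

```python
# Even if Z beats A on total utility, the one-off cost of coercively
# transforming the existing population A into Z can swamp the gain.
u_A = 5e9 * 90            # population A: 5 billion people at utility 90 -> 4.5e11
u_Z = 100e9 * 5           # population Z: 100 billion people at utility 5 -> 5.0e11
c_A_to_Z = 2e11           # assumed transition cost (disutility), made up

gain = u_Z - u_A          # 5e10
print(c_A_to_Z > gain)    # True: the transition destroys more than it creates
```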

As a separate point, there are just critiques of utilitarianism. It's not obvious to me why u(Z) should be equal to \sum_i z_i where z_i are the utilities of all the individual people in population Z. For example, the cultural and technological heights enabled by the largest z_i should count for something. Maybe alternatively the utility should depend on the beholder. As a modern-day human, maybe I don't care so much to perpetuate a society where all the people have been reduced to epsilon-utility drones, but I care very much about having more people like me + replicating society as I know it. On the flip side, maybe there are even paradoxes, where I don't care for societies that are unrecognizably awesome and everyone is happy all the time.

Expand full comment

How do effective altruist initiatives to fight global poverty affect the expected longevity of the political regimes under which the global poor reside?

Expand full comment

It seems like most of the problems with utilitarianism only show up when you start applying powers that no mortal should ever have.

Is the solution some kind of "humble utilitarianism", in which you strive to do the greatest good for the greatest number within ordinary mortal human constraints, while also committing not to do certain things like swapping the universe for a different universe, or creating simulated humans, or killing anyone?

Be nice, give to charity, save children from ponds, don't murder that random dude to redistribute his organs, and definitely don't accept any offers from godlike entities to replace the whole universe with a different universe, because that's not your prerogative.

Expand full comment

None of that last paragraph needs utilitarianism or even benefits from it. Like most moral theories, utilitarianism is either banal and tells us nothing new or tells us evil things when it goes beyond the banal.

Expand full comment

You're right, everything in the last paragraph is exactly what any other sensible ethical system would tell you. This is a good thing! All ethical systems should basically agree on most everyday situations.

The point of "humble utilitarianism" would be to allow people whose moral intuitions tend towards utilitarianism to re-derive ordinary morality from utilitarian principles. They'll be intellectually satisfied, they won't need to obsess over the ethics of weird sci-fi scenarios, and they won't kill anyone for their organs.

Expand full comment

I second DangerouslyUnstable's comment that the discounting horizon for long-term effects is MUCH MUCH shorter than for short-term effects, somewhat counterintuitively.

**We cannot trust any long-term calculations at all**

Scott, you actually gave an example of it, implicitly: "adopt utilitarianism, end up with your eyes pecked by seagulls some time down the road".

Other examples include nearly all calamities:

- The pandemic destroyed a lot of inadequate equilibria.

- The Holocaust resulted in Jews having their own state, for the first time in 1900 years.

- The comet that killed dinosaurs helped mammals.

I propose an inverse relation:

Prediction accuracy is inversely proportional to the exponential of the time frame of interest. That is much, much more brutal than exponential discounting, but it matches everyday experience, mostly because of the unknown unknowns.

To say it slightly differently:

**Longtermism gets its eyes pecked by black swans**

Expand full comment

Genuine question: why bother with ethical systems if we test them against our intuition? If we do that, don’t we implicitly assume intuitionism?

Expand full comment

Ethical systems matter specifically in the cases where our intuitions fail, so we can adjust our intuitions.

Expand full comment

But how do we know our intuitions are failing? Or that they aren’t failing when we evaluate an ethical system using it?

Expand full comment

May not have phrased it quite right.

Say your ethical system generalizes meta-ethically as consequentialism, specifically a version of utilitarianism, which leads to the repugnant conclusion. That is not what our intuition would call a moral good. So you go back and see whether and how you can abstract the ethical intuitions that work in non-edge cases into something that does not fail in quite the same way.

Expand full comment

I know I seem pedantic, but to me this seems like we are just trying our best to build an explicable or symbolic ethical system which will eventually map onto our intuitions. If so, this is more of a cognitive scientific project, and doesn't seem like it can be instructive for determining what is right or wrong.

Expand full comment

I am not a moral realist (or a realist in general), so I don't think "determining what is right or wrong" is a meaningful statement in our world. And yes, it's a "cognitive scientific project", figuring out which human ethical intuitions can generalize and how.

Expand full comment

I can get with that. I don’t mind this, all I can say is that my conclusion is that using any of these frameworks to prescribe behavior is, therefore, nonsensical.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

I think the goal is to find some foundational moral principle that aligns with our intuitions but has a low Kolmogorov complexity. We could just say "morality is whatever people intuitively feel is right," but that's hard to operationalize when people disagree or lack intuition about a scenario.

And it would also feel weird if the fundamental nature of what is right and wrong just happened to *exactly* line up with what human intuitions say. In other realms of inquiry like physics, we can observe how objects behave and what properties matter has using our own experience, but it would feel weird to then just give up on physics and just say "physics is axiomatically defined as the behavior of objects and matter that we observe." This may be superficially true, but we have an intuition that there's some deeper driving principle behind what we observe, so we do things like try to formalize our intuitions with equations and formulas, ultimately getting surprisingly simple expressions like F=ma that elegantly describe a lot of what we intuitively think about how physics works.

You may object that physics is more substantive and precise than ethics, but note that F=ma later turned out to be wrong - it was a mere simplification that only held under certain conditions. But it nevertheless was a useful exercise to formalize our observations into a simpler, more elegant form.

Expand full comment

But why would that be weird? Why would we expect ethics to be separable from human intuition?

Maybe we want to find something systematic which underlies our ethical judgments? But again why would we expect these to be explicable using some symbolic system? Like everything in the human brain appears to be, it might be some non-symbolic system which only looks symbolic when you squint, and varies in its specifics from person to person.

I guess I am skeptical of any such project, and any conclusion I don't like is just evidence that yet another symbolic ethical system can only approximate but never exactly be the human ethical system. I don't see why this wouldn't be true, because after all, I'm already assessing all systems with my intuition anyway.

Expand full comment

Intuition is short-range; if you want to fulfill an intuitive good that's well over the horizon, you'll need to create a system that can replicate the result of your intuition over long distances. Like, if your intuition is to save a drowning child, and you know one will be drowning at a certain time, you want to make sure your actions allow you to get there on time, and not ten minutes after they've died.

Expand full comment

One of our intuitions is realism...

Expand full comment

Intuitions do indeed exist

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

"But in the end I am kind of a moral nonrealist who is playing at moral realism because it seems to help my intuitions be more coherent."

Well said. In the grand scheme of things I'm a mistake theorist; I think Hume had it right in 1739 that you just can't derive an ought from an is.

But in a certain sense I'm a moral realist; I believe that moral truths genuinely exist in the world *as we perceive it*, because evolution built that into our perceptions. Unfortunately, that means that extensions of our moral intuitions far from our everyday existence often go wrong. Evolution only selected for whatever muddle of utilitarianism, deontology, and virtue ethics would give the right answer (in terms of increased fitness) under the conditions its selectees were actually encountering. And of course no version of that muddle can have any ultimate claim to normativity.

Expand full comment

This comment is directed at a different level of the discourse: Am I the only one who missed that the original post was a "book review" not a "your book review"? I only realized upon seeing the Highlights From, today. The whole time I was reading the review I was thinking, "This seems like the best book review in the contest so far, but I wonder if that's only because it has more jokes in it rather than actually being more insightful."

Expand full comment

I don't think the "sadistic conclusion" is uniformly counterintuitive. Consider a large utopia. Either create 1 person living a dull grey life slightly below neutral, or 100 people living dull grey lives slightly above neutral. Option 1 seems possibly quite a lot better. Maybe. Even if the 1 person has utility -0.00000001 and the 100 each have utility +0.00000001, the difference is one dust speck, practically ignorable.

Expand full comment

You are starting to sound an awful lot like a satisficing consequentialist.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Goodness me! I was just discussing this with friends over moderately good wine and some mediocre food last night. I'm a late Boomer, but I see that GenXers have largely moved into positions of corporate power (we've got some Boomers in political power, but a lot of their policy wonks are GenX and Millennials). We old farts agreed that it's up to the next generation to solve their problems. We solved ours—imperfectly for the most part—but ultimately we improved on what was left to us by the early Boomers and the Greatest Generation. The worldwide standard of living is still higher than it's ever been. We're not seeing the regular famines we were seeing in the late 20th Century. We've had a big COVID-19 dislocation, and life expectancies worldwide have dropped somewhat—but if SARS2 follows the pattern of previous plagues, we're seeing the light at the end of the tunnel.

Global warming? The fact is that AGW has been at the lowest of the low end of the predictive models. We might see a 1-1.5° C increase by 2100. Antarctica froze when average world temps were 6° C higher than they are today. So I don't see my grandchildren or great-grandchildren having to worry about massive sea-level rise for the rest of the 21st Century.

At current rates, world population will be plateauing at about 11 billion people around 2110 to 2120. I can't do anything about that except to tell all the youngsters to use birth control. What can I say? The world will soon be beyond my ability to offer any solutions. Good luck to you youngsters, but I'm optimistic you'll muddle through, just like we did.

Expand full comment

What if we modified humans to make it impossible for them to suffer, thus making the distinction between a +100 world and a -100 world completely meaningless? Both happiness and misery are a mere interpretation of external stimuli by a neural network, so what if we just hack the neural network to always report "happiness" no matter what?

Yeah, I know, this makes for shitty science fiction but this seems like the most obvious conclusion to the entire utilitarian debate in the next few thousand years.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

My answer to the Doomsday Argument prior is simply that we do, in fact, have enough evidence to overcome the prior (cf https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common).

(I do still think extinction soon is plausible, but that's because the piles of other evidence point that way more than it is because of the Doomsday Argument prior.)

Expand full comment

Came here to post this. I think the "Strong Evidence is Common" argument is the correct answer.

Expand full comment

"It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence")."

Animals outnumber us a zillion to one and most of them suffer most of the time. Therefore Earth probably has negative average utility. If we don't accept the anti-egalitarian conclusion, we should probably be paying Yog Sothoth to devour Earth and put them out of their misery, since this would both raise the average utility to zero and equalize utility.

The anti-egalitarian conclusion seems like it might be the least bad of the bunch above. Would we really rather a million serfs in the 1500s were slightly better fed, instead of having all that renaissance art and science?

I feel like there's some value in the aggregate capabilities of humanity and this is separate from the value of hedonic states. Utilitarianism is incomplete if it only cares about the latter. If I had to choose between a trillion rats on heroin, or a supercomputer that can solve grand unified theory, I'm choosing the latter.

Expand full comment

I suspect that something similar to the Maxwell's demon issue in thermodynamics is going on with the repugnant conclusion.

Namely, if you don't think about how Maxwell's demon operates, the second law is violated! But if you treat the demon as part of the system, careful analysis shows it can't work without acquiring and erasing information, and accounting for that preserves the second law.

I suspect the "imagine a world" intuition pump in the repugnant conclusion is hiding something analogous. OK, so world Z has trillions of near-suicidal people in it. Why?! Who or what is keeping them from working to improve their lives? What's imposing these conditions? The evil of that system is pretty plain, and surely must count negatively toward the total utility in world Z.

Another case: lives aren't just selected into existence. Choices are made about creating them. Abstract selections of worlds A, A+, and Z aren't how it works. Again, when you assess the choices involved, who is making them? Is life creation in world Z centralized? Why aren't these trillions of people allowed to exercise their own choices about creating new lives? Are they all in total agreement? What's enforcing that orthodoxy?

In our world, the utility of having more children varies dramatically, and is exhausted very quickly. Heck, having to buckle a third car seat in the middle of a small car for a couple years is apparently more trouble than an extra human life is worth. Birth rates historically are very near replacement levels. Actual people when making choices in real conditions very regularly actually decide to have fewer children than is biologically possible.

I suspect careful assessment of real mechanisms would yield the same flavor of result as for Maxwell's demon: the kind of totalitarian oppression required to produce the repugnant-conclusion universe does enough evil to much more than balance out the trillions of barely-worth-living lives of its inhabitants.

Expand full comment

>So maybe we could estimate that the average person in the Repugnant Conclusion would be like an American who makes $15,000

Given that global GDP per capita is about a third lower (roughly $11,000), does that mean such a world would be better than the one we live in now?

Expand full comment

Utilitarians seem not to take into account the darker pleasures that constitute “happiness” for many people. Case in point: The well-known US Calvinist theologian and philosopher Jonathan Edwards (1703-58). According to Edwards, God condemns some men to an eternity of unimaginably awful pain, though he arbitrarily spares others – “arbitrarily” because none deserve to be spared:

Natural men are held in the hand of God over the pit of hell; they have deserved the fiery pit, and are already sentenced to it; and God is dreadfully provoked, his anger is as great towards them as to those who are actually suffering the executions of the fierceness of his wrath in hell…; the devil is waiting for them, hell is gaping for them, the flames gather and flash about them, and would fain lay hold on them…: and…there are no means within reach that can be any security to them.

Notice that Edwards says “they have deserved the fiery pit”. Edwards insists that men ought to be condemned to eternal pain; and his position isn’t that this is right because God wants it, but rather that God wants it because it is right. For him, moral standards exist independently of God, and God can be assessed in the light of them.

Of course, Edwards and people like him do not send people to hell; but a question may still arise of whether intuitive human sympathies for others (“moral intuitions”) conflict with a principled moral approval of eternal torment. Didn’t Edwards find it painful to contemplate any fellow human’s being tortured for ever? In his case: Apparently not. Edwards claims that “the saints in glory will…understand how terrible the sufferings of the damned are; yet…will not be sorry for [them].” He bases this on a rather ugly view of what makes people, including people in Paradise, feel happy:

The seeing of the calamities of others tends to heighten the sense of our own enjoyments. When the saints in glory, therefore, shall see the doleful state of the damned, how will this heighten their sense of the blessedness of their own state…When they shall see how miserable others of their fellow-creatures are…; when they shall see the smoke of their torment,…and hear the dolorous shrieks and cries, and consider that they in the mean time are in the most blissful state, and shall surely be in it to all eternity; how will they rejoice!

…some food for thought about "happiness" for present-day utilitarian philosophers here, perhaps.

(The above, with slight modifications, is lifted from the philosopher Jonathan Bennett's classic article "The Conscience of Huckleberry Finn".)

Expand full comment

Edwards was a psychopath. But then, the God he believed in was a psychopath. What's he enraged by? That the imperfect beings he created to send to hell acted the way he designed them to, and are duly going to hell, as planned.

Expand full comment

Even considering that the metaphor breaks down under further examination, I still disagree that I should go and pick up the glass for hypothetical children. This is because the line of thought rapidly leads to "if I ever get pregnant I'm not allowed to get rid of it".

I think there's a certain threshold of harm to myself vs. the amount and certainty of harm to future people where I'd be willing to do it. Abortion is huge harm to myself vs. huge certain harm to a future person. I think the glass example is minor harm to me vs. moderate uncertain harm to a future person, and for me the calculus comes out favouring me.

Most people already make this trade-off constantly - every time you choose to fly from the US to Europe rather than swim, you're picking a lower risk of drowning for you vs. a semi-certain moderate-to-large harm for future people.

Expand full comment
founding

Isn't there an unstated premise here that you have only one (moral) value? I want a happy life, but I also want an interesting life, and they're two separate values that are allowed to have more complex interactions than just adding them together. I want a universe populated by happy people, but I also want one which is interesting and complex. I feel like this makes it pretty obvious why a static Alaskan town is missing something - it's only addressing one particular value in our value set.

Expand full comment

Reading the description of the Repugnant Conclusion is a reminder of why we rarely end up with philosopher kings. They would still be trying to justify themselves even as their immiserated subjects were stringing them up from the nearest lamppost.

Expand full comment
founding

I really wish this kind of thing were taught to students at a certain age, probably earlier than 15-16. And the point of the lesson should be precisely that you can use your rational mind to check and train your intuitions, but in the end you're very much allowed to throw away reason if it smells fishy. It's another kind of intellectual humility: even if the logic is perfect you can still fail, and your logic is literally never perfect.

There's a point in most people's lives where they're liable to follow a very reasonable train of thought all the way to the Holocaust or the Cultural Revolution. And I don't think the current water supply contains this particular failure mode explicitly yet - probably because popular memeplexes work hard on convincing you that _their_ logic is the correct one, and very few teach general skepticism as an Art in itself.

Expand full comment

Okay, so both Average Utilitarianism and Total Utilitarianism have some advantages and disadvantages in capturing our moral intuitions. It seems that an important source of failure modes is the whole line of reasoning about creating/killing people instead of helping people who already exist. I'm not sure how to properly address this right now, but there seems to be some potential there.

On the other hand, as a quick fix, what if we just combine them: U = T + A, where utility has both a total utilitarian component and an average utilitarian one? The repugnant conclusion becomes less repugnant: we are ready to sacrifice some utility for the sake of more people, but not up to the point where everybody is barely suicidal. Same thing with killing the people with the least happiness: the total utilitarian component gives us a bar restricting the killing of people above some level of utility. What weighting of the total and average components would be the correct one is obviously up to fine-tuning. What are the possible terrible consequences of this approach?
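A minimal numeric sketch of the combination (all numbers and the weighting are made up; the weight is exactly the fine-tuning knob in question):

```python
def combined_utility(n, per_person, avg_weight=1e12):
    """Toy scoring rule: U = T + weighted A, with everyone at equal utility."""
    total = n * per_person      # total-utilitarian component
    average = per_person        # average-utilitarian component
    return total + avg_weight * average

small_happy = (5e9, 80.0)   # five billion people at utility 80
huge_meh = (1e15, 0.01)     # a quadrillion people barely above zero

# With a heavy weight on the average term, the small happy world wins:
print(combined_utility(*small_happy) > combined_utility(*huge_meh))   # True
# With no extra weight, the total term dominates and the huge world wins again:
print(combined_utility(*small_happy, avg_weight=1.0)
      > combined_utility(*huge_meh, avg_weight=1.0))                  # False
```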

Expand full comment

Philosophical discussions from 4000 BCE: Urgrah the sage says to Gorgoro the rock basher. "See sticks, some long, some short. Put sticks in piles. One pile many many long sticks, one short stick. Other pile, few long sticks, one short stick. Tie pelt over eyes and spin, then grab stick from one of piles. Maybe you grab short stick, then you probably pulled from small pile of sticks. Now know all caveman must be small pile, because drew short stick to be under the stars now." At this Gorgoro became amazed and delighted at how lucky he was to live at the peak of human civilization.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

> It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'")

I don't actually see much of a problem here. If the threshold for "positive welfare" is high, then the Repugnant Conclusion isn't very repugnant. If it's low, then the Sadistic Conclusion seems reasonable: you would rather have a small number of people suffering than a very large number of people who are each suffering slightly less. If the arguments don't put any sort of bound on what is meant by "larger" or on the difference between the two happiness levels, then these conclusions become even more clear, in my opinion.

(Actually, I'm not sure the "Sadistic Conclusion" is that wrong, regardless of the threshold chosen. If the slightly-worse group must be allowed to be arbitrarily large for the argument to go through, then which is better: one city of extremely happy people, or one galaxy of slightly less happy people?)

Expand full comment
founding

My model of David Chapman is that he was semi-serious about "philosophy is bad" and that he wasn't just describing your post!

Expand full comment

I think the actual and fairly trivial answer to the "Repugnant Conclusion" is that the total happiness over time will be vastly higher for the non-tiled population every time. If we look at the world, most advances come from the developed world, not the vast bulk of humanity, and moreover, from the top end of the developed world.

This suggests that any "tiling" argument fails because better people actually lead to exponentially higher levels of growth. Indeed, if we look at history, the Industrial Revolution occurred after several centuries of growth that made the average person in parts of the world multiple times better off than subsistence farmers.

This led to insane levels of economic growth and growth of human well-being.

As such, the entire argument immediately fails because it is obvious that you see much better outcomes with better populations - and you see more people that way, too.

Moreover, because our requirements for minimum standards are non-stable and go up over time, this means that what is considered acceptable today is not considered acceptable in the future. This is why India and the US come out around similar levels - what is considered tolerable in India is not tolerable here.

As such, the entire argument is pretty awful and fails on a pretty basic level.

Not to mention the other obvious problems (like the fact that people don't like being miserable so are likely to effect changes you don't want if they are kept in this state).

Well, that and the fact that the ideology inherently says that the most ethical thing we can do now is commit serial eugenics-based genocide because those near-infinite number of future people will be better off if they are all geniuses and have very few criminals in their ranks, and we know from behavioral genetics that these traits are heritable, so because those future people matter SO much, it's obviously horribly unethical for us NOT to set people on the right path.

Right? :V

Expand full comment
founding
Aug 26, 2022·edited Aug 26, 2022

The mildest of preferences against inequality can defeat the repugnant conclusion. If everyone is very very very happy, then bringing a child into the world who is merely very happy (let alone neutral) will increase inequality instantly. The badness is not connected with the happiness of the child, but the inequality that you have birthed (ha!) on society.

You can get a formal population ethics that formalises this if you want. For instance, arrange everyone from lowest life utility to highest, and apply a discount rate across this before summing. Then bringing people into existence can be a net negative, even if their own utility is positive. I'm not saying you should use this system, just that there exist perfectly fine formalisations of population ethics that avoid the repugnant conclusion entirely.
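A minimal sketch of one such formalisation (the discount factor and the numbers below are arbitrary):

```python
def rank_discounted_value(utilities, beta=0.999):
    """Toy rank-discounted sum: order from worst-off to best-off and weight
    the i-th person by beta**i before summing. One possible formalisation in
    the spirit of the comment, not a canonical definition."""
    ranked = sorted(utilities)
    return sum((beta ** i) * u for i, u in enumerate(ranked))

# Adding a barely-happy person to a very happy world lowers the score,
# because everyone happier than them slides one rank deeper into the discount.
happy_world = [100.0] * 1000
print(rank_discounted_value(happy_world))            # ~63230
print(rank_discounted_value(happy_world + [0.01]))   # slightly lower
```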

Now, as someone has pointed out, if you avoid the repugnant conclusion, you have to embrace another bad-sounding conclusion. The one I want to embrace is the so-called "sadistic conclusion". I say "so-called" because it isn't really a sadistic conclusion at all. It says, roughly, that sometimes it's better to bring into existence someone whose life isn't worth living than a larger collection of people whose lives are barely worth living.

Sounds sadistic for that poor person whose life isn't worth living - but what the "sadistic" moniker conceals is that it's a *bad* thing to bring that person into existence. It's also bad to bring those other people into existence, for non-total utilitarianism ethics. All that the sadistic conclusion is saying is that, between two bad things, we can make one of them worse by scaling it up.

More details here: http://blog.practicalethics.ox.ac.uk/2014/02/embracing-the-sadistic-conclusion/

Expand full comment
founding

This is the point where I should bring up the Very Repugnant Conclusion that Toby Ord shared with me.

If W is a world with a trillion very very very happy people, then there is a better world W', which has a trillion trillion trillion people getting horribly tortured, and a trillion trillion trillion trillion trillion trillion people living barely passable lives (or numbers of that magnitude; add a few trillions as needed).

https://www.lesswrong.com/posts/6vYDsoxwGQraeCJs6/the-very-repugnant-conclusion

Expand full comment

The mathematics behind the RC is almost kindergarten level. Are philosophers really impressed that they worked out that y * N is smaller than (y/d) * M for values where M/N is greater than d?
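For what it's worth, spelled out (taking y, d, and N all positive):

$$ yN < \frac{y}{d}\,M \iff N < \frac{M}{d} \iff \frac{M}{N} > d $$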

Expand full comment
founding

What you've shown is that the RC is a mathematically trivial consequence of total utilitarianism.

But the underlying "mere addition" argument doesn't require total utilitarianism to function, or much in the way of assumptions on how utility is multiplied, combined, etc...

Expand full comment

Didn't comment on the last one, but the comment that struck me most was the one about the infinite oregano. That is, the problem lies with taking statistical averages to pool together people that are stated to not interact with each other. If you instead take a soup approach and assume a person's value is based on their direct effect on the state of the existing group, the "make more, less happy people" problem goes away.

I guess I'll try to make an example. A baby with no living relatives is given to an AI facility, which puts it in a solitary room and gives it food and water that would inevitably go bad and be thrown away if the baby didn't get it. If the baby lives its whole life in that room, and nothing they do leaves the room, is their life's value based on how happy or depressed they are? Or do we say their life is completely neutral? If it's completely neutral because they never affect anyone else, the Repugnant Conclusion disappears.

Expand full comment

+1 to utilitarianism being fundamentally broken, moral philosophy being inherently incoherent and a waste of time trying to rationalize moral instincts that aren't rational in the first place.

Instead, just help others purely for selfish reasons like because it'll make you feel good, look good to others, or because it helps to maintain a common resource that you/your community benefits from (e.g. social safety nets, charities, Wikipedia, open source projects). That seems much more coherent and in line with our own moral instincts.

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

I'm pretty unconvinced by the non anti-utilitarian argument. If people are at exactly the same utility they are far more likely to be having the same experiences. It's fairly plausible that having the same experiences doesn't actually count as different sources of value, e.g. see https://slatestarcodex.com/2015/03/15/answer-to-job/

Expand full comment

The response (16) to Mentat Saboteurs' explanation of glass shards and barefoot running misses something. Every. Single. Longtermism. Hypothetical. Has an answer like this.

The more you dig into each risk, the more you realise that the risks are not as profound as initially thought. After a while you start to notice a pattern: none of the risks pan out. It doesn't mean that the risks shouldn't be taken seriously: a runner could still cut their foot, a nuclear war could still kill 100 million people; it's just that the risks aren't existential.

Where gaps in knowledge exist, fear arises. It's natural to be afraid of the dark, but maybe grab a torch and steelman that shit.

Expand full comment

World A (five billion happy people) means "and then humanity goes extinct after that". As a data point, my intuition doesn't support that at all, even compared to "colonise the Virgo cluster and exist for millions of years as a cyberpunk dystopia", if that's the only possible variant.

I'd also support Repugnant Conclusion if the question is "utterly destroy a miserable megapolis or a happy hamlet".

Expand full comment

> If playing Civ and losing was genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn't feel true,

Well, that does feel true to me. (Except insofar as a "game" of Civ where I know ahead of time whether I win or lose is not quite the same as an actual game, but this isn't applicable to children -- children who underwent embryo screening are just as real as ones who didn't.)

> any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence").

The Sadistic Conclusion doesn't *at all* sound obviously wrong to me, not even if I translate it to the first person. (Would I rather my life was extended by thirty years of boredom and drudgery than by one hour of excruciating pain? The hell I would!) So a system which avoided the other three conclusions might still feel okay to me even if it accepted the Sadistic one.

> There’s been some debate about whether we should additionally have an explicit discount rate, where we count future people as genuinely less important than us.

More precisely:

Any faster- (or slower-)than-exponential discounting would be dynamically inconsistent (you would pick A over B if choosing thirty years in advance but you would pick B over A if choosing five minutes in advance, *even if you didn't learn anything new in the meantime*).

Any exponential discounting would either care about the year 1,102,022 a helluva lot less than about 1,002,022 or care about 3022 nearly as much as about 2022.
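A toy numeric illustration of the dynamic-inconsistency point (the hyperbolic form, the 0.9 factor, and the 10-vs-11 payoffs are all arbitrary choices):

```python
def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)   # a standard non-exponential discount curve

def exponential(t, factor=0.9):
    return factor ** t           # constant-per-period exponential discounting

def prefers_late(discount, delay):
    # Option A: utility 10 after `delay`; Option B: utility 11 one period later.
    return 11 * discount(delay + 1) > 10 * discount(delay)

# Hyperbolic discounting flips as the options draw near (dynamic inconsistency):
print(prefers_late(hyperbolic, 30), prefers_late(hyperbolic, 0))    # True False
# Exponential discounting gives the same ranking at every distance:
print(prefers_late(exponential, 30), prefers_late(exponential, 0))  # False False
```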

> Hari Seldon writes: ... If every human who ever lived or ever will live said "I am not in the first 0.01% of humans to be born", 99.99% of them would be right. If we're going by Bayesian reasoning, that's an awfully strong prior to overcome.

That argument isn't novel and has been extensively discussed for decades. https://en.wikipedia.org/wiki/Doomsday_argument

Expand full comment

I feel that the only place where I disagree with MacAskill is that I don't believe in "sufficiently large" population - more specifically, I believe that people can have degrees of being "the same person" other than 0 or 1. That is to say, once there is a sufficiently large/diverse population, it becomes bad to create someone below average, since you're shifting people's experience-measure in a worse direction. Likewise, it becomes close to pointless to make more people eventually, since they already mostly exist.

Expand full comment

One potential solution is to factor in uncertainties and practical concerns. Instead of asking "which of these two worlds do you prefer", realize that in the real world you always start from somewhere and are proposing changes to go somewhere else.

So if you start from a world of five billion perfectly happy people and go to a world of ten billion slightly less happy people, you have to ask "how wide are the error bars on that?" Because for any change affecting that many people, they're going to be pretty damn wide. So instead of going from 100 happiness to 95 happiness, you might accidentally go from 100 to 75. Which really changes the calculation. Anyone who claims to have more certainty about a proposed change of that magnitude cannot be taken seriously.

The math works the same way going the other direction, starting from ten billion going to five billion. The error bars are too wide.

If you start from somewhere and go somewhere else, and you're uncertain about how close you'll get to your goal, it naturally limits the whole process. Even if you go through the cycle once or twice, quadrupling the population, at that point your error bars have only gotten wider, and your gains smaller, so the process naturally stops because it's no longer a clear win. You certainly won't go all the way to a quadrillion people with 0.01 happiness, because it's impossible to make those fine adjustments at those magnitudes.

Someone mentioned higher up that these philosophical preferences are not transitive. If I prefer A over B, and B over C, that doesn't mean I prefer A over C. This is why. It's just math with error bars.

Expand full comment

Huh. My take on utilitarianism is actually heavily informed by Unsong.

[spoilers]

> IF TWO THINGS ARE THE SAME, THEY ARE ONE THING. IF I CREATED TWO PERFECT UNIVERSES, I WOULD ONLY HAVE CREATED ONE UNIVERSE. IN ORDER TO DIFFERENTIATE A UNIVERSE FROM THE PERFECT UNIVERSE, IT MUST BE DIFFERENT IN ITS SEED, ITS SECRET UNDERLYING STRUCTURE.

Translate to utilitarianism: sum utilitarianism, weighted by distinctiveness. If two humans have functionally identical lives of muzak and potatoes, that only counts for one; come up with something original if you want more points. Can you create ten billion meaningfully distinct barely-worth-living lives? If you can, I approve, but I bet you can't. If you simulate a copy of me in hell, that sucks, but I don't much care whether the counter next to the simulation says "1 copy" or "9999 copies".
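A deliberately crude sketch of the "identical lives only count once" rule, with exact equality of a life-description standing in for "functionally identical" (which is of course the hard part):

```python
def distinct_weighted_total(lives):
    # lives: list of (life_description, utility) pairs.
    # Exact duplicate descriptions collapse to a single entry before summing.
    collapsed = {}
    for description, utility in lives:
        collapsed[description] = utility
    return sum(collapsed.values())

muzak_and_potatoes = [("muzak and potatoes", 0.01)] * 1_000_000
print(sum(u for _, u in muzak_and_potatoes))       # ~10000 under plain sum utilitarianism
print(distinct_weighted_total(muzak_and_potatoes)) # 0.01: a million identical lives count once
```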

"Meaningfully distinct" is pulling a lot of weight here, and I'm still searching for a mechanism that doesn't incentivize accepting a long string of random numbers into your identity. But this is a descriptive take, not prescriptive, and I think eventually a solution could be found.

Expand full comment

A crucial difference between people who operate in philosophy la-la-land and people who operate in reality is that the former naturally tend to gravitate towards models where things are known and predictable for certain, while almost nothing in the real world works like that. In the real world you need an appropriate margin of safety for your decisions, which is why you'd never ever even consider getting close to that 0.001 happiness level: there is an excellent chance you undershoot it by, oh I don't know, 50 points. And at least in my book, if I have a 50% chance of undershooting by 50 points and a 50% chance of overshooting by 50 points, I wouldn't take the risk of creating that much suffering by aiming for the neutral point. I'd shoot for a 50 average.
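A tiny simulation of that margin-of-safety point, just mirroring the 50-point over/undershoot above (numbers are purely illustrative):

```python
import random

random.seed(0)
noise = 50.0   # symmetric chance of over- or undershooting by 50 points

# Aim for "barely worth living" (0.001) and miss by +/- 50:
aim_low = [0.001 + random.choice([-noise, noise]) for _ in range(100_000)]
print(sum(x < 0 for x in aim_low) / len(aim_low))   # ~0.5: half the outcomes are miserable worlds

# Aim for 50 instead and the worst case is merely neutral:
aim_safe = [50.0 + random.choice([-noise, noise]) for _ in range(100_000)]
print(min(aim_safe))                                # 0.0
```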

Expand full comment

I'm not sure where this fits in, but I would like to raise my personal objection to the "twenty-nine steps that end with your eyes getting pecked out by seagulls" process.

This kind of thing typically starts with presenting an extremely contrived hypothetical (like "you get to choose between creating a world with X people at Y average happiness, or 2X people at Y-5 average happiness" -- I get to choose that? Really? Where's my world-creation button?). Then it asks people questions about their moral intuitions regarding the hypothetical. Then it takes the answers to those questions and proceeds to turn them into iron-hard mathematical axioms that are used to build up a giant logic train that ends up in seagulls eating your eyes.

The problem here is that intuitive answers to moral questions about extremely contrived hypotheticals are not reliable information. If you ask me some extremely weird moral hypothetical question, my credence in whatever answer I give is not going to be high. Maybe you could get 80% certainty on an answer like that -- honestly, it's probably lower, but it's certainly not higher.

Turning an intuition into an iron-hard mathematical axiom, of course, requires that it be at 100% certainty. So the sleight of hand is in the bit where you take a mushy contingent answer, and turn it into a 100%-certain axiom. If you actually try to do logical deduction using that 80%-certain intuitive guess, then you have to lower your certainty with each logical step you make, just as when making calculations with physical parameters you have to multiply tolerances. Doing this correctly, you end up saying "this philosophical exercise suggests that, at 5% certainty, you should let seagulls peck out your eyes". To which it is obvious that you're justified in answering "so what".
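For concreteness, compounding even a generous per-step confidence over a chain like that (the numbers are illustrative, not the argument's):

```python
steps = 29                  # the "twenty-nine steps" from the hypothetical
per_step_confidence = 0.9   # generous; the reasoning above suggests 0.8 or lower
print(per_step_confidence ** steps)   # ~0.047, i.e. roughly "5% certainty"
print(0.8 ** steps)                   # ~0.0015 if each step is only 80% certain
```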

(A related technique is, IMO, the best way of dissolving Pascal's Wager/Mugging issues. The point of Pascal's Mugging is to present you with an artificial probability distribution where paying the mugger is positive-EV. But the EV, i.e. the mean of the probability distribution, is not the only relevant quantity; you also need to take variance into account. The variance of the Pascal's Mugging distribution is extreme, and the modal outcome is one where the value of paying the mugger is negative. The trick in Pascal's Mugging is to get you to ignore this variance information, which is very relevant to the decision.)

Expand full comment
Sep 5, 2022·edited Sep 5, 2022

These Utilitarian thought experiments sound a lot like Scholasticism. How many angels can dance on the head of a pin? Can God make a stone so heavy that he can't lift it?

I don't intend to be mean spirited, but isn't this all a bit... well... juvenile?

Expand full comment

"But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against..."

Imagine that every positive integer is conscious. What prior probability should the number 4 place on the proposition "I experience being a single-digit integer?" Why should the conscious entity 4 be surprised that it's a 4? While there may be infinitely many possible integers, there is guaranteed to be a 4, which means that one consciousness is guaranteed to experience the qualia "I am a 4."

Alternate thought experiment: consider two possible worlds. In World A, humanity explodes into nonillions of descendants sometime after 2100 CE. World B is identical to World A, except that the universe vanishes in a cosmic accident on January 1, 2080, and no one after that is born. Now consider Bob, who lives until 2050 in both worlds. His experiences are identical in both worlds. Why should Bob take his experiences as evidence that he lives in World B? Both worlds contain a Bob!

Regardless of what the future looks like, the present contains Bob. He has conscious experiences, and those experiences will always be had by Bob's consciousness, not anyone else's. Why should it surprise the-conscious-entity-experiencing-Bob's-life that it is experiencing Bob's life?

I posit that our current experiences tell us nothing whatsoever about the likelihood of the universe imploding sometime in the future, except insofar as we can trace a causal path from our current observations to future events. Anthropic reasoning doesn't work for estimating future humans.

Expand full comment

> I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

As someone whose intuitions see the repugnant conclusion as "not that repugnant", I think "killing somebody with 40 decent years of life left" and "not letting somebody be born with 40 decent years of life left" are at least somewhat close in badness.

My intuitions take "probable persons should be evaluated equivalently to existing persons" quite far, and I haven't had time to work out the math (which is, probably, not easy, and may cause lots of views to change). But I want to file that I do have these intuitions.

Expand full comment
Sep 22, 2022·edited Sep 22, 2022

Something that bothers me about population ethics is that it seems like we live in a Big Universe, and the intuitions from hypotheticals like "what if the universe was just 5,000 people", "what if the universe was just a billion people in Hell", "what if the universe was just a high-tech mega-civilization spanning the Virgo Supercluster", etc. seem like they transfer poorly to the world described by modern physics.

(We seem to be embedded in two unrelated "multiverses" - space-time seems to extend much, much further than the visible universe, and the many-worlds interpretation of quantum mechanics suggests many overlapping areas embodying different quantum outcomes. That's without even getting into more speculative ideas.)

Average utilitarianism is intuitively appealing when it comes to smaller toy models like this, but it only works if you know the whole state of the world. If we're extremely uncertain about the state of everyone outside our tiny bubble, it becomes undecidable: we're left radically uncertain about whether any lives we're capable of creating are bringing the average up or dragging it down.

Scope insensitivity also makes these questions awkward. Is the reason we like the glorious high-tech civilizations more than the larger but worse-off ones of the Repugnant Conclusion just that our brains are too small to really represent the number of people in either of them, and we're just comparing individual lives? If so, should we ignore our intuitions here because we know they're irrational, the same way we should in a Monty Hall problem?

Edit: A third issue: we tend to reduce utilitarianism to hedonic utilitarianism in these discussions. But while there are convincing arguments that you have to be some sort of utilitarian or else your preferences are inconsistent and you can fall into endless loops and the like, there's no such argument requiring us to be pure hedonists. Obviously happiness is immensely important, but we can attach utility to other things as well.

For example, if we regard death as intrinsically very bad, then any new mortal life we create needs to be good enough to balance out the fact that they will someday die; but once they are created that cost is sunk, and they only need to be barely worth living to justify not committing suicide. (Although this runs into some interesting problems of how exactly to define death, I basically think it's true.)

Expand full comment