380 Comments
Comment deleted (August 26, 2022)

(some context from https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of/comment/8633798)

Pretty sure that game theoretically you do still run into the drowning child scenario, in the sense that if you could give me my child's life at minimal cost to you and did not, we'll probably have issues (fewer than if you had killed my child, of course). That seems like part of the calculus of coordinating human society.

Comment deleted (August 26, 2022)

It seems like you're choosing which things are rights and violations without appealing to your foundational game theory morality now? I'm not saying you're wrong, just that I don't understand how you've decided where the line is.

If it confers game theoretical benefits to harm someone at a particular level of could've/should've, then doesn't this framework suggest it? Ignore here the can of worms around figuring out if you really could've/should've, or explicitly appeal to it.

No existing countries are bound by this framework, so what they do is irrelevant.

Comment deleted (August 26, 2022)

We're not talking about whether or not we like or believe something here. There is no such thing as a right (positive or negative) outside of some framework that defines them. My understanding was that your set of rights are defined by game-theory, which would not obviously exclude negative rights.

If you're already jumping to the step where you bend your framework to make it useful or acceptable, it's probably fair for me to point out that trying to measure rights-violation-utils is in practice just as silly as trying to measure QALY-utils -- accuracy is going to be extremely rough.

Comment deleted (August 26, 2022)

Comment deleted (August 25, 2022)

Comment deleted (August 25, 2022)

This is a good point. We would struggle to determine the line between very small net positive happiness and objective unhappiness. It could be that most people plod along day after day at -10 or whatever, and some researcher/philosopher would mistake that for 0.1 and say they were fine. Not that we have a way to benchmark or calibrate what either of those numbers would mean in real life.

Comment deleted (August 25, 2022)

You don't moonlight as Reg, the leader of the People's Front of Judea from Monty Python's Life of Brian, do you?

"All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?”


How do you measure happiness? A caveman sitting in the warm summer sunshine, in good health, with a full belly, and their mate and offspring by their side, could have been perfectly happy, in the sense that if we managed to create a felixometer to measure "brain signals corresponding to happiness", hooked our caveman up to it, measured them, then measured someone living in today's Future World Of Scientific Progress, both would score just as highly.

How can we distinguish between our caveman who is warm and happy and our modern person who is warm and happy? Both of them score 100 on the felixometer. How much does Science And Progress contribute there? Yes, of course it contributes, but we can't say that "a single felicity unit on the felixometer" matches up to "ten years' advancement in progress", such that if we had a time machine to jump into and measure the happiness of a future person living in a space colony, they would measure 2,000 units, our current-day modern person would measure 100 units, and our caveman would only measure 1.

"I'm physically and emotionally satisfied, not in pain, not hungry, comfortable, and it's a lovely day in unspoiled nature" is the same for Cave Guy, Modern Guy and Future Guy and will measure the same on our felixometer (even if "physically and emotionally satisfied" includes antibiotics and surgery to treat sources of physical pain for Modern Guy that Cave Guy can't even dream of).

That's why "measures of happiness/contentment" don't tell us very much, and why, as above, "depressed guy in America" and "depressed guy in India" can each be roughly 10% of their populations, even though living conditions in America and India are very different.


I see two major differences in what we are calling "happiness" between our society and a pre-modern society. Our society is far more complex and comes with lots of obligations, stress, and necessary knowledge (e.g. you have to go to school for over a decade to meaningfully participate). So that's on the negative side, lowering much of the potential happiness. On the other side, our society has much better insulation against sudden unhappiness: disease, or being killed by neighbors or wild animals.

The question seems to be more about average happiness (which would be really hard to determine) or preference for avoiding catastrophe. My wife and I prefer a steady income to a potentially higher but more volatile income. I would prefer today's society (assuming my preferences would carry over) to a caveman's, based on that.


""All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?”"

How could any of this stuff possibly matter except to the extent that people's experienced sense of wellbeing has improved as a result? And if you're saying this stuff *ought* to have improved our sense of wellbeing, well, that's irrelevant. What matters is what has actually happened.


While people are having Repugnant Conclusion takes, I might as well throw in a plug for my post from 2016 about "the asymmetry between deprivation and non-deprivation," which is an under-appreciated nuance of the issue IMO:

https://lapsarianalyst.tumblr.com/post/147212445190/i-was-thinking-today-about-why-total


I definitely appreciate this post and think you're basically right

Though when I read "(Somehow I suspect this will be challenged. I am curious to see what people will say. Maybe humans can be happier than I think.)" my inner Nick Cammarata leaps forth. It seems like some people really do live in a state of perpetual, noticeable joy

(Maybe we should be figuring out how to attain that? I know for him it involved meditation and a sprinkle of psychedelics, but I have no clue how I would do the same)


How can mortal beings who are aware of their own mortality ever be unproblematically happy? With the possible exception of the lobotomised....


I think the real answer is that how happy you are is only tenuously and conditionally related to the facts of your life. There's no a priori answer to "set of facts {X, Y, Z} are true, should that make me feel happy?" If there's some process that can take any normal set of facts and make a person feel good about them, then almost anyone could be happy.


Well, this is "happy" the way a man who falls from a skyscraper can tell himself as he passes floor after floor on the way down: "All in all, I am quite happy with my situation - I haven't had a scratch so far".


I agree with this, and this fact alone should give us great pause about our ability to rationally determine how best to create happiness. It seems to me the best way to create happiness is to create the kind of person who always seems to find the good in things and focus on that. As SA's example suggests (unhappiness in India vs. the US is roughly the same percentage, despite the significant differences in physical reality), there will be people unhappy in almost any circumstance, people who will be happy in almost any circumstance, and people in between.


The US founding fathers wisely stated in the Declaration of Independence that “life, liberty and the pursuit of happiness” are unalienable rights. That turn of phrase is rather clever; notice the difference from stating that “life, liberty and happiness” are what rulers should aim at providing. I pale at the thought of some utilitarian (or a future AGI?) getting into a position of power and insisting on providing me and everybody else with happiness; and I pale even more if his/her/its ambition is expanded to making everybody happy in the untold millennia to come.

The late prime minister of Sweden, Tage Erlander, said something rather similar to the US founding fathers, concerning the Scandinavian welfare states. He stated (quoting from memory) that the role of the state is to provide the population with a solid dance floor. “But the dancing, people must do themselves”.


I'd agree with this — not in the sense that it's literally impossible (you could not care, or believe in an afterlife, etc), but in the sense that *I* cannot imagine it and I think most who aren't extremely bothered by it are in some sense wrong / not really thinking about it / have managed to persuade themselves of the usual coping mechanisms.*

Not that this *has* to be the case — perhaps, say, a true Buddhist or Stoic might have legitimately gotten themselves to a place wherein it doesn't bother them without leaning on afterlives or ignorance — just that I believe most people share my basic values here whether they admit it or not.

*That is, the first two options there are pretty self-explanatory; for the third, think of the stuff you hear normies say to be "deep" and "wise": "It's natural, the circle of life, you see; beautiful, in its way... and you'll live on in your children and the memories they have of you!"

Yeah okay but if you knew you'd die tomorrow unless you took some action, would you be like "nah I won't take that action because I'd prefer the 'children and memories' thing to continued life"? No. It's a consolation prize, not what anyone actually prefers. Not something anyone would get much comfort from if there was a choice.

You can make anything shitty *sound* good — "some people are rich, some people are poor; it's natural, the circle of wealth, beautiful in its way... and you can be rich in spirit!" — but make it more concrete, by posing a choice between "rich" and "poor" or "alive" and "dead", and we see how much the latter options are really worth.


Really interesting post, thanks for sharing.

> we could say that depression just is being close to the zero point, and depression-the-illness is a horrible thing that, unlike other deprivations, drives people’s utilities to near-zero just by itself.

Being depressed felt like everything in life - including continued existence and meeting my basic needs - is too _hard_, in the sense of requiring a lot of effort that I physically cannot expend. Being suicidal felt like I just want what I am feeling to stop, and it's more important than self-preservation because preserving the self will just lead to feeling this forever. I've heard it compared to jumping out of a burning building and I really like this analogy.

Come to think of it, it's the inability to imagine getting better that's the cognitive distortion, the rest of the reasoning is solid. I think we as a culture taboo this kind of thought because you fundamentally cannot estimate your future happiness or suffering.


>I've heard it compared to jumping out of a burning building

David Foster Wallace's expression of this idea is close to my heart: (linked so one can choose whether to read)

https://www.goodreads.com/quotes/200381-the-so-called-psychotically-depressed-person-who-tries-to-kill-herself

Of course it doesn't capture every type of suicidal ideation or behavior, but it's an excruciatingly keen picture of the type that eventually took Wallace and whose shadow looms over several whom I love, as well as, at times, me.


+1, good post from me too. It does seem right that the non-linearity of non-deprivations is separate from the hedonic treadmill, a point that I hadn't considered before.


I don't think it's fair to consider depression the zero point. Humans evolved to want to survive, and our cultures generally reinforce this desire. People who would choose not to exist are, for whatever reason, not thinking normally. There's no good reason to think it's closely related to actual quality of life, and people in chronic pain may have much lower average quality of life than people with depression (which is usually temporary). The bar should be whether you personally would choose to live that person's life in addition to your own or not.


You might have Schopenhauer as an intellectual ally. I think of his argument that if you compare the pleasure a predator feels when eating a prey, with the pain the prey feels while being eaten, it is obvious that pain is more intense than pleasure in the world. Plus, there is much more of it.


Similarly, "The rabbit is running for its life, the fox only for its dinner". I should note I still embrace the Repugnant Conclusion (even while also being a moral non-realist).


I'm honored to be highlighted and get a sarcastic response, although I think I defended the relationship between the weakness of the thought experiment and its consequences for a philosophy of long-termism in the replies to my comment.

Also, if water didn't damage suits, wouldn't your obligation to save the drowning child be more obvious rather than less? An extremely selfish person might respond to the Drowning Child thought experiment by saying, "Why no, the value of my suit is higher to me than the value of that child's life, that's my utility function, I'm cool here." Wouldn't the analogous situation be if children couldn't drown (or my mocking of MacAskill with a version of the Drowning Child where the child is Aquaman)?


I think Scott's sarcastic point is that it would mean that, actually, *you* never want to sacrifice resources to save someone else – the thought experiment that led you *to think that you want it* was flawed.

But yeah, it's more of a joke than something that stands up to scrutiny.


Having occasionally ditched my suit as fast as possible, and having some basic swimming skills including life-saving training: the time it takes to ditch the suit is much less than the time you will save in the water without it, if the child is far enough from shore to be in deep water. Which is where a drowning person is likely to be. Not to mention the likely effectiveness and safety, for both of you, of your mission.


This. The hypothetical is really badly set up.


Specify that you can tell the water is 4 ft deep and the child is drowning because they are 3 ft tall and can’t swim. They are far enough that you can’t reach from shore, but close enough that you don’t have to swim any great distance. Does it work now? I guess also specify that you are over 4 ft tall so you can just stand.


That's the way I read it; a child is drowning in water an adult can wade in. Otherwise, my inability to swim means I watch the kid die and thank God that at least my suit's okay.


It's also not really an argument for anything resembling utilitarianism - if it was a really nice suit, and the child was horribly abused (and thus very unhappy), utilitarianism is the only serious moral philosophy I'm aware of that advocates not saving the child for the sake of the suit.


It's odd, because the proponents of utilitarianism seem to take it as a positive trait that their philosophy notices that saving a child is good, even if it would ruin a suit. As you say, other major philosophies don't consider saving the suit to be a moral consideration at all. I think I can see how they would look at that as a better system, because someone would consider the suit in an analysis of the situation - but I don't consider that part of the *moral* calculations. You always save the child, whether you're wearing swimming trunks or an expensive dress - that's the moral imperative. That you bear the burden of a ruined suit is a practical consideration that doesn't impact the morals.


It's also weird if you think about who you should save - a very happy successful businessman or a horribly abused child. Most people's moral intuition is that you save the child, rather than whoever has the nicest life.


If water didn't damage suits, then that analogy can't be extended to spending money on bed nets.


Yes! "Long termism's analogies don't work because the real world details are more complex than they allowed for" is a compelling analogy for "long termism doesn't work because the real world details are more complex than they can calculate for", and deserves a better response than sarcasm.


I get that pedantry is annoying, but the alternative is being wrong.


Your pedantic comment, while not being data, is an illustrative example of how everyone screws up when trying to predict the future. I think that's still an interesting tangent to go down. Is longtermism any more useful than neartermism if we fundamentally can't predict long-term consequences? How many seemingly-watertight arguments are going to be totally wrong in hindsight? Are enough of them going to be right to outweigh the ones that are wrong?


Man, I'm getting put in the pedant box? Can I convince you that I'm something far worse than a pedant?

My real motivation was that it annoys the hell out of me when people make stupid assumptions about the dangers of being barefoot. Moderns think that human feet are soft and weak and incapable of being exposed to anything other than carpet. To be fair, most Western people's feet ARE soft and weak, because they've kept them in cumbersome protective coverings for the majority of their lives. But this is not true in general: human feet, when properly exercised and conditioned, are actually superior to shoes in most cases. This makes sense if you think about it for 20 seconds - shoes did not exist for the vast majority of human history, and if human feet were so weak, then everyone would have been incapacitated pretty quickly by stepping on something sharp. I recommend protective coverings if you're a construction worker, of course; there are dangers there that didn't exist in the ancestral environment. But every time I'm out hiking or running on sidewalks, some asshole feels the need to ask me if I forgot my shoes, or make some other ignorant comment. This annoys me, and seeing an opportunity to educate an internet community on barefooting, I took it.

So you see, I am not a pedant, even though everything you say is correct and I agree entirely and that factored into my comment. But my primary motivation was advocating for barefooting, and by being obnoxious, I got Scott to signal boost my comment. Hahaha, my evil plan to defeat the shoe industry and restore strong feet is succeeding!


I object to bare feet on aesthetic grounds


I object to shoes on aesthetic grounds.


I object to your reasoning - you claim that just because we evolved with bare feet, they're better than shoes. But then why did we invent shoes in the first place, and why were they so widely adopted? Do you think that only the edge cases where shoes are better drove that adoption?


> you claim that just because we evolved with bare feet, they're better than shoes.

No, I claim that having gone barefoot and having worn shoes, in my experience bare feet are often superior to shoes, and this is supported by the history of human feet and their obvious evolved purposes.

> But then why did we invent shoes in the first place, and why were they so widely adopted?

Ah yes, humans have never invented and widely adopted anything that is bad for humans, because humans are perfectly reasonable creatures that would never make such mistakes. Cigarettes, cryptocurrencies, and leaded gasoline simply never existed.

> Do you think that only the edge cases where shoes are better drove that adoption?

I do not. Fashion and advertising by shoe companies are much more dominant forces than construction boots and such. The fact that most modern shoes are horribly designed but nearly universally used is a data point to consider.


Ummm. It takes less than a minute to undress, and a water-soaked wool suit will significantly weigh one down (making it more likely I'd be unsuccessful in rescuing the child). I'd discard the suit before I jumped in to save the child.


You're right that anti-communist intuitions don't disprove the abstract argument.

But if you have a friend who's constantly coming up with hypotheticals about sleeping with your beautiful wife, maybe don't leave him alone with her.


I think most people who come up with Repugnant Conclusion hypotheticals are doing so to point out the problems with average utilitarianism, not because they think the universe with a trillion nearly-suicidal people is the best of all possible worlds. That's why they called it the Repugnant Conclusion and not the Awesome Conclusion.


Correction: total utilitarianism, not average. Average utilitarianism doesn't lead to the repugnant conclusion, but it has other problems; see the thought experiment of creating more people in hell who would be a little happier than the ones already there.
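For concreteness, here is a minimal sketch of why total (but not average) utilitarianism yields the repugnant conclusion; the population sizes and utility levels are illustrative, not from the comment above:

```python
# World A: a small, very happy population. World Z: a huge population whose
# lives are barely worth living. All numbers are purely illustrative.
pop_a, u_a = 1_000_000_000, 100.0        # 1B people at utility 100 each
pop_z, u_z = 1_000_000_000_000, 0.2      # 1T people at utility 0.2 each

print(pop_a * u_a, pop_z * u_z)  # totals: 1e11 vs 2e11 -> total utilitarianism prefers Z
print(u_a, u_z)                  # averages: 100 vs 0.2 -> average utilitarianism prefers A
```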


...or simply removing the least happy people in any given universe.


That’s just a total misunderstanding of the correct use of the mean though. The RC itself is just simple multiplication.


You realize that they call such a scenario "repugnant", right?


I think I have to post my obligatory reminder that consequentialism does not imply utilitarianism, that E-utility functions are terribly ill-defined and it's not clear that they make sense in the first place, and also while I'm at it that decision-theoretic utility functions should be bounded. :P

...you know what, let me just link to my LW post on the subject: https://www.lesswrong.com/posts/DQ4pyHoAKpYutXwSr/underappreciated-points-about-utility-functions-of-both


Also Scott's old consequentialism FAQ, which describes consequentialism as a sort of template for generating moral systems, of which utilitarianism is one: https://translatedby.com/you/the-consequentalism-faq/original/?page=2 (the original raikoth link no longer works)


This. I continue to be baffled that people think it's worthwhile to endlessly tinker with utilitarianism instead of abandoning it.


The reason seems pretty clear to me. The problem with accepting consequentialism-but-not-utilitarianism is that there are no obvious grounds to value people who are distant or sufficiently different from you highly enough (or at all), and so no way to rule out various unsavory, but nevertheless empirically appealing, ideas like egoism or fascism outright.


There are no strong arguments motivating universalism, but there generally aren't strong arguments forbidding it either.

Also...how sure are we that universalism is true? Lots of people here reject it. Do EAs believe in utilitarianism because they believe in universalism, or vice versa?

The theory that's most hostile to universalism is probably contractarianism, but even that only says that you cannot have obligations to people in far off lands who are not members of your community. A contractarian can still regard telescopic ethics as good but not obligatory. Does it matter so much that universalism should not just happen, but be an obligation?


Well, I'm neither an EA nor a utilitarian, so I'm only speculating, but this community does seem to have a particular affinity for universalism and systematization, and it certainly helped that eloquent thought leaders directed plenty of effort in that general direction.


I agree, because I am a moral non-realist egoist, describing myself as a consequentialist but not a utilitarian. Although my broader consequentialism is a stance toward others so we can come to some kind of contractarian agreement.


A blend of satisficing consequentialism and virtue ethics has started to seem pretty workable to me. Thoughts?


FWIW, it's aggregation, not maximization, that I object to. I think any coherent consequentialism needs to be maximizing due to Savage's theorem / the VNM theorem.


Interesting. It seems to me both that maximizing is what leads to all the various inescapable repugnant conclusions, and also that maximizing is the most counter-intuitive part; enormous chunks of life are lived with value happily left on the table.

Which of these doesn't seem convincing / why is maximizing convincing for you?

(FWIW, I think VNM has very unrealistic assumptions. Is there an emotional reason why you find it convincing or realistic or intuition fitting?)


You seem to be using some implicit assumptions that I'm not, but I'm a little unsure what they are, which makes this a little difficult to reply to? For instance, to get the repugnant conclusion, you need a *lot* of things, not just maximization; for instance, you can't get the repugnant conclusion without some sort of aggregation. There's no particular reason that any given non-utilitarian/non-aggregative consequentialism would yield it.

(FWIW, my position is actually not so much "aggregation is *wrong*" as it is "I can't endorse aggregation given the problems it has but I'll admit I don't know what to replace it with". :P )

> Is there an emotional reason why you find it convincing or realistic or intuition fitting?

Minor note -- I'm sure you didn't intend it this way, but this question comes off as fairly rude. Why say "emotional" reason, and thereby include the assumption that my reason can't be a good one? Why not just ask for my reason? Adding "emotional" just changes the question from one of object level discussion to one of Bulverism (of a strange sort). I would recommend against doing that sort of thing. Anyway, I'm going to ignore that and just state the reason.

Now the thing here is once again that it's a bit difficult to reply to your comment as I'm a little unclear as to what assumptions you're using, but it reads to me kind of like you're talking about applying VNM / Savage to humans? Do I have that right?

But, I'm not talking about any sort of utility function (E-utility or decision theoretic utility) as applied to humans. Certainly humans don't satisfy the VNM / Savage assumptions, and as for E-utility, I already said above I'm not willing to accept that notion at all.

As I went through in my linked post, I'm regarding a consequentialist ethics as a sort of agent of its own -- since, after all, it's a set of preferences, and that's all an agent is from a decision-theoretic point of view. And if you were to program it into an AI (not literally possible, but still) then it would be an actual agent carrying out those preferences. So they had better be coherent! If this is going to be the one single set of preferences that constitutes a correct ethics, they can't be vulnerable to Dutch books or have other similar incoherencies. So I think the assumptions of Savage's theorem (mostly) follow pretty naturally. To what extent humans meet those assumptions isn't really relevant.

(Again, apologies if I'm misreading your assumptions, I'm having to infer them here.)

So note that this maximizing is purely about maximizing the decision-theoretic utility of this hypothetical agent; it has no necessary relation to maximizing the E-utility or decision-theoretic utility of any particular people.

This I guess is also my answer to your remark that

> maximizing is the most counter-intuitive part; enormous chunks of life are lived with value happily left on the table

Like, what is the relevance? That reads like a statement about E-utility functions to me, not decision-theoretic utility functions. Remember, a decision-theoretic utility function (when it exists) simply *reflects* the agent's preferences; the agent *is* maximizing it, no matter what they appear to be doing. Of course a big part of the problem here is that humans don't actually *have* decision-theoretic utility functions, but, well, either they don't and the statement isn't relevant, or they do and they are implicitly maximizing it (if you aren't maximizing it, it by definition isn't decision-theoretic utility!). I can only make sense of your statement if I read it as instead being about E-utility, which isn't relevant here as I've already said I'm rejecting aggregation!

Does that answer your questions? Or have I missed something?


My apologies! Emotional has neutral valence and connotations for me. What I meant was, for all that any given argument is rational, we either gravitate to it or away from it by natural preference. A better phrasing would be "what do you think about VNM's prerequisite assumptions?"

"""But, I'm not talking about any sort of utility function (E-utility or decision theoretic utility) as applied to humans. Certainly humans don't satisfy the VNM / Savage assumptions, and as for E-utility, I already said above I'm not willing to accept that notion at all."""

I think this was my confusion, thanks. Actually in general this was a great reply to my vague questions, so thank you!

"""then it would be an actual agent carrying out those preferences. So they had better coherent"""

This might not be necessary or ideal. Maybe not necessary due to complexity; humans are expected to be strictly simpler than AGI, and we can hold inconsistent preferences. Maybe not ideal in the sense that consistency itself may be the vulnerability that leads to radical moral conclusions we want to avoid. It's possible that ambivalence comes from inconsistency, and that it is natural protection against extremism.

"""Like, what is the relevance? That reads like a statement about E-utility functions to me"""

Yes, it was. I suppose the stuff I said above would be my reply to the decision-theoretic maximization case as well.

Thanks!


Uh thanks for the post! That was a nice read.

However, in this case I don't think we even need to get that far. Where the "mugging" happens is (as others and Scott's article point out) in saying that these two are really equivalent:

1) losing X Utils from a person that could potentially exist

2) removing from an actually existing person whatever Y Utils cause some "total aggregate" (however ill-defined) utility to go down by the same amount as in scenario (1)

For me, and others, it isn't equivalent, because there's just no "utility" in (1). I don't care about that scenario at all. And there's no need to take the first step to getting-your-eyes-eaten-by-seagulls by entering the discussion about how to aggregate and compare these two scenarios by accepting they both have some positive utility. I really think it's important to cut the argument at the root.

The only reason someone can get away with saying such a choice is "inconsistent" is that it's about the future, and the large numbers involved get people's intuitions mixed up. If I say that somebody has devoted their life to building a weird fertility cult in the deep desert in Mongolia, and has managed to get about 50k people to be born there, all living lives barely above the baseline of acceptability, scavenging for food and constantly falling sick, but not quite wanting to die - we wouldn't say "oh what a hero, look at how many beautiful Utils he's brought into existence".


I think there must be a 4th way in which a theory can differ: by refusing to accept that the way the situation is modeled mathematically is correct.

I think this is the case here. My intuition is that if we insist that a human life's value must be a single number, then at least we should allow that it may change over time - at least in response to the actions of the thought experimenter. For example, if I add a new person to a world already populated with people, then their lives are altered, too.

But more importantly, I think the model should clearly differentiate between "actions within a world" (like the one above) and "choosing between two worlds" (like: would you prefer a world that starts with that additional person).

For me the intuitive judgment of a given world comes from trying to imagine me to randomly inhabit one of its members' minds.

In this framework: I prefer the Hell with N x -100 + N x -90 people to the Hell with only N x -100 as it gives me a slightly better expected life.

At the same time, in the same framework, being already born in this second Hell, I wouldn't want to spawn an additional N x -90 people in it, as it would mean for each of them the certainty of inhabiting a mind with negative value, which is bad for them; and also having to live with the memory of voting for their suffering would make me suffer and change my utility from -100 to an even more negative number.
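A tiny worked version of that "random soul" expected-value comparison, using the numbers from the comment (N is arbitrary):

```python
N = 1_000  # arbitrary population unit

hell_a = [-100] * N              # N people at -100
hell_b = [-100] * N + [-90] * N  # the same N people plus N more at -90

# Expected utility of inhabiting a randomly chosen member of each world.
print(sum(hell_a) / len(hell_a))  # -100.0
print(sum(hell_b) / len(hell_b))  # -95.0, slightly better in expectation
```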

Basically my intuitions are modeling "pool of souls waiting in a lobby to join the game some players are already playing".

In this moral framework the Repugnant Conclusion, even if somehow derived, would only mean that if I were in a position to redesign the world from scratch I might do it differently; but as I am already in an existing world, and adding people to it is qualitatively different from switching to a different world, such a conclusion would not force me to breed like crazy. It's a type error.

But this framework actually doesn't even reach the RC, because it doesn't prefer the world with lower average life utility offered in the first step.

(The most important missing feature in various puzzles of the "choose A vs B" kind is specifying whether I'll have the memory of the choice, and memory of the previous state. I think part of the difficulty in designing an AI's utility function around the state of its off-switch is that we obsess over the end state (switch is on vs switch is off), forgetting about the path by which we got there, which is burned into the memory of participants, at least in the atoms of the brain of the human supervisor, and thus becomes part of the state. I mean, perhaps utility functions should be over paths, not endpoints.)


I agree very much with your comment. Thanks for making it. I'm now curious whether philosophers have already come up with nasty trap conclusions from extrapolating the utility-functions-over-paths idea. Not sure how to google that, though. Maybe someone with a background in philosophy can answer, with the terminology needed, whether this idea is already a well-considered one.


The "pool of souls" model kind of presupposes egoism, right? If we optimize for a world that a single soul would prefer to enter, it's not surprising that we would get outcomes inconsistent with total wellbeing.

This also serves as a criticism of Rawlsianism I hadn't considered before. Behind the veil of ignorance, you'd prefer a world with a vast number of moderately suffering people compared to a world with a single greatly suffering person.


Yes, if designing a game I would prefer to make a multiplayer game which sucks and gives -99 fun to everyone rather than a single-player game which sucks at -100.

Yes, I'd prefer to play the first one rather than the second one.

Yes, if I were a god, I'd prefer to create the first one rather than the second one.

Also, if already inhabiting the first world I wouldn't even have the option to switch to the other without somehow killing everyone except one person. But assuming it's somehow possible to remove them in a painless way, giving the survivor bad memories that only shift their mood from -99 to -100, I think I might vote for this swap, as I think nonexistence should be valued at 0, so in expectation such a global euthanasia looks like a good change in the average.

If I were in the second world to begin with, I would not want a swap to the first one, as it brings new people into sure suffering, and I don't feel like doing that; however, the format of the puzzle assumes that for some axiomatic reason my utility will rise from -100 to -99 despite my intuition. I think the puzzle is simply stupid.


The question is not whether you would change world 2 to world 1 or vice versa, it's which world is preferable over the other.

So using the models you gave, if you were a god, the question is which world you would choose to create. You say you would choose the first world, which is highly surprising. If you had a choice between a world with one person being tortured who also had a painful speck of dust in their eye, or a world with 100 trillion people being tortured, you would really choose the second one?

On the other hand, if you were a soul choosing which world to enter, you would also choose the first world, which seems defensible on the basis of maximizing your personal expectation of wellbeing. But considering your own expectation of wellbeing above all else essentially amounts to egoism.


I agree one has to be careful to not confuse:

- a god choosing which world to create

- a god choosing to swap one for another (although I am not quite sure why a god would feel it's different, if worlds can be paused and started easily without affecting inhabitants; I suspect some gods could)

- being a participant of the world and pondering changing it one way or the other (say by having a child, voting for communist party or killing someone)

I think it's strange that I am accused of "egoism" in a scenario in which I am a god outside of the world, not really participating in it, just watching.

On the one hand, god's happiness is not even a term in the calculation.

On the other hand, the whole puzzle is essentially a question of "which world would the god prefer?" So obviously the answer would please the god - what's wrong with that?

Also it's a bit strange that the format of the puzzle asks me to think like a god but then judges my answer by human standards. Why judge god at all?

I think this whole field is very confused by mixing up who is making decisions, who is evaluating, what the options on the table really are, and what the path to achieving them was.

But anyway, yes, as a god I'd prefer to create the world in which the average person's life is better. And that would maximize the happiness of a god who likes this, by definition of such a god. Now what? How does this translate into, say, government policy? I say: not at all, because that would be an entirely different puzzle, in which the decision maker is either a politician or a voter embedded in the world, and the change occurs within the world. One would also have to specify whether one wants to optimize the happiness of the decision maker, or of the average citizen, or what. If one doesn't specify it and instead shifts the burden of figuring out the goal function onto the person being asked the puzzle, then it seems to become a metapuzzle of "which kind of goal function would please you the most as a person watching such a country?". Which seems a bit strange, as I don't even live in this country, so why do you care about an outsider's opinion?

If the question instead is "ok, but which goal function should the people inside this world adopt as say part of their constitution?" then I'd still say: to optimize what? My outsider's preference?

This looks recursive to me: people set their goals already having some goals in setting them.

Ultimately it bottoms out in some evolutionary instincts, and in my case those instincts say that I like not being the cause of someone's suffering, I like being healthy and smiling and thousands of other things, and when joining a game server I want a high expected value of fun from it. So if I as a human may influence or suggest what a god should do to create nice games, I suggest high expected utility as the design goal.


I don't understand people who follow the Carter Catastrophe argument all the way to the point of saying "gosh, I wonder what thing I currently consider very unlikely must turn out to be true in order to make conscious life be likely to be wiped out in the near future!" This feels to me like a giant flashing sign saying that your cached thoughts are based on incompatible reasoning and you should fix that before continuing.

If you've done proper Bayesian updates on all of your hypotheses based on all of your evidence, then your posterior should not include contradictions like "it is likely that conscious life will soon be wiped out, but the union of all individual ways this could happen is unlikely".

My personal take on the Carter Catastrophe argument is that it is evidence about the total number of humans who will ever live, but it is only a tiny piece of the available body of evidence, and therefore weighs little compared to everything else. We've got massive amounts of evidence about which possible world we're living in based on our empirical observations about when and how humans are born and what conditions are necessary for this to continue happening; far more bits of evidence than you get from just counting the number of people born before you.
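For readers unfamiliar with the mechanics being discussed, here is a minimal sketch of the Doomsday-argument update itself; the prior and the two hypotheses are invented purely for illustration:

```python
# Posterior over the total number of humans N who will ever live, given only our
# birth rank r, under the self-sampling assumption P(rank = r | N) = 1/N for r <= N.
birth_rank = 60_000_000_000            # roughly "we are the 60-billionth human"
prior = {200_000_000_000: 0.5,         # "doom soon": ~200B humans total
         200_000_000_000_000: 0.5}     # "doom late": ~200T humans total

posterior = {N: p / N for N, p in prior.items() if birth_rank <= N}
norm = sum(posterior.values())
posterior = {N: p / norm for N, p in posterior.items()}
print(posterior)  # the "doom soon" hypothesis comes out ~1000x more probable
```

As the comment says, this is only one small likelihood term; everything else we know about how and why humans come to exist also belongs in the update.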


A bit like the difference between estimating the odds that the sun is still there tomorrow as about (number of sunrises so far[1]):1 using Laplace's rule of succession, and estimating it as much more than that using our understanding of how stars behave.

[1] If "so far" means "in the history of the earth", maybe the order of magnitude you get that way is actually OK. But if it means "that we have observed", you get a laughably high probability of some solar disaster happening in the next 24 hours.
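For reference, the rule in question gives P(success on the next trial) = (n + 1)/(n + 2) after n successes and no failures; a minimal sketch with illustrative observation counts:

```python
def p_failure_tomorrow(n_sunrises: int) -> float:
    # Rule of succession: P(next success) = (n + 1) / (n + 2),
    # so P(failure) = 1 / (n + 2).
    return 1.0 / (n_sunrises + 2)

observed = 10_000 * 365               # ~10,000 years of recorded observation (illustrative)
earth_history = 4_500_000_000 * 365   # sunrises over Earth's whole history (illustrative)

print(p_failure_tomorrow(observed))       # ~2.7e-7 per day: the laughably high estimate
print(p_failure_tomorrow(earth_history))  # ~6e-13 per day: a more defensible order of magnitude
```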


This has always seemed like a very silly way of looking at life. What are the odds that you are [First Name][Last Name] who lives at [Address], and has [personal characteristics]? One in 8 billion, you say!? Then you should not assume you are who you think you are, and should instead conclude that you are one of the other 8 billion people. Which one? Who knows, but almost certainly not [single identifiable person you think you are].


can't the repugnant conclusion also say that "a smaller population with very high positive welfare is better than a very large population with very low but positive welfare, for sufficient values of 'high' "?

I find it difficult to understand the repugnant conclusion because to me it barely says anything. If you are a utilitarian you want to maximize utility, period. Maybe one way is to create a civilization with infinite people barely wanting to live, or to create a god-like AI that has infinite welfare.

Am I missing something here?


That would be a utility monster, not the repugnant conclusion, but yes, it's another way that utilitarianism fails on the weird edge cases.


from Michael Huemer, in defence of repugnance https://philpapers.org/rec/HUEIDO

...I have sided not only with RC but with its logically stronger brother, the Total Utility Principle. What is the practical import of my conclusion? Should we, in fact, aim at a drab future like world Z, where each of our descendants occupies a single, cramped room and there is just enough gruel to keep them from hunger? Given any plausible view about the actual effects of population growth, the Total Utility Principle supports no such conclusion. Those who worry about population growth believe that, as the population increases, society's average utility will decline due to crowding, resource shortages, and increasing strain on the natural environment (Ehrlich and Ehrlich 1990)...

...Some people believe that population increases will lead to increases rather than declines in average utility, for the foreseeable future (Simon 1996). Their reasoning is that economic and technological progress will be accelerated as a result of the new people, and that technology will solve any resource or environmental problems that we would otherwise face. On this view, we should strive to increase population indefinitely, and as we do so, we and our descendants will be better and better off....

...To determine the population utility curve with any precision would require detailed empirical research. Nevertheless, the [arguments] suffice to make the point that the Total Utility Principle does not enjoin us, in reality, to pursue the world of cramped apartments and daily gruel. Perhaps its critics will therefore look upon the principle with less revulsion than has hitherto been customary...


The problem is that lots of people (including Scott) don't intuitively like the idea of worlds with lots of not-very-happy people being (potentially) better than worlds with very-happy people.

If you don't have that intuition, then there's indeed no issue.


> Peirce is a tragic figure because he was smart enough to discover that logic disconfirmed his biases, but decided to just shrug it off instead of being genuinely open to change.

Is that actually what he was doing? Wikipedia says

> Peirce shared his father's views [i.e. he was a Union partisan] and liked to use the following syllogism to illustrate the unreliability of traditional forms of logic, if one doesn't keep the meaning of the words, phrases, and sentences consistent throughout an argument:

> All Men are equal in their political rights.

> Negroes are Men.

> Therefore, negroes are equal in political rights to whites.

And at the time, the last statement was factually incorrect! The syllogism fails because the definition of Men changes between steps 1 and 2.


Thanks - I've deleted this until I can look into it more.


Far be it from me to criticize Peirce but isn’t the problem that the first premise was false at the time, not that there was an ambiguity in the use of the term “men”? (If “created” had been added before “equal” then the first premise would arguably have been true but in that case the syllogism would no longer have been valid.)


It's just ignoring the difference between moral and legal rights. The phrase, "people in North Korea have the right to free speech" is true if you believe in moral rights and think that's one of them, but false if you're talking about legal rights people can actually enforce against the state.


The mediocrity principle should not be invoked by people above a certain level of success.

Scott, you are an outlier in the reference class of humans-alive-in-2022; why shouldn't you also be an outlier in the reference class of all-intelligent-life-to-ever-exist?


I wrote in the previous comments that the repugnant conclusion is highly non-robust. Drop 2 units of per-capita utility from a society at +1 utility and you go negative, turning a trillion barely happy people into a trillion unhappy people; the total utility is now -1T.

Meanwhile the billion perfectly happy people drop from a total utility of +100B to +98B.

I know that this is not my thought experiment to play around with, and theorists might reply: stick to the initial conditions. However, I think when we are talking about a future society we should think like this. It's like designing many more bridges to take ten times as much traffic, but if you increase that traffic by 1% all the bridges collapse; this isn't sane design.

So if we, or a benign AI, could design an actual society for robustness, and we decided the per capita utility could collapse by up to 50 points, then the repugnant conclusion disappears.
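That arithmetic, spelled out with the comment's own numbers:

```python
# Two candidate worlds, before and after a uniform 2-point drop in per-capita utility.
large_pop, large_u = 1_000_000_000_000, 1   # a trillion people, barely happy (+1 each)
small_pop, small_u = 1_000_000_000, 100     # a billion people, very happy (+100 each)
shock = 2

print(large_pop * large_u, large_pop * (large_u - shock))  # +1T total  -> -1T total
print(small_pop * small_u, small_pop * (small_u - shock))  # +100B total -> +98B total
```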


A very illuminating comment, thank you.

It works the other way round as well, though: You only have to add +2 utility to get it back to utopia.

To me it seems that this puts a limit on both ends: You wouldn't want a trillion people society on the brink of neutrality, but also adding more utility to an already very happy society seems increasingly pointless.

This seems to describe the current situation in the world already, to some degree: we are mostly so rich that losing or adding some wealth does not have much impact (e.g. the studies that claim that earning more than 60k does not increase happiness), while at the same time a war in Europe's "backwaters" suddenly makes fuel and food unaffordable in all kinds of poor societies that were on the brink.


Let me add quickly: This implies that the utility-happiness-function has the steepest slope around neutrality (zero happiness) and flattens out in both directions of happiness.

Of course it also implies an absolute happiness scale.

Given those assumptions, the repugnant conclusion seems to vanish.
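One simple functional form with those properties (steepest slope at neutrality, flattening toward both extremes) is a tanh-style curve; this is only an illustration of the claimed shape, not something from the comment above:

```python
import math

def happiness(utility: float, scale: float = 10.0) -> float:
    # Steepest at utility = 0, saturating toward -1 and +1 far from neutrality.
    return math.tanh(utility / scale)

for u in (-100, -2, 0, 2, 100):
    print(u, round(happiness(u), 3))  # large changes near 0, almost none at the extremes
```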


It really doesn’t. All utilities in the example were linear (same slope everywhere). Just one of them had much higher constant slope than the other (1T vs 1B).


The "future people should be exponentially discounted, past people should be given extremely high moral weight" thing just seemed obvious to me; I'm surprised it's controversial among utilitarians. Fortunately there is no actual way to influence the past.


> Fortunately there is no actual way to influence the past.

Why would that be fortunate? If you believe that past people have extremely high moral weight then being able to help them would be an extremely good thing.


To me at least it's fortunate because we don't want people with strong moral convictions that they should enact sweeping change to have access to arbitrarily-high-leverage points in our path. I'm sure there's *some* degrowther somewhere who would love to transmit crop blights to the earliest agricultural centers, or someone who wants to drop rods from god on any European ship that seems like it might reach the Americas.

I don't share the parent commenter's intuitions about the value of past people, but even if I did I would

a) be glad that the extremely valuable past people are protected from the malfeasance of other zealots,

b) appreciate that the less valuable but still extant present and future people (incl. you know, me) won't suddenly be unmade and replaced with a wildly indeterminate number of people of unclear happiness, and

c) probably be relieved that I have an excuse to live a normal life and focus on the living and their near descendants rather than founding a multi-generational cult of *miserable* time agents perpetually battling extinctionists and scraping up spare utilons to bestow upon the most improvable cavemen, steppe nomads, and Zoroastrian peasants.

(Now, would I watch that Netflix show and mourn its undoubtedly premature cancellation? 100%)


Also, as a potentially still-pretty-early-human, you'd be a battleground for the future kiddos.


> The "future people should be exponentially discounted, past people should be given extremely high moral weight" thing just seemed obvious to me

Wow, talk about a difference in moral intuitions. This just seems obviously wrong to me. I know you used the term "obvious", but can you try to elucidate why those of the past have greater moral weight?


I think it's obvious future people should be discounted relative to the present. Past people having more moral weight than present people is then just a consequence of what economists call dynamic consistency (your preferences shouldn't change if you just wait around without getting new information).
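A minimal sketch of that claim: extend an exponential discount backwards in time and the same formula automatically gives past people weights above one (the discount rate here is arbitrary):

```python
delta = 0.97  # arbitrary annual discount factor

def moral_weight(years_from_now: float) -> float:
    # Dynamic consistency forces a single exponential curve across all of time.
    return delta ** years_from_now

print(moral_weight(100))   # ~0.05: a person born a century from now
print(moral_weight(0))     # 1.0:   a person alive today
print(moral_weight(-100))  # ~21:   a person who lived a century ago
```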


Do you have any *prima facie* intuition about moral weight of people of the past? Not that it should override otherwise reasoning, but if you're reasoning from what is "obvious", then we could just as easily start with the "obvious" moral weight of those in the past and deduce the weight of the future people from there.

I'm also curious to probe your intuition that future people should be discounted. What do you make of the shards-of-glass-10-years-later example? If you drop glass on the ground and know that in a decade a class of 8-year-olds will walk over it, you find that has less moral weight than if you know that in a decade a class of 12-year-olds will walk over it?


No, I don't have prima facie intuitions about past people's moral weight. It's purely a product of beliefs about discounting future people and dynamic consistency.

I don't intuit any moral difference between the 8-year-olds and 12-year-olds, which means I'm probably weighting future human-experiences rather than future humans per se, but that doesn't bother me.


I always took for granted that past people are entitled to the same moral consideration as far-away present people, and I once started writing a time travel story based on this. The idea behind it is that it's the duty of well-meaning time travelers to see to it that past persons had lives as good as possible. The problem in the story is that whatever happened in the past had left behind physical evidence, so that if you want your altruistic time traveler plan to have any chance of success, it has to leave behind evidence that's consistent with everything we've found. So the heroes do stuff like evacuating male sailors from a shipwreck to a comfortable island where they live reasonably nice lives. Once they're dead, a second team of time travelers move in and clean up after them. Basically, altruistic time travelers try to make historical lives better than they seem from just the surviving evidence. But then, in the story, they hatch a plan to make WWII less bad than it seemed, and things get pretty crazy. Whatever did happen in that time left lots of evidence that puts huge constraints on how much less terrible WWII could be made, but the protagonists are pretty creative and not allergic to manufacturing misleading evidence, just like in the shipwreck story.


I want to add that some philosophers think they may have beaten the Impossibility Theorems that Magic9Mushroom mentioned. It is an idea called "Critical Range Utilitarianism" that involves some pretty complex modifications to how value itself functions, but it seems like it could be workable. It doesn't seem that dissimilar to some ideas Scott has tossed out, actually. Richard Chappell has a good intro to it here: https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories

Even if Critical Range Utilitarianism doesn't pan out, I am not sure the Sadistic Conclusion is as counterintuitive as it is made out to be if you think about it in terms of practical ethics instead of hypothetical possible worlds. From the perspective of these types of utilitarianism, inflicting a mild amount of pain on someone or preventing a moderate amount of happiness is basically the same as creating a new person whose life is just barely not worth living. So another formulation of the Sadistic Conclusion is that it may be better to harm existing people a little rather than bring lives into existence that are worth living, but below some bound or average.

This seems counterintuitive at first, but then if you think about it a little more it's just common sense ethics. People harm themselves in order to avoid having extra kids all the time. They get vasectomies and other operations that can be uncomfortable. They wear condoms and have IUDs inserted. They do this even if they admit that, if they were to think about it, the child they conceive would likely have a life that is worth living by some metrics. So it seems like people act like the Sadistic Conclusion is true.

The Sadistic Conclusion is misnamed anyway. It's not saying that it's good to add miserable lives, just that it might be less bad than the alternatives. You should still try to avoid adding them if possible. (Stuart Armstrong has a nice discussion of this here: https://www.lesswrong.com/posts/ZGSd5K5Jzn6wrKckC/embracing-the-sadistic-conclusion )


Regarding the Sadistic Conclusion, it's counterintuitive to think that a world with an extra 100 people being savagely tortured their entire lives could be better than the same world with an extra 10,000 people whose lives are worth living but kind of "eh" (they can actually be alright, as long as they're below average because of how happy everyone else is).

Fertility prevention for utilitarian reasons is rare even if it exists - people do it for their own happiness, not to increase global happiness. It's because raising a kid who may well be extremely happy sounds like a massive faff.

Expand full comment

It does seem counterintuitive, but I think that might be a "torture vs dust specks" thing, where the same disutility that sounds reasonable when spread out among a lot of people sounds unreasonable when concentrated in a small number of people. Think about all the different inconveniences and pains that people go through to avoid having children. Now imagine that you could "compensate" for all of that by adding one person whose life is just barely not worth living. That's still the Sadistic Conclusion, but it sounds a lot more reasonable.

It is not my experience that fertility prevention for utilitarian reasons is rare. Most people who avoid having children, or have fewer children than they possibly could, have a moral justification for it, even if it is not explicitly utilitarian. They often say that they think it would be morally wrong to bring a child into the world if they can't give them a really good life. It is also fairly common for people to morally condemn others for having too many kids.

There are certainly people who say "Having a child would make the world a better place, but I don't think I'm obligated to do that." But there is also a lot of moralism where people are condemned for having children whose lives are positive, but suboptimal.

Expand full comment

I don't think torture vs dust specks is the same thing, because dust specks are negative utility. The problem with average utilitarianism is that (in extreme scenarios):

You have a world containing only 1,000 people who exist in constant ecstatic bliss. You can either add 1,000,000,000 people who aren't quite that happy, but are far happier than anyone currently alive in our world and would thank Quetzalcoatl* for the blessedness of being able to exist as they do with their every breath. Or, you can add 10 people being tortured horribly. If any of the averaging systems are right, you should add the 10 people being tortured horribly, since they drag the average happiness down by less than the billion merely-very-happy people would (a toy calculation after the footnote makes this concrete).

*In this world the Aztec religion had a reformation, and they now interpret him as wanting them to sacrifice their own hearts metaphorically, in selfless love for all living things
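A minimal sketch of that arithmetic (the specific utility numbers are my own illustrative assumptions, not anything from the scenario above):

```python
def avg(groups):
    """groups: list of (utility, count) pairs; returns the population-weighted mean."""
    total = sum(u * n for u, n in groups)
    people = sum(n for _, n in groups)
    return total / people

base = [(100, 1_000)]                 # 1,000 people in ecstatic bliss at utility 100
print(avg(base))                      # 100.0
print(avg(base + [(80, 10**9)]))      # ~80.00002: a billion very happy (but below-bliss) lives pull the average way down
print(avg(base + [(-1000, 10)]))      # ~89.11: ten horribly tortured lives pull it down less
```

On these assumed numbers a strict average view really does prefer the ten tortured lives; whether that holds depends entirely on how the torture's disutility compares with the gap between "bliss" and "very happy".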

Expand full comment

Critical Range produces the Anti-Egalitarian Conclusion - any equal population above the lower limit of the critical range can be made "better" in Critical-Range by breaking that equality even if it lowers average and total welfare, because the marginal utility of welfare within the critical range is zero.

Expand full comment

If I understand you correctly, you are saying that, for example, if the critical range is "5" then adding one individual whose utility is 6 and one whose utility is 0 is better than adding two people whose utility is 4.

That is not how Critical Range theory works. In Critical Range theory, the 6,0 population is worse than the 4,4 population, and both are "incomparable" with adding no one at all. In Critical Range theory, if you add a mix of people who are within the range and above the range, the addition is incomparable, not better, so long as the average utility of all the new people falls within the range rather than above it.
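A toy sketch of how those comparisons come out, assuming (my simplification) a single shared critical range of 0 to 5 and additive welfare; the published theory is subtler, but the verdicts for these examples match:

```python
LOW, HIGH = 0, 5   # assumed critical range for the "5" example above

def value(added, c):
    """Value of adding these lives, relative to adding no one, at critical level c."""
    return sum(w - c for w in added)

def compare(a, b, steps=101):
    """'better'/'worse' if a beats/loses to b at every admissible critical level,
    otherwise 'incomparable'."""
    levels = [LOW + (HIGH - LOW) * i / (steps - 1) for i in range(steps)]
    diffs = [value(a, c) - value(b, c) for c in levels]
    if all(d > 0 for d in diffs):
        return "better"
    if all(d < 0 for d in diffs):
        return "worse"
    return "incomparable"

print(compare([6, 0], [4, 4]))   # worse: the 6,0 addition loses to 4,4 at every critical level
print(compare([6, 0], []))       # incomparable: the verdict flips depending on where c sits in the range
print(compare([4, 4], []))       # incomparable
```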

Expand full comment

First, the zero of the individual welfare function has almost nothing to do with what people think about the quality of their lives. These are just numbers you use to match your intuitions. It doesn't even mean you must disregard their will - zero means only whatever the aggregate welfare function implies it means.

And so the second point - you can't choose between the Repugnant Conclusion and the Sadistic Conclusion, only between the Sadistic Conclusion alone and both of them together. The Sadistic Conclusion shows up even if "zero" means "almost suicidal" - you would still prefer that some hero suffer horribly to prevent a billion tortures, or something like that. And that means the sadistic variant is the one that is actually perfectly okay: if the repugnant world makes you sad, you should accept the price of averting it. Even if you don't think that adding people to that world makes it worse, you can just have a threshold range of personal welfare you don't care about - then the sad world of any size would have zero value.

It's not that population ethics is a field where the math is actively trying to get you - it all just works out in slightly more complicated models.

Expand full comment

There are four categories of answer that always come up in this kind of discussion; these have (sensibly) not been addressed in the highlights, but they're so recurrent that I'm going to highlight them here. In the spirit of the most recent post, I will do this in Q&A format (also in that spirit, some of the "questions" are really comments), and I have put them in rough order of sophistication.

Comment #1

Q: Why should I care about other people?

A: Why should you care about yourself?

Comment #2

Q: No one actually cares about others/strangers: people pretend to care about starving orphans, but actually only really care about their own status/warm feelings.

A: Even if this was true, it would be basically fine. A world in which people get pleasure from making other people's lives better is a much better world than one in which people gain pleasure from kicking puppies.

Comment #3 (Answers taken from SA and Godoth in the Tower post, which are much better than my initial response)

Q: All charity is counterproductive/bad/colonialist (implied: there are no exceptions here)

A1: Even organ donation?

A2: If you actually think charity is bad, then you should be working towards ways to end charity (e.g. campaigning against tax-deductibility for charitable donations)

Comment #4

Q: Charity may help in the short run, but ultimately impedes the development of market mechanisms that have made the biggest difference in improving human welfare over the long run (for a recent example, see China)

A1: Even deworming tablets, which are among the most effective educational interventions, and whose benefits tend to cascade into other things getting better?

A2: I've got some Irish historians I'd like you to meet after they've had a few pints ***

*** During the Irish famine, food continued to be exported from Ireland by the (mainly but not exclusively English) landlords, essentially on variations of this argument.

Expand full comment

>During the Irish famine, food continued to be exported from Ireland by the (mainly but not exclusively English) landlords, essentially on variations of this argument.

I recently learned that while there was export of high-value calories (wheat, oats), it was offset by imports of more plentiful cheap calories (maize). Per pseudoerasmus:

https://pseudoerasmus.com/2015/06/08/markets-famine-stop-treating-amartya-sen-as-the-last-word/

>Of course all of that was too late and took too long to happen. And it would have been much better if all the food produced in Ireland could have been requisitioned and provided to the starving. But it’s also an open question how much the British state in the 1840s was capable of doing that in time, if it had been as activist in its inclination toward famine relief as any modern government today is.

Naturally there was also friction, as maize was relatively unknown in Ireland. (But looking at the graph pseudoerasmus cites, I am not sure whether having net-zero exports would have been enough to cover what was needed -- the net exports of wheat and oats in 1846 are much less than the imported maize, and the maize wasn't enough.)

Expand full comment

Yes, there are definitely asterisks to the asterisks, and there was some charity. The maize bit is further complicated by the lack of appropriate mills at the time. That said, I do think two core points survive even a detailed look at the nuance:

1) Although exports were lower than imports, more food generally leads to less famine (noting complexities around hoarding and corruption)

2) The Whig government's attitude to charitable relief was heavily influenced by Malthusian and laissez-faire principles

Expand full comment

Same story with the great Bengal famines (1770 and 1943). Moral: even if you believe (religiously?) that market mechanisms optimize happiness as $t \to \infty$, the opposite can be dramatically true at particular critical points in time.

Expand full comment

You know, that was my original example, but it was harder to imagine Bengal historians drunk and rowdy

Expand full comment

The Bengal famine is addressed in the very book pseudoerasmus is reviewing in the link above.

Expand full comment

My comment was meant to refer to "The Whig government's attitude to charitable relief was heavily influenced by Malthusian and laissez-faire principles".

Expand full comment

What Whig government in 1943? Most of 1770 was also under a Tory PM.

Expand full comment

Ah, maize. "Peel's Brimstone":

https://www.rte.ie/history/the-great-irish-famine/2020/1117/1178730-the-temporary-relief-commission/

The problem was, you can't treat Indian maize like wheat. It was much harder to grind in mills of the time, and needed special cooking:

"Confronted by widespread crop failure in November 1845, the Prime Minister, Sir Robert Peel, purchased £100,000 worth of maize and cornmeal secretly from America with Baring Brothers initially acting as his agents. The government hoped that they would not "stifle private enterprise" and that their actions would not act as a disincentive to local relief efforts. Due to poor weather conditions, the first shipment did not arrive in Ireland until the beginning of February 1846. The initial shipments were of unground dried kernels, but the few Irish mills in operation were not equipped for milling maize and a long and complicated milling process had to be adopted before the meal could be distributed. In addition, before the cornmeal could be consumed, it had to be "very much" cooked again, or eating it could result in severe bowel complaints. Due to its yellow colour, and initial unpopularity, it became known as "Peel's brimstone".

Good intentions, but when you are secretly importing your relief because you don't want news to get out that the government is doing this (else it would affect The Market), and you are importing an unfamiliar food source that the population doesn't know how to use while exporting crops for cash, then no matter how good the intentions, the results are bad.

The entire Famine is a very complex matter. Those who suffered worst were the very poorest, but the landlords did not come out unscathed themselves, due to badly managed estates that were unprofitable because they were divided into so many small farms and plots of land (for the same reasons Scottish landlords decided on the Highland Clearances, as raising deer was more efficient and profitable than having tenant farmers on the land). Many of them were in debt, so that eventually they sold off their estates - see the Encumbered Estates Court: https://en.wikipedia.org/wiki/Encumbered_Estates%27_Court. Indeed, the tangled nature of property ownership in Ireland resulted in several land acts starting in the 1870s and ongoing until the early 20th century, but that's a separate, very large topic of its own.

The Famine was Moloch in operation. Unlike the simple (and understandable, given anger and the human need to blame *someone*) conspiracy notion of a planned and deliberate genocide, it was rather the culmination of historical processes that needed only one thing to go wrong before it all collapsed. With the potato blight taking on the role of that one thing, the failure of a single food source, which should have been a crisis but not a disaster, ended up causing death and misery on a huge scale, massive emigration that continued for decades and cratered the national population, and a corrosive memory left behind that ensured bitterness, mistrust and hatred, with the echoes still rippling on to this day.

Expand full comment

I was hoping you would pop by!

Expand full comment

Thank you, I'm back again to get into more trouble (no, not more trouble, I'll be good, I promise!)

Expand full comment

Q: Why should I care about other people?

A: Why should you care about yourself?

AA: I don't claim that I should care for myself in the sense of "should" meaning a moral obligation. So what's your point?

Expand full comment

"Why should I care about other people?"

That can be steelmanned to "why rationally!should I care about utilities that aren't in my utility function".

And "I rationally!should care about my own utility function" is a tautology.

Expand full comment

World economic growth can be hyperbolic and on track to go to infinity in a singularity event, but this graph does not illustrate it. The human eye cannot distinguish between exponential growth and pre-singularity hyperbolic growth on charts like this. With noise, these two trajectories would be pretty hard to distinguish with any tools.

To really see super-exponential growth we need to look at log-growth rates. If we do that, we can see a new regime of higher (measured) growth post 1950, but it is not obvious at all that growth is accelerating (it actually slowed down by about 1 percentage point from 4.5% to 3.5%) and also a lot of demographic headwinds are now gone. I am not sure I can easily insert a chart here, so you will have to trust me that the data you are using has the growth rate in a moderate decline on a 20-50 year horizon even before coronavirus and not in a rapid acceleration as you would see with a hyperbolic trajectory.

So your chart shows that we live in a special time only to the extent that this time is "post-WW2". And even then, the single 1AD-1000AD step on this chart may easily be hiding even more impressive 70-year growth bursts (it probably does not, but there is no way to tell from this data alone).
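A small illustration of that point (the parameters are my own, purely for illustration): over a finite window an exponential path and a pre-singularity hyperbolic path can look alike on a chart, but their log-growth rates behave very differently.

```python
import math

T = 60                                                   # years observed, well before the hyperbola's pole at t = 100
exponential = [math.exp(0.03 * t) for t in range(T)]     # constant ~3% growth
hyperbolic = [1.0 / (100 - t) for t in range(T)]         # blows up as t approaches 100

def log_growth_rates(series):
    """Year-over-year log growth rates of a positive series."""
    return [math.log(b / a) for a, b in zip(series, series[1:])]

for name, series in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    rates = log_growth_rates(series)
    print(name, f"first rate {rates[0]:.2%}", f"last rate {rates[-1]:.2%}")
# exponential: the rate stays at 3.00% throughout
# hyperbolic:  the rate creeps up from about 1.01% to about 2.41% -- easy to mistake
#              for noise on a level chart, but clearly rising once you plot the rates
```

Which is the commenter's point: to tell the two apart you have to look at the growth-rate series itself, not the level chart.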

Expand full comment

Thanks for including point 9, the 'yes this is philosophy actually' point. Reading that Twitter thread (a) got me grumpy at people being smug asses, and (b) was a bit of a Gell-Mann Amnesia moment – it's healthy to be reminded that the people talking confidently about things in this space (which I usually have to take on trust as I'm no expert in most stuff) can be just as confident exhibiting no idea of what philosophy is (which I do at least have a decent idea of).

Expand full comment

I just don't see why I should prefer a world with ten billion people to a world with five billion people, all else being equal (so no arguments about how a larger population would make more scientific discoveries or something, because that's obviously irrelevant to the Repugnant Conclusion).

Expand full comment
Comment deleted
August 25, 2022
Comment deleted
Expand full comment

While an interesting question, I think the answer you will get will involve unidentified non-hypotheticals from outside the thought experiment. A community that is too small (one person clearly fits) will have no relationships, no families, and will result in zero humans in a short period of time. A more interesting question would be 100 million or some other number large enough to firmly establish human civilization, but less than some other arbitrary and higher number.

Expand full comment

This is not a good thought experiment, because even if you stipulate that they're as happy as can be, people's intuitions will say the one-person world is worse because it means humanity goes extinct, they won't have any other people around to talk to, joke with, love, and there's no way to keep civilization running with one person. But those things are confounders.

Edit: Mr. Doolittle got there before me.

Expand full comment

If we are assuming a world with 1 happy person who is capable of doing everything 5 billion people can do, and there hasn't been some sort of mass genocide or something, yes.

Expand full comment

Because if existence is better than nonexistence, then there are twice as many people enjoying the benefits of existence.

Expand full comment

Existence is better than nonexistence for those who *already exist*.

Expand full comment