> Ted will ask you to give one of his talks.

As a counterpoint, the top TED talk by views is by waitbutwhy, a blogger whose only Amazon e-book is called "we finally figured out how to put a blog on an e-reader".

Talk: https://youtu.be/arj7oStGLkU

Blog: https://waitbutwhy.com


Once again happy not to be a utilitarian. Good review!


The repugnant conclusion always reminds me of an old joke from Lore Sjoberg about Gamers reacting to a cookbook:

"I found an awesome loophole! On page 242 it says "Add oregano to taste!" It doesn't say how much oregano, or what sort of taste! You can add as much oregano as you want! I'm going to make my friends eat infinite oregano and they'll have to do it because the recipe says so!"

Aug 23, 2022·edited Aug 23, 2022

> ...happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life. Derek Parfit describes it as “listening to Muzak and eating potatoes”.

Except now, you have to deal with the fact that many-to-most existing people lead lives *worse than death*. Mercy killings of the ill-off become morally compulsory; they may not actively choose it, some may even resist, but only because they're cowards who don't know what's good for them.

Put the zero point too low, and consequentialism demands you tile the universe with shitty lives. Put it too high, and consequentialism demands you cleanse the world of the poor. There is no zero point satisfying to our intuitions on this matter, which is a shame, because it's an *extremely critical* philosophical point for the theory - possibly the *most* critical, for reasons MacAskill makes clear.


The particular people who happen to be left after the apocalypse are considerably more important than the availability of easily exploitable coal deposits.

People in many countries around the world are struggling to achieve industrialisation today despite a relative abundance of coal (available either to mine themselves or on the global market), plus immediate access to almost all scientific and technological information ever produced including literal blueprints for industrial equipment. That these people would suddenly be able to create industry after the apocalypse with no internet and no foreign trade, even with all the coal in the world readily available to them, is a loopy idea.

Medieval England was wealthier per capita than over a dozen countries are today in real terms, all without the benefit of integrated global markets, the internet and industrialization already having been achieved somewhere else.

I of course do not expect MacAskill to have written this in his book even if he recognized it to be true.

Aug 23, 2022·edited Aug 23, 2022

Gentle mention: you missed Sean Carroll, via his Mindscape podcast, as a recent interviewer: https://www.preposterousuniverse.com/podcast/2022/08/15/207-william-macaskill-on-maximizing-good-in-the-present-and-future/


The repugnant conclusion seems unintuitive to me, specifically because it fails to consider the shape of the population-happiness tradeoff curve.

If you imagine this curve being concave down, then normal moral intuitions seem to apply: a large population that isn’t quite at carrying capacity is better than a much smaller, slightly happier population.

It’s really the concave up case that is unintuitive: where your options are a small happy population or a huge miserable one. But there’s no clear reason to my mind to imagine this is the case. People’s utility of consumption seems to plateau relatively sharply, suggesting that a smaller society really wouldn’t unlock tons of happiness, and that a giga-society where people still had net-positive lives might not actually contain many more people than the current 7 billion.

I don’t want to deny that it’s unintuitive that 20 billion people at happiness 10 really do outperform 1 billion at happiness 90, but I posit that it’s mostly unintuitive because it’d so rarely be just those two options.


Two things: first, the "number of atoms" limit annoyed me when I saw it, since we can obviously get value from moving atoms around (sometimes even back to the same place!), so the possibilities for value production are *much* higher than the constraints outlined.

Second, stealing my own comment from a related reddit thread on MacAskill: "The thing I took away from [his profile in the New Yorker] is that contrary to "near-termist" views, longtermism has no effective feedback mechanism for when it's gone off the rails.

As covered in the review of The Anti-Politics Machine, even near-termist interventions can go off the rails. Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets! But at least we can pick up on these mistakes after a couple of years, and course-correct or reprioritise.

With longtermist views, there is no feedback mechanism for unforeseen externalities, mistaken assumptions, etc. All you get at best is deontological assessments like "hmmm, they seem to be spending money on nice offices instead of doing the work", as covered in the article, or maybe "holy crap, they're speeding up where we want them to slow down!" The need for epistemic humility in light of exceedingly poor feedback mechanisms calls for a deprioritisation of longtermist concerns compared to the current general feel of what is communicated from the community."


“suppose the current GDP growth rate is 2%/year. At that rate, the world ten thousand years from now will be only 10^86 times richer. But if you increase the growth rate to 3%, then it will be a whole 10^128 times richer! Okay, never mind, this is a stupid argument. There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.”

This is a common economic fallacy. Growth is not necessarily correlated with resource production. For example, if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

As another example, consider the smartphone. A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass. This is because the utility of the smartphone, as well as the complicated processes needed to manufacture it, combine to create a price far higher than the simple shovel.

So yes, we could become 10^86 times richer using only 10^67 atoms. You simply have to assume that we become 10^19 times better at putting atoms into useful shapes. Frankly, the latter possibility seems far more likely than humanity ever exploiting more than a fraction of the atoms in the observable universe.
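The exponents being traded here can be checked directly; a small illustrative Python sketch (the figures are the ones quoted above, nothing more):

```python
import math

# Sanity-check the growth figures quoted above, working in powers of ten
# to avoid overflowing floats.
years = 10_000

exp_2pct = years * math.log10(1.02)   # growth at 2%/yr, as an exponent
exp_3pct = years * math.log10(1.03)   # growth at 3%/yr, as an exponent

print(f"2%/yr for {years} years: 10^{exp_2pct:.0f} times richer")   # ~10^86
print(f"3%/yr for {years} years: 10^{exp_3pct:.0f} times richer")   # ~10^128

# The commenter's point: 10^86 growth with only 10^67 atoms just means
# each atom must become 10^(86-67) = 10^19 times more valuable.
value_per_atom_exponent = 86 - 67
print(f"required value gain per atom: 10^{value_per_atom_exponent}")
```

Both of the review's exponents come straight out of compound growth, so the disagreement really is only about whether value per atom can keep climbing.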


I always used to make arguments against the repugnant conclusion by saying step C (equalising happiness) was smuggling in communism, or the abolition of Art and Science, etc.

I still think it shows some weird unconscious modern axioms that the step "now equalise everything between people" is seen as uncontroversial and most proofs spend little time on it.

However, I think I'm going to follow OP's suggestion and just tell this nonsense to bugger off.


"There are only 10^67 atoms in our lightcone"

Are there really? That doesn't seem right. There are about 10^57 atoms in the sun.

So 10^67 atoms is what we'd get if there were about ten billion stars of equal average size in our light cone. This seems, at least, inconsistent with the supposition that we might colonize the Virgo Supercluster (population: about a trillion stars).
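A quick order-of-magnitude check of the commenter's arithmetic, sketched in Python (using only the two figures given in the comment):

```python
import math

# Orders of magnitude only.
atoms_per_sun = 1e57      # quoted figure for the Sun
atoms_claimed = 1e67      # the review's "atoms in our lightcone"

# How many Sun-sized stars would 10^67 atoms correspond to?
stars_implied = atoms_claimed / atoms_per_sun
print(f"implied star count: 10^{math.log10(stars_implied):.0f}")  # ten billion

# The Virgo Supercluster alone is said to hold ~a trillion (10^12) stars,
# so at 10^57 atoms per star that is already ~10^69 atoms.
virgo_atoms_exponent = 57 + 12
print(f"Virgo Supercluster alone: ~10^{virgo_atoms_exponent} atoms")
```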


Conditional on the child's existence, it's better for them to be healthy than neutral, but you can't condition on that if you're trying to decide whether to create them.

If our options are "sick child", "neutral child", and "do nothing", it's reasonable to say that creating the neutral child and doing nothing are morally equal for the purposes of this comparison; but if we also have the option "healthy child", then in that comparison we might treat doing nothing as equal to creating the healthy child. That might sound inconsistent, but the actual rule here is that doing nothing is equal to the best positive-or-neutral child creation option (whatever that might be), and better than any negative one.

For an example of other choices that work kind of like this - imagine you have two options: play Civilization and lose, or go to a moderately interesting museum. It's hard to say that one of these options is better than the other, so you might as well treat them as equal. But now suppose that you also have the option of playing Civ and winning. That's presumably more fun than losing, but it's still not clearly better than the museum, so now "play Civ and win" and "museum" are equal, while "play Civ and lose" is eliminated as an inferior choice.

Aug 23, 2022·edited Aug 23, 2022

> MacAskill introduces long-termism with the Broken Bottle hypothetical: you are hiking in the forest and you drop a bottle. It breaks into sharp glass shards. You expect a barefoot child to run down the trail and injure herself. Should you pick up the shards? What if the trail is rarely used, and it would be a whole year before the expected injury? What if it is very rarely used, and it would be a millennium?

This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never run even a mile unshod.


When I think of happiness 0.01, I don't think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.

If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn't seem so repugnant. If you take a page from the negative utilitarians' book (without subscribing fully to them), you can weight the negatives of pain higher than the positives of pleasure, and say that a neutral life needs many times more pleasure than pain, because pain is more bad than pleasure is good.

Another way to put it is that a life of 0.01 happiness is a life you must actually decide you'd want to live, in addition to your own life, if you had the choice to. If your intuition tells you that you wouldn't want to live it, then its value is not truly >0, and you must shift the scale. Then, once your intuition tells you that this is a life you'd marginally prefer to get to experience yourself, then the repugnant conclusion no longer seems repugnant.


> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

Any view that takes the average into account falls into the Aliens on Alpha Centauri problem, where if there are a quadrillion aliens living near Alpha Centauri, universal average utility is mostly determined by them, so whether it's good or bad to create new people depends mostly on how happy or miserable they are, even if we never interact with them. If those aliens are miserable, a 0.001 human life is raising the average, so we still basically get the Repugnant Conclusion; if they're living lives of bliss, then even the best human life brings down the average and we shouldn't create it.
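The dominance effect this comment describes is simple arithmetic; a toy Python sketch (alien numbers are from the comment, the 8-billion human figure and the specific happiness values are made up for illustration):

```python
# Toy model of the "Aliens on Alpha Centauri" problem for average views:
# with enough aliens, the universal average is pinned to their welfare,
# which then dictates whether adding any given human raises or lowers it.

aliens = 10 ** 15          # a quadrillion aliens
humans = 8 * 10 ** 9       # roughly today's human population (assumed)

def universal_average(alien_happiness: float, human_happiness: float) -> float:
    total = aliens * alien_happiness + humans * human_happiness
    return total / (aliens + humans)

# Miserable aliens: even very happy humans barely move the average,
# and a barely-positive 0.001 life still sits above it.
avg = universal_average(alien_happiness=-10.0, human_happiness=50.0)
print(avg)          # close to -10: the aliens dominate
print(0.001 > avg)  # True: a 0.001 life "raises the average"
```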


Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

Some religions do, but I'd be surprised to find a modern atheist philosopher among them. But if you accept the premise that preventing the existence of a future person is as bad as killing an existing person…


The suppositions of misery - whether impoverished nations or sick children- to me always seem to leave aside an important possibility of improvement.

The nation could discover a rare earth mineral. A medical breakthrough could change the course of the lives of the children. A social habit could change.

In fact, while the last half millennium has been Something Else, and Past Performance Is No Guarantee of Future Returns, it does seem that future improvements are, if not most likely, at least a highly possible outcome that needs consideration.

(Been a while since a post has contained such a density of scissor topics.)


"they decided to burn “long-termism” into the collective consciousness, and they sure succeeded."

If the goal is "one-tenth the penetration of anti-racism" or some such, that at best remains unclear. It's worth dwelling on your identity as an EA + pre-orderer here and realizing that very few media campaigns have ever been targeted so carefully at "people like you." Someone on Facebook asked if anyone could remember a book getting more coverage and I think this response would hold up under investigation:

"Many biographies/autobiographies of powerful people; stuff by Malcolm Gladwell, Ta-Nehisi Coates, Freakonomics, The Secret… worth remembering that this is a rare coincidence where you sit impossibly central in the book's target demo. Like if you were a career ANC member, A Long Walk to Freedom would have been everywhere for you at one point"


Slavery is very much still with us. It is actually legal in several African countries, and de facto legal in several others, as well as in various middle eastern locations. That is to say nothing of the domestic bondage live-in servants are subjected to across much of south-east Asia, and covertly in various places across the U.S. and Europe, as well as sex trafficking. The world is a stubborn and complicated thing, and doesn't work as cleanly as thought experiments and 40,000-foot overviews would suggest.


One possibility to consider is radical value changes.

Past people were very different from us today, and future people will probably be different from present humans. They will look weird.

To prevent radical value changes in the future requires global coordination that we presently don't have.


The Eli Lifland post linked assumes 10% AI x-risk this century.


Informative article. Thank you. I'm gonna steal your paragraph "if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know."


> MacAskill must take Lifland’s side here. Even though long-termism and near-termism are often allied, he must think that there are some important questions where they disagree, questions simple enough that the average person might encounter them in their ordinary life.

I think there's a really simple argument for pushing longtermism that doesn't involve this at all - the default behavior of humanity is so very short-term that pushing in the direction of considering long-term issues is critical.

For example, AI risk. As I've argued before, many AI-risk skeptics have the view that we're decades away from AGI, so we don't need to worry, whereas many AI-safety researchers have the view that we might have as little as a few decades until AGI. Is 30 years "long-term"? Well, in the current view of countries, companies, and most people, it's unimaginably far away for planning. If MacAskill suggesting that we should care about the long-term future gets people to discuss AI-risk, and I think we'd all agree it has, then we're all better off for it.

Ditto seeing how little action climate change receives, for all the attention it gets. And the same for pandemic prevention. It's even worse for nuclear war prevention, or food supply security, which don't even get attention. And to be clear, all of these seem like they are obviously under-resourced with a discount rate of 2%, rather than MacAskill's suggested 0%. I'd argue this is true for the neglected issues even if we were discounting at 5%, where the 30-year future is only worth about a quarter as much as the present - though the case for economic reactions to climate change like imposing a tax of $500/ton CO2, which I think is probably justified using a more reasonable discount rate, is harmed.


Dwarkesh Patel has a series of pretty good posts related to unintuitive predictions of growth.

Aug 23, 2022·edited Aug 23, 2022

Everyone talks about the Repugnant Conclusion, but nobody talks about the Glorious Conclusion: instead of adding slightly-less-happy people and then equalizing, you can add slightly-more-happy people and then equalize. The second option is obviously better than the first. The obvious end point of this is infinite people who are infinitely happy. So that's the true moral end point of total utilitarianism.

Why does no one talk about this? Because no one believes that you can actually in the real world create people with arbitrarily high happiness. Whereas we actually know how to create people with low levels of happiness.

But then the Repugnant Conclusion depends on having at least some realistic assumptions about what's possible and what's not. Why not go all the way and add all the missing realism?

Creating unhappy people costs money. Money that could have been spent on making existing people happier. This is a tradeoff and it probably has an optimal point that is neither of the two extremes of having only one ultra-happy person or having a quadrillion suicidal people.


A couple of observations I have about the EA movement in general...

It seems to me that those people made rich by a nation or region's hegemon status feel strongly drawn to develop theories of "how the world should be" - how to make things better, or give a better world to our children.

I think it all looks good on the surface. And of course wealth gives us the free time to introspect upon these things. But underneath, I think there's a lot of colonialism in there. It's like the group psyche of the well-off middle-classes seeks to both expunge its own sense of guilt for how hegemon status was achieved, and to reinforce its level of cultural control through developing "improvements" that benefit other races, whilst still preserving hegemony.


Nice list of publications where WWOTF was featured! Let's not forget all the videos.

Kurzgesagt: https://youtu.be/W93XyXHI8Nw

Primer: https://youtu.be/r6sa_fWQB_4

Ali Abdaal: https://youtu.be/Zi5gD9Mh29A

Rational Animations: https://youtu.be/_uV3wP5z51U


It's interesting that towards the end of his career, Derek Parfit embraced Kantianism and tried to prove in his final book that it leads to the same conclusions as utilitarianism. It seems to me that the paradoxes in "Reasons and Persons" should point us in the opposite direction.

Kantians and utilitarians disagree on first-order issues but they start from similar metaethical premises. They think that most moral questions have an objectively correct answer, and that the answer is normally one that identifies some type of duty: either a duty to maximize aggregate well-being, or a duty to respect individual rights.

If you're an evolutionary naturalist you shouldn't believe those things. You should believe that our moral intuitions were shaped by a Darwinian process that maximized selective fitness. This implies that they weren't designed to produce truth-tracking beliefs about what's right or wrong, and it strongly suggests (I don't think it's a logical implication) that there *aren't* any objective truths about right and wrong.

Under those circumstances it's predictable that our intuitions will break down in radically hypothetical situations, like decisions about how many people should exist. Now that human beings have the power to make those decisions, we've got to reach some sort of conclusion. But it would be helpful to start by giving up on ethical objectivism.


This seems a good place to briefly vent about this slightly maddening topic and an atomistic tendency of thought that is in my opinion not helpful in moral reasoning.

For example, these thought experiments about 'neutral children' with 'neutral lives' and no costs or impacts are not getting to the root of any dilemma. Instead, they strip away everything that makes population dilemmas true dilemmas.

In actual cases, you have to look at the whole picture not just the principles. Is it better to have a million extra people? Maybe? Is it better to have them if it means razing x acres of rainforest to make room for them? Maybe not? It will rarely be simple. And it won't be simple even if there are 10^whatever of us, either. Will it be better then to expand into the last remaining spiral arm galaxy or will it be better to leave it as a cosmic nature park, or unplundered resource for our even longer term future? Who knows?

I also think a holistic approach exposes a lot of the unduly human-experience-centred thinking that is rife in this whole scene. I think many people care about wild species and even wild landscapes – not just their experience of them, but the existence of them period. Should we therefore endeavour to multiply every species as far as we can to prevent the possibility of their wipeout? No, because all things are trade-offs.

The world is too complicated for singly held principles.


The argument that we should aim to reduce unhappiness rather than maximise happiness has always been more persuasive to me. Happiness is something we can hardly define in real life, but people will certainly squeal when they are unhappy! Plus in negative utilitarianism you get to argue about whether blowing up the world is a good idea or not, which is a much more entertaining discussion than whether we should pack it full of people like sardines.


This stuff is silly and just highlights how the EA people don't understand the fundamental nature of *morality.* Morality doesn't scale - and that's by design. Morals are a set of rules for a particular group of people in a particular time and place. They aren't abstract mathematical rules that apply to everyone, everywhere, at all times and in all places.


Call me a contrarian if you want, but I don't think that I have a 1% chance of affecting the future. I have about a 0.000025% chance of affecting Los Angeles, and that's me being optimistic. Maybe someone like Xi Jinping, who can command the labor of billions, could pull it off; but even then, a whole 1% seems a bit too high, unless he wanted to just destroy the future with nukes. Wholesale destruction aside, the best that even the most powerful dictator can do is gently steer the future, and I doubt that his contribution could rise to a whole percentage point.


I think an extremely important reason to prioritise animal welfare is AI risk. A learning AI would likely base at least some of its learning on our moral intuitions. And we would be pretty close to animals for a super intelligent AI. How we treat animals might affect how AIs treat us!


I guess I’m the first Scottish person to read this, so let me formally object to MacAskill being described as an ‘English writer’, on behalf of our nation


>There are only 10^67 atoms in our lightcone

Meh, I wouldn't give up quite that fast. Sometimes I think about fun schemes to try if the electroweak vacuum turns out to be metastable (which last I heard, it probably is). And there's a chance more stuff might crop up once we crack quantum gravity.

Also, only a 1% chance of affecting the vast future, really? I suspect that's underselling it. Right now, everything from human extinction to a paradise populated by considerably more than a nonillion people looks possible to me, and which one we get probably depends very strongly on actions taken within this century.


>"But the future (hopefully) has more people than the present. MacAskill frames this as: if humanity stays at the same population, but exists for another 500 million years, the future will contain about 50,000,000,000,000,000 (50 quadrillion) people. For some reason he stops there, but we don’t have to: if humanity colonizes the whole Virgo Supercluster and lasts a billion years, there could be as many as 100,000,000,000,000,000,000,000,000,000,000 (100 nonillion) people."

The main threat we face may be the reverse:



Regarding section IV. and Counterfactual Mugging:

You assume that there is no competition for resources (not possible) and that the happiness of people is not an interaction (which I think is wrong). Happiness is a relative term, and even that is a 'resource'. If there is one person with happiness 80 and all of a sudden another appears with happiness 100, that 80 may go down to 60 just because the 100 appears. Or it may go up to 90 if they hook up. You are much happier being middle class in Africa surrounded by poorer people than being poor in the US surrounded by richer people.

What I want to say is that simple utility functions don't work except in academic papers or when paying students to switch coffee mugs with pens.


I would love to read somewhere a more detailed analysis of the "drowning child" thought experiment. Is it actually valid to extrapolate from one's moral obligations in unforeseen emergency scenarios, to policies for dealing with recurring, predictable, structural problems? If so, can we show that rigorously? If not, why not?


As I see it, at this point all long-termism debate is about resolving the philosophical issues caused by assuming utilitarianism. It's probably a worthwhile idea to explore this, but I don't understand why this is important in practice at all. Isn't the one main idea behind EA to use utilitarianism as much as possible, but avoid the repugnancies by responding to Pascal's muggings with "no thank you I'm good"? Practical long-termism looks morally consistent. I think it's barely different from EA-before-long-termism. x-risks are very important because we care about future people, but the future people are conditional not only on us surviving but also on growing as a civilization. The latter is pretty much EA\{x-risks}, so we're just left with finding the optimal resource assignment between survival and growth. I imagine survival has significantly diminishing returns past a certain amount of resources, and even astronomical future-people numbers won't make the expected outcome better.


The future potential people are really not my problem, nor anything I can solve. We definitely want to avoid nuclear war (which remains the biggest threat) but that’s in part because it affects us now. Back in 3000 bc they had their own worries and couldn’t be expected to also worry about the much richer people of the future. I get that the future might not be richer if technology slows but there’s little the average guy can do about that.


> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

But you cannot rate the worth of different people's lives on a numerical scale, so the whole thing is nonsense from start to finish.


I feel the Repugnant Conclusion is fine as a conclusion if it's seen as a dynamic system, not a static one. If there's a trillion people with the *potential* to build something magical in the future, that's probably better than 5 billion people at 100 utilons. It's the equivalent of (perhaps) the 17th/18th-century world but much, much bigger (which would help increase progress), compared to a much more stagnant world consisting of only the richer parts of the world in 2040.

Aug 23, 2022·edited Aug 23, 2022

I didn't preorder the book, mostly because I suspect I've already internalised everything it says, but also because I don't think the philosophical debate over how much we value the future is as interesting or relevant as the practical details.

Regardless of your moral system, if there are concrete examples of things we can do now to avert disaster or cause long-term benefit, I think people will be in favour of doing them - maybe it's a utilitarian obligation, maybe it's just because it seems like the kind of thing a wise and virtuous person would do. The value of future generations maybe factors in when considering trade-offs compared to focusing on present issues but it's a little ridiculous when all the longtermists end up being mostly concerned with things that are likely to happen soon and would be really bad for everyone currently alive.

"We should do more to address climate change", "we should carefully regulate Artificial Intelligence", and "we should invest in pandemic prevention" are all important ideas worthy of being debated in the present on their own merits (obviously not every idea that's suggested will actually help, or be worth the cost), and I think framing them as longtermist issue that require high-level utilitarianism to care about is actively harmful to the longtermist cause.

The best analogy I have is that the longtermists are in a cabin on a ship trying to convince the rest of the crew that the most important thing is to be concerned about people on the potentially vast number of future voyages, then concluding that the best thing we can do is not run into the rocks on the current voyage. The long-term argument feels a little redundant if we think there's a good chance of running aground very soon.


A lot of this “moral mugging” (great term btw) logic reminds me of a trick seasoned traders sometimes play against out-of-college hires. They offer them the following game, asking how much they’ll pay to play:

You flip a coin. If it’s heads you win $2. If it’s tails the game ends.

If you won “round 1” we flip again in “round 2.” If it’s heads, you win $4. If it’s tails, the game ends and you collect $2. In round 3, it’s $8 or you collect $4. Continue until you flip tails.

The expected value of this game is infinite: 1/2 * 2 + 1/4 * 4 + 1/8 * 8 …

Junior traders thus agree to offer the senior ones large sums to play and… always lose. Because there isn’t infinite money (certainly the senior trader doesn’t have it) and if you max out the payment at basically any number the game’s “true” expected value is incredibly low.
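
A quick sketch (my own, in Python - purely illustrative) of the rules as described above: each possible run of heads contributes a fixed $0.50 to the expected value, so the uncapped sum diverges, while any realistic cap on the banker's wealth collapses the game to pocket change.

```python
def capped_ev(m):
    """Expected value of the doubling game when payouts are capped at $2**m.

    A run of exactly k heads then a tails (k >= 1) pays 2**k and happens
    with probability 2**-(k+1); any run of m or more heads just pays the cap.
    """
    ev = sum((2 ** k) * 2 ** -(k + 1) for k in range(1, m))  # $0.50 per round
    ev += (2 ** m) * 2 ** -m  # all runs of >= m heads pay the capped 2**m
    return ev

# Uncapped, the sum grows without bound; capped at 2**30 (~$1 billion),
# the whole game is worth capped_ev(30) == $15.50.
```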

The connection here is that strict “moral models” are deeply brittle, relying on narrow and unrealistic assumptions about the world while often ignoring the significance of uncertainty. Following them as actual guides to behavior, as opposed to idle thought experiments, always strikes me as ill-advised and, frankly, often creepy, as such models have a tendency to be usable to justify just about anything…


If someone wants me to accept some kind of variation on the repugnant conclusion, all they have to do is go out and find me one person with happiness 50 and fifty people with happiness 1 so I can check them out for myself.

This is, of course, impossible. People blithely throw numbers around as if they mean something, but it's not possible to meaningfully define a scale, let alone measure people against it. And even if you manage to dream up a numerical scale it doesn't mean you can start applying mathematical operations like sums or averages to them; it's as meaningless as taking the dot product of salami and Cleveland.

The bizarre thing is that everybody fully admits that obviously you _can't_ go around actually assigning numbers to these things, but then they immediately forget this fact and go back to making arguments that rely on them.

You can't even meaningfully define the zero point of your scale -- the point at which life is _just_ worth living. And if you can't meaningfully define that, then the whole thing blows apart, because imagine you made a small error and accidentally created a hundred trillion people with happiness of -0.01 instead of creating a hundred trillion people of happiness +0.01.

tldr: ethics based on mathematically meaningless combinations of made-up numbers is stupid and everyone should stop doing it.


My problem with the Repugnant Conclusion is that its conclusions depend on the worlds it describes being actually possible. There might be certain universal rules that govern all complex systems, including social systems. Although we don't currently know what these could be, I believe they are likely to exist and that the world described in the RC would be thereby forbidden. If this is the case the RC argument is premised on an impossibility, equivalent to starting a mathematical proof with a squared circle, and hence its conclusions have no merit in our reality.


Per the review, the book seems to take as a given that poorer == less happy, but the country comparison data I've seen suggests that's not true, or at best a wild oversimplification. Does the book flesh out this argument?

In the absence of this, the repugnant conclusion's logic seems difficult to map to reality.

I continue to like utilitarianism, philosophically, but no definition of "quals" maps to the rich diversity of preference and experience in reality.


Scott — I'm not sure male pattern baldness should count as a "medical condition". Is having eyes of different colours a condition? Is red hair a condition? Baldness is just a physical trait. Many people find baldness attractive (attractive enough that shaving your hair even if you're not bald is a thing). Any badness relating to being bald is socially-constructed and contingent, and I don't think it should be talked about in at all the same category as lung malformations.


Avoiding counterfactual mugging has much the flavor of Luria's Siberian peasants and their refusal to engage in hypothetical reasoning.


The best argument I ever read against the Repugnant Conclusion is the idea of a Parliament of Unborn Souls. It goes like this:

If we imagine the echoes of all possible future humans voting among themselves on who gets to be born, vs. lowering their odds in favour of having a better life if they *do* get born — well, the Repugnant Conclusion would be laughed out of the room. The sum total of all possible future people overwhelmingly exceeds the sum total of all people who could possibly be born in the physical world (taking into account all possible sperm/egg combinations). Voting for the Repugnant Conclusion wouldn't meaningfully increase anybody's chances of being born, while it would drastically lower the expected value of life if they do get lucky.

I guess this is an answer to a version of the R.C. that phrases itself in terms of "potential people have moral value and should get to exist", rather than "bringing people into existence has moral value". But the latter as distinct from the former seems, for lack of a better term, bonkers. (I guess this is just Exhibit ∞ in me being a preferentialist and continuing to feel deep down that pure-utilitarians must be weird moon mutants.)


Just to nitpick: Stalin did not say that "1 million deaths: just a statistic" thing. An unnamed Frenchman said it (allegedly, about 100,000 war deaths) and was quoted by the German leftist/satirical writer Kurt Tucholsky in 1925. Statistics were important to Stalin - when statisticians showed him the population numbers for Ukraine et al. after the famine, he had those numbers classified. And the statisticians executed.


Sorry if this is well trod territory but I'm no philosopher: Doesn't that Parfit thought experiment about the survival of humanity imply a lot about what his views should be on contraception? If the non-existence of an unborn person is morally equivalent to (or, you know, worth any significant percentage of) murdering a living person, then does he consider abortion murder?


another example of intransitive preferences: currently people have to work unpleasant jobs and any new automation is a great change. But if you keep adding automations, eventually humans don't have any non-artificial struggle at all and at that point it seems kinda pointless to me.


Before reading the rest of this, I want to register this bit:

> Is it morally good to add five billion more people with slightly less constant excruciating suffering (happiness -90) to hell? No, this is obviously bad,

My intuition straightforwardly disagreed with this on first read! It is a good thing to add five billion more people with slightly less constant excruciating suffering to hell, conditional on hell being the universe you start with. It is not a good thing to add them to non-hell, for instance such as by adding them to the world we currently live in.


Long-termism (or even mid-termism) has one huge drawback: the advice giver (the philosopher, activist, or more importantly politician) will not be there when the results of the advice can be judged.

Like all future trade-offs (suffer now for a brighter future), it is inherently scam-like ('A Bird in the Hand is Worth Two in the Bush' is not meaningless). It's not necessarily a scam, but it needs to be minutely examined, even for short-term advice: Is the adviser accountable in some way for the results? And more importantly, is the adviser in a special position where he would profit from the proposal sooner or more, contrary to the average guy who is asked to suffer near term in exchange for longer-term benefit? If so - if the adviser does not suffer at least as much in the short term as an average advisee - it is a scam.

I did not always think like that, but these last decades have been a great lesson; the whole western world is soooooo fond of this particular scam, it's everywhere. I guess it taps into a deeply embedded catholic guilt + futurism.


I don't think the Charcoal thing is a good argument. We can only get about 1 watt of energy for industry for each square meter devoted to forest land. When you have a population constrained by the amount of agricultural land, and you also need wood for tools, houses, and heating in addition to industry, then being limited to the energy you can get from charcoal for industry is going to essentially prohibit an industrial revolution.


What does “discovering moral truths” mean? Is the author a moral realist?


Ugh. This was not a good "book review".

I've come to the conclusion that all of the book reviews so far are pretty bad because they are all too long. After 1200 (maybe 800) words, the payoff should be increasingly better the longer it gets.

This might be reflective of the entire blogging endeavor; there are no newspaper editors telling writers to shorten and tighten it up because space is limited. As a result, the quality is just not high and we'd be better off finding already published reviews.

As I vaguely recall, the poet John Ciardi remarked: that which is written without hard work is usually read without pleasure.

I suggest that distillation is hard work that makes writing better. Could the book reviewers start doing some hard work? Redo all of these and give us your best 1000 words.

D+, revise and resubmit.


Why do we assume that the morally neutral level of life is objectively defined? I think standards change and depend upon all sorts of things, including the average quality of life across the population. So suppose we consider the blissful planet, with 5 billion people whose happiness is distributed normally with an average of 100 and variance of 1. Then by their standards, people with happiness of 80 would be way down from what they consider normal (20 standard deviations below the average they see around them!), and they would not even consider adding those people to the population. So the trick here is letting us, from our present time, decide whether to add those 80-happiness people. But it is like asking a slave on an ancient subsistence farm to decide whether to add some poor people to the modern population. By the slave's standards, having access to cheap fast food and a limited work week would make them really happy, and they would obviously be happy for more people like this to exist, but in modern realities those added people might feel really unhappy, have loads of stress, and keep themselves from suicide only by moral convictions. Similarly, it is the people of heaven who should decide which people should be added to heaven, not us here with our abysmally low standards (compared to theirs).


I use a discount rate in considering the future. For financial considerations, a discount rate is composed of (1) a risk free return plus (2) a margin for uncertainty of return.

For purposes of considering the future of humanity, I agree that only (2) makes sense. But (2) can be quite small as a % and still make the present value of affecting the distant future quite small. Not because future people are empirically less important than current people, but because of the uncertainty, or contingency, of how we can affect the far off future from investments today.
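
As a toy illustration of this point (my own numbers, purely a sketch): even a very small uncertainty margin compounds into a near-total discount over the timescales longtermists care about.

```python
def present_value(future_value, annual_rate, years):
    """Discount a future benefit back to today at a constant annual rate."""
    return future_value / (1 + annual_rate) ** years

# At a mere 0.5%/year uncertainty margin, $1 of benefit 1,000 years out is
# worth less than a cent today, and 10,000 years out it is worth
# effectively nothing.
```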

This line of reasoning leads me to want to focus my own "altruism" on issues affecting the present.

Aug 23, 2022·edited Aug 23, 2022

I want to preach the benefits of the stable-state analysis heuristic, which was called the Kantian categorical imperative in its previous life:

What would society look like if an action preferred by your ethical theory were a universal societal rule?

The other version put forward by Kant, "treat a person always as an end, not as a means to an end", is also useful, though I am less certain of Kant's claim that it is essentially derived from the first principle.

I find it a much more productive way to think about ethics. Now instead of just thinking "imagine a world with 5 billion people in Hell; what if we could magically add 5 billion more people", you have to consider the actions to get from world A to world B.

The various repugnant conclusions become much more implausible. The basic version suggests that everyone should make as many kids as possible, because more utils experienced is always better. I don't think that society would be workable, if for no other reason than that there are limits to carrying capacity and the society would eventually collapse. It would also make a society where many other moral imperatives become difficult to follow ("do not knowingly create situations where famines are likely", for instance).

And finally, such calculus also fails by the second criterion, as it views everyone alive at any given time point T = "currently" more as incubators for the next generations of utils (their own experienced utils become overwhelmed by all the potential utils experienced by N >> (large number) future generations).

Naturally the imperatives cannot be exhaustively calculated, but that is just a sign that ethics is an imperfect tool for human life, not that human life is subservient to a method. Hopefully, the rules can be iteratively refined ("get the British government to buy all slaves free if it is possible"). And I think "imperative calculus" would find it good / necessary to help a drowning child / suffering third-world person as long as the method of helping doesn't become dystopian. (Dystopian utilitarianism would allow for "if you can't swim, throw a fat person who can under a tramcar until someone saves a drowning child". I think one of the salient imperatives is: as many people as possible should learn to swim, help people, and call others to help.)

Aug 23, 2022·edited Aug 23, 2022

The glass shards example seems to me not to rely on utilitarian or deontological reasoning. It hinges on the observer's emotive reaction, and the conclusion is that one should never do anything that may have consequences we would feel social regret about. The reason to clean up the glass is because we are thinking ethically, and not to clean it will leave us with doubt and a sense of guilt, whether a child comes along or not. That fits deontology (conforms to our sense of duty), utility (increases our happiness with certainty), intuitionism (just seems wrong to move on without doing it), and emotivism (makes us feel better ethically). Utilitarian reasoning would need to rest not just on a chances-of-benefit calculation, but on a cost-benefit calculation: what is the long-term cost of slowing one's journey to clean the glass, delaying and ultimately precluding some "better" use of the time?

The argument about technology seems to assume that wealth and happiness are linked in some quantitative fashion.

The argument concerning wiping out almost all people vs. wiping out all people (and precluding the birth of further future people) seems based on treating potential people as people, not future people as people. If we owe a debt to other people by virtue of being social beings, we should consider part of that debt due to people who are actual, except not yet born. But people who will never be born are not future people. To treat them as equally due a debt is a step even beyond absolutist pro-lifers, who consider the person a human being from the moment of physical conception--now these non-people are due full ethical standing from the point of intellectual conception.

The argument about immigration and culture change seems to me to make no sense. There is no reason to think that changing culture leads to less happiness, or is a negative good on other grounds, even if those who resist are called names. The fact that it causes temporary social discomfort doesn't mean it is not superseded by long-term net social good (which is how our national narrative treats the immigrant wave of the late-19th and early 20th century -- whether it's "accurate" or not, it is certainly a plausible interpretation).


Is the repugnant conclusion just the paradox of the heap? Is there a version with a less vague predicate than happiness?


The Sophists proved via reductio ad absurdum that philosophy is useless, mere rhetorical word tricks, to obfuscate the truth, which is why they were vilified by Plato and his ilk. Don’t feel bad about disagreeing with philosophers. Whatever was of value was extracted in the form of mathematics or science a long time ago.

As for abolitionism, Bartolomé de las Casas (1484-1566) was way ahead of any Quaker or Quakerism itself.


> I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

As soon as I read this, I had to jump down here to remind everyone about Timeless Decision Theory and Newcomb's problem: https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality


I believe in The Copenhagen Interpretation of Ethics which says that when you interact with a problem in any way, you can be blamed for it.

That's why I feel responsible to prevent utilitarians inflicting harm on children in my presence, but I'm indifferent about whatever happens 100 years from now.

Après moi, le déluge.


You link to an article on counterfactual mugging, but what you describe here is not counterfactual mugging at all. Counterfactual mugging is when someone flips a coin, and on heads rewards you if you would have paid a penalty on tails.


*>tfw forget a book review is written by Scott, not part of Book Review Contest

*I can't decide if it would be incredible or insufferable if Scott hired a publicist. So-And-So, potential author of hypothetical books, currently beta-testing in blog format.

*Octopus factory farming: Sad! Doesn't even taste good compared to (brainless idiot pest species) squid. And that's without factoring in the potential sentience, which really makes my stomach churn on the few occasions I do eat it begrudgingly...

*The Broken Bottle Hypothetical is weird...I feel happy and near-scrupulosity-level-compelled to clean up my own messes. But I harbour a deep resentment for cleaning up the messes of others. It just seems to go against every model of behavioral incentives I have...at some point, "leading by example" becomes "sucker doing others' dirtywork". (Besides that - who *wouldn't* pick up their own litter when out in the wilds? I've never understood that mindset...one doesn't have to be a hippie to have a little basic respect for nature. Also, Real Campers Use Metal, among other things to avoid this exact scenario.)

Like I get the direction the thought experiment is intended to go...but many "broken bottle" behaviours have intrinsic benefits in the here-and-now. Cooking with a gas stove or driving with a gas car are pretty high-utility for the user, even if deleterious on the future. What's the NPV of not picking up broken glass? (Yes, probably making too much hay out of nitpicking a specific example.)


I'm glad to see Scott share this, even though many in the EA community are uncomfortable criticizing EA in public (I myself am a victim of this - I omitted to rate WWOTF on Goodreads for fear of harming the book's reach).

Simply put, WWOTF is philosophically weak. This would be understandable if the book were aimed at influencing the general public, but for the reasons Scott mentions in this post, WWOTF doesn't offer any actionable takeaways different from default EA positions... and certainly won't be appealing to the general public.

The problem with all this is that WWOTF's public relations campaign is enormously costly. I don't mean all the money spent on promoting the book, but rather, WWOTF is eating all the positive reputational capital EA accumulated over the last decade.

This was it. This was EA's coming-out party. There will not be another positive PR campaign like this.

The problem with this is that the older conception of EA is something most public intellectuals/news readers think very highly of.

Unfortunately, the version of EA that MacAskill puts forward is perceived as noxious to most people (see this review for context: https://jabberwocking.com/ea/ - there are tons like it).

It seems like WWOTF's release and promotion doesn't accomplish anything helpful while causing meaningful reputational harm.

Aug 23, 2022·edited Aug 23, 2022

>If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity,

The philosophers have gotten ahead of you on that one. Surprised you haven't already read it, actually.


It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence").

This proof probably has something to do with why those 29 philosophers said the Repugnant Conclusion shouldn't be grounds to disqualify a moral accounting - it is known that no coherent system of utilitarian ethics avoids all unintuitive results, and the RC is one of the more palatable candidates (this is where the "it's not actually as bad as it looks, because by definition low positive welfare is still a life actually worth living, and also in reality people of 0.001 welfare eat more than 1/1000 as much as people of 1 welfare so the result of applying RC-logic in the real world isn't infinitesimal individual welfare" arguments come in).

(Also, the most obvious eye-pecking of "making kids that are below average is wrong" is "if everyone follows this, the human race goes extinct, as for any non-empty population of real people there will be someone below average who shouldn't have been born". You also get the Sadistic Conclusion, because you assigned a non-infinitesimal negative value to creating people with positive welfare.)

Aug 23, 2022·edited Aug 23, 2022

[disclaimer: I'm dumb and I don't really know anything]

Thanks for this review, a nice summary on some of the core points of EA and long termism.

One question troubles me as for the "the most important thing ever is your choice of career. You should aim to do the maximum good, preferably earning to give / become an influential leader to change policy / becoming a top AI specialist to solve alignment / etc."

These guidelines are explicitly aimed at talented people. I remember 80kh being very open about this in the past; it seems that somewhere along the line they've altered their front page material on it. But obviously these points mostly concern talented people. Most people will not become scientists, high level engineers, leaders or influential activists.

Where does this leave normal people? What should most people do with their time? "Well duh, that which they can best do to advance the greatest good ever." Ok, but what is that for, say, a normie who can learn a profession, but whose profession is relatively boring and doesn't have anything to do with any of the aforementioned noble goals? What is the greatest utility for a person who is ill-equipped to cognitively even grasp long-termism properly? Or for a person who does get the point, but who has no business becoming [an influential effective altruist ideal]? And so on.

Lacking an answer (granted, I haven't spent a very long time looking for one), for the time being the advice to look for the most insanely profitable, successful, extremely bestest way to increase the number of people alive to [a very high number] seems to me lopsided in favor of very talented people, while simply ignoring most people everywhere. In making EA go mainstream, this might matter - maybe?


Have we considered that there is a middle ground between "future people matter as much as current people" and "future people don't matter at all"? If you want numbers you can use a function that discounts the value the further in the future it is, just like we do for money or simulations, to account for uncertainty.

I imagine people would argue over what the right discount function should be, but this seems better than the alternative. It also lets us factor in the extent to which we are in a better position to find solutions for our near term problems than for far-future problems.


I am not sure if I understood the Repugnant Conclusion thing correctly. Is the setting that we are given two alternative universes: one with a small population of very happy individuals, and one with a very large population of not-so-happy individuals? And is the issue that most people would rather ACTUALLY LIVE in the first universe, because then they would be happier themselves?

I can also imagine something about scope neglect, I guess. A large population may be very valuable, and each of those people is unique and special, has their own friends and families, hopes, dreams, etc. But intuitively it sure feels like the difference between 1,000,000 and 10,000,000 people isn't so big; after all, it's more people than I could ever imagine interacting with.


I notice that as soon as we start treating future people as already existing, calculations become messy. Be it anthropic reasoning, which assumes that we are randomly selected from all humans that have ever lived or will ever live, or moral reasoning, which passes the buck of utility to future generations.

I can clearly point to where the error is in such anthropic reasoning. I'm less certain what's wrong with total utilitarianism. There should be some discounting based on the probability of future humans existing, but it's not just that. I guess it just doesn't fit my moral intuition?

Imagine a situation where I know that all my descendants for the next n generations will have terrible lives. Let's say there is some problem which can't be fixed for the next many years. But I also know that at some moment humanity will fix this problem, and thus starting from generation n+1, my descendants will have happy lives. Am I thus morally obliged to create as many descendants as possible? Are my descendants of generation k facing an even harder situation: if they decide not to breed, are they retroactively making me and their relatives from the previous k-1 generations terrible people? Eventually, whatever disutility was accumulated over the n generations of suffering would be outweighed by the utility of generation n+1 and further generations. But what's the point? Why not just let people without this problem reproduce and have happiness in all the generations to come?

Aug 23, 2022·edited Aug 23, 2022

Am I the only one who thinks B is clearly better than A with regards to nuclear war? More or less the same technological development with 10% of the population, so great potential for growth and fewer zero-sum games?


B->C seems like the more sensible place to get off the repugnant conclusion train than A->B, since that’s the step that actually involves making (some) people worse off.

In your immigration analogy, that corresponds to letting immigrants in but not changing society to accommodate them, which seems much better than not letting immigrants in at all.


I have a couple of thoughts and I'm not sure which is more likely to start a fight.

1. A sufficiently creative philosopher can construct an ironclad argument for pretty much any conclusion, and which of them you choose is down to your personal aesthetic preferences.

2. The reason abolition of slavery came so late was that for most of human history, being a slave wasn't that bad, relative to being pretty much any other person. Industrialization turned slavery into a practice too reprehensible to survive. Even Aristotle would have looked at the Antebellum South and said hey, that's kinda fucked.


The issue I always have with ultralarge-potential-future utilitarian arguments is that the Carter Catastrophe argument can be made the same way from the same premises, and that that argument says that the probability of this ultralarge future is proportionately ultrasmall.

Imagine two black boxes (and this will sound very familiar to anyone who has read *Manifold: Time*). Put one red marble in both Box A and Box B. Then, put nine black marbles in Box A and nine hundred ninety-nine black marbles in Box B. Then, shuffle the boxes around so that you don't know which is which, pick a box, and start drawing out marbles at random. And then suppose that the third marble you get is the red marble, after two black ones.

If you were asked, with that information and nothing else, whether the box in front of you was Box A or Box B, you'd probably say 'Box A'. Sure, it's possible to pull the red marble out from 999 black ones after just three tries. It *could* happen. But it's a lot less likely than pulling it out from a box with just 9 black marbles.

The biggest projected future mentioned in this book is the one where humanity colonizes the entire Virgo Cluster, and has a total population of 100 nonillion over the course of its entire history. By comparison, roughly 100 billion human beings have ever lived. If the Virgo Cluster future is in fact our actual future, then only 1 thousand billion billionth of all the humans across history have been born yet. But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against. The larger the proposed future, the earlier in its history we'd have to be, and the less likely we would declare that a priori.

If every human who ever lived or ever will live said "I am not in the first 0.01% of humans to be born", 99.99% of them would be right. If we're going by Bayesian reasoning, that's an awfully strong prior to overcome.
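
The marble update is easy to check numerically; here is a small sketch (my own, using the 10-marble and 1,000-marble boxes described above):

```python
from fractions import Fraction

def posterior_box_a(n_a=10, n_b=1000, draw_position=3):
    """Posterior probability that we hold Box A, given that the single red
    marble appeared on draw number `draw_position` (50/50 prior over boxes).

    With one red marble among n, the red is equally likely to occupy any
    draw position, so the likelihood is 1/n whenever draw_position <= n.
    """
    like_a = Fraction(1, n_a) if draw_position <= n_a else Fraction(0)
    like_b = Fraction(1, n_b) if draw_position <= n_b else Fraction(0)
    return like_a / (like_a + like_b)

# posterior_box_a() == Fraction(100, 101): about 99% odds on the small box,
# which is the structure of the Carter-style update against huge futures.
```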


As minor as quibbles can be, but:

"each new person is happy to exist but doesn’t make anyone else worse off."

Is there a reason this is a "but" instead of an "and"? As if people being happy usually make others worse off?


I've linked to Huemer's In Defence of Repugnance in the comments to another post, but it's so on-topic it makes sense to do so here:


As I noted there, Huemer is not a utilitarian but instead a follower of Thomas Reid's philosophy of "common sense".

There really doesn't seem to be any reason to believe it's "neutral to slightly bad to create new people whose lives are positive but below average", which would cause the birth of a child to become bad if some utility monster became extremely happy.


A post wherein Scott outs himself as one of those people who choose only box B in Newcomb's paradox.

Aug 23, 2022·edited Aug 23, 2022

I haven't seriously struggled with repugnant conclusion style arguments before (Mostly I've decided to ignore them to avoid the aforementioned mugging effect), so what I'm about to write is probably old hat. Still, I'd like to hear people's thoughts.

What if you have the following options:

A) 5 billion people, today, at 100% happiness, then the universe ends

B) 10 billion people, today, at 95% happiness, then the universe ends

C) 5 billion people, today, at 97% happiness followed by another 5 billion people at 97% happiness 50 years later, then the universe ends

I think most people would agree that option C is better than option B. If we're thinking in bizarre, long-termist terms anyway, there is likely some sustainable equilibrium level of population such that you can generate 100% happiness for an arbitrary number of person-years. You just might have to have fewer people and wait more years. So let's... do that, instead of mugging ourselves into a Malthusian hellscape.

If you object that the lifetime of the universe is finite, and so the number of person-years in the above scenario is not arbitrarily high, I would respond with something along the lines of "Yeah, sure, but if humanity survives until the heat death of the universe, I'm pretty sure the people alive at that time won't be bummed out that we didn't maximize humanity's total utility. They won't be cursing their ancestors for not having more children. It's not like they'd decide that maximizing total utility was the meaning of life and we fucked it up all along."
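As a sketch, the total-utility comparison behind this (treating the happiness percentages as per-person utilities; purely illustrative):

```python
# Total utility (people x happiness) for the three scenarios above.
A = 5e9 * 1.00                  # option A: 5 billion at 100%
B = 10e9 * 0.95                 # option B: 10 billion at 95%
C = 5e9 * 0.97 + 5e9 * 0.97    # option C: two cohorts of 5 billion at 97%

print(A, B, C)                  # C (9.7e9) > B (9.5e9) > A (5e9) on the same math
```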


- There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.

Warning: rambling.

Most of the value in the modern economy does not come from extracting resources, but rather from turning those resources into more valuable things. The raw materials for an iPhone are worth ~$1, whereas the product has 1000x that value. There is probably a limit to how much value we can get out of a single atom, but I think we can still get a better multiplier than 1000x!

Aug 23, 2022·edited Aug 23, 2022

Sorry for the nit-picking, but the below doesn't follow from the link:

"Octopi seem unusually smart and thoughtful for animals, some people have just barely started factory farming them in horrible painful ways, and probably there aren’t enough entrenched interests here to resist an effort to stop this."

Link just says "there are no released standards for how the octopuses are going to be kept and raised, nor for how they will be slaughtered."

Now maybe the Spanish octopus farmers will do horrible, Snidely Whiplash moustache-twirling, evil octopus farming. Or maybe they will be constrained under EU animal welfare standards. It's no skin off my nose either way, because I've never eaten octopus and have no intention of ever doing so. But this is what is annoying: trying to force us to accept the conclusion that *of course* it will be 'horrible painful ways' because eating meat (do octopi count as fish?) is evil and wicked and immoral, and factory farming is evil and wicked and immoral, and fish farming is factory farming hence is evil and wicked and immoral.

I don't know how smart octopi are, they seem to be smart in some way, and probably smarter than a cow (a low bar). But here's the thing: I am not yet convinced eating octopi is morally evil. And I know dang well that it's not just the octopus farming this campaign would like to stop, it's fishing for wild octopus and eating them at all.

Let's wait and see if the wicked, bull-fighting, blood-thirsty Spaniards *are* going to torture sweet, cute, innocent, smart, octopi to death before we start calling for the war crimes tribunal, hmmm?

EDIT: And if the "scientists and conservationists" are so outraged about the intelligent octopi, then surely Ms. Tonkins should quit her job at Bristol Aquarium, rather than being complicit in the enslavement of these intelligent and sentient beings? Did any of the octopi consent to being captured and imprisoned in tanks for humans to gawk at? Liberate all those incarcerated octopi into the wild and take the beam out of your own eye first!

Also, how moral are octopi themselves, if experts fear that with more than one octopus in a tank "they could start to eat each other"? It seems the greatest threat to an octopus is another octopus, not a human.


If you support the notion of impartiality and accept the concept of intelligence explosions, doesn't this take the oomph out of human-centric long-termism?

Aren't there almost certainly other life forms in the universe that will experience intelligence explosions, making whatever happens in our story irrelevant?

Who cares if we can't interact with the regions of space they are located in, as long as they are experiencing lots of positive utils?


> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”. If you want to complain, you can find me in World A, along with my 4,999,999,999 blissfully happy friends.

The philosopher Massimo Pigliucci on the Rationally Speaking Podcast did something like this once when he was confronted with the vegan troll bit about bestiality. You're against bestiality right? Because it's bad to sexually assault animals? Well, if you think that's bad, then you must definitely be against eating them.

He retorted that he just didn't feel it necessary to be morally consistent. 🤯


> Is this just Pascalian reasoning, where you name a prize so big that it overwhelms any potential discussion of how likely it is that you can really get the prize? MacAskill carefully avoids doing this explicitly, so much so that he (unconvincingly) denies being a utilitarian at all. Is he doing it implicitly? I think he would make an argument something like Gregory Lewis’ Most Small Probabilities Aren’t Pascalian. This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them. It’s not automatically Pascalian reasoning every time you’re dealing with a high-stakes situation!

Whenever I hear things like "What We Owe The Future" and "decillions of future humans", I think "ah, the future is a utility monster that we mere 7 billion humans should sacrifice everything to".

The utility monster is a critique of utilitarianism.

Suppose everyone gets about one unit of pleasure from resource unit X. But there exists a person who gets ten billion units of pleasure from X. As a utilitarian, you should give everything to that person, because it would optimize global pleasure.

In this case, the future is the utility monster because there are so many potential humans to pleasure with existence. Spending any resources on ourselves instead of the future is squandering them. We are the 1%. But actually we are the 0.000001%


What concerns me about this concept, at least as it has been presented by my peers who are into long-termism, is the accuracy of their predictions. Your actions now have some moral consequence down the line. My question is: how accurate are your predictions, spanning long into the future, that your very rational utilitarian decisions will actually lead to positive outcomes and not negative ones? We are pretty darn bad at even near-term predictions (see Michael Huemer on the experts-and-predictions problem), so making an explicit commitment to live your life in some particular way because you are confident about how your life will impact humanity and the universe eons into the future just seems silly. In fact, it seems worse than silly: it seems like a load of hubris that is just as likely to be harmful down the line as good. We will all be dead, and no one can call you on it when the consequences occur; conversely, we are all alive now and have to hear how very moral and virtuous long-termism is today from its practitioners.


This is your regular reminder that nuclear weapons are not an existential risk and never have been, nuclear winter is mostly made up, and we have the technology to build missile defense systems that would make the results of a nuclear war much less bad (although still bad enough that people will want to avoid having one).




Just a few paragraphs in, and I'm thinking to myself "Thank you for reading and reviewing this book, so now I need not waste my time on it." That, in itself, raises this review several positions in the ranking of reviews so far!


I still think that naively adding hedons - or utils - or whatever you call them nowadays is not the right approach.

Thought experiment: let's say that I am pretty happy, worth 80 "happiness". Now I participate in an experiment where I'm put to sleep, get cloned n times, and my clones and I are put in identical rooms where each of us can enjoy a book of our choosing after waking up. Under classic utilitarianism, the experiment has created 80*n "happiness". Which sounds wrong to me: as long as my clones and I are identical, no happiness has really been created; identical clones have no additional moral value. Generalizing this, additions of happiness should be discounted for similarity with other existing individuals.


I still don't find the repugnant conclusion repugnant, or even surprising. Either a certain level of existence is better than nonexistence, or it isn't. If it's better, let's get more existence!

I think a lot of people have two thresholds in mind: there's the level of existence at which point it's worth creating a new life, and there's a separate, lower one at which point it's worth ending an existing life. But then it's just treating existing lives differently from potential ones.

The biggest objection, to me, is one I never see people raise, and that's the obligation to have more kids. I only have one, and might have a second, but I easily could have 4 by now, and could probably support much more than that at a reasonably high standard of living, so if I really buy the repugnant conclusion, I should be doing that. But I don't, so update your priors accordingly.


Thanks so much for highlighting my interview of him Scott!


With regard to the Repugnant Conclusion, I think that one way out is that the weighting of factors determining the utility is somewhat arbitrary, so one can move the zero line to what one considers an acceptable standard of living.

Suppose I assign -1000 for the lack of access to any of: clean water, adequate food or housing, education, recreation, nature, potential for fulfillment, etc. Now adding people with about zero net utils does not seem too bad. In fact, not adding them just to preserve a few utils for the preexisting population would feel wrong, like hypothetical billionaires (or Asimov's Solarians) preferring to keep giant estates which could otherwise be suburban districts providing decent living for millions.

What life is considered worth living is very dependent on the society. I gather that ancient Mesopotamians probably did not consider either freedom of speech or antibiotics essential, given that they had (to my knowledge) neither concept. For most people living in the Middle Ages, Famine, War and Pestilence were immutable facts of life alongside Death. From a modern Western point of view, at least two of the horsemen are clearly unacceptable and we work hard to fight the third one. EY's Super Happy People would consider a life containing any involuntary suffering to be morally abhorrent. Perhaps after we fix death, only supervillains would even contemplate creating a sentient being doomed to die.

Of course, this also seems to contradict "Can we solve this by saying that it’s not morally good to create new happy people unless their lives are above a certain quality threshold? No."


Also, I get a strong vibe of "Arguments? You can prove anything with arguments." ( https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/ ) here from Scott with regard to philosophical muggings.


Finally, in long-term thinking, extinction is hardly the worst case. The worst case would be that, due to value misalignment, some future being turns the inner part of the light cone going from Sol, 2022 CE into sufferonium: turning the reachable universe into sentient beings which have negative utility according to our values.


“As far as anyone can tell, the first abolitionist was Benjamin Lay (1682 - 1759), a hunchbacked Quaker dwarf who lived in a cave. He convinced some of his fellow Quakers...”

Now this is just not true. Slavery was largely abolished in mediaeval Europe. And often by Catholics. And the invaders of Britain, the Normans, ended it there. However the Normans are looked at with hostility, as is Catholicism in Anglo historiography


There are three things that grate on me in this review (or maybe in the book as well; I have yet to read the book). All three have to do with exponentials.

1. The hockey stick chart with world economic growth does not prove that we live in an exceptional time. Indeed, if you take a chart of a simple exponential function y=exp(A*x) between 0 and T, then for any T you can find a value of A such that the chart looks just like that. And yet there is nothing special about one value of T versus another.

2. I do not see why economic growth is limited by the number of atoms in the universe. It looks to me similar to thinking in 1800 that economic growth is limited by the number of horses. We are already well past the time when most economic value was generated by tons of steel and megawatts of electricity. Most (90%) of the book value in the S&P 500 is already intangible, i.e. not coming from any physical objects but from abstract things such as ideas and knowledge. I do not see why the quantity of ideas, or their value relative to other ideas, would be limited by the number of atoms in the universe. If anything, I could see an argument for a growth limit at the number of sets consisting of such atoms, which is much larger (it is 2^[number of atoms]) and, at our paltry rates of economic growth, is large enough to last us until the heat death of the universe.

3. All these pictures with figures of future people are relevant only in the absence of discounting, aka the time value of utility. I do not know if the book ignores this issue, but you do not mention it at all in the review. Any calculation comparing payoffs at different times has to make these payoffs somehow commensurate. That's a pretty basic feature of any financial analysis, and I am not sure why it would be absent from utility analysis. When we are comparing a benefit of $10 in 10 years' time to a current cost of $1, it makes no sense to simply take the difference $10-$1. We should multiply the benefit by at least the inflation discount factor exp(-[inflation rate]*10). If we have an option to invest $1 today in some stocks, we should additionally multiply by exp(-[real equity growth rate]*10). When our ability to predict the future results of our actions decays with time horizon, we should add another exponential factor. This kind of discounting removes a lot of paradoxes and also kills a lot of long-termist conclusions. The argument gets a bit fuzzier if we deal with utilities and not with actual money, but if the annual increase in uncertainty is higher than the annual population growth rate, then the utility of all future generations is actually finite even for an infinite number of exponentially growing generations. So not all small probabilities are Pascalian, but ones deriving from events far in the future definitely are! I do not know if this is discussed in the book, but any long-termism discussion seems pretty pointless without it.
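A minimal sketch of that convergence point, with made-up rates (g for utility growth, r for the combined discount; nothing here is from the book): if r > g, the discounted utility of all future generations is a convergent geometric series.

```python
import math

# Discounted utility of generation t: u(t) = exp((g - r) * t).
g, r = 0.01, 0.03   # assumed annual growth and discount rates; r > g

partial = sum(math.exp((g - r) * t) for t in range(100_000))
limit = 1 / (1 - math.exp(g - r))   # closed form for the infinite sum

print(partial, limit)   # both ~50.5: the entire infinite future sums to a finite value
```

With r < g the series diverges instead, which is exactly the regime where long-termist infinities take over.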


Your comment about slavery going away seems to be false, in that there are credible estimates that there are more slaves today than ever:



For a good introduction to population ethics (surveying the major options), see: https://www.utilitarianism.net/population-ethics

One thing worth flagging is that MacAskill's book neglects the possibility of parity (or "value blur", as we call it in the section on Critical Range theories, above), which can help block some of the more extreme philosophical arguments (though, as we note, there's no way to capture every common intuition here).


I'm pretty sure most ACX readers would agree that humans cannot psychologically comprehend the differences between very large numbers, and that this failure causes a lot of unnecessary suffering. Therefore, I find it very confusing and epistemically tenuous that the repugnant conclusion, which turns on human intuitions about exceptionally large numbers that we know are completely unreliable, is used to reject principles like "more flourishing is good" and "less suffering is bad".


Now not nitpicking: Erik Hoel has his fine take on the book out. https://erikhoel.substack.com/p/we-owe-the-future-but-why?utm_source=substack&utm_medium=email He offers some help - i.e. arguments against the 'mugging' ;) - not just flat-out refusing the "repugnant conclusion" (as Scott seems to do). In the comment section at Hoel's I liked Mark Baker's comment a lot: "The fundamental error in utilitarianism, and in EA it seems from your description of it, is that it conflates suffering with evil. Suffering is not evil. Suffering is an inherent feature of life. Suffering is information. Without suffering we would all die very quickly, probably by forgetting to eat.

Causing suffering is evil, because it is cruelty.

Ignoring preventable suffering is evil because it is indifference.

But setting yourself up to run the world and dictate how everyone else should live because you believe that you have the calculus to mathematically minimize suffering is also evil because it is tyranny.

Holocausts are evil because they are cruel. Stubbed toes are not evil because they are information. (Put your shoes on!)" - end of quote -

If you read Scott's post first, good for you: Hoel writes less about the book and about how the "repugnant conclusion" is reached. But he had a long, strong post "versus utilitarianism" just last week, so his review is more of a follow-up.

I really do like a lot about EA, and strongly dislike "IA". But I agree with Hoel: "All to say: while in-practice the EA movement gets up to a lot of good and generally promotes good causes, its leaders should stop flirting with sophomoric literalisms like “If human civilization were destroyed but replaced by AIs there would be more of them so the human genocide would be a bad thing only if the wrong values got locked-in.” - end of quote


Nice review. Definitely some interesting thoughts.

If you recall, I thought that your population article was mistaken because it wasn't accurately weighing potential people. [1] You replied (which I appreciate) to say that you reject the Repugnant Conclusion. You said "I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox." I wrote an article responding to the article, and critiqued possible scaling functions [2].

"If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. This sort of implies that very poor people shouldn’t have kids, but I’m happy to shrug this off by saying it’s a very minor sin and the joy that the child brings the parents more than compensates for the harm against abstract utility. This series of commitments feels basically right to me and I think it prevents muggings."

Some implications of this view:

1. If no people existed, the average would be 0. In which case, you would have the Repugnant Conclusion again.

2. If we set the average value given existing people, it's better to create 1 ever-so-slightly above average person, than tons of ever-so-slightly below average people even if they fully believe their lives are good and worth living.

3. Since the critical value is a function rather than fixed, it will change with the present population. This means that someone who was evaluated as good to produce could later be bad without any aspect of their life changing. While creating a human in 1600 could be regarded as morally good then, it's likely that tons of those lives were below average for 2022 standards. This seems to create odd conclusions similar to asking the child their age after they were cut by the broken bottle.

4. The goodness or badness of having a child is heavily dependent on the existence of "persons" on other planets. If these persons have incredibly good lives, it might be immoral to have any humans. If these persons have incredibly bad lives, it might result in something like the repugnant conclusion, because they are below 0 and drag the average down to almost zero if they are numerous enough. If you consider animals "persons", then you could argue they suffer so much and are so numerous that the average is below zero.

5. It would be better (but not good) to introduce millions of tormented people into the world rather than a sufficiently larger number of slightly below average people.

6. Imagine we had a population A with 101% average utility and a very large population B with 200% average utility which changes the average to 110%. One population is created 1 second before the other. If A comes first, then B, it's good to have A. If B comes first, then A, it's bad to have A. The mere 1-second delay creates a very different decision, but practically the exact same world. This seems odd from a perspective where only the consequences matter.

[1] https://astralcodexten.substack.com/p/slightly-against-underpopulation/comment/8159506

[2] https://parrhesia.substack.com/p/in-favor-of-underpopulation-worries


>> Suppose that some catastrophe “merely” kills 99% of humans. Could the rest survive and rebuild civilization? MacAskill thinks yes, partly because of the indomitable human spirit

Oh no. [survivorship bias airplane.jpg]


Another scenario:

Suppose god offers you the option to flip a coin. If it comes up heads, the future contains N times as many people as the counterfactual future where you don't flip the coin. (Average happiness remains the same.) If it comes up tails, humanity goes extinct this year. A total utilitarian expectation maximizer would have to flip the coin for any value of N over 2. But I think it is very bad to flip the coin for almost any value of N.

Professional gamblers like me act so as to maximize the expectation of logarithm of their bankroll, because this is how you avoid going broke and maximize the long term growth rate of your bankroll. The Kelly criterion is derived from logarithmic utility.

Would it make any sense to use a logarithmic utility function in population ethics? This could:

1. Avoid the extinction coinflip mugging

2. Avoid the repugnant conclusion, because there aren't enough atoms in our lightcone to make into people for the logarithm of the population, multiplied by a low average happiness, to give a bigger utility number than 5 billion happy people.

On the downside it implies you should kill half the population if it will make the remaining people modestly happier.
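A toy sketch of the coinflip under the two utility functions (the population figure and multiplier are made up; extinction is utility 0 under linear total utility but -infinity under log utility, for the same reason Kelly bettors never stake their whole bankroll):

```python
import math

# Current population P, heads multiplier N, 50/50 coin; tails = extinction.
P, N = 5e9, 10.0

ev_linear_flip = 0.5 * (N * P) + 0.5 * 0.0                  # expected population
ev_log_flip = 0.5 * math.log(N * P) + 0.5 * float("-inf")   # expected log-population

print(ev_linear_flip > P)          # True: linear expectation says flip for N > 2
print(ev_log_flip > math.log(P))   # False: log utility never risks extinction
```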


Scott, I would imagine that you - like me - are deeply dissatisfied with simply walking away from the moralist who has made an argument for why you should get your eyes pecked out. It seems to me like you’re essentially saying “you fool! Your abstract rules of logic don’t bind my actions!” - and with this statement the entire rationalist endeavor to build a society that privileges logical argument goes out the window.

Is that a fair summary, or is there a deeper justification in the article I missed?

I’ll take a stab at providing one: the common conception of morality encompasses many different systems, and these sorts of arguments confuse them.

System 1: moral intuitions. These can be understood as a cognate of disgust; they are essentially emotional responses that tell us “you can’t do this, it’s beyond the pale”.

System 2: modeling and reasoning about system 1 (moral intuitions). This is the domain of psychology, and involves experiments to figure out exactly what triggers moral intuitions.

System 3: systemic morality. The attempt to construct rules for action that avoid triggering moral intuitions, and that perhaps maximally trigger some sort of inverse emotion (moral righteousness? Mathematical elegance?). This is the realm of philosophers, with arguments about deontology and utilitarianism. "Mathematics of morality"

The fundamental problem of systemic morality is that our moral intuitions are too complex to model with a logical system. This is pitted against our strong desire to create such a system for many reasons - for its elegance, its righteousness, and for the foundation of society that it could be if it existed.

To bring this idea into focus, imagine another philosophical mugging - but this time plausible. You’ve just left an ice cream shop with your children when a philosopher jumps out of a bush and tells you “I have an argument that will make you hand over your ice cream to me.” You of course object - you’ve just paid for it, and it looks so good - but he says a few words and you hand it over.

What did he say? He walked you through the statistics on contaminants in cream, sugar, and the berries that were likely used to make your ice cream. Then he went into the statistics on worker hygiene and workplace cleanliness, as well as the violation the ice cream shop received two years ago. When he started talking about the health problems caused by sugar and saturated fats you suddenly found you weren’t excited about the ice cream anymore and you handed it over.

Does this mean people shouldn’t eat ice cream? Yeah, it kinda does. But it doesn’t pose any serious philosophical problems for us because we’re not foolish enough to try to systemize our disgust triggers into systems of behavior that we should follow. We can simply recognize the countervailing forces within ourselves, say “I am manifold”, and move on.

I’m not advocating that we should stop trying to systematize our moral intuitions and make them legible within society. Rather, I think we should stop expecting these moral systems to work at the extreme margins. They’re deliberately oversimplified models of something that is extremely complex. We can note where they break down (i.e., diverge from the ground truth of our intuitions) and avoid using them in those situations.


I am a bit skeptical about the well-definedness of GDP across the gulf of millennia. How do you inflation-adjust between economies so different? I assume that you pick some principal trade goods existing in both economies (e.g. grain) as a baseline. Grain (or the like) was a big deal of the economy in 1 CE and is today (Ukraine notwithstanding) not a big deal in the grand scheme of things in the Western world: yearly grain production on the order of 2e9 metric tons, times 211 US$ per ton, equals some 4e11 US$, about 5/1000 of the world GDP of 84e12 US$.

In ancient times, the median day-wage workers may have earned enough grain to keep them alive for a day or two. Today, by spending 10% of the median US income, you could take a bath in 80kg of fresh grain every other day if you were so inclined.

In fact, we should be able to push our GDP advantage over the Roman Empire much further by just spending a few percent of our GDP to subsidize grain or flood the market with cheap low-quality iron nobody wants. Probably a good thing that we do not have intertemporal trade.

Thus, I am not particularly concerned about GDP being limited by the number of atoms in our light cone (which only grows quadratically). A flagship phone from 2022 worth 800 US$ does not contain more atoms (rare earth elements and the like) than a flagship phone from 2017 worth perhaps 150 US$. The fact that a phone built 100 years from now (if that trend continued) might be worth more than our present global GDP (if we established value equivalence using a series of phone generations) does not bother me, nor does the fact that a phone built in 3022 CE might surpass our GDP by 10^whatever. Arbitrary quantities grow at arbitrary speed, film at 11.


> When we build nuclear waste repositories, we try to build ones that won’t crack in ten thousand years and give our distant descendants weird cancers.

I realize this is seriously discussed by experts, but I'm wondering how it makes sense. It seems like if nuclear waste lasts ten thousand years then it must have a very long half life, so it can't be very radioactive at all?

There's gotta be a flaw in this argument, but I don't know enough about radioactivity to say what it is.


If I understand correctly, the (overly simplistic version) of the Repugnant Conclusion works like this:

Define utility function U = N * H, where N is number of people and H is happiness. Calculate U for a world A with 1 trillion people with happiness 1 (A = 10^12 people*happiness), and a world B with 1 billion people with happiness 100 (B = 10^11 people*happiness). This leads to the conclusion that an overcrowded, unhappy world is better than a less crowded happy one (A > B), the “Repugnant Conclusion.” Thus, we must either throw out the axioms of utilitarianism or accept the slum world.

This seems like a terrible argument to me, especially this part: "MacAskill concludes that there’s no solution besides agreeing to create as many people as possible even though they will all have happiness 0.001." Why is the utility function linear? This "proof" relies on linearity in N and H, which are NOT axiomatic.

You could easily come to a much less repugnant conclusion by defining something nonlinear. For example, let’s say we want utility to still be linear in happiness but penalize overcrowding. Define U = H * N * exp(-N^2/C), where C is some constant. Now the utility function has a nice peak at some number of people. In fact, we can change U to match our intuition of what a better world would look like.
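For concreteness, here is the example run through both utility functions (the constant C is an arbitrary tuning knob I picked so that the crowding penalty bites at world A's scale; nothing about it is canonical):

```python
import math

def u_linear(n, h):
    """The linear utility U = N * H from the argument above."""
    return n * h

def u_crowded(n, h, c=1e22):
    """The crowding-penalized U = H * N * exp(-N^2 / C); c is arbitrary."""
    return h * n * math.exp(-n**2 / c)

A = (1e12, 1)    # world A: 1 trillion people at happiness 1
B = (1e9, 100)   # world B: 1 billion people at happiness 100

print(u_linear(*A) > u_linear(*B))    # True: the Repugnant Conclusion
print(u_crowded(*A) > u_crowded(*B))  # False: the penalty flips the ranking
```

Different choices of c move the peak of u_crowded to whatever population size matches your intuition, which is exactly the point: linearity is a modeling choice, not an axiom.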


Cool article, thanks Scott!


"fighting climate change ... building robust international institutions that avoid war and enable good governance."

MacAskill takes it for granted that these are good things to do, but he might be wrong. Climate change could make us worse off in the long run — or better off. Present global temperatures are high relative to the past few thousand years, low relative to the past few hundred million. Robust international institutions might avoid war. They might also prevent beneficial competition among national institutions and so lock us into global stasis.

To make the point more generally, MacAskill seems, judged by the review, to ignore the very serious knowledge problems with deciding what policies will have good effects in the distant future.


The Old Testament placed limits on slavery, and the Church increasingly limited it for 1500 years - basically until the money wasn't just good, but suddenly amazingly good and more than half of everyone threw their principles in the ocean, overruling the others. The Quakers deserve a lot of credit, but not all of it.


Another Phil101 class junior high level question:

If the supposed, much larger future population is capable of stability at least comparable to today's - which it should be, in order for us to consider aiming to bring it about - wouldn't it be possible or even likely that the exact same longtermism would apply to those people, forcing them to discount their own preferences in order to maximize the utility of a much larger civilization in their own far future? If their numbers add up to a rounding error in comparison with that much larger^2 population, it might follow that those people should sacrifice their utility in order to bring about the far future.

And as for the much larger^2 population's longtermist views...

Expand full comment

I think a major point long-termism misses is risk. We discount the future (as in using a discount rate to say how much less we value future money or utility) because we ultimately don’t know what’s going to happen between now and then. A meteor could hit the earth, and then all our fervent long-term investments would turn out to be pointless. Or all the other scenarios you could imagine. So the future is worth less than the present, and we should prioritize accordingly. As a rule of thumb, infinite happy people infinitely far in the future don’t matter. That’s not to say we shouldn’t invest in the future, just that we weigh it against a more immediate and certain present.

Practically, this also aligns with Scott’s point that most of the time improving the future is pretty similar to improving the present. Maybe some time soon we can stop torturing ourselves with future infinities and just get back to making things better.

Expand full comment

> Can we solve this by saying you can only create new people if they’re at least as happy as existing people - ie if they raise the average? No. In another mugging, MacAskill proves that if you accept this, then you must accept that it is sometimes better to create suffering people (ie people being tortured whose lives are actively worse than not existing at all) than happy people.

But that's the same as saying that "it's worth suffering in the fight for others' right to die" is problematic in the "zero is almost suicidal" case - a quality threshold just shifts the meaning of zero. If the conclusion is repugnant, then on some scale it's worth creating suffering to avoid it.

Expand full comment

The coal issue seems like a silly distraction. Imagine we evolved on a planet exactly like Earth except there was no coal anywhere. Do you think humanity would stagnate forever at pre-Industrial Revolution technology? A billion years after the emergence of Homo sapiens we're still messing around with muskets and whale oil lamps because we lacked an energy-dense rock to dig out of the ground? Things would surely go slower without coal, but if you're taking a "longtermist" view it seems silly to worry about civilization taking a little longer to rebuild.

Expand full comment

For more on the long history of abolition, The Dawn of Everything [reviewed in the book review contest!] talks about prehistoric California tribes who lived immediately next to each other, some of whom appeared to own slaves and some of whom refused. Oppressing your fellow humans? Refusing to oppress your fellow humans? It's been going on for as long as there have been humans.

And abolition is not a clean line: slave labor still happens in the US, we just call it "prison labor" and look the other way.

As the US has the highest carceral population BY FAR [and, uhh, spoiler: we're not any "safer"...] along with a shocking rise in pre-trial holds since 2000, that seems like the most important "near term" cultural fix on the scale of abolishing slavery. Abolish the carceral state! And if that seems crazy to you, recall that the DOJ's own studies show that prison is not a crime deterrent and imprisoning people likely makes them re-offend more frequently: https://www.ojp.gov/ncjrs/virtual-library/abstracts/imprisonment-and-reoffending-crime-and-justice-review-research

As long as people in the US don't care that marginalized [poor] folks are being oppressed by these systems, we're probably never going to get folks to care about hypothetical Future People.

The discussion around this may obviously be different in, say, Norway.

Expand full comment

The main reason why I am not a utilitarian is that once you start to mix morality and math, you usually end up going off the rails. I think the main problem is the assumption that you can measure things like "utility" and "happiness" precisely, and get reasonable results by multiplying large rewards by small probabilities, or summing over vast numbers of hypothetical people. The error bars get too large, too quickly, for that sort of calculation to be viable.

That being said, if you are going to do math, do it properly. In reinforcement learning, if you attempt to sum over all future rewards from now until the end of time, you get an infinite number. The solution is to apply a time discount gamma, where 0.0 < gamma < 1.0, to any future rewards.

R(s_t) = r_t + gamma * E[ R(s_{t+1}) ]

Or in English, the total reward at time "t" is equal to the immediate benefit at time "t", plus the expected total reward at time "t+1", times gamma. Thus, any benefits that might occur at time t+10 will be discounted by gamma^10. This says that we should care about the future, but hypothetical future rewards are worth exponentially less than present rewards, depending on how far in the future you are looking. So long as future benefits don't grow exponentially faster than the decay rate of gamma, the math stays finite.

Note also that we are talking about future rewards "in expectation", which means dealing with uncertainty. Since the future is hard to predict, any future rewards are further discounted by the probability with which they might happen.

The argument over "short-term" vs "long-term" thinking is just an argument over what value to give gamma.
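A minimal sketch of that recursion in Python (the gamma value and reward stream are illustrative; this is the standard discounted-return computation from reinforcement learning, not anything from the book):

```python
def discounted_return(rewards, gamma):
    """Total discounted reward: r_0 + gamma*r_1 + gamma^2*r_2 + ...
    Computed right-to-left via the recursion R_t = r_t + gamma * R_{t+1}."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

# A reward of 1 at every step "forever" sums not to infinity
# but to the finite value 1/(1 - gamma):
print(discounted_return([1.0] * 10_000, gamma=0.99))  # ≈ 1/(1-0.99) = 100
```

The same structure makes the short-term/long-term dispute concrete: gamma near 1 weights the far future almost as heavily as the present, while smaller gamma makes distant rewards vanish exponentially.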

Expand full comment

Can't we just agree that any analysis that relies on collapsing the complex entirety of human experience into a single number is not even wrong?

Expand full comment


Agreed, as evidenced by the later neglect of the real conflict between longtermism and general utilitarian ethics.

>So it would appear we have moral obligations to people who have not yet been born, and to people in the far future who might be millennia away.

There is so much more work needed before this armchair jump to this statement than the thought experiment provides.

>Stalin said that one death was a tragedy but a million was a statistic, but he was joking.

Was he? I don't think that is clear from his behavior. I also don't think it is clear he was "wrong" about it. Ethics, some might argue (I would argue), is context/perspective dependent. What is the right action for Bob walking down the street is not necessarily the right "action" for the US government.


The whole coal conversation is silly. Industrialization is not remotely that path-dependent. It might take quite a bit longer without coal, but there's no way that stops anything; this seems like a very bad misreading of history. Industrialization was incredibly rapid: the world saw more change in 50 years than it had in millennia. If that is instead 500 years because of no coal, what difference does it make? In fact the transition might be smoother and less fraught.

>If only dictators have AI, maybe they can use it to create perfect surveillance states that will never be overthrown.

What is so bad about dictators? Especially ones with AI? When talking about issues this large scale, the exact distribution of political power is the least of our problems.

>Octopus farming

I agree this sounds bad.

>Bertrand Russell was a witch.

Indeed, he is amazing.


And here would be the first of my two main complaints/responses. This "suppose" is doing a lot of the work here. In reality we discount ethical obligations with spatiotemporal distance from ourselves pretty heavily. One big reason for this is epistemology: it just generally isn't as possible to know and understand the outcomes of your actions once you get much beyond your own senses.

You see this with how difficult effective development aid is, and how bad people are at predicting when fusion will happen, and how their behavior impacts the climate, or the political system. All sorts of areas. Because of this epistemic poverty, we discount "potential people", quite heavily, and that makes perfect sense because we mostly aren't in a good position to know what is good for them especially as you get farther from today.

The longtermist tries to construct some ethical dilemma where they say "surely the child running down the path 10 days from now matters no more than the one running down it 10 years from now". And then once you grant that they jump to the seagulls. But the answer is to just impale yourself on that horn of the dilemma, embrace it.

No, the child 10 years from now is not as important. Someone else might clean up the glass, a flood might bury it, the trail might become disused. Et cetera, et cetera.

We don't have the same epistemic (and hence moral/ethical) standing towards the child 10 years from now, the situations ARE NOT the same.

The funny thing is overall I expect I am generally somewhat of a longtermist myself. I think one of the main focuses of humanity, should be trying to get itself as extinction proof as possible as soon as possible. Which means perhaps ratcheting down on the optimum economic growth/human flourishing slightly, and up on the interstellar colonization and self-sufficiency slider slightly.

But I certainly don't think we should do that on behalf of nebulous future people, but rather based on the inherent value of our thought/culture/civilization and accumulated knowledge. I don't remotely share the intuition that if I know someone is going to have a great life, I owe it to them to make it possible.

>did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself?

Anyone who really believes this is far far down an ethical dark alley and needs to find their way back to sanity.

Expand full comment

I never understand why so many people care specifically about the survival of humanity. Isn't it enough that many different species survive? Anyway, our distant descendants won't be humans.

Expand full comment

I think you can head off the Repugnant Conclusion fairly easily by deciding that a larger population is not, in itself, a positive.

Expand full comment

All these thought experiments seem to contain the hidden assumption that the Copenhagen interpretation of quantum mechanics is the correct one. That we live in a universe with a single future. If instead the Many Worlds interpretation of quantum mechanics is true, you don't really have to worry about silly things like humanity going extinct - that would be practically impossible.

You also wouldn't have to stress over whether we should try to have a world with 50 billion happy people or a world with 500 billion slightly less happy people. Many worlds already guarantees countless future worlds with the whole range of population size and disposition. There will be far more distinct individuals across the branches of the wave function than could ever fit in the Virgo Super Cluster of a singular universe, and that's guaranteed no matter what we do today since there is always another branch of the wave function where we do something different.

If you believe the many worlds interpretation of quantum mechanics is true AND that quantum immortality follows from it... well, that opens up all kinds of fun possibilities!

Expand full comment

I agree that, while I find long-termism very compelling when reasoning about it in the abstract, I must admit that my much stronger personal motivation for trying to get humanity safely through the most important century is my concern for my loved ones and myself, followed by my concern for my imagined future children, followed by my concern for all the strangers alive today or who will be alive at the time of possible extinction in 5-50 years. People who don't yet exist failing to ever exist matters; it just gets like 5% of my total, despite the numbers being huge. I dunno. I think maybe I have a decreasing valuation of numbers of people. Like, it matters more to me that somebody is alive vs nobody, than lots of theoretical people vs a few theoretical people. Questions about theoretical moral value are complex, and I don't feel that this has answered them to my satisfaction. I'm not about to let that stop me from trying my hardest to keep humanity from going extinct, though!

Expand full comment

>the joy that the child brings the parents more than compensates for the harm against abstract utility

On average, children decrease parental happiness, so this isn't particularly exculpatory.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”

I think this kind of book is borderline pseudoscience. Philosophy discovers ideas, science discovers truth. And while MacAskill wants to compel you to believe something is true, in fact he is only doing philosophy.

The real idea of science is not "using our big brains to reason out the truth" or "being rational", it is, as Feynman once said, that the test of all knowledge is experiment.

We do not believe the odd things special relativity tells us about time simply because there is a chain of logic and we believe anything logic tells us. We believe it because that chain of logic leads to testable, falsifiable conclusions that have been verified by experiment.

Mathematics alone is not science because there is nothing to test. Only when you try to apply it (in a field like physics) do you get testable conclusions. Logic does not derive truth; it simply tells us what conclusions are consistent with a given set of axioms. For example, hyperbolic geometry yields different conclusions than Euclidean geometry. Neither is "right" or "wrong" or "true" or "false": it doesn't even make sense to talk about something being "true" until you can test it against reality.

When MacAskill derives his Repugnant Conclusions and decides that they are True, what is the experiment by which we test that truth? I don't think there is one.

One can argue that we should still believe the conclusion because we believe the axioms, but what is the experiment that tested or derived our axioms? Our intuition? But if our intuition is axiomatic, a conclusion that disagrees with our intuition cannot be correct. The "proof" of such a conclusion may have demonstrated that our intuition is not logically consistent, but that does not help us decide what is true or which of the two intuitions (axiom or conclusion) we should discard.

To the extent that MacAskill's arguments are like mathematics, they are interesting and worth thinking about. But to the extent that they are not like science, we should treat the conclusions derived in the same way we would treat the conclusions of hyperbolic geometry. Not true or false, just interesting.

And I think MacAskill knows this is the case. After all, after a quick google it does not look like he's fathering as many children as he possibly can.

Expand full comment

To tackle the core example here: we don't owe anything to the future child. We owe things only to those who exist. Future children, like future starvation (Malthus) or future fusion (still waiting), aren't real until the moment they are born/discovered. Apologies if I missed it (although LRG's comment touches on it), but the doctrine of Presentism seems to be missing from all these discussions.

We are all engaging in a type of moral induction, and induction is a deeply flawed method of knowing the truth. Yes, it might be likely that humanity survives next year. But it might not. We can certainly make bets that certain actions taken now (which do affect presently existing moral agents) are worthwhile. But not because we "owe" anything to future generations; rather because we are betting on our continuance and are willing to spend some present value for possible future gain. All of that calculus is present. And to borrow from David Deutsch, our inductive reasoning about the future is, at heart, prophecy, not prediction.

Sure, there may be 500B humans one day. Or AI wipes us all out next Tuesday. The end. What do we owe to those 500B? Nothing, clearly, because they don't exist and may never. So the real debate is about our inductive confidence. Should I be concerned about the child stepping on glass in 10,000 years? Our inductive reasoning falls utterly apart at that level. So no. Should I be concerned about something that's reasonably foreseeable in the near term? Yes. But it's frankly a bet that it will be beneficial to those who exist at that near-future time. Not an obligation, but a moral insurance plan. And there's only so much insurance one should rationally carry for events that may never occur.

Expand full comment

>This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them.

Bullshit. In order to successfully affect 50 quadrillion people, it's not enough to do something that has some kind of effect on the distant future -- it would have to be some act that uniformly improves the lives of every single person on future Earth in a way that can be accurately predicted 500 million years before it happens. That's not just improbable -- that's insane.

Expand full comment

Fun review. Much of the logic alluded to seems mushy to me.

Example. Regarding having a child with or without a medical condition, these are two decisions conflated into one. "But we already agreed that having the child with the mild medical condition is morally neutral. So it seems that having the healthy child must be morally good, better than not having a child at all." Does not follow.

Another way to look at it is that once it is decided to have a child, and that this decision in and of itself may be morally neutral, the next decision fork is whether it is known the child will have a morally unacceptable health disorder, a morally neutral health disorder, or no health disorder (whose morality remains to be determined). It is a fallacy that, because decision B, lying between two other decisions A and C along a spectrum of characteristic H, is morally neutral (morality being characteristic M), decisions on either side of the H spectrum must therefore map onto the corresponding ends of the M spectrum. It is possible that bearing children with health conditions less "severe", for the sake of argument, than male pattern baldness might be equally morally neutral. The M spectrum may only run from bad to neutral in this case. There is no law that the set of options must include a positive-outcome option.

Then there is the mugger and the kittens. The decision-maker is loosely represented as the observer. Better outcomes for whom? With the mugger, it is better from the decision-making target's perspective to retain the wallet. From the mugger's perspective, it is better for the target to relinquish it. Regarding drowning kittens, that is undesirable from the kittens' perspective but logically, it is a neutral outcome for the drowner. Do not confuse this observation with sociopathy, please; it is an argument about the logic!

There is so much confusion of categories in these poorly defined arguments that I find them unpersuasive in general.

Expand full comment

I have spent too many hours thinking about questions like the repugnant conclusion, and whether it's better to maximize average or total happiness. I'm still hopelessly confused. It's easy to dismiss all this as pointless philosophizing, but I think if we ever get to a point where we can create large numbers of artificial sentient beings, these questions will have huge moral implications.

I suspect that one reason for present day confusions around the question is a lack of a mechanistic understanding of how sentience and qualia work, and so our frameworks for thinking about these questions could be off.

For example one assumption that seems to be baked in to these questions is that there is a discrete number of distinct humans/brains/entities that do the experiencing. You could imagine a world where the rate of information transfer between these entities is so much higher that they aren't really distinct from one another anymore. In that world differences in happiness between these entities might be kind of like differences in happiness of different brain regions.

I really hope we'll develop better frameworks for thinking about these questions, and I think that by creating and studying artificial sentient systems that can report on their experiences we should be able to do so.

Expand full comment