[original post here]
#1: Isn’t it possible that embryos are alive, or have personhood, or are moral patients? Most IVF involves getting many embryos, then throwing out the ones that the couple doesn’t need to implant. If destroying embryos were wrong, then IVF would be unethical - and embryo selection, which might encourage more people to do IVF, or to maximize the number of embryos they get from IVF, would be extra unethical.
I think the default position should be: if you believe humans are more valuable than cows, and cows more valuable than bugs - presumably because humans are more conscious/intelligent/complex/thoughtful/have more hopes and dreams/experience more emotions - then embryos, which have less of a brain and nervous system than even bugs, should be less valuable still.
One reason to abandon this default position would be if you believe in souls or some other nonphysical basis for personhood. Then maybe the soul would enter the embryo at conception. I think even here, it’s hard to figure out exactly what you’re saying - the soul clearly isn’t doing very much, in the sense of experiencing things, while it’s in the embryo. But it seems like God is probably pretty attached to souls, and maybe you don’t want to mess with them while He’s watching. In any case, all I can say is that this isn’t my metaphysics.
But most people in the comments took a different tack, arguing that we should give embryos special status (compared to cows and bugs) because they have the potential to grow into a person.
I tried to provide counterexamples - sperm have the potential to grow into a person, but are not themselves people with rights. Pizza has the potential to grow into a person (if a woman eats it while she’s pregnant), but is not itself a person with rights. If we invented conscious/intelligent/complex/thoughtful robots, then a block of iron sitting in front of the robot factory would have the potential to grow into a person, but is not itself a person with rights.
The commenters argued that an embryo was more of a person than these things. Some people said it was because the embryo had everything it needed to grow into a person on its own, as opposed to the sperm (which needs an egg), the pizza (which needs a pregnant woman), and the iron (which needs the robot factory). This isn’t entirely true - an embryo sitting in the middle of a field will just die; development requires a placenta and the carefully-tuned environment of the human uterus - but maybe if you tried hard enough you could come up with some definition for “everything needed to grow” that ruled in the uterus but ruled out the robot factory.
Other people said it was because the embryo already contained all of the necessary information. I don’t think this is right either. A flash drive with an embryo’s genome and a description of how cells work contains all of the necessary information. So does a printed book containing the code for a sentient robot. But neither of these are people.
Maybe we combine these two approaches? It needs to have all the necessary information, and be self-assembling (within definitions of self-assembling that don’t rule out the womb)? But here I think a sperm and an egg in the Fallopian tube, just before the sperm fertilizes the egg and they combine into an embryo, pass the test and become a person with rights! So does a computer, currently turned off, which is programmed to turn on in one hour and run the code for a sentient robot.
Is there some criterion that would keep embryos, while ruling out sperm-egg pairs and computers with robot code? Trivially yes - to qualify as a person, it must contain the information for a person, and be self-assembling into a person, and start with the letter “E”. This is a deliberately provocative example - what are we even doing here? We can always eventually come up with some gerrymandered criterion that rules in all the things you want to rule in and rules out all the things you want to rule out. But will it be satisfying? Will we, on reflection, think “yes, this is what I mean when I say I’m against murder; the true reason that murder is bad is because it affects things beginning with the letter E”?
When I think about why murder is bad, I think of human beings being conscious, able to feel pain, able to have preferences, having hopes and dreams - things like that. So I would rather just skip this entire process of figuring out exactly how self-assembling counts as really self-assembling, and note that embryos have none of those things.
What about the sleeping hermit?
One commenter raised an objection to my criterion - what about a sleeping hermit? He’s asleep for the night - so he currently has no consciousness, hopes, dreams, etc. And in case I am tempted to say that his death would make other people sad, we stipulate that he is a hermit with no friends or relations; the only person who can suffer from his death is himself. It seems like here, we might need some concept of “but if they’re going to go back to having personhood soon, we should consider them to be people now” - and then we might want to consider that applying to potentially-having-personhood-in-the-future beings like the embryo.
I answered that the hermit’s past personhood gives him some sort of property rights to continue having his personhood respected, the same way I may still own an object when I’m not physically holding it, or an absentee landlord may own a house when he isn’t present.
Philosophy professor Richard Chappell (blog here) showed up and presented a different argument: “A sleeping hermit has a mind, even if their mental states aren't being actively processed. It's completely different from merely having a ‘potential’ to form mental states after a bunch of further development.”
I am always hesitant to disagree with a professional philosopher about philosophy, but I like my explanation better. It’s not clear what it means for the sleeping hermit to have “a mind”. He doesn’t seem to have a mind in the metaphysical sense, since (assuming sufficiently deep sleep) he’s not conscious or engaging in any mental activity. He does have a physical brain, but this doesn’t seem like the relevant criterion.
Consider the following story:
You go in for heart surgery. During heart surgery, surgeons cut you open, turn off your lungs, and cut open your heart (in a way incompatible with living, except that the surgeons are supporting your breathing and blood-pumping with machines, and will eventually fix you up). In the middle of the heart surgery, your enemy tries to bribe the doctors to stop the surgery halfway through and throw your body in the hospital Dumpster. This will not involve any extra violence to you (beyond how cut up you are already) but it will definitely result in your death. The doctors refuse, saying that this would be murder.
You recover from surgery, live for many more decades, and enter the glorious transhuman future. In the glorious transhuman future, there is an immortality surgery. It involves taking your body apart cell by cell, then infusing each cell individually with special nanotechnology. This is a very involved process - at some points, no two cells that previously made up your body are touching one another, and some may be in entirely different laboratory rooms from others - but after a few hours all the cells should get successfully infused, you can be re-assembled on the operating table, and you’ll be good to go. Once again, in the middle of the surgery, while you are disassembled into trillions of pieces scattered across a sprawling lab complex, your enemy tries to bribe the doctors to stop the surgery halfway through and throw the cells in the trash instead of reassembling you. Is this murder or not?
I claim that canceling the cellular-disassembly surgery halfway through is murder, for the same reason that canceling the heart surgery halfway through would be murder. But this seems to disprove both the anti-throwing-away-embryos people’s position and Richard Chappell’s position. While you’re disassembled, you don’t have a brain, or a working mind, or any independent potential to self-assemble into a person. You just have a sort of residual claim to personhood that you lodged back when you were a fully-assembled human.
I think lots of things about personhood are a convenient legal fiction. I don’t, right now, have a strong preference against dying (in the sense that my brain is currently focused on writing this essay, rather than on how much I don’t want to die). And I currently possess the money in my bank account, even though I am a slightly different person (in the sense of having slightly different opinions, being made of different matter, having brain cells in different positions, etc) from the person who earned that money. We need to abstract all of this weirdness into the idea of a single continuous person with moral rights in order to do anything at all, and I think this covers the sleeping hermit too.
What about a newborn baby?
A newborn baby is sort of conscious. It probably has some hopes and dreams, like a hope of getting milk. But it doesn’t seem obviously more conscious than a cow. If we are to grant it rights beyond those we grant cows, don’t we need some sense that things which will develop into an adult person deserve personhood rights?
I mostly bite this bullet. I think a newborn baby deserves more rights than a cow for moral rather than axiological reasons (that is, for reasons that involve the fence around the law, rather than just the law). We want to have a bright-line norm against killing humans who are old enough to be conscious persons. But there are only fuzzy, meaningless lines about when babies transition into fully conscious persons. In order to err on the side of caution, we ban killing babies (and in some cases fetuses). I think this is similar to having age-of-consent laws at age 18 - we don’t really claim that there is a magical distinction between 17.99 and 18.01 that makes sex with the latter genuinely more likely to go well than sex with the former, but we have to draw a line somewhere. I draw the line at when babies seem vaguely human-shaped and able to have any desires/preferences at all (even ones not especially superior to a cow’s). This is necessarily unprincipled, and I don’t have strong arguments against hyper-pro-choice people who want abortions even up to partial birth, or against hyper-pro-life people who want to ban abortions as soon as the first brain cell forms. But I do feel like I’m on pretty firm ground saying that an embryo without any brain cells to speak of is too soon.
Also related to fences around the law: in most cases, killing a baby will make their parents, relatives, and tender-hearted onlookers extremely sad. You can come up with weird thought experiments where it doesn’t (hermit babies, anybody?) but part of what we mean by “the fence around the law” is that the law should have clear elegant bright-line boundaries even at the cost of failing certain weird thought experiments.
#2: Isn’t there a value in having the right diversity of traits? Wouldn’t embryo selection, by giving parents control over their children’s traits, cause them to maximize ones that seem “better” without taking overall diversity into account?
There are two versions of this complaint.
First, what if everyone selects their children for the same trait, like extroversion? It seems like probably we need introverts for something (mathematicians? radiologists?), and so this would be net negative for society on practical grounds, as well as some sort of spiritual loss for the diversity of humankind.
Second, what if people select their children for opposite traits? For example, some people might select for extroversion, and others for introversion. Then the human race might split into incompatible clades, or people might end up too extreme to be happy, or it might turn out that one side of the trait is good and the other is bad and the people whose parents chose the bad side are at a major life disadvantage through no fault of their own?
I have a weak theoretical response, and what I hope is a stronger practical response.
Weak theoretical response: isn’t this a problem we face with any technology, or anything that makes humans better able to get things they want? By allowing people to construct buildings, we both homogenize - people in Dubai and Siberia can both be in identical 70-degree concrete cubes - and overdiversify - there can be saunas and ice skating rinks in the same city. But this is not an argument against allowing buildings. Overall, we expect letting people optimize their environment to be good; if diversity is harmed, people will find ways around that or not optimize that hard. I think this response is weak because it’s a nice heuristic, but someone could object that optimizing people goes worse than optimizing other things.
Stronger practical response: I think it’s worth having a clearer picture of exactly what this technology does. It allows the parents to select from some number of embryos (most examples use five). So consider some family you know with five kids. Now choose the healthiest/happiest/most successful kid. Now imagine we did that for thousands of families, and took the people you chose and stuck them on a distant planet to form a new human race. Would that new race lack diversity? Would it be some kind of dystopian cross between Brave New World and GATTACA? If you hopped from Earth to that planet, and back to Earth again, would you even notice a major difference, beyond a couple fewer hospitals?
But this hypothetical greatly overestimates the potential of embryo selection, because it’s imagining perfect foreknowledge. It would be a better analogy if the selection were made by a drunk person reading a note scrawled in Portuguese by a schizophrenic oracle trying to leave cryptic suggestions about which child would be happiest/healthiest/whatever.
And we’re not exactly creating a new human race instantly either. Metaculus currently expects 20 years before any country has 10% of children selected for intelligence (and I don’t think the “for intelligence” is doing much work here; I would be surprised if 10% were selected for something else first):
…and it will take 20 years for those people to grow up and start affecting society. So I think in this projection, it takes 40 years for there to be a significant contingent of selected people - and even 10% isn’t really enough to affect diversity very much. And selection is weak! Even if you select for intelligence, you only have a 70-30 chance of getting a more-intelligent-than-expected rather than a less-intelligent-than-expected kid.
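The 70-30 figure can be sanity-checked with a quick simulation. This is a toy model, not the actual embryo-selection math: it assumes the polygenic predictor correlates with the true trait value at roughly r ≈ 0.45 (an illustrative number chosen to land near 70-30), and asks how often the best-predicted of five embryos turns out above the family’s expected average.

```python
import math, random

random.seed(0)

def selected_above_average_rate(n_embryos=5, r=0.45, trials=100_000):
    """Fraction of simulated families where the embryo ranked best by a
    noisy predictor has a true trait value above the family average (0)."""
    wins = 0
    noise_sd = math.sqrt(1 - r * r)
    for _ in range(trials):
        true_vals = [random.gauss(0, 1) for _ in range(n_embryos)]
        # Predictor = r * truth + noise, so corr(predictor, truth) = r.
        preds = [r * t + noise_sd * random.gauss(0, 1) for t in true_vals]
        chosen = true_vals[preds.index(max(preds))]
        wins += chosen > 0
    return wins / trials

print(selected_above_average_rate())  # roughly 0.7 under these assumptions
```

A weaker predictor pushes the rate toward a coin flip, which is the “drunk person reading the oracle’s note” regime; a perfect predictor would push it toward certainty.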
So it’s an effect which would be kind of hard to notice even if it happened perfectly and instantaneously, happening in an extremely imperfect way with lots of random noise, only becoming relevant half a century from now.
If this is true, doesn’t it suggest I can’t be too in favor of embryo selection either? What’s the argument for saying the technology is powerful enough to be worth it, but not powerful enough to worry about?
I think the first reason that benefits work differently from risks here is that the benefit can happen with a specific person (we prevent that person’s disease), but the costs require affecting a large proportion of the human race (decreasing human diversity). Obviously it’s easier to help a specific person than to harm the whole human race!
A second, more speculative reason is that very slightly increasing the skill of the top few percent of people can significantly affect the human race, and I expect the top few percent to disproportionately use this technology. Suppose that, as above, 10% of the population uses this technology, but that includes half of the smartest 1%. By my (actually o3’s, but I checked them) calculations, this would increase the number of geniuses (IQ > 140) by ~40%, and the number of supergeniuses (IQ > 160) by ~160%. Why can such small adoption increase these numbers so much? Because of the shape of the normal distribution, very small shifts in the right tail of the distribution can result in very large absolute changes in the number of people at any given high-outlier rank. If you think that increasing the number of geniuses by 40%, or the number of supergeniuses by 160%, could have a large effect on society, then this technology could have a large effect on society even with relatively limited adoption.
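The tail-amplification point can be illustrated with a few lines of normal-distribution arithmetic. This is a deliberately simplified sketch, not o3’s calculation: instead of concentrating adopters among high-IQ parents (which is what drives the larger numbers above), it just gives an assumed 10% of children an assumed uniform +5-point average shift, and even that is enough to show the relative increase growing with the threshold.

```python
import math

def tail_frac(threshold, mu=100.0, sigma=15.0):
    # P(IQ > threshold) for a normal distribution,
    # computed via the complementary error function.
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2)))

# Illustrative assumptions (not the post's exact model):
adoption, shift = 0.10, 5.0

ratios = {}
for threshold in (140, 160):
    baseline = tail_frac(threshold)
    mixed = ((1 - adoption) * tail_frac(threshold)
             + adoption * tail_frac(threshold, mu=100 + shift))
    ratios[threshold] = mixed / baseline

print(ratios)  # the relative increase is larger at 160 than at 140
```

Because the normal density falls off super-exponentially in the tail, the same small shift multiplies the population above 160 by more than the population above 140, and concentrating adoption among already-high-IQ parents compounds the effect further.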
But the main reason I think this matters is that it gets us on the road to more advanced technologies. Once people are paying for this, companies can afford research divisions, investors know there’s interest, and regulators who hoped to strangle the field in the cradle will back off. My impression is that there are much stronger technologies about 10-20 years down the line, ones that probably disrupt things so profoundly that your opinions should be more related to your general opinions on transhumanism and technological singularities than on specifics of the selection process. I think a technological singularity would be more diverse insofar as diversity is good (people could have wings if they wanted!) and more homogenous insofar as homogenous is good (nobody dying of cancer). I’m happy to take a 10% increase or decrease in the current level of human diversity if it gets us there.
#3: Would you tell a disabled person to their face that you would rather they not exist, or that people like them not exist in the next generation?
Would you tell an embryo-selected person to their face that you would rather they not exist?
Obviously this would be an extremely mean and offensive thing to do. You can be against embryo selection without wanting existing embryo-selected people to die, or accusing them of being unworthy of life.
Or what about rape? I am against rape, would prefer that it not happen, and support efforts to stamp it out. But many people currently alive are the children of rape. Do I have to consider them to be some inferior life-form worthy of extermination?
Or what about lobotomies? When we banned them, we were, in a sense, telling lobotomized people that others like them should not exist in the next generation. But it’s a pretty weak sense. It’s not the sense where you go up to a lobotomized person and shout “You don’t deserve to live, you scum”. It’s just that we would prefer that this not happen in the future.
(I don’t think the fact that it’s possible to imagine stopping the lobotomy without switching which people exist matters very much here, partly because of the considerations I mention here, and partly because I think a person who’s lived with a lobotomy for decades is in some sense a genuinely different person than one who was never lobotomized, even if they have the same genes)
Or what about war? I would like there to be peace in the future. But if World War II hadn’t happened, there wouldn’t have been a baby boom, and millions of Boomers wouldn’t exist. Does this mean we can’t be pacifists?
In all these situations, I think it’s possible to acknowledge that we want to make the world better in the future (less rape, less war, fewer lobotomies, etc) without saying that people who currently owe their existence or their current state to the bad thing are inferior or don’t deserve to exist. I think we do this naturally and common-sensically for everything except embryo selection, I think embryo selection opponents would do it naturally and common-sensically if they ever met an embryo-selected child, and I think following our natural and common-sense impulses solves this problem too.