I mean, I agree, sort of? But this is just not a realistic, scalable solution. There aren't that many deer. I say that as someone who is trying (so far unsuccessfully) to learn to hunt without the benefit of family/social knowledge. I certainly plan to replace as much meat as I can with venison, but I think that, in the context of this discussion, its lack of scalability makes it not very realistic.
I think there are different levels of compromise - vegan -> vegetarian -> person who offsets all their meat -> person who eats beef instead of chicken -> person who purchases ethically raised meat -> person who doesn't do anything. People at level n will always think the people at level n-1 are barbarians, and people at level n+1 are "contorting their diet to reach squeaky-clean moral cleanliness" - I don't think it's possible to escape either concern. My priority here is to make achieving whatever level of moral contribution you're going for as easy/effective as possible, so that a given level of effort can produce better results.
Also, my guess is that casually mentioning the existence of higher-commitment ways to do morality makes people more likely to do lower-commitment ways. A lot of people I know are vegan, and I don't think I could be vegan, but constantly being around them has shamed me into inconsistent pescetarianism. If I can make someone who currently does nothing get to the point where they eat ethically raised meat, I'll consider that a victory.
(though I'm also concerned about this because companies are really good at saying "We have ethically raised meat!" while making as few concessions to actually raising the meat ethically as possible, and you've got to be a real expert to navigate this space, whereas just not eating chicken is hard to get wrong)
I'm concerned that "cage free" could be a lie, but so could lots of things at the grocery store. At some point I have to trust the people doing oversight.
The raising of meat is also a lot more legible than carbon offsets. Someone can, in theory, inspect to make sure that chickens destined to live outside of a cage really are outside of a cage.
https://www.tandfonline.com/doi/full/10.1080/0020174X.2019.1658631
I'm a utilitarian and very very vegan-sympathetic, so I think I'm qualified to answer your question.
I think a big factor here is the future potential to be a net-positive society. That is to say, even if we're in the red today, our best option is to try and work toward a world where that's no longer the case.
This isn't *necessarily* wishful thinking. If humanity doesn't totally wipe itself out at any point, and advances in technology allow us to expand our total population, the future could contain ~some very high number of humans and other sentient beings who experience much more positive utility and much less negative utility than we do. If that's true, then it's "worth" enduring all the negative stuff now to try and achieve that future.
(Sidenote: If you really want to kill all humans, accelerating climate change is not a great way to go about it. That probably just causes resource scarcity, international tension, and wars, which usually don't kill *all* humans. Killing *all* humans is really hard, which might be one reason utilitarians don't try to go for it.)
There are a lot of anthropologists and others who argue that most of human history was far worse for most people than pre-agricultural pre-history. However, many of these people still think that human life in the past century or two is far better than pre-agricultural life, so that the net result of agriculture has been positive, even though it was net negative for many centuries (and millennia).
The hope is that even if we are still in fact net negative once we account for factory farming, we might still end up net positive. I think this all requires much more empirical investigation.
i'm partial to this argument and i've heard it stated in various ways before.
however, are we really depriving future individuals who haven't existed of a good existence? they aren't ever going to be present to lament the opportunity cost of being denied a good existence. meanwhile, there are presently countless individuals suffering violently. to cease existence for all would end that while not imposing any real threat to the Not Yet Born.
i'm of course not an advocate for the genocide of our planet, i'm just trying to take the above argument at its best and see where that leads us.
I admit that questions about the "rights" of nonexistent entities can be strange and counterintuitive.
I'd say the issue with the extinction route is *not* that it deprives particular hypothetical future people of their existence and "causes them suffering" in some weird counterfactual sense. The problem is that it results in a universe with no people in it (a "net zero" on the utility scale), as opposed to a universe where ~trillions of people get to lead net positive lives (a very high positive on the utility scale). The problem isn't the lament over nonexistence - it's the nonexistence itself.
So I would support humanity "sticking it out" for the same reason I support anything else with a delayed payoff.
(As far as stuff like the Repugnant Conclusion goes, I'd use something like the thought process Scott goes through in section five of this essay: https://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/ )
I don't think there is an error in this reasoning. It's at least one reason people are concerned about future AI causing human extinction. More typically in the circles of people surrounding Scott, the concern is they'll be programmed with a naive objective function that optimizes for something amoral that inadvertently leads to human extinction. But it's also possible they're programmed perfectly and their objective function tells them the world will be better off without humans and that will be correct.
The objection from a human is just hey, I'm human, and I don't care about the non-human world being better off. The good of my own species is infinitely more important than the good of all other conceivable sentient beings, not for any rational or utilitarian reason, but just because.
Utils need to be biased to make sense. There's no point making everyone else happy if you get screwed over. An unbiased calculus is not a human morality. You, your family, your country, and your species get first claim on utils.
It's the trolley problem. Your brother on one side, two randos on the other. The good human will save his brother and kill the two randos. This is morally acceptable.
The person who refuses to save his brother is a heartless jerk.
Well, strictly speaking, utilitarians say that it is not OK to save 10 lives and then murder one person, because this sequence of actions is morally inferior to the sequence where you save 10 people and then don't murder the extra one (unless, of course, you are in the situation where you can only save the 10 people by murdering the extra one, in which case it is clearly OK to murder one person in order to save 10).
The weirdness comes into utilitarianism from the better-known problem of it being unreasonably demanding of its adherents. If utilitarians who cared about animal welfare really followed utilitarianism to the extreme, and they thought that a chicken life was worth more than $6, they would donate all of their available money to effective animal charities and also not eat any meat. In practice, very few people are actually willing to spend all of their disposable income on charity; they spend some of it on themselves. You then end up with weird situations where they can make deals with themselves - donating more to charity in exchange for doing something morally dubious - such that the outcome is better by both their personal standards and the world's standards. This clearly isn't the optimal set of moral actions, but it is perhaps better than what they would have done otherwise.
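For concreteness, here is a minimal sketch of that "deal with yourself" arithmetic. The $6-per-chicken offset figure comes from this thread; the annual consumption number is purely an assumption for illustration. It only shows that such a deal can beat the status quo while still falling short of doing both good things.

```python
# Toy sketch of the "deal with yourself" logic above. All numbers are illustrative:
# the $6 offset figure comes from this thread; the consumption figure is made up.

OFFSET_COST_PER_CHICKEN = 6.0    # dollars per chicken-life offset (assumed)
CHICKENS_PER_YEAR = 25           # hypothetical annual consumption

def net_effect(extra_donation_dollars, chickens_still_eaten):
    """Net change in 'chicken-equivalents' relative to doing nothing at all."""
    chickens_offset = extra_donation_dollars / OFFSET_COST_PER_CHICKEN
    return chickens_offset - chickens_still_eaten

# The deal: keep eating chicken, but donate $300/year you otherwise wouldn't have.
print(net_effect(300.0, CHICKENS_PER_YEAR))   # +25.0 -> better than the status quo (0)

# The non-deal alternative: donate the $300 *and* stop eating chicken.
print(net_effect(300.0, 0))                   # +50.0 -> better still, which is the point
```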
I don't think it's crazy (though coupling those actions unnecessarily would be), and you would have to be very very careful that you weren't ignoring other effects.
But how is this any different from the standard trolley problem? You either do nothing, causing two people to die through inaction, or you switch the tracks, causing one person to die through your actions.
The point is that by pure utilitarian standards neither the person who donated $6000 and killed their ex-wife nor the average person is very good. Both of them are letting a lot of people who they could have saved die. Things only become unintuitive when you allow for people not donating nearly as much to charity as they ought to.
Saving someone is the same thing as not letting someone die. You could equally well describe the trolley problem as do nothing vs. pull the lever, saving two people but killing one.
The utilitarian will agree with the intuitive judgment if you're in the real world, because in the real world, the kind of person that kills their ex-wife tends to do lots of other directly brutal things, and either donating to charity or not doesn't usually correlate very strongly with other behaviors that make lives better or worse. It's only in the very tight confines of a thought experiment where you've stipulated that the people are in fact otherwise identical that the utilitarian will judge the one person better than the other, and this is no longer counterintuitive, because we have no intuitions about extremely weird cases like that.
Intuitively, the person who donated $6000 and killed their ex-wife seems worse, but why would we give that intuition any weight once we've examined the situation? If the donation indeed saved more lives than they took, would it have been better if they had done neither? I don't think so - it's better to save more lives on net.
Though the magnitude of the real-life benefit of donations is more uncertain, and donations don't viscerally feel like they're doing a lot of good. So even large donations to effective charities might not feel on par with something like pulling some kids out of a burning building.
So let's compare it to that. You know someone who ran into a burning building with their ex-wife, pulled two children out from under a collapsed beam, pushed the ex-wife into the smoke to her death, and ran out carrying the kids. Assuming the kids would've died otherwise, is this better than if he had stayed outside? I think it is.
The more common thought experiment for what you're describing is the surgeon who kills a single patient and harvests all the organs to save nine others, or some number larger than one.
Typically, nobody says this would be okay, but why on strictly utilitarian grounds? Assuming the practice is widely known to occur, it could have a chilling effect where nobody ever voluntarily seeks medical treatment, but to one surgeon making the decision in private, reasonably certain no one will ever find out, it's hard to see why they shouldn't be killing patients and harvesting their organs.
Your example is obvious: the murder is wrong because you can do both - not murder and donate to save lives. Neither precludes the other.
But in the animal welfare example, at least as considered above, the actions do preclude each other. Though they do so only because they're framed that way: "if we're going to eat one meat or the other, which should we eat?"
The obvious resolution is less of both, less chicken and less beef. But staying within the framework above - one or the other, not less in sum - the utilitarian calculus seems appropriate.
https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
Utils are weighed. You come first, then your family, then your friends, then your community, city, province, country, civilization, species, class, and kingdom. So utils for you and your friends are worth waaay more than utils for cows or dolphins, which are worth more than chickens, which are worth more than invertebrates.
It's totally arbitrary, and everyone stands on their own moral perspective. The point is to build a society, and use that to define morality. Function and economics should take the lead.
Morality doesn't work that way because humans aren't utilitarians. A society reliably* running on this principle *would* work better than ours.
* preventing net negative utility from psychological costs, cheaters, unfavorable incentives, etc
Maybe it would, but members of that society wouldn't resemble humans in the slightest.
No. I speak as someone who's eaten everything from bugs in Chiang Mai to whale meat in Osaka. I especially have no problem with eating animals that have eaten other animals.
For the ethical vegetarians here, would animals having some amount of consciousness change your mind about animals that eat other animals that have consciousness?
I am an animal, by definition, with consciousness. If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness) then so do I (I like lobster).
If animals have consciousness, and just like me they have consciousness at various levels on a spectrum, then they take their place in the spectrum of moral and evolutionary choices, just like me. Hence, I'm no better than they are, and as an omnivore my dietary choices are my own, including the animals I raise for food (some ant species farm aphids, so the natural world has parallels).
If on the other hand they don't have consciousness, and I do, and an argument can be made that my humanness puts me outside of the animal spectrum of moral or evolutionary choices, then none of this matters. They are animals and I'm a human, and my moral choices regarding them don't register at all morally.
I'm either an animal, or I'm not, or I'm both. And in all cases my dietary choices fall into one of two morality spectrums that equally justify a dietary choice to eat other animals.
Well, in option 1, it's only OK to rape non-human animals. In option 2, I'm not sure.
"I don't have the mastery of ethics to eloquently argue this..."
Whether you think you do or don't really is moot because, quite frankly, I don't think of dietary choices as an ethical issue. OTHER PEOPLE make it an ethical issue. For the vast majority of people it isn't, which makes arguing about it 'ethically' difficult and filled with crevasses and pitfalls as those other people try to convince someone with common sense that their Berkshire hog exists in the same moral spectrum as their great-grandma Kathleen.
Like I said, for the overwhelming majority of people, this isn't an ethical issue at all.
> If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness) then so do I (I like lobster).
The project of creating/adopting a moral framework isn't just the replication of whatever behaviors we see in the natural world; it's an attempt to actively modify the world by examining what principles create what we think of as "good" in the world then living/acting according to those principles.
"Animals also do this so I get to" is an extremely poor justifying moral principle in that it could be used to justify nearly any behavior -- eating your children, eating your parents, raping whomever, beating/killing rivals to maximize your sexual chances, etc etc. I assume you don't do all of these things as well and would find most of them morally repugnant, so I assume this framework doesn't guide most of your moral decision-making. Why do you use this principle/framework ("animals do it so I get to") it in this case but not others?
You may have failed to notice that I didn't say which one of the morality spectrums I subscribe to. The above was merely a thought experiment to demonstrate the fundamental logic that BOTH morality spectrums justify a human's decision to eat other animals. And they do.
Your personal beliefs are not particularly relevant to the question of whether "animals get to do this, so I do too" is a good moral principle to follow under any particular framework. If it was a thought experiment, continue to consider it one, and use it to actually engage with the points I brought up in the previous post re what morality is for, what it does, and that someone following this principle would also find themselves morally justified in any of the behaviors I mentioned.
Where did you get the idea that there are two and only two "morality spectrums"? You seem to be defining morality in a different way than many of us.
"If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness) than so do I (I like lobster)."
I'm not sure where this "get to" is coming from. Are you saying that someone has certified that whales are doing everything morally right, and therefore if you do things no better than whales do, then you are therefore also morally right?
I would say that, morally speaking, it's better if conscious beings have better experiences, and beings with preferences get more of their more important preferences satisfied. Tornadoes, forest fires, whales, and humans all sometimes do things that get in the way of this. Sometimes our trying to stop the bad things ends up making things better, but sometimes it makes things worse. So just because I'm not out trying to stop tornadoes and whales doesn't mean I think it's bad to try to talk a human into changing their behavior.
This is just another instance of a familiar pattern: some bad behaviors are illegal while other bad behaviors aren't, because actively trying to punish people for some bad behaviors is helpful, while actively trying to punish people for others often makes things worse.
"certified whales are morally right"
Only other whales can do that. In every pod of whales, one of the whales is designated as a certified whale morals certifier. Think of it like Iran's morality police, but in whale form. The Cetacean Certification Policewhale (CCP for short) signals whether another whale's actions are moral: one WOO if moral, a double WOO-WOO if immoral.
According to the CCP central statistics office, whales are generally quite moral, except for the southern Arctic and Pacific pods. Those whales are very immoral and have sex outside their pods and bully sharks and sea lions. But the Hawaiian whales are the most immoral of all. So bad they had to come up with a triple WOO-WOO-WOO, which is whale for "so bad they're going to whale hell". Steer clear of the Hawaiian whales, they kill for sport and sell drugs to cuttlefish.
> If animals have consciousness, and just like me they have consciousness at various levels on a spectrum, then they take their place in the spectrum of moral and evolutionary choices, just like me. Hence, I'm no better than they are
I don't see how this follows. Murderous psychopaths are at roughly the same point on the consciousness spectrum as the rest of us, yet it seems undeniable that most people are morally better than murderous psychopaths.
You're merely asserting that they're moral. I've read some of your other posts here and it doesn't seem like you think you need to justify calling something moral, but I don't see why that's true. Why should I accept your claims if you refuse to or cannot justify them?
Yes. The prohibition against cannibalism is primarily cultural (with a strong evolutionary case for disease prevention as well). I would never eat another human, but others have and some probably still do.
Depends on how. If they were simply eating my cadaver, then yes. If someone was attempting to kill me to eat me, well they can certainly try (they'll need a lot of luck and fashionable body armor).
In both cases the moral prohibition is simply cultural: in natural circumstances, the revulsion against cannibalism; in unnatural circumstances, the revulsion against murder.
I have no problem being in the food chain. I'm in it now. So are you. All of us are. I also have no problem being at the top of the food chain. I feel no shame.
Wait a minute, it's a circle. Bacteria have just as much right to claim they're at the "top" of the food chain as you do. Maybe more, since they eat *everything* eventually.
In severe famines, yes. In conditions of extreme deprivation, killing a stranger to feed him/her to yourself, and especially your family and your children, becomes an extremely moral act.
You can see how utility scales with distance and situation by seeing how people react when you put one of their loved ones in front of the trolley problem, and then their friends, and then their countrymen, while putting other things on the other fork.
Please keep in mind that there's a difference between killing an animal and subjecting them to hundreds/thousands of hours of pain and suffering. Are they conscious of being in pain? Yes. Are they conscious of suffering? Yes.
Even if this were true, and I'm very dubious of this, I'm not sure pain morally obliges me to stop eating meat. How and in what way does empathy oblige me morally to care?
The kind that involves the creatures that came up with the concept. Humans. Not the kind that anthropomorphically applies that concept to creatures that don't understand it in any context outside a Disney movie. Animals.
So, what about a proto-human? Something not smart enough to have a real system of ethics, but on the cusp of evolving to be. How many generations away from that would you need them to be to consider them a being deserving of empathy? What if there was some chance it would grow up to be capable of ethical reasoning? Would a 5% chance be enough? 50%? I have further discussion on this, but in the interest of not making a "gotcha" argument, I personally view babies on about the moral level of someone's pet dog, so I'm interested in whether I can argue you into this position.
Proto-human? You mean...like a fetus? Whoa....let's slow down...way down...we don't want to tread into that territory of things "on the cusp of evolving to be." All sorts of uncomfortable questions suddenly pop up like 'where is the cusp' or 'evolving to be' what?
Let's use dolphins instead. Maybe dolphins will someday have their own civilization in like 5-10 million years. At that point I'm willing to consider not eating them. Till then they're fair game.
i can see how being able to come up with a concept of morality and to think morally is relevant in deciding who is a moral agent, but i cannot see how it is relevant in determining who has moral standing. what is it about being able to act morally that makes it matter how we treat a creature?
So you're implying humans have more "moral standing" than animals? If that's what you're implying then you're proving one of my morality spectrums. The spectrum that allows us the ability to use animals how we want and to use them how we want morally.
I think this is one of the few posts I have seen on this site that has genuinely shocked me. I hope that you are arguing this in the hypothetical. Pain is a readily recognisable 'bad', and should not be knowingly imposed on others that will suffer from it for no reason. The species of the 'others' is of no consequence. Empathy is not the issue. Quality of life is.
Besides the "depth" of consciousness, there is another consideration: How often the consciousness is "on". I believe there's pretty widespread agreement humans aren't conscious when they are asleep for instance, but personally, I'm not at all convinced all humans are conscious every moment of wakefulness. Since consciousness can only be observed through introspection ("am I conscious? seems like I am"), is there evidence that humans are conscious outside of their most reflective moments?
All of this is so difficult and muddled that I can't claim a high subjective probability, but I have difficulty fitting what-we-call-consciousness (which I can confirm as a real phenomenon through introspection, though I don't know whether it has all or even many of the properties philosophers tend to assign to the concept) into my reductionist worldview as anything other than a consequence of self-referentiality in a sufficiently complex system. Consequently, I wouldn't expect my consciousness to be there unless I am thinking about whether I am conscious. Furthermore, it seems that the sleeping human brain almost never passes whatever threshold it takes for consciousness to emerge, which tentatively suggests to me that the threshold is pretty high; for what it's worth, waking up from intense concentration/flow subjectively feels similar to drifting in and out of sleep.
Due to all the complications and uncertainties, my subjective probability for this one model is low (<10%), but I consider it more likely than any other individual model that goes into the same amount of detail, and consequently I tentatively operate under the belief that all humans are only rarely what-we-call-conscious, that some humans (such as children) are probably never conscious, and that there is a good chance that no species outside Homo has ever possessed what-we-call-consciousness.
Of course, when it comes to attempts to calculate utility, you ought to factor in all other possibilities, many of which include non-human animal consciousness, even that it is very widespread, and that their experience is MORE vivid than that of humans (possibly due to humans having greater ability to inhibit their emotions).
In the same sense that plants or mechanical automata, which we are almost certain aren't conscious, can react to stimuli analogous to pain all the time. That seems uncontroversial.
Other than that, I'm not at all sure (just as I'm not at all sure humans aren't conscious during most moments of wakefulness). I would perhaps expect torture to repeatedly jolt the pain, and the awareness of the miserable situation you're in, to the center of the brain's attention, which ought to result in conscious experience. That's not unlike my own experience when I'm completely absorbed in something (those moments where you metaphorically, and perhaps actually, don't notice the passage of time) up until I miss a step, hurt myself a little, and notice I definitely am conscious and in pain. On the other hand, I know there are lots of hurts I've eventually been able to tune out even when the cause persists. Presumably, torture methods tend to be torture not only because of the intensity of the pain but partly because they are the most difficult to tune out, so, operating under this model, I would expect the tortured to be conscious a lot of the time, more than people usually are.
This seems very plausible to me. But it also makes me suspect that "consciousness" isn't really the morally significant thing. Preference satisfaction is good, and preference frustration is bad, whether or not someone is conscious of having it. It's better for a parent if they think their kid has died but the kid is actually living a fruitful life, than if they think their kid is living a fruitful life but the kid has actually died - even though the parent will have happier consciousness in the latter case than the former.
i wrote a dialogue in the old style about this exact problem (do we have duties to lifeless objects?), which you may be interested in reading. as a taster, here is one of the epigraph quotes:
> Thales, according to Herodotus, Duris, and Democritus, was the son of Examyas and Cleobulina, and belonged to the Thelidae, the noblest Phoenician descendants of Cadmus and Agenor. […] Aristotle and Hippias say that he attributed souls even to inanimate objects, arguing from the magnet and from amber.
My thought is that somehow we have to settle how strong various preferences are, in order to determine how important it is to satisfy them. (For instance, my desire to live is stronger than the desire of the homophobe that I die, so at least on those two fronts, it is better for me to live.) My thought is that however this works out, in order for a Roomba to have preferences that amount to even a small fraction of those of a chicken, it would have to be much more complex and lifelike than it actually is. But this is very much something that isn't yet worked out, and could conceivably go very weird, as you suggest.
> It's better for a parent if they think their kid has died but the kid is actually living a fruitful life, than if they think their kid is living a fruitful life but the kid has actually died - even though the parent will have happier consciousness in the latter case than the former.
well, it's better _for the child_ if the child is living, but i think it's better _for the parent_ if they think their kid is living even if that is false. of course it's much more important for the child to be alive than it is for the parent to think their child is alive, so it shakes out similarly anyway.
I think if you ask any parent about this, they would say that it is better *for them* if their child lives and they have a false belief about it, than the other way around. They care about how their child is actually doing more than they care about their own experience of it.
How could consciousness be on a spectrum? I mean, if we take as a crude operational definition "consciousness" = "being self-aware" how could you be, say, 20% or 4% self-aware? Seems like you either are or you're not, full stop.
How would I know? So far as I know, I have always been self-aware, because by definition I am not aware of any time when I was not. How could I be? So the question cannot be answered from the inside.
One might attempt to infer an answer from the outside, meaning someone else could try to decide whether I was experiencing self-awareness by examining the evidence of how I look and act. That is a notoriously difficult problem, as for example the problem of comas, "locked-in" states, and badly brain-damaged people (or the cognitive development of children) demonstrates.
But that would not demonstrate the spectrum at all, because *first* you need to write down a definition of "20% self-aware" that can be compared to the evidence. It's that point I'm challenging. I'd like to hear a good definition of "20% self-aware" before I admit it's any less illogical than "20% pregnant".
Have you ever been so tired that you can't maintain coherent thought and even though you are awake and observing your surroundings, you miss multiple details regarding the world around you? Like really obvious things like "who is in the room with me" or "what was I doing just now?" Or perhaps you have been drunk or high to the extent that you barely feel anything. What if an animal lived its life in a fog similar to these states, with no moments of higher thinking/observation/reasoning. Would they still be "equally self aware" as a human in a yes/no spectrum?
To me the answer seems obvious that consciousness is a spectrum.
If the OP meant "higher thinking" = clever and accurate reasoning, rich spectrum of thought, emotional vibrancy -- then he should have said so. But none of that is subsumed under "consciousness." In all of the states you describe, consciousness (meaning self-awareness) exists. I can think of situations in which a person is *not* self-aware (being asleep, or in a coma, or under anesthesia, or arguably with certain kinds of brain injury), and I can think of situations in which a person *is* self-aware, but I cannot think of a single example in which a person is 20% self-aware. None of your examples fits, because none of them involve *self*-awareness -- they are all about being aware of the environment, or having a stronger or more nuanced interaction with it.
Indeed, I would say the variation in awareness of the environment of which you speak can easily be ascribed to *non* self-aware organisms. A pine tree or single-celled organism can be attuned to its surroundings better or worse, can react to them functionally or not, and at different levels of complexity depending on its internal state (e.g. sick or healthy).
I would put a pig's level of self awareness much higher than that. They are more self-aware than the average dog, and have both the ability to see into the future and plan, and also - I believe - a demonstrable sense of humour. If more people got to know pigs personally, no one would eat pork.
It is one of life's great tragedies that pigs are so tasty. If they tasted terrible, we would most certainly be keeping more of them as companion animals. They are somewhat parrot-like in their appetite for companionship, physical pleasure, and pure mischief.
I raised chickens for years and came to the opposite conclusion. And they have noticeable individual personalities. They seemed to be getting a kick out of life. Now ducks, they are stupid.
I strongly concur. I've also had a lot to do with chickens and I believe them to both have individual personalities, and also to be superbly well equipped to be outstanding at all the things that chickens need to do to live good lives and make more chickens.
Chickens approach the problems of life with zest and vigour.
They do, however, tend to suffer from the same problem that sheep do - the bigger the flock/herd, the lower the (apparent) collective IQ. So people will rarely - if ever - see them express their full potential in the huge agglomerations that factory farming demands.
Apologies for giving a very serious answer to a fun comment, but before taking this line of reasoning very seriously, consider how it sounds applied to humans with severe mental handicaps. They have very low IQ, but that doesn't necessarily mean they suffer much less than a higher-IQ person (caveat: I have no expertise in neuroscience or anything like that).
The point being, I think I prefer lines of reasoning centered around physical measurements such as neuron count or connectedness, rather than the perception of intelligence manifested by the organism, since I don't think we have a strong intuition for how behavior maps to neural complexity or internal states.
Isn't that the implicit premise of Scott's blog in general? Of trying to convince or inform anyone of anything? Of, I don't know, waking up instead of not?
There's a chance that nihilism is "correct", but it's absolutely certain that it's a total bore.
That's fair. But I'd argue that any sort of rumination - any attempt to examine the costs, measure them, compare them, and then make an informed decision consistent with one's values - leaves the individual (and, in the case of someone like Scott, the individual's audience) more aware of the consequences of their actions. Surely we agree that that's just an absolute good?
Consider this: what if, say, all ACX readers scale back their chicken consumption, reducing domestic demand, which in turn lowers prices for chicken, and the result is merely that more chicken is eaten in developing countries like China, because its price relative to alternative protein sources has fallen?
A similar line of thinking is frequently deployed to argue that the US shouldn't do anything to reduce its carbon emissions. It's nonsensical there and it's nonsensical here. We can't control what China does, but we can control what we do, and we have a responsibility to exercise that control in service of a better world.
More broadly, this kind of "what if far-fetched second-order consequence X?" thinking is a terrible way to make decisions because a) it's impossible to prove that consequence X won't happen and b) the supply of potential consequences is infinite. If you go down that road you'll never do anything.
I don't see it as nonsensical at all. If one is going to engage in some kind of self-deprivation in order to achieve a desired outcome, but the actions of others preclude our desired outcome, then asking whether self-deprivation is really worth continuing is a perfectly logical question. I think of it as similar to the whole fossil fuel divestment movement: there was no way a bunch of college kids were going to crimp Exxon's profits by getting their university endowment funds to dump the stock; there were too many other willing buyers for it.
As to your second paragraph, the interactions of supply and demand are hardly far-fetched! Think again!
A few responses. The first is that it's impossible to know what the actions of others will be, and whether or not those actions would preclude the desired outcome, but we do know that our desired outcome will never be achieved if we just keep perpetuating the status quo. (You'd have to change your behavior at *some* point, or you'd risk being the last person keeping the factory chicken farms going.) This certainty gap tips the balance in favor of action.
The second is that if we want to begin to influence the behavior of others, taking the action we want them to take ourselves first is a prerequisite. No one's going to listen to utilitarian arguments for vegetarianism coming from someone who's housing a KFC Double Down.
Finally - are you *sure* the fossil fuel divestment movement hasn't accomplished its aims? Exxon's stock is down 32% over the last 5 years, while the S&P500 is up almost exactly 100%. An activist hedge fund (Engine No. 1) just successfully convinced shareholders, over Exxon's objections, to appoint at least 2 new renewables/energy transition-focused directors to the board (which has 12 seats IIRC). Was this the direct result of some college kids staging some sit-ins? Impossible to say. But aren't you glad you live in a world where they did something, rather than nothing?
Your ability to influence the behavior of others is highly limited. It'd be foolish not to recognize this is the case.
As for that last part, yes we can be sure. Exxon is effectively a victim of its own success. Thanks mostly to new extraction methods like fracking and tar sands boiling (or whatever they call it), US oil production soared over the past decade, driving prices down. See here:
Not really germane to your point, but I happen to be an Exxon/Mobil stockholder, and have followed the stock closely for years. My opinion is that the current price of XOM slightly underrepresents its real value. The stock has traded for most of this decade at a P/E of about 10-15, which is conservative and normal. The company had negative earnings this year, but the expectation for next year is that P/E will be back around 10-15. Solid stuff.
The behaviour of the S&P 500 over the last 10 years, however, is absurd, having risen from a P/E of about 15 to its current level of about 45. That is delusional.
I don't think divestment has much power to reduce stock price. There's plenty of money controlled by people willing to move it into undervalued stocks regardless of ethical concerns.
At least from a deontological point of view, it's rather bizarre to argue that it's okay to perform an act if someone else would have done it. Is it okay to rob a jewelry store if someone else would have if you hadn't? A few hundred years ago, would you have accepted the defense "If I didn't do it, someone else would" from a slave trader?
I don't think it's absurd, but the "correct" answer, in my view, is that the U.S. curbing its meat consumption is likely to encourage China to curb its meat consumption at least a little, and that this is a problem that can be worked on concurrently.
I agree with arpanet's other points, but as a more direct response: from what I know about Econ 101 For Dummies, a reduction in demand shouldn't result in a supply increase outside of unusual scenarios. If a bunch of people decide to buy N fewer units of chicken, then yes, the price of chicken will fall to compensate, incentivizing more people to buy it. However, if you do the Econ 101 math, assume your supply and demand curves are mostly linear, and assume that your chickens are spherical, the net change will still lower the quantity of chickens supplied. You'd need a pretty unusual supply curve (one that slopes downward rather than up) to get a net increase in production.
Perhaps there could be some unexpected nth order effect where if the price of chicken falls and the demand for it overseas rises, it'll cause some sort of feedback loop in the popularity of chicken, resulting in a supply increase in the long run. In the absence of evidence to the contrary, though, I think it's best to default to the position that chicken (+other animal products) is an ordinary good that follows the usual rules of supply and demand.
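To make that Econ 101 point concrete, here is a minimal sketch with made-up linear supply and demand curves (all coefficients are illustrative assumptions, not data). It just shows that when demand shifts down, the price falls but the equilibrium quantity falls with it, as long as supply slopes upward.

```python
# Minimal sketch of the Econ 101 point above, using made-up linear curves.
# Demand: Qd = a - b*P   Supply: Qs = c + d*P   (quantities in arbitrary units)

def equilibrium(a, b, c, d):
    """Return (price, quantity) where Qd = Qs for linear curves."""
    price = (a - c) / (b + d)      # from a - b*P = c + d*P
    quantity = c + d * price
    return price, quantity

a, b = 100.0, 2.0   # hypothetical demand intercept and slope
c, d = 10.0, 1.0    # hypothetical supply intercept and slope

p0, q0 = equilibrium(a, b, c, d)
# Some consumers stop buying chicken: model it as the demand intercept falling by 12.
p1, q1 = equilibrium(a - 12.0, b, c, d)

print(f"before: price={p0:.2f}, quantity={q0:.2f}")   # price=30.00, quantity=40.00
print(f"after:  price={p1:.2f}, quantity={q1:.2f}")   # price=26.00, quantity=36.00
# The lower price pulls in some extra buyers, so the drop in quantity (40 -> 36) is
# smaller than the demand shift, but it does not reverse sign while supply slopes up.
```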
If fewer people buy chicken, the economics will move to a different point in the supply-demand curve. If there is less demand, there will be less supply (dead chickens). Others consuming more in response to a lower price is not going to result in a total cancellation of that effect.
That would make sense if the supply of chicken was somehow fixed (e.g. if production were extremely difficult to scale down). I don't see any reason why that would be the case. When supply can go down, lower demand will make it go down.
If it's a "safe" district, then for all practical purposes no. If it's a "swing" district, then your chances of being the deciding vote are actually similar to what they'd be if the election was decided by picking a single random ballot. (Yes, wins by one ballot are rare, but only as rare as you'd expect given the sizes of electorates. They've happened in significant elections before.)
My point is that many of us know that our individual votes don't matter, for all practical purposes, and yet we do it anyway, because it's an action that's consistent with our values.
I'm replying that "my individual vote doesn't matter" is genuinely false for a close-looking election! *Probably* it won't come down to one vote, but in an N-vote election whose polls put a tie within the margin of error, there's more than a 1/N chance that it will in fact come down to one vote. I can go into the Central Limit Theorem if you really want to dispute this.
(And if you're like "but recounts and legal battles", the same marginal reasoning also applies to whether, and how soon, recounts or legal battles get resolved.)
Sure, people often *think* it doesn't matter. But that's different.
The vote percentages have to be very close, or the number of voters small. If there are a million voters and one candidate has a one percent advantage, the probability of a tie is minuscule.
This is one of the common misconceptions: you don't come into Election Day knowing what the vote percentages will be.
It isn't flipping a 51% coin a million times and then adding one. It's flipping a coin a million times when your prior evidence only says that its weight is somewhere from 48% to 52%, and then adding one.
The negligible leverage from the worlds in which the weight is not very near 50-50 is countered by the high leverage in the worlds where the weight is very near 50-50.
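A rough numerical sketch of that argument, under assumed numbers (a uniform prior on the true vote share between 48% and 52%, and a normal approximation to the binomial): the chance that a single vote is decisive comes out well above 1/N, even though an exact tie sounds impossibly unlikely.

```python
import math
import random

# Rough sketch of the argument above, with made-up numbers. If the rest of the
# electorate casts N votes and each goes to candidate A with probability p, your
# vote is decisive when the others split exactly N/2 : N/2. We don't know p; the
# polls only say it's somewhere near 50%, so we average over an (assumed) uniform
# prior on p between 0.48 and 0.52.

def prob_exact_tie(n_voters, p):
    """Normal approximation to the binomial probability of an exact N/2 : N/2 split."""
    mean = n_voters * p
    sd = math.sqrt(n_voters * p * (1.0 - p))
    z = (n_voters / 2.0 - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))  # density * width 1

def prob_decisive(n_voters, p_low=0.48, p_high=0.52, samples=20_000, seed=0):
    """Average the tie probability over the uniform prior on the true vote share."""
    rng = random.Random(seed)
    return sum(prob_exact_tie(n_voters, rng.uniform(p_low, p_high))
               for _ in range(samples)) / samples

N = 1_000_000
print(f"P(your vote is decisive) ~ {prob_decisive(N):.1e}")   # ~2.5e-05 under these assumptions
print(f"1/N                      = {1.0 / N:.1e}")            # 1.0e-06
# With this prior the chance is roughly 25x the naive 1/N figure: worlds where the
# race isn't close contribute almost nothing, but near-50-50 worlds contribute a lot.
```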
I always look at it like a Newcomb problem. I need all of the people who think like me to show up to the polls. I think my personal decision will be reflexively consistent with the group. Therefore I need to go to the polls.
> because it's an action that's consistent with our values.
Not really, it's coordination. If I go vote, that means people with thought processes / beliefs / values closest to mine will also be more likely to go vote. If I don't, that means they're not likely to go either.
IMO spreading awareness of superrationality would fix a lot of stuff.
The best ratio is the bony-eared assfish.
I'm arriving at this comments section late and seeing this line out of context is hilarious.
Glad you enjoyed it! In case you're wondering, that's the actual name for a fish, and it has the smallest brain for its body size of any vertebrate.
I thought it was something like that, thanks for the context!
I mean, I agree, sort of? But this is just not a realistic scalable solution. There aren't that many deer. I say that as someone who is trying (so far unsuccessfully) to learn to hunt without the benefit of family/social knowledge. I certainly plan to replace as much meat as I can with venison, but I think that, in the context of this discussion, it's lack of scalability makes it not very realistic.
I think there are different levels of compromise - vegan -> vegetarian -> person who offsets all their meat -> person who eats beef instead of chicken -> person who purchases ethically raised meat -> person who doesn't do anything. People at level n will always think the people at level n-1 are barbarians, and people at level n+1 are "contorting their diet to reach squeaky-clean moral cleanliness" - I don't think it's possible to escape either concern. My priority here is to make achieving whatever level of moral contribution you're going for as easy/effective as possible, so that a given level of effort can produce better results.
Also, my guess is that casually mentioning the existence of higher-commitment ways to do morality makes people more likely to do lower-commitment ways. A lot of people I know are vegan, and I don't think I could be vegan, but constantly being around them has shamed me into inconsistent pescetarianism. If I can make someone who currently does nothing get to the point where they eat ethically raised meat, I'll consider that a victory.
(though I'm also concerned about this because companies are really good at saying "We have ethically raised meat!" while making as few concessions to actually raising the meat ethically as possible, and you've got to be a real expert to navigate this space, whereas just not eating chicken is hard to get wrong)
I'm concerned that "cage free" could be a lie, but so could lots of things at the grocery store. At some point I have to trust the people doing oversight.
The raising of meat is also a lot more legible than carbon offsets. Someone can, in theory, inspect to make sure that chickens destined to live outside of a cage really are outside of a cage.
https://www.tandfonline.com/doi/full/10.1080/0020174X.2019.1658631
I'm a utilitarian and very very vegan-sympathetic, so I think I'm qualified to answer your question.
I think a big factor here is the future potential to be a net-positive society. That is to say, even if we're in the red today, our best option is to try and work toward a world where that's no longer the case.
This isn't *necessarily* wishful thinking. If humanity doesn't totally wipe itself out at any point, and advances in technology allow us to expand our total population, the future could contain ~some very high number of humans and other sentient beings who experience much more positive utility and much less negative utility than we do. If that's true, then it's "worth" enduring all the negative stuff now to try and achieve that future.
(Sidenote: If you really want to kill all humans, accelerating climate change is not a great way to go about it. That probably just causes resource scarcity, international, tension, and wars, which usually don't kill *all* humans. Killing *all* humans is really hard, which might be one reason utilitarians don't try to go for it.)
There's a lot of anthropologists and others who argue that most of human history was far worse for most people than pre-agricultural pre-history. However, many of these people still think that human life in the past century or two is far better than pre-agricultural life, so that the net result of agriculture has been positive, even though it was net negative for many centuries (and millennia).
The hope is that even if we are still in fact net negative when we consider factory farming, we might still get net positive. I think this all requires much more empirical investigation.
i'm partial to this argument and i've heard it stated in various ways before.
however, are we really depriving future individuals who haven't existed of a good existence? they aren't ever going to be present to lament the opportunity cost of being denied a good existence. meanwhile, there are presently countless individuals suffering violently. to cease existence for all would end that while not imposing any real threat to the Not Yet Born.
i'm of course not an advocate for the genocide of our planet, i'm just trying to take the above argument at its best and see where that leads us.
I admit that questions about the "rights" of nonexistent entities can be strange and counterintuitive.
I'd say the issue with the extinction route is *not* that it deprives particular hypothetical future people of their existence and "causes them suffering" in some weird counterfactual sense. The problem is that it results in a universe with no people in it (a "net zero" on the utility scale), as opposed to a universe where ~trillions of people get to lead net positive lives (a very high positive on the utility scale). The problem isn't the lament over nonexistence - it's the nonexistence itself.
So I would support humanity "sticking it out" for the same reason I support anything else with a delayed payoff.
(As far as stuff like the Repugnant Conclusion goes, I'd use something like the thought process Scott goes through in section five of this essay: https://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/ )
I don't think there is an error in this reasoning. It's at least one reason people are concerned about future AI causing human extinction. More typically in the circles of people surrounding Scott, the concern is they'll be programmed with a naive objective function that optimizes for something amoral that inadvertently leads to human extinction. But it's also possible they're programmed perfectly and their objective function tells them the world will be better off without humans and that will be correct.
The objection from a human is just hey, I'm human, and I don't care about the non-human world being better off. The good of my own species is infinitely more important than the good of all other conceivable sentient beings, not for any rational or utilitarian reason, but just because.
Utils need to be biased to make sense. There's no point making everyone else happy if you get screwed over. It's not a human morality. You, your family, your country, and your species get first claim on utils.
It's the trolley problem. Your brother on one side, two randos on another. The good human will save his brother and kill five randos. This is morally acceptable.
The person who refuses to save his brother is a heartless jerk.
Well strictly speaking utilitarians say that it is not OK to save 10 lives and then murder one person because this sequence of actions is morally inferior to the sequence of actions where you save 10 people and then don't murder the extra one (unless of course you are in the situation where you can only save 10 people by murdering the extra one, in which case it is clearly OK to murder one person in order to save 10).
The weirdness comes into utilitarianism due to the better known problem of it being unreasonably demanding of its adherents. If utilitarians who cared about animal welfare really followed utilitarianism to the extreme and they thought that a chicken life was worth more than $6, they would donate all of their available money to effective animal charities and also not eat any meat. The weirdness comes in when very few people are actually willing to spend all of their disposable income on charity and instead spend some of it on themselves. You then end up with weird situations where they can make deals with themselves whereby they can donate more to charity in exchange for doing something morally dubious but have a situation whereby the outcome is better by both their personal standards and the world's standards. This clearly isn't the optimal set of moral actions, but it is perhaps better than what they would have done otherwise.
I don't think it's crazy (though coupling those actions unnecessarily would be), and you would have to be very very careful that you weren't ignoring other effects.
But how is this any different from the standard trolley problem? You either do nothing, causing 2 people to die due to inaction or you switch the tracks causing one person to die through your actions.
The point is that by pure utilitarian standards neither the person who donated $6000 and killed their ex-wife nor the average person is very good. Both of them are letting a lot of people who they could have saved die. Things only become unintiutive when you allow for people not donating nearly enough to charity as they ought to.
Saving someone is the same thing as not letting someone die. You could equally well describe the trolley problem as do nothing vs. pull the lever, saving two people but killing one.
The utilitarian will agree with the intuitive judgment if you're in the real world, because in the real world, the kind of person that kills their ex-wife tends to do lots of other directly brutal things, and either donating to charity or not doesn't usually correlate very strongly with other behaviors that make lives better or worse. It's only in the very tight confines of a thought experiment where you've stipulated that the people are in fact otherwise identical that the utilitarian will judge the one person better than the other, and this is no longer counterintuitive, because we have no intuitions about extremely weird cases like that.
Intuitively, the person who donated $6000 and killed their ex-wife seems worse, but why would we give that intuition any weight once we've examined the situation? If the donation indeed saved more lives than they took, would it have been better if they had done neither? I don't think so - it's better to save more lives in net.
Though the magnitude of the real-life benefit of donations is more uncertain, and they don't viscerally feel like they're doing a lot of good. So even large donations to effective charities might not feel on par with something like pulling some kids out of a burning building.
So let's compare it to that. You know someone who ran into a burning building with their ex-wife, pulled two children out from under a collapsed beam, pushed the ex-wife into the smoke to her death, and ran out carrying the kids. Assuming the kids would've died otherwise, is this better than if he had stayed outside? I think it is.
The more common thought experiment for what you're describing is the surgeon who kills a single patient and harvests all the organs to save nine others, or some number larger than one.
Typically, nobody says this would be okay, but why on strictly utilitarian grounds? Assuming the practice is widely known to occur, it could have a chilling effect where nobody ever voluntarily seeks medical treatment, but to one surgeon making the decision in private, reasonably certain no one will ever find out, it's hard to see why they shouldn't be killing patients and harvesting their organs.
Your example is obvious, the murder is wrong because you can do both - not-murder and donate to save lives. Neither precludes the other.
But in the animal welfare example, at least as considered above, the actions do preclude each other. Though they do so because they're framed that way, "if we're going to eat one meat or the other, which should we eat?".
The obvious resolution is less of both, less chicken and less beef. But staying within the framework above of one or other and not less in sum, the utilitarian calculus seems appropriate.
https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
Utils are weighed. You come first, then your family, then your friends, then your community, city, province, country, civilization, species, class, and kingdom. So utils for you and your friends are worth waaay more than utils for cows or dolphins, which are worth more than chickens, which are worth more than invertebrates.
It's totally arbitrary, and everyone stands on their own moral perspective. The point is to build a society, and use that to define morality. Function and economics should take the lead.
Morality doesn't work that way because humans aren't utilitarians. A society reliably* running on this principle *would* work better than ours.
* preventing net negative utility from psychological costs, cheaters, unfavorable incentives, etc
Maybe it would, but members of that soicety wouldn't resemble humans in the slightest.
No. I speak as someone who's eaten everything from bugs in ChangMai to whale meat in Osaka. I especially have no problem with eating animals that have eaten other animals,
For the ethical vegetarians here, would animals having some amount of consciousness change your mind about animals that eat other animals that had consciousness?
I am an animal, by definition, with consciousness. If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness) than so do I (I like lobster).
If animals have consciousness, and just like me they have consciousness at various levels on a spectrum, then they take their place in the spectrum of moral and evolutionary choices, just like me. Hence, I'm no better than they are, and as an omnivore my dietary choices are my own, including the animals I raise for food (some ant species farm aphids, so the natural world has allegories).
If on the other hand they don't have consciousness, and I do, and an argument can be made that my humanness puts me outside of the animal spectrum of moral or evolutionary choices, than none of this matters. They are animals and I'm a human and my moral choices regarding them doesn't register at all morally.
I'm either an animal, or I'm not, or I'm both. And in all cases my dietary choices fall into one of two morality spectrums that equally justify a dietary choice to eat other animals.
Well, in option 1, it's only OK to rape non-human animals. In option 2, I'm not sure.
"I don't have the mastery of ethics to eloquently argue this..."
Whether you think you do or don't really is moot because, quite frankly, I don't think of dietary choices as an ethical issue. OTHER PEOPLE make it an ethical issue. For the vast majority of people it isn't, which makes arguing about it 'ethically' difficult and filled with crevasses and pitfalls as those other people try to convince someone with common sense that their berkshire hog exists in the same moral spectrum as their great-grandma Kathleen.
Like I said, for the overwhelming majority of people, this isn't an ethical issue at all.
> If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness), then so do I (I like lobster).
The project of creating/adopting a moral framework isn't just the replication of whatever behaviors we see in the natural world; it's an attempt to actively modify the world by examining what principles create what we think of as "good" in the world then living/acting according to those principles.
"Animals also do this so I get to" is an extremely poor justifying moral principle in that it could be used to justify nearly any behavior -- eating your children, eating your parents, raping whomever, beating/killing rivals to maximize your sexual chances, etc etc. I assume you don't do all of these things as well and would find most of them morally repugnant, so I assume this framework doesn't guide most of your moral decision-making. Why do you use this principle/framework ("animals do it so I get to") it in this case but not others?
You may have failed to notice that I didn't say which one of the morality spectrums I ascribe to. The above was merely a thought experiment to demonstrate the fundamental logic that BOTH morality spectrums justify a human's decision to eat other animals. And they do.
Your personal beliefs are not particularly relevant to the question of whether "animals get to do this, so I do too" is a good moral principle to follow under any particular framework. If it was a thought experiment, continue to consider it one, and use it to actually engage with the points I brought up in the previous post re what morality is for, what it does, and that someone following this principle would also find themselves morally justified in any of the behaviors I mentioned.
Where did you get the idea that there are two and only two "morality spectrums"? You seem to be defining morality in a different way than many of us.
"If whales have consciousness and get to consume hundreds of millions of tons yearly of invertebrates (also with consciousness) than so do I (I like lobster)."
I'm not sure where this "get to" is coming from. Are you saying that someone has certified that whales are doing everything morally right, and therefore if you do things no better than whales do, then you are therefore also morally right?
I would say that, morally speaking, it's better if conscious beings have better experiences, and beings with preferences get more of their more important preferences satisfied. Tornadoes, forest fires, whales, and humans all sometimes do things that get in the way of this. Sometimes our trying to stop the bad things ends up making things better, but sometimes it makes things worse. So just because I'm not out trying to stop tornadoes and stop whales doesn't mean I think it's bad to try to talk a human into changing their behavior.
This is just another instance of the same thing we see that some bad behaviors are illegal, while other bad behaviors aren't, because actively trying to punish people for some bad behaviors is helpful, while actively trying to punish people for other bad behaviors often makes things worse.
"certified whales are morally right"
Only other whales can do that. In every pod of whales one of the whales is designated as a certified whale morals certifier. Think of it like Iran's morality police, but in whale form. The Cetacean Certification Policewhale (CCP for short) signifies if another whale's actions are moral with one WOO if moral and a double WOO-WOO if immoral.
According to the CCP central statistics office, whales are generally quite moral, except for the southern Arctic and Pacific pods. Those whales are very immoral and have sex outside their pods and bully sharks and sea lions. But the Hawaiian whales are the most immoral of all. So bad they had to come up with a triple WOO-WOO-WOO, which is whale for "so bad they're going to whale hell". Steer clear of the Hawaiian whales, they kill for sport and sell drugs to cuttlefish.
Wait, so is the CCP a suprapod agency, with its certified whale morals certifier implanted within the pods to observe and report back to the CCP?
> If animals have consciousness, and just like me they have consciousness at various levels on a spectrum, then they take their place in the spectrum of moral and evolutionary choices, just like me. Hence, I'm no better than they are
I don't see how this follows. Murderous psychopaths are at roughly the same point on the consciousness spectrum, yet it seems undeniable that most people are morally better than murderous psychopaths.
Animals eat other animals and that is moral and utilitarian behavior. I am an animal. My eating animals is also a moral and utilitarian behavior.
You're merely asserting that they're moral. I've read some of your other posts here and it doesn't seem like you think you need to justify calling something moral, but I don't see why that's true. Why should I accept your claims if you refuse to or cannot justify them?
Since humans eat other animals, does that make them fair game too?
Yes. The prohibition against cannibalism is primarily cultural (with a strong evolutionary case for disease prevention as well). I would never eat another human, but others have and some probably still do.
Depends on how. If they were simply eating my cadaver, then yes. If someone was attempting to kill me to eat me, well they can certainly try (they'll need a lot of luck and fashionable body armor).
In both cases the moral prohibition is simply cultural: in the natural circumstance, the revulsion against cannibalism; in the unnatural circumstance, the revulsion against murder.
I have no problem being in the food chain. I'm in it now. So are you. All of us are. I also have no problem being at the top of the food chain. I feel no shame.
Wait a minute, it's a circle. Bacteria have just as much right to claim they're at the "top" of the food chain as you do. Maybe more, since they eat *everything* eventually.
In severe famines, yes. In conditions of extreme deprivation, killing a stranger to feed him/her to yourself, and especially your family and your children, becomes an extremely moral act.
You can expect the other guy to kill you to feed you to his kids too. Both sides are acting morally; this is a conflict of interest.
You can see how utility scales with distance and situation by seeing how people react when you put one of their loved ones in front of the trolley problem, and then their friends, and then their countrymen, while putting other things on the other fork.
Please keep in mind that there's a difference between killing an animal and subjecting them to hundreds/thousands of hours of pain and suffering. Are they conscious of being in pain? Yes. Are they conscious of suffering? Yes.
Even if this were true, and I'm very dubious of this, I'm not sure pain morally obliges me to stop eating meat. How and in what way does empathy oblige me morally to care?
Just out of curiosity, which sources of morality do you find legitimate?
The kind that involves the creatures that came up with the concept. Humans. Not the kind that anthropomorphically applies that concept to creatures that don't understand it in any context outside a Disney movie. Animals.
So, what about a proto-human? Something not smart enough to have a real system of ethics, but on the cusp of evolving to be. How many generations away from that would they need to be for you to consider them a being deserving of empathy? What if there was some chance it would grow up to be capable of ethical reasoning? Would a 5% chance be enough? 50%? I have further discussion on this, but in the interest of not making a "gotcha" argument: I personally view babies as being on about the moral level of someone's pet dog, so I'm interested in whether I can argue you into this position.
Proto-human? You mean...like a fetus? Whoa....let's slow down...way down...we don't want to tread into that territory of things "on the cusp of evolving to be." All sorts of uncomfortable questions suddenly pop up like 'where is the cusp' or 'evolving to be' what?
Let's use dolphins instead. Maybe dolphins will someday have their own civilization in like 5-10 million years. At that point I'm willing to consider not eating them. Till then they're fair game.
i can see how being able to come up with a concept of morality and to think morally is relevant in deciding who is a moral agent, but i cannot see how it is relevant in determining who has moral standing. what is it about being able to act morally that makes it matter how we treat a creature?
"moral standing"
So you're implying humans have more "moral standing" than animals? If that's what you're implying, then you're proving one of my morality spectrums: the spectrum that allows us to use animals how we want, and to do so morally.
I'm a Confucian, so Common Sense is a good place to start. Oh, and you need to get a functioning, prosperous, strong society at the end of it.
I think this is one of the few posts I have seen on this site that has genuinely shocked me. I hope that you are arguing this in the hypothetical. Pain is a readily recognisable 'bad', and should not be knowingly imposed on others that will suffer from it for no reason. The species of the 'others' is of no consequence. Empathy is not the issue. Quality of life is.
Besides the "depth" of consciousness, there is another consideration: How often the consciousness is "on". I believe there's pretty widespread agreement humans aren't conscious when they are asleep for instance, but personally, I'm not at all convinced all humans are conscious every moment of wakefulness. Since consciousness can only be observed through introspection ("am I conscious? seems like I am"), is there evidence that humans are conscious outside of their most reflective moments?
All of this is so difficult and muddled I can't claim a high subjective probability, but I have difficulty fitting what-we-call-consciousness (which I can confirm as a real phenomenon through introspection, but don't know whether it has all or even many of the properties philosophers tend to assign to the concept) into my reductionist worldview as anything other than a consequence of self-referentiality in a sufficiently complex system, and consequently, I wouldn't expect my consciousness to be there unless I am thinking about whether I am conscious. Furthermore, it seems that the sleeping human brain almost never passes whatever threshold it takes for consciousness to emerge, which to me tentatively suggests that the threshold is pretty high, and for what it's worth, waking up from intense concentration/flow subjectively feels similar to drifting in and out of sleep.
Due to all the complications and uncertainties, my subjective probability for this one model is low (<10%), but I consider it more likely than any other individual model that goes into the same amount of detail, and consequently I tentatively operate under the belief that all humans are only rarely what-we-call-conscious, that some humans (such as children) are probably never conscious, and that there is a good chance that no species outside Homo has ever possessed what-we-call-consciousness.
Of course, when it comes to attempts to calculate utility, you ought to factor in all other possibilities, many of which include non-human animal consciousness: even the possibility that it is very widespread, and that their experience is MORE vivid than that of humans (possibly due to humans having a greater ability to inhibit their emotions).
But when a human is being tortured, how much of the time would you estimate they are suffering?
In the same sense that plants, or mechanical automata we are almost certain aren't conscious, can react to stimuli analogous to pain: all the time. That seems uncontroversial.
Other than that, I'm not at all sure (just as I'm not at all sure that humans aren't conscious during most moments of wakefulness). I would perhaps expect torture to repeatedly jolt the pain, and the awareness of the miserable situation you're in, to the center of the brain's attention, which ought to result in conscious experience - not unlike my own experience when I'm completely absorbed in something (those moments where you metaphorically and perhaps actually don't even notice the passage of time) up until I miss a step and hurt myself a little and notice I definitely am conscious and in pain. On the other hand, I know there are lots of hurts I've eventually been able to tune out even when the cause persists. Presumably, torture methods tend to be torture not only because of the intensity of the pain but partly because they are the most difficult to tune out, so, operating under this model, I would expect the tortured to be conscious a lot of the time - more than people usually are.
This seems very plausible to me. But it also makes me suspect that "consciousness" isn't really the morally significant thing. Preference satisfaction is good, and preference frustration is bad, whether or not someone is conscious of having it. It's better for a parent if they think their kid has died but the kid is actually living a fruitful life, than if they think their kid is living a fruitful life but the kid has actually died - even though the parent will have happier consciousness in the latter case than the former.
i wrote a dialogue in the old style about this exact problem (do we have duties to lifeless objects?), which you may be interested in reading. as a taster, here is one of the epigraph quotes:
> Thales, according to Herodotus, Duris, and Democritus, was the son of Examyas and Cleobulina, and belonged to the Thelidae, the noblest Phoenician descendants of Cadmus and Agenor. […] Aristotle and Hippias say that he attributed souls even to inanimate objects, arguing from the magnet and from amber.
>
> – Diogenes Laertius
=> https://www.erichgrunewald.com/posts/auderico/
My thought is that somehow we have to settle how strong various preferences are, in order to determine how important it is to satisfy them. (For instance, my desire to live is stronger than the desire of the homophobe that I die, so at least on those two fronts, it is better for me to live.) My thought is that however this works out, in order for a Roomba to have preferences that amount to even a small fraction of those of a chicken, it would have to be much more complex and lifelike than it actually is. But this is very much something that isn't yet worked out, and could conceivably go very weird, as you suggest.
> It's better for a parent if they think their kid has died but the kid is actually living a fruitful life, than if they think their kid is living a fruitful life but the kid has actually died - even though the parent will have happier consciousness in the latter case than the former.
well, it's better _for the child_ if the child is living, but i think it's better _for the parent_ if they think their kid is living even if that is false. of course it's much more important for the child to be alive than it is for the parent to think their child is alive, so it shakes out similarly anyway.
I think if you ask any parent about this, they would say that it is better *for them* if their child lives and they have a false belief about it, than the other way around. They care about how their child is actually doing more than they care about their own experience of it.
yes, on second thought i think you are right.
How could consciousness be on a spectrum? I mean, if we take as a crude operational definition "consciousness" = "being self-aware" how could you be, say, 20% or 4% self-aware? Seems like you either are or you're not, full stop.
How would I know? So far as I know, I have always been self-aware, because by definition I am not aware of any time when I was not. How could I be? So the question cannot be answered from the inside.
One might attempt to infer an answer from the outside, meaning someone else could try to decide whether I was experiencing self-awareness by examining the evidence of how I look and act. That is a notoriously difficult problem, as, for example, comas, "locked-in" states, and badly brain-damaged people (or the cognitive development of children) demonstrate.
But that would not demonstrate the spectrum at all, because *first* you need to write down a definition of "20% self-aware" that can be compared to the evidence. It's that point I'm challenging. I'd like to hear a good definition of "20% self-aware" before I admit it's any less illogical than "20% pregnant".
Have you ever been so tired that you can't maintain coherent thought and even though you are awake and observing your surroundings, you miss multiple details regarding the world around you? Like really obvious things like "who is in the room with me" or "what was I doing just now?" Or perhaps you have been drunk or high to the extent that you barely feel anything. What if an animal lived its life in a fog similar to these states, with no moments of higher thinking/observation/reasoning. Would they still be "equally self aware" as a human in a yes/no spectrum?
To me the answer seems obvious that consciousness is a spectrum.
If the OP meant "higher thinking" = clever and accurate reasoning, rich spectrum of thought, emotional vibrancy -- then he should have said so. But none of that is subsumed under "consciousness." In all of the states you describe, consciousness (meaning self-awareness) exists. I can think of situations in which a person is *not* self-aware (being asleep, or in a coma, or under anesthesia, or arguably with certain kinds of brain injury), and I can think of situations in which a person *is* self-aware, but I cannot think of a single example in which a person is 20% self-aware. None of your examples fits, because none of them involve *self*-awareness -- they are all about being aware of the environment, or having a stronger or more nuanced interaction with it.
Indeed, I would say the variation in awareness of the environment of which you speak can easily be ascribed to *non*-self-aware organisms. A pine tree or single-celled organism can be attuned to its surroundings better or worse, can react to them functionally or not, and at different levels of complexity depending on its internal state (e.g. sick or healthy).
Yes.
I think they're sentient/conscious. I imagine something like a dreamlike blur of sensations for chickens and about half-awake-human level for a pig.
FTR I eat meat, but minimize consumption. The cow/chicken mass thing had occurred to me too.
I would put a pig's level of self awareness much higher than that. They are more self-aware than the average dog, and have both the ability to see into the future and plan, and also - I believe - a demonstrable sense of humour. If more people got to know pigs personally, no one would eat pork.
It is one of life's great tragedies that pigs are so tasty. If they tasted terrible, we would most certainly be keeping more of them as companion animals. They are somewhat parrot-like in their appetite for companionship, physical pleasure, and pure mischief.
I don't have anything of substance to add, but your description made me chuckle. :)
I dunno. I saw a chicken run the base pads on a little baseball game for two bits at a tourist trap in Rapid City SD. Let’s see a cow do that!
Don't forget that chicken that lived (and acted pretty normal) for years after it had most of its brain chopped off.
Yeah, a chicken's main evolutionary advantage is that they're so delicious this planet's apex predator will defend them from all comers.
Werner Herzog's take on this is spot-on. https://www.youtube.com/watch?v=QhMo4WlBmGM
I raised chickens for years and came to the opposite conclusion. And they have noticeable individual personalities. They seemed to be getting a kick out of life. Now ducks, they are stupid.
I strongly concur. I've also had a lot to do with chickens and I believe them to both have individual personalities, and also to be superbly well equipped to be outstanding at all the things that chickens need to do to live good lives and make more chickens.
Chickens approach the problems of life with zest and vigour.
They do, however, tend to suffer from the same problem that sheep do - the bigger the flock/herd, the lower the (apparent) collective IQ. So people will rarely - if ever - see them express their full potential in the huge agglomerations that factory farming demands.
Apologies for giving a very serious answer to a fun comment, but before taking this line of reasoning very seriously, consider how it sounds applied to humans with severe mental handicaps. They have very low IQ, but that doesn't necessarily mean they suffer much less than a higher-IQ person (caveat: I have no expertise in neuroscience or anything like that).
The point being, I think I prefer lines of reasoning centered around physical measurements such as neuron count or connectedness, rather than the perception of intelligence manifested by the organism, since I don't think we have a strong intuition for how behavior maps to neural complexity or internal states.
That, and being 10x as smart as a rock doesn't mean your moral value is only 10x that of a rock.
Is there any actual evidence that individual voting changes affect which candidates get elected?
Isn't that the implicit premise of Scott's blog in general? Of trying to convince or inform anyone of anything? Of, I don't know, waking up instead of not?
There's a chance that nihilism is "correct" but it's absolutely certain that it's a total bore
Another perspective might be that these utilitarian exercises feel like a Rube Goldberg machine of rumination on the unknowable.
That's fair. But I'd argue that any sort of rumination - any attempt to examine the costs, measure them, compare them, and then make an informed decision consistent with one's values - leaves the individual (and, in the case of someone like Scott, the individual's audience) more aware of the consequences of their actions. Surely we agree that that's just an absolute good?
Consider this: what if, say, all ACX readers scale back their chicken consumption, reducing domestic demand, which in turn lowers prices for chicken, and the result is merely that more chicken is eaten in developing countries like China, because its price relative to alternative protein sources has fallen?
A similar line of thinking is frequently deployed to argue that the US shouldn't do anything to reduce its carbon emissions. It's nonsensical there and it's nonsensical here. We can't control what China does, but we can control what we do, and we have a responsibility to exercise that control in service of a better world.
More broadly, this kind of "what if far-fetched second-order consequence X?" thinking is a terrible way to make decisions because a) it's impossible to prove that consequence X won't happen and b) the supply of potential consequences is infinite. If you go down that road you'll never do anything.
I don't see it as nonsensical at all. If one is going to engage in some kind of self-deprivation in order to achieve a desired outcome, but the actions of others preclude that outcome, then asking whether the self-deprivation is really worth continuing is a perfectly logical question. I think of it as similar to the whole fossil fuel divestment movement: there was no way a bunch of college kids were going to crimp Exxon's profits by getting their university endowment funds to dump the stock; there were too many other willing buyers for it.
As to your second paragraph, the interactions of supply and demand are hardly far-fetched! Think again!
There are a couple ways to answer this.
The first is that it's impossible to know what the actions of others will be, and whether or not those actions would preclude the desired outcome, but we do know that our desired outcome will never be achieved if we just keep perpetuating the status quo. (You'd have to change your behavior at *some* point, or you'd risk being the last person keeping the factory chicken farms going.) This certainty gap tips the balance in favor of action.
The second is that if we want to begin to influence the behavior of others, taking the action we want them to take ourselves first is a prerequisite. No one's going to listen to utilitarian arguments for vegetarianism coming from someone who's housing a KFC Double Down.
Finally - are you *sure* the fossil fuel divestment movement hasn't accomplished its aims? Exxon's stock is down 32% over the last 5 years, while the S&P500 is up almost exactly 100%. An activist hedge fund (Engine No. 1) just successfully convinced shareholders, over Exxon's objections, to appoint at least 2 new renewables/energy transition-focused directors to the board (which has 12 seats IIRC). Was this the direct result of some college kids staging some sit-ins? Impossible to say. But aren't you glad you live in a world where they did something, rather than nothing?
Your ability to influence the behavior of others is highly limited. It'd be foolish not to recognize this is the case.
As for that last part, yes, we can be sure. Exxon is effectively a victim of its own success. Thanks mostly to new extraction methods like fracking and tar sands boiling (or whatever they call it), US oil production soared over the past decade, driving prices down. See here:
https://www.macrotrends.net/2562/us-crude-oil-production-historical-chart
The college kiddies had nothing to do with it.
Not really germane to your point, but I happen to be an Exxon/Mobil stockholder, and have followed the stock closely for years. My opinion is that the current price of XOM slightly underrepresents its real value. The stock has traded for most of this decade at a P/E of about 10-15, which is conservative and normal. The company had negative earnings this year, but the expectation for next year is that P/E will be back around 10-15. Solid stuff.
The behaviour of the S&P 500 over the last 10 years, however, is absurd, having risen from a P/E of about 15 to its current level of about 45. That is delusional.
I don't think divestment has much power to reduce stock price. There's plenty of money controlled by people willing to move it into undervalued stocks regardless of ethical concerns.
At least from a deontological point of view, it's rather bizarre to argue that it's okay to perform an act if someone else would have done it. Is it okay to rob a jewelry store if someone else would have if you hadn't? A few hundred years ago, would you have accepted the defense "If I didn't do it, someone else would" from a slave trader?
I'm always conflicted between taking this position and taking the "every long trip begins with one step" position.
Like, isn't what you said true about most political movements, even the ones that succeeded?
It is all about likely consequences.
I don't think it's absurd, but the "correct" answer, in my view, is that the U.S. curbing its meat consumption is likely to encourage China to curb its meat consumption at least a little, and that this is a problem that can be worked on concurrently.
I agree with arpanet's other points, but as a more direct response: from what I know about Econ 101 For Dummies, a reduction in demand shouldn't result in a supply increase outside of unusual scenarios. If a bunch of people decide to buy N fewer units of chicken, then yes, the price of chicken will fall to compensate, incentivizing more people to buy it. However, if you do the Econ 101 math, assume your supply and demand curves are mostly linear, and assume that your chickens are spherical, the net change will still lower the quantity of chickens supplied. You'd need a pretty unusual supply curve (concave up) to get a net increase in production.
Perhaps there could be some unexpected nth order effect where if the price of chicken falls and the demand for it overseas rises, it'll cause some sort of feedback loop in the popularity of chicken, resulting in a supply increase in the long run. In the absence of evidence to the contrary, though, I think it's best to default to the position that chicken (+other animal products) is an ordinary good that follows the usual rules of supply and demand.
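To make that Econ 101 point concrete, here's a minimal sketch in Python with linear supply and demand curves. All of the numbers (intercepts, slopes, the size of the demand drop) are made-up assumptions purely for illustration, not estimates of the actual chicken market:

```python
# A minimal sketch of the "Econ 101" point above, with made-up numbers.
# Demand: Qd = a - b*P, Supply: Qs = c + d*P (both linear, upward-sloping supply).

def equilibrium(a, b, c, d):
    """Return (price, quantity) where Qd == Qs for linear curves."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Hypothetical baseline market for chicken (units are arbitrary).
a, b = 1000.0, 20.0   # demand intercept and slope
c, d = 200.0, 30.0    # supply intercept and slope

p0, q0 = equilibrium(a, b, c, d)

# Some buyers drop out: at every price, 100 fewer units are demanded.
drop = 100.0
p1, q1 = equilibrium(a - drop, b, c, d)

print(f"before: price={p0:.1f}, quantity={q0:.1f}")   # price=16.0, quantity=680.0
print(f"after:  price={p1:.1f}, quantity={q1:.1f}")   # price=14.0, quantity=620.0
# The price falls and other buyers pick up *some* of the slack, but total
# quantity still drops by drop * d / (b + d) -- less than 100, never negative.
```

Under these assumed curves the lower price offsets only part of the demand drop, so the total quantity of chicken produced still falls; that holds for any linear, upward-sloping supply curve.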
If fewer people buy chicken, the economics will move to a different point in the supply-demand curve. If there is less demand, there will be less supply (dead chickens). Others consuming more in response to a lower price is not going to result in a total cancellation of that effect.
That would make sense if the supply of chicken was somehow fixed (e.g. if production were extremely difficult to scale down). I don't see any reason why that would be the case. When supply can go down, lower demand will make it go down.
If the price of chicken falls, farmers will instead grow cows.
Is that a 'lump of meat' fallacy, I ask myself.
Farming supply-demand equilibrium will shift; there will be less meat farming.
"Lump of meat" is reasonable if the argument is _switching_ from chicken to beef.
But also, Scott argues that even this mere switch is a good thing, so that's okay.
You're just describing the Jevons Paradox.
Good to know, thanks.
I don't see how.
If it's a "safe" district, then for all practical purposes no. If it's a "swing" district, then your chances of being the deciding vote are actually similar to what they'd be if the election was decided by picking a single random ballot. (Yes, wins by one ballot are rare, but only as rare as you'd expect given the sizes of electorates. They've happened in significant elections before.)
My point is that many of us know that our individual votes don't matter, for all practical purposes, and yet we do it anyway, because it's an action that's consistent with our values.
I'm replying that "my individual vote doesn't matter" is genuinely false for a close-looking election! *Probably* it won't come down to one vote, but in an N-vote election whose polls put a tie within the margin of error, there's more than a 1/N chance that it will in fact come down to one vote. I can go into the Central Limit Theorem if you really want to dispute this.
(And if you're like "but recounts and legal battles", the same marginal reasoning also applies to whether, and how soon, recounts or legal battles get resolved.)
Sure, people often *think* it doesn't matter. But that's different.
Ok, I got you. I think we're agreeing from different directions.
The vote percentages have to be very close, or the number of voters small. If there are a million voters and one candidate has a one percent advantage, the probability of a tie is minuscule.
This is one of the common misconceptions: you don't come into Election Day knowing what the vote percentages will be.
It isn't flipping a 51% coin a million times and then adding one. It's flipping a coin a million times when your prior evidence only says that its weight is somewhere from 48% to 52%, and then adding one.
The negligible leverage from the worlds in which the weight is not very near 50-50 is countered by the high leverage in the worlds where the weight is very near 50-50.
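To put rough numbers on that, here's a back-of-the-envelope sketch (assuming, purely for illustration, a million other voters and a uniform prior on the underlying support anywhere between 48% and 52%); it uses a normal approximation to the binomial rather than anything exact:

```python
# Rough sketch of the argument above, with assumed numbers: N other voters,
# and a prior that the underlying support p is uniform on [0.48, 0.52]
# (i.e. "polls put a tie within the margin of error").
import math

def tie_prob_given_p(n, p):
    """P(the other n voters split exactly evenly): normal approximation
    to Binomial(n, p) evaluated at n/2."""
    mean = n * p
    var = n * p * (1 - p)
    return math.exp(-((n / 2 - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def decisive_prob(n, lo=0.48, hi=0.52, steps=4001):
    """Average the tie probability over a uniform prior on p in [lo, hi]."""
    total = 0.0
    for i in range(steps):
        p = lo + (hi - lo) * i / (steps - 1)
        total += tie_prob_given_p(n, p)
    return total / steps

N = 1_000_000
print(f"P(your vote is decisive) ~ {decisive_prob(N):.2e}")  # roughly 2.5e-05
print(f"1/N                      = {1 / N:.2e}")             # 1.0e-06
# Almost all of the probability comes from the rare worlds where p is very
# near 0.5, so the decisive-vote probability ends up well above 1/N.
```

Under those assumed numbers the chance of casting the deciding vote works out to roughly 25/N rather than 1/N, precisely because the near-50-50 worlds carry nearly all the leverage.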
I always look at it like a Newcomb problem. I need all of the people who think like me to show up to the polls. I think my personal decision will be reflexively consistent with the group. Therefore I need to go to the polls.
> because it's an action that's consistent with our values.
Not really, it's coordination. If I go vote, that means people with thought processes / beliefs / values closest to mine will also be more likely to go vote. If I don't, that means they're not likely to go either.
IMO spreading awareness of superrationality would fix a lot of stuff.
It is obviously true. Any attempt to prove it would require assuming facts that are obviously true to a similar degree.