608 Comments

"Musk's biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk."

Expand full comment

I'm pretty skeptical that that's what ended any friendship. It may be what brought out underlying tensions, but rarely is that kind of abstract consideration really the emotional core of the problem.

I mean, I did have friends who had a serious fight about whether he would still love her if she turned out to be a robot (he was dubious that a robot would have experiences), but even there I suspect the emotional core was about commitment.

Expand full comment
Comment deleted
Jan 23, 2024
Expand full comment

According to gossip, that was Sergei Brin’s wife, not Larry Page’s. But I assume Brin and Page are friends, so maybe it would still be enough.

Expand full comment

Peter's friends, let's call them Alice and Bob, had a fight about whether Bob would still love Alice if she were a robot.

Expand full comment

Ok, for a second I thought it was someone who only met her online and didn't know if she was real or a robot (or a catfish).

Expand full comment

Is it really weird that someone would stop being friends with someone because they learn that they're fine with destroying humanity? I was under the impression that most humans are pro-human existence. Generally, people who want to destroy humanity are painted as villains in fiction, regardless of how noble their intentions might be.

Expand full comment

Yeah, but no one is going to destroy humanity in the foreseeable future, no matter their intentions. So it's an entirely abstract question.

Expand full comment

...Well, it's probably a good thing you believe that. No use worrying about things you can't control.

Expand full comment

Look, I just attended NeurIPS last December, and the prevailing estimate of p(doom by 2040) among the people I met was a few percent at most. It seems weird that someone would end a friendship over something that far-fetched.

Expand full comment

Well, Musk and Sergei are different sorts of characters, mhm… I would think that to them it's not that far-fetched.

Expand full comment

Yes, 3% is low, especially if you're talking about bad outcomes of small magnitude -- say a 3% chance that your car won't start in the morning if you don't put it in the garage. Eh, big deal, right? But say you received a package and you know there's a 3% chance that it contains a bomb that will destroy everything within 300 feet. You gonna open it?
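A minimal sketch of the probability-times-magnitude arithmetic behind this comparison (the dollar figures are my own illustrative assumptions, not anything from the comment):

```python
def expected_loss(probability: float, loss: float) -> float:
    """Expected loss = probability of the bad outcome times its cost."""
    return probability * loss

# Same 3% probability, wildly different magnitudes (illustrative numbers).
car_wont_start = expected_loss(0.03, 200)             # a tow and a lost morning
package_is_a_bomb = expected_loss(0.03, 50_000_000)   # everything within 300 feet

print(f"Car won't start:     ~${car_wont_start:,.0f} expected loss")
print(f"Opening the package: ~${package_is_a_bomb:,.0f} expected loss")
```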

Expand full comment

You don't feel angry at the 5-10% of people in AI development who believe our being supplanted by AGI is an acceptable outcome? Even if they are wrong to think there's a chance that would occur, their attitude about the hypothetical strikes me as heartless and entitled.

Expand full comment

Substitute "The Jews" or "other ethnic group" for "humanity." Does that make it any less abstract?

Expand full comment

Changing something by three orders of magnitude can make it less abstract, yes

Expand full comment

So "the Chinese," then, not even a single OOM. Hardly anyone would think it weird to want to disassociate with someone who says "I think it'd be good actually if we replaced all the Chinese people with robots and computers and I intend to do it," even if nobody is going to do that in the foreseeable future. Why does it become weird if that person intends to do the same with the totality of humanity instead of a subset?

Expand full comment

If you're a Jew and you're fine with the destruction of Jews, I'd still be repulsed because I don't like self-haters, but I'd be 1000x less repulsed than if you were a non-Jew who was fine with the destruction of Jews

Expand full comment

Sounds like a reasonable assumption, but some people think destroying humanity is a good thing. https://en.wikipedia.org/wiki/Voluntary_Human_Extinction_Movement

I cannot fathom how they would think this, even assuming they are right about everything. If there are no humans, what difference would it make?

Expand full comment

I think it's a fairly easy concept to understand - feels like the default view in a lot of my social circle, and I don't think this is foolish.

If a) human experience is, on average, bad (more suffering than pleasure) and/or b) humanity's impact on animals/the natural world is, on average, negative (however you measure that), then humans are a force for bad in the world, and we should consider making ourselves extinct. Also, humans have an immense capability either to wipe humanity out or to create exponentially more suffering as we expand. This is super high risk. For the most part, I think these claims are true and difficult to refute.

To disagree reasonably, I think, requires either optimism about the future (that we'll break from existing trends and 1) stop factory farming, 2) reverse environmental damage, 3) eliminate suffering, 4) reach new levels of nirvana) or pessimism about the world without humans (wild animal suffering, the inevitable destruction of Earth, the next species to reach our level of evolution being worse, etc.).

Expand full comment

I think this viewpoint assumes an objective "good" and "bad", which I also think is not a valid assumption. What can be good for one being can be bad for another, and vice versa.

And even that is a limited perspective. What can be good for an individual can be bad for a group, and vice versa. Larger groups can have other viewpoints, so that what is good for a nation may be bad for some states, cities, organizations, families, and/or individuals.

How can one say with certainty that the universe is better off one way or another? The universe will go on regardless.

Expand full comment

I'm bumping on the second paragraph. If a proposed solution to a problem involves considering human extinction, one of the problems driving that shouldn't be the risk of humans causing human extinction. Maybe? Or maybe not, because hospice? I have to think about it more, but it feels wrong.

Also, I know that sunk costs are already gone, but I feel reasonably confident that, under natural selection, the sum of sentient experiences to date has been net negative (same caveats as above). Sapient intervention in/replacement of that process with an engineered one is the only factor I'm aware of that could turn it around enough that the scales could be decisively tipped to the other direction.

Sentience has already put a lot in the pot. Given that those are gone and can't be gotten back, I'm for trying to play out the hand in hopes of a result that, if we had it to do all over again, we would prefer that experiences existed at all rather than not. I suppose that's consciousness-ist of me, but nobody is perfect.

Expand full comment

You're right about the first point. I should have stressed humanity's potential to destroy vast amounts of value in the earth/universe.

I agree with everything else you say.

And consciousness-ism is the most noble of the "-ism"s.

Expand full comment

How many people in your social circle have committed suicide? Is this not a revealed preference for the ones who haven't? I think you should call them out for the poseurs they are, then.

Expand full comment

Oh my god, you're so right. As I told my sister the other day:

"You claim to care about global warming, but you've closed the window to stay warm, is this not in fact a revealed preference for a warmer world!"

That showed her. What a poseur.

But I shouldn't be so sarcastic. Just as one's own preference for warmth or cold is almost completely unrelated to the optimal global temperature, one's own preference for life or death is almost completely unconnected to their calculation as to whether humanity is net positive or negative for the universe.

Even if one of my social circle thinks that their own life is net negative for themselves and others (which is not at all necessary to hold these beliefs) there are many other reasons to choose to live.

Suicide may serve as some kind of moral gesture or protest for someone who holds these views. But if somebody genuinely believes that destroying humanity is a good thing, there's an almost limitless number of more effective ways of working towards that goal than suicide.

Expand full comment

Thanks for posting such an enlightening position, Jia. It is so alien to me as to be absolutely fascinating.

My one point nine cents…

a). I can't say I personally know of anyone who views life as suffering. Most people, from children to the elderly, find experience beautiful. I understand that some broken people, either ill or mentally disturbed, might disagree, but they are so rare as to be the exceptions that prove the rule. In summary, I find your first point to be outrageously contrary to experience.

b). The impact of humans on the pain and suffering of animals is minuscule compared to nature itself over the last 3.8 billion years. If life continues another 4 billion years or so, humanity's contribution to the problem is simply insignificant. On the other hand, as you allude to, we are the first species capable of wide-scale cultural problem solving and thus the only reasonable hope for reducing this pain and suffering. On the subject of this post, I would even say that humans plus AI are even more capable of doing so (improving the world for all).

My conclusion is that life is beautiful, and capable of being better for humans and other sentient life. We are the best hope now, at least until we augment ourselves with an artificial (AI) central nervous system.

Expand full comment

You might care about other animals. I don't favor human extinction, but if I thought that all humans spent every day vivisecting unanesthetized dogs and chimps to death and there was no other way to stop them, I would.

Expand full comment

Cats are known to "play with their food". Is that evil? Should cats be eliminated if we could?

Expand full comment

There actually are some pro-predator-extinction people out there. It's a wild end point for your morality, but I see how they take it there.

Expand full comment

People rarely end friendships over relatively abstract differences. To end a friendship you need emotional bite, or you'll just think: damn, they find a really wrong line of argument convincing.

Besides, if you interrogate your friends' beliefs you can almost always find that in some scenarios they will make choices that you find deeply objectionable. Philosophers and policy wonks are frequently friends even while having really deep disagreements on even more fundamental issues (philosophers) or much more imminent ones (policy wonks).

Only when things get pulled into near mode do you usually start putting friendships at risk.

Expand full comment

Some relevant context is that Page was putting serious effort into actually accomplishing this, and looked like he would succeed. He was CEO of Google and was in the process of acquiring DeepMind, which was at the forefront of AI, placing Google in a clearly dominant position in the field. DeepMind was founded specifically to achieve AGI. Both Page and Musk were likely fairly sure it would be accomplished in their lifetimes.

It wasn't abstract. From Musk's perspective, Page was actively working to kill him and everyone else.

Expand full comment

It seems to me that many of our intuitions about fighting back against colonizers/genocides etc. are justice intuitions that rest on an implicit assumption of repeatability, or of making credible pre-action commitments.

In other words, it's good to be the kind of creature who can credibly pre-commit to resist the invasion -- but that very fact should make us suspicious of trusting our intuitions about what's right once that commitment fails to dissuade the action, especially when it's truly a one-off event (not an alien race who will destroy others if we don't stop them).

Expand full comment

That's written in strange English, so it's hard to fully work out the logic, but you seem to be saying that a one-off genocide isn't worth fighting.

Expand full comment

It's not worth fighting if you believe the amount of suffering caused by resisting will exceed that caused by letting it occur.

If a million people might be exterminated, and they have a 10% chance of avoiding that fate by releasing a weapon they know will kill 10 million people, and it's a one-off scenario, then no, that's not justified.

Of course, it's rarely a one-off scenario. Usually the reason to fight involves deterring that behavior in the future. Even if Ukraine knew they would lose, resisting makes it less likely that countries will do that in the future.

However, the issue with AI seems different in that it really is a one-off, so I don't necessarily see the whole alien fleet thing as relevant.
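As a minimal sketch of the expected-death arithmetic in that example (the populations and the probability are the ones given above; the assumption that the million are killed for certain without resistance is mine):

```python
# One-off scenario from the comment above.
POPULATION_AT_RISK = 1_000_000     # people who might be exterminated
WEAPON_DEATHS = 10_000_000         # deaths caused by releasing the weapon
P_RESIST_SUCCEEDS = 0.10           # chance resistance saves the population

# Assumption (mine): without resistance, the population at risk dies for certain.
deaths_if_surrender = POPULATION_AT_RISK
deaths_if_resist = WEAPON_DEATHS + (1 - P_RESIST_SUCCEEDS) * POPULATION_AT_RISK

print(f"Expected deaths, surrender: {deaths_if_surrender:,.0f}")   # 1,000,000
print(f"Expected deaths, resist:    {deaths_if_resist:,.0f}")      # 10,900,000
```

On pure expected-death terms the one-off resistance comes out worse, which is the claim being made for the truly one-off case.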

Expand full comment

Clearly that's not any kind of normal morality or international law: once a country is invaded, it has a right to self-defence. This isn't contingent on counting every single body. If the other side loses more in the end, they are still the bad guys, unless the response is wildly disproportionate.

Expand full comment

Looks like classic deontology vs. utilitarianism, to me! Peter Gerdes is using a utilitarian principle, "what course of action will minimise overall suffering, regardless of who 'wins' or 'loses'" and you're using a deontological one, "invaded peoples have a universal right to defend themselves, regardless of how much suffering it causes". Not sure we can resolve this without fixing all of philosophy!

Expand full comment

On the other hand, outside philosophy nobody is strictly one or the other. Maybe philosophers should try to mimic the ordinary folk here and be neither deontologists nor utilitarians.

Expand full comment

I feel this way of thinking about the issue is only possible if war is an abstract concept to the person thinking about it.

Expand full comment

This thought experiment IMHO can't ignore the effect of deterrence, where a credible threat of causing immense suffering discourages that choice so much that it's simply not done and results in no actual suffering; thus a strong principle of treating some harmful acts as taboo and refusing to ever do them is not the same as minimizing harm.

Expand full comment

Yeah, one problem with naive utilitarianism is a vulnerability to hostage crisis: a supervillain can get you to murder x people by threatening to murder x+1. A game theoretic utilitarianism might seek to minimize harms, while also flipping the table should anyone try to exploit your values to compel behavior. "Surrender or we'll kill your friends" is clearly in this problem space. "Surrender or you'll kill all our friends" would make an odd parley but involves similar questions about what coefficients to slap on to the moral calculus. The moral weights questions are analogous even if the invader's envoy is a local, that is, even if the trade-off is raised only inside your own brain.

Expand full comment

I don't know, can't we just outline a subset of utilitarianism and call it dumb utilitarianism? I'm thinking specifically of Sam Bankman-Fried's 51% coin tosses. I would propose adding to the dumb category anything that tries to use thought experiments with unrealistic assumptions to make arguments about actions in reality. I'm not sure whether anything in real life has ever been one-off, rather than just option A versus option B.

Expand full comment

If I'm targeted for genocide, I don't care if it's just to resist, I care if it increases my chance of survival.

Expand full comment

To nitpick, if I'd die but I'd take enough of them with me, I think I'd make that trade.

Expand full comment

I can live with you and I thinking like this (and I suspect if anything prevented me from doing the same as you it'd be lack of courage/ability rather than my having dispassionately reasoned that my enemy's right-to-life was the same as my own!) but it flipping terrifies me that people who decide nuclear doctrines might also think like this.

Expand full comment

To be clear, I'm suffering from PTSD. It might even have made me into a bit of a utility monster. I don't particularly recommend this state of mind.

Expand full comment

Hopefully the nuclear decision makers are also flipping terrified that the enemy's nuclear decision makers think like this.

Expand full comment

Yeah, which is why the war in Ukraine has to be going the way it is. Sure, things would end much faster if we just sent troops in, but there is a real danger that Putin might just launch all the nukes if he loses. The only real option we have is to drag out the war as much as possible in the hopes that Russia collapses on its own, or is at least weakened enough to neutralize it as a threat.

Expand full comment

They do think like that. And it’s kept the peace.

Expand full comment

Have you ever seen the film “Fail Safe”?

Expand full comment

>it flipping terrifies me that people who decide nuclear doctrines might also think like this

They do! The French are hilariously explicit about it.

De Gaulle: Within ten years, we shall have the means to kill 80 million Russians. I truly believe that one does not light-heartedly attack people who are able to kill 80 million Russians, even if one can kill 800 million French, that is if there were 800 million French.

Pierre Marie Gallois: Making the most pessimistic assumptions, the French nuclear bombers could destroy ten Russian cities; and France is not a prize worthy of ten Russian cities.

Expand full comment

There seems to be a tacit assumption of utilitarianism.

Expand full comment

I was explaining why one might not think that was absurd, and I happen to be a utilitarian, so it was the obvious example.

But my original point is that our intuitions are formed in cases where it is a repeated interaction, or where we have evolved to treat it as if it were, and regardless of your moral theory I think that is a reason to be skeptical of the analogy.

Expand full comment

The issue maybe is how close your survival and way of life is bound up with the outcome of the war.

I read a book on the art of negotiation a while back and I remember the chapter on “fuck you as well, then!”

It wasn't called that, but it could've been; the premise being that you can press for a better deal when you have the upper hand, but at some point the entity you are pressing on decides that if it's going to get f***ed then you are too.

Expand full comment

I'd just invoke FDT reasoning here. If you were predictably going to be like "okay, you're actually invading, well then we'll stand back," then you get invaded, provided the enemy can predict you even remotely well. Then repeatability doesn't matter and you also don't need a commitment mechanism.

Expand full comment

That's kinda irrelevant, because the whole issue of deterrence, whether general or specific to this case, isn't really applicable here, so we still shouldn't use the analogy of resisting the alien fleet as grounds to think we might want to avoid being replaced by AI.

Also, as long as the aliens aren't demons, AIs that can simulate you completely, etc., the theories won't differ.

--

But on that issue, I agree that if you are deciding at a point where the invasion might still be deterred, and you can credibly commit to resisting in a way the invader might see, then you should.

But that's a different question from what you should do if it has already been launched and you can't possibly deter it. The fact that it might be good to be the sort of agent who does decide to resist even in that situation is also answering a different question.

IMO the whole FDT vs. CDT (etc.) thing is just a matter of how you want to precisify the question being asked, not a genuine issue for disagreement. You get FDT when you ask what kind of agent gets the best outcome, but that's not the same question as which of these two choices is better.

Basically, once you fully formalize the question being asked mathematically so there isn't any ambiguity, there isn't any question left to be answered... you just do the sums.

Expand full comment

I disagree about how CDT behaves. CDT solves the repeated case, FDT solves the being predicted case and the repeated case.

I also don't think that you need to be able to simulate someone completely to predict them! Of course this depends on the question, the details, how much you've deliberately obscured your behavior patterns, etcetera.

----

Are you asking 'what about if the aliens invaded but were not predicting you'? If so, then yeah giving up may be a good idea. Was I misunderstanding what you said above?

----

I disagree on FDT/CDT purely being a matter of how the question is specified.

Ex: CDT doesn't pay up on Parfit's Hitchhiker because it mathematically doesn't include that logical (subjunctive) dependence.

Expand full comment

Or it could be that people like to live freely and don't want to be human sacrifices to someone else's idea of "progress," as if that were some objective rule of the universe.

Suffering is a real thing, not some utilitarian form of a checking account. It never "balances" a single thing. Redress for suffering doesn't eliminate the suffering, nor does even a more prosperous future state. If anything, you bear it for the rest of your life no matter how rich you get. Rosebud effect.

Expand full comment

It's quite undemocratic to have my children die for Page's personal obsession.

Expand full comment

Seems to me questions about art and individuation should be irrelevant. If these other creatures experience overall net joy, why judge them because it doesn't come from things we would recognize as art? That seems like just another way of treating the person who enjoys reading genre novels worse than the person who reads great literature.

It shouldn't matter where the joy comes from. The hard part is figuring out how much joy a single complex mind feels compared to many simpler ones.

Expand full comment

I don't know if I agree - I think you need preferences over sources of joy to avoid endorsing wireheading. Maybe the preference should be over complexity of joy sources rather than hard-coding in specific ones, but I'm not sure.

Expand full comment

What's so wrong about wireheading? I think the reason wireheading often seems bad is that we imagine stimulating the reward system, but that's not joy -- stimulants are rewarding, but reward isn't proportional to pleasure. If it's really joy, what's the issue?

Amusingly, I think what got my wife interested in me was my claim on our first date that if we could we might have a moral duty to fill the universe with Turing machines that sat around being happy.

Expand full comment

To me, joy isn't the good, it's a measurement of the good. Wireheading the universe seems like nature says "do this", pointing a finger at creativity, learning, playing, complexity and positive social relationships; and we respond to this by tiling the universe with a quadrillion fingers.

Expand full comment

Isn't that just semantics? Like what do you call the property that joy measures (I tend to think that would be joy) and isn't that then the good?

Expand full comment

You'd probably call it something like welfare, wellbeing, living a good life, etc. A person's welfare increases if their life has more joy in it, but that is far from the only thing that can increase their welfare.

Expand full comment

I think I agree. Joy is a reward signal, but in practice it can be misaligned with what "should" cause joy. The quotation marks are there because it's an attempt to deduce an ought from a combination of ises and preferences.

(And yeah, this would come close to certain religious views about chastity.)

Expand full comment

My response is essentially

- 'Not for the Sake of Happiness (Alone)': https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone

- and 'Fun Theory': https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence

I have preferences for more than just experiencing the happiness-emotion. I want challenge with that. I also have a preference for not being just directly fed the 'experiencing-interesting-challenges' cocktail of emotions as well.

Joy-the-emotion is a proxy for what I want. This is why I do hard things today that I don't even like at all, not even for the challenge, because I have values other than experiencing joy.

Someone actively wireheading is certainly better than the opposite, or a rock, but I don't value them in the same way as someone thinking.

Expand full comment

But that's not really an argument. If you just want to be a moral anti-realist and say "sorry, I am in fact not going to maximize happiness, I will maximize these other things," I don't believe I have an argument that you've made an objective mistake per se. I'll try to stop you when I believe you are reducing net happiness, because I do find that compelling.

However, I do think that it comes at the cost of taking moral realism really seriously. If you are going to be a moral realist and think it's like discovering the laws of physics, and that some intuitions have to be rejected as misguided, then I think it's hard to avoid collapsing into full-on happiness maximization, since the intuitions that we shouldn't maximize happiness seem to have the same kinds of issues as other intuitions we accept as mistaken (e.g. preferences for yourself or friends over others).

Expand full comment

I’m not sure we should really be killing the unhappy, all the same.

Utilitarians are a strange breed. The idea that we maximise happiness can lead to odd ideologies (like the absurd repugnant conclusion).

Expand full comment

The repugnant conclusion is only repugnant because we imagine lots of people living in squalor... but that's evidence you set your zero-utility level too low, not that we shouldn't maximize utility. In all seriousness, while I'm quite happy now, I was pretty unhappy throughout enough of my life that I think it is plausible that I'm relatively close to the overall zero line.

The reason we are tempted by it is that there are very good reasons for not suggesting that someone would have been better off not being born. Very good utilitarian ones (doing so hurts people, and actually killing the sad would create all sorts of harm to loved ones, problems with incentives, etc.). Basically, IMO it's just a version of the transplant-doctor argument, and the solution is the same -- recognize that the kind of precedent you set (people being scared of doctors) in the real world often has much bigger effects than the local utility.

Expand full comment

They don't even get happiness; they talk about it like it was pins being manufactured, little widgets you can produce, as if you could make people be happy or as if happiness could be piled up or shifted around.

Expand full comment

Ecclesiastes 4: 2-3

2 Then I praised the dead who are already dead more than the living who are yet alive;

3 and more fortunate than both is he who has not yet been, who has not seen the evil work that is done under the sun.

Expand full comment

The argument that I'm giving is that I have preferences for more than just happiness. I think other people also have preferences like that. They're at times confused, because *most of the time* rounding everything good off to 'happiness' works fine. Essentially I believe that upon enough reflection ~most people would go "okay I do want more than just experiencing joy all the time". Joy is a component of the good, not the pure essential part of the good.

There is no morality inscribed in the stars for every species; but, humans (especially in modern culture) do tend to congregate around a certain set of values. There's no paperclipper-human! I'm sure there are people who even upon reflection would want to wirehead, and I'd probably say 'fine', but I think most people have preferences for more than that.

(ex: see the classic story trope of 'Oh I could stay in this illusory realm of joy, but no I have to save my friends and family'; that isn't a pure example most of the time but is indicative)

> some intuitions have to be rejected as misguided then I think it's hard to avoid collapsing into full on happiness maximization as the intuitions that we shouldn't maximize happiness seem to have the same kinds of issues that other intuitions we accept as mistaken (eg preferences for yourself or friends over others) have.

I think most people have the opposite problem, of rounding everything off to happiness.

I also agree that intuitions are often not actually our underlying values. This is part of why I specify on reflection.

I think we should maximize 'the good'. There is no single human good, but there's a shared core which I believe for most people isn't the same as 'wire up all the positive emotions to max for eternity'.

Expand full comment

Lots of people have preferences for things like racism or the suffering of others. The fact that someone prefers something doesn't always mean it's good to satisfy that preference -- even other things being equal.

Even if it doesn't hurt others, I think lots of people can think of things that happened in their life that they very much preferred wouldn't have, but looking back are glad they did. So just preferring something isn't enough to justify saying it's better to satisfy that preference.

Expand full comment

I actually reject the intuition that we should maximize happiness for the similar reasons to why I reject preferences for yourself or friends over others. It seems like if you really valued others equally, you would value all their preferences for how they want to live their life, not just their preference for happiness. Happiness maximization privileges the interests of people who value happiness over the interests of people who value other stuff.

Expand full comment

But we don't really value their preferences. If your child says he really, really doesn't want to try food X or go to school or whatever, but you're quite confident that they'll later be glad they did, you absolutely force them to do so. Basically all of child rearing shows that we are happy to violate preferences when we think someone will be glad we did later. Yes, sometimes with adults we do otherwise, but I think that's about the fact that often they won't later be glad you ignored what they said.

But once you accept this, it doesn't help against wireheading, because it just says: and make sure you make them glad afterwards.

Expand full comment

> I have preferences for more than just experiencing the happiness-emotion. I want challenge with that. I also have a preference for not being just directly fed the 'experiencing-interesting-challenges' cocktail of emotions as well.

What if someone invented a drug that gives you the *exact* sense of feeling that you get from having achieved something after lots of effort? Take the 15 happiest minutes of your life and play them over and over without your brain ever liking it less. Why would that not be something you want?

Also see https://www.youtube.com/watch?v=yafsGX3qLE0&ab_channel=RickSanchez

Expand full comment

That's basically the point of the first linked post I gave! (modulo replaying the 15 minutes, which does make it better than normal wireheading at minimum)

Me and Peter Gerdes get more into the weeds, but I have preferences about *reality*. I could certainly be tricked, but I prefer not to be tricked. I want actual challenge. I want to talk to interesting people (or at least interesting automatons/NPCs, I'm not against wacky immersive VR games in-of-themselves). Solving problems for fun and for a reason is fun and interesting. I don't play multiplayer games *just* because I'm pretty decent at them (happiness in some form) but also because the good ones are chock-full of thought and strategy. I program because I find the process interesting even when it sucks, not just for the end results.

All of this is paired with a preference that my happiness and other emotions be hooked up to something real.

I have preferences about what I'm actually doing in reality, rather than just what I end up experiencing. So if you offer me such a pill, I'd refuse. I know I'd love (and more) feeling it, but purely positive emotion (more shortly, happiness) isn't the only thing that I want.

I think that people round off what they want to 'happiness' because that works most of the time, but that it doesn't work when you get hyperoptimized pills or whatnot.

Some people also tend to think of not having to do anything as a worthwhile goal by itself, which being on a pill would give you. However, I think that's usually because of our current jobs (etc.) being mind-numbing and lots of people not finding a hobby they enjoy, because there's still lots of literal costs and also just barriers to that. In the future all of that can be massively improved, so that the lows are less terrible, and the highs are higher and more textured in what they mean.

I think that first post I linked made me realize several years back that yes, I can want more than just happiness by-itself.

Expand full comment

What would be your reaction if you found out that our planet is merely a simulation and not a part of reality, without the option to ever "escape" the simulation? Would you want your "life" to end?

Expand full comment

(All that follows is, of course, my humble opinion)

Wire-heading is bad because it confuses the issue of what pleasure is and what its purpose is. To accept wire-heading as a desirable goal, you have to believe that pleasure, defined as most of us would use that term, i.e. the sensation of positive feeling, is primarily a state of being that we should seek to be in as often as possible. On the surface this seems to make sense. Pleasure seems to be something that is desirable in and of itself, a condition that many ethical systems regard as necessary for something to be considered a truly worthy goal, so why shouldn't we seek it out for its own sake? All sorts of practical problems can be raised against this (which I won't repeat here), but most are presumably engineering problems that could conceivably be worked around.

But the real problem is that wire-heading starts off on the wrong foot by thinking of pleasure as a desirable thing rather than what it really is: a SIGNAL pointing towards desirable things. Pleasure evolved to direct an organism towards things that are good for it and for most of our evolutionary history it worked perfectly fine at that. When we derive pleasure from eating sweet things, that is our body telling us that the thing we are eating is good for us and we should seek out more of it. Such a signal doesn't work as well in our modern world, where it leads to all sorts of individual and societal health problems. Similar arguments can be raised for many of our pleasures being out of step with things that are actually good for us.

Viewing pleasure as a mental signal evolved for a specific purpose in a specific environment should, IMO, disabuse us of the idea that it can be trusted as an infallible guide to what we should seek. Which isn't to say it isn't important. The Good Life should be pleasurable. But determining what makes the Good Life is more difficult than just figuring out what gives us the most pleasure.

Expand full comment

I think this indulges in a form of the naturalistic fallacy. Who cares why it evolved... evolution selects for some pretty awful stuff. I don't see what evolution selected for as a very reliable guide to what is desirable.

Expand full comment

That sort of dovetails with my point. Pleasure in humans is an evolved feature, and it evolved in a specific set of circumstances to accomplish specific ends. Why should we trust it as a guide (let alone our sole guide) for what end goals we should desire?

Expand full comment

There's probably a YouTube channel somewhere which is just the ending scenes from hundreds and thousands of video games.

Expand full comment

Wire-heading is a vice, in the literal meaning of that word, based on "vicarious" meaning substitute. It is taking the reward without putting in the effort to deserve it, tucking straight into your dessert without having finished or even attempted the boiled cabbage which should have preceded it :-)

Expand full comment

Vicis (order, as in vice versa) is one word, and vitium (vice) is another. Not even the same PIE root, by most accounts.

Expand full comment

Sounds like for these similar words there was originally (in IE?) a cluster of meanings centering around various aspects of "order", such as "disorder" or steps misordered or omitted.

Whatever the origin of the word, I still maintain that the essence of "vice" whatever its form is the seeking of fake or unmerited rewards, without the commitment or effort which should be needed. For example, people once considered lying to be a vice, and perhaps some still do, presumably because evasive lies are a way to try and avoid awkward consequences of telling the truth.

Expand full comment

I'd say it's not just a question of the complexity of the joy, but the complexity and nature of the mind. Like, one could write a minimal reinforcement learning simulator with a happiness variable in a few dozen lines. (I don't mean a program controlling an artificial neural network, just a program whose input influences its happiness variable; you can put it into an "environment" where its output influences its input, and it looks for simple patterns in how its output affects its input.) And you could make it "suffer" or be "happy" by feeding it appropriate input. But it wouldn't be suffering or enjoying in any meaningful sense: it's just a number-crunching program, one of whose variables has been named happiness.

It can be argued that an AI with too simple a value system similarly isn't happy in any meaningful way, even if some parts of it are complex.

This is also one reason I don't care about insect suffering: I don't think their minds are complex enough to matter. It's not a matter of magnitude (i.e. that they suffer less), but rather it's incommensurable to human suffering. After all, if I multiply all numbers by 1000 in that small reinforcement learning program, including its happiness variable, it won't be happier or sadder in any meaningful sense, and the level of its happiness won't matter more.
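For concreteness, here is one way such a toy program might look (a hypothetical sketch; the bandit-style update rule and the variable name `happiness` are my own illustrative choices, not anything specified in the comment):

```python
import random

class ToyAgent:
    """A minimal 'reinforcement learner' whose entire inner life is one number."""

    def __init__(self):
        self.happiness = 0.0
        # Running average reward seen for each possible action -- the only
        # "pattern" this agent ever looks for.
        self.estimates = {0: 0.0, 1: 0.0}
        self.counts = {0: 0, 1: 0}

    def act(self) -> int:
        # Mostly pick the action with the best remembered payoff; sometimes explore.
        if random.random() < 0.1:
            return random.choice([0, 1])
        return max(self.estimates, key=self.estimates.get)

    def observe(self, action: int, reward: float) -> None:
        # The input directly moves the "happiness" variable...
        self.happiness += reward
        # ...and updates the running average for the chosen action.
        self.counts[action] += 1
        self.estimates[action] += (reward - self.estimates[action]) / self.counts[action]

# A trivial "environment": action 1 is rewarded, action 0 is punished.
agent = ToyAgent()
for _ in range(100):
    a = agent.act()
    agent.observe(a, reward=1.0 if a == 1 else -1.0)

print(agent.happiness)  # a large positive number: is the agent "happy"?
```

Feeding it -1.0 rewards instead would drive the variable arbitrarily negative, which is exactly the sense in which it can be made to "suffer" without suffering in any meaningful way.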

Expand full comment

Yes, I'm talking about the genuine experience of happiness, and I'm a realist about mental states in the Chalmers sense.

It's not enough that it acts like it's pleasure, or pretends to be feeling pleasure; it has to actually feel it, and to the appropriate intensity.

That's a really hard problem because I don't think it's obvious that experiences are independent of computational substrate (or even that there is a fact of the matter about which computation is being executed by a given hunk of matter).

Expand full comment

Wireheading is bad in real life because a wireheaded population would fail to produce the resources necessary to sustain life, being directly self-defeating.

We experience joy from certain activities in order to induce us to pursue them. Skipping the good activities to go straight for the feeling defeats the purpose. We no longer get the good activities (which bring about value) and end up with nothing. A wireheaded AI would do nothing. At best it would do the minimum necessary to keep itself going, like a cybernetic pothead.

Expand full comment

I don't think this conversation is about the practical issues but about the in-principle desirability.

But actually it's not at all clear that is true. Joy and reward circuits are different. Besides, you don't have to eliminate all difference between states to wirehead; you could just boost the base level, and we do have evidence that's not a problem for productivity, as hypomanic people tend to be more productive, not less.

Expand full comment

Excellent train of thought, but there are limitations. The optimal amount of joy we feel is the result of evolutionary experimentation. It's possible that higher levels of joy would result in more productivity over the short term, but with unintended consequences over the long term. For example, I read a study that measured the long-term financial outcomes correlated with optimism. Overly optimistic people tended to make riskier financial moves, and as a result had worse long-term outcomes.

Being hypomanic may make you hyper-productive in the short term, but it also might make you go to the casino and put it all on black! Even if we paired hypomania with intelligent reasoning, being overly confident or optimistic can lead to suboptimal decision making. By definition, such a state of mind suppresses doubts and fears in order to maximize action. The consequences of that are clear.

All this to say, I don't doubt that mental and emotional modulation is coming very very soon and it will have profound consequences on our lived experiences as humans, but I also wouldn't be surprised if the "optimal" state of mind isn't that much different from what we already have.

Expand full comment

Woah, new fear unlocked! Imagine an AI crackhead that breaks every computer system in the world so it can run its juicing protocol to the extreme 😂 Man, the future is going to be weird as hell.

Expand full comment

If I just imagine a universe filled with humans sitting in chairs, plugged into some machine that fills their brains with joy, indefinitely surviving and in some way mentally thriving, that picture fills me with absolute disgust and revulsion. I think that would somehow be infinitely worse than the horror show that is our current reality on this planet. Why is that, when there is so much suffering here, and in that world there would be none?

I guess I value a lot of other things humanity engages in, beyond just how we feel about them or how they make us feel. Their existence, apart from us entirely, long after we're extinct, seems to have value in itself for me, and those joyful beings plugged into their chairs have close to none. Is it logical? Probably not, but nobody said what you value has to be, it's just sort of a personal/genetic accident.

Expand full comment

Why do you need a wife, lol? Why not just stimulate the reward centers of your brain that a wife may activate? You could eliminate all the downsides of having one; no fights, no worries, just pure maximized wifely happiness distilled into a drug.

Would that be better than now?

Expand full comment

Well we don't know how to do that yet. But also just because you are filled with extreme joy doesn't mean you have to do nothing ... reward and pleasure are different. I mean when it's more useful for everyone then sure.

Expand full comment

I was being provocative to try to highlight that joy or happiness is not the state itself, but is bound up with embodied objects. Wireheading removes the state from the object, so it removes the wife eventually.

You can't really tear apart embodied things just for their result without losing a lot.

Expand full comment

I agree with the general sentiment but would also add that wireheading in the debilitating sense leaves the species and the universe vulnerable to less joy-interested agents, or to just getting stuck at local maxima of joy. So yeah, wireheading: but go all the way. Make sure you're not just at a universe/multiverse local maximum of wireheading.

Expand full comment

Yes, absolutely. I only meant this as a comment about the in-principle desirability. You are absolutely correct that there are all sorts of practical considerations that require balancing.

However, I see no reason we can't combine a degree of wireheading with high capability. The fact that reward and pleasure differ means there isn't necessarily an incompatibility between at least a degree of wireheading and remaining capable.

Expand full comment

Yeah, I think we're in agreement. Explore and exploit.

Art or other information communication type things might be exploration instrumental to exploiting. But the terminal goal is to exploit joy (or whatever research says is the highest-valence state) to the max.

Expand full comment

Having done a fair bit of drugs when I was younger, if I could simply always inhabit the feeling of pure joyous love and wonder I felt on my first MDMA experience, I'd absolutely do that. It's just that it's damn hard to repeat, and in practice chemical stimulation usually ends up just scratching an itch, not keeping us in the constant pure joy of a great E/psychedelic trip.

But I've done enough drugs to also know that's not how drugs feel if you keep doing them -- so it depends hugely on how wireheading ends up working out.

Expand full comment

I had the opposite experience. I went through a depressive episode where I couldn't feel joy or any other positive emotion. I quickly discovered that while that sucked, I still had plenty of external goals that I wanted to accomplish. I didn't lose all motivation, I was able to keep acting normally until I recovered. I was glad when I recovered, but I also realized that if I hadn't recovered, I would still have had things to live for. I realized that joy and other positive emotions were certainly valuable, but they weren't the be-all and end-all of existence. They were more like seasoning, things that made accomplishing my goals better, but not goals in and of themselves.

You might argue that the goals I wanted to accomplish were bringing joy to others, or behaving prudently in the hope I'd recover and be able to feel again. There was some of that, sure. However, I also did things like read novels, learn trivia, and contemplate philosophy. I still wanted to do them in the absence of the emotions that normally came with them.

I'm glad I can feel again, but I don't regret that episode that I went through. It sucked, but I came out of it with valuable firsthand knowledge about what things in life matter.

Expand full comment

This seems important to me. I have been thinking along those lines after listening to D. Goggins talking to A. Huberman. "What I do sucks, every day, but I keep on doing it, and it's much better than before I started doing it." (My impression, not a real quote.) That reminded me of the Zen approach. Action itself. Still, everything's worthless without love, I have heard and believe.

Expand full comment

It depends on how you define love. If you mean the warm, fuzzy feeling you get when you think about the people you love, I couldn't feel that. But if you mean a desire to be involved in the lives of people you love and make them happy, I still wanted to do that, even though I couldn't feel any positive emotions while I did it.

Expand full comment

Oh, never mind. I consider the importance of love to be a behavioral thing, whatever the experience.

Expand full comment

So, you have a preference for wireheading; that is fine, some people do. But many others, including myself, don't. I would absolutely not be on perpetual MDMA if I could, even with no negative side effects. I just want other things out of life. Why should your wish for joy/wireheading/perpetual MDMA be extended to everyone as a moral philosophy? I don't think this would be maximizing the good in the world in any real sense. Maximizing joy in the way you are describing would just not lead to a world I would endorse. I'm perfectly fine in principle with you, and anyone else who truly prefers wireheading, doing so, as long as it does not interfere with maximizing actual good in the world.

Of course, defining what is actually good is impossible. I think it would be something like the sum of the true preferences of everyone. However, I don't think this could actually be given a numerical value that we could optimize. It is a fuzzy concept. Still, I believe there exists some common core of what good in the world means, and that we can work towards obtaining it. The simplest way to do that is probably to work towards what you personally think leads to optimizing the good in the world.

Expand full comment

Why should it be the sum of preferences? Sure, you can give up on making arguments here and just get down to brute preferences, but the tacit assumption of Scott's piece assumes at least a semblance of a moral-realist attitude towards the question.

If you want to just look at what people actually prefer, then they have a brute preference for avoiding human extinction, so you'd agree that AI art appreciation etc. is irrelevant.

Expand full comment

"in practice chemical stimulation usually ends up just scratching an itch not keeping us in the constant pure joy"

Related: the debate between Socrates and Callicles about whether a person who feels pleasure (= utility) when scratching himself should go on scratching himself for ever and ever:

SOCRATES: ...There are two men, both of whom have a number of casks; the one man has his casks sound and full, one of wine, another of honey, and a third of milk, besides others filled with other liquids, and the streams which fill them are few and scanty, and he can only obtain them with a great deal of toil and difficulty; but when his casks are once filled he has no need to feed them any more, and has no further trouble with them or care about them. The other, in like manner, can procure streams, though not without difficulty; but his vessels are leaky and unsound, and night and day he is compelled to be filling them, and if he pauses for a moment, he is in an agony of pain. Such are their respective lives:—And now would you say that the life of the intemperate is happier than that of the temperate? Do I not convince you that the opposite is the truth?

CALLICLES: You do not convince me, Socrates, for the one who has filled himself has no longer any pleasure left; and this, as I was just now saying, is the life of a stone: he has neither joy nor sorrow after he is once filled; but the pleasure depends on the superabundance of the influx.

SOCRATES: But the more you pour in, the greater the waste; and the holes must be large for the liquid to escape.

CALLICLES: Certainly.

SOCRATES: The life which you are now depicting is not that of a dead man, or of a stone, but of a cormorant; you mean that he is to be hungering and eating?

CALLICLES: Yes.

SOCRATES: And he is to be thirsting and drinking?

CALLICLES: Yes, that is what I mean; he is to have all his desires about him, and to be able to live happily in the gratification of them.

SOCRATES: Capital, excellent; go on as you have begun, and have no shame; I, too, must disencumber myself of shame: and first, will you tell me whether you include itching and scratching, provided you have enough of them and pass your life in scratching, in your notion of happiness?

CALLICLES: What a strange being you are, Socrates! a regular mob-orator.

SOCRATES: That was the reason, Callicles, why I scared Polus and Gorgias, until they were too modest to say what they thought; but you will not be too modest and will not be scared, for you are a brave man. And now, answer my question.

CALLICLES: I answer, that even the scratcher would live pleasantly.

SOCRATES: And if pleasantly, then also happily?

CALLICLES: To be sure.

SOCRATES: But what if the itching is not confined to the head? Shall I pursue the question? And here, Callicles, I would have you consider how you would reply if consequences are pressed upon you, especially if in the last resort you are asked, whether the life of a catamite is not terrible, foul, miserable? Or would you venture to say, that they too are happy, if they only get enough of what they want?

CALLICLES: Are you not ashamed, Socrates, of introducing such topics into the argument?

SOCRATES: Well, my fine friend, but am I the introducer of these topics, or he who says without any qualification that all who feel pleasure in whatever manner are happy, and who admits of no distinction between good and bad pleasures? And I would still ask, whether you say that pleasure and good are the same, or whether there is some pleasure which is not a good?

....from Plato's dialogue Gorgias: the debate with Socrates' most formidable opponent, Callicles.

Expand full comment

Yah except Socrates never got to try modern drugs. It does seem possible to induce extreme joy but just not over the long term. There is every reason to think that there is just a stronger tolerance mechanism here that could still be overcome in principle.

Expand full comment

I think this is what attracts a lot of people to meditation communities. Wanting to be able to replicate this or similar experiences. Anecdotes say it's possible. But, anecdotes.

Expand full comment

Yah, I keep meaning to give meditation a serious try after the anecdotal claims on an earlier article, but I keep not getting around to it and am not totally sure about the best way to go about it (I mean, Harris's app seems decent, but it's subscription-based and given the chance I'll just flake on it...).

Expand full comment

My rec for giving it a serious try would be this site: https://midlmeditation.com/

Expand full comment