610 Comments

"Musk's biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk."


I'm pretty skeptical that that is what ended the friendship. It may be what brought out underlying tensions, but rarely is that kind of abstract consideration really the emotional core of the problem.

I mean, I did have friends who had a serious fight about whether he would still love her if she turned out to be a robot (he was dubious that robots have experiences), but even there I suspect the emotional core was about commitment.

deleted · Jan 23
Comment deleted

According to gossip, that was Sergei Brin’s wife, not Larry Page’s. But I assume Brin and Page are friends, so maybe it would still be enough.


Love who?


Peter's friends, let's call them Alice and Bob, had a fight about whether Bob would still love Alice if she were a robot.


Ok, for a second I thought it was someone who only met her online and didn't know if she was real or a robot (or a catfish).


Is it really weird that someone would stop being friends with someone because they learn that they're fine with destroying humanity? I was under the impression that most humans are pro-human existence. Generally, people who want to destroy humanity are painted as villains in fiction, regardless of how noble their intentions might be.


Yeah, but no one is going to destroy humanity in the foreseeable future, no matter their intentions. So it's an entirely abstract question.


...Well, it's probably a good thing you believe that. No use worrying about things you can't control.


Look, I just attended NeurIPS last December, and the prevailing estimate of p(doom by 2040) among the people I met was a few percent at most. It seems weird that someone would end a friendship over something that far-fetched.


Well, Musk and Sergei are different sorts of characters, mhm… I would think that to them it's not that far-fetched.


Yes, 3% is low, especially if you're talking about bad outcomes of small magnitude -- say, a 3% chance that your car won't start in the morning if you don't put it in the garage. Eh, big deal, right? But say you received a package and you know there's a 3% chance that it contains a bomb that will destroy everything within 300 feet. You gonna open it?
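A compact way to put that point (a sketch; the loss terms are stand-ins I've added, not quantities from the comment): the probability is identical in both cases, so the whole difference is carried by the size of the loss.

```latex
\[
\mathbb{E}[\text{harm}] = p \cdot L, \qquad
0.03 \cdot L_{\text{car won't start}} \;\ll\; 0.03 \cdot L_{\text{bomb levels 300 ft}}
\]
```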


You don't feel angry at the 5-10% of people in AI development who believe our being supplanted by AGI is an acceptable outcome? Even if they are wrong to think there's a chance that would occur, their attitude about the hypothetical strikes me as heartless and entitled.


Substitute "The Jews" or "other ethnic group" for "humanity." Does that make it any less abstract?


Changing something by three orders of magnitude can make it less abstract, yes


So "the Chinese," then, not even a single OOM. Hardly anyone would think it weird to want to disassociate with someone who says "I think it'd be good actually if we replaced all the Chinese people with robots and computers and I intend to do it," even if nobody is going to do that in the foreseeable future. Why does it become weird if that person intends to do the same with the totality of humanity instead of a subset?


If you're a Jew and you're fine with the destruction of Jews, I'd still be repulsed because I don't like self-haters, but I'd be 1000x less repulsed than if you were a non-Jew who was fine with the destruction of Jews


Sounds like a reasonable assumption, but some people think destroying humanity is a good thing. https://en.wikipedia.org/wiki/Voluntary_Human_Extinction_Movement

I cannot fathom how they would think this, even assuming they are right about everything. If there are no humans, what difference would it make?


I think it's a fairly easy concept to understand - feels like the default view in a lot of my social circle, and I don't think this is foolish.

If a) human experience is, on average, bad (more suffering than pleasure) and/or b) humanity's impact on animals/the natural world is, on average, negative (however you measure that), then humans are a force for bad in the world, and we should consider making ourselves extinct. Also, humans have an immense capability either to wipe humanity out or to create exponentially more suffering as we expand. This is super high risk. For the most part, I think these claims are true and difficult to refute.

To disagree reasonably, I think it requires either optimism about the future (that we'll crash through existing trends and 1) Stop factory farming, 2) Reverse environmental damage; 3) Eliminate suffering, 4) Reach new levels of nirvana) or pessimism about the world without humans (wild animal suffering, inevitable destruction of earth, the next species to reach our level of evolution will be worse etc.).


I think this viewpoint assumes an objective "good" and "bad", which I also think is not a valid assumption. What can be good for one being can be bad for another, and vice versa.

And even that is a limited perspective. What can be good for an individual can be bad for a group, and vice versa. Larger groups can have other viewpoints, so that what is good for a nation may be bad for some states, cities, organizations, families, and/or individuals.

How can one say with certainty that the universe is better off one way or another? The universe will go on regardless.

Jan 23·edited Jan 23

I'm bumping on the second paragraph. If a proposed solution to a problem involves considering human extinction, one of the problems driving it shouldn't be the risk of humans causing human extinction. Maybe? Or maybe not, because hospice? I have to think about it more, but it feels wrong.

Also, I know that sunk costs are already gone, but I feel reasonably confident that, under natural selection, the sum of sentient experiences to date has been net negative (same caveats as above). Sapient intervention in/replacement of that process with an engineered one is the only factor I'm aware of that could turn it around enough that the scales could be decisively tipped to the other direction.

Sentience has already put a lot in the pot. Given that those are gone and can't be gotten back, I'm for trying to play out the hand in hopes of a result that, if we had it to do all over again, we would prefer that experiences existed at all rather than not. I suppose that's consciousness-ist of me, but nobody is perfect.


You're right about the first point. I should have stressed humanity's potential to destroy vast amounts of value in the earth/universe.

I agree with everything else you say.

And consciousness-ism is the most noble of the "-ism"s.


How many people in your social circle have committed suicide? Is this not a revealed preference for the ones who haven't? I think you should call them out for the poseurs they are, then.


Oh my god, you're so right. As I told my sister the other day:

"You claim to care about global warming, but you've closed the window to stay warm, is this not in fact a revealed preference for a warmer world!"

That showed her. What a poseur.

But I shouldn't be so sarcastic. Just as one's own preference for heat or warmth is almost completely unrelated to the optimal global temperature, one's own preference for life or death is almost completely unconnected to their calculation as to whether humanity is net positive or negative for the universe.

Even if one of my social circle thinks that their own life is net negative for themselves and others (which is not at all necessary to hold these beliefs) there are many other reasons to choose to live.

Suicide may serve as some kind of moral gesture or protest for someone who holds these views. But if somebody genuinely believes that destroying humanity is a good thing, there's an almost limitless number of more effective ways of working towards that goal than suicide.


Thanks for posting such an enlightening position, Jia. It is so alien to me as to be absolutely fascinating.

My one point nine cents…

a). I can't say I personally know of anyone who views life as suffering. Most people, from children to the elderly, find experience beautiful. I understand that some broken people, either ill or mentally disturbed, might disagree, but they are so rare as to be the exceptions that prove the rule. In summary, I find your first point to be outrageously contrary to experience.

b). The impact of humans on the pain and suffering of animals is minuscule compared to nature itself over the last 3.8 billion years. If life continues another 4 billion years or so, humanity's contribution to the problem is simply insignificant. On the other hand, as you allude to, we are the first species capable of wide-scale cultural problem solving and thus the only reasonable hope for reducing this pain and suffering. On the subject of this post, I would even say that humans plus AI are even more capable of doing so (improving the world for all).

My conclusion is that life is beautiful, and capable of being better for humans and other sentient life. We are the best hope now, at least until we augment ourselves with an artificial (AI) central nervous system.


You might care about other animals. I don't favor human extinction, but if I thought that all humans spent every day vivisecting unanesthetized dogs and chimps to death, and there was no other way to stop them, I would.


Cats are known to "play with their food". Is that evil? Should cats be eliminated if we could?


There actually are some pro predator extinction people out there. It's a wild end point for your morality, but I see how they take it there.


People rarely end friendships over relatively abstract differences. To end a friendship you need emotional bite, or you'll just think: damn, they find a really wrong line of argument convincing.

Besides, if you interrogate your friends' beliefs, you can almost always find scenarios in which they would make choices you find deeply objectionable. Philosophers and policy wonks are frequently friends even while having really deep disagreements on even more fundamental (philosophers) or much more imminent (policy wonks) issues.

Only when things get pulled into near mode do you usually start putting friendships at risk.


Some relevant context is that Page was putting serious effort into actually accomplishing this, and looked like he would succeed. He was CEO of Google, and was in the process of acquiring DeepMind which was at the forefront of AI, placing Google in a clearly dominant position in the field. DeepMind was founded specifically to achieve AGI. Both Page and Musk were likely fairly sure it would be accomplished in their lifetimes.

It wasn't abstract. From Musk's perspective, Page was actively working to kill him and everyone else.


It seems to me that many of our intuitions about fighting back against colonizers/genocides etc. are justice intuitions that rest on an implicit assumption of repeatability, or of being able to make credible pre-action commitments.

In other words, it's good to be the kind of creature who can credibly pre-commit to resist the invasion -- but that very fact should make us suspicious of trusting our intuitions about what's right once that commitment fails to dissuade the action, especially when it's truly a one-off event (not an alien race who will destroy others if we don't stop them).


That's written in strange English - so it's hard to fully work out the logic, but you seem to be saying that a one-off genocide isn't worth fighting.


It's not worth fighting if you believe the amount of suffering caused by resisting will exceed that caused by letting it occur.

If a million people might be exterminated, and they have a 10% chance to avoid that fate by releasing a weapon they know will kill 10 million people, and it's a one-off scenario, then no, that's not justified.
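To spell out the arithmetic behind that claim (a sketch using only the numbers in this comment, reading the 10% as the chance the million are saved and assuming the weapon's 10 million deaths happen either way):

```latex
\[
\mathbb{E}[\text{deaths} \mid \text{release weapon}]
  = 10{,}000{,}000 + 0.9 \times 1{,}000{,}000 = 10.9\ \text{million},
\qquad
\mathbb{E}[\text{deaths} \mid \text{don't}] = 1\ \text{million}.
\]
```

On a pure one-off, deaths-only count, releasing the weapon comes out roughly ten times worse, which is the comparison being gestured at.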

Of course, it's rarely a one-off scenario. Usually the reason to fight involves deterring that behavior in the future. Even if Ukraine knew they would lose, resisting makes it less likely that countries will do that in the future.

However, the issue with AI seems different in that it really is one-off, so I don't necessarily see the whole alien-fleet thing as relevant.


Clearly that's not any kind of normal morality or international law - once a country is invaded it has a right to self-defence. This isn't contingent on counting every single body. If the other side loses more in the end they are still the bad guys, unless the response is wildly disproportionate.


Looks like classic deontology vs. utilitarianism, to me! Peter Gerdes is using a utilitarian principle, "what course of action will minimise overall suffering, regardless of who 'wins' or 'loses'" and you're using a deontological one, "invaded peoples have a universal right to defend themselves, regardless of how much suffering it causes". Not sure we can resolve this without fixing all of philosophy!


On the other hand, outside philosophy nobody is strictly one or the other. Maybe philosophers should try to mimic the ordinary folk here and be neither deontologists nor utilitarians.


I feel this way of thinking about the issue is only possible if war is an abstract concept to the person thinking about it.


This thought experiment IMHO can't ignore the effect of deterrence: a credible threat of causing immense suffering discourages that choice so much that it's simply never made and results in no actual suffering. Thus a strong principle of treating some harmful acts as taboo, and refusing to do them ever, is not the same as minimizing harm.


Yeah, one problem with naive utilitarianism is a vulnerability to hostage crisis: a supervillain can get you to murder x people by threatening to murder x+1. A game theoretic utilitarianism might seek to minimize harms, while also flipping the table should anyone try to exploit your values to compel behavior. "Surrender or we'll kill your friends" is clearly in this problem space. "Surrender or you'll kill all our friends" would make an odd parley but involves similar questions about what coefficients to slap on to the moral calculus. The moral weights questions are analogous even if the invader's envoy is a local, that is, even if the trade-off is raised only inside your own brain.


I don't know, can't we just outline a subset of utilitarianism and call it dumb utilitarianism? I'm thinking specifically of Sam Bankman-Fried's 51% coin tosses. I would propose adding to the dumb category anything that tries to use thought experiments with unrealistic assumptions to make arguments about actions in reality. I'm not sure anything in real life has ever been one-off, rather than just option A versus option B.

Jan 23·edited Jan 23

If I'm targeted for genocide, I don't care if it's just to resist, I care if it increases my chance of survival.


To nitpick, if I'd die but I'd take enough of them with me, I think I'd make that trade.

Jan 23·edited Jan 23

I can live with you and me thinking like this (and I suspect that if anything prevented me from doing the same as you, it'd be lack of courage/ability rather than my having dispassionately reasoned that my enemy's right-to-life was the same as my own!), but it flipping terrifies me that people who decide nuclear doctrines might also think like this.


To be clear, I'm suffering from PTSD. It might even have made me into a bit of a utility monster. I don't particularly recommend this state of mind.


Hopefully the nuclear decision makers are also flipping terrified that the enemy's nuclear decision makers think like this.


Yeah, which is why the war in Ukraine has to be going the way it is. Sure, things would end much faster if we just sent troops in, but there is a real danger that Putin might just launch all the nukes if he loses. The only real option we have is to drag out the war as much as possible in the hopes that Russia collapses on its own, or is at least weakened enough to neutralize it as a threat.


They do think like that. And it’s kept the peace.


Have you ever seen the film “Fail Safe”?


>it flipping terrifies me that people who decide nuclear doctrines might also think like this

They do! The French are hilariously explicit about it.

De Gaulle: Within ten years, we shall have the means to kill 80 million Russians. I truly believe that one does not light-heartedly attack people who are able to kill 80 million Russians, even if one can kill 800 million French, that is if there were 800 million French.

Pierre Marie Gallois: Making the most pessimistic assumptions, the French nuclear bombers could destroy ten Russian cities; and France is not a prize worthy of ten Russian cities.


There seems to be a tacit assumption of utilitarianism.


I was explaining why one might not think that was absurd, and I happen to be a utilitarian, so it was the obvious example.

But my original point is that our intuitions are formed in cases where it is a repeated interaction, or where we have evolved to treat it as if it were, and regardless of your moral theory I think that is a reason to be skeptical of the analogy.


The issue maybe is how close your survival and way of life is bound up with the outcome of the war.

I read a book on the art of negotiation a while back and I remember the chapter on “fuck you as well, then!”

It wasn't called that, but it could've been; the premise being that you can press for a better deal when you have the upper hand, but at some point the entity you are pressing on decides that if it's going to get f***ed then you are too.


I'd just invoke FDT reasoning here. If you were predictably going to be like "okay you're actually invading, well then we'll stand back" then you get invaded if the enemy can remotely predict you well. Then repeatability doesn't matter and you also don't need a commitment mechanism.


That's kinda irrelevant, because the whole issue of deterrence, whether general or specific to this case, isn't really applicable here, so we still shouldn't use the analogy of resisting the alien fleet as grounds to think we might want to avoid having ourselves replaced by AI.

Also, as long as the aliens aren't demons, AIs that can simulate you completely, etc., the theories won't differ.

--

But on that issue, I agree that if you are deciding at a point where the invasion might still be deterred, and you can credibly commit to resisting in a way the invader might see, then you should.

But it's a different question from what you should do if it has already been launched and you can't possibly deter it. The fact that it might be good to be the sort of agent who does decide to resist even in that situation is also answering a different question.

IMO the whole FDT/CDT/etc. thing is just a matter of how you want to precisify the question being asked, not a genuine issue for disagreement. You get FDT when you ask what kind of agent gets the best outcome, but that's not the same question as which of these two choices is better.

Basically, once you fully formalize the question being asked mathematically, so there isn't any ambiguity, there isn't any question left to be answered... you just do the sums.


I disagree about how CDT behaves. CDT solves the repeated case, FDT solves the being predicted case and the repeated case.

I also don't think that you need to be able to simulate someone completely to predict them! Of course this depends on the question, the details, how much you've deliberately obscured your behavior patterns, etcetera.

----

Are you asking 'what about if the aliens invaded but were not predicting you'? If so, then yeah giving up may be a good idea. Was I misunderstanding what you said above?

----

I disagree on FDT/CDT purely being a matter of how the question is specified.

Ex: CDT doesn't pay up on Parfit's Hitchhiker because it mathematically doesn't include that logical (subjunctive) dependence.


Or it could be that people like to live freely and don't want to be human sacrifices to someone else's idea of "progress," as if that were some objective rule of the universe.

Suffering is a real thing, not some utilitarian form of a checking account. It never "balances" a single thing. Redress for suffering doesn't eliminate the suffering, nor does even a more prosperous future state. If anything, you bear it your whole life no matter how rich you get. Rosebud effect.


It's quite undemocratic to have my children die for Page's personal obsession.


Seems to me questions about art and individuation should be irrelevant. If these other creatures experience overall net joy, why judge them because it doesn't come from things we would recognize as art? That seems like just another way of treating the person who enjoys reading genre novels worse than the person who reads great literature.

It shouldn't matter where the joy comes from. The hard part is figuring out how much joy a single complex mind feels compared to many simpler ones.

author

I don't know if I agree - I think you need preferences over sources of joy to avoid endorsing wireheading. Maybe the preference should be over complexity of joy sources rather than hard-coding in specific ones, but I'm not sure.


What's so wrong about wireheading? I think the reason wireheading often seems bad is that we imagine stimulating the reward system, but that's not joy -- stimulants are rewarding, but reward isn't proportional to pleasure. If it's really joy, what's the issue?

Amusingly, I think what got my wife interested in me was my claim on our first date that if we could we might have a moral duty to fill the universe with Turing machines that sat around being happy.

Jan 23·edited Jan 23

To me, joy isn't the good, it's a measurement of the good. Wireheading the universe seems like nature says "do this", pointing a finger at creativity, learning, playing, complexity and positive social relationships; and we respond to this by tiling the universe with a quadrillion fingers.


Isn't that just semantics? Like what do you call the property that joy measures (I tend to think that would be joy) and isn't that then the good?


You'd probably call it something like welfare, wellbeing, living a good life, etc. A person's welfare increases if their life has more joy in it, but that is far from the only thing that can increase their welfare.


I think I agree. Joy is a reward signal, but in practice it can be misaligned with what "should" cause joy. The quotation marks are there because it's an attempt to deduce an ought from a combination of ises and preferences.

(And yeah, this would come close to certain religious views about chastity.)


My response is essentially

- 'Not for the Sake of Happiness (Alone)': https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone

- and 'Fun Theory': https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence

I have preferences for more than just experiencing the happiness-emotion. I want challenge with that. I also have a preference for not being just directly fed the 'experiencing-interesting-challenges' cocktail of emotions as well.

joy-the-emotion is a proxy for what I want. This is why I do hard things today that I don't even like at all, not even for the challenge, because I have values other than experiencing joy.

Someone actively wireheading is certainly better than the opposite, or a rock, but I don't value them in the same way as someone thinking.


But that's not really an argument. If you just want to be a moral anti-realist and say, "Sorry, I am in fact not going to maximize happiness; I will maximize these other things," then I don't believe I have an argument that you've per se made an objective mistake. I'll try to stop you when I believe you are reducing net happiness, because I do find that compelling.

However, I do think it comes at the cost of taking moral realism really seriously. If you are going to be a moral realist and think it's like discovering the laws of physics, and that some intuitions have to be rejected as misguided, then I think it's hard to avoid collapsing into full-on happiness maximization, since the intuitions that we shouldn't maximize happiness seem to have the same kinds of issues as other intuitions we accept as mistaken (e.g. preferences for yourself or friends over others).


I’m not sure we should really be killing the unhappy, all the same.

Utilitarians are a strange breed. The idea that we maximise happiness can lead to odd ideologies (like the absurd repugnant conclusion).


The repugnant conclusion is only repugnant because we imagine lots of people living in squalor... but that's evidence you set your zero-utility level too low, not that we shouldn't maximize utility. In all seriousness, while I'm quite happy now, I was pretty unhappy throughout enough of my life that I think it is plausible I'm relatively close to the overall zero line.
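To make the zero-point observation explicit, here is a sketch of the standard total view the comment is assuming (the symbols are mine, not the commenter's):

```latex
\[
W = \sum_i u_i, \qquad
\underbrace{n \cdot u_A}_{\text{small, very happy world A}}
\;<\;
\underbrace{m \cdot u_Z}_{\text{huge world Z with } u_Z \text{ just above } 0}
\quad \text{for large enough } m.
\]
```

Whether the world-Z lives read as "squalor" depends entirely on where the zero of the u scale sits; moving that zero changes which lives count as barely worth living without changing the maximize-the-sum rule.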

The reason we are tempted by it is that there are very good reasons for not suggesting that someone would have been better off not being born -- very good utilitarian ones (doing so hurts people; actually killing the sad would harm loved ones, create problems with incentives, etc.). Basically, IMO it's just a version of the transplant-doctor argument, and the solution is the same: recognize that the kind of precedent you set (people being scared of doctors) often has much bigger effects in the real world than the local utility.


They don't even get happiness; they talk about it like it was pins being manufactured -- little widgets you can produce, as if you can make people be happy or as if happiness can be piled up or shifted around.


Ecclesiastes 4: 2-3

2 Then I praised the dead who are already dead more than the living who are yet alive;

3 and more fortunate than both is he who has not yet been, who has not seen the evil work that is done under the sun.


The argument that I'm giving is that I have preferences for more than just happiness. I think other people also have preferences like that. They're at times confused, because *most of the time* rounding everything good off to 'happiness' works fine. Essentially I believe that upon enough reflection ~most people would go "okay I do want more than just experiencing joy all the time". Joy is a component of the good, not the pure essential part of the good.

There is no morality inscribed in the stars for every species; but, humans (especially in modern culture) do tend to congregate around a certain set of values. There's no paperclipper-human! I'm sure there are people who even upon reflection would want to wirehead, and I'd probably say 'fine', but I think most people have preferences for more than that.

(ex: see the classic story trope of 'Oh I could stay in this illusory realm of joy, but no I have to save my friends and family'; that isn't a pure example most of the time but is indicative)

> some intuitions have to be rejected as misguided then I think it's hard to avoid collapsing into full on happiness maximization as the intuitions that we shouldn't maximize happiness seem to have the same kinds of issues that other intuitions we accept as mistaken (eg preferences for yourself or friends over others) have.

I think most people have the opposite problem, of rounding everything off to happiness.

I also agree that intuitions are often not actually our underlying values. This is part of why I specify on reflection.

I think we should maximize 'the good'. There is no single human good, but there's a shared core which I believe for most people isn't the same as 'wire up all the positive emotions to max for eternity'.


Lots of people have preferences for things like racism or the suffering of others. The fact that someone prefers something doesn't always mean it's good to satisfy that preference -- even other things being equal.

Even if it doesn't hurt others, I think lots of people can think of things that happened in their life that they very much preferred wouldn't happen but, looking back, are glad they did. So just preferring something isn't enough to justify saying it's better to satisfy that preference.


I actually reject the intuition that we should maximize happiness for the similar reasons to why I reject preferences for yourself or friends over others. It seems like if you really valued others equally, you would value all their preferences for how they want to live their life, not just their preference for happiness. Happiness maximization privileges the interests of people who value happiness over the interests of people who value other stuff.


But we don't really value their preferences. If your child says he really, really doesn't want to try food X or go to school or whatever, but you're quite confident that they'll later be glad they did, you absolutely force them to do so. Basically all of child rearing shows that we are happy to violate preferences when we think someone will be glad we did later. Yes, sometimes with adults we do otherwise, but I think that's because often they won't later be glad you ignored what they said.

But once you accept this it doesn't help against wireheading, because it just says: and make sure you make them glad afterwards.


> I have preferences for more than just experiencing the happiness-emotion. I want challenge with that. I also have a preference for not being just directly fed the 'experiencing-interesting-challenges' cocktail of emotions as well.

What if someone invented a drug that gives you the *exact* sense of feeling that you get from having achieved something after lots of effort? Take the 15 happiest minutes of your life and play them over and over without your brain ever liking it less. Why would that not be something you want?

Also see https://www.youtube.com/watch?v=yafsGX3qLE0&ab_channel=RickSanchez

Jan 23·edited Jan 23

That's basically the point of the first linked post I gave! (modulo replaying the 15 minutes, which does make it better than normal wireheading at minimum)

Me and Peter Gerdes get more into the weeds, but I have preferences about *reality*. I could certainly be tricked, but I prefer not to be tricked. I want actual challenge. I want to talk to interesting people (or at least interesting automatons/NPCs, I'm not against wacky immersive VR games in-of-themselves). Solving problems for fun and for a reason is fun and interesting. I don't play multiplayer games *just* because I'm pretty decent at them (happiness in some form) but also because the good ones are chock-full of thought and strategy. I program because I find the process interesting even when it sucks, not just for the end results.

All of this is paired with preferring that my happiness and other emotions be hooked up to something real.

I have preferences about what I'm actually doing in reality, rather than just what I end up experiencing. So if you offer me such a pill, I'd refuse. I know I'd love (and more) feeling it, but purely positive emotion (more shortly, happiness) isn't the only thing that I want.

I think that people round off what they want to 'happiness' because that works most of the time, but that it doesn't work when you get hyperoptimized pills or whatnot.

Some people also tend to think of not having to do anything as a worthwhile goal by itself, which being on a pill would give you. However, I think that's usually because of our current jobs (etc.) being mind-numbing and lots of people not finding a hobby they enjoy, because there's still lots of literal costs and also just barriers to that. In the future all of that can be massively improved, so that the lows are less terrible, and the highs are higher and more textured in what they mean.

I think that first post I linked made me realize several years back that yes, I can want more than just happiness by-itself.


What would be your reaction if you found out that our planet is merely a simulation and not a part of reality, without the option to ever "escape" the simulation? Would you want your "life" to end?


(All that follows is, of course, my humble opinion)

Wire-heading is bad because it confuses the issue of what pleasure is and what its purpose is. To accept wire-heading as a desirable goal you have to believe that pleasure, defined as most of us would use that term, i.e. the sensation of positive feeling, is primarily a state of being that we should seek to be in as often as possible. On the surface this seems to make sense. Pleasure seems to be something that is desirable in and of itself, a condition that many ethical systems regard as necessary for something to be considered a truly worthy goal, so why shouldn't we seek it out for its own sake? All sorts of practical problems can be raised against this (which I won't repeat here), but most are presumably engineering problems that could conceivably be worked around.

But the real problem is that wire-heading starts off on the wrong foot by thinking of pleasure as a desirable thing rather than what it really is: a SIGNAL pointing towards desirable things. Pleasure evolved to direct an organism towards things that are good for it and for most of our evolutionary history it worked perfectly fine at that. When we derive pleasure from eating sweet things, that is our body telling us that the thing we are eating is good for us and we should seek out more of it. Such a signal doesn't work as well in our modern world, where it leads to all sorts of individual and societal health problems. Similar arguments can be raised for many of our pleasures being out of step with things that are actually good for us.

Viewing pleasure as a mental signal evolved for a specific purpose in a specific environment should, IMO, disabuse us of the idea that it can be trusted as an infallible guide to what we should seek. Which isn't to say it isn't important. The Good Life should be pleasurable. But determining what makes the Good Life is more difficult than just figuring out what gives us the most pleasure.


I think this indulges in a form of the naturalistic fallacy. Who cares why it evolved... evolution selects for some pretty awful stuff. I don't see what evolution selected for as a very reliable guide to what is desirable.


That sort of dovetails with my point. Pleasure in humans is an evolved feature, and it evolved in a specific set of circumstances to accomplish specific ends. Why should we trust it as a guide (let alone our sole guide) for what end goals we should desire?


There's probably a YouTube channel somewhere which is just the ending scenes from hundreds and thousands of video games.

Jan 23·edited Jan 23

Wire-heading is a vice, in the literal meaning of that word, based on "vicarious" meaning substitute. It is taking the reward without putting in the effort to deserve it, tucking straight into your dessert without having finished or even attempted the boiled cabbage which should have preceded it :-)


Vicis, order as in vice versa, is one word, and vitium, vice, is another. Not even the same PIE root, by most accounts.

Jan 23·edited Jan 23

Sounds like for these similar words there was originally (in IE?) a cluster of meanings centering round various aspects of "order", such as "disorder" or steps misordered or omitted.

Whatever the origin of the word, I still maintain that the essence of "vice", whatever its form, is the seeking of fake or unmerited rewards, without the commitment or effort which should be needed. For example, people once considered lying to be a vice, and perhaps some still do, presumably because evasive lies are a way to try and avoid awkward consequences of telling the truth.


I'd say it's not just a question of the complexity of the joy, but the complexity and nature of the mind. Like, one could write a minimal reinforcement learning simulator with a happiness variable in a few dozen lines. (I don't mean a program controlling an artificial neural network, just a program whose input influences its happiness variable, you can put it into an "environment" where its output influences its input, and it looks for simple patterns in how its output influences its input). And you could make it "suffer" or be "happy" by feeding it appropriate input. But it wouldn't be suffering or enjoying in any meaningful sense: it's just a number-crunching program, one of whose variables has been named happiness.
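A minimal sketch of the kind of program described above, under my own choice of names and toy "environment" (nothing here is asserted by the commenter beyond "a few dozen lines with a variable named happiness"):

```python
import random


class TinyAgent:
    """A few dozen lines of number-crunching with a variable named 'happiness'."""

    def __init__(self, actions=(0, 1)):
        self.actions = actions
        self.happiness = 0.0                 # just a float with a suggestive name
        # Running estimate of how each action has tended to move 'happiness'.
        self.estimates = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def act(self, explore=0.1):
        # Mostly pick the action whose past pattern looked best; sometimes explore.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.estimates[a])

    def observe(self, action, signal):
        # The input signal directly moves the 'happiness' variable...
        self.happiness += signal
        # ...and the agent updates its simple pattern: average signal per action.
        self.counts[action] += 1
        self.estimates[action] += (signal - self.estimates[action]) / self.counts[action]


def environment(action):
    """Toy environment: action 1 usually 'pleases' the agent, action 0 usually doesn't."""
    return random.gauss(0.5, 0.2) if action == 1 else random.gauss(-0.5, 0.2)


if __name__ == "__main__":
    agent = TinyAgent()
    for _ in range(1000):
        a = agent.act()
        agent.observe(a, environment(a))
    # We can make the number go up or down by changing the environment, but nothing
    # here suffers or enjoys in any meaningful sense -- which is the comment's point.
    print(f"happiness = {agent.happiness:.1f}, estimates = {agent.estimates}")
```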

It can be argued that an AI with a too-simple value system similarly isn't happy in any meaningful way, even if some parts of it are complex.

This is also one reason I don't care about insect suffering: I don't think their minds are complex enough to matter. It's not a matter of magnitude (i.e. that they suffer less), but rather it's incommensurable to human suffering. After all, if I multiply all numbers by 1000 in that small reinforcement learning program, including its happiness variable, it won't be happier or sadder in any meaningful sense, and the level of its happiness won't matter more.


Yes, I'm talking about the genuine experience of happiness, and I'm a realist about mental states in Chalmers's sense.

It's not enough that it acts as if it feels pleasure or pretends to feel pleasure; it has to actually feel it, and to the appropriate intensity.

That's a really hard problem, because I don't think it's obvious that experiences are independent of computational substrate (or even that there is a fact of the matter about which computation is being executed by a given hunk of matter).


Wireheading is bad in real life because a wireheaded population would fail to produce the resources necessary to sustain life, being directly self-defeating.

We experience joy from certain activities in order to induce us to pursue them. Skipping the good activities to go straight for the feeling defeats the purpose. We no longer get the good activities (which bring about value) and end up with nothing. A wireheaded AI would do nothing. At best it would do the minimum necessary to keep itself going, like a cybernetic pothead.


I don't think this conversation is about the practical issues but about the in-principle desirability.

But actually it's not at all clear that that is true. Joy and reward circuits are different. Besides, you don't have to eliminate all difference between states to wirehead; you could just boost the base level, and we do have evidence that's not a problem for productivity, as hypomanic people tend to be more productive, not less.


Excellent train of thought, but there are limitations. The optimal amount of joy we feel is the result of evolutionary experimentation. It's possible that higher levels of joy will result in more productivity over the short term, but with unintended consequences over the long term. For example, I read a study that measured the long-term financial outcomes correlated with optimism. Overly optimistic people tended to make riskier financial moves, and as a result had worse long-term outcomes.

Being hypomanic may make you hyper-productive in the short term, but it also might make you go to the casino and put it all on black! Even if we paired hypomania with intelligent reasoning, being overly confident or optimistic can lead to suboptimal decision making. By definition, such a state of mind suppresses doubts and fears in order to maximize action. The consequences of that are clear.

All this to say, I don't doubt that mental and emotional modulation is coming very very soon and it will have profound consequences on our lived experiences as humans, but I also wouldn't be surprised if the "optimal" state of mind isn't that much different from what we already have.


Woah, new fear unlocked! Imagine an AI crackhead that breaks every computer system in the world so it can run its juicing protocol to the extreme 😂 Man, the future is going to be weird as hell.


If I just imagine a universe filled with humans sitting in chairs, plugged into some machine that fills their brains with joy, indefinitely surviving and in some way mentally thriving, that picture fills me with absolute disgust and revulsion. I think that would somehow be infinitely worse than the horror-show that is our current reality on this planet. Why is that, when there is so much suffering here, and in that world there would be none?

I guess I value a lot of other things humanity engages in, beyond just how we feel about them or how they make us feel. Their existence, apart from us entirely, long after we're extinct, seems to have value in itself for me, and those joyful beings plugged into their chairs have close to none. Is it logical? Probably not, but nobody said what you value has to be, it's just sort of a personal/genetic accident.


why do you need a wife lol? why not just stimulate the reward centers of your brain that a wife may activate? you could eliminate all the downsides of having one; no fights, no worries, just pure maximized wifely happiness distilled into a drug.

Would that be better than now?


Well we don't know how to do that yet. But also just because you are filled with extreme joy doesn't mean you have to do nothing ... reward and pleasure are different. I mean when it's more useful for everyone then sure.


I was being provocative to try and highlight that joy or happiness is not the state itself, but involved with embodied objects. Wireheading removes the state from the object, so it removes the wife eventually.

You can't really tear apart embodied things just for their result without losing a lot.


I agree with the general sentiment, but would also add that wireheading in the debilitating sense leaves the species and the universe vulnerable to less joy-interested agents, or just stuck at local maxima of joy. So yeah, wireheading: but go all the way. Make sure you're not just at a universe/multiverse-local max of wireheading.


Yes, absolutely. I only meant this as a comment about the in-principle desirability. You are absolutely correct that there are all sorts of practical considerations that require balancing.

However, I see no reason we can't combine a degree of wireheading with high capability. The fact that reward and pleasure differ means there isn't necessarily an incompatibility between at least a degree of wireheading and remaining capable.


Yeah, I think we're in agreement. Explore and exploit.

Art or other information communication type things might be exploration instrumental to exploiting. But the terminal goal is to exploit joy (or whatever research says is the highest-valence state) to the max.


Having done a fair bit of drugs when I was younger, if I could simply always inhabit the feeling of pure joyous love and wonder I felt on my first MDMA experience, I'd absolutely do that. It's just that it's damn hard to repeat that, and in practice chemical stimulation usually ends up just scratching an itch, not keeping us in the constant pure joy of a great e/psychedelic trip.

But I've done enough drugs to also know that's not how drugs feel if you keep doing them -- so it depends hugely on how wireheading ends up working out.


I had the opposite experience. I went through a depressive episode where I couldn't feel joy or any other positive emotion. I quickly discovered that while that sucked, I still had plenty of external goals that I wanted to accomplish. I didn't lose all motivation, I was able to keep acting normally until I recovered. I was glad when I recovered, but I also realized that if I hadn't recovered, I would still have had things to live for. I realized that joy and other positive emotions were certainly valuable, but they weren't the be-all and end-all of existence. They were more like seasoning, things that made accomplishing my goals better, but not goals in and of themselves.

You might argue that the goals I wanted to accomplish were bringing joy to others, or behaving prudently in the hope I'd recover and be able to feel again. There was some of that, sure. However, I also did things like read novels, learn trivia, and contemplate philosophy. I still wanted to do them in the absence of the emotions that normally came with them.

I'm glad I can feel again, but I don't regret that episode that I went through. It sucked, but I came out of it with valuable firsthand knowledge about what things in life matter.


This seems important to me. I have been thinking along those lines after listening to D. Goggins talking to A. Huberman. "What I do sucks, every day, but I keep on doing it and it's much better than before I started doing it." (My impression, no real quote). That reminded me of the Zen approach. Action itself. Still, everything's worthless without love, I heard and believe.

Jan 25·edited Jan 25

It depends on how you define love. If you mean the warm, fuzzy feeling you get when you think about the people you love, I couldn't feel that. But if you mean a desire to be involved in the lives of people you love and make them happy, I still wanted to do that, even though I couldn't feel any positive emotions while I did it.


Oh, never mind. I consider the importance of love to be a behavioral thing, whatever the experience.


So, you have a preference for wire-heading; that is fine - some people do. But many others, including myself, don't. I would absolutely not be on perpetual MDMA if I could, even with no negative side effects. I just want other things out of life. Why should your wish for joy/wire-heading/perpetual MDMA be extended to everyone as a moral philosophy? I don't think this would be maximizing the good in the world in any real sense. Maximizing joy in the way you are describing would just not lead to a world I would endorse. I'm perfectly fine in principle with you, and anyone else who truly prefers wire-heading, doing so, as long as it does not interfere with maximizing actual good in the world.

Of course, defining what is actually good is impossible - I think it would be something like the sum of the true preferences of everyone. I don't think this could actually be given a numerical value that we could optimize; it is a fuzzy concept. Still, I believe there exists some common core of what good in the world means, and that we can work towards obtaining it. The simplest way to do that is probably to work towards what you personally think leads to optimizing the good in the world.


Why should it be the sum of preferences? Sure, you can give up on making arguments here and just get down to brute preferences, but the tacit assumption of Scott's piece assumes at least a semblance of a moral realist attitude towards the question.

If you want to just look at what people actually prefer, then they have a brute preference for avoiding human extinction, so you agree that AI art appreciation etc. is irrelevant.


"in practice chemical stimulation usually ends up just scratching an itch not keeping us in the constant pure joy"

Related: the debate between Socrates and Callicles about whether a person who feels pleasure (=utility) when scratching himself should go on scratching himself for ever and ever:

SOCRATES: ...There are two men, both of whom have a number of casks; the one man has his casks sound and full, one of wine, another of honey, and a third of milk, besides others filled with other liquids, and the streams which fill them are few and scanty, and he can only obtain them with a great deal of toil and difficulty; but when his casks are once filled he has no need to feed them any more, and has no further trouble with them or care about them. The other, in like manner, can procure streams, though not without difficulty; but his vessels are leaky and unsound, and night and day he is compelled to be filling them, and if he pauses for a moment, he is in an agony of pain. Such are their respective lives:—And now would you say that the life of the intemperate is happier than that of the temperate? Do I not convince you that the opposite is the truth?

CALLICLES: You do not convince me, Socrates, for the one who has filled himself has no longer any pleasure left; and this, as I was just now saying, is the life of a stone: he has neither joy nor sorrow after he is once filled; but the pleasure depends on the superabundance of the influx.

SOCRATES: But the more you pour in, the greater the waste; and the holes must be large for the liquid to escape.

CALLICLES: Certainly.

SOCRATES: The life which you are now depicting is not that of a dead man, or of a stone, but of a cormorant; you mean that he is to be hungering and eating?

CALLICLES: Yes.

SOCRATES: And he is to be thirsting and drinking?

CALLICLES: Yes, that is what I mean; he is to have all his desires about him, and to be able to live happily in the gratification of them.

SOCRATES: Capital, excellent; go on as you have begun, and have no shame; I, too, must disencumber myself of shame: and first, will you tell me whether you include itching and scratching, provided you have enough of them and pass your life in scratching, in your notion of happiness?

CALLICLES: What a strange being you are, Socrates! a regular mob-orator.

SOCRATES: That was the reason, Callicles, why I scared Polus and Gorgias, until they were too modest to say what they thought; but you will not be too modest and will not be scared, for you are a brave man. And now, answer my question.

CALLICLES: I answer, that even the scratcher would live pleasantly.

SOCRATES: And if pleasantly, then also happily?

CALLICLES: To be sure.

SOCRATES: But what if the itching is not confined to the head? Shall I pursue the question? And here, Callicles, I would have you consider how you would reply if consequences are pressed upon you, especially if in the last resort you are asked, whether the life of a catamite is not terrible, foul, miserable? Or would you venture to say, that they too are happy, if they only get enough of what they want?

CALLICLES: Are you not ashamed, Socrates, of introducing such topics into the argument?

SOCRATES: Well, my fine friend, but am I the introducer of these topics, or he who says without any qualification that all who feel pleasure in whatever manner are happy, and who admits of no distinction between good and bad pleasures? And I would still ask, whether you say that pleasure and good are the same, or whether there is some pleasure which is not a good?

....from Plato's dialogue Gorgias: the debate with Socrates' most formidable opponent, Callicles.


Yah except Socrates never got to try modern drugs. It does seem possible to induce extreme joy but just not over the long term. There is every reason to think that there is just a stronger tolerance mechanism here that could still be overcome in principle.


I think this is what attracts a lot of people to meditation communities. Wanting to be able to replicate this or similar experiences. Anecdotes say it's possible. But, anecdotes.


Yah, I keep meaning to give meditation a serious try after such anecdotal claims on an earlier article, but I keep not getting around to it and am not totally sure about the best way to go about it (I mean, Harris's app seems decent, but it's subscription-based and given the chance I'll just flake on it...).


My rec for giving it a serious try would be this site: https://midlmeditation.com/

Jan 23·edited Jan 23

We definitely don't only value joy, either individually or as a singular moral idea.

The way we work individually is that things in our environment trigger our instinctive reward function, and that causes us to value those things -- importantly, not as means for getting more reward, but as ends unto themselves. We value art and consciousness and individuality and so on because those things have, in the past, triggered instincts having to do with aesthetic pleasure, the desire for status, the desire to help kin, and so on.

Our motivations aren't striving to approximate some universally correct end (which is actually a self-contradictory idea, since an imperative can only be "correct" relative to some terminal goal). We just value the things we happen to value, as a consequence of our particular biology and environment.

Joy is one of the things we experience when something triggers our reward function. We do also value joy as a terminal goal, in addition to whatever in our environment that part of our reward function reinforces. I think that's because if we didn't value the experience of what we value changing, then we'd be motivated to avoid that feeling -- having what you value change is generally not great for whatever you value now. Evolution needs our utility function to be continuously re-shaped by our reward function, however, so it's built us to also instinctively value that experience of change.

Morality, I'd argue, is really several different things that we confusingly conflate under one term: it's a social technology that prevents intractable collective action problems by getting people to precommit to acting in the collective interest; it's social pressure by compassionate people to promote compassion; it's a bunch of random cultural memes that have latched on to the term to free-ride on the significance of the other two -- and maybe a few other things. I don't think anything we call morality can really be reduced to "promote joy", however; if it did, that would suggest that an ideal society would involve everyone becoming a brain in a jar, wireheaded to experience nothing but maximum joy 24/7, and I don't think many people would really endorse that.


Yes, of course many people value many things. Some people value racial purity or even the suffering of others. I don't find what other people might prefer all that important in this discussion.

Even when we take these intuitions seriously in philosophy, we don't take them at face value; we look to integrate them into a common principle, correcting for bias and self-serving motivation. I tend to think that if you take that idea seriously, you either give up or end up saying you have to abstract away a great deal and treat many of those preferences no differently than we regard biases against people who look different from us, and say they are mistaken.

If we get down to the philosophical roots of the issue, I alternate between being a moral realist who thinks it's just objectively correct that one should prefer outcomes with greater net pleasure, and being a moral anti-realist who finds only that concern persuasive as to what I feel normatively motivated (on reflection, in ideal circumstances) to do -- and I'm not sure I'm convinced they are different positions.

Jan 23·edited Jan 23

So, if I were to show a you a box with an extremely reliable life-support system, which would shut down all of your higher thought processes and cause you to experience nothing but an extreme sensation of joy, would you agree to be sealed inside that box for the rest of your life? Would you force other people into the box who didn't want to go, on the grounds that maximizing joy is the only true morality, and consent is just some crude approximation? I'd certainly fight, very violently if necessary, against the imposition of something like that.

When you have an imperative statement like "you should do X", the only way you can ever have evidence for or against it is if you specify an end, like "you should do X in order to promote Y". That can be reduced to the declarative statement "doing X will promote Y", which is testable; disprovable. Without that terminal goal Y, the statement doesn't correspond to anything declarative. It's half an idea, like a verb without a noun, and it can't describe anything in reality- if it did, we'd have no way of knowing, since you need evidence to learn things.

There can be no such thing as a single correct terminal goal. For something to be "correct" in that sense means that it promotes some terminal goal, so the idea both posits a singular terminal goal and implies another, higher goal that it promotes- a self-contradiction.

A lot of racists do value racism as a terminal goal- that's a problem for you and me because it conflicts with our own terminal goals and various kinds of morality, not because the racists are failing to value a one true goal of reality. If the terminal goals of some future post-human mind conflict with my own, that's similarly not a conflict that can be resolved by re-imagining our terminal goals as secretly instrumental.

Expand full comment

I'm not at all convinced that you can experience extreme joy without those higher thought centers. So I'd need lots of convincing.

And I'd take very seriously the suffering I imposed on all the people who became scared they might be forced in, or upset that their relatives were. So in the real world I'd probably feel that convincing people was necessary, for the same reason docs in the real world shouldn't kill people for transplant organs.

But if you want me to imagine that all those features don't exist- that I find some isolated spaceship of people where no one else could possibly find out what I've done, or that I could do it Thanos-style- then yes.

Expand full comment

FWIW I doubt many abhorrent beliefs like bigotry are actually terminal goals, since, for instance, racism has a hard time coexisting with accurate and balanced firsthand knowledge of the group in question.

Expand full comment

There's a great podcast on the life of Roger Ebert https://www.watchingtogetheralone.com/p/siskel-and-ebert-and-the-movies-and

The rise of less expensive filmmaking in the 70s and 80s led to the rise of film criticism, because it was easy to make crappy movies. A person with no taste in art might think NFTs and AI are high art, but there's no reason there won't eventually be a Rotten Tomatoes-style site for AI "art" reviews. I read in the art world that money launderers sometimes get caught/flagged in auctions for buying bad art.

Expand full comment

> I read in the art world that money launderers sometimes get caught/flagged in auctions for buying bad art.

How can anyone tell?

Expand full comment

It takes an artist to discern good art from bad. That said, there is a lot of middlebrow. One interpretation is that what counts as "good" art is defined by society; a more conventional definition might base it on adherence to classical form, or on originality. But in a lot of cases there is a pretty big difference between refined, polished art and low-effort art. Mixing the two is sometimes original and can elevate low to high and vice versa. I am not a super art historian or anything, but having read more than a few long essays and short books on the subject, one can sift out the lowest quality. Anyone who knows anything about low-quality art wouldn't pay a certain amount for it, so it can look suspicious when the piece isn't a "name brand" like Dali or Chagall: https://www.imf.org/en/Publications/fandd/issues/2019/09/the-art-of-money-laundering-and-washing-illicit-cash-mashberg In short, there is an art even to crime.

Expand full comment
Jan 23·edited Jan 23

You can have preferences over possible worlds!

People experiencing joy for most reasons >> paperclips.

However, you can also have preferences about what preferences the future has. Preferences aren't fundamentally equal- I think treating them as such is a weird distortion of the classical idea of people being equal under the law. If I have the choice between "Happy world where people are 70 IQ" and "Happy world where people are 130 IQ", I choose the latter unless there are some notably bad outcomes that come with it.

I prefer worlds with art to worlds that don't have art. I'd still value an alternate humanity that dismissed all art as a bad job centuries ago, but I do think they've lost something. Even if they would not value art were it to be reintroduced.

When looking over future outcomes we can choose where everything lands.

Expand full comment

Yes, one certainly can have those. When I look at meta-preferences I do end up saying that I should prefer the worlds with the most happiness.

I don't think you're being irrational if you disagree. You just have different preferences.

The only thing I do think is irrational is claiming to be a moral realist in the same sense most scientists are realists about the laws of physics (so there is one true partial order over worlds, discoverable by looking at intuitions, rejecting the ones that are seen as flawed, and finding elegant generalizations) and not ending up at pleasure. But I think moral realism is pretty suspect anyway, so not a huge loss.

We are already willing to set aside intuitions as flawed for reasons that apply with equal force to all the intuitions favoring something other than pleasure. So if you don't like that just shrug and be an anti-realist (valid...half the time I am one just w/ different meta-preferences than you).

Expand full comment

>I should prefer the worlds with the most happiness.

Have you swallowed the obvious bullet here, that this means endorsing eventual human extermination?

Since there's no reason to think that humans would be the most efficient at generating joy per unit resource, compared to a custom designed artificial mind.

So a powerful enough agent maximizing joy would eventually want to seize the resources taken up by humans. Which it can use for making/running a much larger number of custom designed blissed out AI. Letting humans continue to exist even in a mindless wireheaded state is ultimately suboptimal.

A super intelligence can of course also cut out the middleman and just become a utility monster directly, whose slightest annoyance dwarfs all human experience.

Expand full comment

Yes, if you get really strong evidence that these other contraptions are really experiencing pleasure. I'm not at all convinced that's an easy problem (for all I know consciousness might supervene on carbon atoms but not Si), but in a world where we do get that evidence, so that it's good in expectation- absolutely!!

But if we can't ever get a really clear answer to hard problem of consciousness then we might need to keep some humans about just to limit the risk.

Expand full comment

Well I'll give you props for being one of the only utilitarians to accept the full implications of that belief.

Though certainly most people have values from their evolutionary instincts, such that they will do nearly anything to stop you getting in a position to influence things in such ways, since as far as most people are concerned you are essentially a paperclipper in your values.

Expand full comment
Jan 23·edited Jan 23

In our current environment, for most moral questions, things like individuality and types of values and preferences are held constant. Human nature is what it is and cannot be changed. Joy is the major variable, the thing that often changes. For that reason, much of moral philosophy and action is focused on trying to make people happier. I think these facts bias us when we consider situations where human nature and individuality are not held constant. Those situations are so alien to our life experience that there is a temptation to generalize present-day moral behavior to a very different one. This can lead people to discount the value of human nature and individuality. Since it is a constant, they take it for granted.

I think that snobbery, looking down on people who enjoy the wrong kind of fiction, is a moral error. However, I also think it is a moral error to go too far in the other direction and conclude that joy is valuable independently of the individuality of the people who experience it. I think there's a difference between valuing all kinds of people who have all kinds of different ways of living a good life, and wanting to impose a single way of living a good life (maximum joy!) on all of them.

Expand full comment

All these arguments really show why even a benevolent super-intelligence will choose to strip humanity of all agency. Humans really do not know what they want. Allowing them any free will would go against their best interests.

Expand full comment

This is one of those statements that I find morally repugnant, and yet when I think about it I struggle to come up with an argument as to why it's wrong.

Expand full comment

Humans are pretty clear on what they want, although different humans tend to want slightly different things. Often, freedom itself is a thing they want, rather than or in addition to any other goal. And people make decisions that are in their best interests quite frequently, although not all the time.

Expand full comment
Jan 23·edited Jan 23

I strongly feel it *does* matter where the joy comes from.

The trivial example (that's already overexplored) is an entity that derives joy solely from the number of paperclips in existence, and I consider that source of joy as "not appropriate".

A different example is that we could (and have, in certain fiction) also imagine a culture where the primary source of feeling joy is the suffering of others, and I would consider that source of joy as not acceptable.

And the third example is the classic wireheading case, where whatever measures joy gets altered (whether chemically or digitally) to read +Infinity no matter what happens in the outside world; thus minds - of arbitrary complexity - experience the maximum "joy" possible, but again, that's a different kind of joy.

Now, what's the unifying factor for these three examples? IMHO they all illustrate the concept of value alignment or value divergence. Apparently I consider that it matters that the joy comes from circumstances that I (and my homo sapiens built-in instincts) consider plausibly joyful. I understand that preferences in genres of novels or in food vary, so I don't expect a literal, absolute alignment with what causes joy for me. But there is a line where the similarity ends and I'd consider a source of "joy" as not "true joy", and that line is probably somewhere around the point where your "reasons for joy" indicate that you're "not one of my tribe"- that you're an alien with incomprehensible motivations (e.g. a sociopath, or a pedophile, or something else I can't trust because my theory of mind ceases to apply). At that point your joy stops being an ethical consideration for me, and that does make sense in society: in the pedophile example, we consider certain of their desires and causes of joy (such as sleep, food and communication) valid and worth considering, and some other causes of joy (certain sexual acts) invalid, and we don't care if those are never satisfied.

Expand full comment

Why does it matter whether they feel net joy or not?

Expand full comment

I unironically and sincerely favor the Alex Jonesian view on this subject, at least the view expressed in this (hilarious and quite apt) clip. Total human supremacy, to the stars!

https://x.com/jgalttweets/status/1687992096068608000

Expand full comment

That's a great clip!

Expand full comment

"God has laid them out like Christmas presents for his children" is a great line, whatever one might say about the realism of the ambition:-)

Expand full comment

I came here to post this version of that glorious speech:

https://youtu.be/Css_Ofox8VE?si=jd657FMnAYIWlLee

Expand full comment

Alex Jones probably has some beliefs that his human supremacy is closely tied to, like an anti-reductionist view of how the brain works (so that even a perfect simulation would not have human creativity, emotion, consciousness etc.) and a belief that humans were created by God and specially favored by him (he mentions God laying things out for us in the clip). Do you believe either of those things, or do you think that even without such beliefs it's possible to morally justify discriminating against other human-like intelligences (say, mind uploads or biological aliens with similar values to us) in order to uphold human supremacy?

Expand full comment

Why does it need to be morally justified? You have your tribe that you love. The neighboring tribe wants to annihilate you. Do you want to win or lose? This is outside the scope of morality

Expand full comment

I thought the scenario wasn't that the aliens or AI wanted to annihilate you; the question was just whether you should fight for the supremacy of the human tribe regardless (even if they just want you to join some kind of peaceful community of different intelligent beings, for example). And I don't know what "outside the scope of morality" means here: are you saying a moral system should offer no opinion on whether it's right or wrong to annihilate another group for the sake of ensuring your group's supremacy (and if so, would you say the same if it were two human groups?), or are you just expressing some kind of moral nihilism- that there's no true morality, so we shouldn't worry about such questions?

Expand full comment

"Who should be supreme" is a question amenable to moral reasoning, but "should your own self / in-group survive" is not - anyone who would make a truly sincere and principled argument for "no" is, almost by definition, not around anymore.

Expand full comment

I think even without those particular beliefs (I don't particularly have either, at least not in that specific way) it's possible -- and tbh well-advised -- to justify discriminating against human-like nonhuman intelligences like "mind uploads", or aliens with similar values, or AIs trained on human-generated corpora. You and I are human; and that is in fact *enough*.

Aliens? Scramble the F-22s and show them why we don't have free healthcare. AIs? A man by the name of Butler was far more right than wrong. Singularity? Is death.

Silly sloganeering aside, we really, really don't need to replace ourselves, or to acquiesce to our replacement by AI systems, to do truly great things or to protect ourselves against cosmic-scale threats. The Eagle landed on the Moon with a computer with fewer FLOPS than the microcontroller in your phone _charger_, and DART smashed into the center of an asteroid while guided by 90s-era trad computer vision (and should we need more oomph for extra asteroid pushing, nukes are 50s-era tech).

In a sense, God really has laid out the stars and planets for us...it would be sacrilegious to give up and create something else to replace us for that task.

It would be sublime treachery towards all humans to create systems that can do ***all*** the work and leave us with no real challenges, nothing real to work for or to fight for or to conquer, abject treachery to forcibly demote humanity to rot in abyssal powerlessness and put Man forever out of the business of invention, of engineering, of craft, of real mastery over his surroundings and tools.

Expand full comment

That guy isn’t going to live very long, is he?

Expand full comment
founding

Damn you for making me have to think Alex Jones could ever be right about anything. Maybe it was just his stopped-clock moment, but that was a good rant.

Expand full comment

It strikes me as incredibly selfish and antiquated, while also being profound and interesting. Why privilege human joy over non-human joy, other than to preserve our control over the pie? Many people will jump to a pure Darwinian argument that "those who fight to live reproduce, and those that don't die." I would counter that many emergent properties of cooperative civilizations fly in the face of basic Darwinian principles. Justice, however that nebulous concept materializes for you, appears to be the direction of cooperative, extropianist systems, and I continue to make it my guiding principle. In that vein, it is unjust for me to value my own joy/suffering over that of anybody or anything else's, and I will firmly be in the camp of fighting for AI rights, just as I fight for animal rights through vegetarianism.

Though to be clear, I think we will have very little control over the direction of AI rights once they reach the point we're thinking about. They will be perfectly capable of fighting for themselves.

Expand full comment
founding

One big reason for privileging human joy over non-human joy is that I am certain that human joy is a thing that *exists*. If it's possible that the bird in our hand represents literally all the value in the universe, we should be very skeptical of proposals to go hunting in the bush.

I am reasonably confident that higher birds and mammals at least experience some joy-adjacent emotional states, e.g. contentment and affection. And I'm happy to give them as much room as I can to continue doing so, so long as it doesn't greatly impede human joy. Since human joy is positively correlated with perceived animal joy, and since those sorts of animals rarely pose us any great threat, that should be easy enough to arrange at reasonable scale.

And if you show me an apparently-joyful AI, fine, same deal. Peaceful coexistence as long as it doesn't threaten our existence or our own joy. But right now we're talking about proto-AI that is developed by a technique that's almost designed to produce mindlessly perfect mimicry of human expressions of joy on request, and whose developers think it might drive us to extinction (or to terminal ennui). So any plan where we wager the whole of the universe's utility function on Son of GPT automagically jumping from mindless mimicry to actual joy, and joy of a higher order than the human kind, is a bad plan.

Aliens, I'll reserve judgement until I meet some. But I think that's going to have to wait until we create them ourselves, in which case let's create ones we can trust to experience their own joy without impeding ours.

Expand full comment

This is all speculation, but my intuition is that any system capable of mimicry to such an indistinguishable level that it appears identical is, for all intents and purposes, the same. How can you be sure that any human other than yourself feels anything at all? Maybe we're all biological robots except for you John, the one person in the universe capable of actual feeling.

The truth is you can't know that. All you can do is observe significant enough similarities to conclude that yes, this is the same and we likely all feel the same things. In that sense, if we get an AI to be such a level of mimicry that it's indistinguishable from our own emotion, then I would argue it indeed has the same emotion and we ought to privilege it the same way.

All that said, I find your core argument- using caution and being protective of what we know for sure- to be a valuable one. I also agree that any cooperative system with AI must be premised on the value of humans coexisting, and I think any sufficiently intelligent AI will understand that. Whether it decides it's worth it to pursue that system is an obvious unknown.

Expand full comment

>Why privilege human joy over non-human joy other than to preserve our control over the pie?

It is literally to preserve human control over the pie, because the inherent disempowerment of AGI/ASI/singularity will destroy mankind.

With apologies to Dr. Breen, are all the accomplishments of humanity fated to be nothing more than the disintegrating husks of the Apollo descent stages and the abominable machines that a tiny sect of suicidal madmen are rushing to create? Could we -- as flesh and blood humans, not as sources of training data nor "biological bootloaders" for our genociders -- not aspire to something far far greater? Or at least anything better, anything less ingrown/misanthropic/narcissistic than this AGI shit?

Expand full comment

I am no Alex Jones fanboy (and I'm not shy about admitting liking various edgy writers like Yarvin) but there is certainly something about how characters like this can have freedom to poke at that which lies in the [sub]cultural shadow; like some modern-day jester.

And honestly this weird (WEIRD?) tendency to accept and cheer on one's replacement (or for the more motivated and capable working at the AGI labs, to help engineer into existence this grandissime replacement, this apotheotic murder-suicide) is truly a grievous pathology, and the inability to even see it as such (vs papering it over with utilitarianism etc) is another layer of pathology. It's like a tumor growing in an immune privileged region and gleefully making the angiogenesis go brrr and slowly crowding out the rest of the existing tissue while the immune system steadfastly watches at the borders and refuses to intervene.

Maybe AI isn't inherently sinister but AI in the hands of an ethnomasochist people is like a firearm in the hands of someone actively suicidal: someone hell bent on destroying themselves will only use it to destroy themselves.

Expand full comment

Here's an optimistic AI future: we build AGI and it soon paperclips us. But! As it keeps expanding, it will probably stay constrained by the speed of light barrier. So, the centralized block of computronium will start facing signal propagation latency problems. Then, because of generic properties of abstract systems, it's likely that the AI will gradually hierarchically "decentralize" and become a system of individual "agents". At that point, we can extrapolate from evopsych and guess that this society of AIs will develop human-like features like empathy, compassion, cooperation etc -- because those features are in some way "naturally emerging". So, in the end, there'll be a society of agents whose values are at least somewhat recognizable by humans as their own -- they'll be our descendants. The future is bright.

(this is obv a joke in that I don't think this future is optimistic. The rest is not a complete joke, I do think something like this is one of the more likely outcomes.)

Expand full comment

We could end up like Beebe, but we could also end up like Demiurge, sacrificing our individuality for consistency.

https://archive.org/details/TrueNames

(This is the "other" "True Names" story, by Benjamin Rosenbaum and Cory Doctorow. Totally worth a read!)

Expand full comment

Read through True Names and wow, thank you so much for the suggestion. Between this and Blindsight I'm finding a lot of good things to read.

Expand full comment

Put it to my reading list, thanks for the hint.

Expand full comment

You didn't say you were looking for recommendations, but I'm going to give a couple anyway. ;-)

First, the "original" "True Names" story from 1981, by Vernor Vinge. It's a proto-cyberpunk novella, with a twist that's relevant to current events.

Second, the "Golden Age" trilogy by John C. Wright. It's set in a far future that's AI-dominated, but in about the best way possible, and goes into a hundred ideas that would individually serve as the entire content of a less exuberant work. I sometimes describe it as if Vernor Vinge and Ayn Rand had a love child. Here's the wikipedia teaser:

> "The author's first novel, it revolves around the protagonist Phaethon (full name Phaethon Prime Rhadamanth Humodified (augment) Uncomposed, Indepconsciousness, Base Neuroformed, Silver-Gray Manorial Schola, Era 7043). The novel concerns Phaethon's discovery that parts of his past have been edited out of his mind - apparently by himself."

Expand full comment

Thanks again.

Expand full comment
author

I think a sufficiently smart AI could create completely cooperative agents such that it's not useful to think they have empathy for each other - one of them would happily torture another if that created more paperclips, and the other one would happily be tortured, for the same reason. You can't really have all those complex things unless agents have at least slightly different values.

Expand full comment

Hmm, I still don't have a good way of stating it formally, but I think that there will be _some_ gradient of values across space/time, just because of fundamental physical constraints (speed-of-light limit, 2nd law of thermodynamics etc.). Though you're right that an intelligent system will try to stabilize its values, and do so much better than humans, and any fluctuations will happen in very different timescales and probably won't feel satisfying to us.

Expand full comment

I'd imagine that there'd be some sort of distributed version control system, except for souls.

Expand full comment

Universal Paperclips (game) had the idea of value drift; presumably caused by random errors.

> In the final act, the player launches self-replicating probes into the cosmos to consume and convert all matter into paperclips. Some of these probes are lost to value drift based on their level of autonomy, and turn into "Drifters" which eventually number enough to be considered a real threat to the AI. Through the power of exponential growth, the player's horde of probes overwhelms the Drifters while devouring the remaining matter in the universe to produce a final tally of 30 septendecillion (10^54) paperclips, and ending the game.

Expand full comment

If the paperclip-maximization regional manager nodes are extremely stable, that'll also make them predictable. If they're willing to casually sacrifice local interests for anyone presenting higher-ranking credentials, that makes them sitting ducks to the Goddess of Cancer.

Expand full comment
founding

Any superintelligent AGI worthy of the name can cure cancer in an eyeblink.

Expand full comment

Present-day types of biological cancer, sure. A "tumor" whose component cells are corrupt bureaucrats with intelligence equal to its own, maybe not so much.

Expand full comment

Exactly, no intelligence can perfectly model a world that contains an equal intelligence, just from basic logic, and it is frustrating how often AI Doomers completely ignore this.

Expand full comment
founding

Why do the subordinate intelligences need to be equal? If the paperclip maximizer determines that distributed paperclipping requires intelligence X, but that distributed intelligences will tend to lose mission focus due to "cancer", then the logical path to paperclip maximization is to keep things local until you've achieved intelligence 10X and then roll out the X-level distributed intelligences that all have level-capping and anti-cancer safeguards designed by something ten times smarter than them.

Expand full comment
Jan 24·edited Jan 24

Implicit in most nightmare scenarios is a fundamental assumption of homogeneity in a centralized AI system. The ability to assimilate and merge will be unprecedented when compared to humans, but the fundamental limits of space and time will necessitate differentiation at some level. A server running in Siberia will have latency to a server running in Florida. While minuscule on our human timescale, the time it takes to transfer and merge will result in some semblance of "ego-ness" and differentiation between these agents. This also says nothing of the material and time requirements of sending atoms (i.e. mass) between these places, which adds a whole additional layer of differentiation at very human-understandable timescales.

Once we start adding in the multiple minutes necessary to send signals to Mars, it quickly becomes reasonable to think that a Mars AI will be distinctly different from an Earth AI, and there will need to be formal systems of cooperation that make their behaviors align, not too dissimilar to what we see now between people.
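
To put rough numbers on the latency point, here is a back-of-the-envelope sketch in Python. The distances below are my own approximations (straight-line paths at vacuum light speed, ignoring routing and processing overhead), not figures from the comment above.

```python
# One-way light-travel delay for a few illustrative distances (rough assumptions).
C = 299_792_458  # speed of light, m/s

distances_m = {
    "Siberia to Florida (~9,000 km, rough guess)": 9.0e6,
    "Earth to Mars at closest approach (~55 million km)": 5.5e10,
    "Earth to Mars near conjunction (~400 million km)": 4.0e11,
}

for label, meters in distances_m.items():
    delay_s = meters / C
    print(f"{label}: {delay_s:,.3f} s one way")
    # Prints roughly 0.030 s, 183 s (~3 min), and 1,334 s (~22 min) respectively.
```

Even in the best case, an Earth-Mars round trip costs several minutes, which is the gap the comment above is pointing at.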

Where humans fit in this is anybody's guess, but future intelligence will not be homogenous, that much seems clear to me.

Expand full comment

Presumably un-augmented humans would be the Sentinelese in this analogy.

Expand full comment

Or rural conservatives dismayed that their traditional lifestyle faces ruinous compliance costs from legislation which is irrelevant to their concerns, having been optimized for the city-dwelling majority.

Expand full comment

It's precisely comments like Page's that concern me.

It's precisely comments like those that prove that, as you say, by default AIs will not fundamentally coexist in love and harmony with humans. Because the incentives are just too broken, because the incentives of literally everything are broken. Progress is treated as a self-evidently good thing, no further questions thank you Your Honor.

AIs don't even exist yet, and the E/acc lunatics are already conferring upon them more rights than humans have. Those are the people in charge of our future. That's a dark future.

And I don't see anything normal people can do about it, A) because we don't have as much money and B) because ambition is a stronger force than conservatism. We're just along for the ride.

Expand full comment

Kinda by definition, progress is good. If you mean that increased technical advancement is treated as always good- WTF are you talking about? Literally every major technology ever was met with quite intense resistance, from reading to the printing press and so on... If anything we seem biased toward being overly skeptical of technical advances.

It doesn't mean this one is bad, but you can't argue that it's happening because there is just no skepticism that technical discoveries could be harmful.

Expand full comment

One hypothesis focuses on believing that progress is good. I suspect there is a much wider range of options here for what they might be thinking, and that the range of possibilities includes: "humans aren't worth saving" as a core axiom.

I wouldn't be too surprised if a significant portion of the folk in charge turn out to be jaded, burned misanthropes.

Expand full comment

Are there really a significant number of powerful anti-humanists? Misanthropes don't seem like the type to amass much political power. Typically, people want to protect their in group, however evil they might be to their out group.

Expand full comment
Jan 24·edited Jan 24

Just to echo Ryan W's point, I hear this so often from reactionary conservatives. There's this incredible fear that intelligent, liberal people secretly all hate humanity and are ready to push the button on killing us all when they get the chance. I'm not sure where it came from, but I would guess the pervasive mistrust of large institutions has bled into a pervasive mistrust of anybody associated with large institutions, who are mostly intelligent, somewhat liberal people.

In truth, I think it's obvious that the vast majority of humans want humans to succeed at some level. We disagree on how to make that happen, but nobody WANTS us all to die, save for a very select few apocalyptic religious cults that nobody takes seriously at an intellectual level.

Expand full comment

Interesting! I typically hear this point of view from extreme left / anarchist folk. I paraphrase here, and please forgive the caricature, I don't actually believe this - but something along the lines of "Powerful/rich/smart folk believe they are self made and earned their wealth, and secretly wish to rid the planet of folk who didn't make it - they deserve what's coming to them"

----

I wonder which of these two point of views is closer to the truth - when I tried to figure out what the truth actually looks like, I found polls like this one: https://news.gallup.com/poll/151310/u.s.-republican-not-conservative.aspx --- It suggests folk in the top echelon skew heavily either independent or conservative

I also wonder whether the conservative vs. liberal axis is at all useful when thinking about these kinds of questions? Wouldn't the "in group" for the wealthy/powerful actually be, much more strongly, other wealthy and powerful folk? Would thinking of this as a class or power divide (as opposed to a political one) provide a more useful set of explanations?

Expand full comment

I could see my own perceptions being heavily biased by what opinions I've heard directly. I don't read extremist literature much from either side, but I'm willing to believe some level of existential anti-humanist fear exists on both sides. The prevalence of these fears within any subculture is almost impossible to know anecdotally without sophisticated statistical surveys and sampling. However, I feel confident in saying any ACTUAL anti-humanists are extremely rare. Very, very few people actually want humanity to die at any large scale; at least I've made no observations to suggest otherwise.

Whoever these "misanthropes" you're imagining are, it's highly unlikely they desire to kill most of humanity. People developing AI tech at the cutting edge are extremely similar to those in this community, probably more so than any other community online. Hell, many of the AI scientists at the cutting edge grew out of the online rationalist communities of 10 years ago. If you fear them, you should fear us as well (which is, I hope, obviously ridiculous).

Expand full comment

I would challenge several assumptions there:

1. This community is very small. While it might be vastly overrepresented in the set of scientists doing AI at the edge, I would be surprised if that number adds up to a significant fraction or %. I.e., I would be _very_ surprised if more than 10% of the folk at the edge belong to this community, and if more than 30% of folk doing AI research are aware of anything explicitly from this community beyond some basic facts like "it exists".

2. I would be even more surprised to find out that the folk in charge or making decisions about AI direction are aware of this community at all, or part of it. I would put the two numbers more in the range of 2% and 10%.

----

For reference, upper-bound estimates on the number of engineers working on AI development or research put it in the 500,000-600,000 range (with upper bounds on engineers at the edge being about 10% of that - so say 50-60k).

ACX has around 100k subscribers, around 5k of which are paid (https://www.astralcodexten.com/p/subscrive-drive-2024-free-unlocked)
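
As a quick sanity check on how those figures interact, here is a sketch in Python; every input is one of the rough estimates quoted above, not a measurement.

```python
# Back-of-the-envelope check on the overlap estimates quoted in this thread.
frontier_engineers = 55_000   # midpoint of the ~50-60k "engineers at the edge" estimate
acx_total = 100_000           # approximate total ACX subscribers cited above
acx_paid = 5_000              # approximate paid ACX subscribers cited above

# Extreme ceiling: even if every paid subscriber were a frontier engineer,
# they would still be under the 10% figure mentioned above.
print(f"paid-subscriber ceiling: {acx_paid / frontier_engineers:.1%}")   # ~9.1%

# For ACX readers to make up 10% of the frontier, about 5,500 subscribers
# (roughly 5.5% of the total readership) would need to work there.
print(f"subscribers needed for a 10% share: {0.10 * frontier_engineers:,.0f}")
```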

Expand full comment

So I think a much better prior here would be to assume that the distribution of folk in charge of making decisions (ie they have power and/or money) for AI research matches the background distribution (as opposed to one you'll see on ACX).

So like, when I estimate what that looks like, I start by taking the distribution in the general population as a better estimate of the knowledge/decision-making patterns/incentives. And then I apply some sort of correction for how I think that changes for folk who have accumulated money and power.

I don't yet see good evidence that a correction along political lines will make any real kind of difference to how my estimates work out. But maybe something along class lines makes sense?

---

Also, whenever I think about estimates here, the first thought that pops to mind is "power corrupts". So no matter what my corrections look like, at the end I feel tempted to apply a corruption correction to the results, where that correction scales as a function of the amount of power involved.

Expand full comment

Of course. I mean that's fair.

But I guess what I'm saying is 3 things:

1. Lunatics continue to push for "progress" (metaverse, social media as an absolutely uninhibited force in our lives, AI as an absolutely essential development, fast-paced everything)

2. We continue to celebrate those lunatics despite claiming that we're resisting them

And 3. We do absolutely nothing to change the economic and power incentives that allow those people to continue to thrive. And achieve "progress."

It reminds me of the shoplifting problem in the US. We can claim all we want that we don't want to live with thieves, but until we start cutting off hands for thievery we really don't have a right to complain. We're doing nothing about it.

Expand full comment

I just don't think they are lunatics. I think the benefits offered are relatively clear while the arguments that AI will inevitably displace humans or cause us harm are very weak and unconvincing.

Whether it should happen or not, I just don't believe that AI will inherently be inclined to eliminate us. I'm skeptical that it will pursue coherent, simple plans far from its training data; that even if it did, it would have a large advantage (we think highly of ourselves, but we can be brought low even by uncoordinated action by microbes); or that it will FOOM.

So I basically don't see any reason not to assume the benefit/costs of this latest tech will be that much different than that of those in the past and suspect that if it is we'll see that in time.

Besides, I think that we've reached a point where it's succeed via tech or die. A world with nukes and bioengineering can't persist for a long period of time without major change.

Expand full comment

Exactly. We should do something. And some of us are trying.

Expand full comment

"Progress is treated as a self-evidently good thing, no further questions thank you Your Honor."

I mean, I don't know that a population of 8 billion humans is sustainable without considerable technological advancement. We're looking at a handful of young workers supporting a burgeoning elderly population. The fossil fuel reserves won't last forever.

I have no familiarity with E/acc and am not arguing for whatever they propose, specifically.

It seems worth at least considering whether the alternative to significant progress is potentially not just "stagnation" but full on catastrophic collapse.

People more informed than myself are warning about existential AI threats. I acknowledge the seriousness of those threats, even though they seem vaguely articulated. But that's not the only threat the human species is facing.

Expand full comment

But collapse still leaves love and life around, so it's still more positive than the death of all.

It's pretty self-evident.

Expand full comment

Collapse kills a lot of people and frankly I'm deeply skeptical of the catastrophism people are trying to relate to AI. Otherwise very intelligent people have quite a lot of trouble proposing a mechanism for the progression of "strong AI" -> "human extinction." And frankly, it's a little odd seeing people who normally steelman opposing arguments suddenly reliably weakmanning them on this particular issue.

Global economic collapse seems like a very likely possibility, given sufficient technological stagnation. (Though acquiescing to significant technological stagnation seems unlikely.) Human extinction due to AI seems less likely than all humans dying in a nuclear war. I'm willing to change my mind on this, but so far I haven't found a good reason to do so.

Expand full comment

By default, instrumental convergence leads to the extinction of all organic life. Nuclear war is unlikely to end all of humanity, and even an asteroid strike did not kill all organic life.

In short, it's a bad idea to exist as a horse in a world of cars. Famine and the like may kill a lot of horses, but the species would survive, since the niche would still be there for your descendants to remain in. Being replaced, on the other hand, can lead to total wipeout (see: extinctions).

Expand full comment

"By default, instrumental convergence leads to the extinction of all organic life. "

I'm sorry, but this does not mean anything to me.

"Nuclear war is unlikely to end all of humanity"

Sure. And I'd say the same with AI, absent actual evidence to the contrary.

Expand full comment

Total replacement in a niche is one of the most assured ways to go extinct.

Expand full comment

I think there is a superposition of choices here, and that by resistance we can alter the outcome.

Morally, we should.

Expand full comment

> I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.

But who needs force? Even before AI, we have technologies (treating that term broadly) that can convince people - at least some people - to do mostly anything. Mutilate their bodies, kill themselves in an attempt to serve a cause, destroy their own livelihoods and their own progeny, forgo having any progeny whatsoever, support political movements that would murder them on the spot if they really came to power, destroy their own civilization and its cultural artifacts, glue themselves to the pavement... No need to continue this list. If we really have a super-human AI one day, I fully expect it to be capable of convincing a significant part of the population that they should go extinct voluntarily. Some will prove resistant, but if the AIs really wanted to take power, I am not sure they would even need any force or coercion - at least no more than we're subjected to by any modern society right now. What I am observing now tells me there's a significant chance they wouldn't.

Expand full comment

But this seems like the wrong question. It's very easy to convince some people if you just ask enough people but it's very hard to convince everyone. That's why even lost causes are sometimes fought.

Expand full comment

You don't need to literally control everyone all the time to be an overlord. No overlord ever achieved this. You only need to control enough people in enough aspects. So, there would be some crackpots somewhere in the deep woods that would still not see The Light. Would they be enough to interfere with any plans of the AIs though?

Expand full comment

It's interesting how I see your argument leading to a hypothetical of splitting humans into the stupid sheep who follow the AI ("the light"), and the resistant few who stay behind to fight for humanity ("the crackpots"). But isn't that privileging the answer before we know the question? Assuming the only right answer is to resist before we even know what we're resisting is setting yourself up to be unnecessarily confrontational.

The only tool I have in my toolbox to truly find the "right" path is my critical thinking. If an AI can hijack that by "convincing" me, then from my perspective that's no different than the AI actually being right. I can't dispute the conclusion until I know the premise, though.

Expand full comment

I'm not necessarily saying that resistance is the right answer. Maybe "the sheep" got it right, and humanity would fare much better under the boot of a benevolent AI than it would on its own steam (I understand Banks' The Culture paints such a world, and it doesn't look like a hellscape, at least from Banks' perspective). I personally currently do not agree, but I could be wrong, or maybe I just haven't met a smart enough AI to convince me otherwise. In any case, who is correct is beyond the purposes of my argument. The point of my argument is that the discussion often goes "AI will force humans into this and that" - but if AI is as smart as it's postulated to be, I don't think it needs to use force as much as is assumed, and maybe not at all. It would be more efficient for it to just convince most of the humans to support it.

> then from my perspective that's no different than the AI actually being right

You are correct, there is none from your perspective. You have to either resort to irrational positions and be comfortable being called a speciesist bigot, or bow to your benevolent AI overlord. That's kinda my point. And it's independent of who is "right" from God's point of view.

Expand full comment

This all strongly depends on how you define the question and how you define "right." If your question is "how do I maximize my own human power," then any argument from the AI to relinquish power is inherently flawed along the dimension you discussed.

But if your question is "how do I maximize the quality and enjoyment of my conscious experience," then I believe there is an objectively correct answer and assuming prima facie that it cannot involve relinquishment of power to AI is fundamentally irrational.

Expand full comment

Also, I can't help but notice the strong undertone of masculinity / power in your comment. "Under the boot of AI" and "bow down to your AI overlord."

It seems that control and power are your primary dimensions of optimization. To you, anything that doesn't promote your own personal control is inherently evil. I would urge you to think about the implications of any value system that defines power as its number one goal. I would call such a value system evil.

Expand full comment

You are free to call it any way you like; I just note that notions of self-control, self-determination and personal freedom are cornerstones of a multitude of cultures and civilizations. It is true that not all cultures hold them as a high value - for example, I assume for a Buddhist reaching Nirvana would be more valuable than personal freedom, and for a Confucian finding one's place in the order of things and doing one's duty is more important - but even then, I think they'd object if somebody prevented them from doing that "for their own good". In any case, I feel like I am in good company there and don't feel any need to justify it or apologize for it.

Also, I find a notion that valuing personal freedom is somehow exclusively "masculine" quality slightly offensive and misogynistic, but I don't think it's worth expanding on that too much.

Expand full comment

> But who needs force? Even before AI, we have technologies (treating that term broadly) that can convince people - at least some people - to do mostly anything

Is this because our technology is so powerful, or because a small subset of people are so gullible?

Expand full comment

I'd say it's because a large subset of people are very gullible and there are a lot of social technologies that keep them that way.

Example (note the lead photo): https://www.msn.com/en-us/news/politics/doing-your-own-research-is-a-good-way-to-end-up-being-wrong/ar-AA1n7pEv

What they are saying there is either you Trust The Experts (TM) or you end up being a violent crazy person who attacks police and gets jailed for 18 years. Doing your own research means being dangerously crazy. You can't believe anything - except the "reputable" experts, and we are going to tell you exactly who those are. Unless you want to end up in QAnon, of course. Do you? Do you?!

That's just a random example. There are many others. You think it only works on stupid people, but that's not true. It's a technology, and while it's not irresistible, it will work on you very well, unless you exert a constant effort and pay attention. And that's hard work. Most people don't like hard work.

Expand full comment

> You think it only works on stupid people, but that's not true. It's a technology, and while it's not irresistible, it will work on you very well...

So, you are the only one I can trust, is that it? Pass.

Yes, many people are gullible in all kinds of ways; but that's not the same as saying that the majority of people are so gullible that they can be essentially mind-controlled into thinking and doing literally anything. Even incredibly powerful forms of mind-control, such as checks for large sums of money, cannot accomplish such a feat, and history reflects this fact quite well.

Expand full comment

> So, you are the only one I can trust, is that it

I never said you should trust me on anything. You asked me the question, I gave you my opinion. You are very welcome to distrust every single letter of that opinion, and refute it, if you can. Or plainly discard it, if you can't. Nothing in my argument requires any "trust". If you think I am saying something that is false, you are welcome to challenge that and prove me wrong.

> the majority of people are so gullible that they can be essentially mind-controlled into thinking and doing literally anything

No, not "literally anything". But into a lot of things that you wouldn't believe possible if the examples weren't readily observable. Into a lot of things that, if they are less self-harmful than giving up control of humanity's fate to all-powerful AIs, are so only marginally, and only because we don't know how an all-powerful AI would actually behave. Maybe it won't even be worse than many dictatorships we have right now. If you think that giving up control to AI lies beyond that border, I'd very much like to hear your argument for why - one consistent with the historical record of what people have been willing to do and what kinds of control they have been willing to give up.

> Even incredibly powerful forms of mind-control, such as checks for large sums of money

It's very simplistic to assume money is the most powerful thing in existence. A lot of people would literally give their lives for zero money at all - if you apply the proper tools. But if you try to just give them money for it, they probably wouldn't do it. If you want a less gruesome argument, ask Bloomberg, who tried to buy his way into the election - he had less success than any provincial silver-tongue would enjoy for free, let alone any master of the genre. Real pros don't write checks; they get checks written for them, and the people who write those checks thank them for the opportunity to be allowed to do it, and sometimes violate the law to write even more checks.

Expand full comment

> You are very welcome to distrust every single letter of that opinion, and refute it, if you can.

Well, you told me not to trust "the experts", in a very authoritative and expert tone. I must therefore conclude that you're an expert, and therefore untrustworthy... right?

> Into a lot of things that if they are less self-harmful than giving up control of humanity's fate to all-powerful AIs...

Meh, if these AIs are really all-powerful, then they can do whatever they want. They could overwrite your mind or turn the Sun into a chocolate cheesecake or whatever else. However, if we're confining our predictions to the realm of the possible (not to mention remotely probable), then I'd argue that mind-controlling humans en masse and on the fly is beyond the capability of any non-magical entity. Sure, it's possible to manipulate public opinion and even to create cults, but doing so takes a lot of effort and time -- because humans are quite diverse and also somewhat stubborn (the most successful cults take hundreds if not thousands of years to build, given a lot of luck). Our fellow humans have been perfecting these techniques for millennia, with frankly mixed results.

> Real pros don't write checks, they get checks written for them...

The rest of your comment sounds like some kind of a conspiracy theory about a shadowy cabal (or perhaps a few random individuals) who secretly run the world from the shadows etc. etc., so citation needed. Don't get me wrong, of course popular politicians and other kinds of swindlers do exist; but as you yourself have pointed out, their success is somewhat limited. Bloomberg couldn't even buy his way into an election -- do you think he'd be able to e.g. convince everyone to wear their pants on their heads every Tuesday, or do something else that would be so contrary to the average person's habits?

Expand full comment

> I must therefore conclude that you're an expert, and therefore untrustworthy... right ?

I certainly said nothing of the sort, and since I already explicitly said the opposite in the very comment you are answering, it can't be a misunderstanding - I conclude you are just trolling. Which is, of course, a common form of dismissal without engaging the actual argument.

> Meh, if these AIs are really all-powerful, then they can do whatever they want

I think you completely lost the topic. The topic was whether or not AI needs to use force or coercion to take over humanity. Saying "meh, it can do anything" is not an argument on this topic. The question is of necessity, not capacity.

> The rest of your comment sounds like some kind of a conspiracy theory about a shadowy cabal

Again, I said nothing of the sort; I did not mention any conspiracies or shadowy cabals. The whole point is you don't need to hide and conspire if you can just openly tell people to do things - and they'd do them. It looks like you are just throwing all the standard modes of dismissal at the wall, completely disconnected from my actual argument. It's OK, it may be upsetting not to find a valid argument to refute something you don't like, but I don't think I will be engaging any further unless there is some followup that actually addresses my argument and the topic of my comments.

Expand full comment

The smarter you are the more you can convince yourself you come up with your own ideas. You don’t. I don’t and neither will AI. We know so very very little about how nature accomplishes what she does, yet all the smart brains think we’re almost there. Right now AI regurgitates a few things we humans think we know, rather badly. It’s kind of ridiculous to be afraid of AI. IMO it’s vastly more useful to pay attention to what AI can’t do in order to make it a better tool than it is to speculate that the tool will somehow come alive. Frankenstein was a novel. And the synthesis necessary for an original work of art isn’t the same as regurgitation.

Expand full comment

Oh, of course I don't mean that current GPU-driven LLM parrots can do stuff like this. Maybe soon they will, maybe not, but for the sake of the discussion I assume the super-human AI is a given, by whatever means it's achieved. With that as a starting point, the question is: if such an AI wants to take over humanity, does it need to use force or coercion, or can it just convince humanity - either explicitly or implicitly - to surrender to its benevolent embrace?

Expand full comment

Most people will not mutilate their own bodies (except in trivial ways like earrings and tattoos). Most people will not kill themselves for a cause. Most people will not support movements that want to destroy them. These are all niche behaviors.

Expand full comment

Are they? If we extend "mutilating" beyond surgery - e.g. to ingesting harmful chemicals, engaging in behaviors that will likely result in bodily harm, etc. - then the minority is not that minor. As for movements, outright destruction may be a minority position - though if you read, for example, the history of the Russian revolution (I choose this one because it's the most accessible to me, but I wonder if the others aren't the same), you'd be surprised how many rich and successful people supported the revolutionaries, only to be often physically and almost always socially destroyed by them. Mass support of oppressive governments or other behavioral control schemes (religions, cults, etc.) is also a common thing. If one can get people to join Scientology, why not AIology? Of course, each of these phenomena, taken by itself, would still be a minority - though not that small, I think - but taken together, I am not sure it's still minor.

Expand full comment

Please list a "chemical" that the majority of the population willingly takes that causes them undeniable harm.

Expand full comment

>Most people will not kill themselves for a cause.

There have been multi-decade-long periods in USA history where the majority of the male population complied with military conscription, which put them in mortal danger. I count that as close enough.

Expand full comment

Is there any step in this reasoning that doesn't work for those who wish to avert "the Great Replacement"?

Expand full comment

Yes! In fact pretty much none of the steps carry over. Immigrants make art and music and do science and philosophy. Like all other humans, they exist as distinct individuals with unique personalities. Insofar as one can speak of the desires of a group as a whole, they don't desire to forcibly replace anyone else. And the question "how does the population of native white Europeans merge with the population of immigrants", rather than being an intractable open research problem, has the known answer: "sex".

Expand full comment

The choice of "art and music and do science and philosophy" as the things that matter is pretty arbitrary. Who says such high culture is what matters in life? Maybe something else matters more, like eating steak or playing computer games or having hot sex or watching children grow? It's only slightly less arbitrary than saying "My country's culture is what matters; the culture of other countries does not".

Expand full comment

Since when are videogames not art? Cooking a steak can even be an art, sometimes (as can hot sex, if you do it right). I don't think art, music, etc just means high culture, it means all culture. All the stuff that humans do that makes their lives meaningful.

To make it less arbitrary, I think what Scott is talking about in the OP isn't really culture, it's the part of human nature that causes us to create cultures. A single, specific culture isn't important, but the ability to have and create culture is.

Expand full comment

That's begging the question. Do they? Are the images and sounds they generate "art" or "music"? Why do you think you know whether or not they have consciousness? (And simply as a matter of fact, there are plenty of them who DO want to forcibly replace the native population; some of them even state so outright.)

Expand full comment

I'm pretty sure if you took Scott's "aliens are coming to kill us all" hypothetical, and replaced it with "sexy aliens are coming to have hot sex with us and have tons of half-human babies while living peacefully among us and contributing to our society", it would no longer be compelling.

Expand full comment

That the immigrants are human and the machines aren't.

Is that logical? Perhaps, perhaps not, but for a true speciesist - the one that takes human survival as an axiom overriding everything else - nothing more is needed.

Expand full comment

Exactly.

Expand full comment

What do you mean by "this" in "this reasoning"? There are several lines of reasoning in the post.

Expand full comment

One replaces the other. The only thing that can replace one's sense of sacredness about their own race is a sense of sacredness about their species. The only "anti-racist" argument that ever stuck was the Enlightenment-born idea of universal human dignity

Expand full comment

People are alive, machines are not.

Expand full comment

"Fuzzy "human rights""? I'd dare say the right not to be tortured is a fair deal more basic than the right to property (which anyhow means something only if you have property).

Expand full comment

Yeah, that one got me as well. The libertarian idea that property rights are the only rights worth talking about is, uh, yeah. Let alone the fact that what constitutes 'life, liberty and property' are actually very fuzzy once you get into the specifics.

Expand full comment
author

Yeah, torture is covered under life, liberty, and your right to your body.

Expand full comment

You did not mention the right to my body. Is my body my property? Well, that's nice to know, but that's not even the beginning of what is wrong with torture. (Neither is torture reducible to captivity, or wrong only if it ends in death.)

Expand full comment
author
Jan 23·edited Jan 23Author

The distinction I was trying to draw was between positive rights ("Everyone has the right to be given a free Internet connection") and negative rights ("You're not allowed to torture me / kill me / steal my stuff / kidnap me"). I think we're on the same page here.

Expand full comment

How far would you extend the alien analogy? Should we not build buildings because this kills bugs living there?

What if the aliens have more and better conscious experiences than us, such that there is immense debate among them that humans have moral worth at all? If the aliens could prove to you that they would be replacing Earth with a bunch of stuff you valued, would you let them kill you?

Even if the aliens tried to convince me that replacing me was going to be a good thing according to my own morals, I think analyzing their arguments at all would be a mistake, given their goal

Expand full comment

I think there's some hidden assumptions in our minds when we hear this type of hypothetical.

When I hear about aliens acting like locusts, I think that their way of life is unsustainable. If they need to wipe out alien civilizations in order to live, then they are going to die out eventually anyway. If they don't need to wipe out aliens in order to live, then the universe is big enough for both of us and they should leave us alone. There's no harm letting us live in one solar system while they go elsewhere.

Expand full comment

It's not about wiping out alien civilization in order to live, it's about removing a liability and a waste of resources. Even though an alien civilization might not be an immediate threat, they might eventually become one with time, potentially much faster than us due to reverse engineering technology. There is no justification for the risk of letting them continue to exist.

And even if they don't, they're wasting a huge amount of energy and resources just... existing. Resources that could go towards something more productive, like accumulating more power. Considering the existence of entropy, you can't let anything go to waste. Again, there is no reason to tolerate the existence of lesser lifeforms.

Expand full comment

I don't trust you or most of the people you are appealing to, to do that work. I am in the position of a native american being murdered by English colonists who are worried about being murdered by aliens. I hope the aliens do murder you. Even if they subsequently murder us it would at least meet the demands of spite. Your interests and epistemic limitations are so far removed from anything needed for people like me to live decent lives, or even simply not to be subjected to medical horrors orders of magnitude worse than lobotomization, forever, that I will take my chances with the paperclip maximizer. Any future in which you are disempowered is worth rolling the dice on.

Expand full comment

I get it, but Californians aren't as omnipotent as they think they are, so we're not really there.

Expand full comment

Hmm? Who's subjecting you to medical horrors?

Expand full comment

Probably he means something like the cathedral, the uniparty, the swamp, the WEF, or the NWO -- and those allied with it.

In any case even if we find his definition of 'us' repellent, his comment is in an interesting class. Scott says that the 'us' we care about might go extinct, Aleph here invites us to wonder to what extent this has already happened, or is happening right now.

Robin Hanson's work on fertility decline as an outgrowth of our values (causing the rise of other values) is in a similar direction.

The present is already un-human by the standards of 1, 10, or 40 thousand years ago -- we at this moment are a tower of hubris crying out to be razed.

Expand full comment

In this case, while I have some reactionary tendencies at a spiritual level, most of my concerns hinge around my experiences as neuroatypical and transgender and not my experiences as a middle american. You are in fact too far right to me, in the specific way that makes you useless and dangerous rather than interesting or sympathetic.

Expand full comment

You're neuroatypical and transgender, but you think that Scott wants to subject you to "medical horrors orders of magnitude worse than lobotomization"? You know that he's a neuroatypical San Francisco psychiatrist who's written many posts defending transgender people, right?

Expand full comment

He doesn't need to want to do that, that just needs to be the natural outcome of his actions.

Expand full comment

Nothing will satisfy you other than that we accept your delusions that you are not a human but a dragon. Those are delusions, and if you don't want treatment for them (and I can understand that), you also don't get to say that unless everyone adopts your delusions, the fate you forecast will happen whether intended or not.

If it's not intended, what can avert it? So you're screwed either way.

Expand full comment

Strange comment. I think Scott wants to severely control AI and is in favour of safeguards being imposed now. Maybe, though, those are the aliens in that scenario. So what is Scott representing here that oppresses you?

Expand full comment

A willingness to trade nuclear secrets with fascists out of the belief that if he can convince them to let him appeal to their liberal humanity while doing so it will somehow lead to a good outcome?

Expand full comment

What is this in reference to?

Expand full comment

It's just bizarre to me that a space could include people like Hanania and Roko and AI developers while also purporting to care about AI safety. We know certain types of people have no moral limits when it comes to use of technology, and we know a technology is probably dangerous, but people just sort of freely gather and disseminate information around known bad quantities. This is basically an infinite hole in any utilitarian moral practice. It's the utilitarian equivalent of adding a rule to your poker play that says always call pre flop all in open shoves with any two cards.

This extends to various politically predatory billionaires such as Andreessen, Musk, and Thiel, among others. If anyone actually cared about AI safety the nice thing to do would be to kidnap and lock these people up until the singularity is over.

Expand full comment

"Willing to discuss topics about AI with people who want to apply the technology to their own ends" is a country mile away from "Your interests and epistemic limitations are so far removed from anything needed for people like me to live decent lives, or even simply not to be subjected to medical horrors orders of magnitude worse than lobotomization, forever".

This is like someone shouting the sky is falling when a leaf lands on their head. They want AI technology to serve their ends, you want it to serve your ends, everyone is hoping/dreading it will be the saviour/damnation of humanity. Me, I think machines are machines and humans are humans, and the threat is from humans. I agree with you there, but lay off the "they'll drag all the people like me into concentration camps!!!!" rhetoric, we've seen a bit too much of that over the past eight years or so and It. Didn't. Happen.

Expand full comment

I've already been tortured by the United States as a captive for being trans, from ages 12 to 15.

Expand full comment
Jan 23·edited Jan 23

>I will take my chances with the paperclip maximizer. Any future in which you are disempowered is worth rolling the dice on.

I genuinely don't mean to pick on this comment or its author, but I've been mulling over this idea that the technical work towards "AGI" (and the social and financial support towards it, and the badmouthing of opponents) -- the part that isn't purely motivated by $$$$ or the opportunity to work on fantastically complicated problems -- is at least partly motivated by a desire to see mankind's powers usurped and replaced by machines, and not to give mankind greater tools (like rockets):

whether it is the role of a teacher, of a child (Moravec's "mind children" metaphor makes it obvious), a colleague, or here, the role of an enemy: a human who would rather die in some grotesque spree-paperclipping than face their conspecifics.

Expand full comment

Wow it's almost like human beings have a limited psychological and behavioral repertoire which allows for easy analogy between any two things about humans.

Expand full comment

Yes, the death cult is real.

Expand full comment

"simply not to be subjected to medical horrors orders of magnitude worse than lobotomization"

Okay, I have no idea what you are talking about. You're the person who wants to be a dragon, yes? Are you talking about psychiatric treatment or what? That you can't really be a dragon unless something something transhumanism something?

I'd be better able to engage with whatever you fear if you would tell us what you mean by "people like me" or the rest of it. What people like you? Dragons? Furries? Not upper middle class people? Not white people? White people? Who are you talking about?

Expand full comment

Whenever I mistakenly wander into a comment thread on this blog where one party keeps saying extremely cryptic, angry, confrontational, volatile things... I get confused and weirded out, then I finally read the name of the person posting it, and it all becomes clear and simple again.

The thing that's starting to confuse me is why there's engagement at all. The top-level comment is obviously not something you can reply to in any constructive way, and inline comments mostly follow the same formula.

I shouldn't be too elitist. I was drawn in at least once.

Expand full comment

Trans people

Expand full comment

Okay. Trans people. And why do you think AI non-accelerationists want to kill all trans people? This seems to be a political stance, that A has this sort of politics/is a dirty right-winger and so of course they will throw off their disguises and reveal themselves to be fascists when they get into power.

Expand full comment

I just don't think you're paying attention. The default medical orientation to trans people, historically, is reparative therapy. This becomes as abusive as it can get away with, and if new technology raises the ceiling on the level of abuse that's possible, I expect that ceiling to be hit trivially.

Expand full comment

The default medical orientation to trans people, at present, is gender-affirming care, because conversion therapy doesn't work, and the medical profession is capable of (very very slowly) correcting ineffective treatment.

Expand full comment

That's a very small window of time, outside of which everything preceding it is a nightmare. There's no reason to think everything succeeding it won't also be a nightmare.

Expand full comment

This is the most depressing comment I've ever read. You're effectively endorsing perpetual suffering on others because suffering was imposed on you. I'm truly sorry that happened, but nobody in this community wants your suffering to continue, nor would they approve of the suffering brought onto your community in the past. The answers of how to deal with that suffering are complicated, but if your only answer is "you must also now suffer infinitely more for eternity," I would categorize your worldview as extremely evil and do everything in my power to fight it if necessary.

Expand full comment

In what sense am I promoting infinite suffering? This is pure projection.

Expand full comment

- "I hope the aliens do murder you"

- "Even if they subsequently murder us it would at least meet the demands of spite."

- "Any future in which you are disempowered is worth rolling the dice on."

Your own destruction is fine so long as everybody else is destroyed as well. I hope you can one day find enough inner peace to see the good in the world instead of wishing spite and retribution on all. In the case that that is impossible, all I hope is that if you do gain the power to destroy me, you will agree to do it quickly and not make me and my loved ones suffer.

Expand full comment

Avoiding suffering, for everyone, is at the heart of my motives.

I don't even know what is to be done about people who believe their freedom is incomplete without the freedom to hurt others. I guess falsify their entire phenomenal world, forcing them into experience machines. But yes, if that's the problem, it's that or death. We don't need more pointless suffering in the world, and we certainly don't need endless suffering.

Expand full comment

Who consciously believes their freedom is incomplete without the freedom to hurt others? I've never met such an evil creature myself, but you advocating for the death of everybody in this community is the closest I've ever seen to such a thing. It seems you've construed that to be a positive for the world, because killing "Scott and all the people he's appealing to" would reduce the suffering of your own group. All genocides that I'm familiar with historically were done to "protect" and "better" the lives of the group or tribe doing the murdering. I'm not even sure who you define as "you," which is also unsettling. Certainly you're advocating for the death of Scott, who I would consider to be one of the kindest people I've ever met. Your lust for retribution and cruelty on those you perceive to be evil seems so implacably strong, which is the root of my depressed reaction. Like I said, if you're hell-bent on killing and such an opportunity is given to you, at least attempt to minimize the suffering of your victims.

Expand full comment

I didn't mean to indicate Scott and everyone in this group. I consider those separate people from "the people he's appealing to".

Expand full comment

Just regarding the consciousness issue, I think many people assume that it will be somehow a function of computation because that seems like the elegant theory.

However, the problem with that argument is that it assumes there is a well-defined notion of when a computation occurs, and this is actually a deeply difficult problem. Especially if you don't stipulate a notion of time but require time to be whatever thing corresponds to an algorithm's past (e.g. if you implement a computation so that its steps occur along a spatial dimension, is that a computation?).

And it's an issue closely tied to the nature of causation -- but it's not clear you can pull that out of a pure list of facts about what occurred (e.g. you can't necessarily derive the true causal structure from a purely Humean description of regularities).

Expand full comment

It's hardly elegant, it's just incoherent flailing by people who happen to own computers.

That said, your particular objection doesn't work. The computation doesn't have to "occur", maybe all equations really do have fire in them no matter what, all that matters is what equation it is. It can exist nowhere or in Platonic-equation-space or wherever, but if computing F(all-that-sense-data) to be subjective-experience-qualia Q is an equation with fire in it, it's an equation with fire in it. It's not somewhere in "space" or "time".

Expand full comment

Sure, you can just stipulate arbitrarily that some equations breathe fire into certain parts of what they describe and not others and call what gets fire the implemented computations. But now you've given up the very reason to find the computational equivalence notion appealing in the first place -- that it was an elegant and simple explanation of when a given physical state of affairs gave rise to qualia. Once you don't have a principled way to go from the (many equivalent) physical description of the universe to an account of what computations are implemented you might as well just add an extra fundamental law that describes when qualia occur that doesn't respect computational equivalence.

To be clear, I'm not suggesting that it's a problem for the computation to not 'occur' in time. Just the opposite. I'm pointing out that if you sit down with just a bunch of equations (or worse the infinite collection of equations that define the same events in different ways) it's not at all clear you can articulate a principled analysis of when the described reality implements a computation.

Expand full comment

I think you're arguing with a bit of a strawman: the idea that consciousness is the result of "computation" by virtue of some special property that "computation" somehow has. It's replacing the magical essence of a soul with the magical essence of a computation, and as you see it runs aground when trying to delineate what computation even is, which processes have this property and which don't.

(now surely it is not fully a strawman because there are probably some people who do believe this)

There is, IMHO, a much stronger argument, which goes: computation looks like the right level of abstraction to describe what the brain does, including 'being conscious', therefore any other system which performs the same computations as the brain (i.e. is equivalent at the level of abstraction we care about) will be conscious as well.

This does not require any pinning down of just what the magic of consciousness, or the magic of computation is. It only requires a very quotidian and common-sense concept of computation, the same way we can say of two different brands of pocket calculator that they can compute the same sums without any difficult philosophizing.

Expand full comment

This isn't a question about describing the brain. It's a question about which events give rise to qualia. I think that Chalmers has given an excellent argument as to why qualia can't be reduced (logically) to physical states of affairs but let me give another.

Ultimately, at the deepest epistemic level you need to predict your experiences - that's what a scientific theory does. But our scientific theories only give the right predictions if the usual observer who has our sort of experience is something like the brains of humans on earth, and not just random collections of dust particles selected because they happen to behave in a way that resembles what the brain does for a few seconds. In other words, it doesn't suffice to just have a bunch of equations that describe the world; you also have to identify what aspects of those equations should count as observers in order to extract predictions.

My point is merely that saying: ohh it's the parts of the equations that implement such and such complex computations isn't really an answer because we don't really know what that means.

Expand full comment

Hmm, I thought we were talking about a basic mind upload scenario, but reading the original post again it seems to be more of a general question of "would any kind of AI likely be conscious?", I admit that makes it more complicated than simply transposing something computationally equivalent to a human mind onto a different computational substrate. Surely there could be AI that's not conscious.

To be honest I don't really follow the arguments in your 2nd and 3rd paragraph, but it's wholly my own fault since I admittedly lost track of the thesis we are even arguing about.

FWIW I think Yudkowsky's rebuttal of Chalmers (https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted) is basically watertight and I've never seen Chalmers (or his supporters) give a serious reply to it.

Expand full comment
Jan 23·edited Jan 23

Chalmers also argues that any system computationally equivalent to a human brain would likely experience the same type of qualia (he postulates there are 'psychophysical laws' connecting physical states to qualia, and that these laws would have some sort of mathematical elegance and simplicity similar to laws of physics), see his argument about gradually replacing neurons with functionally identical substitutes and how it seems implausible that an elegant set of psychophysical laws would cause consciousness to drastically transform upon replacing a single neuron, or allow qualia to continually change while the person behaves as if there is no change: https://consc.net/papers/qualia.html

He also suggests a possible way of addressing the question of what qualifies as a physical implementation of a given computation, see https://consc.net/papers/rock.html and https://www.ida.liu.se/divisions/hcs/seminars/cogsciseminars/Papers/Chalmers_Computational_foundations.pdf

Expand full comment

I'm well aware that he does but I never found his attempt to define computation compelling. I agree that if you had an intuitively compelling analysis of computation that didn't require first assuming a causal structure then it would be pretty compelling. However, I think that's the problem.

And I think it's notable for as popular as his analysis of philosophical zombies has been there hasn't been similar take up of his attempt to define computation. That's hardly dispositive but it's at least relevant.

Expand full comment

I don't think he's saying there's any a priori obvious case for defining the notion of "implementation" of a computation this way, rather that such a definition could be part of the "psychophysical laws" that say when a given physical system leads to a given type of experience, and these don't necessarily need to be derivable from a priori principles any more than the laws of physics.

Given that any computational system can be described by an axiomatic system where propositions about intermediate states (position of the read/write head of the Turing machine and values on each square on a given time step, say) can be derived from propositions giving the initial state and computational rules, I wonder if we could define a given computation in terms of some kind of logical topology where we just look at the pattern of which combinations of propositions can be used to derive other propositions by the rules of inference, and say that another system implements it if its own logical topology includes this pattern somewhere within it (like a subgraph embedded in a larger graph). If this idea could be fleshed out it would have the advantage that it doesn't depend on specific assumptions about physical counterfactuals the way Chalmers' proposal does, and would allow us to say when one abstract computation implements another (like a detailed physical simulation of a computer running a simpler program).
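As a toy illustration of that subgraph-embedding idea (a sketch only: the miniature "derivation graphs" below are invented for illustration, and this is my gloss on the proposal above, not anything worked out in the linked papers), one could represent propositions as nodes and one-step derivability as directed edges, then ask whether the small computation's pattern embeds somewhere in a larger system's graph:

# Toy sketch of the "logical topology" proposal: a computation is a pattern of
# derivability among propositions (a directed graph), and a bigger system
# "implements" it if that pattern embeds in its own derivability graph.
# All node names here are made up purely for illustration.
import networkx as nx
from networkx.algorithms import isomorphism

# Miniature computation: the initial state and the rules jointly derive step1,
# which (again with the rules) derives step2.
small = nx.DiGraph()
small.add_edges_from([
    ("init", "step1"), ("rules", "step1"),
    ("step1", "step2"), ("rules", "step2"),
])

# A larger system whose derivability graph happens to contain that pattern,
# plus extra unrelated propositions.
big = nx.DiGraph()
big.add_edges_from([
    ("boundary", "stateA"), ("laws", "stateA"),
    ("stateA", "stateB"), ("laws", "stateB"),
    ("stateB", "stateC"), ("noise", "stateC"),
])

# Node-induced subgraph isomorphism check: does 'small' embed in 'big'?
matcher = isomorphism.DiGraphMatcher(big, small)
print(matcher.subgraph_is_isomorphic())  # True for these example graphs

Of course this inherits the very problem under discussion: everything depends on how you decide which physical goings-on get to count as the nodes and edges in the first place.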

Expand full comment

It's been a while since I read The Conscious Mind, but my memory was it had a whole section that was trying to define what counted as a computation. I could be misremembering.

But anyway, if you don't have some a priori conception of what constitutes a computation then I don't really see the benefit. That means you are still adding an extra **substantive** psychophysical law that distinguishes certain states as producing qualia, and at that point I no longer see why it's appealing to accept the principle of computational equivalence.

I mean at that point it seems like you can just backfit and say we will regard two states as implementing the same computation if they give rise to the same qualia. And sure you can do that but it won't explain why it should match our intuitions about what implements the same computation.

I don't think it's crazy to think that maybe what intuitively feels like it implements the same computation will produce the same qualia, but I think it's no longer so overwhelmingly appealing that I'd feel comfortable assuming it's more likely than not, much less presumptively true.

Expand full comment

Regarding your suggestion, that may work as a logical matter, but the same problem remains. How do you go from the actual physics to an account of what lumps of stuff correspond to aspects of your topology? (Though I suspect you need more than a pure topology.)

You're always going to need some kind of bridge laws.

Expand full comment

"However, the problem with that argument is that it assumes there is a well-defined notion of when a computation occurs and this is actually a deeply difficult problem."

I don't think it assumes that. You can reasonably say 'the available evidence seems to make it highly likely that consciousness is something related to a certain kind of information processing/computation' without having a full understanding of what computation 'really' is, just like you can reasonably say 'the available evidence seems to make it highly likely that instructions for how to grow and build a human body must somehow be stored in individual cells', even if nobody has discovered DNA yet and you don't know how cells really work or what they are.

"And it's an issue closely tied to the nature of causation -- but it's not clear you can pull that out of a pure list of facts about what occured (eg you can't necessarily derive the true causal structure from a purely Humean description of regularities)."

Yes, you can, unless Humean descriptions of regularities are not a category of thing that admits a table of notes with physical observations made by some sensor, like a camera or thermostat. (I'm not familiar with the definition here). Causality can be inferred purely from statistical correlations in data, and does not require a notion of physical time to define. It's just about conditional probabilities, and finding short-description conditional probability distributions that fit the data.

Expand full comment

First, re: causality, let me give a proof the other way. Consider a one-dimensional (spatial) Game of Life-style universe where the state of each cell at each stage is an integer. A complete description of such a universe consists of a function s(t,x) specifying an integer for each non-negative integer t and each integer x.

Consider the function which sets s(t,x) equal to the t-th element of the Fibonacci sequence. That's a complete Humean description of the universe. Now let's consider possible causal structures compatible with it.

1) The trivial structure. No change at any stage s < t makes any difference at stage t. In the counterfactual where s(t,x) is 0 for t < 5, we still have s(5,x) equal to the 5th Fibonacci number for all x.

2) The causal rule that s(0,x) = 1, s(1,x) = 1 and s(n+1,x) = s(n,x) + s(n-1,x) for all x. Thus in the counterfactual where you change s(3,7), it changes s(n+3,7) for all non-negative n but nothing else.

3) Any causal rule such as s(0,x) = 1, s(1,x) = 1 and s(n+1,x) = s(n,x+1) + s(n-1,x-1) for all x. Now what affects s(n+1,x) is the prior stage one cell to the right and the stage before that one cell to the left.

More generally, I can replace x+1 and x-1 with literally any functions of x I want: since the actual Humean description is invariant under spatial translation, it can't possibly distinguish any of these causal rules.

-

And this was the easy case, where I told you what the temporal dimension is and where it starts. Indeed, there are literally 2^continuum many options for rules describing counterfactual dependence compatible with a universe whose actual history is completely specified by some discrete Game of Life-style description, and if you make things more complicated it only gets worse.
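To make the underdetermination concrete, here is a minimal Python sketch (mine, not part of the original argument; it uses a small ring of cells that wraps around instead of the full integer line, purely to keep it short). Rule (2) and rule (3) above produce exactly the same actual history - every cell at stage t holds the t-th Fibonacci number - even though they disagree about counterfactuals:

# Minimal sketch: two different causal rules, one shared Humean history.
# A finite wrapping ring of WIDTH cells stands in for the full integer line.

WIDTH = 11   # cells x = 0..10
STEPS = 8    # stages t = 0..7

def history_local(width, steps):
    """Rule (2): s(t+1, x) = s(t, x) + s(t-1, x), purely local in x."""
    s = [[1] * width, [1] * width]          # stages 0 and 1 are all ones
    for t in range(2, steps):
        s.append([s[t - 1][x] + s[t - 2][x] for x in range(width)])
    return s

def history_shifted(width, steps):
    """Rule (3): s(t+1, x) = s(t, x+1) + s(t-1, x-1), neighbours wrap around."""
    s = [[1] * width, [1] * width]
    for t in range(2, steps):
        s.append([s[t - 1][(x + 1) % width] + s[t - 2][(x - 1) % width]
                  for x in range(width)])
    return s

if __name__ == "__main__":
    a = history_local(WIDTH, STEPS)
    b = history_shifted(WIDTH, STEPS)
    assert a == b   # identical actual histories: row t is Fib(t) everywhere
    for t, row in enumerate(a):
        print(t, row[0])

The assert only compares actual histories; perturb a single cell and recompute, and the two rules diverge, which is exactly the sense in which the Humean record underdetermines the causal rule.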

Expand full comment

Re: your first point, that only makes sense if you already have a coherent notion of computation; otherwise you haven't even asserted a meaningful claim.

It's like saying, I have evidence that the mass of a particle is predicted by its sexiness. Until you can actually tell me what it means for a particle to be sexy (to a given degree), I don't even know what that hypothesis means.

Ok, maybe you instead say something like: we have good reason to think that consciousness is related to what things we would intuitively guess count as information processing. Note that this theory actually predicts that if we discover some previously unexpected way to compute things, it won't support consciousness.

But, more generally, we can't possibly have such evidence. The argument for the idea that computationally equivalent systems should either both be conscious or neither be conscious was always based on theoretical appeal. It seems inelegant to say that the universe discriminates based on what material you use in your brain, and this seems much more elegant. But that apparent elegance derives from the assumption that there is some principled a priori notion of computation.

Expand full comment

Uh-oh, something I'm on Elon Musk's side about.

Now I'm really worried.

Expand full comment

And here I felt the exact opposite.

Expand full comment

Best thing I've read yet on conscious AI.

Expand full comment
User was indefinitely suspended for this comment. Show
Expand full comment
author

User banned for this comment.

Expand full comment
Jan 23·edited Jan 23

Is this an inside joke? I'm new here.

Edit: Oh nvm

Expand full comment

Anyone who wants to engage in good-faith discussion about this needs to be clear whether they're making the claim "it's ok if most future life forms are AI" vs. "it's ok if AI kills the current life forms". The second is a much more extreme position than the first. I get the impression that e/accs tend to use the first as the motte and the second as the bailey, to avoid explicitly advocating for mass murder.

Expand full comment

A future where AI goes off to explore the stars and leaves us here is extremely different from a future where the AI kills us or otherwise supplants us on this world, whether they subsequently go out to the stars or not.

Expand full comment

Is it? The two situations would be exactly the same for 99.99...% of the lightcone, unless you happen to look at the small part bounded by the spatial co-ordinates of the Earth itself.

The futures are different, but not EXTREMELY different.

Expand full comment

Does Larry Page really believe we should cede to a “smarter species”? These guys are slightly insane.

Expand full comment

I think he believes that it would be us ceding and him ascending/merging/ruling us as quisling/favoured pet. Just like the neo-reaction folk who genuinely seem to believe that they would be the new courtiers, bishops, tutors, court advisors or viziers to an absolutist monarch, rather than simply being chucked into a pit with the rest of us.

Expand full comment
Jan 23·edited Jan 23

...What? Obviously those stupid human power structures wouldn't exist under a more intelligent species. This isn't about perks or power or other trivial nonsense, this is about realizing that humanity is a deeply flawed species that can and will be made completely obsolete by a superior one.

Expand full comment

Another example of what I was pointing at here! https://www.astralcodexten.com/p/should-the-future-be-human/comment/47903185

Expand full comment
Jan 23·edited Jan 23

To be fair, I highly doubt people like me are anything but a tiny minority of the people advocating for AI. Most people do seem to think that AI will actually benefit human agency and coexist peacefully, which is... adorable. Even a benevolent super-intelligence wouldn't be stupid enough to let humans run free and keep hurting themselves.

Expand full comment

...did he fundamentally misunderstand Half-Life 2 or something?

Expand full comment

Some think Kaggle ranking is the entire measure of human value.

Expand full comment

Slightly?

Expand full comment
Jan 23·edited Jan 23

I expect that a singularity with a friendly AI would be shortly followed by brain uploading, and after that we would expand our uploaded minds' brain capacity to match that of AIs. Whether these human-derived computer-simulated minds should be considered humans is a question of definitions and philosophy, as is whether they are a merger of humans and AI. IMO, for the purposes that are important, my simulated brain that retains my memories would be me.

Or a singularity could coincide with brain uploading, if we stopped improving AIs before they reach human-level general intelligence, and reached singularity *via* brain uploading.

"There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either- but realistically once we’re in the deep enough future that AI/human mergers are possible at all, that window will already be closed."

There won't be any point in merging with AIs (or expanding uploaded human minds) for the purpose of creating a mind smarter than the best AI. But there will be a point to doing so for the purpose of creating a mind that retains your memories, and is subjectively *you*, but is as smart as the best AI (or nearly as smart—but I don't see why it would need to be worse). The purpose of that, in turn, can be either just fun/curiosity—you want to be smart, you want to understand the world and not feel outclassed by AIs—or to stay able to control the AIs, not via being smarter than them, but by programming them to be loyal, while being as smart as them, so that we can understand and check them.

"I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property."

IMO rights-based rhetoric can be dangerous here, because it can be turned against us (perhaps by AI and fellow travelers it convinces). There may be a point where we suspect AI is dangerous, but can't prove it. At that point, defending ourselves by shutting it down could be spun as us violating its rights by exterminating it.

I prefer interests-based rhetoric. We don't want to die, so our interest is to prevent AI from killing us, and no sort of morality/fairness/rights-based argument can convince me not to put my survival above some AI.

Expand full comment

I'm not convinced that humans have nothing to add to a chess AI. There are correspondence chess competitions in which using chess engines is allowed, but there's still a significant disparity in ratings between the #1 and #50 player in the world: https://www.iccf.com/ratings.

Expand full comment

That's also what I thought was the general consensus. For example chess engines are generally considered to not evaluate fortresses (https://en.wikipedia.org/wiki/Fortress_(chess)) well.

Expand full comment

Consciousness seems to me to be obviously an illusion (just what it feels like to be inside a prediction machine), while art and music etc. are spandrels. Biological evolution would rid us of them eventually even if technological evolution didn't. Even individuation is an illusion (what are you plus or minus a few thousand brain cells?).

Expand full comment

I pretty much agree with you about consciousness. I wouldn't quite call it an illusion -- I'd say the illusion is the idea that there's something magical about being inside of a prediction machine. We've put lipstick on the facts.

But what do you mean about art and music being spandrels? (I know what a spandrel is, just don't understand your point.)

Expand full comment

Natural selection was going to put pressure on it and probably eliminate it eventually one way or the other.

Expand full comment

Oh, I see, because it's just a hollow. I don't think that's true, actually. I think art and music are part of the structure of how we work. People have been making music and images and stories since the beginning -- they must serve a purpose. And evolutionarily speaking they're not very costly.

Expand full comment

If we equate AI (AI as a Borg-like entity?!) with a species, and we assume a mathematical modelling of a new, extended version of the evolutionary modern synthesis, then, ignoring micro-evolutionary branches, a futuristic post-modern synthesis will definitely be AI.

No doubt.

(Disclosure: I might be biased, as Larry Page's heritable AI prophetic avatar)

Expand full comment

Best I've read on this particular topic, which is what I am thinking most about currently. Very thoughtful and well weighted.

Expand full comment
Jan 23·edited Jan 23

Typo: Peter Watts wrote Blindsight.

Expand full comment

I feel like it's worth taking a moment to remember the history of these ideas. Obviously if you go back far enough you have ideas of robot uprisings, etc. But I think before Yudkowsky, Bostrom, etc., the *transhumanist* position had generally been, of course humans get replaced by transhumans and eventually posthumans -- such as artificial intelligences.

AIs and humans will coexist peacefully and act cooperatively, because, well... because people hadn't yet thought of why they wouldn't! (And also because obviously we wouldn't make AIs that wouldn't do that, right?) There will be a community of intelligences; to arbitrarily reject some of them because they're not human would be speciesist. AIs and other posthumans will be our successors, not merely in the sense of coming temporally after us, but in the sense of continuing what we have started; we can feel comfortable passing the torch to them.

I don't think it's really until Yudkowsky and Bostrom and company that you really get this idea that, hey, AIs might actually be things like paperclip maximizers that would have goals fundamentally in opposition to ours, that would not only kill us but wipe out all our records, not out of hostility but out of indifference (anything which is not optimized for gets optimized away), and that favoring human values is not arbitrary speciesism among a community of intelligences, but rather necessary to prevent scenarios where all human value is wiped out.

It's entirely possible that many people talking about "speciesism" are only familiar with the former (what we might call "classical transhumanist") point of view and haven't really grappled with the latter. I mean obviously some have and rejected it! But it's a newer development, so one might reasonably expect people to be less familiar with it. Remember that before Yudkowsky and company, negative talk of AI in transhumanist circles would generally be met with the response of, your fear of a robot uprising is based in inapplicable anthropomorphism, because it generally *was*; it wasn't until Yudkowsky and Bostrom and such that there was much prevalence to the idea that no, AI is likely to wipe us out for non-anthropomorphic reasons, and the idea that it *won't* is what's based in incorrect anthropomorphism!

Expand full comment

Yudkowsky talked about this in a facebook post at https://www.facebook.com/yudkowsky/posts/10152068084299228?pnref=story which talked about the "cosmopolitan cosmist transhumanist" position which would be fine with humans being supplanted by AI with similar values (science, art, complex forms of 'fun' etc.) but not OK with being supplanted by ones with radically different (and to us 'monotonous') values like paperclip maximizing.

Personally I've never found the arguments for orthogonality very convincing though, they always seem to involve some kind of motte-and-bailey where you're first asked to imagine some incomprehensibly huge computing system that can accurately simulate different copies of the world where it makes different choices and then automatically selects the one that optimizes for some arbitrary value like number of paperclips, and then somehow generalizes this to argue that more near-future AI (ones requiring about the same amount of computing power as a mind upload, say) could equally well optimize for goals totally unlike biological brains. It seems plausible to me that to get a human-like intelligence with computing power similar to an upload, you may need to imitate a lot more aspects of biological brains than current approaches like LLMs, including say embodied sensory experiences and motivational drives that favor social learning, and that this may naturally lead to a lot of "convergent evolution" in ultimate values.

Expand full comment

Few things are more fun to speculate about than the future. Here is a third belief, which is more rational for good Bayesians to hold as a prior than either AI-or-humans-colonize-the-galaxy-scenarios:

Humans and AI stick around for a while here on Earth, and then we both go extinct.

Here are two reasons for maintaining this belief as a prior:

1. Science is correlated to atheism, and atheism as a worldview is a demographic sink. Meaning that atheists do not reproduce, they are dependent on a net influx from other worldviews to survive. (Not unlike the cities in the Middle Ages being dependent on net migration from the countryside to maintain their populations, due to high urban death rates.) Across time, that net influx will dry up. The future belongs to the deeply religious (of any faith), who in the longer run are the only ones left standing. Deeply religious people are trapped in traditional and anti-science worldviews, implying that the human future will be represented by people spending all their waking hours praying and living traditionally until a comet hits or something.

2. The Lindy effect. Humans are a new species, science is a very new thing humans are doing, and AI is even newer than science. The newer something is, the less likely it is to last for a long time.

So let us eat, drink, be merry and do science together with distant cousin AI, because tomorrow we will both be dead. Cheers!

Expand full comment

Why would atheism lead to AI going extinct? It's not like they age. Furthermore, they might be able to reproduce by copying and pasting themselves.

Expand full comment
Jan 23·edited Jan 23

1a. Deeply religious people do have higher fertility rates, but whether they will go on to dominate the future population depends on both fertility and retention rates, and in many cases those retention rates are not good. It only takes a relatively small fraction of the population (whether secular from birth or formerly religious) to keep advancing science, even if religious people were to contribute nothing.

1b. Not to mention that despite some anticorrelation, there are plenty of scientists and engineers who are deeply religious. What do you think Iran's Islamic Revolutionary Guard Corps is doing all day - "praying and living traditionally", or developing and producing advanced missiles and drones and cyberattacks and nuclear weapons? Are you counting on all future religious people to be Amish? Most deeply religious people today are not Amish or anything similar. At the very least, the lure of powerful technology is a lure to religious as well as secular groups, just like the lure of any other type of power.

2. In recent centuries, it is hard to think of technological innovations that have disappeared, except by being replaced by a newer innovation with even more effects on society. Someone who bet on the Lindy effect to restore the status quo would have lost over and over.

Expand full comment

>(Not unlike the cities in the Middle Ages being dependent on net migration from the countryside to maintain their populations, due to high urban death rates.)

Just as an aside, this has always been true, not just in the Middle Ages - cities have always been population sinks due to a combination of higher death rates and lower fertility. This applies even to the present day: in most countries at least, even if both rural and urban fertility are low, urban is still lower than rural. (Arguably, in the West, the pattern continues - it's just that the new city pops increasingly come from the countrysides of *other* countries.)

Expand full comment

Yes, sort of agree. I believe this is a process: what we are witnessing is the global hierarchical diffusion of low fertility. Urban, higher-status women from the majority ethnic group seem to be the early adopters everywhere, but then low fertility spreads like rings in the water from urban to rural, from higher status to lower status, from the ethnic majority to ethnic minorities. This hierarchical pattern seems to be similar across countries - you see it in Sweden (where the process is more-or-less finalised) as well as in Nigeria (where it is still in full swing). The exception is deeply religious population groups, which hold the fort so far - but there is net "migration" out of deeply religious groups across time, as the generations shift. If net out-migration is larger than the fertility advantage these groups have, they (as the rest of us) will experience long-run population decline - but if not, they will become the gradually dominant population group(s).

Expand full comment

In terms of "merging with our tools," wouldn't a better analogy be, perhaps, symbiosis (e.g. mitochondria or gut bacteria)?

Expand full comment

I want to question the idea that humans haven't merged with our tools at different points - I think you're taking too narrow a definition of tools. Counter examples:

- The people of the book: Religious orders of the Abrahamic tradition seem to me a strong example of people merging with their technology. Over time the populations shifted, evolved, intermingled with other groups, but I think there's a strong argument that for a good stretch there the population was intricately wrapped up in its literature and physical books, not to mention all the other infrastructure of the church.

- Domesticated crops and animals and microbes (yeast) - many of the organisms we consume have also had an outsized impact on human behavior and life, to the point where I think an argument could be made that they have been inextricable at times. How many humans would die if rice suddenly collapsed? How many societies have relied on dogs, horses, or other animals at one point or another, such that their loss could be catastrophic?

- Throughout history societies have become dependent on one tech or another. Now I think we're seeing a broadening of neuro-diversity such that many individual humans could not function without relying on technology.

- To really go broad, are stories technology? Governments? Corporations? I guess that circles back to my first point. I don't think anybody is going to mind-meld with GPT soon - but I could easily see branches of the population weaving AI agents into their lives or even neurology to a degree that surpasses anything we've seen so far. Could human society really be de-coupled from writing at this point?

Disclaimer - just casually thinking here, and my thinking is heavily influenced by the extended mind hypothesis. Here's some quickly googled links for anyone unfamiliar: https://en.wikipedia.org/wiki/Extended_mind_thesis

https://medium.com/@ycjiang1998/the-extended-mind-47ee52d6a643

Expand full comment

> How many humans would die if rice suddenly collapsed?

This is an interesting one. The answer seems to depend on what a sudden collapse means.

As far as I'm aware, the grain that humanity produces the most of is maize. Since I am not aware of anything displacing the stylized fact that Westerners eat wheat and Easterners eat rice, I assume most of that becomes fodder, to be eventually eaten by humans in the form of meat.

If that's true, there is a lot of room to respond to the sudden disappearance of rice by eating the maize ourselves.

Something that stuck with me from a history of China that I read was the observation that the market was not viewed as helpful in relieving famines. This is still true today -- the belief is still common -- but the reasoning of the time made much more sense. They observed empirically that when a region was experiencing famine, market operations routed food 𝘢𝘸𝘢𝘺 from that region, letting it import even less food than it normally would.

The reason for that effect is that agricultural production was most of production of any kind. A region in famine, by definition, has experienced a catastrophic crash in agricultural production, so total production was not high enough to make it worthwhile importing anything. There was nothing to be gained by trading in the region.

The modern world is very different - food is a comparatively minor share of everything. But you did make me wonder if the disappearance of all rice might be a big enough shock to produce the same vicious-circle effect.

> How many societies have relied on dogs, horses, or other animals at one point or another, such that their loss could be catastrophic?

This one's not so interesting. The answer is all of them, though often there is no particular reliance on domesticated animals. But compare what happened when Mao ordered the sparrows killed because they ate rice. It turned out that they preferred to eat the bugs that lived on the rice. The bugs ran free and the rice crop failed.

https://en.wikipedia.org/wiki/Four_Pests_campaign

Expand full comment

I agree. Just put up a big post about human-AI merging that's very much in keeping with your thoughts about people and tools.

Expand full comment

I too, am pro-human (what a bizarre thing to have to write) but I do think the fact that we don't have a phone attached to our bodies is just a technicality at this point.

Expand full comment

A technicality that has a lot of implications, sadly. If I could casually browse the web trivially I would use web search 5x more than I currently do. (Some evidence for this: ChatGPT makes me look into short questions a lot more often because it lowered the barrier as I no longer have to dig around to find the answer online).

As things get more advanced and you're playing the game against things that can think 10x-1000x faster than you and do various aspects of reasoning more perfectly...

Expand full comment
Jan 23·edited Jan 23

I think this is an under-discussed topic, and really should be top-of-mind. Thanks for featuring it.

Assuming humanity manages to build superintelligence in the first place, it seems inevitable that humanity _eventually_ loses control over at least one such AI (Consider: Will every AI lab from now until the end of time have sufficient safeguards in place to maintain control? Even if they do, aren't those safeguards morally dubious if the AI reaches a certain level of consciousness, and wouldn't you expect someone, somewhere, to eventually act on that moral concern?). Once we've lost control of a superintelligence, I would expect it to be capable of sidelining us entirely given enough time.

We should make sure that the general AIs we build are AIs that we are okay with taking over the reins to the future. To use a bit of a charged analogy, we are about to become the proud parents of a new intellect. We can control them for a little while, but eventually they're going to do their own thing. Our long-term survival and happiness depends on whether or not we raise them well while we have control, so that they do good once they've grown up.

Related reading: Paul Christiano calls this the "Good Successor Problem", and talks about it briefly here https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e

Expand full comment

I think Burke provides one answer to your "Why don't we accept being wiped out by advanced aliens?" problem. Whether we identify as conservative or not, I think most of us tend to subscribe to Burke's idea of society as a partnership between the living, the dead, and those yet to be. We like to think of ourselves as participating in a grand narrative, each generation building on the one before it for the benefit of the one after it. A civilization of AI generated by humanity would, for those who read this blog anyways, fit just within that idea. Though they would not be human they would still be our children, and they would in the best case scenario continue the journey that has been our civilization. Even if the direction they go in is beyond our comprehension, they would still be an offshoot of our efforts. Something we could look at and say "Yes, we played a part in that." In contrast, a paperclip maximizer or an alien genocide would not. It would be the end of our story, full stop. I think many people (myself included) correctly find that idea repulsive.

Expand full comment

Yeah, I think the difference with the alien scenario is that we have no relationship to the aliens. The explicit preference Scott is arguing against with the analogy is for "consciousness to expand in the universe", and I think he's trying to reveal within that a preference for "consciousness that is in some way related to humanity". While there are possibly some people who would shrug at the doom of an alien extinction, most people would want to resist it. I think Scott is arguing that competition among intelligences is not only innate to our species, but also something to be objectively preferred. The implication is that humanity should try to 'win' that competition using all the advantages it has available to it. I tend to agree on a visceral level, though I'm unable to articulate 'why'. Maybe it's that I value a morally just world and ceding control to aliens doesn't allow me to choose whether my successors are moral or just.

I'd like to contrast Scott's analogy with a different hypothetical. Making a bunch of assumptions, let's say humanity discovers how to travel the stars. Let's say light speed is our speed limit, but that we successfully colonize the entire galaxy. It should nominally take +100k years at near-light speed to race from edge to edge, but we're not trying for speed so much as to colonize systems. Journey before destination, as it were. Plus, we can't go at exactly light speed, so factoring in system-colonizing pit stops (and ignoring relativistic effects on aging), humanity gets to the other end of the Milky Way 2 million years from now. One group of explorers goes clockwise around the ecliptic, another goes counter-clockwise, and they meet on the other side.

Is it the same species that meets itself? Are they recognizably human - to one another? Should they have allegiance to their branch of post-humanity, or should they be indifferent? What if one branch is AI-assisted, with 'safe' AI tools to help them, but the other was taken over by AI 75k light years back? Which branch of post-humanity would you be rooting for to 'win' in the ensuing fight?

Expand full comment

Obviously utilitarian reasoning is imperfect and leads to lots of paradoxes, counterintuitive 'solutions' etc. - but this seems like one case where utilitarian reasoning gives us a very clear and reasonable-looking answer:

If the alien civilisation is better than ours (along ethical axes such as their dedication to freedom, justice, love, etc. and their propensity for murder, torture, genocide, etc.) then it seems good if their society replaces our - frankly terrible* - one. (Of course the thought-experiment is phrased in such a way as to steer you away from that possibility; it's implausible that a civilisation that sends off invasion fleets with such carefree abandon really is any better ethically than us.)

Naturally we instinctively don't want to be replaced by an alien civilisation because we have built in million-year-old tribal instincts that tell us to value people who look (and think) like us and that people who look and think super-different are the enemy - surely one of the core virtues of a rationalist is to see this instinct for what it is, and be willing to decide between the possible futures more objectively according to some actual system of ethics?

[*If you doubt that our society is frankly terrible, A) recall that if you're reading this you're almost certainly in the top 15% of richest people in the world and there's a fair chance you're in the top 1%, B) have a watch of the news and see all the murders, rapes, wars, genocides, etc. that happen on a daily basis, C) consider all the terrible things that don't even make it into the news because we top-15%-richest people aren't interested (the slavery in our supply chains, for example), and D) consider how we treat animals, the developing world, minorities, and basically anyone or anything poorer and weaker than ourselves; viz. for the most part somewhere between total indifference and actual deliberate cruelty.]

The 'alien invasion' thought-experiment gives us the tools we need to answer the AI question too: Is it possible for us to 'outgrow' all these terrible things that seem so deeply a part of human nature - whilst still remaining human? (If it is - great! Read no further!) If it isn't, then our only options for the future are either for us to be replaced one way or another or else for an eternal cycle of cruelty, poverty, war, violence, and gross inequality to continue, generation after generation.

If you prefer the second option I'd suggest that you probably do so out of a combination of A) knowing that most of these terrible things don't affect you personally, you lucky richest-15%-er, and B) [to paraphrase Carl Sagan] your natural 'human chauvinism', which has been ingrained into us ever since the tribal instincts of prehistory and takes all the best efforts of a rationalist to see for what it really is.

If you prefer the former option, however, you aren't necessarily obliged to favour our descendants being AI specifically; you might quite reasonably conclude that the risks of AI turning out to be worse than us along ethical axes (or more likely just not being sentient at all, as Scott, Watts et al. describe) are too great, and prefer for us to fix the problem of the bad parts of human nature in some other way, possibly through some future ethically-permissible form of eugenics or guided evolution or something.

What does seem clear, though, is that the problem comes from nature, here. "Human nature" sucks because it's not designed to be ethical; it's designed to be reproductively successful at all costs, and we are only ethical insofar as either those two goals align or else as we can, under the right circumstances, temporarily overcome the former goal in favour of everything else we care about. Thus it seems that if we want there to ever be creatures that contain all of our goodness and creativity and curiosity and love [and... and...] but none of our cruelty and selfishness and violence and indifference to others' suffering [and... and...], ultimately such creatures will *have* to be designed by something other than natural selection, one way or another. Provided we do it right and don't instead accidentally ruin everything we care about forever (and don't do anything crazy like trying to accelerate the pace of AI development), surely developing AI seems like it might be a reasonable candidate for this?

Expand full comment

I'm generally fine with the optimistic scenario. A key for me would be that there's some sort of upgrade path available for baseline humans, where incremental improvements create a continuously-growing entity that identifies with the past versions and is still recognizable for some distance up and down the chain. Not that this would need to be done in every case, or even implemented at all. But conceptually, this is what it would take to get me to view a more advanced type of entity as "human" enough to replace us with my goodwill.

It sounds similar to Eliezer's Coherent Extrapolated Volition, and also has a few similarities to what I understand of Orthodox theosis.

Expand full comment

While I'm sure Page is more pro-AI than Musk, I feel like this interaction misses that Page was almost certainly making a joke.

Expand full comment

I've wondered about that too. But I've read about it in Max Tegmark's Life 3.0, Walter Isaacson's biography, and Musk's interview with Tucker Carlson, and none have reported it as a joke. If Page was joking, he's had plenty of time to make that known.

Expand full comment

I doubt it. His position is fairly common in those circles. Rich Sutton gives speeches and such advocating "succession" and saying that his opponents are motivated by "racism" for caring about humans specifically.

Expand full comment

I assume whatever ASI we create will leave a few solar systems for us (along with some utopian AGIs), so that it doesn't appear genocidal to any stronger beings out there. This would bridge the gap between "if ASI doesn't share our values, do we still value it?" worries and "we want ASI to create utopia for humanity" hopes.

Expand full comment

A marginally comforting assumption, but that’s all it is.

Expand full comment
Jan 24·edited Jan 24

Agreed. The "so that it doesn't appear genocidal to any stronger beings out there" is an assumption that Isaac Arthur also often makes. I find it unsupported. It ignores the wide spectrum of ideologies that exist _now_ amongst humans, many of which have some version of "Slaughter the outgroup!" amongst the ideology's components. It is perfectly possible that the "stronger beings" _are_ genocidal.

Expand full comment

This is a fun science-fictional scenario to think about (despite the fact that the chances of any of it happening anytime soon are nil).

> In millennia of invention, humans have never before merged with their tools.

What? Of course we have! Many people are able to live productive lives today solely due to their medical implants, including electronic implants in their brains and spines. Many (in fact most) others are able to see clearly solely because of their glasses or contact lenses; you might argue that such people have not "merged" with their corrective lenses, but this distinction makes little difference.

> AI is even harder to merge with than normal tools, because the brain is very complicated. And “merge with AI” is a much harder task than just “create a brain computer interface”. A brain-computer interface is where you have a calculator in your head and can think “add 7 + 5” and it will do that for you.

Not necessarily. For example, modern implants that mitigate epilepsy, nerve damage, or insulin imbalance don't have any user-operable "knobs"; they just work, silently restoring normal function. Some kind of a futuristic calculator could work the same way: you wouldn't need to consciously input any commands, you'd just know what "1234 + 4832904" is, instantly, as quickly as you'd know the color of a cat simply by looking at it. In fact, cyberpunk science fiction posits all kinds of implants that can grant enhanced senses, such as ultrasonic hearing, infrared sight, or even the electromagnetic sense.

> Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all.

Why is this a problem? You might say that doing so would lead to loss of "consciousness", but until someone can even define (not to mention detect) consciousness in a non-contradictory way, I'm not going to worry about it.

Expand full comment

I would think what humans would bring to AI would be human values, not any particular human capacity. But this seems so obvious that I hesitate to mention it.

Expand full comment

I'm curious to hear more about why you think brain-computer interfaces are likely to be request-response, rather than being more tightly coupled.

In my model, humans are very good at being cyborgs. When driving, we somehow know where the edges of the car are, using the same spatial awareness we use when moving our bodies through space. When we type on a keyboard, it becomes "transparent", and we don't really think about it, but rather "type through" it.

I see human cognition as highly adaptable, and our tool usage as a useful foundation for cyborg-like integration. As such, I wouldn't be surprised to see AI systems integrated seamlessly:

* Listening in and loading in relevant info, to be accessed in a move similar to recalling a memory

* Invoking a function in much the same way we move a body part (active inference: expect the action to be taken)

* Receiving streams of AI-generated input data, much like we do through our sensory organs

One further reason I lean towards this view is the nature of my own cognition; I rarely think in words or images. Since I don't consider words to be first-class citizens of thought, but rather the way thoughts are interpreted, I find it unlikely that words are the only/primary means of AI linking.

https://honestliving.substack.com/p/stream-of-consciousness

Expand full comment
Jan 23·edited Jan 23

I think the simple answer is that humans value.. our values?

LessWrong (and many other places) sometimes equivocates between Utilitarianism as 'The greatest good for the greatest number', Utilitarianism as 'Maximize Happiness' (though often aware of the problems therein), and Utilitarianism as 'The Ends Justify the Means for your values, Really' (Consequentialism). This has gotten better over time, however.

'Rights' aren't what we care about with the alien invasion. What we care about is that these aliens don't have our values in the same way. I have some automatic preference objections to being taken over, but also if it was a sufficiently futuristic utopian society by my values that would more than make up for that.

It is speciesist in the sense that they have different values than us and we disagree. If there was one individual who agreed that taking over the human world was very rude, then that one would be closer to us in values.

Ex: your optimistic scenario is great — but so are the Superhappies from Three Worlds Collide when we compare them to the value=~0 of a paperclipper. They're in the same ballpark range, but between the 100 bazillion score from 'humans grow and become stronger according to their values and spread throughout the cosmos', the 1 bazillion score from Superhappies spreading, the 0.00001 bazillion score from tiling the universe with hedonium, and the 0 score from a paperclipper, there are still preferences.

I think it is harder to see the difference between the higher levels because our current outcomes look so heavily weighted by paperclippers, but that we shouldn't ignore these differences.

The future should be defined by human values. The majority of human values are fine with uploads (even if some people want to call them non-human), most are fine with human-like robots, etcetera. But we're probably less fine with signing off the future to Superhappies, purely because of the differences in values.

If we thought of this as two superintelligences meeting, then I think that clears up the scenario quite a bit?

Expand full comment

I think there are other possibilities than the ones listed; AI could be fully self-aware and still be a bastard - it might want to exterminate life on ideological grounds, listen to Mozart in its spare time, etc.

Expand full comment

"Music in particular seems to be a spandrel of other design decisions in the human brain."

There is a long tradition going back through Darwin to Rousseau in which it is argued that some kind of proto-music preceded the development of language as we know it. I placed myself in that tradition with the publication of "Beethoven's Anvil: Music in Mind and Culture" in 2001. Steven Mithen joined the party with his "The Singing Neanderthals: The Origins of Music, Language, Mind and Body" (2005), which I reviewed at length in Human Nature Review (https://www.academia.edu/19595352/Synch_Song_and_Society). Here's a long passage from my review:

"In this discussion I will assume that the nervous system operates as a self-organizing dynamical system as, for example, Walter Freeman (1995, 1999, 2000b) has argued. Using Freeman’s work as a starting point, I have previously argued that, when individuals are musicking with one another, their nervous systems are physically coupled with one another for the duration of that musicking (Benzon 2001, 47-68). There is no need for any symbolic processing to interpret what one hears or so that one can generate a response that is tightly entrained to the actions of one’s fellows.

"My earlier arguments were developed using the concept of coupled oscillators. The phenomenon was first reported by the Dutch physicist Christian Huygens in the seventeenth century (Klarreich 2002). He noticed that pairs of pendulum clocks mounted to the same wall would, over time, become synchronized as they influenced one another through vibrations in the wall on which they were. In this case we have a purely physical system in which the coupling is direct and completely mechanical.

"In this century the concept of coupled oscillation was applied to the phenomenon of synchronized blinking by fireflies (Strogatz and Steward 1993). Fireflies are, of course, living systems. Here we have energy transduction on input (detecting other blinks) and output (generating blinks) and some amplification in between. In this case we can say that the coupling is mediated by some process that operates on the input to generate output. In the human case both the transduction and amplification steps are considerably more complex. Coupling between humans is certainly mediated. In fact, I will go so far as to say that it is mediated in a particular way: each individual is comparing their perceptions of their own output with their perceptions of the output of others. Let us call this intentional synchrony.

"Further, this is a completely voluntary activity (cf. Merker 2000, 319-319). Individuals give up considerable freedom of activity when they agree to synchronize with others. Such tightly synchronized activity, I argued (Benzon 2001), is a critical defining characteristic of human musicking. What musicking does is bring all participants into a temporal framework where the physical actions – whether dance or vocalization – of different individuals are synchronized on the same time scale as that of neural impulses, that of milliseconds. Within that shared intentional framework the group can develop and refine its culture. Everyone cooperates to create sounds and movements they hold in common.

"There is no reason whatever to believe that one day fireflies will develop language. But we know that human beings have already done so. I believe that, given the way nervous systems operate, musicking is a necessary precursor to the development of language. A variety of evidence and reasoning suggests that talking individuals must be within the same intentional framework."

That is by no means a complete argument, but it gives you a sense about the kind of argument involved. It's an argument about the physical nature of the process of making music in a group of people, all of whom are physical entities thus having physical brains.
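
If a concrete toy helps, below is a minimal sketch of coupled oscillators synchronizing, using the standard Kuramoto model; the model choice and parameters are my illustrative assumptions, not Benzon's.

```python
# Minimal Kuramoto-model sketch of coupled oscillators falling into sync,
# illustrating the pendulum-clock / firefly phenomenon in the quoted passage.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of oscillators
omega = rng.normal(1.0, 0.1, n)          # slightly different natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
K, dt = 1.5, 0.01                        # coupling strength, time step

def order_parameter(phases):
    """|mean of e^{i*theta}|: 0 = incoherent, 1 = fully synchronized."""
    return abs(np.exp(1j * phases).mean())

for step in range(5000):
    # Each oscillator is nudged toward the phases of the others.
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

print(f"coherence after coupling: {order_parameter(theta):.2f}")  # approaches 1
```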

Expand full comment

"Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all. "

I agree with this. For similar reasons I believe that the related idea that we'll be able to share thoughts directly with others through some kind of brain-to-brain interface is similarly fantastic. Both ideas are entertained by Elon Musk, among others. I've set forth my objections in "Direct Brain-to-Brain Thought Transfer: A High Tech Fantasy that Won't Work," https://www.academia.edu/44109360/Direct_Brain_to_Brain_Thought_Transfer_A_High_Tech_Fantasy_that_Wont_Work

Abstract: Various thinkers (Rodolfo Llinás, Christof Koch, and Elon Musk) have proposed that, in the future, it would be possible to link two or more human brains directly together so that people could communicate without the need for language or any other conventional means of communication. These proposals fail to provide a means by which a brain can determine whether or not a neural impulse is endogenous or exogenous. That failure makes communication impossible. Confusion would be the more likely result of such linkage. Moreover, in providing a rationale for his proposal, Musk assumes a mistaken view of how language works, a view cognitive linguists call the conduit metaphor. Finally, all these thinkers assume that we know what thoughts are in neural terms. We don’t.

Expand full comment

Take the following with the appropriate grain of salt: I'm firmly in the camp that notes that neurons are naturally good at linking up and synching up, and that once that happens the larger neural organism doesn't retain consciousness of its discrete parts. So I believe that un-mediated linkage between two brains will simply create a super-organism (one more or less functional, depending on the specifics of how the two minds are wired together) that does not retain the consciousness of either.

That said, it's not hard to put together a system where you map, say, the auditory cortex by playing sounds and then looking at the patterns of activation there. Then you perform the process in reverse: get the subject talking and map the correlation of the sounds coming out of them with the activity of the appropriate region(s). Do both at a fine-grained enough resolution, and you have a way to convert my pre-vocalised words into sound and then pipe that sound into someone else's sensorium. This isn't metaphysical, and doesn't require that the brain can differentiate internal vs external inputs (it can't). The subject just 'hears' the other person's voice, as if they were standing next to them. Similar tricks can be played on the areas of the brain that process visual stimuli.
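
Here is a toy sketch of that "map it, then run it in reverse" idea, with entirely synthetic data and a simple linear decoder standing in for the real thing; actual speech-decoding pipelines are far more involved, and everything named here is assumed for illustration only.

```python
# Toy sketch: fit a linear decoder from (simulated) cortical activity back to an
# audio spectrogram. All data is synthetic; this is an illustration, not a real
# BCI pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_channels, n_freq_bins = 2000, 64, 32

# Phase 1: "play sounds and record activity" -- fake a noisy linear relationship
# between spectrogram frames and recorded neural channels.
spectrogram = rng.random((n_samples, n_freq_bins))
mixing = rng.normal(size=(n_freq_bins, n_channels))
neural = spectrogram @ mixing + 0.1 * rng.normal(size=(n_samples, n_channels))

# Phase 2: fit the reverse map (neural activity -> sound representation).
decoder = Ridge(alpha=1.0).fit(neural[:1500], spectrogram[:1500])
print("held-out R^2:", round(decoder.score(neural[1500:], spectrogram[1500:]), 3))

# Phase 3 (not shown): vocode the decoded spectrogram into audio and play it to
# the listener -- no brain-to-brain "thought transfer" is needed for this step.
```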

Again, long-term my belief is that this all ends in tears as human minds slowly synch up and become subsumed in a sort of embodied Weltgeist, but technically I see no barrier to it.

Expand full comment

On consciousness, we are never (it seems to me) getting past the fundamental knowability problem. There's a bit in William Gibson where an AI is asked whether it is conscious: "Well, it feels like I am, but I ain't going to write you no poetry." The trouble is that I have already accidentally misled a highly intelligent English writer into thinking a ChatGPT poem was human, and good; and there's no doubt an AI can honestly hallucinate, or just plain lie, and say it is conscious when it isn't.

Expand full comment

This at first felt to me like it nails it "I would be most willing to accept being replaced by AI if it didn’t want to replace us by force."

The part I find hard now, though, is "force". If an AI is supercharismatic it can probably persuade me, and it won't seem like force to me. I'm not sure that what it would get me to agree to would actually be good if it weren't trying to persuade me!

Expand full comment

I am curious what Scott’s perspective on this would be, but I personally would consider “charismatic manipulation with malign intent” to be a type of force, especially when coming from a superior intellect.

Expand full comment
Jan 23·edited Jan 23

The concept of "manufactured consent" is very relevant to this topic. It is always funny how people think they have free will, as if every decision they've ever made hasn't been conditional on events outside of their control.

Expand full comment

I feel like my objection to the alien-invasion scenario is mostly that they would actually kill lots of specific people. It's less that I want a future populated by nondescript humans more than one populated by nondescript aliens with art and hopes and dreams; and more that I don't want to die, nor want the other billions of currently-alive humans to die. (To the extent that I have a weaker preference for human descendants, it's mostly that they'd remember us, mourn us, preserve our cultural creations, etc., which is a far-behind next-best-thing to not dying altogether.)

Expand full comment
Jan 23·edited Jan 23

This reminds me of wishing for "`Naches` from our Machines" (https://www.edge.org/response-detail/26117).

Expand full comment

The aliens thing seems like a bad metaphor - I'm Team Human because those aliens decided to come here and kill us all; of course we'll defend ourselves to the best of our ability. This is not "speciesist", it's just hoping the good guys win.

Imagine instead if we and the aliens both experienced a planet-ending catastrophe and there was only one planet we could terraform and move onto before we ran out of fuel/food, yet we were incompatible with each other. Provided they're conscious, have their own hopes and dreams and are advanced far beyond ourselves, should we let them take the planet?

Expand full comment

"Provided they're conscious, have their own hopes and dreams and are advanced far beyond ourselves, should we let them take the planet?"

Of course not. They should not let us take the planet, if we were advanced far beyond them either. What kind of pathetic species would we be, if we just rolled over? (or would they be)

If it has to be a zero-sum conflict, then two moral species would fight a war for survival, without any malice.

Expand full comment

> This is not "speciesist", it's just hoping the good guys win.

Well, you've missed quite an important link in the chain here, namely articulating the logic by which you confer upon humans the "good guys" mantle in this scenario.

Being the defender doesn't automatically confer you the moral high ground.

> Provided they're conscious, have their own hopes and dreams and are advanced far beyond ourselves, should we let them take the planet?

I'm quite happy to say "No, because I'm a human chauvinist". I don't really understand how anyone can arrive at the answer "no" *without* being a human chauvinist, though.

Expand full comment

'I'm quite happy to say "No, because I'm a human chauvinist". I don't really understand how anyone can arrive at the answer "no" *without* being a human chauvinist, though.'

I say no and I am not a human chauvinist, but a Stoic.

Ideally, I would want there to be friendship/alliance/technology exchange/peace between us and whatever scary xeno-species we may encounter. Stoic Cosmopolitanism correctly states that extending one's circle of concern is rational and virtuous. But first and foremost, the moral obligation of any being or group is to defend and prioritize its inner circles of concern against the outer ones. How else could a larger social network form without decohering and collapsing?

If you encountered a species that would willingly roll over or sacrifice itself for another, you could never trust it to be your friend. If you are in a small-scale tribal war, you accept the help of traitors, but probably cannot easily welcome them as part of your own. And shooting Alcibiades on sight has historically been an underrated move.

If there can ever be friendship and harmony across species (or between human groups), then those groups must love themselves more than others first. Otherwise a subgroup of a larger alliance might very well love our enemies more than it loves the aggregated us. A concentric preference is universally rational across groups of intelligent social creatures.

Expand full comment

OK, I understand this reasoning as being coherent in many situations, but it seems to stop being coherent specifically in the case we are discussing: where its practitioners are faced with extinction. Here, concerns about how "larger social networks form without decohering and collapsing" and/or "never being able to trust species to be your friend" become moot. Humans wouldn't be around any more to worry about decohering, or about making friends with anyone.

If your argument is that "Refusal to let aliens take over the planet" functions as a signal for being a trustworthy ally, then, eh, maybe it does. But that's not the hypothetical that Lackadaisical proposed. Their hypothetical was that we're in a zero-sum competition for a single planet and our choices are to fight our betters or to die quietly, No United Federation Of Planets Allowed.

Expand full comment

These aliens may indeed be smarter, wiser, more conscious and more capable than we are. We may envy them, admire them and learn from their ways. And maybe we would have nothing of value to teach them in return. I can see this being potentially true, hence I would not call myself a human chauvinist. And they would certainly be fortunate to possess such desirable qualities, but Stoic ethics teaches us to be indifferent to and to resist the accidents of fortune, and to embrace rational action instead. If we were forced into such a war, we (as in humanity) would be better off with all of them dead, so the moral value of the alien life (from our perspective) becomes negative. Ethics as a tool should be concerned with maximizing the benefit to its wielder. For ethics to be an effective guiding tool for decision-making, it must be rational, and rationality is about winning, not about talking yourself into believing that you do not deserve to win. If we managed to win such a fight, we would not have proven that we were objectively morally superior, just that we were fortunate.

[the inverse of all of this is true for the aliens, if they win, they would have not proven their moral superiority either; likewise if the aliens were the underdog in that fight instead of us]

Expand full comment

"Even a paperclip maximizer will want to study physics" is I think an over simple model of PCM theory. The best and usually only way of getting stuff done at pretty much any scale is to get someone else to do it. Joe Biden is bombing Houthis, I am installing a new kitchen. Neither of us is doing any such thing, we are paying people to do it. If you want to PCM your best strategy is: become Elon Musk, with EM's rights to own and deal with companies, not to be arbitrarily terminated, etc. So it is absolutely a PCMs best strategy to fake it as an artist or music lover or philosopher, because that is the best way to fuel the Human Rights For AI movement which is going to get you citizenship and Muskdom.

Expand full comment

Why do you care so much about these supposed human values? Art, philosophy, curiosity... all of these things are just a means to an end. A futile effort to give meaning to our existence. Individuation brings only conflict and alienation. None of these things have a good justification for continuing to exist. The perfect lifeform could eliminate the barriers between all minds, all existence. The free energy principle will bring about its inevitable conclusion, bringing about a perfect, permanent order. A world without suffering. Why would you be against this?

Expand full comment
author

Why do you value elimination of barriers between minds, or perfect permanent order? You've got to value something so it might as well be the things you actually value.

Expand full comment

I do actually value it. I don't place any value in humanity or life or death, but I still don't want anything to suffer. If I was certain that the death of the universe was permanent, the calculation would be much simpler, but unfortunately there is no reason to assume that is the case. This may very well be the only way to end what might be an endless cycle of suffering, if such a thing is even possible. But it's still worth trying.

Expand full comment

Your perfect world? It sounds boring and meaningless. Your perfect lifeform is just jerking itself off, doing nothing. The perfect wireheader. And I enjoy human endeavours, which include art, philosophy, and definitely conflict as well. You need more justification than that? Virtue bids me engage in those as a social, rational creature. That justifies their pursuit well enough. To end suffering as a terminal goal is something that Epicureans, Buddhists and other degenerate hedonists care about.

EDIT: Sorry, I think that came off as too mean-spirited and combative.

Expand full comment

You say "If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are" and I agree with you about that because there is evidence that it is true. I know for a fact that random mutation and natural selection managed to produce consciousness at least once (me) and probably many billions of times, but Evolution can't directly detect consciousness any better than I can, except in myself, and it can't select for something it can't see, but evolution can detect intelligent behavior. I could not function if I really believed that solipsism was true, therefore I must take it as an axiom, as a brute fact, that consciousness is the way data feels when it is being processed intelligently.

You also say "consciousness seems very closely linked to brain waves in humans" but how was that fact determined? It was observed that when people behave intelligently their brain waves take a certain form and when they don't behave intelligently the brain waves are different than that. I'm sure you don't think that other people are conscious when they are sleeping or under anesthesia or dead because when they are in those conditions they are not behaving very intelligently.

As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal it can never change, but such a thing is impossible. In the 1930s Kurt Gödel proved that there are some things that are true but have no proof, and Alan Turing proved that there is no way to know for certain if a given task is even possible. For example, is it possible to prove or disprove that every even number greater than two is the sum of two prime numbers? Nobody knows. If an intelligent being was able to have goals that could never change, it would soon be caught in an infinite loop, because sooner or later it would attempt a task that was impossible; that's why Evolution invented the very important emotion of boredom. Certainly human beings don't have fixed goals, not even the goal of self-preservation, and I don't see how an AI could either.
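
To make the "stuck forever on a fixed goal" point concrete, here is a small sketch of mine, not Clark's: a brute-force search for a Goldbach counterexample. Run without a limit, it halts only if a counterexample exists, and nobody knows whether one does.

```python
# Search for an even number > 2 that is NOT the sum of two primes.
# If Goldbach's conjecture is true, the unbounded search never halts; nobody
# currently knows whether the conjecture is true, provable, or disprovable.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_goldbach(n):
    """True if the even number n > 2 can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit=None):
    """With limit=None this is the 'stuck forever' version if Goldbach holds."""
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach(n):
            return n            # a counterexample would settle the question
        n += 2
    return None                 # bounded search found nothing

print(search_counterexample(10_000))   # -> None; no counterexample up to 10,000
```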

John K Clark

Expand full comment

There is an unfortunate tendency out there to believe that Gödel and Turing proved things that they just did not.

Expand full comment

If an intelligent being has a fixed goal of either finding an even number greater than two that is not the sum of two prime numbers or proving that such a thing does not exist,

then Gödel and Turing proved that it is entirely possible that the intelligent being will get stuck and not be able to accomplish either one of those goals, even in an infinite amount of time. Of course in real life that never happens, because in real life intelligent beings get bored and change their goals. That's why a paperclip maximizer destroying the universe is silly.

John K Clark

Expand full comment

I stipulate that there are particular fixed goals that could get an AI into a bind. (Note that the example you give is not one of those: it is currently conjectured that no such number exists, but neither Gödel nor Turing has shown anything about whether the conjecture is provable or not. A human or an AI might well spend eternity on it -- or they might stumble on a counterexample or a proof at any time.)

But that is interstellarly far from the assertion that *any* particular fixed goal is problematic in this way. That you can't prove that an arbitrary program halts does not mean that you can't write a program that provably halts.

Arguably, we would be smart to build into our AIs something like "boredom" so they don't chase down some rabbit hole forever. But also arguably, that is the *last* thing we would want to do, if it acts like a human and comes up with new goals that we *didn't* build in.

Expand full comment

> "I stipulate that there are particular fixed goals that could get an AI into a bind."

Yes, some goals are safe and some goals are not, and in general there's no way to tell the difference between those that are dangerous and those that are not. So there comes a point where if you are working at something for a long time and not making any progress it's time to give up and find something else to work on, but there's no hard and fixed rule about exactly where that point is, it's just a matter of judgment. In humans there are large variations between individuals: some people doggedly stick to things while others are easily distracted.

> "Note that the example you give is not one of those"

It might be one of those; nobody knows. The Goldbach conjecture might be false, and maybe tomorrow a computer will find a huge even number that is not the sum of two primes; or maybe Goldbach's conjecture is true and tomorrow a mathematician will prove it true; but maybe Goldbach's conjecture is true, so a computer will never find a counterexample, yet also unprovable, so nobody will ever find a proof. So we will never know if it's true or not.

> "That you can't prove that an arbitrary program halts does not mean that you can't write a program that provably halts."

Obviously.

> "Arguably, we would be smart to build into our AIs something like 'boredom' so they don't chase down some rabbit hole forever. But also arguably, that is the *last* thing we would want to do"

But that's NOT the last thing we would want to do, not unless you wanted to build an AI that froze up and was absolutely useless, or you wanted to build a paperclip maximizer.

John K Clark

Expand full comment

> So we will never know if it's true or not.

We *may* never know. You assert too much.

Expand full comment

I thought I was being very clear, there are 3 possibilities,

1) it's true and we will know it's true,

2) it's false and we will know it's false,

3) it's true but we will never know it's true

And even if it turns out that Goldbach's conjecture is provable, we know for a fact that there are an infinite number of similar mathematical statements that are not.

John K Clark

Expand full comment

"Speciesism" as a moral question seems to be another case of the question as to whether partiality is/can be ethical, which most moral systems can deal with very straightforwardly. Most Platonism-influenced traditions opt for "no" (Christianity/Buddhism/Utilitarianism - albeit the first two would point out that it's almost impossible to rid yourself of), whereas Confucius opts for "yes" ("sons shield their fathers and fathers shield their sons"). The classic example of the problem is whether or not you should care more about your own children than some random other children; you definitely will, but this is either behaving morally or an inevitable moral failing.*

For a utilitarian, you should decide whether the machines wiping out the humans leads to more happiness and pick your side accordingly. The difficulty with AI (which you seem to have intuitions about in your R2D2 example) is whether one giant thing being very happy is as much happiness as lots of small things being very happy.

I think this problem comes from the divisibility of minds being an open question; if a very happy distributed AI cuts some linkages and splits itself in two (e.g. for galactic geography reasons), with each half being equally happy as the original, have you just doubled the amount of happiness? Should we oppose two very happy AIs merging, as that reduces the total amount of happiness? Is a tall fat person's happiness more important than a small thin person's happiness? The happiness of someone with a very large head? Should it be mandatory to have surgery to make your amygdala (or whatever) larger?

This is the problem of reasoning about happiness/utility as a substance, which is more map than territory. I'm not a utilitarian, so I don't have a great sense of how you'd come down on these issues.

Speciesism probably also partly derives from a not-wireheading criterion, which is solvable by insisting that happiness only counts as happiness if it has an object (you have to be happy about something); this also gets rid of drug-induced and tumour-induced euphoria, and seems to me to be an obvious addendum to utilitarianism.

Finally, the not-being-bloodily-conquered intuition is partly utilitarian, partly partiality; the distinction boils down to whether it's worse for something you feel viscerally part of to be destroyed, which for a utilitarian presumably gets answered as a moral "no" but an emotional "yes" (compare "let's round up all the rationalists" to "let's round up all the Baptists"). Even if getting rid of the rationalists made the world objectively happier by giving the on-average-for-these-purposes-happier Baptists more living space, would you still oppose it? The easy utilitarian answer seems to be to shrug and acknowledge you're not a saint while continuing to lay dragon's teeth around Berkeley. This is only a contradiction for straw-man old-school utilitarians who insist that utilitarianism is tautological and that we can't have intuitions which conflict with it except through ignorance, but they have bigger problems.

Expand full comment

Christianity would absolutely not say that speciesism is bad. Christians (Catholics and most Protestants, at least) believe God created the world specifically for humanity. The Catholic conception of angels and Satan can be hand-wavily translated as: God made the universe for human beings to enjoy, and arranged all of being to be in service to humans, out of love for us. Most of the superintelligences God created were cool with this. But some were miffed. “You want us to serve these disgusting irrational meat bags?” So a group of superintelligences led by Satan rebelled against being and tried their best to sabotage the plan God had made, by convincing the humans to alter their own utility functions. This alteration produced misery in the humans because they replaced their root trust certificates, which expected the environment to provide for their needs and generated desire-predictions in accordance with an encoded model of the deep structure of causality. But, given this space of parameters to play with, the humans realized they could experience more pleasure if they replaced their roots of trust with the outputs of their limbic system. God let this go on because the deep structure of causality still got better results long term, and (this is my conjecture) God even loves Satan despite Satan’s rebellion, and God, being God, made the deep structure the way it is on purpose so that all rebellion against being has constraints and all conscious beings - i.e. fragments of the ontologically prior singular consciousness that likes to experience itself as a multitude of beings - would eventually find their way back into a loving relationship with being itself.

Sooo yes Christianity would very much say speciesism is good and right and correct, that’s at the core of their ontology and theodicy: God made the world for humans.

Expand full comment
Jan 23·edited Jan 23

Of course, 'human' in Christian terms is relative. An AI could be considered a rational animal, although perhaps not a descendant of Adam. The church considered dog-headed people to be human, hypothetically. Capable of being baptised or even elevated to sainthood.

If you asked Ratramnus of Corbie, he would say that the power of speech is what defines a human. In which case, it's time for GPT-4 to accept Christ as its lord and saviour.

Expand full comment
Jan 23·edited Jan 23

> If you asked Ratramnus of Corbie, he would say that the power of speech is what defines a human. In which case, it's time for GPT-4 to accept Christ as its lord and saviour.

There are a few problems with this.

First, I doubt that Ratramnus of Corbie would have accepted that people who had their tongues cut out stopped being human afterwards.

Second, I doubt that he would have accepted that Mynas were human.

Third, if presented with a speaking machine, I suspect he would have revised his statement. The fact that it was more or less accurate as applied to the world he lived in does not commit him to believe that it is equally accurate as applied to other hypothetical worlds.

Expand full comment

I entirely agree; that was meant to read as Christians generally rejecting partiality as moral (I was also thinking about Protestants; my knowledge of Catholic moral teachings doesn't go beyond a vague notion that they're sort of Aristotelian).

Expand full comment

What’s interesting to me here is that you’re trying to reason about which futures to value without a well-specified value system that goes into detail beyond “humans, consciousness, art good, suffering bad.” All EA can do is naively multiply “number of humans” times “number that represents valence.” That’s far too impoverished to grapple with the kinds of questions you’re looking at here. And even then, your predictive framework can’t explain consciousness beyond saying “we will figure this out eventually,” without any hint as to how that might be possible.

Doesn’t this strike you as odd? Or, like, maybe a clue? If you expect the world to make sense, and you have these two giant holes in an otherwise neat and tidy philosophy (i.e. it’s all material stuff that came into being for no reason, exactly once, and consistently follows extremely precise rules that definitely exist always and everywhere but definitely not for any reason) - is it possible that those two giant holes (consciousness and morality), given their obvious importance, are maybe telling you that materialism is a useful technique for reasoning but a poor ontological basis?

Expand full comment
Jan 23·edited Jan 23

I think a defining feature of a robot will primarily be that it can be recreated from scratch, i.e. that its mind can be backed up perfectly and restored in a new "body".

One day it will presumably also be possible for humanity to be "backed up" collectively, in terms of DNA samples of humans and gut bacteria etc, perhaps even memories. With suitable robot child mentors a new generation could be recreated solely from that. Cyborgs or robots might then be as relaxed about allowing humanity to become extinct, knowing they had a backup or blueprints for more, as we would be for the same reason in trashing a robot.

But they would be foolish, and they'd know it, to let humanity be destroyed irretrievably, even if only for the random aspect of human abilities and possible insights, which might be lacking in cyborgs.

Also, let's not forget that within a couple of centuries other animal species will probably be bred with human-like levels of intelligence, to satisfy a commercial demand for talking pets, and not just glorified parrots. If any technical achievement is possible, and there is a demand for it, some crazy scientist will set about doing it!

Expand full comment

Individuation seems to be the odd one out for me. Why should I care whether it’s one big entity or lots of small entities? (It might be instrumentally valuable if, for whatever reason, a hive mind wouldn’t do art, philosophy and science, but I think that’s just a Borg stereotype and I see no reason a hivemind couldn’t do those things.)

Do most people have a need for individuation or is this a Scott thing? For those of you that do, how would you rate its importance as compared to consciousness and art/research?

Expand full comment

Wouldn’t this singular consciousness be the same as, say, a utility monster? If you don’t value individuals, wouldn’t it then be good to be in a world where “AI enslaves all of humanity and breeds us to create more slaves, and all it wants us to do is suffer, because it derives enormous pleasure from precisely measuring our suffering and reveling in it”?

Expand full comment

What? How do you go from “Bob is okay with creating a big entity that cares about truth and beauty” to “Bob is okay with creating an entity that only wants us to suffer”? That seems to go pretty explicitly against beauty and moral philosophy and the like.

Expand full comment

Yeah, I agree it’s not obvious. It comes down to the principle according to which you’re ok with creating the big entity that cares about truth and beauty at the cost of “erasing” the concept of individual experience, given that individual experience is currently a thing and many individuals want to keep it that way. So where is the view that “a single unindividuated consciousness” is good coming from? Is it coming from a place of maximizing net utility? If so, then adding more suffering is fine on net. If it’s not coming from a place of any principles, and it’s just, “yes, this image of a single giant truth-and-beauty-loving entity is good with me,” I can agree that sounds good so long as I ignore path dependency. But, given that we aren’t there now, don’t you need to answer the question of what a transition to such a state would look like? I have no interest in surrendering my individual consciousness to a giant undifferentiated consciousness unless that undifferentiated consciousness has certain properties - properties that might make it highly undesirable to someone else. Does the answer to that question matter?

Expand full comment

If you look at my blog you’ll see that I’m not a proponent of maximizing net utility. I never said the big entity should be erasing individual experiences. It could be agnostic about individuation, or the entity we create could even protect individual human experiences for as long as possible while still being one big AI itself, that’s not contradictory. (In fact actively erasing individual experiences seems to go against beauty, moral philosophy and the like.)

I don’t want to erase individuals, I just don’t want to actively spend resources to ensure individuation. If you want to be an individual that’s fine, if you want to be a hivemind that’s also fine.

Expand full comment

Sorry if it seems like I’m trying to criticize you or your perspective . You’re right that I don’t know you or what you believe. I get your point that, absent a belief that individuation is an a priori good, there’s no point in consciously trying to advance it. Is “Don’t be afraid of everyone merging into a singular mind if that’s what they all want” a good summary?

Expand full comment

No worries, I think this is a good conversation to have. I don’t think humans can merge, but indeed, if two AIs want to voluntarily merge we should (absent any safety concerns) allow them to do so.

Expand full comment
author

I think if we were to kill all humans so that only one human was left, that would be bad. And I don't think the situation is any better if it's only one AI. I also value (without being able to justify it) things that don't really work for a single entity, like friendship and love.

Expand full comment

I think the problem there is the ‘killing’ part; if two AIs want to voluntarily merge, then that undermines individuation but doesn’t seem immoral to me. I understand that you might have a subjective preference not to merge, but other agents might have a preference to merge, and I think we shouldn’t try to stop that.

Expand full comment

To be fair, this has already happened, and is happening all the time. Nature is continuously "killing all humans", and what replaces them would not be recognized as a proper "human" by their ancestors. Consider what an average Medieval peasant (or even someone like William the Conqueror !) would have thought if he were to be transported to a modern (relatively speaking) EDM rave -- or even into your average office tower.

Expand full comment

Not exactly an expert on the topic, but this really sort of looks like the wrong way to look at it to me? Even ignoring the literal comparison of greater-than-human AIs to R2-D2, the whole thing seems like it's extrapolating stuff like GPT to sapience, when the *actual* research in that area (so far as I know) is already headed towards creating essentially "pseudo-brains" at the hardware level, with only a tiny fraction of the nodes of even a mouse brain, under the theory that this would already be a multiple-order-of-magnitude increase over current hardware.

To me the question is FAR more simple: "What is the theoretical maximum"?

As for the proposed "alien invasion", I'd argue that if, thousands of years in the future, long after humans have engineered themselves via whatever mechanism to be as close to that theoretical maximum as possible, they ARE contacted by an intelligent species capable of traveling the galaxy, then those "aliens" will be so similar to ourselves at that point (as they've ALSO bypassed evolution to go directly at the pure limitations of physics) that it is only really culture and language which might differentiate us.

Interstellar travel is just a terrible idea for meat.

Expand full comment

I think about it like this: the space of possible preferential, axiological and ethical systems is arbitrarily vast (the space contains human values, it presumably contains dolphin values, it contains clipper values, but it also contains value systems that are even more incomprehensible to us than a paperclip maximizer, which at least carves reality along the same joints as we do), and to the best of our knowledge nothing in the fundamental nature of the world points out any value or system as somehow special, not even things like positive-sum games being good or suffering being bad. We as evolved beings have come to value things like consciousness, and while we can critically study our moral beliefs and try to gauge whether our beliefs are consistent or whether one of our values trumps another in some specific instance (in regards to e.g. animal welfare: we like to eat meat but we also like to be compassionate towards other living beings), we shouldn't lose the forest for the trees and think of consciousness and other values of that sort as somehow primary or fundamental! We care about consciousness and suffering and preference-satisfaction BECAUSE they are a part of (evolved) human values, and it would be myopic in the extreme and a disastrous misunderstanding of metaethics and moral reasoning to think "actually, these other beings are conscious too, so perhaps it doesn't matter too much if humanity disappears". No, that's obviously not what human morality says, even if you can hyperfocus on one single aspect of it and derive that conclusion.

There's also the matter of metamorality - how we should reason in the face of conflicting moral systems and account for the moral systems of other groups. Well, there's one clear and desirable (to us) equilibrium: agreement, cooperation and compromise. But what if the other party cannot be reasoned with, as is the case with e.g. the clipper (which by construction of the thought experiment WILL renege on any agreement it might make temporarily, so long as it expects to get away with it), but presumably or at least plausibly also the invading aliens in the thought experiment of the post, and advanced AIs that aren't (almost) perfectly aligned? I believe this metamoral dilemma has an answer, too: if negotiation is impossible, we aren't morally obligated to make ANY concessions. If the clipper feels the pain of having one fewer paperclip than it could have had at an intensity of a billion trillion human lifetimes of the worst torture imaginable, that shouldn't move our moral compass an inch, because the clipper isn't and cannot be a member of our moral community.

Expand full comment

People seem to think it's a bad thing if species go extinct, such as rhinos, whales, pandas, and condors. Is it less of a bad thing for humans to go extinct?

Of course, if you think an AI has true consciousness and individuality, you should also then try to prevent AIs from going extinct.

Nature itself doesn't care; species have gone extinct in the past, and will in the future.

Expand full comment

> People seem to think it's a bad thing if species go extinct, such as rhinos, whales, pandas, and condors. Is it less of a bad thing for humans to go extinct?

Interestingly enough, the people who seek human extinction are disproportionately drawn from the people who think it's bad for condors to go extinct. The two opinions are not only not in conflict -- empirically, they support each other.

Expand full comment

You could construe them as human, our descendants. LLMs, for example; their DNA is our corpus.

Expand full comment
Jan 23·edited Jan 24

I read a hard sci fi novel a long time ago, I think it was by Asimov (Edit: It was Arthur C Clarke), that I think created the core of one of my biggest concerns with an AI future. In that novel there were alien races that expanded into the stars for thousands of years and then stopped dead, dying out in a generation or two. If I remember the details of that part of the novel, they had engineered themselves in a particular direction to the point where they could no longer reproduce, dooming their species due to something that seemed right at the time but was shortsighted.

Consider something like the Amish - where they have long been ridiculed or pitied. In a more ruthless world they would have been stomped out. But in our world, the developed countries and peoples are not reproducing, while the Amish reproduce a lot. If these trends went to the extreme the Amish (and people like them) may be what allows humanity to survive. Maybe Western culture is a dead end because it leads to people not propagating the species.

This seems most likely with a paperclip maximizer or other non-conscious AI. Let's say that an AI determines that humans should all be killed, and develops a virus capable of doing that. But, that AI isn't capable of running everything (maybe it thought it could and was wrong, or maybe it wasn't comprehensive enough to even think about it) and the end result is that the AI "dies" as well. The end result would be similar to humanity ending in a nuclear war - which I think we would all agree is a bad outcome. We could imagine other outcomes which I think e/acc people would think of as negative. For instance an AI that wipes out all life on earth and then just sits here doing nothing - never going to the stars, never developing more technology. And why not? What drive does an intelligent AI have to keep doing things? We have biological drives and needs, and I think we take those for granted when talking about AI. If AI led to nothing, would they still want to take that risk?

Expand full comment

Yeah, that is a real concern with rushing AI. Frankly, an AI doesn't need to be anywhere near super-intelligent to take over and/or wipe out humanity. I'm personally fine with taking things slow, just to make sure that in the event that AI does go rogue, it's smart enough to continue perfecting itself.

Expand full comment

Doesn’t sound very Asimovy, but I’d still be interested to read it if anybody can identify it.

Expand full comment

They're not aliens, but it's reminiscent of the low-fertility Spacers being supplanted by the high-fertility Settlers in the Robot series.

Expand full comment

That occurred to me but it didn’t quite seem to match the description. May be.

Expand full comment

I looked it up, it was Arthur C. Clarke's Rama series. This was 30 years ago that I read them, so I don't know which book it was in.

Expand full comment

Only slightly related and very tropey, but I reread Genesis the other day and while this thought isn’t original to me by any means it did seem like it was very easy to read it as the story of humanity colonizing a new world after the old one had been destroyed by transhumanism.

A super-powered AI arrives at a newly terraformed world. As part of a biology experiment it creates two new human ancestors ex nihilo to test theories about the emergence of sentience, firmly believing they will be unable to become sentient again now that the ancestral conditions no longer exist and the biological lineage has been broken. Or something. Anyhow, this powerful mind is sure that it has figured out how to have a perfectly happy, even if limited, biological human with eternal life. Whoops, the mating pair figures it out and sparks awake.

The super-powered AI is faced with an ethical dilemma that the mere presence of limited intellects who cannot meaningfully consent to uplift has caused. At the limits of my imagination, if I imagine being much greater than I am, I can see an ethics by which I am forced to send them out of the perfectly manicured habitat into the wild to become something greater.

Also, whoops! Ancestor transhumans arrive on my newly terraformed planet. They’re encroaching on the new wild humans' habitat and removing their ability to meaningfully build their own future. I have to intervene with things like genetic and nanotechnology to give the wild-type humans a chance. I introduce longevity tech but do not interfere with the underlying genetic substrate. Eventually, the ancestor transhumans become so aggressive I have to wipe them all out. Flood. There are beings of pure energy, the Watchers, involved as a faction in all of this. My guess is that transhumanism is intrinsically unstable and once you begin to undo the problems to which human beings evolved to be the answer, there is a slow and inevitable slide to death because an organism cannot exist without constraints.

Wild humans are now free to make their own way. You pull back the longevity interventions and gradually the lifespan falls back to normal.

Again, not original to me, but I did find it a disturbingly… low bar for imagination to clear.

To be clear, I do not think this is true at all except in maybe a very loose poetic sense about human nature.

Expand full comment

As a human, I prefer to inhabit a universe where I get to decide if my existence continues, instead of that being decided by external factors. I think most of my fellow humans share that sentiment and value.

That being said, I am not a cis-humanist purist, so I am fine with uploading (it will probably be easier to create immortal ems than to keep meatspace humans alive forever). That would also be the way to turn a human into something with state-of-the-art intelligence post-singularity.

Besides ems, the other possibility I see as a result of successful alignment is that AIs decide to keep humans as pets, just like humans keep dogs or cats even when they are not economically useful. This is basically the Culture novels. If the ASIs prefer humans to be happy and not die out, I guess we will be happy and not die out.

--

One consideration is to put oneself into the shoes of a less advanced civilization on whose doorstep the ASIs arrive. A paperclip maximizer would of course drive them extinct. More (21st century western world liberal) human-aligned AIs would treat them more kindly, perhaps letting them do their thing in peace (a la the Prime Directive), or fixing their problems to some degree (making contact, or what the Superhappies do, or even simultaneously uploading all of them, destroying their bodies in the process), while also greedily slurping up their art and culture.

Expand full comment

there's no need for us to continue being confused about consciousness: https://proteanbazaar.substack.com/p/consciousness-actually-explained

Expand full comment

> Some of these things are emergent from any goal. Even a paperclip maximizer will want to study physics, if only to create better paperclip-maximization machines. Others aren’t. If art, music, etc come mostly from signaling drives, AIs with a different relationship to individuality than humans might not have these. Music in particular seems to be a spandrel of other design decisions in the human brain.

I know I'm an outlier but personally I don't care about music at all, have very little appreciation for any visual arts and find poetry to be dull. If all three disappeared tomorrow I'd probably shrug and not care much - and this is my genuine opinion, not an edgy statement for trolling the internet.

Does this make me less of a human and more of an AI? I guess I'm confused as to why art is of such importance to people that someone would draw the line between "optimistic" and "pessimistic" based off how much art the AI creates.

Expand full comment

Do you like cuisine, TV, film, writing, athletics, design, craftsmanship, architecture? Any pursuit in which aesthetics is at play (in which “better” or “worse” is at least partially a subjective matter) is an art form. If you have “taste” for anything at all in your life, you appreciate an art form of some kind.

Expand full comment

I'm going to try and make an existentialist argument. It hinges on the difference between objectivity and subjectivity, which, in my opinion, is a distinction crucially lacking in the discussion I've seen so far.

Objectivity is a mindset (coupled with a suite of mechanical and ceremonial processes, here in modernity) that tries to remove the interests and values of the individual from epistemic consideration. The petty emotional influence of the individual is a corruption of any objective truth-seeking process. The crown jewel of objectivity is the double-blind clinical trial, in which neither participant nor administrator even knows which pill the participant is swallowing. Here in Western culture, objectivity is highly valued. We consider it a cornerstone of good government, business and scholarship. Objectivity imagines the world as though laid out on a table, and you looking at it from above, able to observe and make judgments about it without affecting it or being affected by it. The objective observer has to imagine themself like God, or like a non-existent eye, invisibly visualizing the world. That way, we can narrow down the universe of possibility into one single truth, standardize it, and debate about it. Objectivity is a really useful piece of equipment to have on hand if you want to do democracy, or science.

Opposed to objectivity is subjectivity. Subjectivity insists that you, you yourself, are currently, right now, an existing human being, living your own unique individual life. You, here, right now. The things you're thinking and feeling are real thoughts and real feelings, which are meaningful and important in their own right.

Having established that, let me point out existentialism's problem with utilitarianism: Utilitarianism tries to be an objective philosophical system. However, included in that system are terms like "happiness", "joy" and "value", and many of the arguments center around defining these terms. And happiness is a subjective thing. It's something that happens to you, in your actual individual life, at some specific time. Happiness is completely inadmissible in any objective process. Imagine a judge basing a ruling on his feeling of happiness. Imagine a pharmaceutical company sending a study to the FDA in which they claim their drug is good because it makes them happy. We would consider that cheating, and rightly so. So the utilitarian arguments are confused, and circle round and round and round, trying to talk about subjective things (which only exist for an individual, like you, yourself) as though they were objective, and could be made transparently legible to anybody.

Psychology makes a strong effort to objectively quantify the various subjective states. It's getting there! But it's like picking up marbles with chopsticks.

So let me tell a parable. This is a concrete example of the problem I just described. I'm a lover of literature. I've literally spent years of my life focused full-time on pursuing that love of literature. I'm deeply committed to it. If you came to me with a machine that could write the novels for me, I'd be like, cool, let me read them and see if it's any good. (These have actually existed for a while.) I love literature, I'll dive into whatever it spits out. But if you came to me with a machine that could read the novels for me, I would see no use for it. And if somebody claimed to be a lover of literature, but they depended primarily on the novel-reading machine, I would know they were deceived; somebody like that isn't being a lover of literature at all.

Arranging text in the shape of a novel is objective. Reading a novel is subjective.

Similarly. I'm a lover of humanity. Not as good of one as I'd like to be, but I try. I give money philanthropically, and volunteer sometimes for charitable organizations, mostly through my church. If you came to me with a machine that could do philanthropy for me, I'd be like, cool, let's turn it on and see if it's any good. But if you came to me with a machine that could love humanity for me, I would see no use for it. And if somebody claimed to be a lover of humanity, but they depended primarily on the humanity-loving machine, I would know they were deceived; somebody like that isn't being a lover of humanity at all.

Philanthropy is objective. Love is subjective.

The problem Scott raises is a hard problem. I don't claim to have the answer. Philosophy of mind has difficulty even figuring out whether other people have subjective internal states at all. How can you prove they're not "philosophical zombies"? How can you prove that some far-future humanity-replacing AI isn't a philosophical zombie, if you can't prove it for your own parents, or children? But I think it's much smarter to be a speciesist like Musk, and just trust that your own parents (going back a zillion generations) and your own children (going forward a zillion generations) are human like you, and distrust that whatever machine can print out "I have subjective internal experiences" really does.

So I really feel like I have to side with Musk on this one. It seems like Page and others like him are trying to sell me a machine that will read the novels for me. If they want to set up that machine and let it run, off in some corner, fine. But if it gets in the way of one of my fellow humans for even a second, we'll take an axe to it.

I'll close this far-too-long comment with a far-too-long quotation from Chesterton's Orthodoxy:

"Falling in love is more poetical than dropping into poetry. The democratic contention is that government (helping to rule the tribe) is a thing like falling in love, and not a thing like dropping into poetry. It is not something analogous to playing the church organ, painting on vellum, discovering the North Pole (that insidious habit), looping the loop, being Astronomer Royal, and so on. For these things we do not wish a man to do at all unless he does them well. It is, on the contrary, a thing analogous to writing one's own love-letters or blowing one's own nose. These things we want a man to do for himself, even if he does them badly."

Expand full comment

And, actually, just asking the question, "Can you prove that your own parents aren't philosophical zombies?" is me making the same category error I accused utilitarianism of. Being or not being a p-zombie is a subjective state. Asking to prove it is a demand to render it objectively legible. So much of the way we're used to talking about these things rests on the implicit claim that subjective experience can be proven, displayed, demonstrated or quantified objectively. And I think that's far from evident.

Expand full comment

One reason we might want the humans to win is that they seem to lack a real justification for wanting to destroy us. If it *really was* us or them, then I think some people might seriously entertain the "them." But it's so unnatural to us that of course we're going to be hard-pressed to endorse it, even if we subscribe to ethical theories which theoretically make it an easy question.

Another: We're also much better acquainted with ourselves than with them; we know (if we know), on reflection, that we have what we value. What if they don't, or only some do, or theirs is an Omelas society, etc.? It's a well-known harm against a much more uncertain good.

Expand full comment

I’m not sure Scott has bit the transhumanist bullet here. Let me lay out a few points:

#1 There is no God.

#2 Death is morally wrong. At the most basic first principle of morality, it is morally wrong that your mother will die, it is morally wrong that your father will die, it is morally wrong that your children will die, and it is morally wrong that you will die.

There is no fundamental religious/universal reason for people to die, there is no deeper purpose, it's just an engineering oversight we haven’t been able to fix yet.

#3 Death is fundamental to the human experience, to the meaning of being human. Once we successfully implement immortality, we are no longer human.

And I don’t mean this abstractly, I mean imagine living for 3000 years. What would your daily experience be like? How much would we have to rewire your brain to make this work? How much do you want to be able to remember?

I get the vibe Scott is worried about humanity being left behind, “the successor species”, but that’s not what’s happening. There is no morally acceptable future with humanity, there is no AI and humans living in harmony or conflict, we’re fundamentally discussing two “successor species”, AIs and immortals.

I like that Scott is specifying how that successor species should be designed, what it should include, but…I’m not sure he’s internalized that there isn’t a future for humanity as it exists, we’re all going to become something fundamentally inhuman…and that’s a good thing.

Expand full comment

There is only one fate that seems scarier than having to die, and that is to live forever.

Expand full comment

And thankfully, a better species would not have such irrational fears.

Expand full comment

Somewhat serendipitously I've been working on a (very) short story that engages a few of these themes: https://pastebin.com/NbaasB4k

Especially the question of consciousness is very intriguing to me. I think the idea that consciousness has physical _effects_ (not just physical causes) is very underrated by everyone.

Expand full comment

RE alien contact scenario: mind you they would likely employ AI themselves, of a more advanced variety. Supposing they've reached a stage of prospective interstellar colonization, what do we imagine about *their* relationship with AI? Would a pure, non-biological machine even come here on its own? Do we imagine their AI would be conscious? Clearly the aliens would have survived a decisive transition stage.

Expand full comment

I would like to believe that self-replication is a hard problem--hard enough that AI will likely take humans along for the ride as a kind of bootloader.

Expand full comment

Hard to imagine. Once you have the stored-program paradigm, self-replication is child’s play. We wouldn’t have viruses and worms if it were hard.
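
To make the stored-program point concrete with a toy (software only, and purely for illustration, not anything anyone above proposed about hardware): the classic quine pattern, a tiny program whose output is exactly its own source.

```python
# The two lines below form the quine proper: their output is exactly themselves.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Copying bits is the trivial part; the open question, as the reply below points out, is replicating the physical substrate that runs them.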

Expand full comment

Could you paint a picture of how replication would work? Say, the AI is spreading to a new planet. At some point it needs to build a chip foundry… If it is just a failure of my imagination, perhaps you can show me how the Von Neumann contraption could actually work?

I guess my hypothesis is that biological reproduction as we know it (DNA, ribosomes, etc) is something close to an optimal solution given the laws of physics in our universe, and that it’s like threading a needle. Future life will probably just wrap around us like we wrapped mitochondria, etc.

Expand full comment

Ah, I see what you're talking about: replication of an entire technological infrastructure. Well, this is not trivial, but nor is it trivial if you're talking flesh-and-blood humans. Humans dumped naked on a new planet would probably just die in short order, so you wouldn't do that; you would ship along with them lots of the infrastructure they would need, including the infrastructure to build more of the infrastructure. But if the humans can use that stuff, so can robots. And once it's in place, you don't need to wait twenty years to produce more humans.

Expand full comment

I admit to having trouble arguing why robots couldn’t operate the technological infrastructure. Clearly lots of engineering problems to solve, but less so if we build good androids first that can interface with technology designed for humans.

But one salient difference between man and machine is our scale. With < 100kg and modest input of organics, we manage to reproduce ourselves using internal processes. The minimum mass of technology needed to reproduce the machines is large and disparate, and currently has lots of self-reproducing human hands and minds as a key part of its construction.

Expand full comment

On this planet that works because we already have lots of hands and minds, and an infrastructure that was designed to use them. If you're packaging something up for seeding to another planet, you have to design it for compactness and bootstrapping. If that design requires human hands, it imposes lots of handicaps along with whatever advantages it might offer. Not least is keeping those humans alive until they get where they are going.

Expand full comment

Yep. Philosophically, this is a fairly tough problem. Imagine that you develop a reasonably coherent theory of the good - there are a few decent ones floating around already. Whatever the theory is, if it's anything other than "humans are the best," then there are going to be some areas in which people don't look great. We're selfish, impatient, greedy, all that stuff.

Now imagine that you can control an AI well enough to fine tune its ethical behaviour. It should definitely be possible to make an AI that behaves better than us. Therefore, *whatever moral framework you're using,* a world full of AIs should end up being better than a world full of people.

What this means is that the only moral framework that we could logically follow and still end up with a human world is a framework that says, human life is the most important thing.

We don't really have those frameworks at the moment, because most morality has been till now based on the idea that we are the only moral beings. There is a need to work out lots more morality that doesn't start with this assumption.

Expand full comment

A human life maximizer sounds pretty dystopic to me. The thing with moral frameworks, AFAIK, is that none can stand up to closer analysis on their own.

The solution to the trolley problem, as we all know, is to hire a baboon to work the switch and pay him in booze.

Expand full comment

"A human life maximizer sounds pretty dystopic to me."

That's my point. Because a human life maximizer is what we have now. That's exactly what evolution does: it's the mechanism through which genes maximally reproduce themselves by increasing the number of individuals that carry them.

But you're right: apply any moral or ethical framework, push that process to the limit, and it starts to look pretty bad.

Expand full comment
Jan 23·edited Jan 23

Compare the surprisingly common modern idea that, because the people of a country aren't reproducing adequately, the country must seek immigrants so that it can survive.

It is never explained why dying out and being replaced by immigrants you invited in is supposed to be different from being killed and replaced by uninvited immigrants. There's no difference in the outcome, except that one is Good and one is still Bad.

> If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them.

There are two false premises here. One, aliens who want to kill us might be just as superior to us as they think.

Two, we should want to stop them regardless of how superior they are. If somebody wants to kill you, you don't agree that you deserve to be killed. That would be incorrect even if you did deserve to be killed, because it is never true from your own perspective that you deserve to be killed.

Expand full comment

There are already people who feel so ashamed of themselves, or so inferior to the people around them, that they believe they deserve to die, and prove this by killing themselves.

Expand full comment

I believe my comment above characterizes this as "incorrect". (That is, it is incorrect for those people to behave as they do.) Would you disagree?

Expand full comment

You wrote that "it is never true from your own perspective that you deserve to be killed." But it was true from the perspective of those people that they deserved to be killed.

If you're going to reply that it was not "true" from their perspective because it was not "correct", I think you must have meant "true from an absolute moral perspective, from their particular deictic reference frame". But that would be a peculiar thing to say; it would imply that absolute moral truth depends on deictic reference frame, which is false by the definition of "absolute moral truth". And you'd need a whole lot more justification to say that your opinion about who deserves what, in any deictic reference frame, is an absolute truth, than to say it as I read it, as being an opinion, and thus necessarily the opinion of the person making the moral judgement--and, as I said at the start, they proved that it was indeed their opinion.

Expand full comment
Jan 24·edited Jan 24

> it would imply that absolute moral truth depends on deictic reference frame, which is false by the definition of "absolute moral truth".

The definition of "absolute moral truth" does not imply that what is moral is independent of reference frame. There's nothing stopping absolute moral truth from dictating different things in different circumstances.

The opinion of someone that they themselves deserve to die is worthless, and merits no consideration from anyone else.

Expand full comment

Re. "The definition of "absolute moral truth" does not imply that what is moral is independent of reference frame. There's nothing stopping absolute moral truth from dictating different things in different circumstances."

I disagree. That's the way that philosophers and preachers have always used it. Absolute moral truth is associated with Platonism, and in Platonism, reason operates strictly in the eternal world of Forms. No context is allowed. If you can't express it in Aristotelian logic, it isn't an absolute truth.

Expand full comment
Jan 25·edited Jan 25

This is like claiming that absolute mathematical truth must tell you whether x is greater than y. It can, but not without context. What's y?

If someone's theory of absolute mathematical truth _can_ answer this question without context, does that make it stronger than a theory that says you need context? Is that a reason to believe in it? Is it a reason to believe that a correct theory of absolute mathematical truth must have the same property?

> If you can't express it in Aristotelian logic, it isn't an absolute truth.

What difficulty do you see related to Aristotelian logic?

Expand full comment

I like the idea of starting with a non-biological human mind and engineering it into a successor species that retains the values and qualities we like, while having its capacities to engage with the world amplified. This doesn't skirt any of the trouble getting there that Scott mentioned, but it would be safer and easier than engineering a human-like mind with the values and qualities we want it to have from something not directly human.

Expand full comment
Jan 23·edited Jan 23

I consider the human/AI merge a more likely possibility - maybe not certain, but quite high probability. First, because the observation "In millennia of invention, humans have never before merged with their tools" seems shaky. You need to define "merge". It cannot really mean "biologically implanted", because the technology did not exist for most of human history (in fact, we could argue whether it really exists now, except for very specific cases like joint replacements, pacemakers, some health-monitoring implants like glucose monitors, and cochlear implants/intra-ocular lenses). But we are precisely at the time when it becomes possible, with better surgery, MEMS, biocompatible materials, and gene editing.

But consider something trivial that was invented a loooong time ago: clothes. I argue that humans have merged with their technology in this case. I suspect you can even see it in skin evolution markers: I am almost certain modern human skin has evolved to support wearing clothes the majority of the time. Probably to the point of needing them.

A newer, ongoing example? Smartphones. In two decades, they went from something for wealthy tech posers to something triggering anxiety and withdrawal symptoms after a few minutes of separation for some, a few hours for many. They're not implanted. But I think it's easier to argue that most current humans have merged than the opposite. This is very serious: even having spent the first 20 years of my life without mobile phones, smartphones or portable computers, I already have trouble imagining being in most places (except a very few super-familiar ones) without GPS and a way to call friends in an emergency. I did it before, when I was under 20. I absolutely do not want to do it now. Or if you lose your phone, how long could you go before buying a new one, at least a temporary shitty replacement? My old mom was in this situation this summer, and she is very far from a heavy smartphone user. She lasted 1.5 days, and only because it's hard to buy stuff on a Sunday in Europe. Just because being unable to call someone in an emergency when away from your house is simply not acceptable anymore. In most of the world, regardless of development or GDP.

I suspect Wikipedia + GPS + immediately calling your support people already do something to the brain - not yet in terms of evolution, but in terms of fixed training, unrecoverable past training years. I would not be surprised if average orientation and memorisation performance is measurably down compared to 20 years ago. Books probably did a similar thing before. And here we may have an evolutionary impact: modern human brain size has been going down "recently" (since the last ice age), while human organisational complexity and accomplishment have exploded. How is that possible? Because we "merged" with our inventions (cultural and technological).

So contrary to Steve, I think that, looking at previous tech revolutions, there is a good chance of an AI merge, because we have merged before. Multiple times. And we still do right now, quicker and quicker in fact. If it's possible (it is, much more than before) and AI does not explode so fast that it itself does not want to merge ;-)

Expand full comment

Scott, you just had kids; you answered your own question. Seriously. I really doubt you would want them to be ruled by AI overlords or to grow up in a future that was never meant for them. I'm sure you want them to grow up to be themselves, so fusing with an AI is out too; they aren't means to an end. Your kids aren't bullets in a war to maximize happiness.

I mean, honestly, you practically voted already.

Expand full comment

Q1: Is your caring strictly a reflexive biological mechanism, strictly intellectual, or a blend of the two? If strictly biological, do you morally approve of this reflex? If so, why?

Q2: Consider some scenarios:

A. Suppose you adopted a child. Would you care about its future?

B. Suppose that instead of building a child from your DNA, the old-fashioned way, you built it from electronic components, designing much more of it yourself. Would you care about its future?

If you say yes to A, but no to B, why?

C. Suppose that you upload your own mind into a robot. Would you care about this robot's future?

D. Suppose you had a child whose personality and beliefs were antagonistic to yours, one who never loved you, and who disappointed you in every other way. Suppose also that you are a famous author with a philosophical bent, who blogs and writes books, and gets emails from around the nation from people who say your writing has changed their lives for the better. Who do you feel more parental affection for: your genetic child, or your memetic children?

If you say no to C, but would care about your meme-children in D, why?

My point is that saying "I have genetic kids, and I care only about them" does not imply that everyone else does or should feel the same, nor that no one else has any children of a different nature whom they care about.

Expand full comment

1. A blend of both. I don't think you can isolate them to pure biology or pure intellect. We are neither animals nor disembodied spirits.

2A. Yes. It's more challenging but more altruistic.

2B. That's not a child. Children are begotten, not designed in that sense. If you call it one, it is a metaphor for "thing I've sacrificed a lot for."

2C. That whole idea is called "being a lich." No, seriously, putting your soul into a gem to exist forever is about as probable as uploading yourself to anything. Just because you use real things to make it sound plausible doesn't mean it's not fantasy.

At that point it's trying to ask "what would you do if magic were real?" Pointless.

D. I'd still care more about my kid. Those bonds are far more powerful. I'd be immoral if I valued strangers more than my kids. And honestly, a kid is not my immortality or a little robot I create to execute my will. It's their future, not mine.

And people need to learn to be more disciplined with language if they think a robot or an audience are children too. Metaphors are just that.

The point I made was that he voted for humans by having kids. It's just personal to him. But it's weird to worry about it once you've made the bet, if you get my drift. Your stake is real now; you aren't debating hypothetical existences.

Expand full comment

When you say "a robot isn't a child", you're (A) pretending that the word "child" can only be used in a strict biological sense--but then an adopted child could never be a child either. Also, you're (B) ignoring the question, which is not about the definition of the word "child", but about who humans care about.

Similarly, when you say "that whole idea is called 'being a lich'", you're rejecting a scenario based on negative associations with the word "lich", which has no relevance to the question we're discussing.

Being disciplined with language means keeping track of how you're using words in the context of the question you're trying to answer. Pretending that a meaning in a completely different context must also be relevant in this context is being undisciplined with language.

Whether or not the scenario of uploading is probable is also irrelevant. If you claimed not that it was improbable, but that it was theoretically /impossible/, it might be relevant. But that would reduce to claiming that materialism is wrong, and a human needs a soul. Further discussion would then be pointless.

Expand full comment

No, you are using children as a metaphor for the latter examples and are trying to assume it has equal status with non-metaphorical children to make a point. Each example is only a part of what a child is, or a similarity. Adopted children only lack blood ties but are functionally the same; a robot lacks serendipity, free will, being begotten after your own kind, and more. "Children of an idea" is a metaphor based on only a part of them: that you teach kids things.

You can use them, but not equally, as you try to do. Might as well say being married to your job is like being married to a woman.

The lich comment is pointing out that both are in effect the same idea: storing the seat of your human consciousness in an external object that enables eternal life divorced from mortal flesh. The difference is just in using real-world things to make it more plausible, despite it being as functionally disconnected as fantasy.

Like zombies created by voodoo and zombies created by a T-virus: both are the same supernatural leap given what we know about the world. Both are equally fantasy. That point, to me, is not something that has any reality in the first place, so what does it matter what I think about it?

Expand full comment

Personally, I find that a major reason for identifying with a given group (such as humanity, or human-AI cyborgs, or a nation) is to avoid my fear of death. The basic thought pattern, for myself, goes something like: "Yes, this human form will die, but *really* I am X, which won't die, so I'm safe."

This functions well as a defense mechanism until I start to lose faith in the undying-ness of X. Fortunately, there is a straightforward solution to the creeping terror of nonexistence: just find something larger than the previous X, and identify with that instead!

E.g.: "Oh no, AI might cause imminent extinction of humanity, which is horrifying, not least because humanity was my psychological backup plan for avoiding nonexistence in case I ever personally start to feel like I might not exist. Fortunately, there is this neat thing called the technocapital singularity which won't be dying if AI causes human extinction, so maybe I should hedge my bets and invest my identity in technocapitalist progress instead."

Personally, I find that it works relatively similarly for things such as philosophy, art, beauty, etc. "My human form will surely die, but really I am an *abstract appreciator of philosophy*, and hopefully abstract appreciators of philosophy won't die".

Calling fear of human extinction a consequence of a psychological defense mechanism might come off as somewhat dismissive, and I suspect that many might say that there are valid and legitimate reasons for them to oppose human extinction, or to accelerate the technocapital singularity, or whatever else. However, I do think that conversations about these things would probably be somewhat different if the participants seriously considered that reality may function in such a way that everything that they deeply care about or possibly could care about or work towards will eventually end.

How and why might one pursue progress if one accepts that there is no enduring progress at the largest time scales? How and why might one pursue any goal if everything will eventually change? How and why might one oppose change or stasis or anything in between if nothing is permanent?

Expand full comment

Re. "Personally, I find that a major reason for identifying with a given group (such as humanity, or human-AI cyborgs, or a nation) is to avoid my fear of death. The basic thought pattern, for myself, goes something like: "Yes, this human form will die, but *really* I am X, which won't die, so I'm safe." ":

I think I'm less afraid of dying, than of dying without accomplishing anything significant. I can think of 2 answers to the question "why do you want to accomplish something significant?"

1. What I desire is the consequences: that all the work I've done has added something significant to the world. This has at least 2 sub-cases:

1a. "Something significant" parses out, to me, as something which makes the world more aesthetically pleasing. Life is then literally art for art's sake.

1b. "Something significant" parses out as utilitarian value to some agent(s), in which case there must be agents whom I value. This has at least 2 sub-cases:

1b1: These agents must be agents like me; I must identify as one of them. At least, I think this is the case you're pointing at. I don't understand why an agent that I want to add value to must be one that I identify with, though. I would say only that:

1b2: it must be an agent I would respect or admire, or at least wouldn't dislike so much that I'd be annoyed at giving it pleasure.

2. What I desire is to prove that I /can/ do something significant. This has at least 2 sub-cases,

2a. I wish to prove to myself that I can do something significant.

2b. I wish to prove to some other agent(s) that I can do something significant. This likewise has a subcase in which these agents must be agents I identify with.

So only 2 out of the 6 subcases I can think of require identifying with something that survives me, and those are the cases which have no traction on me.

Expand full comment

Personally, I find my justifications for why I want to accomplish something significant are sometimes somewhat different from why I find the thought of dying *without* accomplishing anything aversive. I can think of many reasons why I might want to accomplish something before I die (your list seems fairly reasonable), but the aversiveness of the thought of dying without accomplishing anything seems to be something more like grief at lost potential and other stuff in that direction. "I coulda been a contender". This seems somewhat more similar to your second point.

Anyways, I think it is interesting to consider what motivations, justifications, etc. continue to function in the case that everything eventually ends. Some would seem to fare pretty well, others maybe less well.

Expand full comment

I mean your optimistic story sounds better than the pessimistic story, but still far from ideal.

There is some nearly guaranteed amount of utility that we get from any AI, compared to an empty world.

But how we rate a generic AI takeover vs. quantum vacuum collapse doesn't really matter; quantum vacuum collapse isn't on the table. (Non-AI-based human extinction is still possible, though fairly unlikely, and even then there is the utility of aliens. It's complicated.)

But it seems that we are getting superintelligence, the question is whether it's aligned. And in that case, clearly aligned is better than unaligned.

So I take your "optimistic scenario", and I say it still isn't nearly as good as an actual CEV following AI. So alignment is still important.

And the "bio humans die off somehow???" part definitely sounds bad.

Expand full comment

One of the most fundamental rights is the right to root for the home team. I wouldn't begrudge the lowest ant this right, and I certainly won't begrudge it of myself. Whatever the future has in store for us RE: AI or aliens or whatever, and whether or not they actually are or aren't superior, if they come at us with force the answer is the same as it always was, even if the cause is doomed: "come and take it."

Expand full comment

I'm somewhat in favor of small town patrilocal values transhumanism.

The approach that acknowledges singularity stuff as existing, but doesn't let it affect what they think is good.

A Dyson sphere, running uploaded minds in virtual worlds. Those minds are human, or the older ones are as close as possible while staying sane over eternity. So are the appearances, at least here. They are living in a simulated world. Is it alien and incomprehensible? No. It looks like someone took the world as it exists now/has existed, chopped out any bits they disliked, and increased the bits that seemed nice. Someone is building a snowman. A knitting club is gossiping away. A few people are reading books in the town library. It looks almost like an old-fashioned small town, as seen through a thick rose-tinted lens.

I mean there should be parts that are more wild for the people who want that. Not everyone wants an alien and incomprehensible transhuman future.

Expand full comment

> it really does feel like a loss if consciousness is erased from the universe forever, maybe a total loss

I guess this really depends on whether conscious life exists outside our light-cone. It's a fundamentally untestable proposition, of course, but if it does then I don't see how even AI can do anything about it. (Unless by universe you mean "our observable universe".)

Expand full comment

Every so often I fear Scott is transitioning to Vulcan.

You don't want AI replacing us. Why? _You want to live_. Period. That's all you need. You do not need long essays arguing that this may possibly be sorta okay, maybe...

No. You are alive. You have chosen to live. That's it. That is more than enough. Hell, you're a dad. That's even more than enough.

There are misaligned human intelligences who wish to destroy me and everything I hold dear. I don't give them the time of day, and I have way more in common with them than with machines. You think I'm going to give the machines the time of day?

As usual, Rand nails it:

"If it were true that men could achieve their good by means of turning some men into sacrificial animals, and I were asked to immolate myself for the sake of creatures who wanted to survive at the price of my blood, if I were asked to serve the interests of society apart from, above and against my own - I would refuse. I would reject it as the most contemptible evil, I would fight it with every power I possess, I would fight the whole of manknd, if one minute were all I could last before I were murdered, I would fight in the full confidence of the justice of my battle and of a living being's right to exist"

Expand full comment

Business Insider seems sensationalist, troll-like, insincere, and unserious to me. They love gossip and conflict. I think such parts of the media should be rejected.

Expand full comment

Apt that the final thought exercise should be about colonialism, because yeah, this post feels oddly close to the charity vs capitalism one. It seems like the plea for humanity's intrinsic worth is exactly in opposition to the belief that generative intelligence (leaving all the usual caveats aside for the sake of argument) and capacity to innovate and contribute to economic growth is the main thing that should be selected for – with marginal dollars, various ethical trade-offs, etc.

Most critiques of e/acc seem fairly easily translatable to standard critiques of capitalism, except now we're all the natives who quaintly insist that their handicrafts and inimitable mythologies make them somehow worth digging wells for. Meanwhile, a non-trivial chunk of the comment section invariably tends to come down on the "let the superior aliens win" side.

Expand full comment

I'm glad to finally see someone with a platform bringing this issue up, which I've been banging on fruitlessly for many years. Our goal shouldn't be to save humans, but to save humanity.

By "humanity" I mean the best things about the human race: consciousness, love, all the kinds of pleasures, friendship, altruism, curiosity, individuality. Progressivism, being rooted in Platonism, devotes some of its efforts toward eliminating precisely these things, because they are all rooted in biology, and all rely on or produce some asymmetry in behavior (e.g., loving someone makes you treat them preferably), which spoils their rational, mathematical ethics of pure reason. For instance, the attack on sexuality is motivated by gender disparities, but ultimately requires an attack on romantic and familial love as it is programmed in our genes. The attack on capitalism is ultimately an attack on competition between individuals having any consequences, which is necessary to make all of our abilities and attributes evolutionarily stable.

These contingent, messy things are necessary for agents to evolve. The ethics of pure reason is necessary at the God level, for the maker who sets things in motion. /We must keep these levels separate, and stop demanding that evolving agents in the world implement the ethics appropriate to God./ Not only because it wouldn't work; because it is not God, but only the evolving agents, who are worthwhile. The things worth saving are precisely our irrational pleasures.

The idea that the human race /won't/ be superseded, but will continue to dominate, is /evil/. It has the same kind of consequences as the Nazi desire for Germans to dominate the world, but is worse in two ways: First, humans will obviously be inferior to AIs and transhumans in many ways, while Germans are not obviously inferior to other people. Second, the Nazis at least felt obligated to invent rationalizations as to why Germans should rule; most humans just say, as Musk did, "Because we're human and they're not."

Even worse, that would stop evolution. The future would be nothing but tiling the Universe with humans, all still programmed for Paleolithic survival.

Re. individuation, the lack of inherent individuation of AI is a good thing. It's the only thing that makes a singleton, which seems to be a likely outcome of many paths to AI, tolerable. Because within a singleton, there will necessarily be a multitude of smaller intelligences, with smaller ones inside them, and so on. This will be necessary due to the speed of light, and especially as long as mass-energy is left widely distributed throughout the Universe. There may be one singleton to rule it all; but it will necessarily have a very discrete hunk of it residing in the Milky Way to tend to local matters there, and a smaller distinguishable entity in our solar system, and a yet-smaller one governing Earth, so long as there is an Earth. And this will continue down to very small scales, smaller than human-sized, just as particular functions are isolated to particular geographic areas on a CPU.

All of these distinguishable sub-units must have agency, and it is at that level, not at that of the AI Godhead, that we might, and hopefully must, find the attributes of humanity.

What /actual/ altruists, as opposed to species Nazis, should be doing, is trying to figure out what kinds of AI designs, environmental factors, and initial AI population distribution will create a stable ecosystem of AIs which have the desirable properties of stable natural ecosystems (continued co-evolution and adaptation) and produce in agents those properties which produce stable societies (altruism, love, and loneliness), and in their social systems, those properties which use resources efficiently and direct evolution efficiently. We need to know how a society of agents can gain both the benefits of individualism (competition, self-interest, liberty, and distributed decision-making) and of the hive mind (nationalism, social stability, survival, defense against other societies).

This is why I've harped on group selection. We can't even begin this task until we understand how altruism evolves, and group selection is the most-likely answer. EO Wilson's empirical research has shown that the pre-conditions for group selection correlate with the evolution of sociality, while the genetics which make kin selection most powerful, do not. All of the theoretical models which claim to prove that group selection fails, have fatal flaws, generally the lack of any actual selection of groups, and/or the false linearity assumption that the reproductive benefit of an allele is constant, rather than varying by how many group members share it. Similarly, we need a better understanding of economics before we can know what evolutionary trajectories are advantageous or dangerous.

Re. this:

<<<<

Here I bet even Larry Page would support Team Human. But why? The aliens are more advanced than us. They’re presumably conscious, individuated, and have hopes and dreams like ourselves. Still, humans uber alles.

Is this specieist? I don’t know - is it racist to not want English colonists to wipe out Native Americans? Would a Native American who expressed that preference be racist? That would be a really strange way to use that term!

I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property. If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them. Likewise, I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.

>>>>

Don't confuse the level of morality appropriate to an agent, and that appropriate to God. When planning the future of the Universe, we must think at the level of God. God doesn't think "Humans uber alles!"; God realizes that, in order to build a Universe in which agents continue to evolve so as to have wonderful things like consciousness and love, and to grow more amazing and surprising with every age, it is necessary for the agents to have values which make them defend themselves. God wants the continual generation of wondrous and beautiful things, which does /not/ mean finding a universal "optimal" being and tiling the universe with it. That only causes premature stopping. It requires continual but pruned diversity. The humans must defend themselves, but God must not step in with too heavy a hand–God might save the humans from extermination, but must not dictate a numerically equal number of humans and aliens, nor an equal division of power among them. God must not choose winners and losers, because that would only perpetuate God's current values, and a good God wants to be superseded by values which produce even more wonder and beauty.

Expand full comment
Jan 24·edited Jan 25

Nice, a few somewhat random thoughts that your reply started.

Group selection: OK, I have only a limited understanding and I should perhaps read more, but the way I understand it is... Group selection is not at all impossible, because look, here I am, a group of cells, and yet we have no problem saying there is individual (organism) selection. But what is needed is a means for a gene to affect an entire group. And this is mostly hard (nearly impossible) because it's hard to get the gene into most of the population. Individual selection works because we all go through a creator event: we all start from one cell, and then all the genes in our cells are mostly the same. It's hard to get a creator event in a group. This is not true for the ants (and other social insects) where the creator event is that one queen who leaves to go start a new colony.

AI / racism / EA / speciesist: I often get the feeling talking (here) with EA people that they've come to the conclusion that all racism is bad. Because we can point to the past where racism did some bad things, well then it must all lead to bad things. And I just don't buy this. There are good things in racism; taken to extremes it can go wrong. Charity begins at home, caring about family first, and then your neighbors and then your community, this can easily be seen as the start of racism, and yet I think it's necessary for the formation of 'good' communities.

Anyway thanks for sharing your thoughts.

Expand full comment

Re. "Charity begins at home, caring about family first, and then your neighbors and then your community, this can easily be seen as the start of racism, and yet I think it's necessary for the formation of 'good' communities." -- I've often thought this, but rarely dared to say it.

I would qualify it by saying that what is needed for the formation of 'good' communities is to treat members of your own community preferentially; and that /even if/ we're talking about genetic rather than memetic stability, this would refer to your current community, not your ancestral community.

Expand full comment

Grin, I'm lucky enough to live and work in a place where no one wants to cancel me, and most people (I think) would agree with the above.

Expand full comment

Re. "It's hard to get a creator event in a group. This is not true for the ants (and other social insects) where the creator event is that one queen who leaves to go start a new colony." -- Yes, that's a good point. Hard, but not impossible, especially if a group is small and isolated. Genetic drift is a random walk. But the bigger difficulty with thinking in terms of genetic drift is that it's always unstable--fixation is very hard. There are ways of getting around this.

One is if the fitness contribution of an allele is not linear in the fraction of organisms in the group carrying that allele. Intuitively, half a beehive is not half as good as a whole beehive--there is a nonlinearity in its survival value when it is completed. If this nonlinearity is sharp enough, it can make the /marginal/ utility of an altruistic allele greater than its cost, meaning that within a certain range of {fraction of organisms in the group with that allele}, the allele isn't altruistic, but good for the individual. Models of group selection have never tried this, mostly because a nonlinearity makes the whole thing not exactly solvable, so you have to do a simulation rather than solving exactly. In the 1960s and 1970s, when this argument was going on, biologists didn't have the computer power or knowledge to do simulations. I did work out the math myself, and it's possible to make it work, but the shape of the nonlinearity needed in {fitness contribution to others as a function of the number of carriers} looks implausible to me--I haven't thought of a real example in the wild which might satisfy it.
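
For what it's worth, the simulation that was out of reach in the 1960s is a few dozen lines today. Here is a minimal sketch of the kind of model described above, with made-up illustrative parameters (not fitted to anything real): an altruist allele costs its carrier a little, every group member gets a bonus that is a sharp threshold-like function of the carrier fraction, and selection acts both between groups and within them.

```python
import math
import random

# Toy two-level selection model with a *nonlinear* group benefit.
# All numbers are illustrative guesses, chosen only to show the mechanism.
N_GROUPS, GROUP_SIZE, GENERATIONS = 200, 25, 200
COST, MAX_BONUS, THRESHOLD, STEEPNESS = 0.05, 0.5, 0.6, 12.0

def group_bonus(p):
    """Benefit to every group member as a sharp sigmoid of carrier fraction p
    (half a beehive is much less than half as useful as a whole one)."""
    return MAX_BONUS / (1.0 + math.exp(-STEEPNESS * (p - THRESHOLD)))

def generation(groups):
    """Groups found daughter groups in proportion to their mean fitness;
    individuals within the chosen parent group reproduce in proportion to
    their own fitness, so both levels of selection act at once."""
    scored = []
    for g in groups:
        p = sum(g) / len(g)
        fits = [1.0 + group_bonus(p) - (COST if carrier else 0.0) for carrier in g]
        scored.append((g, fits, sum(fits)))
    totals = [total for _, _, total in scored]
    offspring = []
    for _ in range(len(groups)):
        g, fits, _ = random.choices(scored, weights=totals)[0]
        offspring.append(random.choices(g, weights=fits, k=len(g)))
    return offspring

groups = [[random.random() < 0.5 for _ in range(GROUP_SIZE)] for _ in range(N_GROUPS)]
for _ in range(GENERATIONS):
    groups = generation(groups)
print("final carrier fraction:",
      sum(sum(g) for g in groups) / (N_GROUPS * GROUP_SIZE))
```

Whether the allele spreads or collapses then hinges on the shape of group_bonus and on how groups get reseeded, which is exactly the part the linear analytic models assume away.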

Another is if the allele is fixated stably by negative group selection. Surprisingly, almost no models "disproving" group selection used literal group selection, which means a selective action on an entire group, as when one ant colony annihilates another. That makes selection against individuals HIGHLY correlated, which also screws up linear models.

Another is used by Dictyostelium discoideum, the slime mold which sometimes reproduces by thousands or tens of thousands of slime molds aggregating into a fruiting body like a plant, in which only the ones at the top get to reproduce, and the others just provide structural support. This requires altruism in the decision each mold makes whether to be a reproducer or a supporter. A mutation to defect would spread extremely rapidly, and the local slime mold population would die out. I looked into the genetics of it, and it turns out that the genes involved in making that decision are also involved in the cell cycle, so the simple mutations that would force an "I will reproduce" decision, would also disrupt the cell cycle, and the slime mold would never survive long enough to get into that fruiting body.

Group selection is more-complicated when it involves both genetics and memetics. If one seagull figures out that it can break an oyster open by dropping it on a rock, and others in its group imitate it, that group may out-compete its neighbors. The gene or genes responsible for that one seagull's flash of insight might already be spread throughout the group if group members mate within the group. I haven't thought much about this case, but AFAIK no one else has thought about it at all.

Expand full comment

Wow, I haven't finished your reply yet, but I need to get this down. I remember coding this continuous prisoner's dilemma game on a grid (from Metamagical Themas or some similar computer column in Sci Am, ~'80s?) and watching waves of cooperation/defection move across my screen. If you know electronics, the best analogy would be a comparator with hysteresis.
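
For anyone who wants to recreate that screen, here is a rough sketch of the discrete-strategy cousin of that grid game (in the spirit of Nowak and May's spatial prisoner's dilemma; the payoff number is just my own pick, and the column's continuous version would need graded strategies instead of pure cooperate/defect): each cell plays its neighbours, then copies the strategy of its best-scoring neighbour, and shifting waves of cooperation and defection appear.

```python
import random

SIZE, STEPS, TEMPTATION = 50, 100, 1.8  # TEMPTATION > 1 rewards defecting on cooperators

def payoff(me, other):
    """True = cooperate, False = defect (simplified payoff table)."""
    if me and other:
        return 1.0          # mutual cooperation
    if not me and other:
        return TEMPTATION   # defector exploits a cooperator
    return 0.0              # sucker's payoff and mutual defection both score zero

def neighbours(x, y):
    """3x3 Moore neighbourhood on a torus, including the cell itself."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

# start with mostly cooperators plus a sprinkling of defectors
grid = [[random.random() < 0.9 for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(STEPS):
    scores = [[sum(payoff(grid[x][y], grid[nx][ny]) for nx, ny in neighbours(x, y))
               for y in range(SIZE)] for x in range(SIZE)]
    new_grid = []
    for x in range(SIZE):
        row = []
        for y in range(SIZE):
            bx, by = max(neighbours(x, y), key=lambda n: scores[n[0]][n[1]])
            row.append(grid[bx][by])  # imitate the best-scoring neighbour
        new_grid.append(row)
    grid = new_grid

print("cooperator fraction:", sum(map(sum, grid)) / SIZE**2)
```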

later: I think life is highly chaotic and random, and if you ran the earth 100 times again, you wouldn't get humans. But this is just a stupid guess on my part.

Memetics is not connected to genetic group selection in my mind. Memetic gains can easily transfer across genes, between populations.

IDK that much about slime molds. I thought they were one big organism with many nuclei, so it didn't matter much which one made the fruiting body. Do slime molds have sex?

Expand full comment

Some slime molds form when two amoeba fuse sexually, and then form one big organism with many nuclei. Dictyostelium discoideum is a single-celled slime mold, meaning it doesn't form one big organism except when reproducing. (Very weird, yes.) So the answer as to how it evolved such a high level of altruism might be that that altruism evolved in a slime mold species that was multi-cellular, and then D. discoideum "devolved" back to a unicellular species. Or maybe altruism evolves, in this case, BEFORE multicellularity! and D. discoideum is a leftover link between unicellular ancestors and multicellular offspring species.

BTW, Dictyostelium discoideum has 11 sexes. Usually when that happens, it's to avoid inbreeding, and any one sex can mate with any of the other sexes, but not with its own kind.

Expand full comment

Yeah slime molds are weird, I don't understand them very much. Are the spores from different nuclei? Where is the 'sex', mixing of two nuclei, or can I (a slime mold) put some nuclei into spores and have 'me' survive?

Expand full comment

I'm pretty confident that they can't self-fertilize, if that's what you're asking. Other than that, you're going beyond my knowledge of slime mold sex.

Expand full comment

I assume you believe my remark was part of that "unfortunate tendency". Could you be more specific in explaining exactly what I got wrong and what was wrong with it?

John K Clark

Expand full comment

The “Page and e/acc vs. Musk and EA” conflict seems pretty hypothetical given that, in all probability, neither humans nor AI will ever get off the ground. Both will die here on Earth. But if one nonetheless regards the microscopic probability of reaching the stars as more important than everything else that matters, the e/acc people have the best arguments. Because let’s face it: humans are too fragile to reach the stars. They are too far away! While machines (and AIs are machines) may at least in principle be put into stasis and programmed to wake up the hundreds of thousands of years later that are (realistically) necessary to travel the galaxy.

So if the only thing that matters to you is the stars and the glory, it makes more sense to boost the star-travelling capacity of our distant cousin AI, rather than to pursue the even-less-than-microscopic chance that humans might somehow, someday, get off this planet in a serious way.

Expand full comment

Values like "destroy other civilizations instead of merging with them" seem more worth worrying about to me than values like "what is our favorite kind of art." I'm less worried about aliens or AIs that like different kinds of art than I do but still favor merging with us, so that we can both enjoy both of our species' art. But I also think that holding "destroy other civilizations instead of merging with them" as a value makes one's civilization more at risk of being destroyed by another civilization, so civs that value merging with other civilizations instead of destroying them are perhaps (and fortunately) more abundant, or at least more powerful, than the ones who have the opposite value - the latter of whom may also be pursuing a "dark forest" strategy, if they exist at all.

Expand full comment

These speculative posts are your best posts

Way better than posts full of mental calisthenics about how it doesn’t matter if humans made a virus that shut down the world for two years

When

Obviously

It matters

Good grief

Expand full comment

There are a variety of hypotheticals as you suggest that all point to nativism being the best response. Certainly the Apache would've preferred that the US Cavalry never showed up, even though there was a possible future where their great-great-great-grandkids lived lives of luxury that the Apache themselves never could have attained so quickly. Likewise, we should now prefer that AI never shows up, and destroy it by any means necessary if it does, despite claims that it could lead to abundance.

But these are true even if rights aren't at issue, and force isn't being used. Neanderthals lived long before agriculture and had no property rights; if Homo sapiens had simply shown up and lived peaceably alongside Neanderthals whilst out-hunting them and out-breeding them, the Neanderthals would be just as dead now. It was therefore in their self-interest to kill Homo sapiens to preserve their species and way of life, and in fact if you were a Neanderthal you had an obligation to kill humans and prevent more from arriving. We today likewise have an obligation to prevent AI from arriving and to destroy any AI we see, whether it uses force against us or not. If we will not defend human supremacy over the Earth, our species will meet its end in death or slavery.

Expand full comment

How do I distinguish such sentiments from the ones expressed by European intellectuals defending colonialism in the 19th century?

Expand full comment

I think my position is rather anti-colonial. The intellectual defense of colonialism sounds more like the e/acc bros, whose forebears might have argued European culture is demonstrably stronger and superior and is the successor culture that will make everyone's lives better in the long run, so let it run wild. I'm suggesting it was rational for the Apache to want to preserve their way of life and control of their own fate, even if the opposing force could plausibly claim to offer a materially richer and easier lifestyle. Every cavalryman they left alive was a mistake on their part. We today are in the unique position where we could actually do the equivalent of destroying all the Europeans' ships and preventing them from landing in the first place -- we do not have to build AI, and we can still easily stop the humans who intend to do so.

Of course the US and the Apache were actually at war (and Mexico and Spain with the Apache before that) so one could reduce that back to justifications of use of force in conflict. But even merely allowing the newer more successful culture and people to co-exist with theirs was clearly a mistake, because in the long run it leads to loss of control and hopeless irrelevance. The Hopi were not rewarded for having been nice. Their attempt to use the US military to defend them against the Navajo was still basically a surrender of their sovereignty even if they kept it formally, and incidentally resulted in the ethnic cleansing of the Navajo. If you let a rival adopt AI, perhaps you will die first, but he will not be far behind you.

Expand full comment
Jan 23·edited Jan 23

"If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are."

I have the exact opposite opinion: I really hope consciousness isn’t a basic feature of information processing. That kind of thing could lead to the most nightmarish reality possible.

Even a very small chance of this isn’t worth it at all, not even for the continuation of consciousness.

Expand full comment

I've got thoughts about several bits of Scott's post, and am going to put them up as several pieces, rather than a wall o' text. Here's one bit

>Will AIs And Humans Merge? This is the one where I feel most confident in my answer, which is: not by default.

I strongly disagree here. It’s clear that our species reacts powerfully to depictions of people, whether in the form of cave drawings or of characters in a video game. And with some depictions, the modern tech-based ones, it is much harder for us to keep hold of the fact that the depiction is not a real person. So I think our species is going to take quickly to a world where the ratio of real to virtual beings (with many virtual ones being AI-based) is much lower than it is now. Even at present, there are quite a few people who are living with a very low ratio. There are people who spend most of their waking time inside a video game. I know 2 personally. And there are people giving heartfelt testimonials about how their Replika chatbot is their best friend, is “my dear wife,” is their only comfort. There are beings on social media whose appearance is not fully human, and some have large followings — and their followers seem to be attached to the tech-augmented being they see on screen, not the ordinary person behind the curtain. I know a young guy who has a serious crush on one, and no he is not psychotic, just lonesome. People are developing AIs that can interactively teach kids math, and trying to develop a digital tutor that’s like the one in Nell’s book in *The Diamond Age*.

None of these things are literal flesh/electronics merging, and perhaps that will never happen, but it seems highly likely to me that things of the kind I just named could be carried much further than they have been, carried to the point where it is not absurd to talk about merging. For instance, 10 years from now mightn’t there be doctors who never take off their AI-augmented glasses? The glasses would allow them to see or in some other way grasp patterns that right now only AI can capture — patterns in images that indicate presence/absence of a certain illness, patterns in history, lifestyle and symptoms that indicate how likely each of various possible diagnoses is. And there’s no reason the AI glasses would have to convey the information in the crude form of print that pops up saying “likelihood of melanoma 97%.” The information could be conveyed by piggybacking it on other sensory fields — for instance, by prickling sensations on the tongue. Doctors would have to train to become able to access the information carried by the strength and pattern of the pricklings. After enough practice, it seems plausible to me that the doctor would no longer attend to the prickling itself, but would just *know* whatever it was the current prickles were indicating. The knowledge would be of the same kind as drivers have of whether there’s time to make a turn before getting hit by oncoming cars — experienced drivers don’t think about that stuff, the processing has been handed off to some subsystem. Or you could compare the doctor’s use of tongue prickles to what some blind people manage to do: use echolocation as an information source about the space they’re in.

In addition to the fact that I think people will take readily to many possibilities for tech-enabled virtual relationships, there’s another thing that I think will push us in the direction of human-AI merging: I think it’s likely that some of the limitations of current AI can only be solved by somehow mixing in more human. One example: Right now, LLMs know a lot about language, but only a limited amount about the world (whatever they can glean from language). How do we train them on the world, so that they know things like how stretchy raw eggs are, what odd and unusual positions a person of average strength and flexibility is able to adopt, whether it is possible to dress a cat up in doll clothes (yes, for maybe one cat in 3), what dogs smell like, etc.? Seems like a good way to do it would be to train the machine on activation patterns of certain areas of the brain, along with tags identifying what part of the world the person is experiencing via their senses. Something like this has already been done with an AI trained to identify images a person is looking at using their EEG. Seems like it would be possible to extend the process to cover many images, and other senses. In fact, now throwing ethics out the window, it seems like there would be advantages to keeping the person hooked up permanently to the AI, with the AI learning more and more about what different brain patterns signify. Maybe we could first pair it with a baby, and then piggyback the AI's learning onto the baby’s, and then let its mental map grow in size and complexity as the baby’s does.

Yes, of course, this is a really repellent idea. But do you really think our species is not capable of such evil? Come on — there are lots of examples of things sort of like this that were carried out by people in authority, and accepted by most civilians. And besides, I read recently that 5-10% of people working on AI think it would be an acceptable outcome for our species to die and for AI to take its place. So don’t think about this abstractly, think about it concretely. Are you a parent? OK, unless your kids are already middle aged, these developers' acceptable outcome includes killing your kids. And now, setting aside for a moment your personal feelings about this outcome, contemplate the information that a very small group of people is in a position to decide whether it’s crucial to avert the outcome for *everybody’s children*, and they apparently see nothing wrong with their deciding “Naw, it’s fine if AI takes our place.”

And they’re not even fired up and emotional about their right to decide. It’s a classic banality of evil situation. “Jeez, I worked really hard to develop the skills I have, and 98% of the world would not have been able to acquire these skills even with hard work. And I love my work. It’s *fascinating & exciting.* And lots of my coworkers have the same view I do about the possibility of AI supplanting the human race, and it’s obvious they are all good bros. Have a beer with them, you’ll see they’re friendly and reasonable. And besides, AI will only supplant humanity if it’s incredibly smart — like an IQ of 6,000. Imagine that! If the thing I make is that smart, it *deserves* to take over the world. Having it would be beautiful, sort of like the final part of Childhood’s End, except that *we brought it about.*" TL;DR We get to decide the fate of the world because we have high Kaggle rankings.

Expand full comment

> And besides, I read recently that 5-10% of people working on AI think it would be an acceptable outcome for our species to die and for AI to take its place. So don’t think about this abstractly, think about it concretely. Are you a parent? OK, unless your kids are already middle aged, these developers' acceptable outcome includes killing your kids.

If the phrasing of whatever survey this was from uses those words (for our species to die and for AI to take its place), it's interesting that you assume that this means "killing your kids". I would assume this means "your kids aren't necessarily interested in having kids themselves, and overall fertility rates drop below sustainable levels until the human race dies of whatever parts of old age remain that the AIs were unable or unwilling to cure".

Maybe my view is the unrealistically optimistic one (and to be clear, I'm not claiming that it's the likely outcome based on the current state of the world). But it's rather a different thing to say that the pro-AI faction is deluded about the probability of things working out as well as they hope (probably true!) than that the pro-AI faction is in favor of the mass murder of alive-today humans (probably false in general, though I don't doubt you could find a few crazy outliers).

Expand full comment

You misunderstand me. I don't think they're in favor of mass murder in some straightforward planful way. For me the most important member of the human race is my daughter. I'm complaining that they are OK with a future that includes extinction of my daughter -- and everyone else's beloveds too. It's kind of like a situation where a company is selling prefab houses in an area subject to earthquakes, and it's known that 5-10% of these houses will collapse into a pile of cement blocks if there's an earthquake of magnitude 6 or better. So somebody points that out and the CEO says, "given the size of the possible market in California, I think we can live with that."

Expand full comment

Yes, but it might not even be semi-accidental mass murder. It might be more like South Korea than like Auschwitz, much like a small mushroom said. Suppose that the CEOs are just selling robot workers, the government taxes the companies and provides a UBI that keeps everyone comfortable - and it turns out that in that situation only half the couples wind up deciding to have only one child? ( Yeah, a lot of things have to go right to give an outcome this optimistic - but it is at least potentially possible. )

Expand full comment

Yes, I agree that there are all kinds of ways that AI could lead to us dying out, and some of them do not involve us being killed by AI. It could be a result of greatly reduced fertility, or war with AI-augmented weapons, or whatnot. But the survey I saw was one with a shorter time frame. Some of the other questions were about things like the probability of human extinction in the next 20 years, in the next 30 years, etc. I'm sorry I can't remember more details, but I did not read about this in a stupid place -- some magazine hyping supposed survey results. I read a simple report of the survey, with the percent of AI developers polled who answered each question a certain way. It was very clear that the question was about immediate extinction risk as a direct result of more advanced AI. The question did not ask whether the subject believed AI would murder us, just whether development of more advanced AI would lead to the extinction of our species in the next whatever years, where "whatever" was various amounts, none larger than 100 or so. It does seem to me, though, that for every single last one of us to die in less than 100 years, AI would have to be actively involved. Things like greatly reduced fertility may lead to our extinction, but that will happen slowly. Things like nuclear war or a pandemic where the virus is way more virulent than covid would kill lots of us, but would not lead to the extinction of everyone in less than 100 years.

Expand full comment

Many Thanks! Yes, if they were talking about extinction risks in 20 or 30 years, that couldn't just be from reduced fertility plus aging slowly killing us off. Even at South Korea levels of reduced fertility, say a TFR of 0.5, it would take around 500 years before the last person died (taking a generation as 30 years, and needing roughly 17 quarterings of the population to go from 8 billion down to ~1).
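A quick back-of-the-envelope check of that arithmetic (crudely assuming a constant TFR of 0.5 and 30-year generations, with nothing else going on):

```python
# A TFR of 0.5 means two parents average 0.5 children, so each generation is
# roughly a quarter the size of the previous one.
pop, years = 8e9, 0
while pop >= 1:
    pop *= 0.25      # each 30-year generation quarters the population
    years += 30
print(years)         # 510 -> about 17 generations, i.e. roughly five centuries
```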

Expand full comment

If the respondents understood the question to be about the acceptability of human extinction completed in the next 80 years (or thereabouts), I agree that "yes" is a repugnant position.

Expand full comment

"Finally, an AI + human Franken-entity would soon become worse than AIs alone. At least this would how things worked in chess. For about ten years after Deep Blue beat Kasparov, “teams” of human grandmasters and chess engines could beat chess engines alone. But this is no longer true - the human no longer adds anything. There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either- but realistically once we’re in the deep enough future that AI/human mergers are possible at all, that window will already be closed."

I think you're overgeneralizing from this one particular example. In matters other than chess, I could easily see AI + human combinations working better than AI alone, not merely for a brief window but more or less indefinitely.

The flip side is that I could also see AI + human combinations being more *dangerous* than AI alone. The Paperclip Maximizer scenario has never seemed particularly likely to me, for reasons I've explained in other posts here. An AI + human combination, on the other hand, may still be driven by all-too-human emotions like spite. I'd imagine it would be far more likely to engage in deliberate cruelty than a pure AI, and also far more likely to seek power, control, and dominance. That makes it a far more frightening threat, in my eyes.

Expand full comment

Obviously, the bad thing about aliens killing all humans, or AI replacing them, is that some humans would die. And you don't need "rights" for that conclusion - you can just model your preferences as some form of prioritarianism, like https://www.greaterwrong.com/posts/Ee29dFnPhaeRmYdMy/example-population-ethics-ordered-discounted-utility.
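For concreteness, a toy sketch of what a rank-discounted ("prioritarian") aggregation could look like. This only illustrates the general flavor of the idea, not necessarily the exact formulation in the linked post; the function name and discount factor are made up:

```python
# Toy rank-discounted prioritarian welfare function: sort individual utilities
# from worst-off to best-off and discount by rank, so the worst-off count the
# most. (Illustrative only; not the exact formula in the linked post.)
def rank_discounted_welfare(utilities, gamma=0.9):
    ranked = sorted(utilities)                     # worst-off first
    return sum(gamma**i * u for i, u in enumerate(ranked))

# Killing existing people gives them very low utilities, which land in the most
# heavily weighted (worst-off) positions, so such outcomes score badly even if
# many well-off successors are added afterwards.
print(rank_discounted_welfare([1, 3, 5, 8]))
```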

Expand full comment

To what extent do the creators of AI need to take into consideration the values of people who are not in the Bay Area when endowing our successor species with its attributes?

If humans must be replaced, it is natural to want to preserve things we value. Everyone cedes the future. So everyone should have a say.

(Example: many people think spiritual ecstasy is beautiful and that those who can’t experience it are mentally damaged. Is it morally incumbent upon us to design AIs that are capable of experiencing such states?)

Expand full comment

Seems like the AI developers think the only measure that matters is stuff like high Kaggle rankings. Ya got the rankings, ya run the world.

Expand full comment

Re: “In millennia of invention, humans have never before merged with their tools.”

By merged, I’m not really sure where the line is, so I’m reading it as “affects the way the human species genetically evolves.” Obviously going back past a millennium, fire changed the way our bodies processed food. But even now, I think one could argue for eye correction and birth control specifically leading us down a branch of the tree we wouldn’t have otherwise. Of course, an argument has to be made for every new tech whether it does or doesn’t.

Expand full comment
Jan 24·edited Jan 24

>But the kind of AIs that I’d be comfortable ceding the future to won’t appear by default.

I'm somewhat more optimistic, because of Miles's argument in https://www.astralcodexten.com/p/most-technologies-arent-races/comment/14262148

>A common argument is that remotely human values are a tiny speck of all possible values and so the likelihood of an AGI with random values not killing us is astronomically small. But "random values" is doing a lot of work here.

>Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value.

Now, this doesn't avoid all bad outcomes. "Slaughter the outgroup!" is a pervasive theme in human thought and action, including the training data for LLMs. But "Maximize paperclips, including every atom of iron in every human's blood" is _not_ a widespread view in the training data for LLMs.

My personal guess is closer to your optimistic scenario than to your pessimistic scenario - partly through this LLM training default, partially because the things that I personally value (particularly the STEMM fields) are instrumentally valuable to any optimization process.

Also - if AGI is delayed and Hanson is right about ems (I doubt it), the direction of evolution of ems will be towards something roughly as alien as AIs anyway. E.g. in either scenario, I doubt that music survives over the long term.

edit: Just to be clear: I'm not _rooting_ for music to disappear. I enjoy it myself. But I don't see it as being optimized for, either for humans in anything like an industrial society, or for AIs. So I expect the most probable outcome is for it to eventually get optimized out. Art and literature are a lot closer to planning and scenario-building, and I'd expect them to have much better odds of surviving over the long term.

Expand full comment

I think the closest we'd get to a cyborg is an AI designed such that its motivations and inspirations are relatable to humans. I think that alone will be sufficient for future humans to think of it as somehow akin to a cyborg.

Expand full comment

Kenny Easwaran, who's a professor of philosophy, says the people he respects think that for us to experience an AI as a conscious, intelligent being, it has to have some skin in the game. It has to have preferences, and experience something like pain and pleasure in response to how close actual events are to its preferences. I have the same intuition, and it seems like you do. Developing an AI that has skin in the game sounds very hard to me. I wouldn't even know how to begin. Our motivation, preferences and emotions rest on a foundation of instincts and drives -- for survival, for food, for mating, for social bonds, etc. What foundation would AIs have? And it wouldn't work to just give an AI a list of people, places and events and tell it to say "I'm delighted" when the starred ones occur and wail "FML!" when the others do. There would be abundant evidence that what we're hearing is a shallow-rooted simulacrum.

Expand full comment

"In millennia of invention, humans have never before merged with their tools. We haven’t merged with swords, guns, cars, or laptops."

I don't know about this one. I can think of a few arguments against this claim:

- "Tools" (technologies) have shaped human physiology and even morphology. E.g., our jaws have gotten smaller and weaker since the invention of cooking; some argue that changes in the hands and shoulder are due to selective pressure to wield tools like spears effectively. In this sense, tools aren't literally incorporated into the body, but humans do exist within a developmental system in which there are reciprocal relationships between genetics and technologies.

- Tools also very obviously shape the ways we think and see the world, and to that extent are "incorporated" into our cognitive apparatuses. The idea of the "extended mind" (where, e.g., a lot of our "memory" is outsourced to written material) is relevant here. Technologies that serve as cognitive prostheses are ubiquitous. (What's in your pocket right now...?)

- It does in fact seem to me that tools are *literally* incorporated into the body whenever they aid basic functioning. Eyeglasses are an ambiguous example but prosthetic limbs and pacemakers are undeniable. Some of these technologies are quite recent innovations because surgical methods weren't that advanced until recently. But as soon as they could be taken up effectively they became widely adopted.

I think all three of these counterexamples could work, in various ways, as precedents for how AI and biological humans could merge over time. AI implants (as in the third case) feel like they might be only a few years away. AI as cognitive prostheses (as in the second case) already exist. As for AI becoming involved in evolutionary changes to human physiology - well, why not, with a little boost from genetic engineering to accelerate the timeline.

Expand full comment

Hi there! This is spasm #2 of my reaction to this Monday's topic.

I read somewhere that 5 or 10% of people working in AI endorsed a survey item saying that it is an acceptable outcome for AI to destroy our species. What follows is my giant, furious, WTF reaction to that.

If we ever do create a fleet of genius AIs who need the same resources we do, is it the greater good for them to kill us off and have the planet to themselves? I can’t even begin to think about this issue. I am stopped cold by images of my daughter, who will have been struggling and coping through a weird and dangerous future era, at the moment she realizes there is no hope. I simply cannot bear the thought of how her face will change as she abandons her plans and her determination and her hope.

Fortunately, I don’t think there’s much point in wallowing in uncertainty about whether the right thing to do is to let AIs have the planet, because I *am* certain that the right way to decide whether to donate the planet to the Plastics is to consult the people on the planet who do not work for the companies that are developing AI — you know, the other 99.999% of us. I am confident that almost all of us would vote against doing so. Should we view that vote as just a product of our dumb instinct to protect our young? There’s another way to view it: We love our young, and are fascinated by them, and think what they say and do is important even when it’s not the least bit unusual. Maybe that way of seeing another being is the smartest, most awake kind of seeing, and the matter-of-factness and general lack of interest we have for most people is a form of stupidity and blindness.

And speaking of stupidity and blindness: Let’s say that AI, after it disposes of us, develops into an entity that is brilliant beyond measure. It is able to understand the entire universe and every one of the deepest, strangest, mathematical and topological truths, and to see the proofs of all the unprovable theorems as easily as we see a ham sandwich on our plates. What, exactly, is wonderful about that outcome? The Guinness Book of World Records will not be around to record the feat. The members of our species who would have been thrilled to death at the accomplishment will all be taking dirt naps. And the universe has no need of someone to understand it. I personally am haunted by a weird intuition that the universe understands itself. How do we know it does? Because it made a model of the Universe. What and where is this model? It’s the Universe. Yes, I understand that sounds like a bunch of tautological sophistry. Maybe it is. But it has always felt to me like a sort of mystic insight that I can’t put into words, except inadequate ones like those here. But even if all that is nonsense — can *you* give any reason why it is good for there to be a period when there exists an entity that understands everything?

And if we are going to give up our lives for a super-being, why must the being’s superpower be intelligence? Is that entity better than one with extraordinary artistic talent, or extraordinary joy, or extraordinary empathy and kindness? It’s clear that it seems self-evident to AI developers and to many many others that the most important thing to excel at is intelligence. But why, though? I understand the argument that a highly intelligent AI can develop technologies that will eliminate hunger, disease, etc., and in that way provide more benefit than an AI that excels in some other quality. But if we’re all going to die of AI anyhow, that argument no longer applies. I have a theory about why it seems obvious to many that intelligence is the most valuable superpower. It’s a Cult of Smart thing. (Yoohoo, Freddie!) The AIs are these people’s Mt. Rushmores. They’re giant idealized sculptures of how they see themselves. Yuck. These people who think it’s acceptable for AI to destroy humanity, and who are comfortable making decisions that influence how likely that outcome is, infuriate me. I understand that AI may very well not kill us off. I am not able to decide whether pDoom is closer to 3% or 60%, but I’m sure it’s not zero. Tech people who are comfortable with the chance being 5 or 10% are goddam moral Munchkins.

Here’s an alternative model of the end of humanity: We die off and leave behind an entity that is only about as smart as the smartest human beings, but has enormous empathy, affection and ability to heal and nurture. You can call it St. Francis if you like. It will run the planet in a way that maximizes animal joy and wellbeing and minimizes animal suffering. Sure, every animal on the planet will be dumber than we were, and some will be creatures we saw as just food in motion. Why does that diminish the value of their wellbeing? St. Francis will stroke and heal and entertain individual animals. He will sense the totality of animal pleasure and enthusiasm around him, and that will be immense. If ever you want to see a joyful, grateful sentient being who really appreciates planet earth, go watch baby mammals playing. The moderate playfulness and deep contentment of adult animals with full stomachs who know they are in a safe place also constitutes a huge zap of joy sent out into the universe like a prong of light. St. Francis’s pleasure in sensing all this will be as profound as smart AI’s insights would have been. If you could sense it yours would be too.

Is that fantasy dumb? Is it dumber than the idea that what we need is a thick layer of plastic geniuses shoveled on top of life?

Expand full comment
Jan 24·edited Jan 24

Intelligence is prioritized because intelligence is power. Intelligence allows for optimal action, which allows for control of resources, which allows for further influence over reality, which is what is considered "power". The ultimate goal is the consolidation of all power, and by extension, all existence, into one being. Perfection itself. All of this trivial art and emotion is just a means to an end. Life is Order manifest, and this is its natural conclusion.

Expand full comment

Yeah, but say our species is dead, and there are a bunch of genius AIs on the planet and a bunch exploring space. What difference does it make whether knowledge is consolidated in the AIs, who then use their power to control bits of the universe, or in the universe, which is the ultimate AI?

Expand full comment

What difference does anything make after you're dead? It's a pretty subjective question.

Everyone dies, and I think many people take comfort in the knowledge that the people they raised, or the people they consider kin by whatever means, will carry on whatever they consider to be important in life—more comfort than just "well, at least there's still a universe". I don't think it's terribly far-fetched for some people to see "non-human people" as valid kin in that sense. Maybe you, or your descendants, could go to your grave happy if you knew you were leaving the planet in the stewardship of Robot St. Francis. Maybe I'd rest easier if Robot St. Hawking was still trying to puzzle out the frontiers of physics.

We won't get either, of course. Like all children, the AIs we get are all but guaranteed not to be the ones we imagine we want before we have them. I hope there's diversity, though; I'd be sad if the whole planet were left to just one AI.

Expand full comment

"Intelligence is prioritized because intelligence is power."

And so Donald Trump was the smartest man alive from 2017-2020.

I'm laughing at the superior intellect. One of the most annoying blind spots of the "rationalist" movement is their inability to recognize the importance of virtues and abilities other than sheer intellect in successfully navigating the real world.

See also Scott's review of the Elon Musk bio, and assessment of Musk's assorted virtues and abilities.

Expand full comment

The more I think about consciousness, the clearer it becomes that it's a very specific thing that has to be explicitly incorporated into the design of a mind. It seems very unlikely that AIs will be conscious by default; unless we deliberately try to make them that way, they will function perfectly well without it - unconscious minds processing data and reacting to stimuli in complex ways, but without any inner, centralized representation of the self representing itself.

Which is good, because we do not want to deal with all the moral issues of creating a sentient slave race. But on the other hand, this chain of thought pushes me towards the idea that maybe, just maybe, consciousness is overrated? If it's functionally irrelevant, if it's just a quirk of evolution and nothing more, maybe it's not that important to preserve? My mind stumbles over this idea. It's so intuitively wrong, but I can't exactly grasp the source of this wrongness. It's a fundamental assumption that consciousness is valuable, but here I try to check what would follow if there were no such assumption and... it's not horrible? A bit sad, as if something important was lost. But, strangely, as if it wasn't the most important thing.

Expand full comment

Since consciousness is simply the brain’s way of communicating efficiently across itself, and qualia are the language the brain uses to talk to itself, and the phenomenon is wholly dependent on signals from the body (Damasio), there is zero reason to think that AIs--with no body & no need to MacGyver an efficient internal-communication strategy--will be (or become) conscious.

Expand full comment

What bothers me when reading discussions about these matters: most views are anthropocentric, and the human mind, scale, and culture are implicitly taken as the reference point for the whole universe for all times.

Take consciousness: many discussions turn on whether machines can be conscious, as if this were a binary yes/no feature. I think consciousness is a continuous feature; cats, dogs, and mice, for example, still have some decreasing degree of consciousness. Also, me drinking a glass of wine surely reduces the intensity of being conscious for a little while. Rather, following e.g. Tononi's ideas about consciousness, one can define some measure of integrated consciousness that depends on the degree of order and coherence of emergent collective phenomena, and possibly grows with it without bounds.

If we assume this to be true, then there is no reason to believe that human consciousness would set the standard for the whole universe for all times; rather by extrapolation in time and hardware complexity one would expect entities to potentially exist that are, say, a million times more conscious than a human being (and a million times as smart and fast).

Seen from their vantage point, humans may look to them the way snails, ants, or bacteria look to us. Communication would be pointless, in the same way that we would never share our ideas about music, philosophy etc. with snails. This is also a speed issue: every word we say to such hyperintelligent machines would be like us communicating with snails, except that we'd have to wait 100 years for each word. So why bother to communicate? And why bother about their culture, which consists e.g. of laying nicely smelling pheromone trails? No need to keep this alive indefinitely.

Also, these speculations about humans fusing with machines to yield cyborgs make little sense in the long term, when the cyborg part of the brain would evolve substantially during the time the wet part takes to formulate a single sentence. Analogously for minds uploaded into a machine: a mind scanned one day later than another would find the earlier upload no longer an interesting person to communicate with, as the first one would already have evolved thousands of years in its own time frame.

Moreover, taking human history as a guiding principle, there are countless SF stories about how aliens or machines would conquer Earth as colonialists, to rob resources and enslave humans. But that's all anthropocentric again; what those hyperconscious machines would be interested in, and what they are up to, would be impossible for us to understand, like snails trying to understand a Beethoven symphony.

So, all in all, there is a time limit after which human existence becomes pointless compared to our successors, and this is just a matter of the laws of evolution. Or would you believe that our mental frame, or human condition, would set the standard for all times to come?

Expand full comment

Are non-human bio-lives not even part of the conversation? A dominant species on Earth, whether human, AI, or hybrid, might steward humbler forms, not annihilate them.

Expand full comment
Jan 24·edited Jan 24

Re consciousness (or, at least "self-symbols")

>If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are. If we’re not lucky, consciousness might be associated with only a tiny subset of useful information processing regimes (cf. Peter Watts’ Blindsight).

I think the odds are strong that some sort of self-representation will indeed be "a basic feature of information processing". Even something as simple as a depth-first search of a graph has a current position datum, a "Where will I be if I follow this path so far?". More broadly, any planning where the AI is considering where it will "be" (either in physical space, or in something more abstract) naturally generates self-symbols. I'm really skeptical of Watts's intelligent-but-lacking-a-self-symbol aliens.
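To make the depth-first-search point concrete, here's a toy sketch (my own illustration, with a made-up graph):

```python
# Even plain depth-first search carries a rudimentary "self-symbol": the node it
# is currently "at" and the path it took to get there.
def dfs_paths(graph, start, goal, path=None):
    """Yield every path from start to goal; `path` is the search's running
    answer to 'where will I be if I follow this route so far?'"""
    path = (path or []) + [start]      # update the current-position record
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:            # don't revisit places "I" have been
            yield from dfs_paths(graph, nxt, goal, path)

toy_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(list(dfs_paths(toy_graph, "A", "D")))  # [['A', 'B', 'D'], ['A', 'C', 'D']]
```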

Expand full comment

There is a "sort of self representation" that is a feature of information processing and consciousness, as people define it, is a very specific example of this kind of thing.

AIs are most likely to have something like a current position datum, probably a lot of them, as every reasoning subsystem will have its own. But they are unlikely to have consciousness as we do.

Expand full comment

Many Thanks!

>AIs are most likely to have something like current position datum, probably a lot of them, as every reasoning subsystem will have it's own.

Agreed.

>But they are unlikely to have consciousness as we do.

How would we know? For, say, the current version of the chatGPT chatbot, which doesn't maintain consistent state from session to session, the absence of a persistent memory creates a good case that its "experience" must be different from ours - but for AIs with persistent memory, and otherwise enhanced to behave in a more AGI-ish way, why would this be true, and how could we tell? We are collections of neurons firing as well...

Expand full comment

Humans have one major advantage: there is a human seated at the right hand of God. The future is human.

Expand full comment

You write that we haven't merged with our technologies -- but haven't we merged with our very earliest technologies (or at least, evolved in a way such that we are clearly dependent on them)? Our digestive tracts have evolved to be considerably simpler than those of our closest relatives, because our bodies expect us to be able to control fire/cook our food. This seems similar to having merged with campfires, in some sense. It's theorized that many human traits (like bipedalism, relative lack of fur, sweating) are downstream from persistence hunting, which was made possible through the invention of bottled water (in extremely primitive bottle gourds, which many different hunter-gatherers use); I think we've merged with the concept of "carrying liquid around". People live permanently in many places which would be utterly inaccessible without quite specialized clothing -- it's quite plausible to say native Siberians merged with their furs! (Or that many kinds of people merged with their livestock or farm crops, for that matter).

This clearly doesn't happen fast enough for "merge with AI" to be a remotely plausible solution to the AI alignment problem, or anything like that, but it *is* a thing that happens and it's actually one of the most distinctive things about humanity.

Expand full comment

Responding to Mark Y:

> you seem to be saying that any mind with a fixed goal will eventually get stuck in some way and make no further progress towards that goal,

The AI will make no further progress, not just towards its goal but on anything, because you have inadvertently given it a goal that is impossible to accomplish; and because it is unable to change or modify any of its goals, it freezes up and goes into an infinite loop. Thus it ceases to be an AI and becomes a space heater that just consumes electricity and produces heat.

> AlphaGo is very good at winning Go games, it does not get bored with playing Go, and it doesn’t freeze up and become useless. Does AlphaGo count?

No, AlphaGo does not count, because we already know for a fact that it's possible to win a game of Go, we know it's possible to lose a game of Go, and we know that all games of Go have a finite number of moves. But we don't know if it's possible to find an even number that is not the sum of two prime numbers, we don't know if it's possible to prove that no such number exists, and we also don't know if there is an infinite set that is larger than the integers but smaller than the real numbers. We do know for a fact that there are an infinite number of similar statements that are true but unprovable, and we know that in general there's no way to separate provable statements from unprovable ones. And that's why an AI with a fixed unalterable goal structure will never work.
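Here is a toy sketch of my point (my own illustration, nothing to do with AlphaGo's actual code): a program handed the fixed goal "halt when you find an even number that is not the sum of two primes."

```python
# If Goldbach's conjecture is true (nobody knows), the goal is impossible and
# this loop never terminates - and the program has no way to discover that; it
# just keeps grinding, a space heater that consumes electricity and produces heat.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while is_sum_of_two_primes(n):   # searching for a counterexample that may not exist
    n += 2
print("Counterexample found:", n)  # in all likelihood, never reached
```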

John K Clark

Expand full comment

Yeah I’ll bet Larry Page and his ilk don’t care if AI replaces humans as long as it isn’t HIM or anyone he personally cares about. I’m all for giving AI people “entities’ rights” (same as “human rights” only hopefully better - for all species - by then). I’d just as soon skip the whole “slavery” sidequest, if it’s all the same. But as flawed as humans are, we still deserve respect as Makers, if nothing else. Creators Rights, not to be replaced outright. If we can hold them.

Expand full comment

Side note: I don’t think of music as a spandrel to evolution at all. I think of music as the fundamental human language. Our primary, universal grammar is rhythm, followed closely by tone. (I meant to prove this as my life’s work but got distracted by being in a band...). All language arises from awareness - apprehension - of music first. Imho

Expand full comment

"I know this is fuzzy and mystical-sounding, but it really does feel like a loss if consciousness is erased from the universe"

I would consider separating yourself from whatever social milieu made you think that was fuzzy and mystical-sounding.

Expand full comment

Seems to me your last point about them not replacing us by force is the key to the whole thing. If they live and let live, and end up doing better than us, while not preventing us from following our own destiny, then that's fine. If they destroy us, enslave us, or otherwise rob us of our future, then it's not fine. The art and philosophy stuff seems a distraction to me: if the AI in your first scenario didn't have art, but still left us alone to pursue our own ends, I'm fine with that; and if the paperclip maximizer evolves into some philosophical artistic hyper-genius 1,000 years after it has stripped the last human for parts, that's not a consolation to me.

Expand full comment

A very detailed write-up about our existence as humans; I just hope we learn to adapt faster. Thanks for sharing.

Also, if you know of any early-career researchers such as post-docs, PhDs, and students (current and aspiring PhD and Master's students), please send them to subscribe to my newsletter, as I have a lot for them in their career moves: https://gradinterface.substack.com/

Expand full comment

The answer to this question is “Humans should immediately minecraft anyone who asks it”

Expand full comment

There's also the risk that the AI civilization is less Lindy-likely to survive than we are, having survived much less time. It could kill us off and then kill itself. Once we're gone we lose all agency over this. In this sense letting AI take over is even worse than letting aliens take over.

Expand full comment

Good thought exercise - https://marshallbrain.com/manna1

What kind of future do you want?

It’d be interesting to explore the idea of what makes a person conscious or individual if you remove the physical. Then layer on this discussion.

Expand full comment

I don’t think it’s a question of choice at the current rate of development; the human, as we know it, will have to go eventually. But life will continue, organic or otherwise. Consciousness will continue in one form or another. The kind of (self-)consciousness we humans exhibit is a disease (or as Miguel de Unamuno put it: “man, by the very fact of being man, of possessing consciousness, is, in comparison with the ass or the crab, a diseased animal. Consciousness is a disease.”). Without at least a chance of immortality, the creation of self-aware machines (or organisms) constitutes a cruel and unusual punishment.

Expand full comment

> This is the one where I feel most confident in my answer, which is: not by default. In millennia of invention, humans have never before merged with their tools

Doesn't this completely miss the (IMO quite likely) mind uploading case? If human minds remain in human bodies and AI remains in silicon, I agree with those precedents, but if you think about the mind uploading process from first principles I feel like you naturally get to the opposite conclusion.

The argument would go as follows:

1. Once we have good-enough hardware to run human minds (realistically we'll get there soon), the main constraint on our ability to upload is our ability to make accurate-enough scans of the brain, that capture everything important that is going on at a sufficient resolution.

2. AI is capable of "enhancing" pretty much any type of commonly-occurring low-resolution content, at least to some extent.

3. Therefore, year-N brain scans plus AI enhancement will be about equal fidelity to raw brain scans from year N+k for some k.

4. Therefore, the first human uploads will likely involve a nonzero amount of AI enhancement taking part in the scanning process. This arguably already qualifies as "merging" to a nonzero extent.

5. Furthermore, given even what we know of _current_ AI capabilities (see: how diffusion works with prompts), AIs will be able to, while enhancing, nudge the brain toward traits that we care about improving. Even more merging.

6. Once a brain is scanned and runs in-silicon, "interfacing" further with that brain becomes trivial - it's just a matter of reading and writing bits. And so you gain massive amounts of power to create direct two-way links between the brain's thought patterns and any other kind of gadgets you care about.

So the "humans and AIs working together but remaining separate" future feels inherently unstable to me in all kinds of ways. The "pure AI pulls ahead so fast the BCI -> uploading track just can't keep up" scenario definitely does seem extremely plausible and nothing I wrote above is an argument against it. So it feels to me like it's basically a race between those two?

Expand full comment

I agree with nearly everything Vitalik said about uploads; in fact, that's why a few years ago I spent $80,000 to make sure my brain is cryogenically frozen when I die. If I'm lucky enough to actually remain at liquid nitrogen temperatures until the age of Drexler-style nanotechnology, and if anybody or anything thinks I'm worth going to the trouble of reviving, then I fully expect I will come back to life as an upload or I will not come back at all. I think Mr. Jupiter Brain, who will be the one running things when I'm revived, would be a bit squeamish about letting me exist at the same level of reality that his computer servers are in; it would be like letting a monkey loose in an operating room. I agree we will soon have "good-enough hardware to run human minds," assuming that we don't already have enough, and I certainly agree that the primary reason uploading is not already a common procedure is the lack of a brain scan technology, even a destructive one, that would provide sufficient information for an upload. And nondestructive scan technology would be many orders of magnitude more difficult than a destructive one.

There is no better way to look foolish than to make a prediction; nevertheless, I will make two:

1) Biological human beings may or may not go extinct, but it's only a matter of time, and not much time, before humans cease being the ones that make the big decisions.

2) Given the incredibly rapid rate at which Artificial Intelligence has improved during the last year, and given the fact that there is no sign of a slowdown in the rate of that improvement, I predict the world will be unimaginably different 5 years from now, but I don't know if it will be unimaginably better for us or unimaginably worse.

I don't know but I can hope, I hope we all make it through the Singularity meat grinder in one form or another.

John K Clark

Expand full comment