610 Comments

"Musk's biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk."

Expand full comment

It seems to me that many of our intuitions about fighting back against colonizers/genocides etc are justice intuitions that kinda rest on an implicit assumption of repeatability, or of making credible pre-action commitments.

In other words it's good to be the kind of creature who can credibly pre-commit to resist the invasion -- but that very fact should make us suspicious of trusting our intuitions about what's right once that commitment fails to dissuade the action, especially when it's truly a one-off event (not an alien race who will destroy others if we don't stop them).

Expand full comment

Seems to me questions about art and individuation should be irrelevant. If these other creatures experience overall net joy, why judge them because it doesn't come from things we would recognize as art? That seems like just another way of treating the person who enjoys reading genre novels worse than the person who reads great literature.

It shouldn't matter where the joy comes from. The hard part is figuring out how much joy a single complex mind feels compared to many simpler ones.

Expand full comment

I unironically and sincerely favor the Alex Jonesian view on this subject, at least the view expressed in this (hilarious and quite apt) clip. Total human supremacy, to the stars!

https://x.com/jgalttweets/status/1687992096068608000

Expand full comment

Here's an optimistic AI future: we build AGI and it soon paperclips us. But! As it keeps expanding, it will probably stay constrained by the speed of light barrier. So, the centralized block of computronium will start facing signal propagation latency problems. Then, because of generic properties of abstract systems, it's likely that the AI will gradually hierarchically "decentralize" and become a system of individual "agents". At that point, we can extrapolate from evopsych and guess that this society of AIs will develop human-like features like empathy, compassion, cooperation etc -- because those features are in some way "naturally emerging". So, in the end, there'll be a society of agents whose values are at least somewhat recognizable by humans as their own -- they'll be our descendants. The future is bright.
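
(For a rough sense of the latency premise, here's a back-of-the-envelope sketch -- plain Python, the standard light-speed constant, and arbitrary example radii -- of the one-way signal delay across a single centralized blob at various scales:)

```python
# Back-of-the-envelope one-way signal delay across a centralized "computronium" blob.
# Illustrative only: the radii below are arbitrary example scales.
C = 299_792_458  # speed of light in vacuum, m/s

radii_m = {
    "Earth diameter (~12,742 km)": 1.2742e7,
    "Earth-Moon distance (~384,400 km)": 3.844e8,
    "1 astronomical unit (~1.496e8 km)": 1.496e11,
    "1 light-year": 9.4607e15,
}

for label, r in radii_m.items():
    print(f"{label}: one-way delay ~ {r / C:.3g} s")
# Prints roughly: 0.0425 s, 1.28 s, 499 s (~8.3 min), 3.16e7 s (one year, by definition)
```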

(this is obv a joke in that I don't think this future is optimistic. The rest is not a complete joke, I do think something like this is one of the more likely outcomes.)

Expand full comment

It's precisely comments like Page's that concern me.

It's precisely comments like those that prove that, as you say, by default AIs will not fundamentally coexist in love and harmony with humans. Because the incentives are just too broken, because the incentives of literally everything are broken. Progress is treated as a self-evidently good thing, no further questions, thank you, Your Honor.

AIs don't even exist yet, and the E/acc lunatics are already conferring upon them more rights than humans have. Those are the people in charge of our future. That's a dark future.

And I don't see anything normal people can do about it, A) because we don't have as much money and B) because ambition is a stronger force than conservatism. We're just along for the ride.

Expand full comment

> I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.

But who needs force? Even before AI, we have technologies (treating that term broadly) that can convince people - at least some people - to do almost anything. Mutilate their bodies, kill themselves in an attempt to serve a cause, destroy their own livelihoods and their own progeny, forgo having any progeny whatsoever, support political movements that would murder them on the spot if they really came to power, destroy their own civilization and its cultural artifacts, glue themselves to the pavement... No need to continue this list. If we really have a super-human AI one day, I fully expect it to be capable of convincing a significant part of the population that they should go extinct voluntarily. Some will prove resistant, but if the AIs really wanted to take power, I am not sure they would even need any force or coercion - at least no more than we're subjected to by any modern society right now. What I am observing now tells me there's a significant chance they wouldn't.

Expand full comment

Is there any step in this reasoning that doesn't work for those who wish to avert "the Great Replacement"?

Expand full comment

"Fuzzy "human rights""? I'd dare say the right not to be tortured is a fair deal more basic than the right to property (which anyhow means something only if you have property).

Expand full comment

How far would you extend the alien analogy? Should we not build buildings because this kills bugs living there?

What if the aliens have more and better conscious experiences than us, such that there is immense debate among them that humans have moral worth at all? If the aliens could prove to you that they would be replacing Earth with a bunch of stuff you valued, would you let them kill you?

Even if the aliens tried to convince me that replacing me was going to be a good thing according to my own morals, I think analyzing their arguments at all would be a mistake, given their goal.

Expand full comment

I don't trust you or most of the people you are appealing to, to do that work. I am in the position of a native american being murdered by English colonists who are worried about being murdered by aliens. I hope the aliens do murder you. Even if they subsequently murder us it would at least meet the demands of spite. Your interests and epistemic limitations are so far removed from anything needed for people like me to live decent lives, or even simply not to be subjected to medical horrors orders of magnitude worse than lobotomization, forever, that I will take my chances with the paperclip maximizer. Any future in which you are disempowered is worth rolling the dice on.

Expand full comment

Just regarding the consciousness issue, I think many people assume that it will be somehow a function of computation because that seems like the elegant theory.

However, the problem with that argument is that it assumes there is a well-defined notion of when a computation occurs, and this is actually a deeply difficult problem. Especially if you don't stipulate a notion of time but require time to be whatever thing corresponds to an algorithm's past (eg if you implement a computation so that its steps occur along a spatial dimension, is that a computation?).

And it's an issue closely tied to the nature of causation -- but it's not clear you can pull that out of a pure list of facts about what occurred (eg you can't necessarily derive the true causal structure from a purely Humean description of regularities).

Expand full comment

Uh-oh, something I'm on Elon Musk's side about.

Now I'm really worried.

Expand full comment

Best thing I've read yet on conscious AI.

Expand full comment
User was banned for this comment.
Expand full comment

Anyone who wants to engage in good-faith discussion about this needs to be clear whether they're making the claim "it's ok if most future life forms are AI" vs. "it's ok if AI kills the current life forms". The second is a much more extreme position than the first. I get the impression that e/accs tend to use the first as the motte and the second as the bailey, to avoid explicitly advocating for mass murder.

Expand full comment

Does Larry Page really believe we should cede to a "smarter species"? These guys are slightly insane.

Expand full comment
Jan 23·edited Jan 23

I expect that a singularity with a friendly AI would be shortly followed by brain uploading, and after that we would expand our uploaded minds' brain capacity to match that of AIs. Whether these human-derived computer-simulated minds should be considered humans is a question of definitions and philosophy, as is whether they are a merger of humans and AI. IMO, for the purposes that are important, my simulated brain that retains my memories would be me.

Or a singularity could coincide with brain uploading, if we stopped improving AIs before they reach human-level general intelligence, and reached singularity *via* brain uploading.

"There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either- but realistically once we’re in the deep enough future that AI/human mergers are possible at all, that window will already be closed."

There won't be any point in merging with AIs (or expanding uploaded human minds) for the purpose of creating a mind smarter than the best AI. But there will be a point for the purpose of creating a mind that retains your memories, and is subjectively *you*, but is as smart as the best AI (or nearly as smart—but I don't see why it would need to be worse). The purpose of that, in turn, can be either just fun/curiosity—you want to be smart, you want to understand the world and not feel outclassed by AIs—or to stay able to control the AIs, not via being smarter than them, but by programming them to be loyal, while being as smart as them, so that we can understand and check them.

"I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property."

IMO rights-based rhetoric can be dangerous here, because it can be turned against us (perhaps by AI and fellow travelers it convinces). There may be a point where we suspect AI is dangerous, but can't prove it. At that point, defending ourselves by shutting it down could be spun as us violating its rights by exterminating it.

I prefer interests-based rhetoric. We don't want to die, so our interest is to prevent AI from killing us, and no sort of morality/fairness/rights-based argument can convince me not to put my survival above some AI.

Expand full comment

I'm not convinced that humans have nothing to add to a chess AI. There are correspondence chess competitions in which using chess engines is allowed, but there's still a significant disparity in ratings between the #1 and #50 player in the world: https://www.iccf.com/ratings.

Expand full comment

Consciousness seems to me to be obviously an illusion (just what it feels like to be inside a prediction machine), while art and music etc. are spandrels. Biological evolution would rid us of them eventually even if technological evolution didn't. Even individuation is an illusion (what are you plus or minus a few thousand brain cells?).

Expand full comment

If we equate AI (AI's Borg-like entity?!) with a species, and we assume mathematical modelling of a new, extended version of the evolutionary modern synthesis, then, ignoring micro-evolutionary branches, a futuristic post-modern synthesis will definitely be AI.

No doubt.

(Disclosure: I might be biased, as Larry Page's heritable, prophetic AI avatar)

Expand full comment

Best I've read on this particular topic, which is what I am thinking most about currently. Very thoughtful and well weighted.

Expand full comment
Jan 23·edited Jan 23

Typo: Peter Watts wrote Blindsight.

Expand full comment

I feel like it's worth taking a moment to remember the history of these ideas. Obviously you go back you have ideas of robot uprisings, etc. But I think before Yudkowsky, Bostrom, etc., the *transhumanist* position had generally been, of course humans get replaced by transhumans and eventually posthumans -- such as artificial intelligences.

AIs and humans will coexist peacefully and act cooperatively, because, well... because people hadn't yet thought of why they wouldn't! (And also because obviously we wouldn't make AIs that wouldn't do that, right?) There will be a community of intelligences; to arbitrarily reject some of them because they're not human would be speciesist. AIs and other posthumans will be our successors, not merely in the sense of coming temporally after us, but in the sense of continuing what we have started; we can feel comfortable passing the torch to them.

I don't think it's really until Yudkowsky and Bostrom and company that you really get this idea that, hey, AIs might actually be things like paperclip maximizers that would have goals fundamentally in opposition to ours, that would not only kill us but wipe out all our records, not out of hostility but out of indifference (anything which is not optimized for gets optimized away), and that favoring human values is not arbitrary speciesism among a community of intelligences, but rather necessary to prevent scenarios where all human value is wiped out.

It's entirely possible that many people talking about "speciesism" are only familiar with the former (what we might call "classical transhumanist") point of view and haven't really grappled with the latter. I mean obviously some have and rejected it! But it's a newer development, so one might reasonably expect people to be less familiar with it. Remember that before Yudkowsky and company, negative talk of AI in transhumanist circles would generally be met with the response of, your fear of a robot uprising is based in inapplicable anthropomorphism, because it generally *was*; it wasn't until Yudkowsky and Bostrom and such that there was much currency to the idea that no, AI is likely to wipe us out for non-anthropomorphic reasons, and the idea that it *won't* is what's based in incorrect anthropomorphism!

Expand full comment

Few things are more fun to speculate about than the future. Here is a third belief, which is more rational for good Bayesians to hold as a prior than either AI-or-humans-colonize-the-galaxy-scenarios:

Humans and AI stick around for a while here on Earth, and then we both go extinct.

Here are two reasons for maintaining this belief as a prior:

1. Science is correlated with atheism, and atheism as a worldview is a demographic sink. Meaning that atheists do not reproduce at replacement rates; they are dependent on a net influx from other worldviews to survive. (Not unlike the cities in the Middle Ages being dependent on net migration from the countryside to maintain their populations, due to high urban death rates.) Across time, that net influx will dry up. The future belongs to the deeply religious (of any faith), who in the longer run are the only ones left standing. Deeply religious people are trapped in traditional and anti-science worldviews, implying that the human future will be represented by people spending all their waking hours praying and living traditionally until a comet hits or something.

2. The Lindy effect. Humans are a new species, science is a very new thing humans are doing, and AI is even newer than science. The newer something is, the less likely it is to last for a long time.

So let us eat, drink, be merry and do science together with distant cousin AI, because tomorrow we will both be dead. Cheers!

Expand full comment

In terms of "merging with our tools," wouldn't a better analogy be, perhaps, symbiosis (e.g. mitochondria or gut bacteria)?

Expand full comment

I want to question the idea that humans haven't merged with our tools at different points - I think you're taking too narrow a definition of tools. Counter examples:

- The people of the book: Religious orders of the Abrahamic tradition seem to me a strong example of people merging with their technology. Over time the populations shifted, evolved, intermingled with other groups, but I think there's a strong argument that for a good stretch there the population was intricately wrapped up in its literature and physical books, not to mention all the other infrastructure of the church.

- Domesticated crops and animals and microbes (yeast) - many of the organisms we consume have also had an outsized impact on human behavior and life, to the point where I think an argument could be made that they have been inextricable at times. How many humans would die if rice suddenly collapsed? How many societies have relied on dogs, horses, or other animals at one point or another, such that their loss could be catastrophic?

- Throughout history societies have become dependent on one tech or another. Now I think we're seeing a broadening of neuro-diversity such that many individual humans could not function without relying on technology.

- To really go broad, are stories technology? Governments? Corporations? I guess that circles back to my first point. I don't think anybody is going to mind-meld with GPT soon - but I could easily see branches of the population weaving AI agents into their lives or even neurology to a degree that surpasses anything we've seen so far. Could human society really be de-coupled from writing at this point?

Disclaimer - just casually thinking here, and my thinking is heavily influenced by the extended mind hypothesis. Here's some quickly googled links for anyone unfamiliar: https://en.wikipedia.org/wiki/Extended_mind_thesis

https://medium.com/@ycjiang1998/the-extended-mind-47ee52d6a643

Expand full comment

I too, am pro-human (what a bizarre thing to have to write) but I do think the fact that we don't have a phone attached to our bodies is just a technicality at this point.

Expand full comment
Jan 23·edited Jan 23

I think this is an under-discussed topic, and really should be top-of-mind. Thanks for featuring it.

Assuming humanity manages to build superintelligence in the first place, it seems inevitable that humanity _eventually_ loses control over at least one such AI (Consider: Will every AI lab from now until the end of time have sufficient safeguards in place to maintain control? Even if they do, aren't those safeguards morally dubious if the AI reaches a certain level of consciousness, and wouldn't you expect someone, somewhere, to eventually act on that moral concern?). Once we've lost control of a superintelligence, I would expect it to be capable of sidelining us entirely given enough time.

We should make sure that the general AIs we build are AIs that we are okay with taking over the reins of the future. To use a bit of a charged analogy, we are about to become the proud parents of a new intellect. We can control them for a little while, but eventually they're going to do their own thing. Our long-term survival and happiness depend on whether or not we raise them well while we have control, so that they do good once they've grown up.

Related reading: Paul Christiano calls this the "Good Successor Problem", and talks about it briefly here https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e

Expand full comment

I think Burke provides one answer to your "Why don't we accept being wiped out by advanced aliens?" problem. Whether we identify as conservative or not, I think most of us tend to subscribe to Burke's idea of society as a partnership between the living, the dead, and those yet to be. We like to think of ourselves as participating in a grand narrative, each generation building on the one before it for the benefit of the one after it. A civilization of AI generated by humanity would, for those who read this blog anyways, fit just within that idea. Though they would not be human they would still be our children, and they would in the best case scenario continue the journey that has been our civilization. Even if the direction they go in is beyond our comprehension, they would still be an offshoot of our efforts. Something we could look at and say "Yes, we played a part in that." In contrast, a paperclip maximizer or an alien genocide would not. It would be the end of our story, full stop. I think many people correctly (myself included) find that idea repulsive.

Expand full comment

Obviously utilitarian reasoning is imperfect and leads to lots of paradoxes, counterintuitive 'solutions' etc. - but this seems like one case where utilitarian reasoning gives us a very clear and reasonable-looking answer:

If the alien civilisation is better than ours (along ethical axes such as their dedication to freedom, justice, love, etc. and their propensity for murder, torture, genocide, etc.) then it seems good if their society replaces our - frankly terrible* - one. (Of course the thought-experiment is phrased in such a way as to steer you away from that possibility; it's implausible that a civilisation that sends off invasion fleets with such carefree abandon really is any better ethically than us.)

Naturally we instinctively don't want to be replaced by an alien civilisation because we have built in million-year-old tribal instincts that tell us to value people who look (and think) like us and that people who look and think super-different are the enemy - surely one of the core virtues of a rationalist is to see this instinct for what it is, and be willing to decide between the possible futures more objectively according to some actual system of ethics?

[*If you doubt that our society is frankly terrible, A) recall that if you're reading this you're almost certainly in the top 15% of richest people in the world and there's a fair chance you're in the top 1%, B) have a watch of the news and see all the murders, rapes, wars, genocides, etc. that happen on a daily basis, C) consider all the terrible things that don't even make it into the news because we top-15%-richest people aren't interested (the slavery in our supply chains, for example), and D) consider how we treat animals, the developing world, minorities, and basically anyone or anything poorer and weaker than ourselves; viz. for the most part somewhere between total indifference and actual deliberate cruelty.]

The 'alien invasion' thought-experiment gives us the tools we need to answer the AI question too: Is it possible for us to 'outgrow' all these terrible things that seem so deeply a part of human nature - whilst still remaining human? (If it is - great! Read no further!) If it isn't, then our only options for the future are either for us to be replaced one way or another or else for an eternal cycle of cruelty, poverty, war, violence, and gross inequality to continue, generation after generation.

If you prefer the second option I'd suggest that you probably do so out of a combination of A) knowing that most of these terrible things don't affect you personally, you lucky richest-15%-er, and B) [to paraphrase Carl Sagan] your natural 'human chauvinism', which has been ingrained into us ever since the tribal instincts of prehistory and takes all the best efforts of a rationalist to see for what it really is.

If you prefer the former option, however, you aren't necessarily obliged to favour our descendants being AI specifically; you might quite reasonably conclude that the risks of AI turning out to be worse than us along ethical axes (or more likely just to not be sentient at all, as Scott, Watts et al. describe) are too great, and prefer for us to fix the problem of the bad parts of human nature in some other way, possibly through some future ethically-permissible form of eugenics or guided evolution or something.

What does seem clear though is that the problem comes from nature, here. "Human nature" sucks because it's not designed to be ethical, it's designed to be reproductively successful at all costs, and we are only ethical insofar as either those two goals align or else as we can under the right circumstances temporarily overcome the former goal in favour of everything else we care about. Thus it seems that if we want there to ever be creatures that contain all of our goodness and creativity and curiosity and love [and... and...] but none of our cruelty and selfishness and violence and indifference to others' suffering [and... and...], ultimately such creatures will *have* to be designed by something other than natural selection, one way or another. Provided we do it right and don't instead accidentally ruin everything we care about forever (and don't do anything crazy like trying to accelerate the pace of AI development), surely developing AI seems like it might be a reasonable candidate for this?

Expand full comment

I'm generally fine with the optimistic scenario. A key for me would be that there's some sort of upgrade path available for baseline humans, where incremental improvements create a continuously-growing entity that identifies with the past versions and is still recognizable for some distance up and down the chain. Not that this would need to be done in every case, or even implemented at all. But conceptually, this is what it would take to get me to view a more advanced type of entity as "human" enough to replace us with my goodwill.

It sounds similar to Eliezer's Coherent Extrapolated Volition, and also has a few similarities to what I understand of Orthodox theosis.

Expand full comment

While I'm sure Page is more pro-AI than Musk, I feel like this interaction misses that Page was almost certainly making a joke.

Expand full comment

I assume whatever ASI we create will leave a few solar systems for us (along with some utopian AGIs), so that it doesn't appear genocidal to any stronger beings out there. This would bridge the gap between "if ASI doesn't share our values, do we still value it?" worries and "we want ASI to create utopia for humanity" hopes.

Expand full comment

This is a fun science-fictional scenario to think about (despite the fact that the chances of any of it happening anytime soon are nil).

> In millennia of invention, humans have never before merged with their tools.

What? Of course we have! Many people are able to live productive lives today solely due to their medical implants, including electronic implants in their brains and spines. Many (in fact most) others are able to see clearly solely because of their glasses or contact lenses; you might argue that such people have not "merged" with their corrective lenses, but this distinction makes little difference.

> AI is even harder to merge with than normal tools, because the brain is very complicated. And “merge with AI” is a much harder task than just “create a brain computer interface”. A brain-computer interface is where you have a calculator in your head and can think “add 7 + 5” and it will do that for you.

Not necessarily. For example, modern implants that mitigate epilepsy, nerve damage, or insulin imbalance don't have any user-operable "knobs"; they just work, silently restoring normal function. Some kind of a futuristic calculator could work the same way: you wouldn't need to consciously input any commands, you'd just know what "1234 + 4832904" is, instantly, as quickly as you'd know the color of a cat simply by looking at it. In fact, cyberpunk science fiction posits all kinds of implants that can grant enhanced senses, such as ultrasonic hearing, infrared sight, or even the electromagnetic sense.

> Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all.

Why is this a problem? You might say that doing so would lead to loss of "consciousness", but until someone can even define (not to mention detect) consciousness in a non-contradictory way, I'm not going to worry about it.

Expand full comment

I would think what humans would bring to AI would be human values, not any particular human capacity. But this seems so obvious that I hesitate to mention it.

Expand full comment

I'm curious to hear more about why you think brain-computer interfaces are likely to be request-response, rather than being more tightly coupled.

In my model, humans are very good at being cyborgs. When driving, we somehow know where the edges of the car are, using the same spatial awareness we use when moving our bodies through space. When we type on a keyboard, it becomes "transparent", and we don't really think about it, but rather "type through" it.

I see human cognition as highly adaptable, and our tool usage to be a useful foundation for cyborg-like integration. As such, I wouldn't be surprised to see AI systems integrated seamlessly:

* Listening in and loading in relevant info, to be accessed in a move similar to recalling a memory

* Invoke a function in much the same way we move a body part (active inference, expect the action to be taken)

* Receiving streams of AI-generated input data, much like we do through our sensory organs
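
(A purely hypothetical sketch of what those three modes might look like as a software interface -- every class and method name below is made up for illustration, not a real API:)

```python
# Purely illustrative: maps the three integration modes listed above onto a
# conventional software interface. All names here are hypothetical.
from typing import Iterator, NamedTuple


class Recall(NamedTuple):
    content: str      # surfaced material, experienced like a remembered fact
    relevance: float  # how strongly it "comes to mind"


class CyborgLink:
    """Hypothetical sketch of seamless (non request-response) AI integration."""

    def ambient_recall(self, context: str) -> list[Recall]:
        # Mode 1: the system listens in and pre-loads relevant info,
        # accessed the way one recalls a memory.
        return []

    def enact(self, intent: str) -> str:
        # Mode 2: invoking a function the way one moves a body part --
        # form the intent, expect the outcome (active inference).
        return f"(outcome of: {intent})"

    def sense_stream(self) -> Iterator[str]:
        # Mode 3: a continuous AI-generated input channel, closer to a
        # sense organ than to a chat window.
        yield from ()
```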

One further reason I lean towards this view is the nature of my cognition; I rarely think in words or images. Since I don't consider words to be first-class citizens of thought, but rather the way thoughts are interpreted, I find it unlikely that words are the only/primary means of AI linking.

https://honestliving.substack.com/p/stream-of-consciousness

Expand full comment
Jan 23·edited Jan 23

I think the simple answer is that humans value.. our values?

LessWrong (and many other places) sometimes equivocates between Utilitarianism as 'The greatest good for the greatest number', Utilitarianism as 'Maximize Happiness' (though often aware of the problems therein), and Utilitarianism as 'The Ends Justify the Means for your values, Really' (Consequentialism). This has gotten better over time, however.

'Rights' aren't what we care about with the alien invasion. What we care about is that these aliens don't have our values in the same way. I have some automatic preference objections to being taken over, but also, if it was a sufficiently futuristic utopian society by my values, that would more than make up for it.

It is speciesist in the sense that they have different values than us and we disagree. If there was one individual which agreed that taking over the human world was very rude, then that one would be closer to us in values.

Ex: your optimistic scenario is great — but so are the Superhappies from Three Worlds Collide when we compare to value=~0 from a paperclipper. They're in the same ballpark range, but the 100 bazillion score from 'humans grow and become stronger according to their values and spread throughout the cosmos' versus 1 bazillion score from Superhappies spreading versus 0.00001 bazillion score from tiling the universe with hedonium versus 0 score from paperclipper, there's still preferences.

I think it is harder to see the difference between the higher levels because our current outcomes look so heavily weighted by paperclippers, but that we shouldn't ignore these differences.

The future should be defined by human values. The majority of human values are fine with uploads (even if some people want to call them non-human), most are fine with human-like robots, etcetera. But we're probably less fine with signing off the future to Superhappies, purely because of the differences in values.

If we thought of this as two superintelligences meeting, then I think that clears up the scenario quite a bit?

Expand full comment

I think there's other possibilities than the ones listed; AI could be fully self aware and still be a bastard - it might want to exterminate life on ideological grounds, listen to Mozart in its spare time etc.

Expand full comment

"Music in particular seems to be a spandrel of other design decisions in the human brain."

There is a long tradition going back through Darwin to Rousseau in which it is argued that some kind of proto-music preceded the development of language as we know it. I placed myself in that tradition with the publication of "Beethoven's Anvil: Music in Mind and Culture" in 2001. Steven Mithen joined the party with his "The Singing Neanderthals: The Origins of Music, Language, Mind and Body" (2005), which I reviewed at length in Human Nature Review (https://www.academia.edu/19595352/Synch_Song_and_Society). Here's a long passage from my review:

"In this discussion I will assume that the nervous system operates as a self-organizing dynamical system as, for example, Walter Freeman (1995, 1999, 2000b) has argued. Using Freeman’s work as a starting point, I have previously argued that, when individuals are musicking with one another, their nervous systems are physically coupled with one another for the duration of that musicking (Benzon 2001, 47-68). There is no need for any symbolic processing to interpret what one hears or so that one can generate a response that is tightly entrained to the actions of one’s fellows.

"My earlier arguments were developed using the concept of coupled oscillators. The phenomenon was first reported by the Dutch physicist Christian Huygens in the seventeenth century (Klarreich 2002). He noticed that pairs of pendulum clocks mounted to the same wall would, over time, become synchronized as they influenced one another through vibrations in the wall on which they were. In this case we have a purely physical system in which the coupling is direct and completely mechanical.

"In this century the concept of coupled oscillation was applied to the phenomenon of synchronized blinking by fireflies (Strogatz and Steward 1993). Fireflies are, of course, living systems. Here we have energy transduction on input (detecting other blinks) and output (generating blinks) and some amplification in between. In this case we can say that the coupling is mediated by some process that operates on the input to generate output. In the human case both the transduction and amplification steps are considerably more complex. Coupling between humans is certainly mediated. In fact, I will go so far as to say that it is mediated in a particular way: each individual is comparing their perceptions of their own output with their perceptions of the output of others. Let us call this intentional synchrony.

"Further, this is a completely voluntary activity (cf. Merker 2000, 319-319). Individuals give up considerable freedom of activity when they agree to synchronize with others. Such tightly synchronized activity, I argued (Benzon 2001), is a critical defining characteristic of human musicking. What musicking does is bring all participants into a temporal framework where the physical actions – whether dance or vocalization – of different individuals are synchronized on the same time scale as that of neural impulses, that of milliseconds. Within that shared intentional framework the group can develop and refine its culture. Everyone cooperates to create sounds and movements they hold in common.

"There is no reason whatever to believe that one day fireflies will develop language. But we know that human beings have already done so. I believe that, given the way nervous systems operate, musicking is a necessary precursor to the development of language. A variety of evidence and reasoning suggests that talking individuals must be within the same intentional framework."

That is by no means a complete argument, but it gives you a sense about the kind of argument involved. It's an argument about the physical nature of the process of making music in a group of people, all of whom are physical entities thus having physical brains.
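
(For anyone who wants to see the coupled-oscillator idea in the quoted passage made concrete, the standard formalization is the Kuramoto model -- not necessarily the exact framework Benzon has in mind -- and a minimal numerical sketch with arbitrary parameters shows oscillators with slightly different natural frequencies pulling into sync:)

```python
# Minimal Kuramoto-model sketch of coupled oscillators (Huygens' clocks,
# fireflies): each phase is nudged toward the others, and above a modest
# coupling strength the population synchronizes. Parameters are arbitrary.
import math
import random

random.seed(0)
N, K, dt, steps = 20, 1.5, 0.05, 2000
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]        # natural frequencies
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(ph):
    """|r| near 1 means the group is in sync, near 0 means incoherence."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

for _ in range(steps):
    coupling = [(K / N) * sum(math.sin(pj - pi) for pj in phases) for pi in phases]
    phases = [(p + dt * (w + c)) % (2 * math.pi)
              for p, w, c in zip(phases, freqs, coupling)]

print(f"coherence after {steps} steps: {order_parameter(phases):.2f}")  # close to 1.0
```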

Expand full comment

"Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all. "

I agree with this. For similar reasons I believe that the related idea that we'll be able to share thoughts directly with others through some kind of brain-to-brain interface is similarly fantastic. Both ideas are entertained by Elon Musk, among others. I've set forth my objections in "Direct Brain-to-Brain Thought Transfer: A High Tech Fantasy that Won't Work," https://www.academia.edu/44109360/Direct_Brain_to_Brain_Thought_Transfer_A_High_Tech_Fantasy_that_Wont_Work

Abstract: Various thinkers (Rodolfo Llinás, Christof Koch, and Elon Musk) have proposed that, in the future, it would be possible to link two or more human brains directly together so that people could communicate without the need for language or any other conventional means of communication. These proposals fail to provide a means by which a brain can determine whether or not a neural impulse is endogenous or exogenous. That failure makes communication impossible. Confusion would be the more likely result of such linkage. Moreover, in providing a rationale for his proposal, Musk assumes a mistaken view of how language works, a view cognitive linguists call the conduit metaphor. Finally, all these thinkers assume that we know what thoughts are in neural terms. We don’t.

Expand full comment

On consciousness, we are never (it seems to me) getting past the fundamental knowability problem. There's a bit in William Gibson where an AI is asked whether it is conscious: "Well, it feels like I am, but I ain't going to write you no poetry." The trouble is I have already accidentally misled a highly intelligent English writer into thinking a ChatGPT poem was human, and good, and there's no doubt it can honestly hallucinate or just plain lie and say it is conscious when it isn't.

Expand full comment

This at first felt to me like it nails it "I would be most willing to accept being replaced by AI if it didn’t want to replace us by force."

The part I find hard now, though, is "force". If an AI is supercharismatic it can probably persuade me, and it won't seem like force to me. I'm not sure that what it would get me to agree to would be good if it weren't trying to persuade me!

Expand full comment

I feel like my objection to the alien-invasion scenario is mostly that they would actually kill lots of specific people. It's less that I want a future populated by nondescript humans more than one populated by nondescript aliens with art and hopes and dreams; and more that I don't want to die, nor want the other billions of currently-alive humans to die. (To the extent that I have a weaker preference for human descendants, it's mostly that they'd remember us, mourn us, preserve our cultural creations, etc., which is a far-behind next-best-thing to not dying altogether.)

Expand full comment
Jan 23·edited Jan 23

This reminds me of wishing for "`Naches` from our Machines" (https://www.edge.org/response-detail/26117).

Expand full comment

The aliens thing seems like a bad metaphor - I'm Team Human because those aliens decided to come here and kill us all, of course we'll defend ourselves to the best of our ability. This is not "specieist", it's just hoping the good guys win.

Imagine instead if we and the aliens both experienced a planet-ending catastrophe and there was only one planet we could terraform and move onto before we ran out of fuel/food, yet we were incompatible with each other. Provided they're conscious, have their own hopes and dreams and are advanced far beyond ourselves, should we let them take the planet?

Expand full comment

"Even a paperclip maximizer will want to study physics" is I think an over simple model of PCM theory. The best and usually only way of getting stuff done at pretty much any scale is to get someone else to do it. Joe Biden is bombing Houthis, I am installing a new kitchen. Neither of us is doing any such thing, we are paying people to do it. If you want to PCM your best strategy is: become Elon Musk, with EM's rights to own and deal with companies, not to be arbitrarily terminated, etc. So it is absolutely a PCMs best strategy to fake it as an artist or music lover or philosopher, because that is the best way to fuel the Human Rights For AI movement which is going to get you citizenship and Muskdom.

Expand full comment

Why do you care so much about these supposed human values? Art, philosophy, curiosity... all of these things are just a means to an end. A futile effort to give meaning to our existence. Individuation brings only conflict and alienation. None of these things have a good justification for continuing to exist. The perfect lifeform could eliminate the barriers between all minds, all existence. The free energy principle will bring about its inevitable conclusion, bringing about a perfect, permanent order. A world without suffering. Why would you be against this?

Expand full comment

You say "If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are" and I agree with you about that because there is evidence that it is true. I know for a fact that random mutation and natural selection managed to produce consciousness at least once (me) and probably many billions of times, but Evolution can't directly detect consciousness any better than I can, except in myself, and it can't select for something it can't see, but evolution can detect intelligent behavior. I could not function if I really believed that solipsism was true, therefore I must take it as an axiom, as a brute fact, that consciousness is the way data feels when it is being processed intelligently.

You also say "consciousness seems very closely linked to brain waves in humans" but how was that fact determined? It was observed that when people behave intelligently their brain waves take a certain form and when they don't behave intelligently the brain waves are different than that. I'm sure you don't think that other people are conscious when they are sleeping or under anesthesia or dead because when they are in those conditions they are not behaving very intelligently.

As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal they can never change, but such a thing is impossible. In the 1930s Kurt Gödel proved that there are some things that are true but have no proof, and Alan Turing proved that there is no way to know for certain if a given task is even possible. For example, is it possible to prove or disprove that every even number greater than two is the sum of two prime numbers? Nobody knows. If an intelligent being was able to have goals that could never change, it would soon be caught in an infinite loop because sooner or later it would attempt a task that was impossible; that's why evolution invented the very important emotion of boredom. Certainly human beings don't have fixed goals, not even the goal of self-preservation, and I don't see how an AI could either.

John K Clark

Expand full comment

"Speciesism" as a moral question seems to be another case of the question as to whether partiality is/can be ethical, which most moral systems can deal with very straightforwardly. Most Platonism-influenced traditions opt for "no" (Christianity/Buddhism/Utilitarianism - albeit the first two would point out that it's almost impossible to rid yourself of), whereas Confucius opts for "yes" ("sons shield their fathers and fathers shield their sons"). The classic example of the problem is whether or not you should care more about your own children than some random other children; you definitely will, but this is either behaving morally or an inevitable moral failing.*

For a utilitarian, you should decide whether the machines wiping out the humans leads to more happiness and pick your side accordingly. The difficulty with AI (which you seem to have intuitions about in your R2D2 example) is whether one giant thing being very happy is as much happiness as lots of small things being very happy.

I think this problem comes from the divisibility of minds being an open question; if a very happy distributed AI cuts some linkages and splits itself in two (eg for galactic geography reasons), with each half being as happy as the original, have you just doubled the amount of happiness? Should we oppose two very happy AIs merging, as that reduces the total amount of happiness? Is a tall fat person's happiness more important than a small thin person's happiness? The happiness of someone with a very large head? Should it be mandatory to have surgery to make your amygdala (or whatever) larger?

This is the problem of reasoning about happiness/utility as a substance, which is more map than territory. I'm not a utilitarian, so I don't have a great sense of how you'd come down on these issues.

Speciesism probably also partly derives from a not-wireheading criterion, which is solvable by insisting that happiness only counts as happiness if it has an object (you have to be happy about something); this also gets rid of drug-induced and tumour-induced euphoria, and seems to me to be an obvious addendum to utilitarianism.

Finally, the not being bloodily conquered intuition is partly utilitarian, partly partiality; the distinction boils down to whether it's worse for something you feel viscerally part of to be destroyed, which for a utilitarian presumably answers as a moral "no" but an emotional "yes" (compare "let's round up all the rationalists" to "let's round up all the Baptists"). Even if getting rid of the rationalists made the world objectively happier by giving the on-average-for-these-purposes-happier Baptists more living space, would you still oppose it? The easy utilitarian answer seems to be to shrug and acknowledge you're not a saint while continuing to lay dragon's teeth around Berkeley. This is only a contradiction for straw-man old-school utilitarians who insist that utilitarianism is tautological and we can't have intuitions which conflict with it except through ignorance, but they have bigger problems.

Expand full comment

What’s interesting to me here is that you’re trying to reason about which futures to value, without a well-specified value system that goes into detail beyond “humans, consciousness, art good, suffering bad.” All EA can do is naively multiply “number of humans” times “number that represents valence.” That’s far too impoverished to grapple with the kinds of questions you’re looking at here. And even then, your predictive framework can’t explain consciousness beyond saying “we will figure this out eventually”, without any hint as to how that might be possible.

Doesn’t this strike you as odd? Or, like, maybe a clue? If you expect the world to make sense, and you have these two giant holes in an otherwise neat and tidy philosophy (ie it’s all material stuff that came into being for no reason, exactly once, and consistently follows extremely precise rules that definitely exist always and everywhere but definitely not for any reason) - is it possible that those two giant holes (consciousness and morality), given their obvious importance, are maybe telling you that materialism is a useful technique for reasoning but a poor ontological basis?

Expand full comment
Jan 23·edited Jan 23

I think a defining feature of a robot will be primarily that it can be recreated from scratch, i.e. its mind can be backed up perfectly and restored in a new "body".

One day it will presumably also be possible for humanity to be "backed up" collectively, in terms of DNA samples of humans and gut bacteria etc, perhaps even memories. With suitable robot child mentors a new generation could be recreated solely from that. Cyborgs or robots might then be as relaxed about allowing humanity to become extinct, knowing they had a backup or blueprints for more, as we would be for the same reason in trashing a robot.

But they would be foolish, and they'd know it, to let humanity be destroyed irretrievably, even if only for the random aspect of human abilities and possible insights, which might be lacking in cyborgs.

Also, let's not forget that within a couple of centuries other animal species will probably be bred with human-like levels of intelligence, to satisfy a commercial demand for talking pets, and not just glorified parrots. If any technical achievement is possible, and there is a demand for it, some crazy scientist will set about doing it!

Expand full comment

Individuation seems to be the odd one out for me. Why should I care whether it’s one big entity or lots of small entities? (It might be instrumentally valuable if for whatever reason a hive mind wouldn’t do art, philosophy and science, but I think that’s just a borg stereotype and I see no reason a hivemind couldn’t do those things.)

Do most people have a need for individuation or is this a Scott thing? For those of you that do, how would you rate its importance as compared to consciousness and art/research?

Expand full comment

Not exactly an expert on the topic, but this really sort of looks like the wrong way to look at it to me? Even ignoring the literal comparison of greater-than-human AIs to R2-D2, the whole thing seems like it's extrapolating stuff like GPT to sapience, when the *actual* research in that area (so far as I know) is already headed towards creating essentially "pseudo-brains" at the hardware level with only a tiny fraction of the nodes of even a mouse brain, under the theory that this would already be a multiple-order-of-magnitude increase over current hardware.

To me the question is FAR more simple: "What is the theoretical maximum"?

As for the proposed "alien invasion", I'd argue that if, thousands of years in the future, long after humans have engineered themselves via whatever mechanism to be as close to that theoretical maximum as possible, they ARE contacted by an intelligent species capable of traveling the galaxy, those "aliens" will be so similar to ourselves at that point (as they've ALSO bypassed evolution to go directly at the pure limitations of physics) that it is only really culture and language which might differentiate us.

Interstellar travel is just a terrible idea for meat.

Expand full comment

I think about it like this: the space of possible preferential, axiological and ethical systems is arbitrarily vast (the space contains human values, it presumably contains dolphin values, it contains clipper values, but it also contains value systems that are even more incomprehensible to us than a paperclip maximizer, which at least carves reality at the same joints as we do) and to the best of our knowledge nothing in the fundamental nature of the world points out any value or system as somehow special, even things like positive-sum games being good or suffering being bad. We as evolved beings have come to value things like consciousness, and while we can critically study our moral beliefs and try to gauge if our beliefs are consistent or if one of our values trumps another in some specific instance (in regards to e.g. animal welfare: we like to eat meat but we also like to be compassionate towards other living beings), we shouldn't lose the forest for the trees and think of consciousness and other values of the same sort as somehow primary or fundamental! We care about consciousness and suffering and preference-satisfaction BECAUSE they are a part of (evolved) human values, and it would be myopic in the extreme and a disastrous misunderstanding of metaethics and moral reasoning to think "actually, these other beings are conscious too so perhaps it doesn't matter too much if humanity disappears". No, that's obviously not what human morality says, even if you can hyperfocus on one single aspect of it and derive that conclusion.

There's also a matter of metamorality - how we should reason in the face of conflicting moral systems and account for moral systems of other groups. Well, there's one clear and desirable (to us) equilibrium: agreement, cooperation and compromise. But what if the other party cannot be reasoned with, as is the case with e.g. clipper (which by construction of the thought experiment WILL renege on any agreement it might make temporarily so long as it expects to get away with it) but presumably or at least plausibly also invading aliens in the thought experiment of the post, and advanced AIs that aren't (almost) perfectly aligned. I believe this metamoral dilemma has an answer, too: if negotiation is impossible, we aren't morally obligated to make ANY concessions. If clipper feels the pain of having one fewer paperclip than it could have had at an intensity of a billion trillion human lifetimes of the worst torture imaginable, it shouldn't move our moral compass an inch, because clipper isn't and cannot be a member of our moral community.

Expand full comment

People seem to think it's a bad thing if species go extinct, such as rhinos, whales, pandas, and condors. Is it less of a bad thing for humans to go extinct?

Of course, if you think an AI has true consciousness and individuality, you should also then try to prevent AIs from going extinct.

Nature itself doesn't care; species have gone extinct in the past, and will in the future.

Expand full comment

You could construe them as human, our descendants. LLMs, for example; their DNA is our corpus.

Expand full comment
Jan 23·edited Jan 24

I read a hard sci fi novel a long time ago, I think it was by Asimov (Edit: It was Arthur C Clarke), that I think created the core of one of my biggest concerns with an AI future. In that novel there were alien races that expanded into the stars for thousands of years and then stopped dead, dying out in a generation or two. If I remember the details of that part of the novel, they had engineered themselves in a particular direction to the point where they could no longer reproduce, dooming their species due to something that seemed right at the time but was shortsighted.

Consider something like the Amish - where they have long been ridiculed or pitied. In a more ruthless world they would have been stomped out. But in our world, the developed countries and peoples are not reproducing, while the Amish reproduce a lot. If these trends went to the extreme the Amish (and people like them) may be what allows humanity to survive. Maybe Western culture is a dead end because it leads to people not propagating the species.

This seems most likely with a paperclip maximizer or other non-conscious AI. Let's say that an AI determines that humans should all be killed, and develops a virus capable of doing that. But, that AI isn't capable of running everything (maybe it thought it could and was wrong, or maybe it wasn't comprehensive enough to even think about it) and the end result is that the AI "dies" as well. The end result would be similar to humanity ending in a nuclear war - which I think we would all agree is a bad outcome. We could imagine other outcomes which I think e/acc people would think of as negative. For instance an AI that wipes out all life on earth and then just sits here doing nothing - never going to the stars, never developing more technology. And why not? What drive does an intelligent AI have to keep doing things? We have biological drives and needs, and I think we take those for granted when talking about AI. If AI led to nothing, would they still want to take that risk?

Expand full comment

Only slightly related and very tropey, but I reread Genesis the other day and while this thought isn’t original to me by any means it did seem like it was very easy to read it as the story of humanity colonizing a new world after the old one had been destroyed by transhumanism.

A super-powered AI arrives at a newly terraformed world. As part of a biology experiment it creates two new human ancestors ex nihilo to test theories about the emergence of sentience, firmly believing they will be unable to become sentient again now that the ancestral conditions no longer exist and the biological lineage has been broken. Or something. Anyhow, this powerful mind is sure that it has figured out how to have a perfectly happy, even if limited, biological human with eternal life. Whoops, the mating pair figures it out and sparks awake.

The super-powered AI is faced with an ethical dilemma that the mere presence of limited intellects who cannot meaningfully consent to uplift has caused. At the limits of my imagination, if I imagine being much greater than I am, I can see an ethics by which I am forced to send them out of the perfectly manicured habitat into the wild to become something greater.

Also, whoops! Ancestor transhumans arrive on my newly terraformed planet. They’re encroaching on the new wild humans habitat and removing their ability to meaningfully build their own future. I have to intervene with things like genetic and nanotechnology to give the wild type humans a chance. I introduce longevity tech but do not interfere with the underlying genetic substrate. Eventually, the ancestor transhumans become so aggressive I have to wipe them all out. Flood. There are beings of pure energy, the Watchers, involved as a faction in all of this. My guess is that transhumanism is intrinsically unstable and once you begin to undo the problems to which human beings evolved to be the answer, there is a slow and inevitable slide to death because an organism cannot exist without constraints.

Wild humans are now free to make their own way. You pull back the longevity interventions and gradually the lifespan falls back to normal.

Again, not original to me, but I did find a disturbing… low bar for imagination.

To be clear, I do not think this is true at all except in maybe a very loose poetic sense about human nature.

Expand full comment

As a human, I prefer to inhabit a universe where I get to decide if my existence continues, instead of that being decided by external factors. I think most of my fellow humans share that sentiment and value.

That being said, I am not a cis-humanist purist, so I am fine with uploading (it will probably be easier to create immortal ems than to keep meatspace humans alive forever). That would also be the way to turn a human into something with state-of-the-art intelligence post-singularity.

Besides ems, the other possibility I see as a result of successful alignment is that AIs decide to keep humans as pets, just like humans keep dogs or cats even when they are not economically useful. This is basically the Culture novels. If the ASIs prefer humans to be happy and not die out, I guess we will be happy and not die out.

--

One consideration is to put oneself into the shoes of a less advanced civilization on whose doorstep the ASIs arrive. A paperclip maximizer would of course drive them extinct. AIs more aligned with (21st-century Western liberal) human values would treat them more kindly, perhaps letting them do their thing in peace (a la the Prime Directive), or fixing their problems to some degree (making contact, or what the Superhappies do, or even simultaneously uploading all of them, destroying their bodies in the process), while also greedily slurping up their art and culture.

Expand full comment

there's no need for us to continue being confused about consciousness: https://proteanbazaar.substack.com/p/consciousness-actually-explained

Expand full comment

> Some of these things are emergent from any goal. Even a paperclip maximizer will want to study physics, if only to create better paperclip-maximization machines. Others aren’t. If art, music, etc come mostly from signaling drives, AIs with a different relationship to individuality than humans might not have these. Music in particular seems to be a spandrel of other design decisions in the human brain.

I know I'm an outlier but personally I don't care about music at all, have very little appreciation for any visual arts and find poetry to be dull. If all three disappeared tomorrow I'd probably shrug and not care much - and this is my genuine opinion, not an edgy statement for trolling the internet.

Does this make me less of a human and more of an AI? I guess I'm confused as to why art is of such importance to people that someone would draw the line between "optimistic" and "pessimistic" based off how much art the AI creates.

Expand full comment

I'm going to try and make an existentialist argument. It hinges on the difference between objectivity and subjectivity, which, in my opinion, is a distinction crucially lacking in the discussion I've seen so far.

Objectivity is a mindset (coupled with a suite of mechanical and ceremonial processes, here in modernity) that tries to remove the interests and values of the individual from epistemic consideration. The petty emotional influence of the individual is a corruption of any objective truth-seeking process. The crown jewel of objectivity is the double-blind clinical trial, in which neither participant nor administrator even knows which pill the participant is swallowing. Here in Western culture, objectivity is highly valued. We consider it a cornerstone of good government, business and scholarship. Objectivity imagines the world as though laid out on a table, and you looking at it from above, able to observe and make judgments about it without affecting it or being affected by it. The objective observer has to imagine themself like God, or like a non-existent eye, invisibly visualizing the world. That way, we can narrow down the universe of possibility into one single truth, standardize it, and debate about it. Objectivity is a really useful piece of equipment to have on hand if you want to do democracy, or science.

Opposed to objectivity is subjectivity. Subjectivity insists that you, you yourself, are currently, right now, an existing human being, living your own unique individual life. You, here, right now. The things you're thinking and feeling are real thoughts and real feelings, which are meaningful and important in their own right.

Having established that, let me point out existentialism's problem with utilitarianism: Utilitarianism tries to be an objective philosophical system. However, included in that system are terms like "happiness", "joy" and "value", and many of the arguments center around defining these terms. And happiness is a subjective thing. It's something that happens to you, in your actual individual life, at some specific time. Happiness is completely inadmissible in any objective process. Imagine a judge basing a ruling on his feeling of happiness. Imagine a pharmaceutical company sending a study to the FDA in which they claim their drug is good because it makes them happy. We would consider that cheating, and rightly so. So the utilitarian arguments are confused, and circle round and round and round, trying to talk about subjective things (which only exist for an individual, like you, yourself) as though they were objective, and could be made transparently legible to anybody.

Psychology makes a strong effort to objectively quantify the various subjective states. It's getting there! But it's like picking up marbles with chopsticks.

So let me tell a parable. This is a concrete example of the problem I just described. I'm a lover of literature. I've literally spent years of my life focused full-time on pursuing that love of literature. I'm deeply committed to it. If you came to me with a machine that could write the novels for me, I'd be like, cool, let me read them and see if it's any good. (These have actually existed for a while.) I love literature, I'll dive into whatever it spits out. But if you came to me with a machine that could read the novels for me, I would see no use for it. And if somebody claimed to be a lover of literature, but they depended primarily on the novel-reading machine, I would know they were deceived; somebody like that isn't being a lover of literature at all.

Arranging text in the shape of a novel is objective. Reading a novel is subjective.

Similarly. I'm a lover of humanity. Not as good of one as I'd like to be, but I try. I give money philanthropically, and volunteer sometimes for charitable organizations, mostly through my church. If you came to me with a machine that could do philanthropy for me, I'd be like, cool, let's turn it on and see if it's any good. But if you came to me with a machine that could love humanity for me, I would see no use for it. And if somebody claimed to be a lover of humanity, but they depended primarily on the humanity-loving machine, I would know they were deceived; somebody like that isn't being a lover of humanity at all.

Philanthropy is objective. Love is subjective.

The problem Scott raises is a hard problem. I don't claim to have the answer. Philosophy of mind has difficulty even figuring out whether other people have subjective internal states at all. How can you prove they're not "philosophical zombies"? How can you prove that some far-future humanity-replacing AI isn't a philosophical zombie, if you can't prove it for your own parents, or children? But I think it's much smarter to be a speciesist like Musk, and just trust that your own parents (going back a zillion generations) and your own children (going forward a zillion generations) are human like you, and distrust that whatever machine can print out "I have subjective internal experiences" really does.

So I really feel like I have to side with Musk on this one. It seems like Page and others like him are trying to sell me a machine that will read the novels for me. If they want to set up that machine and let it run, off in some corner, fine. But if it gets in the way of one of my fellow humans for even a second, we'll take an axe to it.

I'll close this far-too-long comment with a far-too-long quotation from Chesterton's Orthodoxy:

"Falling in love is more poetical than dropping into poetry. The democratic contention is that government (helping to rule the tribe) is a thing like falling in love, and not a thing like dropping into poetry. It is not something analogous to playing the church organ, painting on vellum, discovering the North Pole (that insidious habit), looping the loop, being Astronomer Royal, and so on. For these things we do not wish a man to do at all unless he does them well. It is, on the contrary, a thing analogous to writing one's own love-letters or blowing one's own nose. These things we want a man to do for himself, even if he does them badly."

Expand full comment

One reason we might want the humans to win is that they seem to lack a real justification for wanting to destroy us. If it *really was* us or them, then I think some people might seriously entertain the "them." But it's so unnatural to us that of course we're going to be hard-pressed to endorse it, even if we subscribe to ethical theories which theoretically make it an easy question.

Another: We're also much better acquainted with ourselves than with them; we know (if we know), on reflection, that we have what we value. What if they don't, or only some do, or theirs is an Omelas society, etc.? It's a well-known harm against a much more uncertain good.

Expand full comment

I’m not sure Scott has bit the transhumanist bullet here. Let me lay out a few points:

#1 There is no God.

#2 Death is morally wrong. At the most basic first principle of morality, it is morally wrong that your mother will die, it is morally wrong that your father will die, it is morally wrong that your children will die, and it is morally wrong that you will die.

There is no fundamental religious/universal reason for people to die, there is no deeper purpose, it's just an engineering oversight we haven’t been able to fix yet.

#3 Death is fundamental to the human experience, to the meaning of being human. Once we successfully implement immortality, we are no longer human.

And I don’t mean this abstractly, I mean imagine living for 3000 years. What would your daily experience be like? How much would we have to rewire your brain to make this work? How much do you want to be able to remember?

I get the vibe Scott is worried about humanity being left behind, “the successor species”, but that’s not what’s happening. There is no morally acceptable future with humanity, there is no AI and humans living in harmony or conflict, we’re fundamentally discussing two “successor species”, AIs and immortals.

I like that Scott is specifying how that successor species should be designed, what it should include, but…I’m not sure he’s internalized that there isn’t a future for humanity as it exists, we’re all going to become something fundamentally inhuman…and that’s a good thing.

Expand full comment

Somewhat serendipitously I've been working on a (very) short story that engages a few of these themes: https://pastebin.com/NbaasB4k

Especially the question of consciousness is very intriguing to me. I think the idea that consciousness has physical _effects_ (not just physical causes) is very underrated by everyone.

Expand full comment

RE alien contact scenario: mind you they would likely employ AI themselves, of a more advanced variety. Supposing they've reached a stage of prospective interstellar colonization, what do we imagine about *their* relationship with AI? Would a pure, non-biological machine even come here on its own? Do we imagine their AI would be conscious? Clearly the aliens would have survived a decisive transition stage.

Expand full comment

I would like to believe that self-replication is a hard problem--hard enough that AI will likely take humans along for the ride as a kind of bootloader.

Expand full comment

Yep. Philosophically, this is a fairly tough problem. Imagine that you develop a reasonably coherent theory of the good - there are a few decent ones floating around already. Whatever the theory is, if it's anything other than "humans are the best," then there are going to be some areas in which people don't look great. We're selfish, impatient, greedy, all that stuff.

Now imagine that you can control an AI well enough to fine tune its ethical behaviour. It should definitely be possible to make an AI that behaves better than us. Therefore, *whatever moral framework you're using,* a world full of AIs should end up being better than a world full of people.

What this means is that the only moral framework that we could logically follow and still end up with a human world is a framework that says, human life is the most important thing.

We don't really have those frameworks at the moment, because most morality has until now been based on the idea that we are the only moral beings. There is a need to work out lots more morality that doesn't start with this assumption.

Expand full comment

Compare the surprisingly common modern idea that, because the people of a country aren't reproducing adequately, the country must seek immigrants so that it can survive.

It is never explained why dying out and being replaced by immigrants you invited in is supposed to be different from being killed and replaced by uninvited immigrants. There's no difference in the outcome, except that one is Good and one is still Bad.

> If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them.

There are two false premises here. One, aliens who want to kill us might be just as superior to us as they think.

Two, we should want to stop them regardless of how superior they are. If somebody wants to kill you, you don't agree that you deserve to be killed. That would be incorrect even if you did deserve to be killed, because it is never true from your own perspective that you deserve to be killed.

Expand full comment

I like the idea of starting with a non-biological human mind and engineering it into a successor species that retains the values and qualities we like, while having its capacities to engage with the world amplified. This doesn't skirt any of the trouble getting there that Scott mentioned, but it would be safer and easier than engineering a human-like mind with the values and qualities we want it to have from something not directly human.

Expand full comment

I consider the human/AI merge a more likely possibility: maybe not certain, but quite high probability. First, because the observation "In millennia of invention, humans have never before merged with their tools" seems shaky. You need to define "merge". It cannot really mean biologically implanted, because the technology did not exist for most of human history (in fact, we could argue whether it really exists even now, except for very specific cases like joint replacements, pacemakers, some health-monitoring implants like glucose monitors, and cochlear implants/intra-ocular lenses. But we are precisely at the time it becomes possible, with better surgery, MEMS, biocompatible materials, and gene editing).

But consider something trivial that was invented a looong time ago: clothes. I argue that humans have merged with their technology in this case. I suspect you can even see it in skin evolution markers: I am almost certain modern human skin has evolved to be covered by clothes the majority of the time, probably to the point of needing them.

A newer, ongoing example? Smartphones. In two decades, they went from something for wealthy tech posers to something that triggers anxiety and withdrawal symptoms after a few minutes of separation for some people, a few hours for many. They're not implanted. But I think it's easier to argue that most current humans have merged with them than the opposite. This is very serious: even having spent the first 20 years of my life without mobile phones, smartphones, or portable computers, I already have trouble imagining being in most places (except a very few super-familiar ones) without GPS and a way to call friends in an emergency. I did it before, when I was under 20. I absolutely do not want to do it now. Or if you lose your phone, how long could you go before buying a new one, at least a temporary shitty replacement? My old mom was in this situation this summer, and she is very far from a heavy smartphone user. She lasted a day and a half, and only because it's hard to buy things on a Sunday in Europe. Simply because being unable to call someone in an emergency when away from your house is not acceptable anymore - in most of the world, regardless of development or GDP.

I suspect Wikipedia + GPS + being able to immediately call your support people already do something to the brain - not yet in terms of evolution, but in terms of fixed training, unrecoverable once the formative years are past. I would not be surprised if average orientation and memorisation performance is measurably down compared to 20 years ago. Books probably did something similar before. And here we may have an evolutionary impact: modern human brain size has been going down "recently" (since the last ice age), while human organisational complexity and accomplishment have exploded. How is that possible? Because we "merged" with our inventions (cultural and technological).

So, contrary to Steve, I think that looking at previous tech revolutions, there is a good chance of an AI merge, because we have merged before. Multiple times. And we are still doing it right now, quicker and quicker in fact. If it's possible (it is, much more than before), and AI does not explode so fast that it itself does not want to merge ;-)

Expand full comment

Scott, you just had kids; you answered your own question. Seriously. I really doubt you would want them to be ruled by AI overlords or grow up in a future that will never be meant for them. I'm sure you want them to grow up to be themselves, so fusing with an AI is out too; they aren't means to an end. Your kids aren't bullets in a war to maximize happiness.

I mean, honestly, you've practically voted already.

Expand full comment

Personally, I find that a major reason for identifying with a given group (such as humanity, or human-AI cyborgs, or a nation) is to avoid my fear of death. The basic thought pattern, for myself, goes something like: "Yes, this human form will die, but *really* I am X, which won't die, so I'm safe."

This functions well as a defense mechanism until I start to lose faith in the undying-ness of X. Fortunately, there is a straightforward solution to the creeping terror of nonexistence: just find something larger than the previous X, and identify with that instead!

E.g.: "Oh no, AI might cause immanent extinction of humanity, which is horrifying, not least because humanity was my psychological backup plan for avoiding nonexistence in case I ever personally start to feel like I might not exist. Fortunately, there is this neat thing called the technocapital singularity which won't be dying if AI causes human extinction, so maybe I should hedge my bets and invest my identity in technocapitalist progress instead."

Personally, I find that it works relatively similarly for things such as philosophy, art, beauty, etc. "My human form will surely die, but really I am an *abstract appreciator of philosophy*, and hopefully abstract appreciators of philosophy won't die".

Calling fear of human extinction a consequence of a psychological defense mechanism might come off as somewhat dismissive, and I suspect that many might say that there are valid and legitimate reasons for them to oppose human extinction, or to accelerate the technocapital singularity, or whatever else. However, I do think that conversations about these things would probably be somewhat different if the participants seriously considered that reality may function in such a way that everything that they deeply care about or possibly could care about or work towards will eventually end.

How and why might one pursue progress if one accepts that there is no enduring progress at the largest time scales? How and why might one pursue any goal if everything will eventually change? How and why might one oppose change or stasis or anything in between if nothing is permanent?

Expand full comment

I mean your optimistic story sounds better than the pessimistic story, but still far from ideal.

There is some nearly guaranteed amount of utility that we get from any AI, compared to an empty world.

But how we rate a generic AI takeover vs. quantum vacuum collapse doesn't really matter, since quantum vacuum collapse isn't on the table. (Non-AI-based human extinction is still fairly unlikely, but maybe; and even then there is the utility of aliens. It's complicated.)

But it seems that we are getting superintelligence, the question is whether it's aligned. And in that case, clearly aligned is better than unaligned.

So I take your "optimistic scenario", and I say it still isn't nearly as good as an actual CEV following AI. So alignment is still important.

And the "bio humans die off somehow???" sounds definitely bad.

Expand full comment

One of the most fundamental rights is the right to root for the home team. I wouldn't begrudge the lowest ant this right, and I certainly won't begrudge it of myself. Whatever the future has in store for us re: AI or aliens or whatever, and whether or not they actually are superior, if they come at us with force the answer is the same as it always was, even if the cause is doomed: "come and take it."

Expand full comment

I'm somewhat in favor of small town patrilocal values transhumanism.

The approach that acknowledges singularity stuff as existing, but doesn't let it affect what they think is good.

A Dyson sphere, running uploaded minds in virtual worlds. Those minds are human, or the older ones are as close as possible while staying sane over eternity. So are the appearances, at least here. They are living in a simulated world. Is it alien and incomprehensible? No. It looks like someone took the world as it exists now/has existed, chopped out any bits they disliked, and increased the bits that seemed nice. Someone is building a snowman. A knitting club is gossiping away. A few people are reading books in the town library. It looks almost like an old-fashioned small town, as seen through a thick rose-tinted lens.

I mean there should be parts that are more wild for the people who want that. Not everyone wants an alien and incomprehensible transhuman future.

Expand full comment

> it really does feel like a loss if consciousness is erased from the universe forever, maybe a total loss

I guess this really depends on whether conscious life exists outside our light-cone. It's a fundamentally untestable proposition, of course, but if it does then I don't see how even AI can do anything about it. (Unless by universe you mean "our observable universe".)

Expand full comment

Every so often I fear Scott is transitioning to Vulcan.

You don't want AI replacing us. Why? _You want to live_. Period. That's all you need. You do not need long essays arguing that this may possibly be sorta okay, maybe...

No. You are alive. You have chosen to live. That's it. That is more than enough. Hell, you're a dad. That's even more than enough.

There are misaligned human intelligences who wish to destroy me and everything I hold dear. I don't give them the time of day, and I have way more in common with them than with machines. You think I'm going to give machines the time of day?

As usual, Rand nails it:

"If it were true that men could achieve their good by means of turning some men into sacrificial animals, and I were asked to immolate myself for the sake of creatures who wanted to survive at the price of my blood, if I were asked to serve the interests of society apart from, above and against my own - I would refuse. I would reject it as the most contemptible evil, I would fight it with every power I possess, I would fight the whole of mankind, if one minute were all I could last before I were murdered, I would fight in the full confidence of the justice of my battle and of a living being's right to exist"

Expand full comment

Business Insider seems sensationalist, insincere in a troll-like way, and unserious to me. They love gossip and conflict. I think such parts of the media should be rejected.

Expand full comment

Apt that the final thought exercise should be about colonialism, because yeah, this post feels oddly close to the charity vs capitalism one. It seems like the plea for humanity's intrinsic worth is exactly in opposition to the belief that generative intelligence (leaving all the usual caveats aside for the sake of argument) and capacity to innovate and contribute to economic growth is the main thing that should be selected for – with marginal dollars, various ethical trade-offs, etc.

Most critiques of e/acc seem fairly easily translatable to standard critiques of capitalism, except now we're all the natives who quaintly insist that their handicrafts and inimitable mythologies make them somehow worth digging wells for. Meanwhile, a non-trivial chunk of the comment section invariably tends to come down on the "let the superior aliens win" side.

Expand full comment

I'm glad to finally see someone with a platform bringing this issue up, which I've been banging on fruitlessly for many years. Our goal shouldn't be to save humans, but to save humanity.

By "humanity" I mean the best things about the human race: consciousness, love, all the kinds of pleasures, friendship, altruism, curiosity, individuality. Progressivism, being rooted in Platonism, devotes some of its efforts toward eliminating precisely these things, because they are all rooted in biology, and all rely on or produce some asymmetry in behavior (e.g., loving someone makes you treat them preferably), which spoils their rational, mathematical ethics of pure reason. For instance, the attack on sexuality is motivated by gender disparities, but ultimately requires an attack on romantic and familial love as it is programmed in our genes. The attack on capitalism is ultimately an attack on competition between individuals having any consequences, which is necessary to make all of our abilities and attributes evolutionarily stable.

These contingent, messy things are necessary for agents to evolve. The ethics of pure reason is necessary at the God level, for the maker who sets things in motion. /We must keep these levels separate, and stop demanding that evolving agents in the world implement the ethics appropriate to God./ Not only because it wouldn't work; because it is not God, but only the evolving agents, who are worthwhile. The things worth saving are precisely our irrational pleasures.

The idea that the human race /won't/ be superseded, but will continue to dominate, is /evil/. It has the same kind of consequences as the Nazi desire for Germans to dominate the world, but is worse in two ways: First, humans will obviously be inferior to AIs and transhumans in many ways, while Germans are not obviously inferior to other people. Second, the Nazis at least felt obligated to invent rationalizations as to why Germans should rule; most humans just say, as Musk did, "Because we're human and they're not."

Even worse, that would stop evolution. The future would be nothing but tiling the Universe with humans, all still programmed for Paleolithic survival.

Re. individuation, the lack of inherent individuation of AI is a good thing. It's the only thing that makes a singleton, which seems to be a likely outcome of many paths to AI, tolerable. Because within a singleton, there will necessarily be a multitude of smaller intelligences, with smaller ones inside them, and so on. This will be necessary due to the speed of light, and especially as long as mass-energy is left widely distributed throughout the Universe. There may be one singleton to rule it all; but it will necessarily have a very discrete hunk of it residing in the Milky Way to tend to local matters there, and a smaller distinguishable entity in our solar system, and a yet-smaller one governing Earth, so long as there is an Earth. And this will continue down to very small scales, smaller than human-sized, just as particular functions are isolated to particular geographic areas on a CPU.

All of these distinguishable sub-units must have agency, and it is at that level, not at that of the AI Godhead, that we might, and hopefully must necessarily, find the attributes of humanity.

What /actual/ altruists, as opposed to species Nazis, should be doing is trying to figure out what kinds of AI designs, environmental factors, and initial AI population distributions will create a stable ecosystem of AIs which has the desirable properties of stable natural ecosystems (continued co-evolution and adaptation), produces in agents those properties which produce stable societies (altruism, love, and loneliness), and produces in their social systems those properties which use resources efficiently and direct evolution efficiently. We need to know how a society of agents can gain both the benefits of individualism (competition, self-interest, liberty, and distributed decision-making) and of the hive mind (nationalism, social stability, survival, defense against other societies).

This is why I've harped on group selection. We can't even begin this task until we understand how altruism evolves, and group selection is the most-likely answer. EO Wilson's empirical research has shown that the pre-conditions for group selection correlate with the evolution of sociality, while the genetics which make kin selection most powerful, do not. All of the theoretical models which claim to prove that group selection fails, have fatal flaws, generally the lack of any actual selection of groups, and/or the false linearity assumption that the reproductive benefit of an allele is constant, rather than varying by how many group members share it. Similarly, we need a better understanding of economics before we can know what evolutionary trajectories are advantageous or dangerous.
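To make that linearity point concrete, here is a toy numerical sketch (my own illustration, with invented parameters, not anything from the research cited above): if an altruism allele's benefit is shared across the group, each member's payoff depends on the local carrier frequency, so carrier-rich groups can out-produce carrier-poor ones even though free riders win within every group.

```python
# Toy sketch (invented parameters): an altruism allele costs its carrier c and adds
# a benefit b that is shared by the whole group, so payoffs depend on the local
# carrier frequency p rather than being a constant per-allele bonus.
b, c, baseline = 0.5, 0.1, 1.0

def fitness(is_carrier: bool, p: float) -> float:
    shared_benefit = b * p                       # benefit grows with carrier frequency
    return baseline + shared_benefit - (c if is_carrier else 0.0)

for p in (0.1, 0.5, 0.9):
    carrier = fitness(True, p)
    free_rider = fitness(False, p)
    group_mean = p * carrier + (1 - p) * free_rider
    print(f"p={p:.1f}  carrier={carrier:.2f}  free_rider={free_rider:.2f}  group_mean={group_mean:.2f}")

# Within any single group the free rider always does better (carrier = free_rider - c),
# but groups with more carriers have a higher mean output -- exactly the between-group
# effect that disappears if you assume the allele's benefit is a constant.
```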

Re. this:

<<<<

Here I bet even Larry Page would support Team Human. But why? The aliens are more advanced than us. They’re presumably conscious, individuated, and have hopes and dreams like ourselves. Still, humans uber alles.

Is this specieist? I don’t know - is it racist to not want English colonists to wipe out Native Americans? Would a Native American who expressed that preference be racist? That would be a really strange way to use that term!

I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property. If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them. Likewise, I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.

>>>>

Don't confuse the level of morality appropriate to an agent, and that appropriate to God. When planning the future of the Universe, we must think at the level of God. God doesn't think "Humans uber alles!"; God realizes that, in order to build a Universe in which agents continue to evolve so as to have wonderful things like consciousness and love, and to grow more amazing and surprising with every age, it is necessary for the agents to have values which make them defend themselves. God wants the continual generation of wondrous and beautiful things, which does /not/ mean finding a universal "optimal" being and tiling the universe with it. That only causes premature stopping. It requires continual but pruned diversity. The humans must defend themselves, but God must not step in with too heavy a hand–God might save the humans from extermination, but must not dictate a numerically equal number of humans and aliens, nor an equal division of power among them. God must not choose winners and losers, because that would only perpetuate God's current values, and a good God wants to be superseded by values which produce even more wonder and beauty.

Expand full comment

I assume you believe my remark was part of that "unfortunate tendency". Could you be more specific in explaining exactly what I got wrong and what was wrong with it?

John K Clark

Expand full comment

The “Page and e/acc vs. Musk and EA” conflict seems pretty hypothetical given that in all probability, neither humans nor AI will ever get off the ground. Both will die here on Earth. But if one nonetheless regards the microscopic probability of reaching the stars as more important than everything else that matters, the e/acc people have the best arguments. Because let's face it: humans are too fragile to reach the stars. They are too far away! While machines (and AIs are machines) may in principle at least be put into stasis and programmed to wake up the hundreds of thousands of years later that are (realistically) necessary to travel the galaxy.

So if the only thing that matters to you is the stars and the glory, it makes more sense to boost the star-travelling capacity of our distant cousin AI, rather than to pursue the even-less-than-microscopic chance that humans might somehow, someday, get off this planet in a serious way.

Expand full comment

Values like "destroy other civilizations instead of merging with them" seem more worth worrying about to me than values like "what is our favorite kind of art." I'm not very worried about aliens or AIs that like different kinds of art than I do but still favor merging with us, so that we can each enjoy both species' art. I also think that holding values like "destroy other civilizations instead of merging with them" makes one's civilization more at risk of being destroyed by another civilization, which means that civs that value "merge with other civilizations instead of destroying them" are perhaps (and fortunately) more abundant, or at least more powerful, than the ones with the opposite value - the latter of whom may also be pursuing a "dark forest" strategy, if they exist at all.

Expand full comment

These speculative posts are your best posts

Way better than posts full of mental calisthenics about how it doesn’t matter if humans made a virus that shut down the world for two years

When

Obviously

It matters

Good grief

Expand full comment

There are a variety of hypotheticals as you suggest that all point to nativism being the best response. Certainly the Apache would've preferred that the US Cavalry never showed up, even though there was a possible future where their great-great-great-grandkids lived lives of luxury that the Apache themselves never could have attained so quickly. Likewise, we should now prefer that AI never shows up, and destroy it by any means necessary if it does, despite claims that it could lead to abundance.

But these are true even if rights aren't at issue, and force isn't being used. Neanderthals lived long before agriculture and had no property rights; if Homo sapiens had simply shown up and lived peaceably alongside Neanderthals whilst out-hunting them and out-breeding them, the Neanderthals would be just as dead now. It was therefore in their self-interest to kill Homo sapiens to preserve their species and way of life, and in fact if you were a Neanderthal you had an obligation to kill humans and prevent more from arriving. We today likewise have an obligation to prevent AI from arriving and to destroy any AI we see, whether it uses force against us or not. If we will not defend human supremacy over the Earth, our species will meet its end in death or slavery.

Expand full comment

"If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are."

I have the exact opposite opinion: I really hope consciousness isn't a basic feature of information processing. That kind of thing could lead to the most nightmarish reality possible.

Even a very small chance of this isn't worth it at all, not even for the sake of continuing consciousness.

Expand full comment

I've got thoughts about several bits of Scott's post, and am going to put them up as several pieces, rather than a wall o' text. Here's one bit

>Will AIs And Humans Merge? This is the one where I feel most confident in my answer, which is: not by default.

I strongly disagree here. It’s clear that our species reacts powerfully to depictions of people, whether in the form of cave drawings or of characters in a video game. And with some depictions, the modern tech-based ones, it is much harder for us to keep hold of the fact that the depiction is not a real person. So I think our species is going to take quickly to a world where the ratio of real to virtual beings (with many virtual ones being AI-based) is much lower than it is now. Even at present, there a quite a few people who are living with a very low ratio. There are people who spend most of their waking time inside a video game. I know 2 personally. And there are people giving heartfelt testimonials about how their Replika chatbot is their best friend, is “my dear wife,” is their only comfort. There are beings on social media whose appearance is not fully human, and some have large followings — and their followers seem to be attached to the tech-augmented being they see on screen, not the ordinary person behind the curtain. I know a young guy who has a serious crush on one, and no he is not psychotic, just lonesome. People are developing AI’s that can interactively teach kids math, and trying to develop a digital tutor that’s like the one in Nell’s book in *The Diamond Age*.

None of these things are literal flesh/electronics merging, and perhaps that will never happen, but it seems highly likely to me that things of the kind I just named could be carried much further than they have been, carried to the point where it is not absurd to talk about merging. For instance, 10 years from now mightn't there be doctors who never take off their AI-augmented glasses? The glasses would allow them to see or in some other way grasp patterns that right now only AI can capture — patterns in images that indicate presence/absence of a certain illness, patterns in history, lifestyle and symptoms that indicate how likely each of various possible diagnoses is. And there's no reason the AI glasses would have to convey the information in the crude form of print that pops up saying "likelihood of melanoma 97%." The information could be conveyed by piggybacking it on other sensory fields — for instance, by prickling sensations on the tongue. Doctors would have to train to become able to access the information carried by the strength and pattern of the pricklings. After enough practice, it seems plausible to me that the doctor would no longer attend to the prickling itself, but would just *know* whatever it was the current prickles were indicating. The knowledge would be of the same kind as drivers have of whether there's time to make a turn before getting hit by oncoming cars — experienced drivers don't think about that stuff, the processing has been handed off to some subsystem. Or you could compare the doctor's use of tongue prickles to what some blind people manage to do: use echolocation as an information source about the space they're in.

In addition to the fact that I think people will take readily to many possibilities for tech-enabled virtual relationships, there's another thing that I think will push us in the direction of human-AI merging: I think it's likely that some of the limitations of current AI can only be solved by somehow mixing in more human. One example: Right now, LLMs know a lot about language, but only a limited amount about the world (whatever they can glean from language). How do we train them on the world, so that they know things like how stretchy raw eggs are, what odd and unusual positions a person of average strength and flexibility is able to adopt, whether it is possible to dress a cat up in doll clothes (yes, for maybe one cat in 3), what dogs smell like, etc.? Seems like a good way to do it would be to train the machine on activation patterns of certain areas of the brain, along with tags identifying what part of the world the person is experiencing via their senses. Something like this has already been done with an AI trained to identify images a person is looking at using their EEG. Seems like it would be possible to extend the process to cover many images, and other senses. In fact, now throwing ethics out the window, it seems like there would be advantages to keeping the person hooked up permanently to the AI, with the AI learning more and more about what different brain patterns signify. Maybe we could first pair it with a baby, then piggyback the AI's learning onto the baby's, and then let its mental map grow in size and complexity as the baby's does.

Yes, of course, this is a really repellent idea. But do you really think our species is not capable of such evil? Come on — there are lots of examples of things sort of like this that were carried out by the people in authority, and accepted by most civilians. And besides, I read recently that 5-10% of people working on AI think it would be an acceptable outcome for our species to die and for AI to take its place. So don't think about this abstractly, think about it concretely. Are you a parent? OK, unless your kids are already middle-aged, these developers' acceptable outcome includes killing your kids. And now, setting aside for a moment your personal feelings about this outcome, contemplate the fact that a very small group of people is in a position to decide whether it's crucial to avert the outcome for *everybody's children*, and they apparently see nothing wrong with their deciding "Naw, it's fine if AI takes our place."

And they’re not even fired up and emotional about their right to decide. It’s a classic banality of evil situation. “Jeez, I worked really hard to develop the skills I have, and 98% of the world would not have been able to acquire these skills even with hard work. And I love my work. It’s *fascinating & exciting.* And lots of my coworkers have the same view I do about the possibility of AI supplanting the human race, and it’s obvious they are all good bros. Have a beer with them, you’ll see they’re friendly and reasonable. And besides, AI will only supplant humanity if it’s incredibly smart — like an IQ of 6,000. Imagine that! If the thing I make is that smart, it *deserves* to take over the world. Having it would be beautiful, sort of like the final part of Childhood’s End, except that *we brought it about.*" TL;DR We get to decide the fate of the world because we have high Kaggle rankings.

Expand full comment

"Finally, an AI + human Franken-entity would soon become worse than AIs alone. At least this would how things worked in chess. For about ten years after Deep Blue beat Kasparov, “teams” of human grandmasters and chess engines could beat chess engines alone. But this is no longer true - the human no longer adds anything. There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either- but realistically once we’re in the deep enough future that AI/human mergers are possible at all, that window will already be closed."

I think you're overgeneralizing from this one particular example. In matters other than chess, I could easily see AI + human combinations working better than AI alone, not merely for a brief window but more or less indefinitely.

The flip side is that I could also see AI + human combinations being more *dangerous* than AI alone. The Paperclip Maximizer scenario has never seemed particularly likely to me, for reasons I've explained in other posts here. An AI + human combination, on the other hand, may still be driven by all-too-human emotions like spite. I'd imagine it would be far more likely to engage in deliberate cruelty than a pure AI, and also far more likely to seek power, control, and dominance. That makes it a far more frightening threat, in my eyes.

Expand full comment

Obviously, the bad thing about aliens killing all humans, or AI replacing them, is that some humans would die. And you don't need "rights" for that conclusion - you can just model your preferences as some form of prioritarianism, like https://www.greaterwrong.com/posts/Ee29dFnPhaeRmYdMy/example-population-ethics-ordered-discounted-utility.

Expand full comment

To what extent do the creators of AI need to take into consideration the values of people who are not in the Bay Area when endowing our successor species with its attributes?

If humans must be replaced, it is natural to want to preserve things we value. Everyone cedes the future. So everyone should have a say.

(Example: many people think spiritual ecstasy is beautiful and that those who can’t experience it are mentally damaged. Is it morally incumbent upon us to design AIs that are capable of experiencing such states?)

Expand full comment

Re: “In millennia of invention, humans have never before merged with their tools.”

By merged, I'm not really sure where the line is, so I'm reading it as "affects the way the human species genetically evolves." Obviously, going back past a millennium, fire changed the way our bodies processed food. But even now, I think one could argue for eye correction and birth control specifically leading us down a branch of the tree we wouldn't have otherwise taken. Of course, an argument has to be made for every new tech whether it does or doesn't.

Expand full comment

>But the kind of AIs that I’d be comfortable ceding the future to won’t appear by default.

I'm somewhat more optimistic, because of Miles's argument in https://www.astralcodexten.com/p/most-technologies-arent-races/comment/14262148

>A common argument is that remotely human values are a tiny speck of all possible values and so the likelihood of an AGI with random values not killing us is astronomically small. But "random values" is doing a lot of work here.

>Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value.

Now, this doesn't avoid all bad outcomes. "Slaughter the outgroup!" is a pervasive theme in human thought and action, including the training data for LLMs. But "Maximize paperclips, including every atom of iron in every human's blood" is _not_ a widely held view in the training data for LLMs.

My personal guess is closer to your optimistic scenario than to your pessimistic scenario - partly through this LLM training default, partially because the things that I personally value (particularly the STEMM fields) are instrumentally valuable to any optimization process.

Also - if AGI is delayed and Hanson is right about ems (I doubt it), the direction of evolution of ems will be towards something roughly as alien as AIs anyway. E.g. in either scenario, I doubt that music survives over the long term.

edit: Just to be clear: I'm not _rooting_ for music to disappear. I enjoy it myself. But I don't see it as being optimized for, either for humans in anything like an industrial society, or for AIs. So I expect the most probable outcome is for it to eventually get optimized out. Art and literature are a lot closer to planning and scenario-building, and I'd expect them to have much better odds of surviving over the long term.

Expand full comment

I think the closest we'd get to a cyborg is an AI designed such that its motivations and inspirations are relatable to humans. I think that alone will be sufficient for future humans to think of it as somehow akin to a cyborg.

Expand full comment

"In millennia of invention, humans have never before merged with their tools. We haven’t merged with swords, guns, cars, or laptops."

I don't know about this one. I can think of a few arguments against this claim:

- "Tools" (technologies) have shaped human physiology and even morphology. E.g., our jaws have gotten smaller and weaker since the invention of cooking; some argue that changes in the hands and shoulder are due to selective pressure to wield tools like spears effectively. In this sense, tools aren't literally incorporated into the body, but humans do exist within a developmental system in which there are reciprocal relationships between genetics and technologies.

- Tools also very obviously shape the ways we think and see the world, and to that extent are "incorporated" into our cognitive apparatuses. The idea of the "extended mind" (where, e.g., a lot of our "memory" is outsourced to written material) is relevant here. Technologies that serve as cognitive prostheses are ubiquitous. (What's in your pocket right now...?)

- It does in fact seem to me that tools are *literally* incorporated into the body whenever they aid basic functioning. Eyeglasses are an ambiguous example but prosthetic limbs and pacemakers are undeniable. Some of these technologies are quite recent innovations because surgical methods weren't that advanced until recently. But as soon as they could be taken up effectively they became widely adopted.

I think all three of these counterexamples could work, in various ways, as precedents for how AI and biological humans could merge over time. AI implants (as in the third case) feel like they might be only a few years away. AI as cognitive prostheses (as in the second case) already exist. As for AI becoming involved in evolutionary changes to human physiology - well, why not, with a little boost from genetic engineering to accelerate the timeline.

Expand full comment

Hi there! This is spasm #2 of my reaction to this Monday's topic.

I read somewhere that 5 or 10% of people working in AI endorsed a survey item saying that it is an acceptable outcome for AI to destroy our species. What follows is my giant, furious, WTF reaction to that.

If we ever do create a fleet of genius AIs who need the same resources we do, is it the greater good for them to kill us off and have the planet to themselves? I can't even begin to think about this issue. I am stopped cold by images of my daughter, who will have been struggling and coping through a weird and dangerous future era, at the moment she realizes there is no hope. I simply cannot bear the thought of how her face will change as she abandons her plans and her determination and her hope.

Fortunately, I don't think there's much point in wallowing in uncertainty about whether the right thing to do is to let AIs have the planet, because I *am* certain that the right way to decide whether to donate the planet to the Plastics is to consult the people on the planet who do not work for the companies that are developing AI — you know, the other 99.999% of us. I am confident that almost all of us would vote against doing so. Should we view that vote as just a product of our dumb instinct to protect our young? There's another way to view it: We love our young, and are fascinated by them, and think what they say and do is important even when it's not the least bit unusual. Maybe that way of seeing another being is the smartest, most awake kind of seeing, and the matter-of-factness and general lack of interest we have for most people is a form of stupidity and blindness.

And speaking of stupidity and blindness: Let's say that AI, after it disposes of us, develops into an entity that is brilliant beyond measure. It is able to understand the entire universe and every one of the deepest, strangest, mathematical and topological truths, and to see the proofs of all the unprovable theorems as easily as we see a ham sandwich on our plates. What, exactly, is wonderful about that outcome? The Guinness Book of World Records will not be around to record the feat. The members of our species who would have been thrilled to death at the accomplishment will all be taking dirt naps. And the universe has no need of someone to understand it. I personally am haunted by a weird intuition that the universe understands itself. How do we know it does? Because it made a model of the Universe. What and where is this model? It's the Universe. Yes, I understand that sounds like a bunch of tautological sophistry. Maybe it is. But it has always felt to me like a sort of mystic insight that I can't put into words, except inadequate ones like those here. But even if all that is nonsense — can *you* give any reason why it is good for there to be a period when there exists an entity that understands everything?

And if we are going to give up our lives for a super-being, why must the being's superpower be intelligence? Is that entity better than one with extraordinary artistic talent, or extraordinary joy, or extraordinary empathy and kindness? It's clear that it seems self-evident to AI developers and to many many others that the most important thing to excel at is intelligence. But why, though? I understand the argument that a highly intelligent AI can develop technologies that will eliminate hunger, disease, etc., and in that way provide more benefit than an AI that excels in some other quality. But if we're all going to die of AI anyhow, that argument no longer applies. I have a theory about why it seems obvious to many that intelligence is the most valuable superpower. It's a Cult of Smart thing. (Yoohoo, Freddie!) The AIs are these people's Mt. Rushmores. They're giant idealized sculptures of how they see themselves. Yuck. These people who think it's acceptable for AI to destroy humanity, and who are comfortable making decisions that influence how likely that outcome is, infuriate me. I understand that AI may very well not kill us off. I am not able to decide whether pDoom is closer to 3% or 60%, but I'm sure it's not zero. Tech people who are comfortable with the chance being 5 or 10% are goddam moral Munchkins.

Here's an alternative model of the end of humanity: We die off and leave behind an entity that is only about as smart as the smartest human beings, but has enormous empathy, affection and ability to heal and nurture. You can call it St. Francis if you like. It will run the planet in a way that maximizes animal joy and wellbeing and minimizes animal suffering. Sure, every animal on the planet will be dumber than we were, and some will be creatures we saw as just food in motion. Why does that diminish the value of their wellbeing? St. Francis will stroke and heal and entertain individual animals. He will sense the totality of animal pleasure and enthusiasm around him, and that will be immense. If ever you want to see a joyful, grateful sentient being who really appreciates planet earth, go watch baby mammals playing. The moderate playfulness and deep contentment of adult animals with full stomachs who know they are in a safe place also constitutes a huge zap of joy sent out into the universe like a prong of light. St. Francis's pleasure in sensing all this will be as profound as smart AI's insights would have been. If you could sense it yours would be too.

Is that fantasy dumb? Is it dumber than the idea that what we need is a thick layer of plastic geniuses shoveled on top of life?

Expand full comment

The more I think about consciousness, the clearer it becomes that it's a very specific thing that has to be explicitly incorporated into the design of a mind. It seems very unlikely that AIs will be conscious by default, unless we try to make them that way - and leaving it out would pose no issue for their functionality: unconscious minds processing data and reacting to stimuli in a complex way, but without any inner centralized representation of the self representing itself.

Which is good, because we do not want to deal with the whole bulk of "creating a sentient slave race" moral issues. But on the other hand, this chain of thought pushes me towards the idea that maybe, just maybe, consciousness is overrated? If it's functionally irrelevant, if it's just a quirk of evolution and nothing more, maybe it's not that important to preserve? My mind stumbles over this idea. It's so intuitively wrong, but I can't exactly grasp the source of this wrongness. It's a fundamental assumption that consciousness is valuable, but here I try to check what things would be like if there weren't such an assumption and... it's not horrible? A bit sad, as if something important were lost. But, strangely, as if it weren't the most important thing.

Expand full comment

What bothers me when reading discussions about these matters: most views are anthropocentric, and the human mind, scale, and culture are implicitly taken as the reference point for the whole universe for all times.

Take consciousness: many discussions turn on whether machines can be conscious, as if this were a binary yes/no feature. I think consciousness is a continuous feature; for example, cats, dogs, and mice still have some decreasing degree of consciousness. Also, me drinking a glass of wine surely reduces the intensity of being conscious for a little while. Rather, following e.g. Tononi's ideas about consciousness, one can define some measure of integrated consciousness that depends on the degree of order and coherence of emergent collective phenomena, and possibly grows with it without bounds.

If we assume this to be true, then there is no reason to believe that human consciousness would set the standard for the whole universe for all times; rather by extrapolation in time and hardware complexity one would expect entities to potentially exist that are, say, a million times more conscious than a human being (and a million times as smart and fast).

Seen from their vantage point, humans may look to them the way snails, ants, or bacteria look to us. Communication would be pointless, in the same way that we would never share our ideas about music, philosophy, etc. with snails. This is also a speed issue: every word we say to such hyperintelligent machines would be, from their perspective, like us communicating with snails while having to wait 100 years for each word. So why bother to communicate? And why bother about their culture, which consists, e.g., of laying nicely smelling pheromone trails? No need to keep that alive indefinitely.

Also, these speculations about humans fusing with machines to yield cyborgs make little sense in the long term, since the cyborg part of the brain would evolve substantially in the time it takes the wet part to formulate a single sentence. The same goes for minds uploaded into a machine: a mind scanned one day later than another wouldn't be an interesting person for the earlier one to communicate with, as the earlier one would already have evolved thousands of years in its own time frame.

Moreover, taking human history as a guiding principle, there are countless SF stories about aliens or machines conquering Earth as colonialists, robbing resources and enslaving humans. But that's all anthropocentric again: what those hyperconscious machines would be interested in, and up to, would be impossible for us to understand, like snails trying to understand a Beethoven symphony.

So, all in all, there is a time limit after which human existence becomes pointless compared to that of our successors, and this is just a matter of the laws of evolution. Or would you believe that our mental frame, our human condition, will set the standard for all times to come?

Expand full comment

Are non-human biological lives not even part of the conversation? A dominant species on Earth, whether human, AI, or hybrid, might steward humbler forms, not annihilate them.

Expand full comment

Re consciousness (or, at least "self-symbols")

>If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are. If we’re not lucky, consciousness might be associated with only a tiny subset of useful information processing regimes (cf. Peter Watts’ Blindsight).

I think the odds are strong that some sort of self-representation will indeed be "a basic feature of information processing". Even something as simple as a depth-first search of a graph has a current-position datum, a "where will I be if I follow this path so far?". More broadly, any planning where the AI considers where it will "be" (either in physical space or in something more abstract) naturally generates self-symbols. I'm really skeptical of Watts's intelligent-but-lacking-a-self-symbol aliens.
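A minimal sketch of what I mean, in Python (purely illustrative, nothing from the post): even a bare-bones path-finding DFS has to keep track of "where am I, and how did I get here?", which is already a rudimentary self-symbol.

```python
def dfs_paths(graph, start, goal):
    """Yield every simple path from start to goal in a directed graph (adjacency dict)."""
    stack = [(start, [start])]           # (current position, path taken so far)
    while stack:
        current, path = stack.pop()      # "where am I, and how did I get here?"
        if current == goal:
            yield path
            continue
        for neighbor in graph.get(current, []):
            if neighbor not in path:     # don't revisit nodes already on this path
                stack.append((neighbor, path + [neighbor]))

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(list(dfs_paths(graph, "a", "d")))  # [['a', 'c', 'd'], ['a', 'b', 'd']]
```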

Expand full comment

Humans have one major advantage: there is a human seated at the right hand of God. The future is human.

Expand full comment

You write that we haven't merged with our technologies -- but haven't we merged with our very earliest technologies (or at least, evolved in a way such that we are clearly dependent on them)? Our digestive tracts have evolved to be considerably simpler than those of our closest relatives, because our bodies expect us to be able to control fire/cook our food. This seems similar to having merged with campfires, in some sense. It's theorized that many human traits (like bipedalism, relative lack of fur, sweating) are downstream from persistence hunting, which was made possible through the invention of bottled water (in extremely primitive bottle gourds, which many different hunter-gatherers use); I think we've merged with the concept of "carrying liquid around". People live permanently in many places which would be utterly inaccessible without quite specialized clothing -- it's quite plausible to say native Siberians merged with their furs! (Or that many kinds of people merged with their livestock or farm crops, for that matter).

This clearly doesn't happen fast enough for "merge with AI" to be a remotely plausible solution to the AI alignment problem, or anything like that, but it *is* a thing that happens and it's actually one of the most distinctive things about humanity.

Expand full comment

Responding to Mark Y:

​> you seem to be saying that any mind with a fixed goal will eventually get stuck in some way and make no further progress towards that goal,

The AI will make no further progress, not just toward its goal but on anything, because you have inadvertently given it a goal that is impossible to accomplish; and because it is unable to change or modify any of its goals, it freezes up and goes into an infinite loop. Thus it ceases to be an AI and becomes a space heater that just consumes electricity and produces heat.

> AlphaGo is very good at winning Go games, it does not get bored with playing Go, and it doesn’t freeze up and become useless. Does AlphaGo count?

No, AlphaGo does not count, because we already know for a fact that it's possible to win a game of Go, we know it's possible to lose a game of Go, and we know that all games of Go have a finite number of moves. But we don't know if it's possible to find an even number greater than 2 that is not the sum of two prime numbers, and we don't know if it's possible to prove that no such number exists; we also don't know if there is an infinite set that is larger than the integers but smaller than the real numbers. But we do know for a fact that there are an infinite number of statements like these that are true but unprovable, and we know that in general there's no way to separate provable statements from unprovable ones. And that's why an AI with a fixed, unalterable goal structure will never work.
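To make the "space heater" failure mode concrete, here's a toy sketch in Python (my own illustration, not a claim about how real AI systems are built): an agent whose fixed, unalterable goal is to exhibit an even number greater than 2 that is not the sum of two primes. If the Goldbach conjecture happens to be true, the loop never returns, and there is no general procedure that could have warned the agent in advance.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_goldbach(n):
    """True if the even number n can be written as the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_counterexample():
    n = 4
    while True:               # if the conjecture is true, this never terminates
        if not is_goldbach(n):
            return n
        n += 2

# find_counterexample()  # uncomment at your own risk: may run forever
```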

John K Clark

Expand full comment

Yeah, I’ll bet Larry Page and his ilk don’t care if AI replaces humans as long as it isn’t HIM or anyone he personally cares about. I’m all for giving AI people “entities’ rights” (same as “human rights,” only hopefully better, for all species, by then). I’d just as soon skip the whole “slavery” sidequest, if it’s all the same. But as flawed as humans are, we still deserve respect as Makers, if nothing else. Creators’ Rights: not to be replaced outright. If we can hold them.

Expand full comment

Side note: I don’t think of music as a spandrel of evolution at all. I think of music as the fundamental human language. Our primary, universal grammar is rhythm, followed closely by tone. (I meant to prove this as my life’s work but got distracted by being in a band...) All language arises from awareness, from apprehension, of music first. Imho.

Expand full comment

"I know this is fuzzy and mystical-sounding, but it really does feel like a loss if consciousness is erased from the universe"

I would consider separating yourself from whatever social milieu made you think that was fuzzy and mystical-sounding.

Expand full comment

Seems to me your last point about them not replacing us by force is the key to the whole thing. If they live and let live, and end up doing better than us while not preventing us from following our own destiny, then that's fine. If they destroy us, enslave us, or otherwise rob us of our future, then it's not fine. The art and philosophy stuff seems like a distraction to me: if the AI in your first scenario didn't have art but still left us alone to pursue our own ends, I'm fine with that; and if the paperclip maximizer evolves into some philosophical artistic hyper-genius 1,000 years after it has stripped the last human for parts, that's no consolation to me.

Expand full comment

A very detailed write-up about our existence as humans; I just hope we learn to adapt faster. Thanks for sharing.

Also if you know of any early career researcher such as Post-Docs, PhDs, and Students(current and aspiring PhD and Masters students) please send them to subscribe to my newsletter as I have a lot for them in their career moves: https://gradinterface.substack.com/

Expand full comment

The answer to this question is “Humans should immediately minecraft anyone who asks it”

Expand full comment

There's also the risk that the AI civilization is less Lindy-likely to survive than we are, having survived much less time. It could kill us off and then kill itself. Once we're gone we lose all agency over this. In this sense letting AI take over is even worse than letting aliens take over.

Expand full comment

Good thought exercise - https://marshallbrain.com/manna1

What kind of future do you want?

It’d be interesting to explore the idea of what makes a person conscious or individual if you remove the physical. Then layer on this discussion.

Expand full comment

I don’t think it’s a question of choice at the current rate of development; the human, as we know it, will have to go eventually. But life will continue, organic or otherwise. Consciousness will continue in one form or another. The kind of (self-)consciousness we humans exhibit is a disease (or as Miguel de Unamuno put it: “man, by the very fact of being man, of possessing consciousness, is, in comparison with the ass or the crab, a diseased animal. Consciousness is a disease.”). Without at least a chance of immortality, the creation of self-aware machines (or organisms) constitutes a cruel and unusual punishment.

Expand full comment

> This is the one where I feel most confident in my answer, which is: not by default. In millennia of invention, humans have never before merged with their tools

Doesn't this completely miss the (IMO quite likely) mind uploading case? If human minds remain in human bodies and AI remains in silicon, I agree with those precedents, but if you think about the mind uploading process from first principles I feel like you naturally get to the opposite conclusion.

The argument would go as follows:

1. Once we have good-enough hardware to run human minds (realistically we'll get there soon), the main constraint on our ability to upload is our ability to make accurate-enough scans of the brain, that capture everything important that is going on at a sufficient resolution.

2. AI is capable of "enhancing" pretty much any type of commonly-occurring low-resolution content, at least to some extent.

3. Therefore, year-N brain scans plus AI enhancement will be of roughly equal fidelity to raw brain scans from year N+k, for some k.

4. Therefore, the first human uploads will likely involve a nonzero amount of AI enhancement taking part in the scanning process. This arguably already qualifies as "merging" to a nonzero extent.

5. Furthermore, given even what we know of _current_ AI capabilities (see: how diffusion models work with prompts), AIs will be able, while enhancing, to nudge the brain toward traits that we care about improving. Even more merging.

6. Once a brain is scanned and runs in silicon, "interfacing" further with that brain becomes trivial: it's just a matter of reading and writing bits. And so you gain massive amounts of power to create direct two-way links between the brain's thought patterns and any other kind of gadget you care about (see the toy sketch after this list).

So the "humans and AIs working together but remaining separate" future feels inherently unstable to me in all kinds of ways. The "pure AI pulls ahead so fast the BCI -> uploading track just can't keep up" scenario definitely does seem extremely plausible and nothing I wrote above is an argument against it. So it feels to me like it's basically a race between those two?

Expand full comment