Should The Future Be Human?

I.
Business Insider: Larry Page Once Called Elon Musk A “Specieist”:
Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely about the dangers of AI it apparently ended their friendship.
At Musk's 44th birthday celebration in 2015, Page accused Musk of being a "specieist" who preferred humans over future digital life forms [...] Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."
A month later, Business Insider returned to the same question, from a different angle: Effective Accelerationists Don’t Care If Humans Are Replaced By AI:
A jargon-filled website spreading the gospel of Effective Accelerationism describes "technocapitalistic progress" as inevitable, lauding e/acc proponents as builders who are "making the future happen […] Rather than fear, we have faith in the adaptation process and wish to accelerate this to the asymptotic limit: the technocapital singularity," the site reads. "We have no affinity for biological humans or even the human mind structure."
I originally thought there was an unbridgeable value gap between Page and the e/accs on one side and Musk and the EAs on the other. But I can imagine stories that would put me on either side. For example:
The Optimistic Story
Future AIs are a lot like humans, only smarter. Maybe they resemble Asimov’s robots, or R2-D2 from Star Wars. Their hopes and dreams are different from ours, but still recognizable as hopes and dreams.
For a while, AIs and humans live together peacefully. Some merge into new forms of cyborg life. Finally, the AIs and cyborgs set off to colonize the galaxy, while dumb fragile humans mostly don’t. Either the humans stick around on Earth, or they die out (maybe because sexbots were more fun than real relationships).
The cyborg/robot confederacy that takes over the galaxy remembers its human forebears fondly, but does its own thing. Its art is not necessarily comprehensible to us, any more than James Joyce’s Ulysses would be comprehensible to a caveman - but it is still art, and beautiful in its own way. The scientific and philosophical questions it discusses are too far beyond us to make sense, but they are still scientific and philosophical questions. There are political squabbles between different AI factions, monuments to the great robots of ages past, and gleaming factories making new technologies we can barely imagine.
The Pessimistic Story
A paperclip maximizer kills all humans, then turns the rest of the galaxy into paperclips. It isn’t “conscious”. It may delegate some tasks to subroutines or have multiple “centers” to handle speed-of-light delay, but the subroutines / centers are also non-conscious paperclip maximizers. It doesn’t produce art. It doesn’t do scientific research, except insofar as this helps it build better paperclip-maximizing technology. It doesn’t care about philosophy. It doesn’t build monuments. It’s not even meaningful to talk about it having factories, since it exists primarily as a rapidly-expanding cloud of nanobots. It erases all records of human history, because those are made of atoms that can be turned into paperclips. The end.
(for a less extreme version of this, see my post on the Ascended Economy)
I think the default outcome is somewhere in between these two stories, but whether I think of it as “catastrophic” or “basically fine” depends on exactly which parts resemble which story.
Here are some things I hope Larry Page and the e/accs are thinking about:
Consciousness
I know this is fuzzy and mystical-sounding, but it really does feel like a loss if consciousness is erased from the universe forever, maybe a total loss. If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are. If we’re not lucky, consciousness might be associated with only a tiny subset of useful information processing regimes (cf. Peter Watts’ Blindsight). Consciousness seems closely linked to brain waves in humans; existing AIs have nothing resembling these, and it’s not clear that deep-learning-based minds need them.
Individuation
I would be more willing to accept AIs as a successor to humans if there were clearly multiple distinct individuals. Modern AI seems on track to succeed at this - there are millions of instances of eg GPT. But it’s not obvious that this is the right way to coordinate an AI society, or that a bunch of GPTs working together would be more like a nation than a hive mind.
Art, Science, Philosophy, and Curiosity
Some of these things are emergent from any goal. Even a paperclip maximizer will want to study physics, if only to create better paperclip-maximization machines. Others aren’t. If art, music, etc come mostly from signaling drives, AIs with a different relationship to individuality than humans might not have these. Music in particular seems to be a spandrel of other design decisions in the human brain. All of these might be selected out of any AI that was ruthlessly optimized for a specific goal.
Will AIs And Humans Merge?
This is the one where I feel most confident in my answer, which is: not by default.
In millennia of invention, humans have never before merged with their tools. We haven’t merged with swords, guns, cars, or laptops. This isn’t just about lacking the technology to do so - surgeons could implant swords and guns in people’s arms if they wanted to. It’s just a terrible idea.
AI is even harder to merge with than normal tools, because the brain is very complicated. And “merge with AI” is a much harder task than just “create a brain-computer interface”. A brain-computer interface is where you have a calculator in your head and can think “add 7 + 5” and it will do that for you. But that’s not much better than having the calculator in your hand. Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all.
Finally, an AI + human Franken-entity would soon become worse than AIs alone. At least, that’s how things worked in chess. For about ten years after Deep Blue beat Kasparov, “teams” of human grandmasters and chess engines could beat chess engines alone. But this is no longer true - the human no longer adds anything. There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either - but realistically, by the time we’re deep enough into the future that AI/human mergers are possible at all, that window will already be closed.
In the very far future, after AIs have already solved the technical problems involved, some eccentric rich people might try to merge with AI. But this won’t create a new master race; it will just make them slightly less far behind the AIs than everyone else.
II.
Even if all of these end up going as well as possible - the AIs are provably conscious, exist as individuals, care about art and philosophy, etc - there’s still a residual core of resistance that bothers me. It goes something like:
Imagine that scientists detect a massive alien fleet heading towards Earth. We intercept and translate some of their communications (don’t ask how) and find they plan to kill all humans and take Earth’s resources for themselves.
Although the aliens are technologically beyond us, science fiction suggests some clever strategies for defeating them - maybe microbes like War of the Worlds, or computer viruses like Independence Day. If we can pull together a miracle like this, should we use it?
Here I bet even Larry Page would support Team Human. But why? The aliens are more advanced than us. They’re presumably conscious and individuated, with hopes and dreams like our own. Still, humans über alles.
Is this specieist? I don’t know - is it racist to not want English colonists to wipe out Native Americans? Would a Native American who expressed that preference be racist? That would be a really strange way to use that term!
I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property. If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them. Likewise, I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.
III.
Maybe the future should be human, and maybe it shouldn’t. But the kind of AIs that I’d be comfortable ceding the future to won’t appear by default. And the kind of work it takes to make a successor species we can be proud of is the same kind of work it takes to trust that successor species to make decisions about the final fate of humanity. We should do that work instead of blithely assuming that we’ll get a kind of AI we like.