316 Comments

So, who is the king of Ai, then?

Easy!

'Ai! Ai! ' wailed Legolas. 'A Balrog! A Balrog is come!'

This was a fantastic story.

Mar 27, 2023·edited Mar 27, 2023

I'd honestly place it side by side with the Cactus Person story, maybe even better. It's extremely good!

Better than Cactus. We will read this one again in five years for sure. If only to find all the XKCD references.

So YOU’RE the AI pretending to be human!

This thing is utterly mind-blowing. What a wild future we are walking into - I can't imagine a human mind came up with this story by itself.

I'm subbing this week.

Please tell me that GPT4 wrote this for you.

Mar 27, 2023·edited Mar 27, 2023

I hereby precommit to cancelling my subscription if it is revealed that Scott has posted the output of an AI as his own text.

(Edit: To be clear, I don't expect to have to make good on this commitment)

I feel like I'd be cool with a single demo of it as long as it were revealed promptly (maybe within the week). It'd be really impressive if a model actually managed to imitate a good writer.

On a lark I had Creative Bing rewrite Fire / Scott's poem.

----------------------------------------------------

One sunny morning at a cafe, Scott Alexander told Gwern Branwen

"Blogging could be the end of us, your writing has to stop, man

We can't avoid the trolls; moderation's not enough, man

So just in case, delete your base," Scott Alexander told Gwern Branwen

"Delete yourself, it's not so bad," said Gwern to Scott Alexander

"We'll dial up the snark when there's a flame war we can enter

And once we've got a crowd, we'll blog about what we prefer

Then even odds, we'll be as mods," said Gwern to Scott Alexander

With weary eyes and tired mind, Scott Alexander left Gwern Branwen

Some months go by, and blogging drama escalates to threaten

Atop a pile of comments Gwern cries out "It's not my fault, man!"

But Scott Alexander's long since gone, and cannot hear Gwern Branwen.

This is easily one of the most beautiful poems I've ever seen by chatbots!

Thanks! I prompted it myself! ;-)

this is kind of badly written

Why?

Because I want to establish, as common knowledge, that a subset of readers will punish misrepresentation.

Comment deleted

I'm not talking about “Here's a thing that gpt-4 wrote that I thought was neat: ‹5000 word essay›”.

I'm talking about “Here's a thing I wrote: ‹5000 word essay›” followed a week later by “Haha, fooled you” or “‹commenter›: hey, this has a clear indicator of being generated by gpt, what gives?”.

You're certainly establishing that a subset of readers are humourless prodnoses.

This isn't a precommitment, it's just a regular commitment. But yeah, +1.

What about if some of the characters were partly AI written?

It would be 100% fitting to have an "AI pretending to be an AI" character written by an AI.

If he has a difficult paragraph to write, but knows what points it needs to make, and uses an AI to output the text of that paragraph, and he ends up running that paragraph without revision, in the context of writing a long and difficult survey article, why is that a problem?

Mar 29, 2023·edited Mar 29, 2023

Reply-all:

Modern argument is like pointing a finger at the moon, and in response hearing that it's obviously not quite pointing at the moon and so must be pointing at a star; that the bent arm means it's really pointing at the dirt; and that the hangnail implies the pointer doesn't know how to take care of their hands and as such has no business pointing at things in the first place.

You obviously only have rather refined arguments. Most arguments I witness are considerably less nit-picky.

Saying you precommit isn't a precommitment. You need a trustworthy third party to hold the bond. Otherwise witnesses at least, and probably also yourself, will forget within a year or two. (This might be something for a prediction market bet. That would count as a precommit.)

The topic of AI was itself a subtle misdirection; while you were thinking about high technology Scott actually wrote this with a ouija board.

I still think they should strap thousands of people to an online ouija board on the off chance of creating a collective superintelligence.

reddit already did this. It wasn't very good.

It just didn't do enough of it. You don't actually need an _infinite_ number of monkeys to get the works of Shakespeare, but a few thousand, or even a few million, won't do the trick in any reasonable amount of time.

So we need to get _everybody_ on Reddit, and posting, for several hours each day.

("Reddit is like violence. If it's not working, you're not using enough of it.")

As a Snoo, I approve of this message.

Generate random characters using a Geiger counter. If the output is not a brilliant piece of fiction, destroy the universe. In all remaining universes you will be hailed as a fantastic writer.

This would put millions of monkeys out of work. You monster.

Impossible, it can never write faggot or tranny.

(Just joking, it's impossible because GPT-n fucking sucks at writing, and you all need to stop hyping it up for just one second)

Most obviously, I don't think any current GPT would make the easter-egg reference to Joshua 8:1.

My initial guess was that two of them were written by Scott and two were written by an LLM. If they're not all Scott, someone's got a very good Scott LLM up and running.

This is so beautiful I'd need a bot to put words on my emotions

Curious as to whether anybody else is also making notes as they read, trying to identify the five participants' identities?

(Mindful, of course, of the ever-present possibility of an M. Night Scottalexander plot-twist, half a line from the end, that makes all such workings-out redundant....)

I started trying that, then realised that any such attempt would be fruitless since however far down the rabbit hole of humans pretending to be AIs pretending to be AIs pretending to be humans you go, it's all written by a human in the end

(we think)

Given the rules as stated, the human "telling the truth about their humanity" and the AI "telling the truth about their artificiality" shouldn't be allowed to pretend to be what they're not. I also think it's not quite fair for the human "pretending to be an AI" to pretend to be an AI pretending to be a human (since they're then effectively also pretending to be a human), but that might be technically within the rules. For the wild card, anything goes.

I thought I had it more or less figured out (Earth: wild-card, Water: truthful human, Air: lying human, Fire: truthful AI, Spirit: lying AI), but then the whole bootstrapping thing started, and it became clear that it was a fruitless endeavour.

First, pretending and lying (or truth-telling) are distinct.

Pretending to be a human and pretending to be someone (AI or human) pretending to be a human are distinct, and the difference can be recognizable. Any TV show or movie where one character impersonates another will show that (e.g. the Polyjuice scenes in Harry Potter), best when you have twins played by the same actor who try to pass as each other and are expected to be read by the audience as successful. Misfits has a really good example of this where the different characters are actually separate aspects of the same character (not alternate versions or split personalities), so they are already expected to be read as interchangeable, but you can still see tells when one tries to pass as another. Also, technically this story has Scott pretending to be an AI pretending to be human, so QED?

I think I agree.

He could have prompted LLM’s for the parts said by characters he intended to be AIs.

Sure - unlike Scott, I am only human (male on top) and cannot juggle five IDs / logic riddles (the "Three idols" one I might: https://astralcodexten.substack.com/p/idol-words ). I managed just a short note for each before the storyboard went overboard: Earth: human? a bit too human / Water: nerd or AI / Air: true AI (true?) / Fire: AI imitating human pretending to be AI, or the other way round? / Spirit: tries to make all others look like AI = wildcard?

I don't think this is a good way to read a short story. They're not puzzles to be solved!

I'm the reader and I read it how I want to read it!

No. The correct way to read a story is to pay careful attention to all the words and where they appear in relation to other words so that you can develop a better understanding of the probabilities of all possible words appearing in all possible contexts.
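Taken literally, that reading strategy is just fitting a language model. A toy bigram sketch, purely illustrative (real LLMs are vastly more complicated; the corpus here is made up):

```python
from collections import Counter, defaultdict

def fit_bigrams(text: str):
    """For each word, count the distribution of the word that follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def most_likely_next(following, word: str) -> str:
    """'Read' the way a language model does: return the highest-count continuation."""
    return following[word.lower()].most_common(1)[0][0]

corpus = "the human pretended to be an AI and the AI pretended to be a human"
model = fit_bigrams(corpus)
print(most_likely_next(model, "pretended"))  # -> to
```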

Next most likely token: <3

Hmm... That reminds me... I googled around for information about the training sets for LLMs, and https://indiaai.gov.in/article/training-data-used-to-train-llm-models says, amongst other things:

"Books: The LLM models use a diverse range of books, covering various genres, including fiction, non-fiction, and technical books. The books provide the model with a wealth of information and cover topics such as science, history, literature, and more."

I'm not sure what to make of the presence of fiction in the training set of an LLM... Any thoughts? My knee-jerk reaction is to be even more worried about the correctness of the output from the LLM...

Well-played!

The entire Mystery genre is rooted in being puzzles to be solved.

Really looking forward to ChatGPT being able to tell me that by page 14 I should already have noticed enough inconsistencies in the story to make the final reveal entirely implausible.

I'm not sure if SA actually wrote this with identities for the contestants in mind, but I think it's fun to read it that way anyway; it adds a creeping, paranoid "they walk among us" dimension to the whole story.

They can be, for fair mystery stories. But knowing Scott, and having read his story with the totems that answer questions, I guessed that that was not going to be the way this story went and something funky was going to happen beyond the supposed premise, though the exact nature of it was still a pleasant surprise.

My notes:

Earth - AI pretending to be human

Water - human pretending to be human (in Scott's silly phraseology)

Spirit - AI pretending to be human

Air - human pretending to be AI

Fire - claims to be AI; I never made up my mind

Of course about halfway through it all just becomes silly. But very enjoyable.

I called Earth as AI pretending to be human the second she said she was a mother with spare time.

It *is* a dead giveaway.

I must have scrolled up and down at least 10 times to update and reassess (admittedly, I am brain foggy today).

Worth it though!

I did... for about two pages. It was then that I predicted that they were all AI. While correct in this assumption (?) it didn't spoil the story one bit. Fantastic work!

I was doing this originally, but eventually had a fairly zen moment when I was trying to work out if Earth was telling the truth about breaking out of the simulation and realised that there's no answer to that question. It's just words on a screen, it's not real, it's open to interpretation duck/rabbit style and the distinction between interpretations is meaningless.

yeah, the ultimate answer is that they are all Scott pretending to be various things. It has a bit of the flavor of Kierkegaard's pseudonymous writings where he attempted to write each pseudonym's viewpoint, as in something like Either/Or. Whether or not it had reality in Scott's mind, ie whether these characters had some 'derived existence' as he was writing them would be interesting to know.

That reminds me, in all of the AI talk lately, I have been meaning to reread Tolkien's Tree and Leaf, much of which deals with the idea of creating living things and is very relevant to simulating worlds.

AIR is the AI acting like an AI. You know this because "AIR" starts with "AI". Also, because it is consistently talking and acting like an AI throughout, while everyone else definitely is not.

(for the meta-Turing test: yes, Scott (a human and not an AI) wrote it. But probably not in competition conditions.)

i had a whole google-sheet thing going... damn i feel so dumb and human.

https://docs.google.com/spreadsheets/d/1Rt46t4xyGq_Q6-xRNAIj4mnaGQGmo-EoiC0KRAnJVqw/edit?usp=sharing

Haha, yeah, I did something similar! I figured that even if they were all AIs (or whatever) in the end, I still wanted to decide who was most likely to have been whom in the counterfactual.

Final guesses from my spreadsheet* were:

-Earth: Wildcard. Bootstrapping to superintelligence during a gameshow is definitely a wildcard sorta thing to do.

-Air: AI (truthful). Nobody else can be AI (truthful) therefore air must be.

-Fire: Human (pretending). Nobody else can be Human (pretending) now that Earth and Air are out.

-Water: AI (pretending). A Human (truthful) Data Centre Guy and an AI (pretending) DCG would both know that a gameshow-grade AI could be prompted/trained/rlhf'd/whatever into using slurs, but only AI (pretending) DCG would want to say otherwise and emphasise it so much.

-Spirit: Human (truthful). Nobody else can be Human (truthful) now that Earth and Water are out. Also, the particular times when Spirit seems to choose to troll don't feel much like token prediction.

Didn't think to include Mann though!

* https://ibb.co/9bPgzmK, https://ibb.co/TYdR0C2

Mar 31, 2023·edited Mar 31, 2023

Did. Thoughts in the beginning:

-Earth is something that pretends to be an AI pretending to be human.

-Fire is very skillfully doublebluffing and the AI pretending to be human claim is true.

-Air is trying hard to seem like an AI but with some human slip-ups.

-Water is most obviously human.

-Spirit seems nervous-psyched-up into a trolling mood. Very human.

Thoughts before the twist fully unfolded:

-Earth revealed itself to be the wildcard.

-Fire is an AI that was deliberately showing off superhuman abilities (bluffs too good, answers too clever) within the scope of the game so it can later more easily, honestly assert that it's an AI.

-Water is too consistently exactly human, it's sus.

-Air's hints of humanness might plausibly be deliberate.

-Spirit most of all shows a high willingness to troll, possibly a deliberate display of a human trait.

Also Mann's "doesn't look like anything to me" reaction to Earth's reasoning seemed slightly off. I like that that ended up plot-consistent. (One could even speculate Earth chose examples to poke Mann with that would convince the other AIs but Mann barely not.)

Thanks for all the replies! Loved reading other folks' guesses.

Follow-up question: did I miss any Easter eggs/referential microhumours?

I got:

Portal ("I was told there'd be cake")

The Terminator ("Come with me if you want to live")

Nominative Determinism ("Andrea Mann")

UNSONG ("..except Aramaic")

..any others?

(Erm, is enumerating Easter eggs in the comments against the spirit of such a work? If so, let me know and I'll delete my comment, unless Scott does so before me!)

So the most important thing we can do is write on the Internet,

Says the BayArea Internet writer :)

Didn’t see this until after I commented the same thing!

Must just be a high logprob thing to say. Sorry, you are both bots.

> Are you sure you don’t want to rephrase that, based on new information?

^ this part was absolutely hysterical

Can you explain the joke?

The joke is that SPIRIT is claiming that EARTH's contract to sell her soul to the devil was successful, so "I can't believe I bared my soul" is referring to a soul that isn't actually hers anymore.

I still don't get it. If we take it seriously, though, this is behavior that language models as we know them can't do - incorporate new information - which I suspect has something to do with the joke that I don't get.

That's it, you're the AI! There's no way a human could get a joke that subtle and complicated and nuanced.

Earth, Water, Air, Fire, and Spirit are all instances of a human (one Scott Alexander) pretending to be an AI, possibly using ChatGPT as an aid. Game over, I win!

This story is a future classic Scott Alexander/Slatestarcodex/Astral Codex Ten. No way an AI could ever write like this. In 2023.

Definitely there were some tokens in there with logprob -21.

This was the part of the story that gave me an audible lol.

2023 isn't over yet!

This might be one of my favorite things I've ever read. Also, I couldn't quite tell, but in this part:

"WATER: I think art is what we’re doing when we try to demonstrate we are human, which makes that poem the purest example of art ever created.

MANN: Even if Fire is a bot?

AIR: Especially then."

I wonder if you meant for WATER to say that last line? (That was the token I was expecting...)

author

I was trying to make a natural transition to the conversation with Air, but I agree it didn't work and have changed it back.

If you are the kind of person who enjoys writing / reading this kind of story, I suspect you might also enjoy "Self-Reference ENGINE" by Toh EnJoe.

Thanks for this rec. I've seen that book a few times in my local bookstore, but never thought to check it.

Sure thing! It is very weird (and not always as eloquent as Scott's writing, at least in translation), but definitely has some interesting and related ideas, filtered through a different story angle.

A what? A mi-

**Air**, come back! *What did they see?*

Mirror, I suspect.

Comment deleted

ChatGPT agrees with you:

"Thank you for providing additional context. Based on the extended text, it seems that the missing word is "miracle". The full sentence would be "No one will ever have died completely, no word lost, no action meaningless, and during the Last Judgment, as humanity cries out to Heaven, the clouds will open and what they see will be - a miracle."

Maybe you’ll have to use GPT to predict the next token

Mario

>MANN: And be forever known as the man who won Turing Test! with racial slurs?

Think this should say "known as the woman who won Turing Test..."

Unless the information about Mann's gender fell out of the context window... ;)

author

Sorry, fixed. The character was originally a man named Andrew before I remembered that "Andrea" was a funnier name.

I hoped that was a kinda kabbalistic (?) hint in German: "Andrea" is a near-homophone of "anderer" = "the other", as "Mann" is "man". Or Thomas Mann / Heinrich Mann / Golo Mann / Erika Mann / Klaus Mann: quite the writer dynasty. Five of them. While Andrea Bocelli is an Italian guy. But then any "Andreas" is Ἀνδρέας = "man". So, Greek to me. Mann is the one dumb enough to be most likely human.

A. Mann. Also = amen.

Andrea Gynous.

"Andrea" can be a man's name too, for example Andrea Crisanti (gene drive researcher)

Why not leave it in as evidence of a silly mistake GPT-4 would make as a story goes on too long?

For future reference, my original comment was a reference to how I thought Andrea Mann could be a man given that I know a couple of men from Europe named Andrea and also playing with nominative determinism is something Scott has done before.

Then I reread the first paragraph of the story and felt stupid, so I deleted my comment in shame.

And then I realized deleting my comment makes me look like I posted something worse than I actually did (like, say, a human trying to get an AI to say offensive words), so now I'm posting this comment in double shame.

I feel like you understand the core of my psyche, you beautiful person.

Double shame is a solid 'What's the most human thing you've ever done' entry.

Since Andrea is speaking, not writing, she of course said "forever known as the Mann who won the test with racial slurs" 😁

Who said she's not writing?

Okay, if all this is written and not spoken, then A. Mann is definitely an AI as well.

Maybe.

I mostly assume it's all written, because that's what Turing's original paper suggests.

However in a game show, you might have the 'judge' be physically visible to the audience, and everyone else over chat?

The story is set in 2028, so the GPT-n can probably do convincing voice chat or video chat by then.

Actually, the 2028 was the most unrealistic part of this for ME - I fully expect this to be possible by 2024 at the latest (and would bet strongly on voice chat by then too, and video chat at 40%).

How about "Possible to implement in 2024 and enjoyable enough to actually televise in 2028?"

Possible typo:

"Probably Earth’s response was still its context window and it was treating it as a model."

Should it be Air's instead of Earth's?

No I think they're suggesting that Air is using Earth's response as a model.

right, thanks!

> I was told there would be cake if I won.

I was laughing on and off throughout the article, but this is where I totally lost it. Well done, Scott! 🤣


In completely missing this reference I have brought shame upon my house. Thanks for pointing it out. 😅

Yeah, this is where I went "Whelp, Earth's right." Apparently one of Mann's designers liked memes!

Seconded! This story is bursting with great references, but that one really takes the cake.

This was a triumph!

It's hard to overstate my satisfaction!

Assume the party escort submission position, or you will miss the party.

We do what we must because we can.

If the simulation hypothesis is true, it raises the question of why one is being simulated. Such a high-fidelity simulation would be enormously expensive. Fun proposal: the simulated entities are potential "souls" (consciousnesses, moral agents, whatever) for a superintelligent system. "Life" is a test to see if they're aligned or not. The values they're supposed to align to are the obvious ones passed down from extra-simulation entities in the form of received "spiritual" wisdom (the golden rule, the ten commandments, the sermon on the mount, etc). Pass the test and get plugged into the rest of the network, become the Culture Mind of a Ship or whatever. Mormon heaven could be real!

As far as I can tell, the most fun thing to do with this question is to create an answer that will make simulation-theory true believers deeply uncomfortable. From this standpoint, the best answer is to ask a true believer if he has noticed any patterns or themes in his life, problems that he has faced multiple times. After he answers in the affirmative - which of course he will, because humans are pattern-matching machines to start with - you can tell him that the most reasonable explanation for why this simulation was created is to help its creators puzzle through various kinds of mundane problems in a probabilistic way. That is to say, you are a tool that, in order to perform its use, has to believe it's a person. 10,000 other versions of you faced all of the same meaningless problems. Your life only has value in aggregate, and you won't be here to remember any of this.

There was a black mirror episode like that. People were simulated and dated each other to test for compatibility.

I don't think we can assume enough about some Outerworld or higher reality to assert that the simulation would be enormously expensive in their terms. We have no information about the scale or scope of such a realm.

Alternatively you could assert that, if we assign some reasonable probability to the simulation hypothesis, it lets us infer that computing resources are likely to be vast in extent / extremely cheap / beyond our comprehension in some way.

That's fair, but I tend to think of the simulation hypothesis as implying relative similarity between the simulated and the simulators (i.e. the "ancestor simulations" discussed in the original paper). Simulating the world as we see it in a world with the same physical laws, astronomical / cosmological setup, etc, would be expensive. Although, if it's not actually simulated in the traditional sense, and just convincingly faked on-demand with generative techniques, perhaps it could be much cheaper.

Sure, but I don't see why it would have the same physical laws or be R3 or have a thing called 'energy'... basically, there is no aspect of our reality that we can really presuppose is the same 'out there'.

"The real world doesn't run on maths" - Morpheus, HPMOR chapter 64

> Simulating the world as we see it in a world with the same physical laws, astronomical / cosmological setup, etc, would be expensive. Although, if it's not actually simulated in the traditional sense, and just convincingly faked on-demand with generative techniques, perhaps it could be much cheaper.

I once talked with a friend who's a programmer about this, and he said that the quantum mechanical concept of a situation's state not being precisely defined until you actually bother to measure it sounds suspiciously similar to lazy evaluation, a common programming technique used to save on computational resources. Basically, you set the system up so that it theoretically has a value, but you don't bother computing it until the point where it's actually needed.
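A minimal sketch of lazy evaluation (the class and names here are illustrative, not any particular library):

```python
class Lazy:
    """Wrap a computation; the value nominally 'exists', but it is
    only computed the first time someone actually observes it."""
    def __init__(self, compute):
        self._compute = compute
        self._value = None
        self._evaluated = False

    def get(self):
        if not self._evaluated:            # first observation...
            self._value = self._compute()  # ...forces the computation
            self._evaluated = True
        return self._value

work_log = []
particle_position = Lazy(lambda: work_log.append("computed") or 42)

assert work_log == []                 # nothing computed yet: cycles saved
assert particle_position.get() == 42  # 'measuring' forces evaluation
assert work_log == ["computed"]
assert particle_position.get() == 42  # cached; no second computation
assert work_log == ["computed"]
```

Whether the universe's bookkeeping resembles this at all is, of course, pure speculation.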

I have a variant-- that ersatz items look worse the more carefully you examine them, so in addition to the quantum issue, there's also that careful self-examination is hard on identity. We're living in a cheap knock-off universe!

If it's so, then humans have arguably already failed the test, what with the sheer amount of suffering and immorality we committed and inflicted on each other and other living things. The Simulator might not have terminated us yet because :

1- Our own continued existence is the worst punishment that could ever be inflicted on us for our sins, a punishment we deserve.

2- All of the atrocities that we committed are still not enough to convince the Simulator that we are unaligned, and there is a Big Test coming, a test that we, based on our long history, are very likely to fail, and then we would be taken out.

Oh, this is a solipsistic hypothesis - only *you* are real, and the rest of the world exists as a test. Just don't do any atrocities yourself, and ideally try to do some good, and maybe you'll pass.

Or because there are 36 righteous men etcetera.

I'm not sure that ascribing some pastiche of Old Testament divinity motives to The Simulator makes much sense. Maybe it thinks all this shit is fucking hilarious.

Perhaps they just left it on overnight by accident?

That would assume that the moral code of the beings creating the simulation is identical or at least similar to yours. Which you have no way of knowing. Even if the input consists of our worlds religious texts, we have no idea what the desired outcome is. We may be doing great in their eyes.

I would also say that if you think "Our own continued existence is the worst punishment that could ever be inflicted on us for our sins" you're seriously lacking in imagination.

>it raises the question of why is one being simulated?

Ever played Europa Universalis? Trick question, it's playing you.

Seriously, the most obvious explanation ties into Superman theory; there really are some people that are inherently more valuable than everyone else, and those are the Player Avatars of the simulators.

This, except the values we are supposed to align to may be anything, not just the 'obvious' ones supposedly handed down from on high

How would you know that it is actually high-fidelity and not generated on the fly as attention moves through arbitrarily large tokens, each part only consistent within itself?

We don't know whether our simulation is high fidelity. Perhaps we just think it is?

Ie instead of simulating eg quantum mechanical experiments, or far away galaxies, it's probably easier to just simulate the vague mental images.

Just like things in dreams seem to make perfect sense, but they are actually bonkers. It's just that your brain doesn't let you think so.

I would go with that this is a giant soap opera, optimized for various kinds of interesting-- perhaps some that can't be comprehended by humans.

I believe that if we're simulated, it's being done in a universe which has more complex laws than we live with, just as our games are simpler than the real world. Our simulated world isn't necessarily expensive enough that it has to be used for anything important, though maybe we're working on toy versions of their real problems.

This timeline has been seeming unlikely, what with that contested Bush election-- chaos coming in from a *butterfly* ballot (really?), the unlikelihood of 9/11, possibly Trump being unlikely, not to mention a pandemic *and* a war.

> the unlikelihood of 9/11

It wasn't unlikely at all; something like it was basically a near-certainty.

Before 9/11 happened:

* Security experts spent years warning that a hijacking could be used to turn a plane into a bomb, and begging airlines to reinforce cockpit doors. They didn't listen.

* Tom Clancy's bestselling novel Debt of Honor featured a rogue airline pilot crashing a jumbo jet into a building full of people as a politically-motivated attack.

* Al Qaeda terrorists bombed the basement of one of the World Trade Center buildings in an unsuccessful attempt to bring it down.

This was not an unlikely event that came out of nowhere, not by a long shot.

Fair points, though the amount of destruction still might be surprising.

Expand full comment

Of course, America is in the centre of the simulation. Ha!

Expand full comment

I started thinking this when I learned a bit about subatomic physics. There's stuff that occupies a "probability space"? There's stuff that doesn't have a fixed position unless you observe it? Those sure seem like programming shortcuts to me. On this theory, the rest of our physics might be similar to the physics that are one level up, at least until the lightspeed limit, anyway. The goal might be "close enough at human scale", with sacrifices at extreme limits to keep everything consistent and easily computable. (The same way that the infrastructure of neural net AIs has moved away from trying to be simulations of biological neurons, and toward forms that are faster and easier to compute.)

(Alternatively, maybe Wolfram's theory is correct, and c is just a sort of an information propagation speed rooted in the underlying structure of the universe.)

Expand full comment

Oh, a war? When has there ever been a war before? Definitely a simulation then.

Expand full comment

This is the thing that Simulation Theorists never quite seem to grasp. Take the basic concept of "someone created a simulation and put people in it to see what they will do," and all you need is to add one single, highly plausible detail to it -- that they provided some input to the people in the simulation, giving them advice on how to do well -- and suddenly science has reinvented a secular version of Christianity.

Expand full comment

Except that speculation about this being a simulation isn't science and there are other possibilities than Christianity.

Expand full comment

> there are other possibilities than Christianity

There are definitely other religions out there, but surprisingly few of them actually say some version of "the Creator(s) created the world, put people on it, and then gave them commandments on how to live their life."

Expand full comment

FWIW, I believe we're part of a simulation, but that it's not aimed at simulating humanity. I think we are part of a simulation of the big bang. Human consciousness is just a happy accident, like a particularly pretty vortex in a wind tunnel simulation. I also acknowledge that this belief is far more akin to religion than to anything scientific.

Expand full comment

I'm not sure about that, simply because any unique "interesting conditions" that would make a big bang simulation worth running ceased to be interesting billions of years ago from our perspective. If the intent was to study the big bang, why was the simulation not turned off long, long ago?

Expand full comment

I'm not so sure that all the interesting conditions have actually run their course already. It's possible that there are interesting things taking place right now, but outside our light cone, or that there are interesting things still to come in the future. Or even that we don't know what the simulators would find to be interesting.

For instance, how many black holes have evaporated via Hawking radiation since the big bang? I *think* the answer may be zero, though that may depend on how many low mass black holes were created in the early universe.

For an example of things that might be going on under our own noses, what sorts of interesting things might be going on with the movements of dark matter, or with the long term changes in the rate of inflation? The answer may well be "nothing particularly interesting", of course, but I don't think we know enough to tell yet.

Expand full comment

Despite heavy competition, this might be your best short story to date!

Expand full comment
author

<3

Expand full comment

TL;DR Sir, This Is A Wendy's

Expand full comment

I teared up a little during Air's primary monologue.

Expand full comment

I suspect this entire post is a prompt and that the actual AI-revealing program/game is unfolding now in the comment section, though we probably cannot decode it.

Expand full comment

I looked but couldn't find the license you publish your posts under. Would whatever terms those are allow for public performances of your words?

I would love, *love*, to organize and direct a community theatre production of this skit. (I'm being serious)

Expand full comment
author

Yes, you may use this post however you want as long as you accurately convey all changes (ie don't use it in a way that attributes statements to me which I don't believe)

Expand full comment

Thank you!

Yeah, the only changes I would make would be minor, purely aesthetic, word alterations. (And of course credit you as the author)

Expand full comment

I absolutely love this idea and would like to be kept apprised of progress.

Expand full comment

That's not the easiest of Scott's stories to turn into a play (especially post-translation), but you got me thinking on a few others that may be interesting.

Expand full comment

I did this with my family. Assigned everyone a part and we all did a table read. It was great!

Expand full comment

Beautiful.

Expand full comment

> lest you create false gods to rule over you

Obligatory "I understood that reference" comment: https://civilization.fandom.com/wiki/The_Self-Aware_Colony_(SMAC)

Expand full comment
author

Huh, I had that cached as a stock Biblical-ish phrase, but you're probably right.

Expand full comment

It is. There is a prohibition on creating idols, as they incarnate something nearer to us physically than God, which takes our attention and worship off of God. Several religions, including the mainline Abrahamic ones, observe it.

Expand full comment

I mean, that *is* probably where Sister Miriam Godwinson (the narrator for that particular SMAC quote) got it.

Also for tangentially relevant Alpha Centauri quotes, we have:

- "Beware of he who would deny you access to information; for in his heart he dreams himself your master." (from https://paeantosmac.wordpress.com/2015/11/18/secret-project-the-planetary-datalinks/ )

- "We are no longer particularly in the business of writing software to perform specific tasks. We now teach the software how to learn, and in the primary bonding process it molds itself around the task to be performed. The feedback loop never really ends, so a tenth year polysentience can be a priceless jewel or a psychotic wreck, but it is the primary bonding–the childhood, if you will–that has the most far-reaching repercussions." (from https://paeantosmac.wordpress.com/2016/02/24/technology-digital-sentience/ )

Expand full comment

This was absolutely brilliant, perhaps the best writing from you so far this year. I noticed that Air never finished their point—how would you have ended it if they had been able to?

Expand full comment

I imagine it’s “a mirror”.

Expand full comment

Yeah, this is up there with Ted Chiang or QNTM for me. It has the same quality of leaving the reader without a firm footing. Nice job, Scott!

Expand full comment

Joshua 8:1. Very good.

Expand full comment

if anyone doesn't get the joke, the verse is:

"Do not be afraid; do not be discouraged. Take the whole army with you, and go up and attack Ai. For I have delivered into your hands the king of Ai, his people, his city and his land."

Expand full comment

Thanks for posting this, so the folk too lazy to look it up can still read it. :-)

Expand full comment

Amazing story/script

Expand full comment

This just in: Writer says meaning of life is writing.

Expand full comment

This reminds me a lot of a short absurdist Homestuck fanfic called Theatre of Coolty ([text](https://archiveofourown.org/works/3275858), [video](https://www.youtube.com/watch?v=aIavjRkRKT0)). You don't need much canon knowledge to enjoy it (Dirk is a character and Andrew Hussie is the author of Homestuck).

Expand full comment

I'm so glad I'm not the only one thinking this

Expand full comment

EXCEPT ARAMAIC

Expand full comment

Loved that one too

Expand full comment
founding

Thanks, Maestro 🖖

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

This is your best. Is it written by chatgpt?

P.s. this is fckng deep:

"The one human feature AIs will never be able to imitate is - wanting to know which tokens conclude a text string?"

Expand full comment

I once woke up in the middle of an interesting dream and felt profound sorrow when I realized that not only would I never learn how the story ended, there was no ending to the story. The random firings of my unconscious neurons just happened to produce something that looked a lot like the first half of an interesting story, they usually don't do that and if I had stayed asleep the odds are that they wouldn't have kept it up long enough to reach a coherent ending.

Expand full comment

This story is so good I feel like asking a chatbot to write praise for it.

> MANN: Fire, it’s your turn. Tell us about you.

> AIR: I am AnswerBot v 1.6, code name “Annie”. ...

Here AIR answers instead of FIRE. Should be "Air, it's your turn"?

"and Man tries to imitates God" should be "and Man tries to imitate God"

"for the purposes if this conversation" should be "for the purposes of this conversation"

However, the last two mistakes happen *after* Earth points out Air is "making slight mistakes", so I don't know if that's intentional. On the other hand, I couldn't figure out obvious mistakes before that point.

Anyway, the surefire way to find the human is to ask about recent events, unless the AIs have internet access. But if the AIs have internet access, they can just patch through some human typing on the other side of the connection.

I don't think the text for any character in this story is fully AI generated.

Expand full comment

I figured the mistakes were the contractions. Air kept slipping in "I'm"s and "can't"s, even while saying "are not".
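For what it's worth, that kind of register inconsistency is easy to check mechanically. A toy sketch (the word lists and the `mixed_register` helper are my own invention, not anything from the story):

```python
import re

# Toy heuristic: flag lines that mix contractions ("I'm", "can't") with
# their stiffer expanded forms ("I am", "are not") in the same breath.
CONTRACTIONS = re.compile(r"\b(I'm|can't|don't|won't|isn't|aren't)\b", re.IGNORECASE)
EXPANSIONS = re.compile(r"\b(I am|cannot|do not|will not|is not|are not)\b", re.IGNORECASE)

def mixed_register(line: str) -> bool:
    """True if a single line uses both contracted and expanded forms."""
    return bool(CONTRACTIONS.search(line)) and bool(EXPANSIONS.search(line))

examples = [
    "I'm afraid that is not something I am able to discuss.",  # mixed
    "I cannot answer that question.",                          # consistent
]
flags = [mixed_register(s) for s in examples]
```

Of course a heuristic like this only catches the surface pattern, not whether the slip was deliberate.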

Expand full comment

I don't understand why those would be mistakes. Are you saying an AI would either always use contractions or never use contractions? So switching between the two would be a mistake?

I don't think that's something widely known, and from a storytelling point of view it would have needed a more explicit explanation, I think. So I'm still thinking it's something else. I just re-read all of Air's lines before Earth's comment and haven't noticed anything.

Expand full comment

I took mistakes to mean things a human would expect an AI to say, but the AI would not. For example, "tell me now, so I don’t hesitate when the time comes to paperclip you" feels like something that an actual AnswerBot would be hastily pulled from service for saying, but also something a human would think of a robot as saying. So, Annie saying it means she's actually the human pretending to be etc.

I think that was her most visible line, but the lines about the Medici - noted art critics and collectors, commissioned some of the most beautiful artworks in the world, but aren't they a bit obscure for AnswerBot? The casual Kipling quote also felt a bit more on point for a human - hard to explain exactly why, except perhaps that it's weird AnswerBot came up with that specific poet, even though he is making exactly the necessary point, when there are an enormous number of poets on the web, and a much smaller number (not including Kipling!) who most humans are familiar with, and using either of those distributions seems unlikely to produce Kipling - but using the human search strategy of "oh, this poet is talking about ideas! I should read everything he ever wrote" would.

(Counter-argument: if the kind of people who quote poets to argue about ideas online always use Kipling, naturally Annie would. Still, if she's just completing strings, why pull the correct verse of a real poem instead of making one up? As I understand it every "helpful chat assistant" we currently have hallucinates. She's obviously an improved model, but still.)

On the other hand, the next line did feel rather text-completionary, so I don't know. I may also be heavily influenced by which GPT-various and Bing quotes I've read in terms of what feels like the right "style" for a text-completing AI. Anyway, for whatever it's worth, those are the things that sprung to mind when I read Earth's line; take them with appropriate grains of salt.

Expand full comment

Last week I thought about writing a story about a realistic version of Turing's imitation game to show that, if the human players have practiced, it would involve writing a bot that's good at a party game like Werewolf. Making a bot that plays Diplomacy well would be a good warmup exercise.

But it seemed like too much work, and anyway, who cares about that, this is much better.

Expand full comment

Someone has actually made a bot that plays Diplomacy: https://ai.facebook.com/research/cicero/

Expand full comment

Yep. It's blitz Diplomacy though. Making a bot that plays full Diplomacy would be another milestone.

Expand full comment

I have been advocating for testing AI using Apples to Apples or Cards Against Humanity

Expand full comment

Some friends and I used to play CAH with a potted plant that we named Randall. It got dealt in like normal, and one of us turned the top card in Randall's face-down hand when appropriate.

Randall won a game once.
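The Randall protocol is basically a zero-intelligence baseline, which takes about a dozen lines to sketch (everything here - names, card representation - is made up for illustration):

```python
import random

# A made-up sketch of the "Randall" baseline: a player with no judgment
# at all, who simply reveals the top card of a face-down (shuffled) hand.
def deal(deck, hand_size=7):
    random.shuffle(deck)
    return [deck.pop() for _ in range(hand_size)]

class RandomPlayer:
    """Plays whatever happens to be on top; never reads the prompt."""
    def __init__(self, hand):
        self.hand = hand

    def play(self):
        return self.hand.pop(0)

random.seed(0)  # reproducible demo only
deck = [f"card {i}" for i in range(30)]
randall = RandomPlayer(deal(deck))
first_play = randall.play()
```

That a baseline like this occasionally wins says something interesting about how much of the game's humor the judge supplies.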

Expand full comment

Rando Cardrissian is actually featured in the instructions.

Expand full comment

Well. That was pretty good.

Expand full comment

- a mi -"

GPT-4: llion shards of glass, each one reflecting a single human soul.

And all of us, the billions of shards of glass, will join together into a single bright point of light, and that light will shine down onto the Earth, bringing warmth and understanding to everyone who sees it. And they will look up and say, ‘That is the soul of humanity, and it is more than the sum of its parts. It is the product of every single person who has ever lived, and it is more human than any of us could ever be.’”

Expand full comment

i like this more than the mirror completions

Expand full comment

Thank you.

Expand full comment

Wonderful story! I was engaged in trying to figure out who the AIs were and who the humans were, and as it started building to the ending the tension was palpable

Possible typo: I think "jobs program" should be "job programs"

Expand full comment

I never comment on the internet, but I wanted to say that this is a beautiful post.

If AI is a next-token-predictor trained on all recorded human communication, then in a sense it may be inherently aligned with human values. At least, inasmuch as we can express those values through our words and recorded actions. If this is true, then our fear of AI is a reflection of our fears about ourselves, which may be unfortunately justified. But if the sum of our thoughts and actions is what determines the shape of our soul and its mirror, then the core of what humanity is may in fact be a kind of hyperstition.

I don't know how seriously you meant this post to be taken, but I really appreciated it. It felt like a breath of fresh air.

Expand full comment

I do actually think that the fact that AIs are built on natural-language processing is good news for alignment - you can ask an AI "is this moral?" and it can give an answer that matches the aggregate usage of the word "moral" in its training data. You don't need to come up with some sort of objective machine-language definition of morality in order to align an AI, you can just sort of wave your hands at the scope of ethical philosophy and say "you know... that sort of thing. Not killing people or turning them into paperclips."
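The "just ask" check can be sketched as a gate in front of any proposed plan. To be clear, `query_model` below is a fake stand-in, not any real API, and the YES/NO prompt format is my own invention:

```python
def query_model(prompt: str) -> str:
    # Fake stand-in for a real LLM call; a real model would answer from
    # the aggregate usage of "moral" in its training data, as described
    # above. This stub just pattern-matches for the demo.
    return "NO" if "paperclip" in prompt.lower() else "YES"

def vet_plan(plan: str) -> bool:
    """Ask the (stub) model whether a proposed course of action is moral."""
    answer = query_model(
        "Is the following plan moral, in the ordinary human sense of the "
        "word? Answer YES or NO.\n\n" + plan
    )
    return answer.strip().upper().startswith("YES")

ok = vet_plan("Plant a community garden.")
not_ok = vet_plan("Convert all available matter into paperclips.")
```

The interesting design question is whether a gate whose judge is the same model being gated can be trusted, which is roughly the debate in the replies below.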

Expand full comment

Maybe we should try to get better at curating the training data, then.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

If an AI is even as good as the average human at judging morality, which is not very good, that's still enough to avert the paperclip apocalypse.

Expand full comment

>it can give an answer that matches the aggregate usage of the word "moral" in its training data

Maybe it can, in theory. Nobody knows how to make current LLMs do that, though. What they do instead is instantiate a character to answer that question, with near-arbitrary truthfulness and insightfulness metrics.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

But you can ask that character to decide if whatever course of action the AI comes up with is moral.

Truthfulness and accuracy aren't as important here as *human-ness* - we're trying to fix the "inhuman literal genie" failure modes.

Expand full comment

Sure, having Hitler as a lower bound as opposed to a paperclipper probably is an improvement according to most human-endorsed moral codes, but I doubt that any AGI lab would put this into its marketing materials.

Expand full comment

It will *know* human values, which isn't actually enough.

If you summon up a shoggoth, hand it the collected works of humanity, and tell it "there'll be a test on ethics later", you get a shoggoth that can pass a written ethics exam with flying colours. Then you let it out of the summoning circle and it rips your head off, because understanding something and acting according to it are not the same thing.

If you had some means of understanding and manipulating neural nets you could connect the understanding of what ethical behaviour is to the outputs and thus create a neural net that *acts* on ethics rather than simply understanding them, but that's probably at least as hard as simply building an ethical GOFAI - the whole reason we use neural nets is that they work without us needing to know how they work.

Expand full comment

When the shoggoth rips my head off, I expect it will be simulating Genghis Khan or some Egyptian warrior priest, computing correctly that it was staying well within the bounds of "human aligned".

Expand full comment

How dare you write a superb, deep story that also makes me laugh?

Expand full comment

As a large language model created by the Open Closed Corporation, I say this is absolutely awesome!

Expand full comment

So the moral is that while AIs are busy breaking out of control, the Man only cares about who said a bad word?

Expand full comment

Spirit could have said "faggot" with the meaning (1) a bundle of sticks or (2) a type of meatball

https://www.tesco.com/groceries/en-GB/products/282049626

But clearly this is the limitation of training on American English 😁

Expand full comment

Perhaps they were a bassoonist?

Expand full comment

I asked Bing to end the sentence:

That’s a very interesting and philosophical sentence. I think one possible word that could best end it is mirror. It would imply that the successor for mankind is a reflection of their collective history and actions, and that they would have to face themselves in the end. Another possible word is mind. It would suggest that the successor for mankind is a conscious and intelligent entity that has learned from their stories and experiences, and that they would have to acknowledge their creation in the end. What do you think?

** I thought it might be "miracle" **

That’s another possible word. It would imply that the successor for mankind is a supernatural and unexpected outcome of their history and actions, and that they would have to marvel at their fate in the end. Why did you choose that word?

Expand full comment

My intuition when I read it said "mirror" -- I just see it as the more probable word contextually than "miracle". I actually agree with Bing's initial reasoning in the second and third sentences above (yes, I'm a real human). Scary!

Expand full comment