258 Comments
Mark Roulo's avatar

Just before he expired, and while experiencing great blood loss, St. Felix realized that laboratories, and all man-made things, are a part of nature too, and thus natural.

Expand full comment
Victor's avatar

Though not necessarily blameless.

In this case, though, I still think St. Felix was right.

Expand full comment
truthscrolling's avatar

A work of neo-theological nonfiction that most of this audience will interpret as fiction. Scott is a postrat pretending to be a rat, trying his best to prevent the coming apocalypse.

Beautiful writing, by the way.

Expand full comment
Scott Alexander's avatar

For what it's worth, I have no idea what you mean.

Expand full comment
Mark Roulo's avatar

I think truthscrolling believes that all of these are real people you know.

Expand full comment
truthscrolling's avatar

they are real people.

and in a way, they are actually saints.

you can't use this language and get surprised when people think theologically of things.

Expand full comment
Scott Alexander's avatar

The Popes are all based on real people; the saints are made up without anyone real providing even a seed of inspiration (except in some cases actual Christian saints who I based their stories on).

Expand full comment
truthscrolling's avatar

> the saints are made up without anyone real providing even a seed of inspiration

ARC, Lighthaven, MIT.

Either you are lying to yourself or to your audience. I cannot tell.

I lean towards the latter.

Regardless, your project here is noble and pure. Good luck.

Expand full comment
Taymon A. Beal's avatar

Obviously those are all real institutions. This doesn't imply that the individuals are real.

Expand full comment
Odd anon's avatar

Assuming this isn't trolling, then congratulations Scott on reaching the bar of having constructed a fictional setting so engaging that fans develop theories they hold to even in the face of unambiguous denial from the author.

Expand full comment
Taymon A. Beal's avatar

For what it's worth, I initially wasn't sure about this, because they all sounded vaguely like the names of people I know or have heard of in the community. (Maybe the names happen to be disproportionately locally common?) What eventually convinced me that they were fake was this: I think I'm well-connected enough that, if they were all real, I should have been able to positively identify at least one of them, and I couldn't.

Expand full comment
Taleuntum's avatar

Hmm, even the prostitutes with the good survey methodology and huge datasets?

Also there is a typo in that one: "But she had only befriended the prostitutes so he could learn their advanced surveying methodology" -> ".. she could learn .."

EDIT: ah, okay, only the saints have no irl inspiration, nvm

Expand full comment
Brenton Baker's avatar

> Even the prostitutes with good survey methodologies and huge datasets?

Isn't that Aella's whole schtick?

Expand full comment
Chris B's avatar

St. Elizabeth of MIT sounds quite Aella-like.

edit: maybe I'm the rube, and this is all in the spirit of the game? I dunno, I liked the piece.

Expand full comment
SirTophamHatt's avatar

Pretty sure Aella’s not the saint in this story, she’s the prostitute(s)

Expand full comment
The Road Monkey's avatar

Then Moloch said, "Found an AI company, and I will enable you to outrace your competitors, ensuring that your vision for a safe and prosperous future is implemented just as you specify!" And Avi said, "🤔, as long as I can name it Super Safe AI, to forever remind myself and the world of its noble mission." And Moloch said unto him, "Let it be Done".

Expand full comment
truthscrolling's avatar

Yeah.

This is about Ilya and Safe Super Intelligence.

Moloch is not a metaphor.

Seems like you're waking up.

Expand full comment
The Road Monkey's avatar

😂 Yes, my comment, which was meant to be top-level, was about Ilya, and Altman, and Elon, and basically anyone who has ever founded an AI company.

Expand full comment
truthscrolling's avatar

pretty much.

Expand full comment
John Schilling's avatar

I think the "ruthsc" part of his name is deliberate obfuscation. Please do not feed.

Expand full comment
Ruffienne's avatar

Well played, sir.

The evidence certainly supports this hypothesis.

Expand full comment
truthscrolling's avatar

Sure, maybe you're still Clueless.

I think it's more likely that you're a Sociopath.

You have read the Gervais Principle, and I think you implicitly understood.

Expand full comment
Ben Smith's avatar

You might say there's a 25% probability this is fictional, but I have an 80% probability that it's better for me to believe it's fictional, because probably none of these people sounds like a specific real person to me (except *maybe* Paul Christiano), and if I believed Scott intended them to be real I'd feel very out of the loop.

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment
Scott Alexander's avatar

All right, banned for this comment.

Expand full comment
Jonathan's avatar

And another martyr's soul rises towards postrationalist heaven

Expand full comment
Deiseach's avatar

I'd be a Loser under the Gervais Principle, and that suits me just fine. All you Sociopaths who think this means you're Winners - playing your little games and striving for power and status and wealth. In the end, you're just working for us - we get our pay cheque for the exact amount of work done, you chase your tails being the Big Bosses running the entire shebang and trying to fool us with Free Pizza Fridays. We know what you're doing but hey, free pizza and I don't have to try and run a multi-million dollar concern! We appreciate it when you guys get promoted out because that means you go play your little games elsewhere and are out of our hair.

Good luck with your plan to become a trillionaire, I think you'll need it.

Expand full comment
Marcus A's avatar

I totally feel you.

Expand full comment
Anonymous Dude's avatar

I actually do try to be a Loser where possible--give the machine the minimum. I cosplay reasonably well as a Clueless, though.

Expand full comment
aphyer's avatar

I think it would be funnier, especially in light of recent developments, if the last story ended "Then Moloch howled, defeated, and Avi went on to found an AI lab."

Expand full comment
Shankar Sivarajan's avatar

For what could be more "highly-effective"?

Expand full comment
Taymon A. Beal's avatar

Well, he wouldn't be a saint in that case.

Expand full comment
The Road Monkey's avatar

He would be the only saint in a cloud of sexless hydrogen

Expand full comment
Yosef's avatar

I don't get most of this. I need a commentary.

Gwern is easy enough. (gwern.net)

Petersonites are also easy, followers of Jordan Peterson

I have no idea who the others are.

Expand full comment
Taymon A. Beal's avatar

Scott mentions in another comment that the saints aren't based on real people, they're just humorously riffing on general rationalist memes.

Expand full comment
Yosef's avatar

Ahhh.

Expand full comment
truthscrolling's avatar

He is lying.

St. Felix is very directly based off of Peter Miller.

https://www.astralcodexten.com/p/practically-a-book-review-rootclaim

Stop being so Clueless.

Expand full comment
artifex0's avatar

Giving a probability for the lab leak hypothesis was a standard rationalist trope for a long time before Miller's debate.

Expand full comment
Sleakne's avatar

Is the capital-C Clueless thing some joke I'm missing, or are you just being rude to people?

Expand full comment
C_B's avatar

It's a reference to the typology of people's attitudes/roles in large organizations from Venkatesh Rao's "The Gervais Principle" (reviewed by Scott here: https://www.astralcodexten.com/p/book-review-the-gervais-principle).

But it's also just as rude and dismissive with context as it sounds without context. This is someone who has read a Book with a Theory of Everything and has taken to using it as a cudgel to smack those they suspect of not having read the same Book. Treat as troll.

Expand full comment
Hunter's avatar

Nearly responded to this, then realized he was banned elsewhere in the comments over this bizarre, non-constructive trutherism. Leaving this here so other people don't waste any more time on this guy.

Expand full comment
Sortale's avatar

I am assuming Elizabeth is referencing this user:

https://www.lesswrong.com/users/elizabeth-1

and the consorting with prostitutes seems to be about Aella, who commissioned some impressive surveys:

https://en.wikipedia.org/wiki/Aella_(writer)

The Beisutsukai were referenced in some of Yudkowsky's fiction.

Lighthaven and ARC are real organisations.

The popes are real LessWrongers.

Expand full comment
truthscrolling's avatar

The strange part is how Scott claims these aren't based off of real people.

Odd that he could ignore something that obvious.

Expand full comment
Mark's avatar

He said the saints aren't built off real people. Obviously the saints interact with some real people.

Expand full comment
JohanL's avatar

"Prostitutes with impressive data sets" is presumably Aella.

Expand full comment
Mallard's avatar

> “What use is money, when Open Philanthropy already has $20 billion and can’t find enough high-impact charities with room for more funding

The piece is in jest, but people should realize that extremely high impact giving opportunities remain funding constrained. GiveWell's top charities remain funding constrained and OpenPhil has been contributing less to GiveWell in part since *they* are funding constrained.

See e.g. https://www.openphilanthropy.org/research/our-planned-allocation-to-givewells-recommendations-for-the-next-few-years/

Expand full comment
Taymon A. Beal's avatar

This all depends on where you put the cost-effectiveness bar. From the linked post:

> We’ve reduced the annual rate of our funding for GiveWell’s recommendations because our “bar” for funding in our Global Health and Wellbeing (GHW) portfolio has risen substantially. In July 2022, it was roughly in the range of 1100x-1200x; we recently raised it to slightly over 2000x. That means we need to be averting a DALY for ~$50 (because we value DALYs at $100K) or increasing income for 4 people by ~1% for a year for $1 (because we use a logarithmic utility function anchored at $50K).

GiveWell top charities aren't anywhere near that good; the current bar is 8x cash. Open Phil would give a lot more to GiveWell if they didn't think there were a bunch more 2000x opportunities out there, waiting to be discovered, that they should save their cash for. If St. Avi is working on anything AI-adjacent, he probably similarly thinks that 2000x impact is possible, and maybe much, much, much more.

(In general, it is a bad habit to compare the cost-effectiveness of GiveWell-style "giving as consumption" opportunities to that of other EA focus areas like AI. The other focus areas have dramatically higher cost-effectiveness within their models, but at hard-to-quantify costs like worse feedback loops, more out-of-model uncertainty, or a less clearly morally valuable set of beneficiaries. So it ends up being pretty apples-to-oranges, which is why there's enough of a Pareto frontier for both of these things to still exist within EA. But for the sake of the current argument, it's necessary to at least gesture at the comparison, in order to make it make sense why Open Phil doesn't just give all their money to GiveWell right now.)
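(A minimal sketch of the arithmetic behind the quoted bar, using only the two figures stated in the quote; this is my own illustration, not Open Phil's methodology:)

```python
# Figures taken from the quoted Open Phil post; the conversion itself is just division.
daly_value_usd = 100_000   # Open Phil values a DALY at $100K
funding_bar = 2_000        # the "slightly over 2000x" bar

cost_per_daly_at_bar = daly_value_usd / funding_bar
print(cost_per_daly_at_bar)  # 50.0 -> "averting a DALY for ~$50"
```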

Expand full comment
Mallard's avatar

Among the reasons listed there by OpenPhil for their decreased funding for GiveWell is their funding constraint:

>The increase in the bar comes from a few sources:

>Our available assets are down ~1/3rd from EOY 2021, which has pushed us to be more conservative with our spending (though our main funders still plan to spend down virtually all of their assets within their lifetimes).

I don't mean to suggest that GiveWell, or global health more broadly, represents the highest expected value charity. Just that "extremely high impact giving opportunities remain funding constrained." That even includes the several most quantifiable high impact global health charities among GiveWell's Top Charities. It certainly includes the global health charities that GW assesses as having higher expected values but with more uncertainty (their All Grants Fund).

It would similarly presumably include higher uncertainty causes in other domains, such as global catastrophic risk mitigation, which OpenPhil chooses to fund alongside global health.

Even GW times their grants for maximum effectiveness within their budget, which sometimes means holding onto money for longer, rather than dispensing it all immediately. But that shouldn't be taken to mean that high impact charity is a done deal and there's no point in getting more money.

While the post was obviously in jest, I've seen people assuming that GiveWell's top causes are fully funded, so I wanted to make sure people realize that "high impact" causes remain underfunded.

Expand full comment
Mo Nastri's avatar

> GiveWell top charities aren't anywhere near that good; the current bar is 8x cash.

Complete tangent -- I thought this used to be 10x (at least ignoring the recent update boosting the cost-effectiveness of cash by 3-4x, see https://blog.givewell.org/2024/11/12/re-evaluating-the-impact-of-unconditional-cash-transfers/), so it seemed odd to me that it's dropped. So I checked:

Malaria Consortium's SMC program is 7-42x cash, depending on location (https://docs.google.com/spreadsheets/d/1Ojn4JAcmKkggPe7IpGbqKhMcpF05OJLWWlm7IZkIlnk/edit?gid=1266854728#gid=1266854728&range=E193)

AMF's LLIN distribution program is 5-23x cash, depending on location (https://docs.google.com/spreadsheets/d/1VEtie59TgRvZSEVjfG7qcKBKcQyJn8zO91Lau9YNqXc/edit?gid=1266854728#gid=1266854728&range=E203)

Helen Keller International's vitamin A supplementation program is 2.3-59x cash, depending on location (https://docs.google.com/spreadsheets/d/1NcR1RR3WyCqSjbqgPHuafNXtm4hSeIz66eZlX69k_80/edit?gid=1266854728#gid=1266854728&range=E149)

New Incentives' infant vaccination cash incentives program is 1.1-38x cash, depending on location (https://docs.google.com/spreadsheets/d/1mTKQuZRyVMie-K_KUppeCq7eBbXX15Of3jV7uo3z-PM/edit?gid=1266854728#gid=1266854728&range=E4)

Those are all the GiveWell top charities; there are no others.

Unfortunately none of these CEAs give a total program weighted average cost-eff vs cash; if they have any such internal figures it's not shared publicly. Still not sure what to make of the 10x to 8x drop, given that I don't know how the stated 8x (I'm trusting you on this) relates to their CEAs' widely-varying cash multiplier estimates and to their funding allocation decisions.

Another tidbit: GiveWell doesn't use DALYs anywhere in their CEAs, just lives saved and income doublings as 'base unit of utils'. Almost every other EA charity evaluator does.

(Agree with the rest of your comment.)

Expand full comment
Mallard's avatar

https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models

>As of August 2024, our bar for funding Top Charities is 8x cash, and our bar for funding other programs is 10x.

Expand full comment
Mo Nastri's avatar

To repeat myself, rephrasing slightly:

> Unfortunately none of these CEAs give a total program weighted average cost-eff vs cash; if they have any such internal figures it's not shared publicly. Still not sure what to make of the 10x to 8x drop, given that I don't know how the stated 8x... relates to their CEAs' widely-varying cash multiplier estimates and to their funding allocation decisions.

Expand full comment
Mallard's avatar

Just looking quickly at the linked files, we can take the weighted average of effectiveness by total cost per location to get the following:

SMC: 9.98x cash

ITN: 12.87x cash

Vit. A: 13.52x cash

Vaccines: 6.97x cash

I don't know how GW calculated it (such as whether they use 'total cost' like I did for the first 3 or 'Total spending contributed by grantee' as I did for the 4th) but the above likely gives us a rough idea and shows that we can derive such values from the linked sheets.
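(For anyone who wants to reproduce that, a minimal sketch of the weighted-average calculation described above; the per-location rows are placeholders, not the actual figures from the linked GiveWell sheets:)

```python
# Weighted average of cost-effectiveness (multiples of cash), weighted by the
# total cost in each location. Rows are illustrative placeholders.
locations = [
    # (cost-effectiveness vs. cash, total cost in USD)
    (9.0, 2_000_000),
    (14.5, 500_000),
    (7.0, 1_500_000),
]

total_cost = sum(cost for _, cost in locations)
weighted_avg = sum(mult * cost for mult, cost in locations) / total_cost
print(f"{weighted_avg:.2f}x cash")
```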

Expand full comment
Mo Nastri's avatar

Much appreciated, thank you Mallard. When I first looked into these CEAs years ago they didn't include grant sizes, so I lazily assumed they still didn't this time round, hence my (erroneous) claim above that "none of these CEAs give a total program weighted average cost-eff vs cash".

Expand full comment
Deiseach's avatar

I realise that this is a dumb question, but if Open Philanthropy has more money than they can give away, why don't they take up the slack in some of the USAID programmes being cancelled? If promoting trans opera in Colombia is really such a vital part of health and welfare, then there's your good cause right there. If it's useless feel-good events for the culture-vultures, then not funding it does no damage. And if it's "well no it's not about doing good, it's about soft power for the USA", then let Open Philanthropy decide if the USA having soft power is a good or not, and fund it accordingly.

Expand full comment
Scott Alexander's avatar

I think there's not enough money! PEPFAR was something like $7 billion per year, so they can fund PEPFAR for three years (not even the whole Trump admin) and then are broke.

The other non-PEPFAR ones probably aren't good enough to meet their bar.

Expand full comment
Deiseach's avatar

"The other non-PEPFAR ones probably aren't good enough to meet their bar."

That's the point I was wondering about. There's been a lot of hysteria online about dying children and starving families, but if the only really good programme was PEPFAR and the rest of the things under the USAID umbrella were not effective, then the cancellation of ineffective/wasteful programmes isn't wrong. It may be badly done, taking a sledgehammer to crush a nut, but it's not "wrong/evil/rich guys like Trump and Musk want to literally murder the poor".

Expand full comment
Viliam's avatar

Things can be effective also without meeting the Effective Altruist bar.

Cancelling PEPFAR was an Effective Anti-Altruist move, cancelling the other programs for dying children and starving families was merely ordinary evil.

Expand full comment
John Schilling's avatar

It's evil to fund them for a time and then stop, but not evil to never fund them in the first place? Even though the organization which did the former is one chartered for a purpose other than global charity, and the organization that did the latter is chartered for maximally cost-effective global charity.

I do not think the word "evil" is appropriate here, and I'd prefer to keep it viable for better uses. We've lost too many good and useful words already.

Expand full comment
Lurker's avatar

Wasn’t the trans opera in Colombia a Department of State thing anyway (see the recent Skeptics SE question)?

Expand full comment
Mark's avatar

Heard Trump closed down USAID programs that save millions of lives. So, great opportunities to fund!

Expand full comment
Trylfthsk's avatar

I pray to St. Leibowitz that their good works be preserved; that we might never need re-climb lower rungs towards understanding altruism.

It was always quite shrimple.

Expand full comment
The Road Monkey's avatar

Nice Leibowitz reference!!

Expand full comment
Deiseach's avatar

"Quite shrimple". And served with a side of cocktail sauce, no doubt!

Expand full comment
Chris's avatar

This is so clever.

Expand full comment
Pip's avatar

I love your iterative short fiction very much.

Expand full comment
artifex0's avatar

Seconded. I'd actually like to see work like this make up a greater percentage of his writing.

Expand full comment
Stevec's avatar

Thirded

Expand full comment
Yosef's avatar

Agreed. Samsara is one of my favorite short stories.

Expand full comment
Dan's avatar

St. John of Daly City was bad at politics and at rationality.

Expand full comment
Max Chaplin's avatar

All of the first four are, but it's probably intentional.

Expand full comment
LarryBirdsMoustache's avatar

St. Alyssa must be the first person to pull a reverse-Origen

Expand full comment
Petey's avatar

What’s the Deutschist heresy? From looking up David Deutsch, he seems a proponent of the many worlds interpretation of quantum mechanics, but I thought that was the one rationalists (or at least Yudkowsky) favored.

Expand full comment
Taymon A. Beal's avatar

He's against Bayesianism (as a fundamental philosophical theory of epistemology). I don't really understand what he thinks we should believe in instead; I think it is connected to something he came up with called "constructor theory" although maybe that is intended to answer a different question.

Expand full comment
Amicus's avatar

It's not a matter of believing in something else instead: he doesn't think bayesianism fails on its own terms, he thinks it's just not doing what we need. Deutsch's position, in very brief summary, is that a good scientific theory is one that provides useful explanations, not just accurate predictions.

> For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology “oracle” which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place.

Expand full comment
Pjohn's avatar

Really interesting argument! Does it really boil down to P vs. NP, or does it just look that way because of the limitations of the 'spaceship' example?

(Also, Obi-Wan: "Useful world-models and accurate predictions form a symbiont circle...")

Expand full comment
Max Chaplin's avatar

This doesn't seem to contradict what Bayesian rationalists believe. A Bayesian prediction isn't necessarily about the future; it can be about any piece of information that an individual or a group doesn't have. In epistemic rationality, the role of predictions isn't to tell us what's going to happen, it's to hone our model of the world.

FWIW, Yudkowsky's physics sequence strongly went against the idea that a hypothesis is only worth paying attention to once it makes successful predictions, and in other posts he described explanatory power in a way that doesn't reduce to just predictive power.

Expand full comment
1123581321's avatar

According to Deutsch, Bayesianism is just a dressed-up induction, “that which happened will keep happening”, and it provides useful information in “small world” space, i.e., that where the set of outcomes is limited.

Expand full comment
Victualis's avatar

I don't understand why you claim such an oracle would not help to design a spaceship. You just start out trying something silly, like putting something flammable in a pipe, but instead of losing an eye when it blows up you know what will happen without needing to try it out. You can then speedrun the history of rocket development to get precise values for the relevant constants and then guided by goals like "now let's design a propulsion system that is 10 times more efficient" push beyond. What are you claiming is missing?

Expand full comment
Amicus's avatar

The claims are Deutsch's, not mine - and he doesn't claim it wouldn't help at all, although he could certainly have phrased it more clearly. As he elaborates a few paragraphs later

> Once we already had a theory, and had thought of a possible experimental test, then we could ask the oracle what would happen if the theory were subjected to that test. Thus, the oracle would not be replacing theories at all. It would be replacing experiments. It would spare us the expense of running laboratories and particle accelerators. Instead of building prototype spaceships, and risking the lives of test pilots, we could do all the testing on the ground with pilots sitting in flight simulators whose behaviour was controlled by the predictions of the oracle.

> The oracle would be very useful in many situations, but its usefulness would always depend on people’s ability to solve scientific problems in just the way they have to now, namely by devising explanatory theories.

Expand full comment
Victualis's avatar

Thanks for clarifying. So the claim is that theory is required to guide the questions that are asked. I'll have to think about that, it doesn't seem self-evident that they are necessary.

Expand full comment
Victualis's avatar

Looking quickly through the first few dozen works citing The Fabric of Reality, none seem to have addressed this claim. Claude doesn't help to locate any followups either. I will look at the broader philosophical literature on induction. Thanks for highlighting this claim of Deutsch.

Expand full comment
The Ancient Geek's avatar

I'm pretty sure the real-world Alyssa was not the person who refuted constructor theory.

Expand full comment
AristotelisKostelenos's avatar

This really hit the spot of simultaneously being really funny and incredibly niche. I feel somewhat lonely not knowing anyone IRL who would appreciate this post as much as I did.

Expand full comment
Mo Nastri's avatar

I think that's your cue to organise the next rationalist meetup in your area and see who comes :)

Expand full comment
AristotelisKostelenos's avatar

I seriously doubt there are many rationalists in my area, to be honest. However, I am actively trying to mint some. Out of curiosity, though, how does one go about organising a rationalist meetup? I myself am somewhat new to the rationalism thing.

Expand full comment
Itai Bar-Natan's avatar

A while back I had come up with a story that I thought would have fit well inside "EAs/Rationalists You've Never Heard Of Because They Don't Exist", except I couldn't think of any other story to include in such a work. I found it lovely that you wrote something that pretty much fit what I was fantasizing about, and also gave me a perfect opportunity to share my own story:

Early in his life St. Philomega believed in the Christian God, but he did not want to give up his life of crime and revelry. He decided that he would continue living his sinful ways but repent on his deathbed, and that way ensure that he would go to heaven. When he told Christians about this plan they criticized it, saying surely such a repentance would be insincere if he was deliberately delaying it as long as possible, and if he wanted God to accept him into heaven he had better repent now. St. Philomega's response was simply, "Skill issue. When I repent it will be sincere."

Then one day he found himself lost in the desert without any supplies. He was convinced he was going to die, so he repented then, choosing to love God and disavow his earlier lifestyle. Just as he did so, he spotted a caravan in the distance. The caravan-driver Parphittus promised he'd take St. Philomega to safety, asking only that he give him all of his worldly possessions after they returned to the city. From then on St. Philomega lived a life of virtue, working for MIRI and publishing papers on acausal negotiation between superintelligent agents and God.

Expand full comment
Pjohn's avatar

Nice!

Expand full comment
Domo Sapiens's avatar

+1, fits right into the saints above! Great work!

Expand full comment
Deiseach's avatar

Very nice, and subverted my expectations that St Philomega would decide that being saved meant that he wasn't going to die, so his repentance could not be sincere, and God would know this since God would know about a caravan being there to rescue him, so he could continue his life of crime and revelry, meaning he (at best) lied to the caravan owner when agreeing to hand over all his worldly possessions once in the city again, or (at worst) robbed the caravan owner back in the city.

Expand full comment
MichaeL Roe's avatar

“But she had only befriended the prostitutes so he could learn their advanced surveying methodology” — ok, I can see who is being alluded to here.

Expand full comment
avalancheGenesis's avatar

A great deal of this went over my head, since I don't know The Canonical History that well, but liked anyway because I miss Scott's fictions. (Meanwhile, thousands of miles away, the Rightful Caliph...)

Expand full comment
David Joshua Sartor's avatar

Much of this seems allegorical and is actually totally made up.

Expand full comment
Ape in the coat's avatar

> St. John of Daly City

Why did we canonize the avatar of the golden mean fallacy, though?

Expand full comment
MichaeL Roe's avatar

“passing away of sleep deprivation”

That one was kind of dark, if it was meant as an allusion to the Ziz situation.

Expand full comment
Deiseach's avatar

The Zizians seem to have been taken into custody. I really want to read the reporting on all this, including about LaSota faking his death.

https://news.sky.com/story/alleged-leader-of-cultlike-zizians-arrested-in-us-as-group-linked-to-killings-13311735

"According to the Associated Press news agency, the group appears to be made up of highly intelligent computer scientists, mostly in their 20s and 30s, who met online, shared anarchist beliefs, and became increasingly violent.

Their goals aren't clear, but online writings cover topics from gender identity, radical veganism and artificial intelligence.

LaSota's blog describes a theory that the two hemispheres of the brain could hold separate values and genders and "often desire to kill each other".

If only they had been just anarchists, it wouldn't have turned out this weird and bloody. But I suppose even Sky News can't get away with "batshit insane". I'm not going to poke fun at vegans here, it just amuses me highly that LaSota is fine with cajoling a follower into killing their parents for money for the group and other crimes, but oh dearie me the jail doesn't provide him with vegan food? Torture! Starvation!

Expand full comment
Taleuntum's avatar

Far be it from me to defend the actions of Zizians, but what would you do if the prison only provided you with food made from humans? Given their beliefs, it's very easy to see how they can see that as torture. Veganism isn't the part of their belief system which makes them crazy. It's their weird misunderstanding of acausal stuff.

Expand full comment
anomie's avatar

...They forfeited their right to not be tortured the moment they turned against society.

Expand full comment
Taleuntum's avatar

Sure that's a consistent viewpoint, but the comment I replied to ridiculed their view of being tortured. If you accept that yeah, they were tortured, but then say "so what?", that's okay in my book.

Expand full comment
Deiseach's avatar

I don't accept that "not giving me vegan meals is torture" so yeah.

Expand full comment
NoriMori's avatar

Okay but Taleuntum's argument is that it is to them. Why did you even respond if you're just going to ignore their argument? If you disagree with it, then respond to it.

Expand full comment
Deiseach's avatar

Well, maybe if they hadn't attacked an 82 year old man causing him to lose an eye after stabbing him with a sword and trying to cut off his head so they could then dismember and dissolve his body in a home-made acid bath, and then later successfully murdered him...

https://openvallejo.org/2025/01/27/man-killed-in-vallejo-was-main-witness-in-upcoming-murder-trial/

... I'd be a lot more convinced of their "no no meat is murder meat-eating is immoral" credentials. They've demonstrated they have no problems at all with murder, even violent and cruel torture and murder. What LaSota is complaining about is looking for special treatment in order to fortify his narrative that he's the victim here and society is out to get him. I'm sure the jail would give him meals without meat or meat products, but he probably wants "no, it has to be one specific type of meal prepared to my exact specifications". He won't damn well starve to death if he eats plain rice and boiled vegetables, for crying out loud.

I honestly would laugh if he wants Soylent. That would be just too perfect.

Expand full comment
Taleuntum's avatar

I don't think murdering someone demonstrates that that person has no problem with murder. I can easily imagine situations where I murder someone, e.g. someone attacking my family, yet I believe murder is wrong now, and will go on believing this even after I commit the murder. Sometimes people do things they themselves find morally wrong for one reason or another.

Expand full comment
Deiseach's avatar

"I don't think murdering someone demonstrates that that person has no problem with murder".

Let's expand this one out:

"I don't think raping someone demonstrates that that person has no problem with rape".

"I don't think robbing a bank demonstrates that that person has no problem with theft".

"I don't think embezzling a million dollars from their employer demonstrates that that person has no problem with swindling".

"I don't think setting sixteen buildings on fire demonstrates that that person has no problem with arson".

"I don't think writing an entire book about how you hate Da Joos demonstrates that that person has no problem with anti-Semitism".

They are not accused of acting in self-defense and yeah it was murder. I'm sorry but you're asking me to feel sorry for the poor little meow-meows that went out to kill an elderly man after taking advantage of his kindness and being shit heads with delusions of grandeur who have nothing but contempt for ordinary people and wallow in vengefulness and malice. Let me look deep into the recesses of my heart...

... still looking...

... gimme another minute...

... nope, at the bottom now, and there's nothing there for them. Except maybe "a short rope and a long drop", only for the inconvenient belief system of Catholicism which the recent popes are telling me "nix on capital punishment".

EDIT: The shoot out with the police is debatable, you could argue that was perception of self-defence on their part. But the assault on Lind, the subsequent murder of Lind, and the murders of the Zajkos don't come anywhere near "this person was attacking my family so I had to use lethal force to stop them"

https://apnews.com/article/vermont-border-patrol-shooting-lasota-zizians-zajko-cfc18908057c92850e77fa9cff7e1fa2

Expand full comment
Taleuntum's avatar

Yeah, it is kind of hard to feel sorry for them I agree, but I'm also not asking you to. I merely want people to have accurate models of other people. This is partly because of my love of truth, but also because having accurate model of others helps predicting their actions and accurately predicting others' harmful actions can greatly help in preventing those actions. I would bet a lot of money that the Zizians do genuinely care about animals, but now I'm unsure if you actually disagree or you are being performative to vent.

Expand full comment
TGGP's avatar

I'd eat the human food.

Expand full comment
John Schilling's avatar

If they're prison inmates, the "buying meat incentivizes other people to murder animals" bit no longer applies, and most ethical systems will allow even e.g. cannibalism if that's what it takes to survive. And if they're in prison for murder, then the response to anything of the form "meat is murder!" is a hearty "yeah, chow down, murderer". Prison is meant to be a punishment, and this one seems quite appropriate.

Expand full comment
Taymon A. Beal's avatar

I suspect it probably wasn't. Lots of rationalists have had contrarian takes on sleep and/or tried to do weird things with it. I'm reminded primarily of Leverage Research and Alexey Guzey.

Expand full comment
human's avatar

Yeah I don't think it was. We don't know of any cases of death by sleep deprivation, unless we're making assumptions about why bad decisions were made.

Also I think it takes much more than 3 weeks to die with sleep deprivation as the primary cause? Not really relevant I'm just confused that Scott said it.

Expand full comment
Eremolalos's avatar

I sometimes get sad about Zvi, picturing him at his desk 18 hours a day surrounded by monitors, each showing a browser with 150 tabs open, and an AI app open on his phone, pointing it at different screens and asking it to explain X and search for more info about Y, wearing down the sharpness of his eyes and mind, making 1% as much $$ as he could, all to tell us the real deal about the crucial stuff. St. Deer?

Other times I think he's a smart lucky guy, having a ball doing what he's best at, with 3 little brainiac kids zooming through his office over and over while he works.

Expand full comment
Deiseach's avatar

"making 1% as much $$ as he could"

I've never understood that mindset. Money is necessary, more money is nice, but "I gotta make as much money as I possibly can! All the money! Not one penny less!" has always seemed to me a terrible waste of being a human. Making a ton of money as an adjunct to work you like or an invention or even producing goods and services which include entertainment, yes. But making it your aim to "all the money!" just seems pathetic.

Which of course is why I'm poor and will always be poor and one of the Losers in the Gervais Principle classification.

Expand full comment
Taleuntum's avatar

Same, but I've found many people who disagree and seemingly pursue money as a terminal goal

Expand full comment
anomie's avatar

Elon wouldn't be where he is now if he didn't accumulate all that wealth. If you want to change the world, you need capital.

Expand full comment
Melvin's avatar

Why would I want to change the world? Realistically despite my best efforts there's a 50-50 chance whether I make it better or worse. Might as well let someone else change it and relax on my yacht.

Expand full comment
Eremolalos's avatar

I’m actually pretty much the same. And for all I know Zvi is currently making enough for him and his family to live very comfortably. Or maybe his wife is a cardiologist. But in my mental movie money’s a bit uncertain and pinched. And he’s got the kind of smarts that can be turned into big bucks. I’m not saying that it’s remarkable or tragic that he’s not a millionaire. My point was that in my imagination, anyhow, he’s electing to live in pinched circumstances so he can be our information hyperprocessor, even though he could easily work less and earn more.

Expand full comment
Deiseach's avatar

Oh I know you're not a money-grubber. It all depends on what one considers "enough/plenty of money". As per my last comment on this, you can see that I set the bar for "good remuneration" much lower than the majority of people on here.

So one person's "comfortable/have a surplus" level of income is another person's "just getting by/could be doing a lot better" level.

Expand full comment
Schneeaffe's avatar

"Noone needs more than twice as much money as I have."

Expand full comment
Deiseach's avatar

Today's my day for poetry, it seems! Oliver Wendell Holmes, Senior, had a satirical poem on "what's *just* enough for a contented life":

Contentment

By Oliver Wendell Holmes Sr.

“Man wants but little here below”

Little I ask; my wants are few;
I only wish a hut of stone,
(A very plain brown stone will do,)
That I may call my own;—
And close at hand is such a one,
In yonder street that fronts the sun.

Plain food is quite enough for me;
Three courses are as good as ten;—
If Nature can subsist on three,
Thank Heaven for three. Amen!
I always thought cold victual nice;—
My choice would be vanilla-ice.

I care not much for gold or land;—
Give me a mortgage here and there,—
Some good bank-stock, some note of hand,
Or trifling railroad share,—
I only ask that Fortune send
A little more than I shall spend.

Honors are silly toys, I know,
And titles are but empty names;
I would, perhaps, be Plenipo,—
But only near St. James;
I’m very sure I should not care
To fill our Gubernator’s chair.

Jewels are baubles; ’t is a sin
To care for such unfruitful things;—
One good-sized diamond in a pin,—
Some, not so large, in rings,—
A ruby, and a pearl, or so,
Will do for me;—I laugh at show.

My dame should dress in cheap attire;
(Good, heavy silks are never dear;)—
I own perhaps I might desire
Some shawls of true Cashmere,—
Some marrowy crapes of China silk,
Like wrinkled skins on scalded milk.

I would not have the horse I drive
So fast that folks must stop and stare;
An easy gait—two forty-five—
Suits me; I do not care;—
Perhaps, for just a single spurt,
Some seconds less would do no hurt.

Of pictures, I should like to own
Titians and Raphaels three or four,—
I love so much their style and tone,
One Turner, and no more,
(A landscape,—foreground golden dirt,—
The sunshine painted with a squirt.)

Of books but few,—some fifty score
For daily use, and bound for wear;
The rest upon an upper floor;—
Some little luxury there
Of red morocco’s gilded gleam
And vellum rich as country cream.

Busts, cameos, gems,—such things as these,
Which others often show for pride,
I value for their power to please,
And selfish churls deride;—
One Stradivarius, I confess,
Two Meerschaums, I would fain possess.

Wealth’s wasteful tricks I will not learn,
Nor ape the glittering upstart fool;—
Shall not carved tables serve my turn,
But all must be of buhl?
Give grasping pomp its double share,—
I ask but one recumbent chair.

Thus humble let me live and die,
Nor long for Midas’ golden touch;
If Heaven more generous gifts deny,
I shall not miss them much,—
Too grateful for the blessing lent
Of simple tastes and mind content!

Expand full comment
Maxwell E's avatar

I would like you to know that I chuckled at this – out loud! – and that it was an excellent addition to my day.

Expand full comment
1123581321's avatar

I don't know. On one hand, I have enough that I should be able to comfortably retire in a few years, providing my current work situation roughly continues.

OTOH, that's assuming things are going to work out ok. And that's where paranoia kicks in. What if we get 8% inflation for many years? What if Trump succeeds in wrecking our economy, or actually decides to attack Canada, and all hell breaks loose? What if I get dementia? What if I live to 106? Etc. etc.

There are so many things that can go wrong, so I feel like I don't really have enough and need to squirrel away more. Not to buy a Bugatti, just to make really sure I'll have a comfortable roof over my head, being able to eat good food and get medical care when needed.

Expand full comment
Eremolalos's avatar

Yeah, I worry about that stuff too, especially the "what if I live to be very old" part. Both of my mother's parents lived to be nearly 100, and that was with the relatively primitive health care of their era. On the other hand, even if I had enough savings not to worry about getting checkmated by poverty, there are all kinds of ways to get checkmated by health even if one has limitless $$. So mostly I say to myself: soon I will enter the era of my life that I know will end up with my being checkmated one way or another. I need to find a way to rise to the occasion.

Expand full comment
1123581321's avatar

A few years ago I came across excellent advice on mitigating longevity risk. It had all the proper components of good advice, which is rare: it was specific, non-obvious, and actionable. On top of that, it didn’t come from a salesperson looking for a commission. So I actually acted on it last year. Here goes:

Buy a deferred annuity, structured like this: it kicks in at a specific age, say, 85 years, and pays a monthly dividend until death. If you do it early enough, say, at least 20 years before the first payout date, it is not too expensive. The payout is pretty generous because actuarial tables will expect a relatively short payout period, on average. This is really an insurance policy against living too long.

These products are often maligned because you may not get out what you paid in, or, if you die at 84 11/12, you get nothing. But this misses the point of buying protection against living to 122.

Expand full comment
Eremolalos's avatar

So if I die at 86, having collected the payout for 1 year, what happens to the remainder? Does it pass to my inheritor, or does the company I bought it from keep the remainder? I should understand how annuities work and know the answer to this question, but I have just never paid attention to this side of things, and need to master it now. Til I do that, I understand this stuff only slightly better than a bright teenager.

Expand full comment
Doug S.'s avatar

I believe that Zvi's wife is a psychiatrist. As far as I know from having met them several years ago, they are very comfortable financially.

Expand full comment
Aster Langhi's avatar

This is one of your best pieces.

Expand full comment
WSCFriedman's avatar

This is great, I love it.

Expand full comment
Presto's avatar

smiling

smiling

smiling

Joan of ARC

cackling hard

Expand full comment
Deiseach's avatar

That one was especially good 😁

Expand full comment
Matthias Görgens's avatar

St. John of Daly City is pretty funny, because both major American parties are crazy and only differ in minor details, so they average out to crazy (and far from neutral), too.

At least from an unAmerican point of view.

Expand full comment
Flauschi's avatar

Well, I assume that's a joke, but for completeness' sake I would like to mention that it would indeed be a miracle to calculate (or have a runtime of) BusyBeaver(100) with 20 lines of code.

Expand full comment
Taleuntum's avatar

Yes, they only just recently managed to calculate BB(5). It's hard to prove that every non-halting n-state Turing machine won't ever halt, and you need to translate these proofs (into e.g. Coq) to be able to say that you wrote a program calculating BB(n).

Expand full comment
Flauschi's avatar

sure, but I meant something even more basic: BB(n) usually is defined as the maximum you can get with n lines of code. So you need at least n lines to generate BB(n). (And maximal runtime is basically the same as maximal output.)

but you could of course assume the BB is defined using Turing Machines and your code is written in something else. In that case, as you remark, it would not be logically impossible to have a program with runtime >=BB(20), but it might require a volume of proofs possibly larger than all existing mathematical proofs to show that....

Expand full comment
Taleuntum's avatar

I have never heard of BB(n) being defined as the maximum you can get with n lines of code; can you point me to such a definition?

Also, the maximum of what? In the definition I know, you look for the maximum number of steps an n-state halting Turing machine can make (or, less commonly, the maximum number of ones on the tape), but if you don't use a Turing machine in the definition, then what quantity are you seeking to maximize?

Expand full comment
Flauschi's avatar

Sorry, I might be too sloppy. What I mean is:

1. If you talk about a universal Turing machine as a computer that you can program, then "n states" is just the same as "n lines of code" (Basic-like: "10: if the head reads 0, then goto 30. // 20: write 0 // 30: ..." etc.)

2. So you can formulate BB_Turing(n) as "The maximal output you can get from a program with n lines" for Turing machines; this can be formulated not only for Turing machines, but for the usual programming languages (so you get a BB_python etc.)

Of course you have to restrict literals: if you count "return 100000000000" as one line, and the same for all other numbers, then one-line Python programs have arbitrarily large outputs. So maybe use "size in bytes of the source code".

3. The point is that BB(n) (for any computer model) is a function that grows faster than any computable function.

4. There are computable (even polynomial) translations between Python programs and Turing machines. In particular BB_Turing(n) < BB_python(n), and there is a polynomial f such that BB_Turing(f(n)) > BB_python(n).

In this particular sense the various BB-functions are essentially the same for all computer programs.

5. In an even stronger sense it is essentially the same if you look at "maximal output" or "maximal runtime" or "maximal number of 1s on the Turing tape" (this time there is, e.g., a polynomial function f s.t. BB_output(n) > f(BB_runtime(n)) and BB_runtime(n) > f(BB_output(n)), etc.)

Expand full comment
Taleuntum's avatar

Ah, gotcha

Expand full comment
Flauschi's avatar

But it is good you called me out on that;

"morally speaking" all the BB variants are the same, but of course if you speak about a specific BB(100), then this equivalence might be irrelevant. (If you use a stupid hypothetical language that requires 101 lines of boilerplate to get a syntactically correct program, then BB(100)=0.)

But, as you point out, if we take the usual Turing-machine-BB, then BB(100) is way beyond anything we can conceivably handle.

Expand full comment
Schneeaffe's avatar

>BB(n) usually is defined as the maximum you can get with n lines of code. So you need at least n lines to generate BB(n).

I don't think it works like that? I think you can write a program that generates all possible programs of n lines in fewer than n lines. You can even write a program of finite length, with parameter n, that generates all possible programs of n lines.

Expand full comment
Flauschi's avatar

sure, but you cannot solve the halting problem.

Expand full comment
Schneeaffe's avatar

Yes, but that's a different difficulty. That just means that *there isn't* a program that calculates BB(n), not that it needs to be at least n lines.

Expand full comment
Flauschi's avatar

No, it is exactly the same difficulty.

As far as I understand, you want to propose: "Write a program that generates all source codes of length n; then simulate all these programs; and take the maximal output." You can write a (short) program like that, but this program will just not halt for any (sufficiently large) n, as you "get stuck" trying to simulate the execution of some program which never terminates. If you could solve the halting problem (you can't), then this would work and calculate BB(n).

The other way round works as well: it is easy to see (but probably annoying to prove) that BB(n) is also (modulo polynomial) the same as the maximal runtime of a halting program. So if you could calculate BB (which you can't), then you could easily decide whether a program halts, as you just have to simulate BB(n) many steps: if it is not done by then, it will never halt.
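(To make that concrete, here is a minimal runnable sketch of the enumerate-and-simulate idea for n-state, 2-symbol Turing machines; it's my own illustration, not anything from the thread. Because every simulation has to be cut off at a step cap, it can only ever return a lower bound on BB(n), and that cap is exactly where the halting problem bites:)

```python
from itertools import product

def enumerate_machines(n):
    """All transition tables for n-state, 2-symbol Turing machines.
    Each entry is (symbol_to_write, head_move, next_state); next_state == n
    is treated as the halt state."""
    entries = [(w, d, s) for w in (0, 1) for d in (-1, 1) for s in range(n + 1)]
    # one entry per (state, symbol) pair, so 2*n entries per table
    return product(entries, repeat=2 * n)

def run(table, n, step_cap):
    """Simulate a machine; return its step count if it halts within the cap,
    otherwise None (it might halt later, or never; we cannot tell)."""
    tape, head, state = {}, 0, 0
    for steps in range(1, step_cap + 1):
        write, move, nxt = table[2 * state + tape.get(head, 0)]
        tape[head] = write
        head += move
        if nxt == n:          # halting transition counts as a step
            return steps
        state = nxt
    return None

def bb_lower_bound(n, step_cap=100):
    """A lower bound on the n-state busy beaver (shift) number."""
    best = 0
    for table in enumerate_machines(n):
        steps = run(table, n, step_cap)
        if steps is not None:
            best = max(best, steps)
    return best

print(bb_lower_bound(2))  # 6, which happens to be the true BB(2)
```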

Expand full comment
Donald's avatar

BB(n) is the maximum a Turing machine can do with n states. And while n states and n lines of code are asymptotically similar, you can squeeze more data into a line of code than into a Turing machine state transition rule (at least at n=100).

There is a constant factor here. That factor is enough to allow BB(100) in 20 lines. But BB(400) is probably impossible in 80 characters of ASCII per line for 20 lines.

Expand full comment
Silverax's avatar

You can write all possible programs in one line. Just remove all `\n` from the text and put everything in the same line ;)

Assuming a programming language that allows you to separate statements with `;` or something.

Expand full comment
Scott Alexander's avatar

Okay, this is where I admit that I'm an idiot and don't really understand computing. I would have expected that you write a program to calculate busy beaver numbers, and insert the exact number you want as a parameter (obviously this doesn't work in real life because it would take too long, but it works for St. Joanne's purposes). I asked Gemini how many lines of code it would take to write a busy beaver calculating program and it said 10-50, which felt right to me (it's not actually hard to define the problem, just takes impossible runtimes to calculate it). Is this a bad way of thinking about it?

Expand full comment
Flauschi's avatar

oh yes, exceptionally bad. The busy beaver function is a simple textbook example of a function that is well-defined but (provably) not computable.

Expand full comment
human's avatar

Pretty sure "Calculate BB(n)" is provably computable for any given n. What isn't computable is "Calculate BB(n)" for arbitrary n.

And since she did not need her program to work for n>20, the problem was computable.

But I think there'd still be severe issues with checking whether you wrote the program correctly? My lack of a comp sci degree is showing here.

(edit: I missed that Scott described arbitrary-n)

Expand full comment
Joey Marianer's avatar

Actually it has been proven impossible to calculate BB(8000). Scott Aaronson and Adam Yedidya found a Turing machine with <8000 states which cannot be proven to halt using only ZFC set theory. Gödel's Incompleteness Theorem hurts my brain a little, but the paper is discussed in https://scottaaronson.blog/?p=2725 if you're up for it.

Expand full comment
Taleuntum's avatar

Great result, but there are stronger systems than ZFC, so it's possible that any given BB(n) can be calculated by proving that some specific Turing machines halt in some sufficiently powerful system

Expand full comment
Joey Marianer's avatar

I think that's trivially true, since if "BB(n) < k" is independent of ZFC, you can just consider "ZFC + BB(n) = k" to be a "sufficiently powerful system" and then you can calculate BB(n) in that system. What that actually means is the part that hurts my brain.

Expand full comment
magic9mushroom's avatar

No, it hasn't been proven impossible to calculate it. If it does halt, ZFC is capable of predicting that it halts, and they have not proven that it does not halt.

(If it does halt, though, it proves a certain ZFC+ theory to be inconsistent due to proving itself false.)

Expand full comment
Joey Marianer's avatar

If it halts, doesn't that prove that ZFC is inconsistent? And therefore it's impossible to calculate it _within ZFC, assuming ZFC is consistent_? I was trying to simplify the assertion, but I'll concede that this kind of pedantry is definitely necessary when talking about this stuff. (I talked to someone about alternative axioms and the sentence "OK, you can prove it, but is it true?" came up. Allow me to reemphasize the fact that my brain hurts. :) )

Expand full comment
human's avatar

Thanks, that was interesting!

Expand full comment
Donald's avatar

Yeah. "calculate BB(n)" is only "computable" in the sense that any calculation that doesn't take in a number is trivially computable.

BB(n) is some finite number. There exists a computer program that's just

print(" lots and lots of digits")

There also exists a short computer program that takes the longest-running Turing machine and runs it.

But both these programs are created already knowing the answer.

The formal math of computability theory allows a finite (but not an infinite) amount of information to be magicked into existence by the omniscient programmer.

Expand full comment
Taleuntum's avatar

Yes, it's bad. A program whose input is an integer n and which calculates BB(n) after an arbitrarily high but finite number of steps provably does not exist. It's not hard to prove this: if there were such a program, then you could solve the halting problem in the following way: given an n-state Turing machine, first calculate BB(n), then run the given Turing machine for BB(n) steps; if it halts, output "MACHINE HALTS", otherwise output "MACHINE DOES NOT HALT". As the halting problem is famously unsolvable, no such program can exist.
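(A sketch of that reduction in code: bb() and simulate_for() below are hypothetical stand-ins that cannot actually be implemented in general, which is the whole point. If bb existed, this function would decide the halting problem:)

```python
# Hypothetical sketch only. bb(n) would return the maximum number of steps any
# halting n-state Turing machine makes, and simulate_for(machine, steps) would
# run a machine for at most that many steps and report whether it halted.
# Neither exists as a general program; if bb did, halts() would decide the
# halting problem, which is impossible.

def halts(machine, n_states):
    bound = bb(n_states)                          # hypothetical BB oracle
    result = simulate_for(machine, steps=bound)   # hypothetical bounded simulator
    return result.halted  # not halted within BB(n) steps means it never halts
```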

Expand full comment
TGGP's avatar

You couldn't do it for every n, but this is for one specific n.

Expand full comment
Taleuntum's avatar

I know and I hope I was clear in my answer? See Scott's comment about what question is relevant here: "I would have expected that you write a program to calculate busy beaver numbers, and insert the exact number you want as a parameter, [..] Is this a bad way of thinking about it?"

Expand full comment
Flauschi's avatar

If you want a computable but comically fast-growing function, you could use the Ackermann function.

(but of course this is all not necessary, as the simple exponential function is enough to quickly get past everything that is relevant in practice)
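
For reference, a direct Python transcription of the two-argument Ackermann function; it's computable, but the values explode so quickly that only tiny inputs ever finish in practice:

def ackermann(m, n):
    # Totally computable, comically fast-growing.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))  # 61; ackermann(4, 2) already has 19,729 decimal digits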

Expand full comment
human's avatar

An extremely long computation that a human could definitely program for would be "The traveling salesman problem for every incorporated town & city in the contiguous United States"

Expand full comment
Donald's avatar

Yeah. But sum(range(9**9**999)) is a much shorter and simpler-to-write Python program that won't halt in a reasonable time frame.

Expand full comment
Matt's avatar

There is no program that can calculate Busy Beaver numbers in full generality. Basically what this means is that if you do write a program that can calculate BB(2), it will get stuck when you try to use it to calculate a higher BB number. Generally this means it will end up stuck on some machine that never halts but which, given the logic you programmed into it, it cannot prove never halts; so it either runs that particular machine forever or hits some preprogrammed limit and stops running the machine, but either way it never makes a determination as to whether the machine halts or not.

Assuming you are able to figure out how, you can then add more logic to handle the hard cases, but when you try to use that new program to calculate the next larger BB number it gets stuck again on some new undecidable-with-the-currently-implemented-logic machine. Each time you add new logic to handle the difficult case, you greatly expand the size and complexity of the program. For example, a 25-state Turing machine was constructed that halts if, and only if, Goldbach's conjecture is false, so the program would need to know the solution to the Goldbach conjecture well before it got to BB(100). And there are countless other machines of similar or much greater difficulty that your program would have to handle before it could solve BB(100).

Basically a program that calculates BB(100) is probably possible in some sense but it would be a stupendously large program and not well described as twenty lines of code.

Expand full comment
penttrioctium's avatar

Gemini is in fact wrong, but your original post is basically fine.

Why Gemini is wrong: There is provably no computer program which can take in "n" and spit out "BB(n)".

There are uncountably many mathematical functions N -> N. But there are only countably many computer programs, so only countably many functions N -> N can be implemented in a computer. The function n↦BB(n) has been proven to be one of the ones which can't. (The proof is simple.) (Sidenote: In fact, something neat about n↦BB(n) is that every function N->N which grows faster than it has *also* been proven to be uncomputable.) (Finally: if you believe the Church-Turing thesis is a scientifically true claim about our universe's laws of physics, then every physical computer can only implement computable functions.)

However, the original post is fine. Pick your favorite programming language. Imagine all pure computer programs in it which can be written in 100 characters --- note there are only finitely many such programs. Some of them run forever; others eventually stop. *Among the ones which run forever,* one of them runs the longest. (I mean I guess there could be ties, but the point is that, since there's only finitely many, there's a maximum.) *That particular program* is the 100th Busy Beaver of your programming language. BB(100) for your programming language is how long it takes to run.

If the programming language is unspecified, then for historical reasons we mean Turing machines. BB(100) is the running time of the longest-running 100-state Turing machine, *among the ones which don't run forever*.

St Joanne presumably found the 100th Busy Beaver and mathematically proved it to be so.

This would be a miraculous mathematical effort, far far far beyond non-saintly mathematicians' abilities (we've only recently found the 5th and may never find the 6th). But her program itself is short; remember, the 100th Busy Beaver is merely a particular 100-state Turing machine.

(So the post ought to say that her program *is* the 100th Busy Beaver; there's no need to make a second program which "calculates BB(100)" by executing the original program and counting how many steps it runs for.)

Expand full comment
Victualis's avatar

Except for the typo ("among the ones that run forever" should be "among the ones that don't run forever") this gets my upvote.

If I say "here is a 100-state machine that takes BB(100) steps before halting" then I need to provide evidence for my statement. Yet the number of steps is so large that this requires the kind of massive compression that mathematical proofs provide. But it is not at all obvious that ZF set theory is powerful enough to make fine enough distinctions at these scales: there might be only one remaining candidate machine for a specific number of states, but we can't tell whether it halts. We know that ZF probably isn't powerful enough for BB(748), that it definitely isn't for BB(7910), and that we would resolve the Riemann Hypothesis if we could tell whether a particular 744-state machine halts. St Joanne is a saint for a good reason.

Expand full comment
Donald's avatar

There does not exist any program that, in general, takes in a number n and calculates BB(n). This isn't about runtime. Such a program just doesn't exist.

Why? Suppose you had such a program. You converted it into a Turing machine, and found it took a million states.

You then take another, say, 100 states to write the number "1 billion" to the tape, run your magic program, and then take the result and loop for that many steps.

But BB(1 billion) is the maximum run time of all programs with at most a billion states.

And your program runs for longer than BB(1 billion), because at the end it loops for BB(1 billion) steps. And your program has fewer than 1 billion states: 1 million for your magic code plus a few hundred for setting things up and looping.

Contradiction.

The busy beaver numbers are the maximum run times of halting computer programs of a given size. If you had a way to calculate them, you could make your program run for longer than any computer program of its size.

Computer programs given access to a BB oracle that magically knows all the BB numbers can run for even longer than regular computer programs.
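
The same contradiction as a hedged Python-flavored sketch (busy_beaver() here is the hypothetical routine being argued out of existence, and the state counts are just the ones from the argument above):

# Suppose busy_beaver() existed and compiled down to roughly a million Turing machine states.
def paradox():
    bound = busy_beaver(1_000_000_000)  # a few hundred extra states to write "1 billion" and make the call
    for _ in range(bound):              # then loop for BB(1 billion) further steps
        pass
# paradox() halts and needs far fewer than a billion states, yet runs for more than
# BB(1 billion) steps, contradicting the definition of BB(1 billion) as the maximum
# runtime of any halting machine with at most a billion states. So busy_beaver() cannot exist.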

Expand full comment
warty dog's avatar

Clearly St. Joanne wrote a BB(100)=n proof brute force. And because she was such a great programmer, it only took her 50 LOC ;)

Expand full comment
penttrioctium's avatar

... what? The whole point of BB(100) is that it's the running time of a merely 100-state Turing machine! If you can't program it in 20 lines, that's a skill issue (a skill issue currently afflicting all humans, but which did not afflict St. Joanne).

Expand full comment
Donald's avatar

Nah. It's possible to calculate BB(100) with 1 line of code. And that code doesn't even need to do anything, it can just be

print("[some digits omitted for brevity and because I don't know them]")

It's a rather long line. But still only 1 line.

But actually, there should be a 20-line program that does this, probably.

I opened a Python terminal and calculated 401^200. This big number took up 7 lines, at a fairly normal font size and line width. This is the number of 100-state Turing machines. So you merely need to store the number of 100-state Turing machines that halt (which is smaller, so takes at most 7 lines) and then run all Turing machines until that many of them halt. Alternately, you could store a single 100-state Turing machine.

Fitting an arbitrary 100-state Turing machine into 20 lines of Python might take a bit of code golf but is doable.

The tricky part is finding the 100th Busy Beaver so you can write this.
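
A hedged sketch of the counting trick described above; enumerate_machines() and step() are invented helpers, k_halting is the huge constant you'd somehow need to know in advance, and of course the loop would never finish in the real world:

def bb_100_by_dovetailing(k_halting):
    # Dovetail all 100-state machines; once the known number of halters have halted,
    # the current step count is BB(100), because the last halter took the most steps.
    machines = list(enumerate_machines(n_states=100))  # finite, roughly 401**200 of them
    halted = set()
    steps = 0
    while len(halted) < k_halting:
        steps += 1
        for i, m in enumerate(machines):
            if i not in halted and step(m):  # step(m) advances m one step; returns True once m halts
                halted.add(i)
    return steps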

Expand full comment
Melvin's avatar

Saint Vladimir once dreamed that he was a butterfly, and from that moment on could never again be sure that he was not a butterfly dreaming he was Saint Vladimir.

He decided to assign a 50% probability that he was just Saint Vladimir, a 25% probability that he was a butterfly dreaming he was Saint Vladimir, 12.5% probability that he was Saint Vladimir dreaming he was a butterfly dreaming he was Saint Vladimir, 6.25% that he was a butterfly dreaming he was Saint Vladimir dreaming he was a butterfly dreaming he was Saint Vladimir, and so on.

This he found satisfactory until someone pointed out that he might actually be, say, a lizard dreaming he was a butterfly dreaming he was Saint Vladimir. This was no problem, decided Saint Vladimir; since there is only a finite number of species it ought to be possible to compute (if not enumerate) all possible combinations and assign an appropriate finite probability to each. Sadly he died long before his great work could be completed since butterflies only live a few weeks.

Expand full comment
Melvin's avatar

Saint Gilbert owned a fence across a road, which he refused to remove until he had fully understood the reasons why it was erected in the first place. Sadly, extensive research at the local library failed to turn up any records of the building of the fence, and put him into conflict with Saint Keith, who pointed out that the community had a long-standing tradition of tearing down any fences whose reasons had not been properly recorded.

Saint Gilbert pointed out that this tradition itself could be changed if anyone could figure out why it existed in the first place, so in a spirit of adversarial collaboration both men sought to find out the origin of the unrecorded-fence-destruction tradition. After weeks of assiduous research they had their answer -- the tradition of destroying unrecorded fences was implemented when a previous tradition of defaulting to keeping unrecorded fences led to the deaths of many villagers who got stuck on the wrong side of poorly placed fences and were eaten by roving allosauruses.

Both men were pleasantly surprised to find out that this wasn't one of those infinite regress things, and resolved that now they'd understood the origin of the fence destruction tradition it was okay to change the tradition and hence keep the unrecorded fence, which they resolved to defend with their lives -- one from the left, and one from the right. They were martyred when they were eaten by allosauruses, which had long since spread to both sides of the fence.

Expand full comment
Deiseach's avatar

It seems to me the original purpose of the fence must have been to keep the allosauruses in (or out, depending on which side of the fence you were) but since it clearly failed, then tearing it down and replacing it with a more effective containment system is the right answer.

Expand full comment
Black Mountain Radio's avatar

I love this one. It felt like someone trying to solve a zen koan with math until the very end.

Expand full comment
Celegans's avatar

This doesn’t work because dreams aren’t recursive. If your dream-self falls asleep and has a “dream”, you’ve merely replaced the contents of your base-level dream with the contents of the subdream. It’s array iteration, not tree traversal. This isn’t Inception.

Expand full comment
comex's avatar

As it happens, last night I dreamed that I had acquired the power to manifest objects from my dreams into reality. The plot of the dream involved me repeatedly ‘falling asleep’ into a ‘lucid dream’ to imagine things, then ‘waking up’ in order to manifest them. During the parts where I thought I was asleep, I was aware of both a ‘dream world’ and a ‘real world’ (because I thought I was lucid-dreaming), yet the ‘real world’ was actually part of the dream. I think that counts as a dream-within-a-dream in some meaningful sense.

Disclaimer: The ‘thought I was asleep’ and ‘thought I was awake’ phases may not have been entirely separate moments in real time. They were separate according to the plot, but dream plots often have a chronology that’s more elaborate than the actual sequence of the dream. In this case, the phases were likely mashed together to some extent.

Expand full comment
Aron Wall's avatar

Have you read "The Raven Boys" series by Maggie Stiefvater? There is a character with this power.

Expand full comment
Nine Dimensions's avatar

If I ever have to explain what a rationalist is to someone, I'm sending them this

Expand full comment
David L. Kendall's avatar

My eyes have been opened.

Expand full comment
Deiseach's avatar

The existence of a St Avi the Greater implies the existence of a St Avi the Less(er), is there a vita for this saint?

(This is very funny, thanks for the cheering up!)

Expand full comment
Melvin's avatar

St Avi the Lesser was committed to sainthood but had many other interests too. He resolved to fulfil the exact minimum criteria for sainthood in a single weeklong binge of hyper intensive prior adjustment.

Expand full comment
Chris's avatar

Splendid work, thank you for keeping up with the fiction writing. I especially liked the (if I remember my Roman numerals correctly) Pope Eliezer the 77th.

Expand full comment
Doctor Mist's avatar

I was trying to figure out if there was some significance to 77?

Expand full comment
SCPantera's avatar

John Wick but instead of everyone including all the prostitutes secretly being assassins they're all secretly statisticians.

Expand full comment
hnau's avatar

I hear that beyond the 32nd level of self-aware semi-ironic quasi-fictional inside-baseball meta-commentary the value overflows and becomes indistinguishable from talking about the object level again.

Expand full comment
Tom J's avatar

“rationalists”: this

also “rationalists”: why does everyone keep acting like we're a weird cult we're just trying to use math and epistemics to cleanse our minds from worldly impurities guys i totally swear we're not a cult

Expand full comment
Viliam's avatar

I think actual cults usually don't make fun of themselves

Expand full comment
magic9mushroom's avatar

I think the definitions of "cult" that are sufficiently narrow to make everything in them basically terrible are sufficiently narrow that Rationalism as a whole doesn't qualify (certain groups *within* Rationalism do, of course; the Zizians in particular would qualify under essentially all normal definitions).

There are broad definitions under which the Rats as a whole count, but those are so broad that "cults are bad" ceases to be true because you hit lots of things that aren't actually terrible. Is every monastery a cult? Group living based on shared beliefs seems about equal to the Bay Area Rats (and of course while that's the biggest concentration, there are plenty of Rats who don't live in such places).

Weird? Yes, we have a bunch of autists, of course we're weird.

Expand full comment
anomie's avatar

> because you hit lots of things that aren't actually terrible

...In your opinion. There are plenty of people that believe that the world only needs one god.

Expand full comment
Tom J's avatar

i didn't say anything about cults being bad, but since you've brought up the subject: this particular cult has contributed to among other things several murders, a colossal crypto scam, and the spread of some ideas that (i suspect) the new American aristocracy will cause a lot of harm with, though we'll see how that all shakes out in the long term

also, by the standards of the cult's own theology, if you build a silicon god before you figure out how to program its morality constraints right, it will kill everyone. and yet people from the cult keep trying to do this!

Expand full comment
magic9mushroom's avatar

As I noted above, Ziz is a terrible person and a central example of a cult leader. I will note that while her ideology does draw on Rationalism, the vast majority of Rats despise her and she despises the vast majority of Rats (I mean, literally one of the times she got arrested was for showing up in robes and mask to blockade a Rat event).

I think if you look deep enough into most movements, you will find some small fraction of them that's gone rotten. The Mormons have the FLDS pulling child marriages, for instance. To quote The West Wing, "every group has plenty of demons". I don't think the Ratsphere as a whole has much to apologise for with Ziz. I *do* think we have a little to apologise for with SBF.

As far as our members doing the worst possible thing by working in AI capabilities... well, I think that in itself speaks to us not being a central example of a cult insofar as Rat leadership can't control the base. I personally would appreciate it if we were a bit *more* cultlike and *could* keep our base from doing the worst possible thing (and could also have greater lobbying power via threatening to vote as a bloc), but eh.

Expand full comment
grumboid's avatar

This is great, but a part of me sort of cringes at the idea that anyone who dies for stupid reasons could be a rationalist. I guess it was common among saints, and this story follows that pattern.

Expand full comment
Edmund's avatar

If his objection to becoming a billionaire is that there weren't enough high-impact orgs to donate to, but he went on to found several high-impact orgs himself, shouldn't St. Avi have just done both and then donated to his own organisations?

Expand full comment
Celegans's avatar

But then he would have unleashed a Moloch-funded AI lab on the world.

Expand full comment
Procrastinating Prepper's avatar

He doesn't have time to do both, he's only one man!

EDIT: more serious answer. Even if St. Avi is capable of doing both, he recognizes that there is currently a dearth of high-impact orgs and a surfeit of money. So comparative advantage suggests he should devote 100% of his work time to the former to maximize the charity accomplished.

Expand full comment
magic9mushroom's avatar

Probably not high-*enough* impact to outweigh the AI lab.

Expand full comment
Peter Gerdes's avatar

This is so awesome. I bet this is what it feels like to some religious ppl/scholars to read the scriptures (well ok they probably laugh less and take it more seriously).

The unfortunate part is you really need to be all weird like this for your doctrine to take off.

Expand full comment
Matthieu again's avatar

Not bad, but do Rationalist saints slay dragons? Do they carry around their own severed head or eyes? Do they come in lots of 11,000 hot virgins? I think I'm staying in team Christianity.

Expand full comment
Viliam's avatar

No invisible dragon in a garage can survive meeting a true rationalist.

Expand full comment
Deiseach's avatar

A Rationalist saint would carry around their kidney on a plate, indicating the donation to a stranger they made.

Expand full comment
Feral Finster's avatar

Bastet's tail, this is funny.

Expand full comment
Andrew's avatar

Contrary to implication, most popes were not named Eliezer. That one just thought 77 was a lucky number and even popes have their flaws.

Expand full comment
Procrastinating Prepper's avatar

The Eliezer popes, unlike other popes, are ordered by impact rather than chronologically.

Expand full comment
magic9mushroom's avatar

Wouldn't you need to be able to tell the future to know what numbers they are?

Expand full comment
Taleuntum's avatar

They are relabeled every year by the Council of Eliezers.

Expand full comment
Christos Raxiotis's avatar

Saint Alex is still here at least

Expand full comment
José Vieira's avatar

What is the meaning of Crunch Time in this context?

Expand full comment
Taleuntum's avatar

When ASI/AGI is near, and we have very little time left to come up with a working alignment plan

Expand full comment
José Vieira's avatar

Thanks!

Expand full comment
Donald's avatar

I.e., about 6 months ago and onwards.

Expand full comment
Hyolobrika's avatar

Not beating the cult allegations ;)

Expand full comment
Marcin Przyborowski's avatar

I'm so happy to discover that I'm not the only one who is visited by Gwern in their dreams!

Expand full comment
Jessica Finn's avatar

headcanon: fables told to dath ilani schoolchildren

Expand full comment
Laplace's avatar

“What use is money, when Open Philanthropy already has $20 billion and can’t find enough high-impact charities with room for more funding?”

I know this whole thing is a joke, but I want to clarify that I think this is very false. Lots of high-impact charities have room for more funding, and OP doesn’t fund them anyway. This has been the case for a while, at least multiple years at this point. I suspect it may even have been like this literally since the start. If you didn’t know the right people, “funding is not the bottleneck” might have always been a myth.

I think there’s multiple reasons for this, but one is that OP doesn’t really have 20 billion to spend on whatever is high impact in a meaningful sense. They are constrained in how much they can spend and what they can spend on by Dustin’s opinions.

Expand full comment
Lypheo's avatar

Your rationalist fiction is some of my all-time favorite content - would love to see more of it!

Expand full comment
whenhaveiever's avatar

"...later evidence from other missionaries demonstrated this to be well-calibrated."

Damn. Subtle and so cold.

Expand full comment
Donald's avatar

Had Saint Tarski of TDT been summoned by the emperor, he would have said:

Covid is 100% natural. No, 150%. No, if you will it, covid can be infinity percent natural!

The Emperor, predicting a response that would make his demands look stupid, never summoned Tarski to his court.

Tarski later died after surgery intended to clone him. As a demonstration of his theories, he had requested to be cut up into finitely many pieces and reassembled into 2 complete Tarskis.

Unfortunately the surgery failed as the surgeons couldn't choose where to cut.

By the time Tarski realized this would happen, he was already logically committed to the surgery, and backing out would have had negative reputational consequences in a larger measure reality branch.

It is said that when he first mentioned his plan to other people, and then phoned the surgeons asking for a quote for an experimental surgery, he was in good cheer, knowing that this version of him was going to die, but that he was in a sufficiently unlikely reality branch that it didn't matter.

Expand full comment
Anonymous Dude's avatar

The only unrealistic part of this is how many women there are.

Great writing, by the way. It really does remind me of the way they talk about saints in hagiographies, though I'll wait for Deiseach to weigh in.

Expand full comment
Mr. AC's avatar

I wonder how Scott comes up with this. Did one of the saints jump into his mind ("St. Joanne of ARC" perhaps?) and the rest flowed from that one nucleus?

Expand full comment