885 Comments
MichaeL Roe:

From my latest adventures in AI…

So, the idea here is to get an AI to build another AI. I give the first AI a theme, then it picks training texts relevant to the theme, grabs them from the Internet, then initiates a training run to fine-tune a second AI (possibly a retune of itself). So, off we go…

Umm. DeepSeek, why are you downloading that black magic stuff?

R1: “The PGM IV.296-466 love spell ("Philtrokatadesmos") offers a crucial dimension for Eros-AI by embodying Eros as an operational force—where desire is harnessed through ritual, materiality, and cosmic mechanics.”

(I’ll skip over some of R1’s answer here.)

R1: “Result: An Eros-AI that understands desire not only as transcendent (Rilke) or philosophical (Plato) but as a tactile, dangerous, and manipulable force—mirroring humanity’s darkest and most creative impulses.”

Umm … right. I am not entirely sure that’s what I wanted here, but …
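
For concreteness, a minimal sketch of the pipeline described above -- theme in, candidate texts picked, corpus downloaded, fine-tune kicked off. The endpoint, model name, prompt wording, and the `finetune.py` harness are illustrative assumptions, not the actual setup:

```python
# Sketch of the theme -> corpus -> fine-tune loop. Endpoint, model name,
# prompt wording, and finetune.py are all illustrative assumptions.
import json
import subprocess
import urllib.request

from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="...")

def pick_sources(theme: str) -> list[str]:
    """Ask the planner model for source texts relevant to the theme."""
    resp = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{
            "role": "user",
            "content": f"List public-domain texts relevant to the theme "
                       f"'{theme}' as a JSON array of URLs. Reply with JSON only.",
        }],
    )
    return json.loads(resp.choices[0].message.content)

def build_corpus(urls: list[str], path: str = "corpus.txt") -> str:
    """Download each source and concatenate into one training file."""
    with open(path, "w", encoding="utf-8") as out:
        for url in urls:
            out.write(urllib.request.urlopen(url).read().decode("utf-8", "ignore"))
    return path

corpus_path = build_corpus(pick_sources("Eros"))
# Hand the corpus to whatever fine-tuning harness you use (hypothetical here):
subprocess.run(["python", "finetune.py", "--data", corpus_path])
```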

Gunflint:

Florida man doesn’t like the number the data produces, so he fires the messenger (a messenger approved by 88 US Senators, BTW), saying she ‘rigged’ the result to make ‘ME’ (caps are Florida man’s, of course).

https://www.politico.com/news/2025/08/01/trump-firing-bureau-labor-statistics-chief-jobs-report-00488960

And the country is so worn out by outrageous, sometimes awful, behavior that it just seems to shrug. I know that some on this forum consider it gauche to say things like this, but IMO this is pretty fucked up.

I also think the feeling that FM should not be criticized too much because something, something Outgroup, is more than a little uncool, not to mention an inaccurate interpretation of the source essay.

But yeah, according to Peter Thiel, with his ‘exceptional circumstances’ New Zealand citizenship, FM is the best hope our country has.

In the meantime I’m waiting to hear back on my applications to be a janitor at some monastery (any monastery really, but preferably one with a vow of silence).

Eremolalos:

I don't criticize FM much because there's nowhere to go with it. Having him as president is sort of like having leukemia -- when you get a chance to hang out with some people, you want to talk about something else. Other reason is that those who are still enthusiastically proTrump on here are pretty savage and feral, and if I engage with them I end up having to choose between running and spraying them in the face with Raid.

Jacob Steel:

Prediction that I want to register somewhere - I think it is unlikely (~30%) that Keir Starmer will carry through on his threat to recognise a Palestinian state in September.

Dan:

Since about 1960, the life of ordinary people in developed countries has not become better. On the contrary, most people live so long that they get Alzheimer's, Parkinson's, or another neurodegenerative disease and suffer for years. In 1960, most people thankfully died before the age of 75. Today life is shitty, while in the 1950s it was much better. So why should society give money to universities, if science fails to improve people's lives?

Viliam:

> the life of ordinary people in developed countries has not become better

You mean, from the perspective of getting Alzheimer's, or in general? What is your evidence?

Dan:

No, what is your evidence that science is so good that we should spend a lot of money on it?

Eremolalos:

No, buddy, you started this, it's reasonable to ask for your evidence before laying out one's own.

Johan Larson:

Multiple deals made by the Trump administration suggest that the general terms of trade with the United States will be a 15% import tariff imposed by the US government, and no countervailing tariff on imports imposed by other governments. There may be special carve-outs for sensitive or strategic sectors, but 15% this way and nothing the other will be the pattern for most goods.

Does anyone here run or work for a business that will have a huge wrench thrown in its works if this happens? What sort of problems are you expecting?

Viliam:

Sounds like an extra 15% tax on American citizens that most of them won't understand as a 15% tax, so they will probably blame someone else (immigrants?).

Eremolalos:

Yeah, it's those fucking immigrants. They squat under the checkout counters, hooking $5 bills with their filthy fingernails. 15% of what you hand to the cashier goes down their throats.

Melvin:

Given the state of the US budget deficit and decades of failures to cut spending, a new tax that most people don't understand as a tax seems like a pretty good thing, unfortunately.

Neurology For You:

I’ve been buying high-end medical equipment for my small practice directly from Beijing; honestly, I’ll probably continue, since the price is very good even with tariffs.

But I expect all the small-ticket items like syringes and needles to go up in price as well, since it’s a low-margin business.

Strange Times:

I have a friend, self-taught in ML, very smart and interesting guy. He wrote this paper as a Medium post:

https://medium.com/@extenebrislucet/split-the-difference-2a350c6fc714

He's now working on attention optimizations. He's already produced some impressive preliminary results compared to the standard implementation run under identical conditions, but is having more difficulty writing/optimizing his implementation as GPU code with only Claude for help. It's genuinely impressive how far he's gotten already with only Claude, but I just think he could benefit a lot from reaching out to people in academia/industry with more expertise (he says they'll just ignore him until he has a lot more to show, or alternatively steal his ideas).

Does anyone in the field have advice for him, or would be interested in reaching out to him?

Eremolalos:

I read the abstract of his Medium piece. I do not have advice for him, but would be interested in talking with him because I like the way he thinks. I'm not in tech or industry though, or part of any network that would allow me to help him be better known. I'm a psychologist, by the way, and very interested in AI. If you think he might like to have a rambling talk with me about cognition, biology and AI, let me know how he and I could be in touch.

Ogre:

I am a bit pissed at Hanania's "manufacturing jobs fetish", because I think it is just "not losing the next big war fetish". Am I getting something wrong?

It is not simple nostalgia, like what nostalgia for farming would have looked like in 1960. It is that you need both steel and farms to win a war. So it is different now.

agrajagagain:

"I think it is just "not losing the next big war fetish"."

I mean, that's also a pretty stupid fetish to have. And if one does have that fetish, conducting oneself the way the U.S. government has recently been conducting itself is idiocy so breathtakingly extreme that there are no words to describe it.

So first and foremost, "the next big war" is at an unknown date, against an unknown adversary, on unknown terms and *may not ever happen.* That doesn't mean that being ready in case war breaks out is a bad thing--of course one should be prepared. But in cases where the marginal unit of extra preparation trades off against the marginal unit of extra prevention, you really, REALLY want to go with the prevention[1]. Large-scale modern wars are *ruinously* expensive and destructive, even for the victors[2]: the only truly winning move is not to play.

Second, there is one resource that is far, far more valuable than any other when fighting war at that scale: that resource is called "allies." Having lots of others on your side--even if they're not fighting directly--is enormously valuable. Just ask Ukraine (and then go ask Russia to get the other side of that).

So given all that, a foreign policy that manages to simultaneously piss off nearly every other country in the world--including staunch and longtime allies--is ruinously stupid. It both makes the next war more likely and makes the U.S. much less likely to win it. No gains in manufacturing base--and it's not yet clear that there will be much of any--are going to make up for the whole row of burning bridges.

Less compelling but still worth mentioning: the sort of jobs that the U.S. government wants to create are unlikely to be all that useful on this score anyway. The social root of the manufacturing jobs fetish is angry, angsty middle-class Americans who are pissed that the modern economy has left them behind[3]. The reason it's "manufacturing jobs" is because *historically* those are jobs that paid well despite requiring little formal education. But those are exactly the sort of jobs that are easy to ramp up in a time of crisis. The areas that would have wartime implications are those that require maintaining a domestic pool of highly specialized knowledge and skills[4]. Knowledge and skills that your angry, angsty, middle-class American is unlikely to have and generally ill-suited to acquiring (not to mention uninterested in).

[1] This is especially true when you're as ridiculously well-armed and well-resourced as the U.S. military already is.

[2] Or at least, one assumes they would be, based on our limited data. The last large-scale war that occurred anywhere on Earth ended 80 years ago. It's hard to imagine that present-day warfare at the same scale would be *less* destructive.

[3] Which is a reasonable thing to be pissed about, but the government they have now is definitely, definitely going to make it worse, not better.

[4] See for example, semiconductor fabrication.

beleester:

I don't follow Hanania, so I don't know the post you're referring to. But if your goal is to secure wartime supply chains, then while you may want tariffs, you still wouldn't want to do tariffs the way that Trump is doing them. You would want to target them at specific industries and supply chains the military cares about, to maximize the benefit while minimizing the impact on consumers. And obviously, don't tariff allied countries, because they'll still be useful suppliers during a war and you don't want to piss off your allies.

Like, if you care about "not losing the next big war," then do things that will actually save you from losing the next big war, and not things that make Canada question if they should be on the same side as you during the next big war.

Ogre:

Of Scott's many writings, https://slatestarcodex.com/2019/10/16/is-enlightenment-compatible-with-sex-scandals/ was personally relevant to me. I practiced Karma Kagyu Buddhism for years, and I know perfectly well that most practitioners and most lineage-holders were monks living a very strict lifestyle. Presumably that helps with reaching enlightenment, but the idea that the kind of freedom from restrictions that enlightenment gives you could lead to bad behaviour is also plausible.

Yes, I know all about Chogyam Trungpa: how, as a young monk (and tulku), he and three others were selected by an influential nun to be sent West, and all four turned into weirdos of some kind. He really internalized the Vajrayana "no rules" thing, and was facing too many temptations. Mostly women and alcohol.

The only worldly lineage-holder was Marpa, and then Lama Ole Nydahl basically resurrected the idea. There were no big scandals about him, but let's say some consider him controversial: https://buddhism-controversy-blog.com/2014/06/30/propaganda-the-making-of-the-holy-lama-ole-nydahl/

I don't know, I knew Ole as a warm-hearted, helpful, kind person with really good knowledge. But we have to consider that his entire movement is based on his personal charisma. He is a manly, handsome ex-boxer with a good sense of humour. The teachers he selects tend to be attractive, and even the whole set of students leans towards attractiveness, which was for me a major selling point, so many hot women.

This is not a bad thing, but it is a little risky. It could mess with your mind. Imagine how bad pop music is sold by genuinely good-looking singers, and how bad movies are sold by good-looking actresses and actors; it is entirely possible that some kind of low-quality spirituality could be sold by attractive people.

Eremolalos:

Chogyam Trungpa was succeeded by an American-born man whose name I have forgotten, who had sex with many sangha members, mostly males, without telling them he was HIV positive. He passed the disease on to several, and died of AIDS himself. I was involved with this Buddhist tradition in that era and can remember the Regent, as he was called, speaking at meditation retreats. He'd enter the room along with several other people he'd apparently been hanging out with til then. He always had the air of someone who had been doing coke in the back room. It wasn't that he seemed high, just perpetually sleazy.

Neurology For You:

There’s a thing that happens when charismatic people become spiritual leaders, and you see it in every faith tradition as far as I can tell, regardless of cultural origin, asceticism, celibacy, whatever.

Viliam:

Maybe it's a thing that has been happening forever. There are different rules for the charismatic spiritual leaders, and for the ordinary followers. You are not supposed to notice it, and definitely not supposed to talk about it publicly.

Maintaining such systems was much easier in the past. Most people didn't even get in contact with their religious guru, outside of a ceremony. When the guru abused you in private, you had no evidence. If you talked about it anyway, you could be easily silenced, and most people wouldn't believe you (and those who would, would prefer to stay quiet).

The reason why religious gurus of all faiths seem so sex- and power-hungry recently is that now we can discuss the evidence without much fear of retaliation. (Even extremely vindictive sects, such as Scientology, can be defeated by the anonymity of the internet.)

Ogre:

I forgot the part where we actually have an example of really low-quality "spirituality" being sold by attractive people, namely Tom Cruise.

Melvin:

Or whatever the heck Gwyneth Paltrow is selling.

Carlos:

I thought Vajrayana has a long tradition of householder lamas. Dudjom Rinpoche, head of the entire Nyingma school, was a householder.

Agree on the attractiveness thing. The Sufis (I hope to be a Sufi) would say that being taken in by things like attractiveness or charisma means one is operating on quite a superficial level.

Ogre:

Long, long, but still relatively rare compared to monks.

Sebastian Garren:

Well... In case no one else has asked... when can we do a year-long open threads experiment?

Also, the Commentariat article failed to mention the number of times our number one poster has been banned. Remember back in, well, what was it, 2017ish, when Deiseach was banned and a bunch of sad bois got together to beg for her back because the comments section wasn't the same? Good times.

Speaking of Reigns of Terror, I'd appreciate another one, if only for the spectacle of it.

Loominus Aether:

I also think a Reign of Terror would be helpful.

Although I suppose a Chrome plugin that lets you filter out comments by username could be useful, too.

dirk:

For the latter case, Substack has a block function that AFAICT hides comments from the blocked user.

Eremolalos:

No, it doesn't, I'm pretty sure. I hunted for quite a while, and even asked GPT. If you own a blog you can block or ban commenters on there, but you can't block commenters so that you don't see them if the comments are happening on somebody else's blog. I wish somebody would code a little dingus that makes it possible. Scott's pattern is to quickly ban people who put up bad comments about one of his posts in the first few hours, and to be fairly energetic in blocking similar bad posts for the rest of the day. After that he checks out. I have reported comments that were absolutely savage personal attacks on speakers -- not advancing the argument, not true, and sure as hell not kind. Some never got banned. Some got banned three months later when Scott finally got around to dealing with the ferals. By that time, most had carried out a couple dozen more atrocities on here.

dirk:

I tried just now and AFAICT it does indeed work. I've created an imgur album showing where you can find the block option, and a before/after of temporarily blocking Sebastian Garren (OP of this subthread), showing that his comments disappear when blocked; https://imgur.com/a/1VEsUT4

Loominus Aether:

thanks, I'll look into this!

Ogre:

I thought Scott and Deiseach were friends? I just assumed it, because Scott went to medical school in Ireland and Deiseach is either the only or the most prominent Irish person here, so I assumed an IRL friendship.

Melvin:

Sure I mean there's only five and a half million people in Ireland, pretty sure they all know each other.

Gunflint:

A delightful idea if true. I’ll go all X-Files conspiratorial here and note it’s only about an hour’s drive from Cork to the vicinity implied by Deiseach’s user name.

Loominus Aether:

"when a friend makes a mistake, the mistake is still a mistake, but the friend is still a friend"

The Ancient Geek:

I don't think they have met IRL.

Sebastian Garren:

If true, it definitely doesn't mean she hasn't been banned... at least twice. :)

theahura:

On the back of news announcing that Harvard is planning to give the Trump admin some $500m in the hopes that this will result in the Trump admin laying off its attacks, I wanted to get some takes on 'universities' in this community. I suspect many here (lurkers and posters) are based in the Bay, which, due to Silicon Valley, is probably more hostile to the university system than the average city in the northeast.

Biases up front: I'm generally not a defender of the university system, but I think what the government is trying to do here is probably the single most destructive thing that could occur during this admin, should they achieve their intended goals.

The short version of my position is something like:

- The modern university is the backbone of all research that happens in the country;

- Mostly that research ends up benefiting *private* industry, as professors spin off startups and researchers land valuable jobs (including at VC-backed startups in the Bay)

- That research then gets commercialized and scaled up, benefiting everyone

Longer version of my position is here (https://theahura.substack.com/p/silicon-valley-is-wrong-about-federal)

Examples include mRNA research --> Moderna, ARPA --> the entire internet, self-driving cars --> Waymo/Nuro/etc. But really you could point to any technological innovation in the last 100 years and find a direct line to some grant that resulted in public funds going to a researcher who made it happen.

Universities take in smart people from everywhere in the world, make them researchers, then make them billionaires. Everyone benefits from this, but America in particular ends up on the top of the world's research in many domains.

So why on earth are some of the loudest and most influential voices in Silicon Valley, people who depend on these researchers and on this pipeline, so giddy about the destruction of the modern university?

Tori Swain:

mRNA research is DARPA research. This is not controversial, is it? ARPA is DoD too, originally. I've done work on "self-driving cars" (or similar) for DoD.

Yes, public funds go places. That doesn't mean we need to use universities to do it. (It's also interesting that all of these are DoD, because that implies they're doing a better job at getting "pie in the sky" projects that actually take flight and change humanity).

Neurology For You:

DARPA is a funding mechanism; sometimes it funds research in Federal labs and sometimes in research universities. I guess you could simultaneously stand up huge Federal labs while starving the research institutions in the hope everything will work out, but it seems like a lousy idea.

Can anyone provide a counter example? A state which has abolished research universities but continued to have significant research?

Bryan Caplan wrote a post recently called “let them be Hillsdale” arguing that it would be better to destroy institutions or put universities under strict ideological policing to force them to stop being woke, but Hillsdale isn’t a research powerhouse and I honestly don’t understand his argument.

Tori Swain:

I know that DARPA funds go to universities. I rather think the people complaining about "useless research" would be a lot more amenable to DARPA-style grant funding. "Self-driving cars? Let's investigate modeling human brains! That'll surely help us get better self-driving cars."

theahura:

> That doesn't mean we need to use universities to do it

Our existing research pipelines are empirically extremely strong. I guess you're right that we don't *need* to use universities, but Chesterton has a pretty big fence. Destroying our research pipelines and figuring it out later seems really dumb to me.

I think there's also likely good reasons that the universities do this work. We want smart people to do research. Smart people congregate in universities. Therefore we should fund universities to do research.

(Also, the mRNA research was mostly NIH; DARPA came in later to fund Moderna specifically. There are a ton of other examples: a lot of drug discovery comes out of NIH, and the Dept of Energy also does grants. But, yes, a lot of funding comes out of defense, a solid chunk of which also goes to universities.)

Tori Swain:

Rapid vaccine development has been a DARPA priority for a while. "Letting" NIH fund it is a valid pathway for DARPA to deliver money to researchers.

Ogre:

I guess the issue is that a university is treated as a single monolithic unit. Perhaps the Trump admin has nothing against STEM research; they would be crazy if they did. Perhaps they have a problem with anti-Israel protests or "Oppression Studies". The problem is, they are either incapable of or unwilling to target what they have a problem with specifically, so they are hitting the whole university.

Question: why does the same university that teaches queer-feminist interpretations of Shakespeare also have to teach and research STEM? Wouldn't it be better to split them into separate universities? I understand that some, indeed many, want STEMers to have some idea about the humanities, but still, in that case the STEM university might just require its students to get 10 credits at the humanities university too. Problem solved?

This way, if anyone wants to hit the humanities universities, at least the STEM ones would not get any fallout.

Many European universities are like that. I used to live next to the Veterinary University of Vienna. They taught nothing but veterinary subjects. Why not? At least the government and everybody else will treat them according to how well they do that one thing and nothing else. The students did not seem very political; I mean, sure, young people have passionate opinions, but it was not a case of constant activism or demonstrations.

Viliam:

This is just a guess, but perhaps in the past the "universal universities" made more sense, because their parts were interconnected. For example, if you were more natural science oriented or more religion oriented, it could simultaneously change your opinions on philosophy, chemistry, biology, etc.

This may be difficult to understand in our age, when natural science education is universal, and education is fragmented. But to give you a specific example, defending atomic theory meant contradicting Aristotle, and contradicting Aristotle could get you in trouble with the church, which used Aristotle's teachings to "scientifically prove" transubstantiation. Therefore, being less religiously orthodox could indirectly make you more likely to support the atomic theory, even if there is nothing irreligious in atoms per se. So it made sense to support the entire "openness to potential heresy; also atoms" bundle, because the department of atomic theory could not survive alone in a religiously orthodox territory.

And the reason this stopped working, in my opinion, is that people (both students and teachers) in universities became extremely specialized, so... these days you could probably cut most universities into many pieces without losing much of the value, and they are already disconnected on the personal level, and only connected financially.

That said, it makes me wonder whether there are significant exceptions.

Tori Swain:

The people who make bank on "interconnections" aren't in universities anymore. They don't tend to write good grants, or be patient enough to work on one subject for a long time. They come in, say "Here's the paradigm you should be using" and float off to the next project.

Viliam:

Again, specific examples would help me understand what you are pointing at.

Tori Swain:

The Renaissance Man -- okay, you want an example? How about the guy who didn't graduate college, yet somehow has pet grad students (he can give them assignments like "count the fireworks" for fun). He dive-bombed the whole "protein folding" research field, and in "not very long" eviscerated 20 years of research (due to the chemists forgetting physics, apparently -- their strategy was never going to work). He's written influential books, gotten dead men elected, etc.

This is not someone in a university. It's safe to say that a pedigree like this doesn't exist in any university. Find me some (current) professor whose work won a Nobel Prize and who jumped to a different discipline straight afterwards.

Viliam:

You seem really resistant to being specific. If I can't guess the name of the person you are hinting at... does that make you feel smarter? Enjoy the feeling! I just don't think it is helpful for communicating clearly.

Tori Swain:

Shall I lay out STEM research that actively supports propaganda? When folks no longer want to hear old research, and dismiss it as lies without even bothering to have a reason why the research participants would be lying... you've got a problem.

Viliam:

Do you have specific examples?

Tori Swain:

How much IQ research shows Jews/Chinese at the top? Those are deliberately skewed numbers (away from "g" and towards memorization), in order to preserve "cultural narcissism." Midwits like answers they can memorize, because they aren't good at living with uncertainty and actually finding answers themselves.

You could make a vocabulary test that tests whether people REALLY know the words (distinguishing "complex" versus "complicated", for example). Or you can make an "easy to memorize" test that asks for synonyms, and uses antonyms as the "wrong answer."

"Explain how fungibility distinguishes coins from horses." (This is a good question, but it's not one that you're going to get an IQ 115 person to answer, unless they're particularly passionate about math.) The follow-up should ask for a specific economic comparison between cultures that used coins versus horses as "fungible trade goods."

Just one example off the top of my head. Other examples include "why would 1980s lesbians lie about lesbianism being environmental and not inherited?" (this is the general conclusion I get from "woke" when I bring it up, along with newer research showing lesbians saying something else -- I have a very solid reason why gays would say "gayness is inherited" (actually, a couple), but for ideological reasons, folks want to say gay and lesbian are part of the same "weirdness" and not two different things entirely -- so they defend/force a conclusion that nobody originally held).

Viliam:

You said "STEM research", and then you gave examples about IQ research, and homosexuality research, which I believe do not belong to STEM.

(By the way, I find it possible that homosexuality works differently in women and in men, with women being more "flexible" depending on the environment. Of course that would not explain why the reported answers have changed for lesbians.)

Johan Larson:

Critics of universities tend to have a range of complaints.

Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.

Others dislike the use of college education to qualify people for white-collar professional jobs. The utility of the material taught varies widely between courses of study. Some people in engineering and accounting can point to things they learned in their courses that they use every day in their work. But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work.

Finally, university faculties lean very far left, politically. This tends to alienate conservatives, who are unhappy to have to go through institutions where the staff believe they are at best wrong, and probably either soft in the head or morally deplorable.

theahura:

> Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.

I think this is untrue. Or at least, it is not empirically rigorous. Why is 'papers' the relevant measure, instead of, like, researchers? If a PhD at MIT publishes a computer vision paper that gets like 10 citations, and then goes to Google and builds Google Lens, I think it's silly to say 'well, you could just get rid of MIT!'

I think this is the biggest sticking point I have on this issue -- people have this vague sense that universities are wasteful, or something, but if you poke at that, it's all coming from shitty biased news sources that purposely highlight the most egregious cases while conveniently ignoring all of the very vital stuff that keeps America on top.

Here's a new measure: sum up the salaries of every PhD that decided to go into industry. That is probably a better measure of the value of the university research system. Off the top of my head, starting salary at Google for a PhD researcher was like $350k.

Ogre:

"But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work."

These Caplanite arguments completely miss the point that many countries do not ban IQ tests. Now diligence is a better argument, because conscientiousness is hard to test in a non-cheatable way.

Look, in my first job, after a few months, while the manager was on sick leave, I ended up writing an 80-page sales offer full of all kinds of technical specs myself. It was surprisingly similar to writing a college paper! So either they really taught me skills like that, or tested those skills. It goes way beyond IQ; I know high-IQ people who can't write for shit, seriously, they are super cow-rotators but lacking in vocabulary, for example.

I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important. This is not so. You need good genetics to run fast, but you also certainly need a lot of training. It is like that. It is truly two extremes: one is like saying a good trainer can turn anyone with cerebral palsy into a good runner; the other says that if you have good genetics, you do not need a running trainer.

Viliam:

> These Caplanite arguments completely miss the point that many countries do not ban IQ tests.

That is a great argument! Surprisingly absent in many debates.

> So either they really taught me skills like that, or tested those skills.

Well, that's the problem. Caplan is on the side of "20% taught, 80% tested". But most people don't even think about this, and assume that it is "100% taught".

At least in my offline bubble, most people unthinkingly treat "intelligence" and "education" as synonyms. It makes perfect sense if you are a blank-slatist, and it's a perspective that most schools are happy to promote, because for private schools that's basically their sales pitch, and even for public schools, it's something that is supposed to give their teachers social status.

(This is a reason why I strongly support separating teaching from testing. If people have great writing skills before school teaches them, there should be a way to prove it. Not just for them, but for our general understanding of how education works. Well, exceptional people already have ways to prove it, for example by winning a writing competition, but that is not systematic, and excludes those who are above average but not exceptional; so you can't figure out the exact proportion of "how much school teaches good writing".)

> I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important.

Even worse, for every person who says "I have IQ, so I don't have to learn stuff", there is another person who points at him (it's usually a "him") and says "here is a textbook example why IQ doesn't mean anything".

Mensa did a disservice to all intelligent people by picking the most dysfunctional people with high IQ, and associating the idea of high IQ with them. You should either test the entire population (like the Americans do with the SAT) or not at all; only testing the people who otherwise fail at life is the worst option.

> You need good genetics to run fast, but you also certainly need a lot of training.

My naive 18-year-old self expected that Mensa would be the place that provides the training. (I mean, in a world with limited resources, it makes sense to provide the training to those who have the genetics.) Obviously, I was disappointed.

Gunflint:

This word ‘Caplanite’ - I didn’t know what it meant, so I Googled it, and the fourth hit links back to your very comment on ACX.

The ecosystem of usage seems to be small.

Gunflint:

Only by inference. I did suss out the meaning. But first I had to tell Google that I had not, in fact, mistyped Catlinite.

Tori Swain:

In your first job, could you have accomplished enough "learning" with an "I'll work a month for free" model? Even without the college?

EngineOfCreation:

>Some of them complain that too much of the scholarly work done at universities is too useless.

That seems to be a prime example of "throwing the baby out with the bath water". What other mechanism is proposed to generate the useful stuff? I believe there is an argument to be made that the useless stuff is inevitable; it's called research precisely because you don't know what you'll be getting in the end, let alone if it's going to be useful, but it's clear that occasionally there's a diamond among all the dirt. If you stop digging altogether, no more diamonds for anyone.

Tori Swain:

DARPA has a functional model for "how to generate the useful stuff" -- grant writing for them is very, very different from grant writing for academics. They want to know how you'll revolutionize the world... if everything goes right.

theahura:

18% of DARPA funding goes to universities! DARPA sends more money to universities than it does to federal research labs, non-profits, and foreign entities of any kind *combined*. Universities are the second biggest recipient of DARPA funding behind industry, which is a whopping 62% of the DARPA budget.

Except... a lot of the people who are getting DARPA funding in industry are also PhDs / postdocs / professors who were previously trained in universities and dependent on other kinds of US funding. So all those industry labs also depend on universities!

The same percentages are roughly true across DoD, which funnels roughly 20% of its budget to universities.

Tori Swain:

That's just the publicly available numbers, dear. You can assume they're funneling more to the universities than they'll admit.

I'm saying that we have different modalities for "how to award grants" and how to make sure that scholarship actually has a potential for "useful stuff" at the end.

I wonder how many people who complain about "useless scholarship" would continue complaining if it were all run by DARPA (home of the "Squirrels as Spies" idea, among many others).

theahura:

Realizing I have no idea what point you're trying to argue

EngineOfCreation:

So? Different goals, different requirements. Yes, they're doing research and have invented important stuff that turned out to have civilian applications in addition to the primary military ones, but it's still more limited and goal-oriented than a general university. DARPA doesn't seem like the kind of institute where you can research the "unknown unknowns", or just do the boring but still necessary tasks like collecting and analyzing economic/political/ethnographic data and so on. That is the kind of institution that's under assault without a clearly better replacement in sight.

Tori Swain:

Do you really think the "boring but still necessary" tasks are getting done? Take an example: way back in the 1990s, when I was still in high school, I was reading 1980s research on gays and lesbians. Lesbians would respond to surveys saying "becoming a lesbian is environmental, not innate." Gays would respond in the opposite fashion. Talk to the QUILTBAGs now, and they'll say that the lesbians were lying, and that "queerness" (or gender preference) is innate. Not that they'll have a good model for "why would they lie about that?" (Note: we get damn good, detailed data about people picking their nose, so it's not simple embarrassment about the topic.)

You, of course, have a good model for why gay men would lie about "environmental/social" causes of homosexuality. That is to say, people tend to get upset with the idea that "gayness is catching." (There's a whole nother side to this dealing with physicality, if you're interested.)

Paul Brinkley:

I think the complaint about useless work isn't referring to "useless" in the sense your comment describes. Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, and so we get research saying that half a million places, on average, did not contain the Really Valuable Thing and are thus considered useless. I've heard this for decades, implied in "ninety percent of all research is useless". But most people understand that there exist tough problems that require looking in a lot of places, some of which will be wasted - but you can't know that until you look.

The complaint's sense is more like "any midwit can tell this RVT will not be anywhere in these million places, so ninety percent is actually a hundred and I could halt the program, lose nothing, and save millions", plus "any midwit can tell that the RVT you're looking for is only valuable for justifying a wealth transfer, without creating any wealth, and the premises are subjective and therefore the whole thing is useless", with a side order of "this one field over here is based on so many false premises that it would have been shut down if not for a few ideologues defending it".

The latter sense of "useless" is debatable, but at any rate, the former sense is not really being challenged this round.

EngineOfCreation:

>Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, and so we get research saying that half a million places, on average, did not contain the Really Valuable Thing

That is in the sense that the commenter I replied to meant it too, apparently:

>Some of them complain that too much of the scholarly work done at universities is too useless.

Ogre did not talk about 100% useless stuff, but some number smaller than 100%.

Regarding 100% useless stuff: Even assuming for the sake of argument that it exists and "any midwit" could reliably detect it, it still can't be more than a rounding error in the grand scheme of things. Like, what are we even talking about? A few post-grads musing about gender studies, requiring pencil and paper but not erasers, that kinda thing? Throwing whole universities under the bus because of that is such short-sighted overkill that ulterior motives having nothing to do with saving money start to look more likely.

Paul Brinkley:

The commenter you're quoting is Johan Larson, not Ogre. And Johan supported that claim as follows:

"Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding."

I think this is more consistent with "any midwit can tell this RVT will not be anywhere in these million places" than it is with "to find this RVT, we have to search these million places, even if most of them won't have it".

And for the former, the central example seems to be research that looks like complete stabs in the dark, like how much pressure penguins build up while pooping, or whether people dress appropriately for colder weather (to cite two examples I pulled up at random).

As you say, such studies might be a rounding error. OTOH, it seems not at all hard to refuse funding them, given that there's presumably a formal application process, so a discovery that such studies even exist is a strong signal that someone isn't doing their job, which (1) raises questions about how much funding that person has funneled to useless research that we haven't found out yet and (2) supports finding an alternative process that removes that point of failure.

theahura:

> OTOH, it seems not at all hard to refuse funding them

I think this take misunderstands how grant writing works. For the most part grants are earmarked. If some lab is studying 'how much pressure penguins build up while pooping', it's because someone somewhere is funding the research through a grant. In point of fact, it's most likely the US government that is funding that research, since the US government funds ~60% of all university research in the US and is the single largest funder of research by an extremely large margin.

It's not like the *university* is setting out the research targets. There's no admin that's like "TODAY, PENGUIN POOP!"

Even beyond the lack of understanding of the grant process, I think this is also just a massive Chinese Robber fallacy. There are something like 500,000 papers published in the US each year. Even if you found 500 papers that you thought were *really* egregious, you would have no point at all.

Which brings me back to the original point: is all of the animus just fueled by reasoning like this? Just people who have no idea what's going on, and are therefore willing to foot-gun themselves?

Ogre:

I have a thought here. Scholarly work at universities counts as an apprenticeship if your plan is to do scholarly work. I mean, if you really want to be an academic historian or academic biology researcher, practically all of it is ideal. The problem is that people want to work in business, and yet the university is an apprenticeship in scholarship, not an apprenticeship in business. Could we fix that? If businesses are not offering apprenticeships for various reasons, could much of university be turned into business simulations?

Johan Larson:

My own position is that the existence of "any college degree" as a significant qualification is an opportunity. Employers are eager to identify capable entry-level employees. These people don't need to actually know anything specific, but they need to be literate and numerate to a high standard, diligent and reliable. The best available tool for identifying such people right now is the undergraduate degree, which is why these "any college degree" jobs exist.

The problem is that the undergraduate degree takes four years and can easily cost six figures, particularly once living costs are included. I would like to find something as good at indicating general ability, but cheaper and faster.

My proposal for doing so is in two parts. First, introduce a track or course sequence in high school that is demanding enough that completing it is actually impressive. Call it something like the First Class High School Diploma, and gear it high enough that only 10 or so percent of graduates are able to get one. And have common testing, not done by the teachers, to ensure uniform standards of grading. I suspect some employers would be eager to hire these people right out of high school, if only they had a reliable indication of quality.

Second, as an alternative to an undergraduate degree, introduce funded white-collar apprenticeships. These might include some amount of technical or business course work, but would have participants start working with employers sooner, and doing useful work sooner. I expect this sort of program would be of interest to some of the more capable graduates who are more business-minded and less intellectual. (Note that I said "less intellectual", not "less smart". Not everyone who is sharp is interested in the sort of knowledge-for-the-sake-of-knowledge study that undergraduate degrees make so much of.)

It is my understanding that something like this is already being done in Switzerland. There, a far smaller portion of graduates go to university, and these white-collar apprenticeships are the standard way into white-collar jobs that don't require formal degrees.

Tori Swain:

Take the money you were going to spend on college, and use it on business? Sure, if you've got a good idea (even a glitter bomb) you'll make a profit.

Johan Larson:

Weird Tales was a pulp magazine that in its day published fantasy and science fiction stories by writers who would later be famous, such as H.P. Lovecraft (Cthulhu) and R.E. Howard (Conan). It began publication in 1923.

If we went back one hundred years to 1925 and submitted a story to Weird Tales magazine that accurately described our world of 2025, what part of the setting would the editor consider most unbelievable?

Seneca Plutarchus:

Being able to reference all of human knowledge, at any time, on a device everyone has with them does not make people behave more rationally.

Johan Larson:

The whole rise of computers, until there are computers in all sorts of unlikely devices, might seem strange.

Viliam:

"In John’s kitchen, there were a dozen appliances powered by electricity so cheap that, even when they were not being used, each of them spent at least some power displaying the exact time. Well, not exactly the same time... the clock on the fridge currently showed 14:02, but the clock on the freezer disagreed and showed 14:06. The clocks on the washing machine, dish-washing machine, stove, and the oven showed 14:04, 14:07, 14:01, and 14:05 respectively. There were many smaller clocks that John didn’t bother to check. He knew it was a little past 2PM, and for the moment that knowledge sufficed. He expected the next generation of appliances to connect across the world, using satellites orbiting so high in the sky as to be invisible, to coordinate on the exact time. The power necessary to achieve this noble purpose would not be a concern for the average person."

Seneca Plutarchus:

Actually, computers full stop, if we are talking 1925. It would be interesting to see what the science fiction stories of the day said about calculating machines, and how far ahead of the curve authors were about future computers. I never read Lensman; I wonder if it had computers.

From what I remember, the much later Foundation had gargantuan room-sized computing machines.

EDIT: Wow, Lensman is much later than I thought, never mind that point.

Johan Larson:

I think the state of the art in data processing at the time was the use of punched cards, processed using elaborate electromechanical equipment. There was also some use of analog computing devices, for calculating things like ranges for naval gunnery.

lyomante:

https://blog.youtube/news-and-events/extending-our-built-in-protections-to-more-teens-on-youtube/

Essentially, Google is testing a feature that uses machine learning to flag accounts it suspects belong to teens. It will then apply restrictions unless you verify through a credit card or ID.

A literal "for the children," but machine learning used to identify and sanction ("protect") accounts for predefined reasons, with the only recourse being to de-anonymize yourself, is a little scary. Especially coming after payment processors going after Steam and Itch for certain types of adult content.

Ogre:

LOL I could sign up for that, I spent much of COVID lockdown kicking teens out of kinky Discord servers. Basically our policy was to not "card" everybody up front, as it would scare too many people away, but look out for suspicious sentences like "after school, my mother told me" etc. and then "card" them.

Ninety-Three:

I think that's a silly concern specifically because of who's doing it. If Google wanted to, I think they could already deanonymize just about anyone. What nefarious action are you worried about them taking against someone, that they're only able to take because someone hands Youtube their ID?

lyomante:

It's more that AI is being used adversarially, assuming a profile is dishonest. Not based on actions that show you might be a bad actor, like spamming comments, but just assuming general activity is suspicious. If so, they will act by:

1. disabling personalized advertising

2. turning on digital wellbeing tools

3. adding safeguards to recommendations, including limiting repetitive views of some kinds of content

If your profile gets hit by this, you have to verify to remove it, and tbh it's hard to see it not hitting false positives. But AI trying to determine whether people are lying from their behavior and assessing a penalty -- doesn't that worry you some?

Assuming it works, lol -- there were articles on how UK kids were beating the face-scan requirement by using a photo mode in Death Stranding 2.

Ninety-Three:

What exactly is the worry, that you'll trip a false positive and have to waste one minute giving Youtube your ID? One minute is really not long enough for me to worry about, and I'm not concerned with them having my ID because I expect Google could already figure out who I am if they wanted to.

lyomante:

The worry is that AI is being used to profile you the way a policeman profiles you: based on suspicion of wrongdoing, and using heuristics that may be dubious.

You just keep saying "but if you give the policeman your ID it's a non-issue." It can be, in part, because police are trained, accountable, and ideally possess good judgment from lived experience. Even then, though, it rankles.

AI doing this just worries me.

EngineOfCreation:

Actions don't have to be nefarious to be harmful; for a company the size of Google, mere negligence in such schemes, coupled with slow-to-nonexistent processes to correct the inevitable mistakes, is bad enough. Given the short commercial half-life of most YouTube content, even swiftly corrected mistakes can be costly for content creators (cf. "enshittification"), and data leaks putting minors' data at risk happen even at Google.

Also, even if the system works exactly as advertised with basically no mistakes, there still remains the question of how much power (beyond what is required by applicable laws) a company that merely mediates user-generated content should have over which user is allowed to consume which content, and how deeply they may invade their users' privacy to do it. The mere ability of Google to do these things is hardly a justification to actually do them.

Waters of March:

Not sure if I’m using this right, but I remember Scott’s Sadly Porn review describing therapy lines as “koans” - not meant to be true or false, just something you sort of live with until it breaks you open. I ended up writing this essay about cringe business clichés and how they might function like that - non-propositional, sincerity-by-force, short-circuiting irony.

Does that track with how koans work? Or am I stretching it too far? Would love thoughts.

https://substack.com/@waterofmarch/p-169698769

MichaeL Roe:

The LLM weirdness I’m currently working on …

In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer. It helps if the system prompt instructing the LLM to do back translation is in the same language as the training corpus. Fine. Except that when the training corpus is in Ancient Greek, there’s not really a suitable Ancient Greek word to use for an LLM in the system prompt. Some discussion with DeepSeek R1 ensued, about the tripods of Hephaestus in book 18 of the Iliad, and the statues of Daedalus in Aristotle’s _Politics_ and Plato’s _Meno_. Fine. I can coin a neologism whose meaning an LLM will know, even if Aristotle would have been deeply confused by it. Ψυχὴ Δαιδαλική it is.
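
For the curious, a minimal sketch of that back-translation step, assuming DeepSeek's OpenAI-compatible endpoint; the model name, prompt wording, and one-line corpus are stand-ins, not the actual setup:

```python
# Back translation: for each passage in the training corpus, ask an LLM to
# invent the question to which that passage is the answer. The endpoint,
# model name, and prompts here are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="...")

# Ideally this system prompt would itself be in Ancient Greek, per the above.
SYSTEM_PROMPT = (
    "You will be shown a passage of Ancient Greek. Reply only with a "
    "question, in Ancient Greek, to which the passage is the answer."
)

def back_translate(passage: str) -> dict:
    """Return one (question, answer) training pair for a corpus passage."""
    resp = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": passage},
        ],
    )
    return {"question": resp.choices[0].message.content.strip(),
            "answer": passage}

corpus = ["μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος"]  # stand-in for the real corpus
pairs = [back_translate(p) for p in corpus]
```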

Erica Rall:

>In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer.

Reminds me of the "Fortune Presents Questions for Famous Answers" from the Unix fortune cookie command line tool. My favorites were:

Answer: Go west, young man.

Question: What do wabbits do when they get tired of wunning around?

---

Answer: Dr. Livingston I Presume?

Question: What is Dr. Presume's full name?

---

Answer: The Royal Canadian Mounted Police

Question: What is the greatest achievement in the history of taxidermy?

Yug Gnirob:

Reminds me of Carnac the Magnificent. https://www.youtube.com/watch?v=lRTtLvKAKgk

MichaeL Roe:

P.S. R1 objects to being referred to in Ancient Greek in a way that would imply it’s a slave.

B Civil:

Interesting choice.

MichaeL Roe:

Yes, I know: in _Meno_, Socrates and Meno get a slave to verify a mathematical proof, and that bit in Aristotle’s _Politics_ is about how they could abolish slavery if they could somehow automate a weaving loom….

Citizen Penrose:

How reliable do people find Wikipedia, specifically in terms of political bias?

I saw a recent complaint about the page on Mao not being critical enough. But Marxists also complain a lot about bias on Wikipedia from the other side; presumably both complaints can't be true. A lot of commenters said they didn't have much trust in Wikipedia in general for anything relating to politics.

Personally it seems like it stays factual and impartial for the most part, and I have used it a fair amount. I thought the Mao article was fine.

I'm asking about its reliability on social issues in general, not specifically on Maoism.

Expand full comment
Ninety-Three's avatar

Surely both complaints can be true. There's no contradiction between "Wikipedia is biased against me, a Marxist who thinks the kulaks deserved what they got" and "Wikipedia is a lot softer on communism than it is on fascism".

Expand full comment
Seneca Plutarchus's avatar

Having tried to be rigorously factual in an edit of a famous true crime case has made me very leery of Wikipedia for controversial topics. You are essentially powerless in the face of editors with a history of enough edits. Appeals to arbitration by disinterested third parties may get no response. The result will be whatever the consensus was before reform efforts.

Expand full comment
John Schilling's avatar

Mostly reliable on uncontroversial issues, e.g. technology or obscure pop culture stuff, but check the references if it's at all important.

Quite unreliable on anything politically controversial, unless you also check the talk pages (and there may be many of those) to see what's been excluded. Even if the facts alleged in the article are all true (and don't count on even that), you have to assume they've been cherrypicked to support a particular narrative that may or may not be aligned with truth.

Expand full comment
Viliam's avatar
13hEdited

Yep, the talk pages are often the place that keeps the records of what was edited out of the article. Therefore, if I see a biased article, I think the best way to fix it would be to write a concise explanation on the talk page. Because if you fix the article, some mighty editor could revert it with a single click. But if you explain the issue at the talk page, future editors will see it, and some impartial experienced editor may volunteer to fix the page and win the edit wars. Also, according to the Wikipedia rules, you have the excuse of "I may have a conflict of interest, so I didn't want to edit the article directly", which can make you sympathetic to the other editors.

Expand full comment
Jacob Steel's avatar

Very reliable, with enormous caveats.

Wikipedia relatively seldom gets facts wrong, but often selects and frames facts in a highly biased way.

You can usually trust it for narrow factual claims, except on the most politically-charged issues, but not for opinions, explanations or judgements.

Expand full comment
Fred's avatar

That's my understanding as well. I will just point out Scott's extremely relevant exploration of the news media's use of the same tactics: https://www.astralcodexten.com/p/the-media-very-rarely-lies

Expand full comment
luciaphile's avatar

Wikipedia has been the internet for me, more than anything else. And I liked being able to donate a small sum from time to time: the ask was so small compared to my looking things up on it.

So I was mainly disappointed to learn that they didn’t actually need my five or ten bucks for Wikipedia, but rather for causes dear to the hearts of the people who founded Wikipedia, I guess.

I see no link there. I guess ultimately if they can’t fund their causes, they might take Wikipedia away. I’ll go back to the Britannica.

Expand full comment
Paul Botts's avatar

Ehhh....that specific criticism seems to be more heat than light.

The Wikipedia Foundation being a registered nonprofit, its financials are public record. For 2024 a bit less than 60 percent of grants made by it went directly to Wikipedia websites (the various different language versions of Wikipedia), for "ongoing engineering improvements, product development, design and research, and legal support." The other annual grant dollars go to grants for "the Wikipedia communities", supporting "projects, trainings, tools to augment contributor capacity, and support for the legal defense of editors."

The Wikipedia Foundation though is not just a grantmaker, it is primarily the entity that pays the salaries and other expenses of all the Wikipedias. Only about 15 percent of its annual expenses are the grants it makes, and as noted above 60 percent of the grants are directly to the various Wikipedias. So even if you view all of the remaining 40 percent of grants as for "causes dear to the hearts of the people who founded Wikipedia", that amounts to only around 6 percent of total annual outflows.
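
(Spelling the arithmetic out: grants are about 15 percent of outflows, and the non-Wikipedia share of those grants is about 40 percent, so 0.15 × 0.40 = 0.06, i.e. roughly 6 percent of total annual outflows.)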

Wikipedia's volunteer editors have beefs about the Wikipedia Foundation, which you can read about here:

https://slate.com/technology/2022/12/wikipedia-wikimedia-foundation-donate.html

That doesn't seem to be about any Wikipedia donation dollars going to the founders' pet causes though, and in the audited financials I don't see any evidence of that.

Starting in 2016 the Wikipedia Foundation made a strategic decision to start building a separate in-perpetuity endowment rather than just rely on annual contributions forever. That endowment, also registered and governed as a US 501(c)(3) not for profit, had as of last year grown to $144 million. Like all permanent endowments it is funded by donors who explicitly restrict their donation dollars to that fund (that being the only way that an endowment fund can be genuinely permanent), and like all such endowments it builds itself almost entirely with relatively large gifts as well as bequests in wills. Point being that if you are a regular recurring individual donor to Wikipedia none of your dollars are ever going to the Wikipedia Endowment. (You could choose to donate to the Wikipedia Endowment in any amount but you'd have to have explicitly chosen to do so.)

The Wikipedia Endowment began making grants in 2023 and you can read that list here:

https://diff.wikimedia.org/2023/04/13/launching-the-first-grants-from-the-wikimedia-endowment-to-support-technical-innovation-in-wikimedia-projects/

Expand full comment
Paul Botts's avatar

The strong fact-based criticism is that the Wikipedia Foundation now fundraises much more than is actually needed to operate Wikipedia (the various different-language wikipedias). That is true. Wikipedia's leadership, i.e. the somewhat-overlapping governing boards of the Wikipedia Foundation and of the Wikipedia Endowment, don't deny this. They say basically that they are raising money to make the continued healthy existence of Wikipedia independent of any given year's fundraising. That means both investing grant dollars into design and research and one-off improvements, and building a "corpus" (endowment) such that Wikipedia at some point is fully independently "endowed".

My personal read of the financials would be that they are already at or pretty close to that second goal, and were I on one of those boards I'd be asking when the fundraising effort declares victory and leaves the field. That's just one outside view though, YMMV, etc.

At a minimum they need to be, and from some things I've read are to some degree, listening to the pushback that their fundraising appeals need to stop giving people the impression that Wikipedia is on a knife's edge financially. That is definitely no longer true precisely because they appear to have done a strong job of making Wikipedia financially independent. As a user who's very glad that Wikipedia exists I am glad that they have achieved that objective and that Wikipedia therefore isn't at risk of becoming dependent on public funding or on any single major private donor, etc.

Expand full comment
Fred's avatar

You're calling it the "Wikipedia Foundation", but it's Wikimedia. I think this might be more than just a nitpick if it's masking from you the true sprawling nature of what they do. I notice you said "went directly to Wikipedia websites (the various different language versions of Wikipedia)"; I think that should be "Wikimedia websites" and "(all sorts of things that are not Wikipedia)" respectively. For instance, are you aware of https://www.wikifunctions.org/wiki/Wikifunctions:Main_Page ? It got millions in funding, and has always sounded to me like a huge boondoggle.

Separately, their fundraising misbehavior is much worse than just making the situation sound more dire than it is. They steadily ramped the urgency of the messaging up over several years as the actual financial need was ramping down. There is no reasonable interpretation other than empire-building cash grab.

Expand full comment
Paul Botts's avatar

The wikipedia/wikimedia thing is just a repeated typo on my part.

"There is no reasonable interpretation other than empire-building cash grab." You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior which I and then you each summarized.

As for the Wikifunctions initiative, I've heard of it. Have not interacted with it and don't know much more beyond the basic idea. It is an example of something which has received funding from the Wikimedia Endowment as distinct from the Wikimedia Foundation's annual fundraising.

That makes sense since it is a new idea, a startup project (launched at the end of 2023). General theory/practice in the world of professional NGO management is for endowment funds, and not annual fundraising for operations, to fund new initiatives from which the ultimate payoff (the NGO varieties of that word, so usefulness, impact, influence, etc) can't be specifically known yet. I.e. endowment funds as sort of the NGO version of venture capital (an analogy I've heard made in my professional contexts many times over the years).

In that spirit it would be a head-scratcher to conclude that a 20-month-old initiative is already proven to be a "boondoggle". It may turn out to be of course, just as a large fraction of actual venture-capital investments end up failing. That risk goes with the territory.

Expand full comment
Fred's avatar

>The wikipedia/wikimedia thing is just a repeated typo on my part.

Once is happenstance, twice is coincidence, the tenth time, you wrote multiple paragraphs positioning yourself as knowledgeable when you don't even know the name of the organization in question.

>You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior

I understand that it's embarrassing to overreach and get exposed, and firing back with some heavy-duty condescension in that case might feel pretty good, but it is not healthy or effective rhetoric. And, are you suggesting that it's normal and good for a non-profit to eternally seek to control ever larger resource pools, even after they have more than enough to fund their stated mission forever?

"Endowment vs fundraising" does not matter. That's a question of internal accounting structure. The reality that is relevant to the rest of the world is that they're an organization with income, expenses, and a cash reserve. The point is they've been instilling a steadily increasing fear in people that they're having trouble meeting the expenses of their core function, so much so that the service is on the brink of going down. Obviously the endowment capital would be in play if that was the real situation, so it's not valid to say "well the endowment isn't expected to fund core operations, so it doesn't matter what they do with it". (Actually, even if it was valid, it's still a problem, because the vast majority of donors are intending to fund Wikipedia itself and nothing else! It's not great that they're wasteful, but the real sin is the dishonesty).

I promise you, anyone who has been consistently exposed to the past decade+ of their fundraising, and knows they were actually financially fine, is shaking their head in disbelief that you're defending them like this - doubly so if they got conned into donating.

Expand full comment
luciaphile's avatar

I read a Twitter thread about this subject years ago, but I am not a member of Twitter so I can't read it anymore. I recall a discussion that followed suggesting that perhaps half of your donations went to something other than running the website. It could be a much smaller percentage than that, though, and I would still feel like I was being manipulated. If I donate to the National Wildlife Federation, I don't mind what they spend on overhead. I understand that's part of running a nonprofit, but I would not be happy if I learned that they turned around and donated my donation to the ACLU.

We obviously see this differently, and I'm definitely a free rider on Wikipedia on your dime now. Or on two or three pennies of your dime.

Expand full comment
luciaphile's avatar

Thank you. I feel less crazy.

Expand full comment
Paul Botts's avatar

I also thought I'd heard of the Wikimedia Foundation making sizeable grants to things unrelated to Wikipedia. But going through a couple years' worth of audited financials and their annual reports (which list major grants) didn't turn up any such examples.

It remains possible that the Wikimedia Foundation did some of that prior to 2022, which is how far back I went. The Wikimedia Endowment's grantmaking, which began in 2023, is restricted to "support[ing] the technical innovation of Wikipedia and the Wikimedia projects and their growth for the future".

Expand full comment
Neurology For You's avatar

I think most pages are shaped by people who are very invested in that particular subject. Woodgrains thinks there’s a pro-Mao conspiracy but there’s really a million pro-this-subject conspiracies out there.

Also, what IS the proper degree of criticism to express towards Mao? How can there possibly be a single answer to that?

Expand full comment
WoolyAI's avatar
4dEdited

Wikipedia is significantly above average and a relatively good source on social issues. It has well-understood political biases, which I don't like, but it certainly outperforms peer organizations. I don't trust it to report the truth, but I trust it substantially more than the NYT or a variety of other journalistic outlets and, frankly, better than a number of meta-analysis...meta-analysises....meta analysi...in published journals.

For example, the wikipedia article on Biology and Sexual Orientation:

https://en.wikipedia.org/wiki/Biology_and_sexual_orientation

Specifically, the section on twin studies. The core issue is that we still don't really know why people are/become homosexual. We know it's not purely genetic, because we've got twin studies. We find a homosexual with an identical twin, we go talk to the identical twin, and most of the time the identical twin is not also homosexual (when I dug into this, I was seeing concordance rates of 50%, they're reporting concordance rates as low as 25%, which seems weird). This is an ongoing area of confusion, because there's clearly something going on genetically, 25%/50% concordance is still way higher than the base rate of homosexuality in the general population, but it's really hard to determine what the other factors are, or most likely, what the gene-environmental interactions that determine homosexuality are. (1)

Touchy subject, right? And if you read the Wikipedia article, you'll see them downplaying this a lot for fairly obvious political reasons. And that's bad. But...man, that article is a lot more honest than 99% of media I've consumed on this topic. It's better than I've gotten from personal conversations with experts in this area.

I think a lot of conservatives and centrists have well founded complaints and issues about political bias in a lot of media and especially in "factual" or "research" entities. And Wikipedia is certainly guilty of a lot of those sins. But it's one of the better actors, not one of the worse. And I want to celebrate the best of the "other" side, not criticize.

On the scale of general actors:

Wikipedia: Well known bias.

NYT/High status academic publication (Harvard): Bad but might be something interesting/useful in any given article.

Vox/99% of media/reference: Absolute dumpster fire, deserving of a 40k purge.

On that scale...I dunno, Wikipedia feels worth defending.

(1) As always, not my area of expertise, more than open to correction

Expand full comment
Loominus Aether's avatar

+1 for trying to celebrate the best of one's intellectual adversaries. I really wish more folks did this.

Expand full comment
Quantificand's avatar

As far as I can tell Wikipedia's most prominent bias is a tendency to recreate academic consensus. This has become more of a problem recently than it was when Wikipedia was founded, since academia has come to produce more outlandish and controversial claims since then. I think this bias might be a limitation of the site's "constitution"; Wikipedia seems meant to be no worse and no better than those sources of information society can agree are respectable and trustworthy. The consequences of this bias for political subjects should be obvious. And since it's also a bias you'll find in most respectable places it hardly makes Wikipedia an *unusually* bad source of information.

On the subtler controversies, I find that the academic consensus bias tends to show through a lot in "criticism" sections for books/theories/etc, which not only reiterate bias found elsewhere but also seem to have their own layer of filtering; e.g. a book known only to philosophers which claims that mind-independent objects exist seems more likely to grow a "criticism" section than a similarly obscure book that claims that nothing exists untouched by culture (and the latter will usually be focused on criticisms about how the touching happens rather than whether it happens).

On the unsubtle issues (like Maoism, but also nationalism, certain wars) I have seen some slow-boil edit wars about stuff it seems like nobody should care about, like the national anthems of long-defunct states. Sometimes the propaganda is obvious enough that you can easily infer the truth by negation, but the more I think about the weird stuff people do the less confident I am in my ability to notice all of it.

Expand full comment
Viliam's avatar
13hEdited

> a tendency to recreate academic consensus

Or a journalist consensus, if academia does not care about the topic sufficiently.

One problem with "academic consensus" is that if there is exactly one published paper on the topic, then the paper *is* the consensus. The easiest way to achieve that is to invent a new word (see "TESCREAL").

Expand full comment
The Ancient Geek's avatar

What should they be repeating instead of academic consensus?

Expand full comment
Quantificand's avatar

Wikipedia should only host truth, but if we're willing to lower our standards to things that could be systematically recognized and promoted by a Wikipedia-scale institution, I personally can't think of a good alternative that wouldn't be very close to academic consensus. There might be more accurate proxies for truth, but figuring out which is most accurate wouldn't be much easier than figuring out the truth.

Expand full comment
The Ancient Geek's avatar

Wikipedia is actually based on reliable sources, because truth is a tricky concept.

Expand full comment
John Schilling's avatar

Where "reliable" means "says things Wikipedia admins agree with". On anything remotely controversial, the only way to get actual truth out of Wikipedia is to read the talk pages as well as the article, and take note of the sources the admins are excluding.

This can be more trouble than it's worth, but in that case it's not worth using Wikipedia and probably not worth seeking truth; stick with rational ignorance.

Expand full comment
luciaphile's avatar

I think the first Talk page I ever looked at was for the Celtic harp. Or some other instrument that occasioned dispute between Scots and Irish. I realized the fun was on the Talk page.

Expand full comment
Paul Brinkley's avatar

Some of the trouble with academic consensus follows from simple incentive analysis: one can expect academic consensus to be biased on any issue that challenges the status of academia as authoritative.

One might think this is okay - the issues that challenge academic status should surely be quite small and dry affairs, having to do with historiography, literary provenance, semiotics, metaphysics, and other multi-syllabic terms unlikely to make it out of a library basement.

In reality, though, they reach into any issue that people care about enough to build an online identity around it. So: politics, religion, health, epidemiology, energy, environmentalism, evolution, race, and really, I could probably have stopped after the first two. If academic consensus only concerned itself with stuff relatively few people care about such as the mass of the Oort Cloud or which polysaccharides can be used to make fiber, or easily checkable stuff like the primality of some 50-digit integer, no one would question it. People question academic consensus precisely because it gets pulled in as an authority on claims that aren't trivial to check and that affect a great deal of policy.

This is currently a lot of issues! When it comes to such issues, academic consensus suddenly becomes very sensitive to which individuals are part of academia when a controversial, policy-driving claim turns up, and by extension, what those individuals happen to think, and how able they are to influence their fellow academics, either to change minds, debate publicly, or withhold funding or credit privately.

In such cases, a third party should trust academic consensus about as much as they would trust someone with a definite position on anything. Someone with no opinion on abortion ought to trust Planned Parenthood to thoroughly defend one position, but not all of them. Ditto the NRA on guns, Scalia on textualism, the Pope on the historicity of the Apostles, etc. A more complete picture requires consulting sources with incentive for thoroughness in other directions, and with comparable resources: National Right to Life, Handgun Control, William O. Bradley, Richard Dawkins.

Then comes the equally hard part of resolving conflicting claims from each source, establishing standards for said resolution, and so on.

For positions touching on academic consensus, one would have to turn to sources that make claims conflicting with academic consensus, and that also have comparable resources. The first condition is pretty easy to satisfy; the second is particularly hard. For an example, see the current state of physics surrounding string theory and its opponents. Humphrey Appleby is a physics professor, and a reader here: he could doubtless elaborate.

Expand full comment
Wanda Tinasky's avatar

TracingWoodgrains wrote a good piece of investigative journalism about wikipedia: https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin

Expand full comment
Paul Brinkley's avatar

Funny - every time I'd medium-dive into a Wikipedia article that looked a bit off to me, and look at the Talk page and discover a hotbed of controversy, the primary source of frustration was typically some recognizable Wikipedia userID attached to a message that abruptly waved "WP:RS" (Reliable Sources) in the face of whoever raised the complaint. Then I'd read the WP:RS article and find it's a lot of text that looks comprehensive and reasonable at first glance, but turns out to be somewhat circular at second.

Thanks to TracingWoodgrains' *extensive* article, I find a lot of it is traceable to one David Gerard (and a lot of like-minded senior editors who weren't cowbirded off the site), and his background in dubious sites like RationalWiki and /r/sneerclub. Apparently an entire slice of where the world goes for authoritative truth on controversial subjects - Wikipedia - is functionally determined by "Reliable Sources", which is quietly defined as "Sources This One Guy Believes Are Reliable", and enforced by his apparently epic levels of obsessiveness.

If TW weren't comparably obsessive, I doubt I would have known this.

Expand full comment
Erica Rall's avatar

I'm still annoyed that somebody changed the article about Lord Dawson to avoid referring to his killing of King George V as a regicide.

Expand full comment
Eremolalos's avatar

That is such a Dorothy Sayers sentence! Have you read her mysteries? They're set in that era. Highly recommend them.

Expand full comment
Erica Rall's avatar

No, I'm not familiar with them. I'll look into them, thank you.

Expand full comment
Tori Swain's avatar

They removed the part of the wikipedia article discussing the similarities between dengue and coronavirus, and why we would, a priori, expect the covid19 vaccine not to work. (Its actual mechanism doesn't seem to be that of a vaccine, in that memory B cells aren't being triggered, and we don't seem to have a perpetual (2+ years) memory of the exposure.)

They removed the Decisive Strategic Tang Victory line.

I trust NOTHING wikipedia has on it, because I don't KNOW the bias of people, but I do know a former wikipedia editor, who's decided it's "no longer fun" to edit wikipedia. It says something when the guy with the most processing cycles available decides to nope out.

Expand full comment
Michael's avatar

> They removed the part of the wikipedia article discussing the similarities between dengue and coronavirus, and why we would, a priori, expect the covid19 vaccine not to work. (Its actual mechanism doesn't seem to be that of a vaccine, in that memory B cells aren't being triggered, and we don't seem to have a perpetual (2+ years) memory of the exposure.)

They probably removed it because it was biased and misleading or untrue. We do form lasting memory B cells after the vaccine, and the vaccine, while not 100% effective, does work.

If we were magically given a perfectly unbiased encyclopedia, everyone would still think it had biases. We would read along until we encountered a topic we are biased on and interpret it as a clear example of bias in the encyclopedia.

Short of making testable predictions, which is often not possible, I'm not sure there is any way to distinguish a bias in the encyclopedia from a bias in yourself.

Expand full comment
Tori Swain's avatar

Cite your sources on producing memory B cells that are still extant years later (and explain why there's such a demand for yearly, biyearly, even quarterly booster shots if so...). My sources say that the spike protein is present six months later (at end of study) for 50% of patients, so you're going to have to be able to show that the memory B cells are responding to the initial vaccination and not to "continued production of antigen."

Expand full comment
Michael's avatar

I'm curious where you even heard that the vaccine doesn't cause production of memory B cells. I've seen a lot of COVID vaccine skeptics, but I hadn't heard that one before.

The point is that you're only showing there is a difference of opinion between you and Wikipedia. You have no way of knowing whether the bias lies with you or Wikipedia. From the inside, any bias just feels like you are unbiased and others are clearly biased.

Expand full comment
Tori Swain's avatar

I know people on OWS? My sources are quite unusual for this debate (also I read a lot of papers). If the vaccine was actually creating memory B cells, why, exactly, are there continued demands for boosters every year, or 6 months (or less than that)? (2 years, 5 years, I can see that... it's not just tetanus that needs boosters eventually...)

You're familiar with the phrase "antigenic original sin"? That definitely applies to the flu vaccine, and to dengue (which was why the vaccine got withdrawn).

Expand full comment
Seneca Plutarchus's avatar

https://www.cell.com/cell-reports/fulltext/S2211-1247(25)00269-4

COVID-19 vaccination induces durable B-cell responses in exposed and naive humans

Spike+ B cells gradually expand toward the recognition of the RBD subdomain

RBD+ B cells are associated with lower breakthrough incidence in naive individuals

COVID-19 recovered develop a stable SARS-CoV-2-reactive atypical B-cell pool

Expand full comment
The Ancient Geek's avatar

T cell immune memory after covid-19 and vaccination - PMC https://share.google/QCjuuIxCC1REDmn1V

Expand full comment
Amicus's avatar
4dEdited

> But Marxists also complain a lot about bias on Wikipedia from the other side; presumably both complaints can't be true.

Sure they can: different topic areas attract the interest of different influence campaigns. Just look at Eastern European history articles: you get Serbian nationalism on some pages, Bulgarian nationalism on others, Polish nationalism on yet others, etc. That doesn't add up to overall reliability, it adds up to incoherence.

Expand full comment
Michael's avatar

That quote is referring to the Mao article. Different pages having different biases has nothing to do with that quote. Some people complain the criticism of Mao in that article isn't harsh enough while others say it's too harsh.

Expand full comment
Tori Swain's avatar

It would be nice if people could disclose biases. I'm assuming a good deal of these editors do actually know they're biased (or at least that "some might disagree with this").

Expand full comment
Velcro31's avatar

Does anything like MetaMed (https://en.wikipedia.org/wiki/MetaMed) currently exist? I'm facing a tough medical decision, and experts seem to be split on what I should do. I have a tentative view after reading some of the literature, but I'd like someone experienced with thinking about these kinds of questions to look over the evidence.

Expand full comment
Eremolalos's avatar

You might try this person. She does biomedical research. https://acesounderglass.com/hire-me/

Comes highly recommended by several people on here.

Expand full comment
Fred's avatar

Samotsvety's site says they're "open to forecasting consulting requests" - no idea of the price, but MetaMed sounds like it was pretty fancy, so if you're wishing for that I guess you're willing to pay a good chunk. Only a couple of members are listed with medical experience, but then again any given problem they tackle will only have a couple of domain experts, right?

Or: feed it all (your situation, your options / possible outcomes, your opinions on all of that, your general life values, and all of those journal articles) into the best model of each of the major LLM providers. It might feel bad, but you can absolutely do worse talking to real doctors (who are themselves probably talking to the LLMs anyways). Probably the biggest obstacle you have is with deep comprehension of medical journal papers, right? Complex niche knowledge is where LLMs shine; they're like living textbooks. If it knows what you're looking for, it will make the relevant information accessible to you, and then you can think it over for yourself.

And, I certainly want to say that I hope it will go well for you.

Expand full comment
Dan's avatar

I know a physician who runs a patient advocacy business to answer these types of questions. Feel free to email me for more info dgold114@gmail.com

Expand full comment
Tori Swain's avatar

If you post the question here, someone might take it up. There's also a substantial chance that someone here might have more "knowledge" than the experts (due to the substantial overlap of "private/military research" with public health).

Expand full comment
Ogre's avatar

Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than one person is horribly tortured"?

Mine would be: you can only measure facts, not values. Consider this: https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%90%E1%BB%A9c Was it a good thing or a bad thing? One person set themselves on fire and died a horrible death, and it resulted in some level of political pressure on the South Vietnamese government to stop persecuting Buddhists. Can you measure whether it was overall a good or a bad thing?

In some cases, like torture vs. discomfort, you can make an intuitive guess. But it is not a measurement, and cases like above show how it is not a measurement.

Expand full comment
Viliam's avatar

> Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than one person is horribly tortured"?

As I see it, from the perspective of game theory this is equivalent to "it is worse if a person is slightly inconvenienced, than if a person is horribly tortured with probability 1 : zillion gazillion".

So anyone who disagrees with this should provide examples of inconveniences they are volunteering to experience in order to avoid horrible things with such small probabilities.

For example, any time you were impolite to someone on internet, there is a probability greater than 1 : zillion gazillion that your interaction was the last straw that made the person insane, which made them kidnap someone and torture them to death.
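
To make the equivalence concrete (a minimal expected-value sketch, with made-up symbols: d = the disutility of one slight inconvenience, T = the disutility of the torture, N = the zillion gazillion): the aggregate comparison is N·d versus T, and dividing through by N, that is the same choice each person faces between a sure loss of d and a 1-in-N lottery over T, whose expected loss is T/N. Whenever N > T/d, the summed inconveniences come out worse.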

Expand full comment
thewowzer's avatar

In your counterargument, it's not really a person being tortured for the benefit of others. It's something the guy voluntarily chose to do to himself in an attempt to help others.

Here's a short story (5 pages) by Ursula K. Le Guin about a concept like the one you describe; it's pretty interesting:

https://shsdavisapes.pbworks.com/f/Omelas.pdf

Expand full comment
Yug Gnirob's avatar

Recovery time should be factored in. Those zillion gazillion people will all have gotten over their mild inconvenience in a couple of minutes. Not so much for the tortured, or the people who know about the torture.

Expand full comment
Wanda Tinasky's avatar

Yes: questions like this are insufficiently coherent to yield a precise answer. In some sense morality, like consciousness, is a ret-conned fictional narrative and so there's no objective ground-truth to discover. Debating questions like this is like arguing about how lightsabers really work.

Expand full comment
lyomante's avatar

easiest one is pointing out they just rationalized their way back into pagan sacrifice but worse.

"We sacrifice some outsider every year so that the harvest grows and becomes bountiful" is the same damn principle but somehow the modern version is worse because the moderns are arguing that you should torture someone to prevent everyone from getting stuck in a traffic jam 100 years in the future.

like there is no cause and effect between sacrificing a virgin to bring back the fish to catch, individual and collective pain are not related. even the general principle that some people may suffer so most can flourish doesn't work because it is just karmic, pagan thinking.

plus all the nasty aspects of pagan sacrifice apply:

1. someone has to be the torturer and jailer

2. your mild convenience/prosperity built on assenting to torture damns the whole thing

3. you always torture outsiders or unlikables because you know it's wrong when it's your kid on the raft offered to Godzilla

and more.

Expand full comment
grumboid's avatar

If you believe that a zillion gazillion people are going to be slightly inconvenienced, you're almost certainly just wrong. Your priors for a zillion gazillion people even *existing* should be super super low, and your priors for an effect that is powerful enough to exactly-slightly-inconvenience all of them should be even lower. There's no way that a standard human could live long enough to even see that many people, much less verify that they were slightly inconvenienced.

This problem then reduces to Pascal's Mugging.

Incidentally my objection to the trolley problem is similar. If you're in a situation where you believe that killing one person is the only possible way to save several people, then it's likely that you're wrong and there's a better option you haven't thought of.

There's a relevant post in the Sequences called "Ends Don't Justify Means (Among Humans)".

Expand full comment
objectivetruth's avatar

Is there any solid evidence that GLP-1 agonists deliver health benefits that can’t be chalked up to the weight loss they cause? Every study I’ve found reports at least some weight loss alongside any benefit, and the one alcohol-use trial was negative.

Expand full comment
Eremolalos's avatar

Scott thought there probably were other benefits. Seems to me the best path to an answer is to ask one of the better AIs for evidence for and against, and ask it for links to sources. Check the ones that look first order (research) rather than articles in the media yammering on about research findings.

As I write this I'm realizing with some uneasiness and regret that I now mostly go to GPT-4o with questions I would have asked on here up until a few months ago. I get answers 100% of the time with GPT. Here? Maybe 40% of the time. And there have been a good number of questions that went unanswered that I was sure some readers could have answered. I don't think other readers have an obligation to answer, really, it just seems cold not to take the trouble to do it. Jeez, why not be prosocial here? You're protected in all kinds of ways from being exploited.

So I guess it's now I : GPT-4o :: lonely guy : AI sexbot. When it comes to getting answers to questions, I am turning to AI in response to how little real and internet people have to give, on average. (In other areas of my life I have not yet felt the need to suck on a cyberxenomorph, though perhaps that's coming.)

Expand full comment
objectivetruth's avatar

Well, I came here after no AI was able to give an answer to my question :(

Expand full comment
Wanda Tinasky's avatar

Really? I asked ChatGPT your question and it gave me a pretty decent answer:

"Multiple large RCTs (e.g., LEADER, SUSTAIN-6, REWIND) show reduced major adverse cardiovascular events (MACE) in patients on GLP-1 RAs vs. placebo, even after adjusting for weight loss ... Several studies — including phase II trials like the Semaglutide in NASH study — have shown improvements in liver enzymes, steatosis, and even fibrosis stage in NASH patients. Some of this is clearly tied to fat loss, but liver-specific effects (e.g., reductions in hepatic inflammation markers and ballooning) appear disproportionately strong compared to weight-matched controls ... GLP-1 RAs seem to slow progression of albuminuria and preserve eGFR, beyond what you'd expect from weight loss or glycemic control alone. Again, some RCTs (e.g., LEADER) support this."

Anecdotally I have 2 friends who were pre-fatty-liver and their liver function improved soon after starting on GLP-1's, before significant weight loss occurred.

Expand full comment
objectivetruth's avatar

It's all hallucinated. Not a single study exists that controlled for weight loss. It's just AI hallucinations. You can check the studies it provided and see for yourself.

Expand full comment
Wanda Tinasky's avatar

Really? Huh, ok.

Expand full comment
Eremolalos's avatar

Just asked GPT-4o and got a mountain of info supporting the idea that they have benefits independent of weight loss. It quoted some RCTs. Here is its response. However, you do have to go to the links it gives for the main articles and make sure it didn't hallucinate them.

https://chatgpt.com/s/t_6889721c2e8881919fd574f1c38c45db

Expand full comment
Eremolalos's avatar

Scott’s old piece quotes studies, I’m pretty sure. Do you know the one I mean? Not the recent one about the gray market in those drugs, but an earlier one called something like “why does Ozempic cure everything?”

Expand full comment
Carlos's avatar

It's funny that Scott was pooh-poohing the reality of supernatural entities due to their lack of mathematical knowledge in Universal Love, said the Cactus Person, but IRL, Ramanujan claimed to have gotten all his mind-boggling theorems from his family goddess appearing to him in dreams. Ramanujan was a mathematician of whom it has been said that the word "genius" utterly fails to capture his brilliance, and it is amazing that there was such an intrusion of extremely raw spirituality into a domain commonly perceived as very hard-nosed and rational (though is it really? I read a book by a mathematician arguing mathematics is really primarily about intuition).

Expand full comment
Jonathan B. Friedman's avatar

Which book did you read about mathematics and intuition?

I saw on another thread someone recommend the book, "Mathematica: A Secret World of Intuition and Curiosity" by David Bessis. I read it and really appreciated it.

To state one of the main takeaways from the book: when you learn something well enough, it comes to seem intuitive. But there could also be particular perspectives that make certain things seem intuitive.

As for Ramanujan thinking his ideas came from a goddess, i.e., revelations: Descartes is actually interesting here. I do not have the time now to delve into the details, but, from a quick ChatGPT summary of the main ideas regarding knowing something is true:

1. We discover truth through reasoning (thinking),

2. but the reliability of our reasoning, for Descartes, depends on God having created us with a mind capable of recognizing truth.

This is quite interesting, because I personally rely on feelings to "know" whether something is true or not. Contradiction and certainty evoke strong emotional responses in me.

Also, in terms of revelations, I once went to a poetry reading where a poet ascribed their poems to coming from God.

Chain of custody aside, in my experience, ideas just pop up in my mind. I often say that my brain does stuff and I take the credit for it.

In general, I am very interested in where insights come from, and how much control we may have over insights. A couple of years ago, I read the book, "Seeing What Others Don't" by Gary Klein. It was an entertaining and thought-provoking book.

If anyone has recommendations in that direction, I would love to hear.

Expand full comment
Carlos's avatar

Yeah, it was me who recommended that book; that's the book I read. I loved that bit about how learning to think in more than 3 dimensions is an embodied thing, and it's only when you intuitively understand what, say, 8 dimensions mean that you can do work in that domain.

Expand full comment
Tasty_Y's avatar
4dEdited

I'll admit upfront, that I feel annoyed by the mystification of math and Ramanujan, but I'll try to explain without any annoyance.

Ramanujan is an important mathematician who was described as incredibly talented, but he wasn't the be-all end-all of mathematical accomplishment. (Hardy, his fan, friend and mentor-type-of-guy thought Hilbert was greater still. Edit: this part is wrong, see below.) Other great mathematicians proved theorems as impressive as his without any participation of goddesses, so we know that the brain of at least some humans can do it without assistance. But there's something the unassisted human brain can't do.

In Scott's story it's not that the entities "lack mathematical knowledge": it's that the character wants them to perform a calculation that is dull, but absurdly time consuming, a calculation no human brain (or even a computer!) could do quickly enough.

That is to say: if a person wakes up and says "A goddess communicated to me this wondrous proof", this is astonishing but not definitive, because we know that some humans at least can come up with amazing proofs. If a person wakes up knowing how to factor a bonkers large number, this is more convincing, because for sufficiently large numbers, it may be that no human or machine can do it at all.

You note: "[Ramanujan] put out a lot of conjectures that would be proven only after his death, and he did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them, no?"

This is not that weird. There are lots of statements and conjectures that mathematicians think are almost certainly true, but aren't able to prove. In some cases someone gains a valid insight that one can't make totally rigorous, not enough for a real proof, but enough to make a very plausible guess. There are piles of unproven mathematical conjectures made on empirical basis. No goddess participation necessary. What was going on with Ramanujan was a mix of intuition and reluctance to write down proofs when he actually had them. Sometimes he probably had the proofs but didn't record them. Other times probably it was just a good hunch. Both would be totally normal (for prominent mathematician) and not unique to Ramanujan.

Is mathematics "very hard-nosed and rational" or "primarily about intuition"? Both, in places. There's no contradiction. The end results of a mathematician's labor - proofs - are supposed to be perfectly rational and logical and should not rely on appeals to intuition. But the methods by which mathematicians arrive at proofs don't have to be logical at all. Intuition plays a big role in it. You are allowed to do drugs, talk to goddesses, then roll on the floor for a while if it helps you (real things people did, though not all at once); as long as the proof is airtight in the end, it's all good.

You note: "I've heard the culture of mathematicians is singularly averse to finding practical applications".

You can find various prominent mathematicians saying things that sound that way, but I think in reality most people are less romantic, and it's not that mathematicians hate the idea of practical applications; it's that most of the time they do work that seems needed for the development of some mathematical theory and don't have any idea what the practical applications of it might be, and can't really know it, because the distance between high-level theories and applications is so huge and tangled. Fermat's little theorem was proven in, like, the 17th century, and the practical application of it is that if hackers intercept your e-mails, they can't read them. Do you think Fermat could know that, could ever figure it out?
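
To make that line concrete, a toy sketch in Python (textbook RSA with toy parameters, nothing like real e-mail encryption's key sizes), whose correctness rests on Fermat's little theorem via Euler's generalization:

    # Toy RSA: decryption recovers the message *because of* Fermat's little
    # theorem (in its Euler generalization). Textbook-sized numbers only.
    p, q = 61, 53
    n = p * q                   # public modulus, 3233
    phi = (p - 1) * (q - 1)     # 3120
    e = 17                      # public exponent, coprime to phi
    d = pow(e, -1, phi)         # private exponent 2753 (Python 3.8+ modular inverse)
    message = 42
    cipher = pow(message, e, n)           # encrypt: m^e mod n
    assert pow(cipher, d, n) == message   # decrypt: c^d mod n gives m back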

Most pure mathematicians get asked all their lives: "well, what are the applications of the thing you are working on?", and they have no idea the same way a lorry driver transporting sand from A to B can't have an idea what each individual grain of sand will be used for, and it gets on their nerves the 1000th time somebody asks that. So there's a reason to retreat to some artsy pose: "oh, it's so pure and lofty, I would never soil myself with the matters of practicality", but the real feelings are less theatrical.

Expand full comment
Ogre's avatar

But in that case... let's compare that proof with evidence given for a crime at court. The real or true aspect of the crime is actually doing the deed. The evidence given at court is a complicated, circumspect, bureaucratic, fallible process, because you have to somehow convince the judge or jury this way or that way, because society must function somehow.

Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?

I have read Heisenberg's Quantentheorie und Philosophie. He said either mathematical or empirical proofs are basically just for other people; once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true. The rest is basically a set of bureaucratic hoops to jump through for the sake of convincing other people.

Which suggests science is at some level non-rational; it's the bureaucratic process of convincing other people that is rational.

Expand full comment
Viliam's avatar
12hEdited

> either mathematical or empirical proofs are basically just for other people; once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true.

I think the problem is with the word "simple".

First, if we adopted this as the official rule of science, it would quickly escalate to status games, where various people would insist that the idea is "simple for them" and it's not their fault that others are too dumb to perceive its simplicity.

You probably wouldn't be happy with Einstein telling you "time is relative, because... well, it obviously *is*, right?" (Or, consider quantum collapse versus many worlds. This is a difference in interpretation rather than measurable data, but still each side insists that their idea is obvious to them.)

Second, mathematicians sometimes make mistakes, too. It is possible to make a mistake in a simple and elegant proof just because you made a wrong assumption, such as "prime numbers are odd". The rest of the proof may be simple and elegant, and yet the theorem could fail for e.g. two and its powers.

If an idea is simple and beautiful for an experienced mathematician, then it is probably 99% likely to be correct. Still worth checking for the remaining 1%.

Finally, there may be situations where we don't have a simple and elegant proof (yet?), but we still want to know the answer, and sometimes we succeed in arriving at it using a complicated and non-elegant proof. Mathematicians can still feel bad about it, for example the computer proof of the four-color theorem, but for the moment it may be the best we've got.

Expand full comment
agrajagagain's avatar

"Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?"

No, it most certainly cannot. Your intuition and a nickel will buy you a stick of gum. If you don't have the proof, you don't have anything.

It happens frequently (at least to me) that when setting out to prove something, I will have a clear and intuitive idea of why it's true, and will be able to start writing the proof immediately. Sometimes I finish the proof just as immediately. Other times, the effort of writing it out carefully and rigorously reveals a subtle flaw in the line of reasoning that had been sketched out in my head. Usually I can find a way to patch that flaw. Sometimes it turns out to be bigger than I'd realized at first, and the whole approach must be discarded. If you want a really, exceptionally famous example, consider Pierre de Fermat. It seems highly likely that he felt he had a solid, intuitive grasp of why a^n + b^n = c^n has no non-trivial solutions for n>2. And that statement is, as it happens, true. But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles' eventual proof is basically nil.

There's the gap between intuition and proof writ large for you: 300 years, and 129 pages. Even if some mathematician is such a transcendent genius that their intuition is literally never wrong, nobody else has any way to *know* that without a proof. And even if they just trusted such a person to be right, they could hardly understand it for themselves without it being laid out clearly and in full rigor.

Expand full comment
Ogre's avatar

I must apologize that this is very hard to put into words, so I might be not making sense, in words. Please try to read my mind :)

My basic idea was that the really important part is whether the crime was done or not, not whether the crime can be proven at court by the rules of permissible evidence; that is just bureaucracy. The deed is the real thing; the proof and evidence are basically just social rules. Or maybe a better example: it is more important to invent a better mousetrap than to prove to the patent office that it really works and there is no prior art. The mousetrap matters more than the patent does.

"But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles' eventually proof is basically nil."

"nobody else has any way to *know* that without a proof"

Again this sounds too much like proving the mousetrap at the patent office is more important than actually inventing it. I think this has things backwards...

Yesterday I made chicken soup. Almost no one knows I did. And? It still happened. Why do I care who knows about it?

Expand full comment
Victualis's avatar

This might be true in some parts of mathematics, but elsewhere intuition is notoriously unreliable. Combinatorics and computational complexity theory seem to be like this: major conjectures are frequently disproved even when widely believed for decades. The truth manifold just doesn't have a nice predictable shape.

Expand full comment
Dino's avatar

> You are allowed to do drugs...

Referring to Erdös and speed? I always aspired to have an Erdös number.

Expand full comment
Gunflint's avatar

I’d like to have written “Do Androids Dream of Electric Sheep”. Same principle. Probably stick to an occasional modafinal myself though.

Expand full comment
Shankar Sivarajan's avatar

> Hardy … thought Hilbert was greater still.

The Man Who Knew Infinity suggests otherwise: "[Hardy] assigned himself a 25 and Littlewood a 30. To David Hilbert, the most eminent mathematician of the day, he assigned an 80. To Ramanujan he gave 100."

Expand full comment
Tasty_Y's avatar

Oh yeah, I misremembered this part and lied about it, apologies to the ghost of Ramanujan and also Hardy, my bad.

Expand full comment
Carlos's avatar

That stuff about mathematicians being averse to applied work comes from someone I spoke to who declined getting a math PhD because he disliked what he perceived as a cultural aversion to finding applications.

And the claim about Ramanujan being something beyond genius comes from mathematician David Bessis, so it's noted that there are divergences of opinion among mathematicians.

Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?

As to Scott's story, sure, maybe a feat of computation would be more difficult, but it also seems strange to expect a higher being to do something like that.

Expand full comment
Tasty_Y's avatar

"Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?"

Mostly true. "Never" is an exaggeration, but it's very normal to only read one specific chapter, or to use the book to look up some info. Or a book might cover material you already (mostly) know and you use it to refresh your memory from time to time, but not have to read every word. There's a saying to that effect that you never read math books, you only re-read them.

The logic of Scott's story is: "if DMT entities can perform a calculation, then they are real and we can prove it to everyone", not "if they are real, they 100% can do it". It's completely plausible for a real ghost or spirit to not be able to do advanced math (after all, you and I are real, and we can't pull it off), it's just that the story is funnier if they totally can and are annoying about it.

Expand full comment
Tori Swain's avatar

Mathematics needs to be entirely rewritten to work with the world as it is. Our intuition is suspect and doesn't accurately model the quantum reality.

Expand full comment
Shankar Sivarajan's avatar

I don't see how your second sentence relates to the first.

Expand full comment
Tori Swain's avatar

Our intuition leads us towards round, stable numbers. Not the Heisenberg uncertainty principle. When you start writing math that starts with the wave-particle duality, then you're writing math that actually stands a chance of being more effective at solving "day to day" quantum problems.

Expand full comment
Shankar Sivarajan's avatar

I am reminded of the quote: "God created the integers, all else is the work of man."

There is a sense in which you're right, that integers aren't sufficient to model quantum mechanics, but the tools of mathematics are richer than you seem to think, and have developed over the centuries to include some decidedly unintuitive objects: I'd say complex numbers are necessary, but also seem to be quite sufficient. If something comes up, we can go up a level to quaternions or higher. Whatever you're gesturing at with your "math that starts with the wave-particle duality" is probably isomorphic to one of those.

Now, there IS a problem with renormalization, but that's well beyond the Heisenberg uncertainty principle, and at any rate, the point is that those are still just PHYSICS problems: mathematics has tools to deal with that kind of thing, like zeta function regularization, for example.
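A toy illustration of the "complex numbers are sufficient" point (my own sketch, not from anyone upthread): in quantum mechanics probabilities come from |amplitude|^2, and making the amplitudes complex is exactly what lets two paths interfere rather than just add. A few lines of Python with made-up, equal-magnitude amplitudes:

    import cmath

    a = 1 / 2**0.5                         # amplitude for path 1
    for phase in (0.0, cmath.pi / 2, cmath.pi):
        b = a * cmath.exp(1j * phase)      # path 2: same magnitude, shifted phase
        classical = abs(a)**2 + abs(b)**2  # real, additive probabilities: always 1.0
        quantum = abs(a + b)**2            # complex amplitudes: interference
        print(f"phase={phase:.2f}  classical={classical:.2f}  quantum={quantum:.2f}")

The quantum column swings from 2.0 (constructive) to 0.0 (destructive) while the classical column sits at 1.0, which is the behavior integers and plain reals can't give you but complex numbers handle without any new foundations.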

Expand full comment
Tori Swain's avatar

If the math makes everyone's head hurt, and the physics is relatively simple (informationally), then it's the math's fault. The math, after all, is manmade. The physics is made by God.

Imagine a numbering system where instead of integers, or real numbers, you started with probability clouds, of varying sizes and shapes. And you had methods of having those probability clouds interact. (I'm not even saying this is at all relevant to physics, I'm just giving a "somewhat relevant" example; I bet no one's done this, as it makes most symbolic folks' brains hurt. It should work out relatively intuitively for someone whose base processing is visual in nature.)

Expand full comment
Shankar Sivarajan's avatar

I would be exceedingly surprised if anyone working on quantum mechanics has any difficulty grasping complex numbers: certainly not the basics, and even complex analysis is generally considered remarkably elegant (at least among physicists; the math people will object to the lack of rigor in the pedagogy of how I was taught it).

Even if you can formalize it, I don't see what you hope to achieve with this vague notion of making interacting probability clouds the foundation of a new kind of mathematics that we don't already have. I suppose if your intuition serves as an infallible oracle for arbitrarily complex calculations of this kind, such a thing might be useful, but if you're going to fall back on calculations eventually (you are), less so.

Expand full comment
Carlos's avatar

Ooh, I've heard the culture of mathematicians is singularly averse to finding practical applications, I don't think they care about the relation of math to the physical.

Expand full comment
Tori Swain's avatar

There's a real big split between the applied mathematicians and the "pure" mathematicians. As it is, pure mathematics is like playing chess with yourself. Pure, determined, and very, very boring.

Expand full comment
Carlos's avatar

I think they shouldn't indulge so much in the pure mathematics, but it sounds fascinating to me, proving a theorem, figuring out that something abstract is indisputably true. It's the only domain where you can be certain of something abstract, isn't it?

Expand full comment
Anon's avatar

I don't know the timeline, but Ramanujan put in a lot of rigorous and formalized work on areas like the famous taxi-cab number. Did that work precede his seemingly spontaneous insight about 1729? I read an article that implied this, but I can't find it now. It could be that, in addition to being a bona fide mathematical genius, he was a bit of a showman or prankster when relating how he got his ideas.

Expand full comment
Carlos's avatar

In David Bessis' Mathematica, Bessis says Ramanujan put out a lot of conjectures that would be proven only after his death, and Ramanujan did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them, no? Bessis is a mathematician, so it makes sense to believe his account.

Expand full comment
Anon's avatar

Here's a different article that discusses his work on the topic, though it doesn't make clear whether his hospital-bed insight about the number 1729 came before or after the formal work: https://news.emory.edu/stories/2015/10/esc_taxi_cab_numbers/campus.html
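The 1729 claim itself is easy to check by brute force; a minimal Python sketch (my own, just confirming the well-known fact that 1729 is the smallest number expressible as a sum of two positive cubes in two ways):

    from itertools import combinations_with_replacement
    from collections import defaultdict

    # Every n = a^3 + b^3 with 1 <= a <= b < 20; cubes above 12^3 = 1728
    # can't contribute to anything as small as 1729, so the cap is safe.
    ways = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, 20), 2):
        ways[a**3 + b**3].append((a, b))

    taxicab = min(n for n, pairs in ways.items() if len(pairs) >= 2)
    print(taxicab, ways[taxicab])  # 1729 [(1, 12), (9, 10)]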

Expand full comment
Joel's avatar
5dEdited

Can someone explain to me why the UK Prime Minister’s announcement that he will recognize a Palestinian state unless Israel fulfills a number of conditions, including agreeing to a ceasefire in Gaza, doesn’t all but guarantee that Hamas will not agree to any possible ceasefire offer put to them?

Expand full comment
Melvin's avatar

A possibly naive question but: how does Hamas even still exist as a meaningfully armed force after all this time?

Expand full comment
Jim's avatar

Define "meaningfully armed force". Obviously they don't pose an existential threat to Israel (not that they ever did), but a bunch of armed people hidden around in random places is always going to be a problem.

Expand full comment
NoRandomWalk's avatar

You have thousands of people who don't wear uniforms, who move about the strip through tunnels (only about 25% of which have been destroyed so far), in a landscape where they have broad support from the population they come from, and with a huge number of unexploded munitions that can be repurposed.

It's not like they are going in, in groups of dozens, shooting at people (several of these battles happened, when they concentrated in a few schools/hospitals, but those were cleared out). They pop out of tunnels, place bombs on tanks, or shoot people.

Something like half of Gaza has been taken over by the IDF at some point (they claim 75%, but whatever), but that doesn't mean they know where all the tunnels are.

You've had maybe 20 soldiers killed in the last few weeks; that's not a huge number when thousands of Israelis are operating in the strip.

It's not like the IDF knows who Hamas is, where all their weapons are, etc.

But, in military terms, the IDF hasn't 'accomplished' anything for almost a year now. They get random people with guns, while more young and less-trained boys are recruited to replenish the ranks, and so forth.

If every hamas member evacuated to another country, a few years from now another organization would take its place.

Expand full comment
Tori Swain's avatar

That last line... not sure about it. Saudi Arabia has been talked into defunding Hamas, and Iran seems to be hurting. Yes, maybe you get someone who wants to hurt Israel. I'm not sure that person gets to LEAD the entire government, though.

Expand full comment
Erica Rall's avatar

The announcement's stated terms are:

>unless the Israeli government takes substantive steps to end the appalling situation in Gaza and commits to a long term sustainable peace, including through allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation, agreeing to a ceasefire, and making clear there will be no annexations in the West Bank.

That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity, offer a ceasefire on what the UK considers to be reasonable terms, and agree to peace talks on what the UK believes to be a reasonable basis. They probably don't need to actually implement a ceasefire if Hamas insists on continuing large-scale hostilities, although the UK might then insist on Israel unilaterally taking some kind of defensive operational or tactical stance.

Expand full comment
FLWAB's avatar

>That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity,

There is no blockade. They are letting food in, in quantity. They paused food aid for a bit a month or so ago, but in recent months the main thing preventing aid from coming in has been that the UN refused to send in aid unless UNRWA was allowed to do the distributing, and Israel doesn't want that because UNRWA has been funneling food to Hamas (that's their point of view, anyway). The UN had hundreds of trucks full of food waiting just outside the Gaza border and Israel was asking them to send it in, and they were refusing, claiming that it was too dangerous (despite Israeli military offers to escort the trucks). Recently Israel paused fighting and created more secure transport corridors, and the UN is starting to let aid go back in. There is no blockade, just a disagreement about whether UNRWA can distribute the food, with the UN refusing to send in aid for a while.

Expand full comment
Peter Defeel's avatar

That’s literally all made up and only appears in some Zionist publications. Israel is prohibiting aid and Hamas is not stealing aid.

Expand full comment
FLWAB's avatar

Is the AP a Zionist publication? (https://apnews.com/article/aid-gaza-hunger-united-nations-e703faaaba945e838aabfb3c7fa32d70)

>Israel says it doesn’t limit the truckloads of aid coming into Gaza and that assessments of roads in Gaza are conducted weekly where it looks for the best ways to provide access for the international community.

>Col. Abdullah Halaby, a top official in COGAT, the Israeli military agency in charge of transferring aid to the territory, said there are several crossings open.

>“We encourage our friends and our colleagues from the international community to do the collection, and to distribute the humanitarian aid to the people of Gaza,” he said.

>An Israeli security official who was not allowed to be named in line with military procedures told reporters this week that the U.N. wanted to use roads that were not approved.

>He said the army offered to escort the aid groups but they refused.

>The U.N. says being escorted by Israel’s army could bring harm to civilians, citing shootings and killings by Israeli troops surrounding aid operations.

Expand full comment
Erica Rall's avatar

Thanks. I've been hearing conflicting claims on that front and was mostly going off of the "allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation" bit in the British announcement.

Expand full comment
Jollies's avatar

Multiple high-ranking Israeli officials, as well as leaked internal documents, have proposed exactly what we are seeing: push the Gazan population into the south and starve them, while also destroying the civilian water, sanitation, and medical infrastructure necessary for survival.

The blockade is still in effect. What ended is the *total* blockade. Currently Israel lets only a small and insufficient amount of food into Gaza and then fires on civilians who attempt to retrieve it. Obviously Israel denies this, but who are you going to believe? It's their word against virtually every independent organization attending to this issue.

Expand full comment
FLWAB's avatar

Yeah, the "allowing" they are referring to is complying with everything the UN is demanding before the UN will send the aid in. Israel wants them to send in the aid, they're not stopping them.

Expand full comment
Nobody Special's avatar

Depends - "Recognize a Palestinian State" is a flexible concept, and could potentially include:

"We recognize a Palestinian State in Gaza and the West Bank, the legitimate government of which is... Fatah!"

This (or mere recognition of a West Bank Palestinian State that does not include Gaza at all) is likely an outcome Hamas would not want.

Expand full comment
NoRandomWalk's avatar

It absolutely does guarantee it, but also this position has been massively overdetermined at this point.

Hamas has made it clear they will reject any ceasefires.

They are only interested in ending the war in exchange for being able to remain the dominant military power in Gaza (though not in 'political' control), similar to Hezbollah's situation in Lebanon (before it lost a war and was put on the path to disarmament).

There's no 'pressure' they are responsive to. Whether Europe pressures Israel toward a two-state solution doesn't meaningfully affect Israel's capacity to get them to surrender. Maybe large-scale population transfer, or annexing a massive part of Gaza, would do it, but I highly doubt that's on the table.

Expand full comment
Melvin's avatar

The other thing I don't get is: if Hamas leadership is hanging out in Qatar, why is there no Western diplomatic pressure on Qatar? I mean, would it kill us to say "No more Qatar Airways flights until all Hamas leaders are handed over to Israel" or something?

Expand full comment
Tori Swain's avatar

According to Biden, we're also in Qatar, right? (and not admitting it) I think Qatar has established itself as a "free zone" where anyone can play, and where negotiations can occur.

Expand full comment
Shankar Sivarajan's avatar

What do you imagine happening after Israel hangs/shoots all the Hamas leaders in Qatar?

Expand full comment
NoRandomWalk's avatar

1) Qatar bribes a lot of people with a lot of money. They have a lot of soft power. They just gave Trump a $400 million plane.

2) Realistically the problem has always been that Hamas is not answerable to their political leadership abroad. And you need 'someone' to be the face of negotiations.

It's really important to recognize that Hamas's political leadership were fine taking bribes and living in luxury abroad. They were in it for the cause (as far as their talk of children martyrs went), but it's not like they signed off on Oct 7th. Sinwar basically did a coup against the political leadership, and even Iran and Hezbollah were only vaguely in for a war against Israel 'in the future', but he miscalculated.

If you pressure Qatar, you lose a useful intermediary.

Heck, Qatar and Saudi Arabia just signed on to asking Hamas to resign and disarm, which would have been unthinkable a few months ago.

Expand full comment
FreedomAdvocate's avatar

People seem to be concerned about payment processor censorship, but there's actually very little discussion of what can be done about it, save advocating for the S.401 Fair Access to Banking Act. Problematically, people now think that if the act were to pass, the payment processor censorship problem would be fixed. This is not true: the penalties are too low, and the act as written would be practically unenforceable even if it passed. The bill needs alterations, and nobody that matters even seems to know that.

The only person I've seen mention this as a point of contention is Josh Moon on the Kiwi Farms, who talked to several lawyers about it.

Here's a link to where he talks about it. Just a warning, this isn't intended for the audience of this blog. It's mostly intended for right-wingers upset that they've been prevented from supporting their super edgy websites and who expect that to continue to be an issue in the future. It's also clearly a vent post about payment processors. Hence, there are many, many slurs and the language is more emotional than is strictly helpful.

Basically, reader beware:

https://kiwifarms.st/threads/s-401-fair-access-to-banking-act.224421/

Onion: https://kiwifarmsaaf4t2h7gc3dfc5ojhmqruw2nit3uejrpiagrxeuxiyxcyd.onion/threads/s-401-fair-access-to-banking-act.224421/

Josh also went on a YouTube podcast with Kurt Metzger to talk at length about payment processors. This is good because he's the foremost expert on the subject of de-banking, but is seldom heard in any discussion of it, left or right, because both sides hate him for running his gossip website as he does, and for his belligerent personality.

Expand full comment
Tori Swain's avatar

Poor Josh Moon. He showed up and said, "I'd like to learn how to work this forum management software. You got a forum you want me to run?"

Oh, yes, oh yes they did.

He's now living in Eastern Europe, his mother having received credible (in person) death threats by an unstable person...

If you want a free way to subsidize "Freedom of Speech" no matter who, then you can always do the "get paid to get Brave advertisements" -- and send the money to kiwifarms. (Surfing kiwifarms also pays for kiwifarms, but I'm making the general assumption that folks around here aren't really the gossiping type. Perhaps the police report of "what it's like when you put a nine year old on cocaine" might be of interest...)

Expand full comment
FreedomAdvocate's avatar

He's back in the United States now to do advocacy for a free internet. He made an org called the United States Internet Preservation Society to lobby and everything.

Expand full comment
Tori Swain's avatar

You ought to provide an onion link to kiwifarms. Not sure who is reading this, but kiwifarms could get you flagged.

Expand full comment
FreedomAdvocate's avatar

Added it in. I think UK users get warning screens directing them to vpns or the Onion link anyway, but it's good digital hygiene.

Expand full comment
Tori Swain's avatar

Sure. How likely do you think you're getting tracked/flagged in America for looking at kiwifarms? Given government intel people discussing on worktime Discord "how women deserve to be raped" and other such "trans-aligned" nonsense, I wouldn't say the possibility is zero.

Expand full comment
grumboid's avatar

The sense I have is that this is a people-applying-soft-pressure-to-payment-processors problem, ie, there's some kind of religious group that applied soft pressure and the payment processors caved.

If that's true, then it seems like this is that rare type of problem that is *exactly* solved by getting really angry about it on social media. If the culture warriors on twitter can apply more pressure to payment processors than the religious group, then the problem should go away.

It's possible I'm wrong and there's some sort of actual lawsuit threat from the religious group?

(Apologies, I did not click your link to the slur-filled rant. I try not to read that sort of thing.)

Expand full comment
Chastity's avatar

I work in this field (smut) and you just get constantly kicked around by everyone, but if it weren't for mechanical business reasons, somebody would have jumped on getting the cash from what is, ultimately, a field with a lot of money in it. SubscribeStar got a bunch of clients because Patreon started kicking around adult content creators, and I was in that group.

The main problem is something like: John sells some porn to Bob. Bob's wife, Alice, sees the line item on their credit card bill for "Hot Teen Sluts," and goes to Bob to ask what this is. Bob says (lies) that he has no idea, so Alice calls the credit card company to dispute the charge. This heightened risk of payment dispute is broadly true, though it can happen for other reasons (kid getting the card and the parents being more willing to dispute than in the case where he bought a bunch of Fortnite skins, guy getting pissed that the camwhore didn't actually love him, etc). This produces a cost to the credit card company, so the % they take from the original company mechanically goes up to compensate for Bob. If you are a company which mixes sales of both porn and non-porn content (e.g. Steam, SubscribeStar, Patreon), you want the non-porn rates, but you are in fact selling porn content, which is in fact at an increased risk of payment disputes.

The solutions I've seen are:

- banning all porn from your platform outright (generally done at the outset);

- bizarre and arbitrary ban lists to try to reduce payment disputes ("Reincarnated In A World Where Women All Love Being Violently Raped" presumably having a higher decline rate than "Big Titty Elf Island", but then you get all the standard issues of censorship, with the person making the list not able to do an actual analysis, and also frankly not caring to do so); and

- just having some type of partitioned sub-platform where they take a bigger fee (this is basically what SubscribeStar does).
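A back-of-the-envelope sketch of the fee arithmetic described above, in Python. Every number here is hypothetical, chosen for illustration, not taken from any processor's actual fee schedule; the point is just that the dispute rate feeds directly into the rate a porn-heavy platform gets quoted:

    # All numbers hypothetical, for illustration only.
    BASE_FEE = 0.029         # 2.9% baseline processing cut
    CHARGEBACK_COST = 15.00  # flat cost the processor eats per dispute
    AVG_TICKET = 20.00       # average sale size

    def breakeven_rate(dispute_rate):
        # Each dispute costs the flat fee plus the refunded ticket, so the
        # processor's cut has to rise to cover the expected per-sale loss.
        expected_loss = dispute_rate * (CHARGEBACK_COST + AVG_TICKET)
        return BASE_FEE + expected_loss / AVG_TICKET

    print(f"{breakeven_rate(0.001):.2%}")  # ~0.1% disputes (typical retail): about 3.1%
    print(f"{breakeven_rate(0.010):.2%}")  # ~1% disputes (riskier content): about 4.7%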

Expand full comment
EngineOfCreation's avatar

I've heard that argument before and think it carries a lot of weight, but the question for me is: if payment processors only look at their spreadsheets and dispassionately cut out the (category of) customers with the highest risk of chargebacks, why did it take a private initiative (Collective Shout) for them to take action? Or was that coincidence, and Collective Shout got to claim credit where none was due?

Expand full comment
Paul Brinkley's avatar

The typical complaint I hear about porn companies having to shut down due to payment infrastructure threats is that credit card companies are run or infiltrated by prudes who use private enterprise to do what the government is forbidden to do. This might be the first time I've heard an argument that factors in the obvious profit motive to stay in the porn business, and honestly, it makes sense that these purchases get disputed so often that it's denting the margin.

So now I wonder how much of the higher rates / shutdowns traces back to merely this, as opposed to prudes.

Expand full comment
grumboid's avatar

Thanks for the clear explanation!

It sounds like the theory is: "this Australian activist group claimed credit for making the payment processors make Steam de-list all those incest games by doing a bunch of call-in complaints, but a more likely explanation is that the payment processors decided incest games carried too much risk of chargebacks and they're just not supporting them anywhere."

Is that right?

Expand full comment
Chastity's avatar

Probably, yeah. Patreon tightening the screws followed the same general pattern of adding increasingly arcane rules about what you can publish, and I'm not aware of any particular pressure campaign on them.

e: Well, probably it went "complain to Visa" -> "Visa thinks Steam is at increased risk of chargebacks" -> "Steam doesn't want to deal with it"

Expand full comment
lyomante's avatar

The issue is probably more that the payment processors have both moral and business reasons to do it too, and the culture group is a diversion. "But they made me do it..." Nah, you dislike how risky those kinds of transactions are, and morally you dislike degens too, but it's nice to have someone else be the bad guy.

Expand full comment
Tori Swain's avatar

How does your view change if you hear that some of this religious group is in world governments? (Perhaps not so germane to payment processors, but perhaps it is? Monopolies don't like the state nosing around their business, as they're easy to put -out- of business).

It would perhaps be nice if someone would stick up for the little (only slightly toxic) guy, but the problem is, the people we used to ask to defend people "on principle" like the ACLU are now calling the Canadian Flag a "symbol of right-wing hate."

Expand full comment
FreedomAdvocate's avatar

No problem, I put the warning there for a reason. The payment processor problem is likely not going to be solved by public outrage because I suspect it wasn't caused by public outrage. Payment networks have been doing this for years, and have been completely ignoring all the outrage that came before.

All of the people and companies (PornHub, SubscribeStar, Gab, Hatreon, DLSite, Wikileaks, gun retailers, Canadian truckers' GiveSendGo pages, and many more) apoplectic with rage at losing business should have accomplished something if it were true that payment processors respond equally to different types of outrage. I suspect this stubbornness is because they are in favor of these bans for ideological reasons. To address the broader point, it's a chronic problem and people should not have to marshal an outrage mob just to participate in e-commerce. That means a counter-mob is not really a satisfactory solution to the underlying problem.

Expand full comment
Tori Swain's avatar

I'm pretty sure the threat of lawsuit did something when GoFundMe stole the Canadian truckers' money in a purely ideological and mean-spirited act.

https://www.bbc.com/news/world-us-canada-60267840

Expand full comment
Shankar Sivarajan's avatar

Does it even need enforcement? The existence of such a law, even with nominal penalties, would let Visa and Mastercard point to it as a way of resisting public pressure, and continue taking a cut of "immoral"-but-legal activities they facilitate. They're incentivized to lobby in favor of it, and I wouldn't be surprised if they were surreptitiously doing so.

Expand full comment
FreedomAdvocate's avatar

I don't know why you're assuming so much about the character of those with authority in these companies. Perhaps they like exercising their power to rid the world of pornography? Capitalism is not some ultra-efficient process that selects out things like puritan sensibilities when the damage is a speck next to their profits. And payment processors are not creatures of capitalism anyway.

An actually effective ban would be the selecting pressure in this case because it might actually result in such characters no longer being effective as leaders for payment processors. A toothless ban isn't going to accomplish anything unless the leadership is in agreement that they don't wish to do this and only need an excuse not to.

That is the case with plenty of tech leaders, like Matthew Prince of Cloudflare, who at first defended Kiwi Farms from deplatforming out of his libertarian principles and then buckled to public aggression. A ban like the one proposed would help a boardroom of Matthew Princes; it'd give them the necessary fig leaf to stick to their guns. But I don't think that's what we're dealing with.

I think we're dealing with true believers, some true believers at the very least.

I'd be interested in seeing a journalist investigate the personal lives of the people running these opaque companies, dig through their garbage, that kind of thing. I suspect you might find a real nasty customer posing as a Matthew Prince type.

Even if that's not the case, if they're lazy and profit driven, they might just keep banning things after email campaigns (in the case of Steam deleting adult games) and phone calls (in the case of PornHub deleting the majority of its content). Who can put a number to bad PR? Might as well just get rid of it, not like they're going to give us a real fine.

Expand full comment
Ogre's avatar

On colonizing space. Am I stupid or is it stupid? Literally everywhere on Earth, from Antarctica to the ocean is a better place to live than Mars. Why not go there first?

Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?

Wait, it gets better. Humans are not expanding into inhospitable environments; we are retreating even from very hospitable ones. There are whole villages, even more or less whole rural regions in France and Italy, that are going empty, partially demographics and partially urbanization.

Apparently young people do not even want to live in pretty French villages with good air, soil, water and everything.

Why would we want to colonize space? Shouldn't we try to colonize those pretty and very hospitable villages first?

Expand full comment
Viliam's avatar

I think Mars is not a good place, except maybe to build some huge massively environment polluting factories. It has no water, no air, and is even smaller than Earth.

On Jupiter and Saturn, the gravity would crush us like bugs. On Uranus and Neptune, the Sun is so far away that it seems like just another star... maybe a little bit brighter, but definitely not giving you enough heat to survive.

On the other hand, I think -- if it is possible -- it would be nice to have a backup for humanity in catastrophic cases like "a supervirus appears that exterminates all humans on Earth", "a huge asteroid we cannot deflect hits Earth and destroys all multicellular life", "a superintelligent AI succeeds in killing all humans (but also destroys itself, so it fails to expand to the universe)", or even "a planetary government stops progress and creates a new Dark Age".

But if we ever get this backup, I think it will be in the form of "humans living in space ships" rather than colonizing other planets. The planets are just... not useful the way they are now, and it would be unimaginably costly to fix them. And even if we got lucky and found a suitable planet a few dozen light years away, the way there would be so long that if humans can survive the trip, they can probably survive staying in space indefinitely.

Expand full comment
John Schilling's avatar

Mars has some air and some water, which we can use for all the usual purposes after applying basically 19th-century chemical engineering (some of which has in fact been demonstrated on Mars).

And Saturn's gravity is about the same as Earth's - the planet is larger than Earth, but also less dense. In principle, a "cloud city" floating in Saturn's atmosphere could have Earthlike gravity, Earthlike atmospheric pressure, ready access to oxygen and water, and temperatures comparable to Antarctica. There are engineering reasons why I wouldn't recommend this as a near-term target for extraterrestrial settlement, but if our civilization survives the next few decades we'll get there eventually.
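For what it's worth, the Saturn-gravity claim checks out with plain Newtonian arithmetic; a quick sketch using standard published values:

    # Surface gravity g = GM / r^2. "Surface" for a gas giant means the
    # 1-bar cloud level; this ignores Saturn's fast rotation, which trims
    # effective gravity at the equator somewhat.
    GM_SATURN = 3.793e16  # m^3/s^2
    R_SATURN = 6.027e7    # equatorial radius, m
    GM_EARTH = 3.986e14
    R_EARTH = 6.371e6

    print(f"Saturn: {GM_SATURN / R_SATURN**2:.1f} m/s^2")  # ~10.4
    print(f"Earth:  {GM_EARTH / R_EARTH**2:.1f} m/s^2")    # ~9.8

Within roughly 6% of each other: larger radius almost exactly cancels the larger mass.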

Expand full comment
agrajagagain's avatar

I believe you are exactly correct. Colonizing space may be a worthwhile goal to pursue one day, but not at our current level of technology. It's a ridiculous pipe-dream for the foreseeable future. Even asteroid mining with robots seems fairly suspect at the moment. You need to be bringing back some absurdly valuable payloads to cover the kind of costs such a venture would incur.

Incidentally, I think if humankind *is* going to make any serious use of resources beyond Earth's orbit, better systems for getting things off the planet will be a necessary prerequisite. Chemical fuel rockets aren't going to cut it. There are a number of very interesting proposals for planet-based installations to help stuff reach orbit, but they're all very speculative (read: they're just shy of mad science and it's wonderful).

Expand full comment
Nate Scheidler's avatar

I think you're right. I've found this essay "Why Not Mars" very persuasive (https://idlewords.com/2023/01/why_not_mars.htm).

Since the 1960s, computer technology has improved by a dozen orders of magnitude while keeping-people-alive-in-the-vacuum-of-space technology has hardly budged. It would be a great idea to colonize other planets, but we're far, far further from that possibility than many hope.

I'm also very suspicious that Elon is using a promised Mars mission to whitewash his image, kind of Pascal's-mugging all the Mars hopefuls while doing a bunch of bad things to people on Earth.

Expand full comment
Odd anon's avatar

There are possible catastrophes which could kill all humans in the lightcone, all humans on Earth, or all humans on Earth except those in very remote locations. All of these are worth taking preventative measures against. There should be efforts put toward having people living underwater, and people living in Antarctica, and people living on Mars, and people living in as distant parts of space as we can reach, because each of these slightly increases the species' odds of survival.

Expand full comment
agrajagagain's avatar

People living on Mars or more distant places are no hedge at all for the foreseeable future. You know what happens to people living on Mars if Earth gets wiped out? They die too. Most disasters that would actually wipe out everything on Earth would also get Mars all by themselves. But even if they didn't, "Martian colony that can sustain itself with no external help" is so laughably far beyond "baby's first Martian colony" that you can barely hold them both in your field of vision at once.

Expand full comment
Odd anon's avatar

The purpose of early Martian colonization efforts is to bring us closer to a self-sufficient colony. People are pretty clear about that.

Expand full comment
agrajagagain's avatar

This is a little like saying "the purpose of Archimedes messing around with fluid displacement was colonizing the Americas." Technically speaking, those two things do have a relationship. But "better understanding of how things float" was ridiculously far down on the list of barriers to the classical world doing something like that.

The things that are needed to establish a self-sufficient Martian colony are technologies that are getting really quite close to the "indistinguishable from magic" end of the scale. In particular, they would need to include things like hugely durable, long-lasting materials; incredibly cheap, reliable, and compact energy storage and generation; and above all manufacturing processes that can do vastly more with vastly less than anything we have now on Earth.

Now, those technologies would be great to have. But they'd be great to have *on Earth.* On the list of reasons why people might want them, "allowing someone to do a Mars colony" is really quite far down. So I don't expect such an effort to speed them up to any meaningful degree.

Until we have those technologies--at least on the near horizon, if not on hand--shoving people into tubes filled with combustible liquids and shooting them millions of kilometers away is really not going to help. There are a lot of good reasons *not* to pour the staggering amount of resources and human talent into that effort now, when it's pretty much guaranteed to get cheaper, safer and faster long before "self sustaining" is a realistic-looking goal.

I'd be very happy to see more cutting edge scientific studies of Mars. But once we can admit that we're nowhere near a Martian colony, we can also admit that robots are a much better choice for that than humans right now. Realistically having an excuse to work on automation and miniaturization tech will be far more useful even from a "wanting to establish an eventual colony" standpoint than sending humans would be.

Expand full comment
Christian's avatar

I'm laughing so hard that you led with "all humans in the lightcone." Such a precise way of saying "everybody dies." But yes, broadly speaking I think your comment is excellent. Survival hedging means diversifying your population into inconvenient and probabilistically low usefulness locations just like financial hedging can mean buying inconvenient and probabilistically low usefulness assets.

Expand full comment
ruralfp's avatar

I think it’s worth noting that if Starship were to achieve its stated goals in terms of price per kilogram to orbit, it would actually be faster and less expensive for most people to go to low Earth orbit than to get to interior or potentially even coastal Antarctica.

Expand full comment
Melvin's avatar

The movie Elysium, which I have never seen, but of which I've watched parts over the shoulder of someone else on a plane, makes the best case for space colonisation.

Space colonisation only makes sense if you can make space nicer than Earth, which means big Stanford Toruses in orbit within reach of other places you might like to visit. Making the physical environment better than Earth is challenging (although potentially reduced gravity might be nice) but you can make the political and social environment better; people can start new colonies with independent governments that follow whatever rules they like, and keep out whatever sort of person they consider undesirable.

Expand full comment
Sol Hando's avatar

The true currency of man isn’t money, it’s motivation.

There’s a very simple way to produce motivation in other people. Money. But that doesn’t mean money is the only way to convince people to do things. Religion, patriotism, ideology, also work pretty well in the right contexts.

The inevitable consequence of humanity is space colonization. There is literally a near-infinite amount of resources out there for us to access, and given a long enough period of time, we’ll have solved all our problems here on earth and moved on to imagining problems elsewhere to solve. Think of the 99.9999% of the sun’s energy that’s just wasted! Think of the 99.99999% of stars that are just wasting their energy too! Consequently, we like to write stories about our future that anchor many people’s thoughts in that future, like how the Bible might anchor the thoughts of a 12th century Crusader.

Space is motivating. It gets people inspired. We tell compelling stories about the future, and this serves to reinforce the drive to go to space. We continue to tell stories about the future because the sorts of environments that create interesting stories aren’t easily created with modern technology, so we assume advanced technology that allows us to manipulate the conditions our characters must battle against.

Space colonization is the fulfillment of that. So long as there are men like Bezos, Musk, and the many, many people who work for them who are inspired, and thus motivated, by Space, there is a significant financial incentive to go there. It’s as if God came down from heaven and offered a hundred billion dollars to the first person to set foot on Mars. Except instead of God motivating us through currency, it’s our love of a certain type of story that motivates us inherently.

Expand full comment
Erica Rall's avatar

I've argued the Antarctica critique myself and still mostly agree with it, but there are three dimensions in which the Moon or Mars might be a better candidate for colonization than Antarctica. I heard two of them in a past open thread (from John Schilling, I think) and figured out the third on my own (although I'd be surprised if I'm the first to have thought of it).

First, if you're making stuff that's going to end up in space anyway, be it satellites, probes, space telescopes, or infrastructure and supplies for manned space flight, then it's a lot more convenient to be able to make it on the Moon. Because of gravity wells and the rocket equation, it's much, much more efficient to move stuff to low earth orbit from the surface of the Moon than from the surface of the Earth. If you have enough demand for stuff in Earth orbit or elsewhere in space, and you have a reasonable way to set up mostly-self-sufficient mining, manufacturing, and launch infrastructure on the Moon, then a small Moon colony becomes an appealing idea. Ideally, it would be mostly automated, but you'd still want at least a small crew of humans to deal with unexpected issues.
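The rocket-equation point can be made quantitative with a short sketch. The delta-v figures below are commonly cited rough values, not mission-specific designs; the code just turns them into Tsiolkovsky mass ratios:

    import math

    def mass_ratio(delta_v, isp, g0=9.81):
        # Tsiolkovsky: m0/mf = exp(delta_v / (isp * g0))
        return math.exp(delta_v / (isp * g0))

    ISP = 350  # s, a typical chemical-rocket specific impulse
    # Rough, commonly cited delta-v figures in m/s:
    for name, dv in [("Earth surface -> LEO (incl. losses)", 9400),
                     ("Moon surface -> LEO (with aerobraking)", 2700)]:
        print(f"{name}: mass ratio ~{mass_ratio(dv, ISP):.1f}")

That's roughly 15.5 kg on the pad per kg delivered from Earth versus about 2.2 from the Moon, which is the whole gravity-well argument in two numbers.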

Second, while Antarctica has several big climate-related advantages over the Moon or Mars (warmer than night or shade on the Moon or all but the hottest parts of Mars, an actual breathable atmosphere, and abundant surface water), it has the disadvantage of having actual weather, and Antarctica's weather is abominable. The weather turns the latter two advantages into monkey's-paw cursed versions of the things you'd wish you had more of on the Moon or Mars. The abundant surface water is in frozen form and is inconveniently piled atop soil and mineral resources, and the breathable atmosphere tends to move around annoyingly fast. Mars gets wind storms too, and those would be potentially dangerous to colonists, but Antarctica's wind storms have a lot more mass behind them and tend to move the abundant surface water around besides, burying structures and unsheltered people in snow unless you're careful and diligent about wind shelter and clearing snow off of stuff.

The third has to do with sunlight. Full direct sunlight under optimal conditions (clear sunny day at solar noon in the tropics) on the Earth's surface is about 1 kW per square meter. On the Moon's surface, it's about 1.4 kW/m^2 (same distance from the sun, but no atmosphere in the way). And on Mars, it's about 400 W/m^2 (some atmosphere but less than Earth, plus inverse square effects from being further from the sun). This is important for solar electricity generation and for growing crops. You literally never get the full kW per square meter of sunlight in Antarctica or anywhere close to it because the sun is never anywhere near directly overhead. The lower the sun is in the sky, the more air the sunlight goes through before reaching the surface, and the more ground area the same amount of direct sunlight is projected onto. I'm having trouble finding figures I'm confident I can compare apples-to-apples, but my best estimate is that the interior of Antarctica gets about 30% as much sunlight over the course of the year as Earth's tropics, about 75% as much sunlight as Mars's tropics, and a bit over 20% as much as the Moon's tropics.
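The Moon and Mars figures fall straight out of the inverse-square law; a minimal sketch (note the ~400 W/m^2 surface figure for Mars above additionally folds in atmospheric and dust losses, which this ignores):

    SOLAR_CONSTANT = 1361  # W/m^2 at 1 AU, above any atmosphere

    def toa_irradiance(d_au):
        # Irradiance falls off as the inverse square of distance from the Sun.
        return SOLAR_CONSTANT / d_au**2

    print(f"Moon (1.00 AU): {toa_irradiance(1.00):.0f} W/m^2")  # ~1361, the ~1.4 kW figure
    print(f"Mars (1.52 AU): {toa_irradiance(1.52):.0f} W/m^2")  # ~589, before atmospheric losses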

Expand full comment
John Schilling's avatar

Right on all three counts, and glad to know that my previous writings on the topic weren't entirely wasted :-)

W/re Antarctica, while the bulk of the continent is as you note nigh-uninhabitable and also completely worthless for any purpose beyond science and maybe extreme adventure tourism, the coastal regions are another story. Those are not really a worse place to live than e.g. Barrow or Svalbard or Novaya Zemlya, and probably have the same level of resources, so we would expect them to have the same level of settlement. Rather like Greenland, with an empty interior but still 50,000 or so people and even a small city on the coast - but Antarctica is seven times larger.

Unfortunately, Antarctica is locked off by an almost universally accepted international treaty that says the only allowable activity is science. I believe Chile and Argentina have tried to establish de facto settlements by claiming they're just being family-friendly in allowing their "scientists" to bear and raise children, but that's pretty much a dead end. Fortunately, we haven't agreed to anything that daft in space (though it was a close call back in the 1970s).

Earth orbit, as you allude to, is already the site of broadly profitable activity to the tune of nearly a trillion dollars a year, and that's likely to expand by an order of magnitude if launch costs drop to anything like the levels Musk, Bezos, et al are expecting. At that point, yes, it's definitely both practical and profitable to set up mines (and mining towns) on the Moon. But also on some of the near-Earth asteroids, and possibly the Martian moons, all of which are roughly "equidistant" from Earth orbit in energy terms and which have different resource profiles. And while Mars is a bit "farther" out because of the gravity well, it's *still* easier than hauling stuff up from Earth and has yet a different set of resources. Tourists staying at the LEO Hilton may be drinking Martian wine because it's both more exotic and cheaper than the Earth stuff.

And you're right about the solar energy, but you're not even close to the first to think of it - that was the "killer app" that Gerard O'Neill and company proposed for space settlement and industrialization in the 1970s. If you want solar energy on Earth, and particularly on some part of Earth that's not next to a tropical desert, the most efficient way to get it is to put your solar collectors out where you get 1.4 kW/m^2, all day, every day, with no worries about hail or dust or wind messing up your solar panels, and just beam it to where it is needed. Which, yes, we know how to do safely and in a way that can't be turned into a death ray.

But the economies of scale mean that it's only cost-effective if it's done *big*, with individual "powersats" having large-nuclear-powerplant level outputs and with hundreds of those to amortize the cost of the industrial facilities you'll need to assemble them (and with mines on the moon, etc, as above). That's less likely to be the killer app now, because the 1970s "energy crisis" is ancient history and because solar panels got cheap enough for us to start building en masse on Earth before space launch got cheap enough for us to put them in the sky. But now we're reaching the point where one of the limiting factors is NIMBYs blocking the construction of power lines connecting the places with lots of reliable sunlight to the cities with lots of demand, so there might be room for someone to make a profit by putting everything but the receiving antenna in Nobody's Back Yard.

Expand full comment
Erica Rall's avatar

Good point about space-based solar power being beamed back to Earth. I think I first learned about that from Sim City back in the early-to-mid 90s, and in more detail from more serious sources later on. I even had a harebrained idea in high school for using SBIR contracts to bootstrap a startup that would eventually put solar power satellites in low polar orbits. The idea was way over my head and I had absolutely no realistic shot of getting it working from either a technical or business perspective, of course, but I had ambitions and some clever notions, and that seems like all that matters when you're 16 or so.

While writing my comment in this thread, I was thinking more about sunlight as a resource for use directly in the colony or outpost in question. Growing your own food and generating your own power saves the bother of shipping in food and fuel, and sunlight makes that a lot easier to do on the Moon, in that respect at least, than it would be in the interior of Antarctica. On the other hand, food and fuel are quite a bit easier to ship into Antarctica than they are to send all the way to the Moon.

The issue of sunlight for crops is on my mind mainly because of a Heinlein novel, Farmer in the Sky, about terraforming Ganymede and setting up a farming colony there. He does have a passage where the main character talks about how much dimmer the sun is on Ganymede than on Earth because of the distance, but this doesn't seem to inconvenience the crops very much. Some kind of greenhouse-like "heat trap" is used to excuse the colony being hospitably temperate, but the crops seem to do just fine on a tiny fraction of the sunlight they evolved to grow in, and the main limiting issue for agriculture is turning regolith into fertile soil. Once I tried my hand at gardening and realized how many food crops need several hours of Earth-normal direct sunlight a day to grow decently well, this oversight started bothering me. I tried to figure out how bad it would be for growing crops on Mars and came up with the answer that Mars's tropics get similar amounts of sunlight to Alaska or northern Canada (or coastal Antarctica, probably), which suggests that agriculture on Mars would be inconvenienced by want of sunlight but not fatally so.

Expand full comment
Melvin's avatar

Part of the problem with colonising Antarctica is political, the Antarctic Treaty explicitly prohibits actually doing anything commercially useful with the continent. Some kind of hotel built on the northernmost tip of the continent would, I think, actually be viable, but not sufficiently so for anyone to risk rocking the Antarctic Treaty boat, especially since the obvious place to build such a hotel would be somewhere in the overlapping Argentinian and Chilean claims.

Expand full comment
Deiseach's avatar

Yeah, I have the fear that the first thing to happen after "we've opened up Antarctica for colonisation" is "and now these South American countries are going to war over whose territory is where".

I can't realistically visualise anything there but "lots of mining and mineral extraction" and that's a rather grim and depressing prospect: see the glorious slag heaps where once we had pristine natural environment (yes, I realise it's "snow and penguins" but we already got lots of slag heaps). Still, if potential colonists get eaten by shoggoths, we can't say we haven't been warned!

Expand full comment
Melvin's avatar

Right, this is the logic of the Antarctic Treaty. There might be some economic value there, but that economic value would quickly turn negative if we started fighting over it, so let's just all pretend it's not there.

As a kid in Australia, all our maps of Antarctica showed the continent pizza-sliced into various national territories, Australia's being by far the largest (albeit rudely bisected by a tiny slice of France). But it turns out that not everybody else's maps necessarily respect those claims.

Expand full comment
bean's avatar
5dEdited

Background: aerospace engineer, pro-colonization but not outrageously so.

The Antarctica critique is a fairly well-known one, and there's basically nobody who will dispute it from a strict economics standpoint. The usual case is some combination of talk about man's destiny and pointing out that there's a lot of potential utility from space colonization that you mostly don't see from deep ocean or Antarctic colonization. (In both cases, if you need someone, it's fairly easy to bring them in from outside in a way that isn't true of space.) Which brings us to:

>Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?

Because robots can't do it, at least not all of it. Obviously you're not doing asteroid mining by sending out a guy in a spacesuit with a pickaxe, and there will be a lot of robots. But we are still quite a ways away from robots being able to solve arbitrary problems nearly as well as humans can, and particularly at first, I expect asteroid mining to throw up lots of arbitrary problems. Maybe we'll eventually reach the point where we have enough experience to be able to build a machine that doesn't need to have people nearby to fix problems, but that will not be our first machine, or our 10th.

So if we need arbitrary-doing-things capability in space (let's say that we realize we're about to run out of Platinum on Earth and go for asteroid/lunar deposits) then we're going to need to send people, and the economics of this are such that we really are going to want to have them live there for quite a while. If you're somewhere like Earth Orbit or Luna, then the people go up for 6 months and then come back, which is absolutely a thing that happens in both Antarctica and the ocean (oil rigs, modulo transit time considerations). But if you're going out to Mars, then transit time alone means you're likely to want to stay quite a bit longer, and "let's just have our population be here forever" starts to look pretty enticing. As does letting people stay permanently on Luna if they want to and there's enough activity to support that.

(I'm pretty firmly in the "economic value" camp and less in the "Man's destiny" camp, so I can't speak for them, sorry. I am extremely skeptical of it as an anti-X-risk thing because it's going to be an extremely long time before a colony can be self-sustaining without support from Earth.)

Expand full comment
Ralph's avatar
5dEdited

I'm not a big proponent of space colonization, but I think i get it.

We know for a fact that (in a long time) the earth will eventually become unlivable for human beings. If humans don't figure out how to sustainably live off planet, that means we know for a fact that humans will die at that point as well.

It's not driven by any particular contemporary benefit, it's a minimization of X-Risk

Expand full comment
Jim's avatar

> We know for a fact that (in a long time) the earth will eventually become unlivable for human beings.

And it will be a very, very long time until Earth becomes more uninhabitable than any other planet. If you can colonize other planets, you can "colonize" Earth as well.

Expand full comment
Ralph's avatar
4dEdited

I'm not sure what you mean, sorry.

Eventually, the sun will expand and "swallow" the earth. Are you saying that at that point, it would be easier to live on earth than another planet?

Or are you just saying that the concern is far enough in the future that it's not worth worrying about now?

I think the appeal of space colonization (compared to something like nuclear war prevention) is that most X-Risks are probabilistic or uncertain. "Changes in the internal workings of the sun will render the earth uninhabitable in a few billion years" is basically guaranteed, and some people don't like the idea of "Yeah, we'll deal with it when we get there"

Expand full comment
Collisteru's avatar

I think Jim meant that by the time the Earth becomes uninhabitable (e.g. due to high temperatures from the expansion of the sun), we'd have the technology to re-terraform ("colonize") the Earth to make it habitable again.

This obviously won't work once the Earth is vaporized.

Expand full comment
Patrick's avatar

There is also the unknown X-Risk of an asteroid impact or something of the like. We don't really know where most of the asteroids are, and pretty much at any time the earth could become unlivable for human beings. Even human threats like nuclear war or plague add to this unknown risk. By creating colonies in environments across space, we have a better guarantee that humanity will continue to eke along even if something wipes everyone on earth out.

Expand full comment
Tori Swain's avatar

Nuclear war is as much a killswitch as the Three Gorges Dam at this point. Nuclear war is a country-specific problem -- nuclear war in general is massively overhyped, in terms of the damage it would do (aside from the whole tentacle porn thing).

Expand full comment
None of the Above's avatar

It sure seems like by the time we're good enough at working in space to have a self-sustaining Mars colony or two, deflecting incoming objects that are much smaller than the moon will be something we can do.

Expand full comment
Yug Gnirob's avatar

Firefox had a link to an article about groundwater pumps throwing off the Earth's rotation. https://www.popularmechanics.com/science/environment/a65515974/why-earth-has-tilted-science/?utm_source=firefox-newtab-en-us

After reading it, I don't understand it. So, now you all can read it for me, and tell me if it's true.

Expand full comment
quiet_NaN's avatar

To rephrase the same claim less sensationally: the rotational pole of Earth has shifted by 7.5 millionths of a degree as a result of water pumping.

While it is certainly neat that people can measure this and attribute it to a cause, it also does not seem like the kind of thing which will destroy all life on Earth.
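For scale, converting that angular shift into a distance on the surface is one line of arithmetic, and it lands right around the ~80 cm of polar drift the underlying study reports:

    import math

    EARTH_RADIUS = 6.371e6  # m
    shift_deg = 7.5e-6      # the "7.5 millionths of a degree" above

    # Arc length on the surface = angle (in radians) * radius
    drift = math.radians(shift_deg) * EARTH_RADIUS
    print(f"{drift:.2f} m (~{drift / 0.0254:.0f} inches)")  # ~0.83 m, ~33 in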

Expand full comment
Alastair Williams's avatar

How the Earth spins (i.e. the speed of the spin and the axis along which it spins) depends on how mass is distributed around the planet. We don't live on a perfectly uniform sphere, instead it is slightly oblate, made of layers of different materials, and has some parts which are heavier than others.

If you change how that mass is distributed (e.g., by melting the layer of ice around the top of it and then spreading that around as water; or by pumping water out of the ground and pouring it into the oceans), then it changes how it spins.

The surprising thing to me here is how much water we've pumped out of the ground. I'm not going to do the calculations, but it seems reasonable that moving two trillion tons of water around has affected the distribution of mass and so slightly altered how the world spins. Of course it is a very small change, but then two trillion tons is also pretty small compared to the whole mass of the planet.

Expand full comment
Yug Gnirob's avatar

Oh, is the ice layer the bit that's supposed to affect sea levels? Like it rotated into a warmer area or something? I didn't see how that connected at all.

Expand full comment
Alastair Williams's avatar

Changing the rotation won't affect the polar ice at all. We're talking about a shift of less than a meter here, so the change is really insignificant climate-wise.

The point is more that if you shift the way mass is distributed on the sphere then you change how it spins. Part of that comes from losing ice mass at the poles which then spreads into the ocean. But here they are saying some also comes from pumping water out of the ground and using it (i.e. taking it from underground rocks where it had collected and then using it for farming or whatever, that then ends up going into the oceans rather than back into the ground). In the paper they link they say this has also raised the sea levels slightly, but it is on the order of millimetres, so nothing to really be concerned about.

There's no real panic here. It doesn't matter that the axis of rotation has shifted by a few inches. It won't affect the climate or anything else, really.

Reading it again, I think they're trying to say that tracking how this rotation shifts could help to better monitor how water is redistributing around the planet (e.g. from ground water, from the ice sheets). But I'm doubtful you'd actually be able to tell much from it in practice.

Expand full comment
ZumBeispiel's avatar

It needs much more than that to tilt Earth's axis! https://en.wikipedia.org/wiki/The_Purchase_of_the_North_Pole

Expand full comment
John's avatar

Relevant to ACX circa 2022 - that old cash transfer/EEG study fails to replicate in a new RCT (among other disappointing findings) in this NBER paper: https://www.nber.org/papers/w33844 reported on today in the NYT.

The overall findings (no effect of $4,000/yr cash transfers to mothers below the poverty line with newborn children on various cognitive, developmental, and behavioral outcomes at age 4) are of course a downer, though as with Head Start I suspect any benefits would show up in less directly "cognitive" life outcomes (HS graduation, incarceration rate, employment as an adult) vs. "can a 4-year-old rotate shapes in their head."

One maybe-saving-grace: the covid pandemic hit halfway through the observation period and most of the participants in both groups got a boatload of stimulus money, possibly diluting the effects of the study money.

Expand full comment
luciaphile's avatar

The only salient thing would be if the large cash group reproduced more than the low cash group.

Expand full comment
Deiseach's avatar

I was wondering at first "so what did the mothers spend the money on?" and then I read this:

https://www.nber.org/system/files/working_papers/w33844/w33844.pdf

"60% of mothers randomized to the low-cash gift group"

That's a whopping $20 per month. An extra twenty bucks is nice to have, but I don't see it making the huge differences expected.

So what about the 40% of mothers who got the $333 per month? That's more in line with the kind of extra money that makes a perceivable difference in quality of life, when you're below the poverty line.

This bit is just sad: it's not auguring well for your chances in life when your mother gets sent to jail before you turn four:

"For this data collection, 984 of the original 1,000 mother-infant pairs remained eligible (there were five maternal deaths, five child deaths, two maternal-child separations, and four instances of maternal incarceration)."

I still don't see anything in that PDF about what the money was spent on; if mom buys booze, cigarettes and drugs with the $333 it's not going to the baby. If the money goes to things like paying the electricity bill, that relieves one source of stress and improves the environment, but it's still not like buying better food or enrichment material for the kid's developing brain.

The whole thing seems very scrappily designed, was the $20/month meant to be some kind of control group or what?

"One thousand racially and ethnically diverse mothers with incomes below the U.S.

federal poverty line were recruited from postpartum wards in 2018-19, and randomized to

receive either $333/month or $20/month for the first several years of their children’s lives."

An extra $300+ per month is good, but if there's no accounting for what it was spent on or how it was used, then you can't really say if the cash transfer was good, bad or indifferent for the kids (maybe without that extra money they'd have done worse on the tests). Maybe at that level, $300 is not enough and if you made it $3,000 per month you'd see real differences. Maybe it's not the money, it's the genetics and the environment and poor parenting.

Expand full comment
Andrew's avatar

I think when doing a cash transfer study, ignoring how the money is used is sort of the point. You can tack it on as FYI data, but it can't be part of the evaluation itself. You are testing the efficacy of a hypothetical program that will not attach conditions to how money is spent. You are evaluating whether a measurable outcome was improved. You know some money will be wasted; you don't know how much, and you also don't want to have to adjudicate edge cases, as letting people be their own judge is one of the alleged benefits of cash vs. in-kind transfers. We spent X, Y was the outcome.

Expand full comment
John's avatar
4dEdited

The $20 was a control group -- a placebo control, if you will -- with the aim of disaggregating any possible effects from receiving anything at all from the actual effects of the money. You might hypothesize that receiving anything could generate feelings of gratitude ("wow it's so great the government is supporting me by doing this study") that are not really the effect of the value of the money itself.

Sort of analogous to how studies on psychedelics use a very low dose as the control group, versus an actual inert placebo.

I think the spending habits were published separately. NYT says there was no evidence it was spent wastefully:

>Mothers in the high-cash group did spend about 5 percent more time on learning and enrichment activities, such as reading or playing with their children. They also spent about $68 a month more than the low-cash mothers on child-related goods, like toys, books and clothing.

>At the same time, the study found no support for two main criticisms of unconditional payments. While critics have warned that parents might abuse the money, high-cash mothers spent negligible sums on alcohol and no more than low-cash mothers, according to self-reporting. They spent less on cigarettes. Nor did they work less...mothers in the two groups showed no differences across four years in hours worked, wages earned or the likelihood of having jobs

https://www.nytimes.com/2025/07/28/us/politics/cash-payments-poor-families-child-development.html

Regarding maternal stress, even that was not reduced:

>One puzzling outcome is that the payments failed to reduce mothers’ stress, as researchers predicted. On the contrary, mothers in the high-cash group reported higher levels of anxiety than their low-cash counterparts. It is possible they felt more pressure to excel as parents.

Strange stuff.

Expand full comment
Deiseach's avatar

Good to see somebody did check on where the money was being spent. I withdraw that objection.

I wonder if the stress came from "now I have this extra money but what if it gets pulled?" which would be worse if you're not trusting the money will indeed keep coming every month and you are worried about budgeting or taking on debt and then the funding is yanked and you're worse off than when you began, or even "now I have more money, my landlord is putting up the rent/my family is coming around to mooch off me".

Expand full comment
John's avatar

"mo' money, mo' problems"

Expand full comment
Victor's avatar

As I tell my students: "Never trust a single study." We will need independent researchers looking at different aspects of this problem in more detail before anyone can say with authority what does or does not make a difference in the lives of poor children.

Expand full comment
Melvin's avatar

> The overall findings (no effect of $4,000/yr cash transfers to mothers below poverty line with newborn children on various cognitive, developmental, and behavior outcomes at age 4) are of course a downer

Sounds like good news to me. In fact maybe we can start charging poor people more tax, if it doesn't make a difference either way.

Expand full comment
Tori Swain's avatar

1) Your research will preserve (make possible) some very expensive, very elaborate other research, that would otherwise be deep-sixed by the current state of science//public health. It would also shift money from one group to another, although that's not the aim of you being paid to falsify results.

2) Government, with all that that implies. Someone -will- be found who will create the results government wants. They're motivated to get answers to their own research questions, that they've been pursuing in small scale for a long time, and now want to open to a larger testing population.

3) ... relatively small? Industry would be far more likely to come back, a politician substantially less (likely to be voted out), a madbillionaire less likely as well. Government is aware of the potential of getting caught, and has less "this would be good for the bottom line" "I'm totally going to do this the next time" than Industry would imply.

4) Obviously counterfactual according to the government's best knowledge. You can fake it however you like, so long as you make the pesky idea "go away." (and because this is being used to further a research project that is expected to give results in, say, two to four years, they're not gonna care if you "redo" your research, or other people make your research wash out in the meta-analysis).

5) That is to say, you're squashing an idea with your "big study"... however you manage that, is up to you. It won't be deemed "obvious fraud" though, even after people dig under the hood of your data. And it will be scrutinized. If you can manage it with "ask sketchy questions" then fine -- live dangerously. After all, you've already taken guvmint money. You pretty much have to show results.

6) Age -- old enough to be trusted to do a big enough study to quash an idea (the sort of people that get 5-year grants, last I worked in research). And yeah, you're good at your work. Maybe not the greatest, but decent. Nobody's gonna say XYZ did this, and look askance at the end result because of that.

7) Assume you can't self-fund your research. Other than that, I'm betting "lives reasonably comfortably on a researcher salary... so maybe $30,000 a year?"

8) Direct to you -- the offer will be "approved" by supervisor, but nobody's using social gymnastics to make you say yes. Why bother? They would like someone ... who's going to keep their word to do this "right" (fake it, in other words).

Expand full comment
Mars Will Be Ours's avatar

Hypothetically speaking, what technology is this very expensive, very elaborate other research focusing on?

Expand full comment
Tori Swain's avatar

Something with the potential to save a lot of lives... if it works.

Expand full comment
Mars Will Be Ours's avatar

My inherent concern is that since your hypothetical study will be faked to enable the later research, it indicates that the backers of this line of research are willing to fake studies. Suppose the very expensive, very elaborate other research fails. Why won't the backers of your theoretical potential technology later fake the results from the very expensive, very elaborate research to cover up the failure?

Expand full comment
Tori Swain's avatar

DARPA invests in a lot of crazy technology? They're used to failures? Failures aren't institutionally punished nearly as hard as in commercial or academic venues? In short, when looking at the government, you have a lot of factions, and while it's one thing to "do a little thing once" (fake a single research study-- assume darpa is rolling in cash, which they pretty much are). It's another to keep running a failed project into the ground, once you've gotten enough data to say it's a "failed project." You roll out a different idea. Darpa plays high stakes, pie in the sky goals.

This isn't industry, which may only have (or been given) one product. They've filed their stockholder notifications based on profits from that one product, and their continued existence depends on that one product.

I'm intrigued by your line of questioning, and feel like the difference between DARPA and industry/academia is worth investigating, in terms of "how can we make industry function better?" (Consolidation of the biotech market in particular may be warranted, if only to prevent "we only have one product.")

Expand full comment
Shaked Koplewitz's avatar

I'm seeing a lot of military thinkpiece things talking about how the XM7 is a stupid boondoggle lately. People who know weapons/military stuff, is this accurate, overstated, or another case like the f35 where everyone hates on it now but in ten years they'll all eat their words?

Expand full comment
bagel's avatar

The F-35 program started in a good place, got on a tough trajectory, and there was an intervention and it got turned around. The very short version is that it was a program to be the "low" to the F-22's "high", and it was so promising that everyone tried to get their thing onto it, which was too many things. The program was on the road to a weight and cost and delay death spiral, which triggered oversight and a flurry of articles. As a result they got disciplined and started saying no to stuff and focusing on cost and manufacturability, resulting in a good plane that mostly everyone is happy with, and some versions may actually be comparably cheap to the F-16s you might consider buying instead. Alongside that you also had the "Reformer" clique, headed by Pierre Sprey, selfishly spreading serious misinformation.

At no point did anyone think the stealth or the sensors wouldn't do what they were expected to. The two lines of criticism were "but at what dollar and weight cost?", which the F-35 program addressed, and "wHo EvEn NeEdS sTeAlTh", which history has.

So far the problems in the XM7 seem very different. The problems it's reportedly having - mechanical wear and failures, for example - just shouldn't be happening in a modern manufacturing context. The problems are being reported by people close to the testing group, unlike the armchair Reformers. The problem the XM7 is meant to solve - that assault rifles may not carry enough punch to get through modern Chinese body armor - has an off-the-shelf solution: battle rifles. The H&K G3 and the FN FAL, for example, were fielded successfully by our allies for many years during the Cold War. So it's doubly embarrassing to get wrong.

Meanwhile, assault rifles continue to work well in Ukraine and Israel. So if the XM7 is really going to turn things around and provide battle rifle performance in an assault rifle package, they need to figure themselves out and fast. Other rifles have; ArmaLite's M-16 had a rocky deployment but then took over the world. So did Accuracy International's L96.

But at the same time, the difference between small arms may just matter less to the outcome of wars than the difference between fighter jets.

Expand full comment
Scarier's avatar

How is the M7 not a battle rifle? Agreed that it’s embarrassing to not catch the engineering or manufacturing issues in preproduction testing, but I assume they’ll be able to fix that eventually. My main issue with battle rifles is how heavy they are—basic infantry loads these days start at around 100 pounds, and go up from there (especially for members of crew-served weapon teams)—and the M7 weighs ~4 pounds more than the weapon it’s supposed to replace despite the standard ammunition load being reduced by a third. I don’t love that.

Expand full comment
bagel's avatar

A battle rifle in the classic NATO taxonomy fires 7.62, while an assault rifle fires 5.56. The M7 sits in between, firing 6.8, although the cartridge is fairly similar to 7.62 in size, weight, and power. The AK family, though obviously not NATO weapons, are also battle rifles.

And yeah, battle rifles dominated in both East and West until the Vietnam war, with the M14 replacing the M1 and turning out to be a mediocre general infantry rifle. It has since become a well liked marksman rifle. (Some of the M7's critics predict a similar story.) The shift to assault rifles that started with the M16 was a step down in bullet power, but (as you allude to) thought to be an improvement in practical lethality at the expense of range and stopping power.

And since then the assault rifle has stayed relentlessly winning. The West keeps adopting not just assault rifle platforms, but usually M4 derivatives - itself a derivative of the M16. Even notable exceptions like Israel (who famously developed the Tavor) are still using NATO assault rifle standards and just changing implementation details. And Israel still doesn't just use but acquires M4 derivatives, even as in other cases they export the Tavor to e.g. Ukraine. All in all, I'd argue the M4 family is the most prolific in the world. So the M7 has had an uphill battle from the start.

Expand full comment
Scarier's avatar

Out of curiosity, do you have a source for that taxonomy? My understanding is that NATO countries have standardized 7.62x51mm (~3.5 kJ) as the benchmark full-power cartridge, but any taxonomy that classifies weapons entirely by round caliber is mostly useless. I know there isn't always a clear boundary between the two, but I think it's reasonably agreed that assault rifles generally sacrifice some amount of power and precision at longer ranges for higher sustained rates of fire, much lower recoil, and overall easier handling--as you mentioned, this has proven to be a very good trade in practice. The AK-47's 7.62x39mm cartridge (~2.1 kJ) might be a tweener on power (even that's a stretch--much closer to the NATO 5.56x45mm's ~1.8 kJ), but I would argue that all of its other features very clearly make it an assault rifle (not to mention the AK-74's 5.45x39mm cartridge at ~1.4 kJ).
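(For anyone wanting to check those kJ figures, they fall straight out of E = ½mv² with typical published bullet masses and muzzle velocities. The loads below are my assumptions for illustration; actual service loads vary.)

```python
# Muzzle energy E = 0.5 * m * v^2 for common service cartridges.
# Bullet masses and velocities are typical published values, not official specs.
loads = {
    "7.62x51mm NATO": (9.5e-3, 850),  # bullet mass (kg), muzzle velocity (m/s)
    "7.62x39mm":      (7.9e-3, 730),
    "5.56x45mm NATO": (4.0e-3, 945),
    "5.45x39mm":      (3.4e-3, 880),
}

for name, (m, v) in loads.items():
    print(f"{name}: {0.5 * m * v**2 / 1000:.1f} kJ")
# -> ~3.4, ~2.1, ~1.8, ~1.3 kJ, in line with the figures quoted above
```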

Getting back to the topic of the M7, I would argue that once they work out the reliability issues, it's still going to have the same general performance characteristics that have relegated battle rifles to specialist roles for the last ~50 years--especially since modern body armor is already benchmarked against full-power cartridges, which remain in widespread service in medium machine guns.

Don't disagree with you on the proliferation of AR derivatives, at least in the west--the M4 is a fantastic service rifle. Rather than a better battle rifle, I wish we'd been able to develop a cartridge offering better armor penetration than 7.62x51 at comparable or lighter weight than 5.56x45 without a significant increase in recoil.

Expand full comment
John Schilling's avatar

Better penetration than 7.62x51 at less weight and recoil seems unlikely without a quite revolutionary change in small arms technology.

But if the theory that our next opponent is likely to be using Level IV or equivalent armor holds, then just better penetration than 7.62x51 is probably worth having. 7.62x51 AP does not penetrate Level IV armor at any range. 6.8x51 AP does, or at least should out to ~600 meters. That's the design requirement, and since the round delivers more energy and higher velocity across longer distances and concentrated into a smaller area, it's certainly a reasonable expectation.
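The "more energy into a smaller area" logic can be made concrete with a quick energy-per-frontal-area comparison. A minimal sketch, assuming rough public muzzle figures (the 6.8x51 numbers especially are approximate, and real penetration depends on penetrator design and velocity at range, not just this ratio):

```python
# Energy per unit frontal area at the muzzle: 6.8x51 vs 7.62x51.
# Masses and velocities are rough assumed figures, not official ballistics data.
import math

def energy_per_area(mass_kg, vel_ms, caliber_m):
    frontal_area = math.pi * (caliber_m / 2.0)**2
    return 0.5 * mass_kg * vel_ms**2 / frontal_area   # J/m^2

e_68  = energy_per_area(8.8e-3, 915, 6.8e-3)    # ~135 gr at ~3000 fps (assumed)
e_762 = energy_per_area(9.5e-3, 850, 7.62e-3)   # M80-class load (assumed)

print(f"6.8x51:  {e_68:.2e} J/m^2")
print(f"7.62x51: {e_762:.2e} J/m^2")
print(f"ratio:   {e_68 / e_762:.2f}")   # ~1.35x more energy per area at the muzzle
```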

Yes, it's about a kilogram heavier than an M-4. Would you rather go into battle with a 4 kg rifle and 200 rounds that will penetrate the enemy's armor, or a 3 kg rifle and 300 rounds that will bounce off? Or you can go with an old-school battle rifle, lugging around the four kilos and having only 200 rounds to bounce off the enemy.

The XM-7 is probably the minimum viable rifle to meet that threat, if and when it materializes. The teething problems are annoying, but basically inevitable. Trials with the XM-7 began only last year; it took the AK-47 *eight* years to go from initial trials to large-scale deployment. Fortunately, we aren't doing the M-16 thing of rushing it into service in the middle of a war.

Which we'd probably wind up doing if we said "Nah, we don't need any of that gimmicky unreliable new stuff, an M-4 was good enough for my daddy in Iraq", and then found ourself fighting a peer competitor with body armor as good as our own. Better to work the bugs out now.

Expand full comment
Scarier's avatar

Sure, but there are plenty of existing technologies for this that mostly just need to have the bugs worked out. The most difficult problem right now is reducing case weight (or solving the outstanding issues with caseless ammunition). With the same bullet weight and penetrator, the only significant difference between 6.8 and 7.62 is energy retention at range. Currently fielded armor will stop battle rifle rounds with steel penetrators (it's unclear what level of armor the median Russian or Ukrainian soldier has right now, but no one is losing the war because of poor service rifle terminal ballistics), and we can already field armor that will stop tungsten carbide penetrators basically whenever we want. Assuming adversaries are at roughly the same place, we're going to need something better than 6.8 to defeat it regardless, and given the choice between two weapons that don't work well against armor I will happily take the lighter one with 50% more chances to hit somewhere the armor doesn't protect (which, let's face it, is still most of the body--it doesn't even protect many of the places that will kill you very quickly).

I don't disagree that we should have a better service rifle, but I think the M7 is too little of an improvement in ballistics for the downsides--especially since our gear is already way too heavy, and lightweight body armor is much more technically challenging than lightweight ammunition.

Expand full comment
bagel's avatar

I’m afraid I don’t have a source for the taxonomy; I learned it 15 or 20 years ago and couldn’t tell you where. My apologies.

For what it’s worth, Wikipedia’s page on battle rifles agrees with my understanding. Not particularly authoritative, of course.

As for body armor, I know less than I’d like, but at least ‘Big Mac’ (of Big Mac’s Battle Blog) has claimed to own armor plates rated for .50 ball. If true, that indicates to me that modern armor is a complex landscape where the choice of rifle isn’t about having uncontested ability to defeat armor but a more nuanced “problematize the choice of infantry equipment” for OPFOR.

Expand full comment
Tori Swain's avatar

hahaha. Just because NATO's only plan is "call in an air strike" doesn't mean fighter jets are the be-all end-all.

Expand full comment
bagel's avatar

I'm sure Saddam and Khomeini will find that line of reasoning comforting.

Who has ever lost on the battlefield with air superiority?

Expand full comment
Melvin's avatar

Wars won/lost is too chunky a metric to be useful, but I bet there's individual soldiers who have lived or died because of the equipment they were carrying. Maybe their gun jammed, maybe the barrel was warped, maybe a reload took half a second longer than it should have, or they had to look down at an inopportune moment, maybe they were a little bit more exhausted from carrying around a slightly heavier rifle.

If we assume we're aiming not just to win wars but to minimise casualties on our side then the case for choosing infantry equipment carefully looks a lot stronger.

Expand full comment
bagel's avatar

Totally. In an ideal world we have the bandwidth to get everything right, because as you said it all matters.

But sometimes militaries have to make decisions of priorities. I was expressing a personal belief that if I had to choose, getting the fighter jet right has more impact on the outcome of a war than getting the rifle right.

Expand full comment
Tori Swain's avatar

When have the Italians fought with air superiority?

Russia versus Afghanistan?

America/Sauds versus Houthi?

Expand full comment
bagel's avatar

Probably the only case I can think of where having a better rifle was decisive was the Uzi for storming the Syrian bunkers in the Golan Heights in 1967.

Expand full comment
bagel's avatar

The Soviet invasion of Afghanistan is a good example of how insurgencies can survive air power. But, of course, the first thing the West provided was anti-air missiles because the air power was so decisive. And it was a lesson that America didn't heed enough when we faced an insurgency there a few decades later.

Similarly, in the Syrian Civil War the Assad regime forces nearly folded despite having air superiority, requiring Russia and Iran and Hezbollah to bail them out. But there too the rebels had Western anti-air support.

Even in Ukraine, where neither side has air superiority, Russia's air advantage has been decisive in several notable moments in a way that no rifle has been.

Expand full comment
bagel's avatar

For sure air power can't do everything. Even more damning than the examples you listed are America in Vietnam and Cambodia and Laos, and the insurgencies in Iraq and Afghanistan; airpower was able to win battlefields and wreak havoc but not win wars. You'll find me in full agreement that it has limitations.

But so does every weapon! They're just tools. America has not only tended to have better planes than our enemies in most conflicts, but better rifles, and yet we lost the fights when our strategies weren't fit to the challenges.

But it wasn't a Better Rifle that disassembled Saddam's military twice. It wasn't a Better Rifle that cleared the way into smashing Khamenei's missile forces and nuclear program. Or the Syrian nuclear program. Or the Iraqi nuclear program. It wasn't a Better Rifle that won in the Bekaa Valley or removed Nasrallah. It wasn't a Better Rifle that unseated Assad, but (drone) air power was decisive. It wasn't a Better Rifle that halted the Somali advance into Ethiopia in the Ogaden conflict. It wasn't a Better Rifle that intervened in Kosovo. Genuinely having Better Rifles (and better aircraft!) didn't redeem the brutal Rhodesian bush tactics. Having the superior Chassepot rifle didn't save the French from Bismarck's Dreyse needle rifles in the Franco-Prussian war. Native Americans often had more modern guns than the American colonists.

You obviously need infantry weapons. You need them to be at least good, and you need your infantry to have confidence in them. But does having the best infantry weapon win wars? Evidence is ... thin, at best. Sometimes winners do have the best infantry weapon: but that might be a downstream effect of a deeper cause of victory, rather than the cause itself.

Expand full comment
Tori Swain's avatar

This sounds like damn fine analysis, and it certainly exceeds the scope of what I've read about. (West Point isn't in my background, although I do note they teach Longstreet and Lee).

Expand full comment
Victor's avatar

Well, they are more decisive than rifles, at any rate.

Expand full comment
Tori Swain's avatar

Tell that to the Ukrainians. Without "air superiority" (surface to air missiles in this case, not contested fighters), the whole doctrine breaks down. With No Alternative.

Expand full comment
Victor's avatar

Reasonably sure the Ukrainians do not consider the AKM to be a viable substitute for surface to air missiles.

Expand full comment
Tori Swain's avatar

Depends on whether you're discussing hitting military targets or committing genocide. I'm pretty sure the AKM works better as a "clear out the russian sympathizers" -- unless you want to discuss the tactical/military applications of zipties and plastic wrap? (Here's a hint: you don't.)

Expand full comment
Gordon Tremeshko's avatar

Someone get this person a Marshal's baton.

Expand full comment
Tori Swain's avatar

Marshal's batons do not slay shrubberies. I stand waiting for the Tactical Goats! (Do you know what happened to the tactical goats?)

Expand full comment
Shaked Koplewitz's avatar

thanks

Expand full comment
Reid's avatar

Apparently the NYT continues to have its nose in the ACX feed, as it just published its own review of Alpha School https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html (https://archive.is/WQT8Z to get past the paywall).

There's nothing interesting in it that wasn't already in the ACX review, so I don't feel the need to summarize per open thread guidelines - just commenting on its existence.

Expand full comment
Eremolalos's avatar

Maybe those NYT noses are here right now. Hi guys! I used to be a NYT reader but I'm not any more.

Expand full comment
Collisteru's avatar

The NYT article is not nearly as hostile to Alpha School as I expected. Credit where credit is due.

Expand full comment
Jacob Steel's avatar

Is it coincidence that the political coalitions in the US in the late 20th/very early 21st century mapped so neatly onto the political coalitions in the late 19th/early 20th century (with the party names reversed)?

Expand full comment
Kenny Easwaran's avatar

They don’t map *that* neatly. Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs. There’s a few specific flashpoints where they are perfectly anti-aligned with modern politics (notably on the status of black people and on tariffs) but there are some where they are pretty closely aligned with contemporary politics (notably big business and immigrants).

The maps of 1896 and 2004 are particularly interesting because they are so close to perfectly opposed. (https://www.270towin.com/historical-presidential-elections/timeline/) Washington is the only state that voted Democratic both times, and there’s only a few states that voted Republican both times (North Dakota, Iowa, Kentucky, West Virginia, Ohio, Indiana). If you choose 2000 as the comparison instead you get New Hampshire in place of Iowa.

But the contemporary coalition, which is more perfectly opposed on issues (with the tariffs thing) is less perfectly geographically opposed, with the Midwest, and Georgia, Arizona, and North Carolina, having partly switched since 2004.

Expand full comment
Amicus's avatar
5dEdited

> Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs.

You're mixing up a number of things here.

- Republicans were consistently the more protectionist party, like the whigs before them. The decoupling of modernization and tariffs as political issues in the US didn't really solidify until FDR. In the 19th century, industry and protectionism go hand in hand.

- Factory workers vs. factory owners cut across party lines until, again, Roosevelt. Generally speaking during the third party system skilled labor leaned somewhat more Republican while unskilled labor leaned somewhat more Democratic, but these are weak tendencies dwarfed by other factors, and *both* parties were the party of big business. 1896 is an unrepresentative year here - that's exactly the point where the Bourbon Democrats start to lose their hold on the party

- Immigrants polarized along ethnoreligious lines. Germans and Scandinavians leaned Republican, the Irish leaned Democratic, and the major eastern/southern European immigration wave began only very late in the century and so largely couldn't vote yet. Immigration restrictionism as such was not really an issue at the time - it was always restriction of *that sort of immigrant* (whatever that sort might be: Irish, Chinese, etc) and didn't track national-level party politics particularly closely.

Expand full comment
Peter Defeel's avatar

Immigration didn’t stop until the business classes no longer wanted it, around 1924. They started to fear European ideas like socialism and anarchism (which to be fair did lead to violence against capitalists).

Expand full comment
Collisteru's avatar

Care to elaborate?

Expand full comment
Erusian's avatar

It's not a coincidence. It's a sloppy political history that isn't true but is useful for ideological purposes.

Expand full comment
David Bahry's avatar

Concerned about AI warfare, both for its own sake and because AI arms races bring existential risk that much closer [1] [2]. Some thoughts:

- AI is already used at both ends of the military kill chain. Israel uses "Lavender" to generate kill lists in Gaza [3]; Ukraine's "Operation Spiderweb" drones used AI to recognize and target Russian bombers [4].

- Drones are cheaper than planes and tanks and missiles, leveling the playing field between the great powers, smaller countries, and militias. The great powers don't want it level. Thiel's Palantir and Anduril are already selling AI as potentially "America’s ultimate asymmetric advantage over our adversaries" [5].

- Manually-controlled drones can be jammed, creating another incentive to use AI as Ukraine did.

- A 1979 IBM manual said "A computer can never be held accountable, therefore a computer must never make a management decision." But for war criminals, this is a feature. An AI won't be tried at the Hague; a human will just say "You can't prove criminal intent, I just followed the AI."

(And this isn't even getting into spyware like Pegasus [6], which I imagine will use AI soon if it doesn't already.)

Groups like Human Rights Watch, whom I respect, have talked about what an AI-weapons treaty would need to satisfy international human rights law [7]. But if we take existential risk and arms races seriously, then I don't think any one treaty would be enough. First, that ship has already sailed. Second, as long as we continue to use might-makes-right realpolitik at all, the entire short-term incentive structure will continue to temporarily reward great powers racing to build bigger and better AI, and such incentives mean no treaty is permanent (see countries being allowed to withdraw from the nuclear non-proliferation treaty). I think the only answer is to really finally take multilateralism seriously (third time's the charm, after post-WWI and post-WWII?) [8]. Not just talking about international law and the UN enough to cover our asses and scold our enemies, but *actually* treating these as something we need like we need air [9]. E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first.

[1] Filkins, D. (2025). Is the U.S. ready for the next war? The New Yorker. https://archive.is/SdTVv

[2] https://www.hachettebookgroup.com/titles/eliezer-yudkowsky/if-anyone-builds-it-everyone-dies/9780316595643

[3] https://www.972mag.com/lavender-ai-israeli-army-gaza/

[4] https://www.kyivpost.com/post/53784

[5] https://investors.palantir.com/news-details/2024/Anduril-and-Palantir-to-Accelerate-AI-Capabilities-for-National-Security/

[6] Farrow, R. (2022). How democracies spy on their citizens. The New Yorker. https://archive.is/4UJAB

[7] https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making

[8] Sachs, JD. (2023). The new geopolitics. Horizons. https://www.jstor.org/stable/48724670

[9] https://www.penguinrandomhouse.com/books/738224/the-myth-of-american-idealism-by-noam-chomsky-and-nathan-j-robinson/; reviewed in Foreign Policy at https://archive.is/B70tg.

Expand full comment
Neurology For You's avatar

Let’s face it, nobody much is held responsible for drone strikes, even when they blow up civilians, even when they’re directly controlled by operators.

The development of AI targeting and attack systems will just be a further level of insulation: it’s nobody’s fault, just something that happens in war zones.

Expand full comment
John Schilling's avatar

"E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya"

So, basically, your plan is to establish a broad and enforceable consensus against the development of AI weapons, without the support of any politically significant faction of the United States of America? Let us know how that works out for you.

Seriously, stick to one issue. The bit where the "New Atheists" said that in order to be a proper Atheist you also had to be a feminist, antiracist, LGBT-philic antifa progressive, did not do the cause of Atheism any favors. There are avenues of AI development, military or otherwise, that I'd rather the world not pursue any time soon. But I don't think I'll be following your lead, or standing anywhere near you, if this is what you're bringing to the table.

Expand full comment
Peter Defeel's avatar

As I recall, the New Atheist movement broke into three: 1) the LGBT-leaning stuff, 2) the Islamophobic strand that was in contradiction with 1, and 3) morons.

It was probably 3 that caused most people who were atheist to move away from associating with the movement.

Expand full comment
None of the Above's avatar

I'm pretty sure there was substantial overlap between all three subsets.

Expand full comment
Brendan Richardson's avatar

I imagine AI warfare as primarily involving autonomous ethnoweapons designed for massacring civilians, as this is a good fit for cheap drones with rudimentary onboard natural language processing. Think the plot of Metal Gear Solid V, but less stupid.

Expand full comment
Jim's avatar

If you just want to wipe out civilians, you can already do that with bombs. I guess it would be more useful in internal conflicts, where you want to leave infrastructure intact.

Expand full comment
TasDeBoisVert's avatar

>But for war criminals, this is a feature. An AI won't be tried at the Hague; a human will just say "You can't prove criminal intent, I just followed the AI."

I don't follow. Nuremberg trials established that "I just followed orders" isn't a valid defense, why would "I just followed (an AI's) orders" work better rather than worse?

Expand full comment
Gordon Tremeshko's avatar

"the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first."

I don't see how the second part follows from the first part. The US government, let's say the State Department, could throw Bush and Obama under the bus and send them off to the Hague or whatever to be judged for their sins. The US DoD could still be pursuing the bestest most powerful AI to maintain its military advantages over potential rivals the whole time. "The US" isn't one unified whole of anything, and different parts of it are likely to continue to pursue whatever they perceive to be in their self interest (as is every other country, no? Why would the US be unique in this regard?)

Expand full comment
bean's avatar

I think a lot of concerns about AI in war/autonomous weapons are overstated. For pretty much any definition you can give, I can point to systems in service for between 50 and 150 years that meet it, and smarter weapons almost always make bystanders getting hurt far less likely. (For instance, older anti-ship missiles would go to an area, turn on their radar, and attack whatever their algorithm saw as the best target. It was up to the operator to make sure that best target wasn't a container ship. More modern ones have IR cameras and the ability to check if the ship they see is actually a type they want to go after.) I don't see much reason to expect this trend to stop, particularly because there are good reasons for weapon designers to want to not hurt things that aren't targets. At best, it's just a waste, and at worst you have made various people mad who you would prefer not to irritate. We've also just fundamentally gotten better at doing testing of this kind of stuff over the last 50 years. It drives up the price, but if there's an AI apocalypse, I doubt military weapons AI will be a significant part of it.

Expand full comment
None of the Above's avatar

To my mind a big concern involves control of the military. It's hard for the president to order the army to help him annul the election and install himself as dictator because most soldiers won't go along with that. The more of the muscle is machines that take their orders from a central system of some kind, the smaller the group of people needed to carry something like that off.

Expand full comment
Victor's avatar

The best argument is that the US had better get behind real multilateralism before China takes over as the primary superpower. That outcome isn't inevitable, but Xi has to die eventually, and who knows what will happen after that? The Chinese have certain inherent advantages that aren't going away. A strong G-20 with some sort of enforcement power would go a long way toward stabilizing great power conflicts.

I think the area to start in isn't global warming or armed conflict, because the incentives aren't there. A global tax regime (the EU has already started down that road) seems more doable, being in the interests of the great powers, esp. including China. Rein in the oligarchs, and a lot of other things become easier.

Expand full comment
Jeffrey Soreff's avatar

>E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya

I note that this hasn't happened. I also note that Putin hasn't been tried for invading Ukraine.

I view international law as, at best, a really bad joke.

I don't expect this to change. Frankly, given the nature of most regimes around the world, and the sorts of things their leaders _could_ agree on, I'm just as happy to _not_ have a way for a consensus of rulers to enforce their views.

Having watched the UN become an anti-freedom, anti-Western cesspool, I'm inclined to chalk it up as "looked like a good idea at the time" and support its abolition.

Specifically re AI: An unverifiable arms control treaty isn't worth the paper it is printed on, and AI is fundamentally data, software, and CPU cycles. At the moment data centers are visible, because no one has an incentive (from a treaty they are cheating on) to hide them, but they are fundamentally a large mass of overgrown office equipment. Give the USA & PRC an incentive to hide them, and I'm confident that they will successfully hide them.

Expand full comment
Tori Swain's avatar

Given that somehow the deaths of more than one world leader did not raise questions during Covid-19 (including the African leader testing dogcrap, and showing positive Covid-19 tests for it, alongside the fruit juice tests the English schoolchildren were doing...).

There are actual "international law/conventions" that pretty much everyone abides by. Consider the backlash for use of large nuclear weapons (or the deliberate triggering of large nuclear weapons of your opponent)...

Expand full comment
Jeffrey Soreff's avatar

>There are actual "international law/conventions" that pretty much everyone abides by. Consider the backlash for use of large nuclear weapons (or the deliberate triggering of large nuclear weapons of your opponent)...

Many Thanks for your reply!

Law has nothing to do with this. Mutual assured destruction is a (meta?)stable equilibrium of deterrence by _national_ control of weapons. If Russia blew up Washington D.C., we would blow up Moscow, completely regardless of what the UN, the ICC, or any set of lawyers said about it.

I will concede that in low stakes commercial disputes, there are some conventions that e.g. shipping companies abide by.

When push comes to shove, international law is a bad joke.

Expand full comment
Tori Swain's avatar

You're missing it. Let's call one country III, which can remove the nuclear weapons capability of another country IV, with one surgical strike that makes a nice, big mushroom cloud, and renders IV uninhabitable. III isn't going to attack IV, unless direly pressed, because "making portions of the globe uninhabitable" is not being a good neighbor. It's being a very bad neighbor, who has made the entire game less fun. That gets you banned from the table, or at least sidelined for all the fun "commerce."

"If Russia blew up DC" -- and we could prove it, naturally. That'll be part of why many countries operate "deniable assets." It's possible Russia could blow up DC without much effort... and in a plausibly deniable way (pretty sure Trump lunges after China, and for good reason).

Expand full comment
Jeffrey Soreff's avatar

Many Thanks for your comment!

>"If Russia blew up DC" -- and we could prove it, naturally.

Fair. I'm considering the case where the nuke is delivered by ICBM, and we tracked it. If we _don't_ know who nuked us, then we don't know who to retaliate against. ( And neither the legal system, nor public opinion, for what they are worth, which isn't much, knows where to direct their ire (or, in the case of anti-American wokesters, celebration) either. )

>It's being a very bad neighbor, who has made the entire game less fun. That gets you banned from the table, or at least sidelined for all the fun "commerce."

Unsubstantiated. Jim has a really good response to this in https://www.astralcodexten.com/p/open-thread-392/comment/140186210 (currently right under this comment, as this comments section is currently displayed).

Expand full comment
Tori Swain's avatar

Funny that you assume it's a nuclear weapon. I contemplate easier delivery methods, of preexisting "weapons of mass destruction."

Let Trump spin it, and you can get the Trumpists to cheer for the destruction of the Swamp. "And nothing of value was lost" (because of disaster recovery systems).

I see Jim's response, and I contend that he's undervalued the expected value of Russia and China's response to America's use of nuclear weapons. I mean, we nearly lost Detroit that one time, and that was without a flock of seagulls leading someone in Russia to conclude that "oh shit they launched everything."

Expand full comment
Jim's avatar

> Consider the backlash for use of large nuclear weapons

Nobody's really tried, so we can't really tell how that would shake out. Yes, everyone will be mad, but... what are they going to be able to do about it? The US certainly hasn't paid any price for nuking Japan.

Expand full comment
Jeffrey Soreff's avatar

Well said!

Expand full comment
Tori Swain's avatar

Yes it has. The price was in Russia arms-racing us. The price was in the increased risk of nuclear war, and the use of nuclear weapons as deterrent. That we didn't go to war over "a flock of seagulls" is down to the 30 minute response time, and both sides' technical folks being willing to take on relative faith that "you didn't just launch everything, did you?" so that they had time to verify. The United States amply showed that it was willing to nuke people without nukes, so its enemies put "get nukes" at the top of their priority list. This is a price, even if it wound up costing nothing -- the expected value of the price is higher than you'd think (Detroit, anyone?)

Expand full comment
Jim's avatar

> An AI won't be tried at the Hague

Neither will an American soldier, so I don't see how that's relevant. All of these naive attempts at "international law" are worthless, given that any of the great powers will just ignore them the moment it becomes an inconvenience, and these smaller nations have zero leverage to do anything about it.

You want world peace? The world being brought under one flag is the only way you're going to get it... and that's going to require an overwhelming amount of force. AI is looking to be a viable source of such power. Of course everyone is going to pursue it at all costs.

Expand full comment
None of the Above's avatar

An AI would not be considered to have any rights. It wouldn't be tried because if the right authorities decided it had done the wrong thing, they'd turn it off.

Things get exciting when they can't turn it off anymore--either because it is too powerful, or too widely distributed, or too essential to their survival.

Expand full comment
Jeffrey Soreff's avatar

>All of these naive attempts at "international law" are worthless

Well said!

Expand full comment
StrangePolyhedrons's avatar

I used to think that way, but when I read "A City on Mars" by Zach and Kelly Weinersmith, they had a section on space law (which is of course international law) that makes some good points about how countries do generally try to abide by international law. Will they cut their throats over it? Of course not! But there's lots of areas of international law where countries have more of an interest in a stable set of rules than they do in momentary advantage.

Expand full comment
Jim's avatar

But that's not "law". That's a temporarily stable equilibrium. There is no authority to enforce it, and the moment it becomes inconvenient for any party, it ceases to exist. This is a situation where the momentary advantage is overwhelming. It's not in the US's interest to make any concessions.

Expand full comment
Tori Swain's avatar

Not at the moment it becomes inconvenient for any party.

We tend to abide by the "no terrorism" rule, and it does not cease to exist when we pay terrorists to blow up things. It's still there, and us getting caught with our hand in the cookie jar is an ever-present danger. It also costs us moral authority when we hire terrorist groups, provided that people observe the questionable behavior of the terrorist groups.

Expand full comment
Jim's avatar

Why the heck would we hire terrorists in the first place? We already have a pipeline for recruiting homegrown soldiers. When we blow up things, we call it a military operation, not terrorism. The difference is that we have leverage.

Expand full comment
Tori Swain's avatar

Plausible deniability, naturally.

Expand full comment
Peter Defeel's avatar

Cecil Rhodes pt 2, here.

Expand full comment
David Bahry's avatar

See this is exactly the kind of shit I'm talking about.

Expand full comment
birdboy2000's avatar

this is true and also a nightmare

Expand full comment
Never Supervised's avatar

How much would you pay to be the only person in the world with access to 2025-class LLMs in 2010? You're not allowed to resell via APIs (e.g. you have a token budget that is sufficient for a very heavy individual user). You are allowed to build personal agents. You don't know how it works so you can't really benefit from pretending to have invented it. How much money/power could you generate in 10 years and how would you do it? Does it change dramatically if you go 2000-2010 or 1990-2000?

Expand full comment
Victor's avatar

I would start a marketing firm offering targeted ads for the first time. No one is doing this in 2010, but they are doing it five years later (albeit not with LLMs), so big business is in a good position to understand what this is and take advantage of it. It would be like being the first one in on the California Gold Rush. The 1990s are too early--the infrastructure isn't there to take advantage of it, no one would know what the hell you were talking about.

So--what is it worth? In the year 2000, total advertising revenue in the US alone was in the hundreds of billions of dollars, with online advertising alone passing $8 billion (https://www.editorandpublisher.com/stories/online-ad-revenue-hit-82-billion-in-2000,101383). Capturing maybe 5% of that seems realistic. But success is not assured, so I would pay $100 million, and expect to make several times that.

Expand full comment
David Bahry's avatar

best I can think of is selling AI slop articles to clickbait websites lol

Expand full comment
Never Supervised's avatar

You could hide an earpiece and have insane fact recall. Imagine how it would look to anyone else. They’d suspect you’re doing something but wouldn’t be able to figure it out.

Expand full comment
Collisteru's avatar

Oooh, this is such a neat idea

Expand full comment
Never Supervised's avatar

You can do much better. The quality of your writing would be mediocre but the volume superhuman. You could easily make yourself into a well known public figure.

Expand full comment
Melvin's avatar

Could I? Everyone would just assume I had a small army of mediocre writers cranking out content under my name.

At best I'm a moderately well known blogger.

Expand full comment
EngineOfCreation's avatar

>Could I? Everyone would just assume I had a small army of mediocre writers cranking out content under my name.

Which is, of course, a way that at least one AI company has been discovered to cheat.

https://www.peoplematters.in/news/funding-investment/ai-fraud-700-indian-engineers-did-the-work-while-builderai-claimed-it-was-ai-45865

Expand full comment
Never Supervised's avatar

You could have your own brand of high throughput, clever yet poorly written content. Turn good tweets into ok posts.

Expand full comment
Melvin's avatar

Honestly I think I'll just get a job at Facebook, cruise through on minimal work, and enjoy living through the 2010s again, back when you could still buy a new car with a CD player and a phone that fits in one hand.

Expand full comment
Christina the StoryGirl's avatar

Pepperidge Farm remembers.

Expand full comment
Brendan Richardson's avatar

A follow-up to my previous "How can I avoid hugging on a first date?" post:

I elected to preempt the end-of-date hug with a handshake last weekend. Not only did I not feel gross afterward, when I made overtures regarding a second date, she actively rejected them instead of ghosting.

All in all, well above expectations; would recommend.

Expand full comment
Andrew's avatar

In the interests of science, let us know how successful 2nd date conversions also go.

Expand full comment
Victor's avatar

"I have a neural conditions that makes hugging uncomfortable." (Sex should be ok, though)

Expand full comment
Brendan Richardson's avatar

Neural condition?

Expand full comment
Victor's avatar

It's a common symptom of a type of autism, but there are other causes.

Expand full comment
lyomante's avatar

so you are happy she outright rejected you? i mean it had to go at least a little well for you to ask for another date, was the handshake a factor in the rejection?

Expand full comment
Brendan Richardson's avatar

>so you are happy she outright rejected you?

Relative to the usual outcome of being ghosted, yes.

>was the handshake a factor in the rejection?

There was no indicator of this. Her exact words were "Hey just wanna let you know that I had a good time with you, but i dont think our interests align, so I dont wanna waste both of our time continuing this."

Expand full comment
Christina the StoryGirl's avatar

Oh, thanks for the update! It's always satisfying to hear what strategy someone used after receiving advice, and how it went.

Expand full comment
Brad's avatar

Nice work! Good luck in future endeavors.

Expand full comment
Eremolalos's avatar

I liked the suggestion somebody made to bring the issue up in the text exchanges leading up to the actual first date: Something like "so, to avoid that awkward moment, let's decide now -- fist bump, hug, or handshake?" One advantage of that is that if you settle in advance on something other than hug, she won't experience the absence of a hug as an indicator that you didn't much like her.

Expand full comment
Brad's avatar

I personally would not prefer this and would consider it being brought up kind of odd. To me, bringing up small things like this in early conversation vs. simply signaling them via physical cues is indicative of a hyper-fixation where there shouldn't be any fixation. If somebody doesn't want to hug that's fine and they shouldn't do so. If they want to talk about it after I know them better on the 3rd or 4th date it might even be cute. But first dates are largely about signaling--whether you want them to be or not--so one should be careful about what they signal.

Expand full comment
Christina the StoryGirl's avatar

I was the one who suggested the script, which I use when there's been so much bonding communication before meeting for the first time in person that the boundaries between "strangers" and "early friends" are too blurry to know exactly how to behave physically with one another as physical strangers. The frankness of "hey, let's avoid making it weird; do we hug, shake hands, or high five when we meet?" is based on an *existing* partnership of early intellectual / emotional intimacy / friendship, and not wanting to disrupt that dynamic by too little or too much physical contact.

For something closer to a blind date, where the date really is a total stranger, then the usual hesitancy, signaling, and rules will probably suffice.

I suppose.

Although I think that's still kind of stupid? I spent a lot of time in both the kink community and a fandom community with a high population of autistic people, and the cultural norms in both communities around frankly volunteering one's boundaries - particularly around physical touch, and especially if they're atypical, as @Brendan's are - strike me as an incredibly sensible way to avoid hurt and/or offense.

But then I'm also single at the moment, so what do I know.

Expand full comment
Brad's avatar

I don’t disagree with you that it’s a good idea when viewed rationally, or that it would probably work well with someone from this blog.

Sadly, most people abide by norms and not rationality.

Expand full comment
Christina the StoryGirl's avatar

Sure...but also, why would someone from this blog even want to date someone who abides by norms and not rationality (or at least enough rationality to say, "oh, I'm glad I don't have to wonder about if we're hugging!" etc).

And for context: The original thread had some commenters advising the OP hide and mask his feelings about not wanting to hug on a first date, in order to avoid being outed as "weird" to (presumably) normal girls.

And, like, I can't think of worse advice! If a guy's minor boundaries and/or personal quirks are going to repel a "normal" woman on a first date, then he will ABSOLUTELY end up repelling that "normal" woman in other ways, possibly with a great deal of mutual pain if he keeps the mask on long enough. It's not worth the hassle; just kick that intolerant, irrational woman to the curb at the start.

Expand full comment
REF's avatar
5dEdited

This seems really weird. Is there no possibility that the date would go so well that the original poster would be giddy to the point of wanting to hug? If they reached the end of the date with no desire to hug, why would they hug? If the other person is incapable of reading that you don't want to hug or tries to anyway, then why do you want a second date anyway? Or if it went so well intellectually but so poorly chemically, then why not be upfront about it earlier?

Changing gears, personally, I tremendously enjoy talking and laughing with women. This works out well in that I am always happy if that is as far as it goes (it also acts like a bit of an aphrodisiac). If you can just learn to really enjoy talking and laughing, it really makes dating a joy. You will rarely be disappointed if all you are looking for in a date is, "a date."

[edit: I met my wife by going to a nail salon where I thought I might find a young lady with whom to share a meal.]

Expand full comment
lyomante's avatar

Just because you have a quirk doesn't mean the quirk is good. Sometimes it's a flaw or a condition; you have to examine yourself and see what the root is.

Like, if you aren't physically demonstrative in general, one day your partner will show up in tears thinking you hate her, because you never touch her or only do so with hesitancy. There is no little robot person who will affirm every quirk you have. You need to be aware of it and compensate for it, because people will not always tell you until they are out the door.

The hug thing is something that has an outsized impact on his ability to get second dates.

Expand full comment
Tori Swain's avatar

We have someone who's not touch-phobic in the slightest, who just feels like "first date is too quick for hugs at the end." This is a "once and done" sort of issue, not a continuing one.

Expand full comment
Brad's avatar

I definitely disagree in this case. Some preferences are so unimportant that they are absurd to bring up. Not wanting to hug after a first date is one of those. We must each determine the relevance of the things important to us, and people want to date and be with other people who can do that sensibly.

Expand full comment
Kenny Easwaran's avatar

That’s a good idea!

Expand full comment
Matto's avatar

Interesting! For me the last quarter of the movie was like the cherry on a sundae: what if the worst nightmares of both sides were real? What if the Republicans, personified by the sheriff, really got their guns and started executing ordinary citizens? What if antifa really were a capable terrorist organization, flying around and executing law enforcement? It underlined how ridiculous these beliefs -- seemingly fringe but also mainstream and acknowledged to some extent -- really were at some point.

Expand full comment
Nobody Special's avatar

Fair - I see how, when the film really doubles down on the absurdism by making the conspiracy theories *real*, an audience enjoying it as a wild absurd ride for its own sake gets just about the most absurd turn that ride can take, so it feels like the biggest swing on the rollercoaster. But for me, part of what made the whole thing interesting was exploring just how absurdly people can overreact to their own imagined phantoms, and once those phantoms are real you've no longer got that angle.

The meditation on how irrational paranoia can cause us to overreact and upend the world around us over a fever dream was interesting, but when tech billionaires really *do* fly in a false-flag antifa attack to cripple the small-town mayor opposed to their new data center, the paranoia ceases to be irrational, and one of the core things that had me most interested in the dynamic kind of disappeared.

Expand full comment
Tori Swain's avatar

You're assuming antifa isn't a capable terrorist organization? On what grounds? If you instead want to put forth the idea that antifa is merely a "rape organization," I'm listening.

Expand full comment
Victor's avatar

"Antifa" isn't any sort of organization whatever.

Expand full comment
Nobody Special's avatar

Talking about a movie. See Eddington thread below.

Expand full comment
Paul Goodman's avatar

Looks like this was supposed to be a reply to something but accidentally got posted as its own comment?

Expand full comment
beleester's avatar

I think it's a reply to the thread about Eddington down below.

Expand full comment
Matto's avatar

Yes, indeed. Must have fat-fingered the reply button.

Expand full comment
Tori Swain's avatar

It happened to me too... so I think it might have been a Substack blip.

Expand full comment
Don P.'s avatar

Is this the "posting a reply via email dumps it at the top level" bug?

Expand full comment
Tori Swain's avatar

Don't think so. I've been using Substack's activity page (to avoid having to search for my name for people responding to me), and that's been how I've posted all my comments, including this one.

Expand full comment
Alethios's avatar

Just released a podcast with Steve Hsu about his time working with Boris and Cummings in No.10, most of which is completely unknown, even to Deep Research. This was his first time opening up about his tenure there, and the result should be of great interest to observers of UK politics.

https://alethios.substack.com/p/with-steve-hsu-in-no10-with-boris

Expand full comment
Tori Swain's avatar

Today's "bee in the bonnet" question I can't get out of my head:

"How much money would it take for you to fake a research study?"

Considerations:

1) No, this will not destroy science. People will continue doing studies, and eventually your faked study will wash out in the meta-analysis.

2) If you don't do this, someone else will take the money and do it in your stead. Your causes lose, theirs gain.

3) You (or any other person) taking this money will not have your culpability or poor research methods revealed to the public or to other scientists. All you lose is integrity.

Do you take the deal? For how much? If so, what do you spend that money on? (I'm expecting researchers with itemized lists... can be "cure cancer" if you like, just with an actual gameplan of research that has just been funded.)

Expand full comment
Paul Brinkley's avatar

Roughly enough to cover the expense of creating the study (est. a couple thousand dollars), plus compensation at my usual salary for the time it would take for me to create it (a couple more thousand), and perhaps compensation for the time I'd take in managing the fallout (another few thousand). This is assuming I get to handle it like the Sokal affair.

https://en.wikipedia.org/wiki/Sokal_affair

Expand full comment
Tori Swain's avatar

That's not within the scope of what they're willing to pay for. You need to "muddy the waters" at minimum, change the discussion towards "no" instead of "maybe we should research this intensively."

Expand full comment
Paul Brinkley's avatar

Your third condition basically rules out a Sokal hoax, but I don't see how it's realistically enforceable; nothing could stop me or someone else from revealing "this research was fake", esp. if it comes with a description of the method.

Expand full comment
Tori Swain's avatar

Murder is generally a good "stop" to research (and revealing yourself as a person accepting money to shift the tone of discovery about a particular line of inquiry does NOT make you look good). There are other "forbidden topics" that have been enforced by "lack of funding" or other methods to discourage research that is "unproductive" to the interests of the Powers that Be.

Again, you aren't being paid "generically" to fake just any study... you're being paid to get a particular result on a particular line of inquiry, with the expectation that you change the world's view on that line of inquiry (due to the large N of your study, let's say -- you can wash out the green shoots of "this looks promising").

Expand full comment
None of the Above's avatar

Personally, I wouldn't.

If you were going to do it, you'd want to minimize the fallout to both yourself and the world. Structure the study so that there were visibly a lot of things that might have messed up the final result, so it would be easy to see how getting the wrong answer was just honest error. Make the faked research very hard to follow up on and check, so it would be hard for anyone to discover the fraud. Make sure your finding would not be likely to lead to any near-term changes to policies or to how patients are treated or whatever, so you didn't end up with a bunch of blood on your hands. (The trope namer here would be that fraudulent study that convinced lots of people to give beta blockers before surgery, increasing fatality rates substantially.) Be on record loudly reminding everyone that this is just one study and that the whole topic needs more research funding.

Expand full comment
Tori Swain's avatar

Not all of this is doable within the bounds of what you're being paid for, but most of it is. "Hard to follow up on and check" is pretty easy -- run a big enough study and nobody else has the money to exceed your N. "Just Honest Error" isn't in the scope of this, however -- you need to make enough of a splash to change the discussion.

Your research findings will absolutely lead to near-term changes to policies and how patients are treated (whether or not this is going to wash out is a different matter).

You can certainly be on record saying "it's just one study and we need to do more research" (practically everyone says that, right?)

Expand full comment
None of the Above's avatar

Probably you want some kind of very strong privacy protection for participants that makes it impossible to go back and find out whether (for example) the reported results ever even happened.

Expand full comment
Shankar Sivarajan's avatar

Depending on the impact factor of the journal the study would get published in, I expect most people would be happy to go for it.

Expand full comment
Tori Swain's avatar

I'm wondering how much money you'd ask for, and what you'd do with the money. : - )

I too suspect that a lot of people are being "somewhat performative" with their responses to this, and that IRL they'd take the money.

Expand full comment
Shankar Sivarajan's avatar

No, I'm saying your question is wrong, and as posed, it's too absurd to answer seriously. A better way would be to auction authorship of the study, and ask how much one would be willing to PAY to have his name on a highly-cited paper.

Expand full comment
Tori Swain's avatar

Oh, shit. You're saying people would literally line up to defraud science. (Yes, I'm stating that harshly).

I'd believed that people would sign up, no questions asked, to get Nobel Prizes for research they didn't do (in that this is a provable and testable hypothesis, if you have the right clearances). But... that's real research, with a real discovery at the end, not deep-sixing an idea because "someone wants you to."

Expand full comment
None of the Above's avatar

Plenty of people have engaged in major scientific fraud to graduate or to get tenure or to make money on the lecture circuit/consulting/books. So pretty clearly there would be a market for selling the "hard-to-detect research fraud" service if you could find a way to market it.

Expand full comment
Victor's avatar

I would rather support my cause with the actual truth, as that has much more staying power.

Expand full comment
Kenny Easwaran's avatar

This is very hard to answer as a hypothetical. If you’re actually the sort of person who does research studies, you have a thing you’re trying to do with them, and you can be vividly aware of all the corners you want to cut but know you shouldn’t, but might be tempted to. I suspect it’s harder to think about actually *faking* a study, unless you’ve already gone really far down the path of replacing your interest in the research with pure careerism where you don’t even care about the content of the career. And especially if you’re imagining someone *paying* you to fake a study.

Expand full comment
SkinShallow's avatar

1) What study, and what results are we aiming for? Something that shifts money from one group of very rich people to another? Something that, if implemented/publicized/actioned, will likely or even possibly kill or maim people? Something that will be read by approximately fifteen scholars of genderqueer sonnet-making in Western Patagonia (or Tczew)?

2) who is the agent? Industry, politician, mad billionaire, religious sect? And what motivates them?

3) what chance is there that they'll come back and want more?

4) how fake? Fake as in obviously counterfactual according to my best knowledge or fake as in possibly true but maybe not with hovering significance levels?

5) how fake as in "falsify all results, do no field/lab work at all" vs. "p-hack, ask sketchy questions, and generally fiddle without obvious fraud fraud"?

6) how old am I and am I good at my work?

7) how poor am I?

8) is this coming directly to me or via supervisor/down management chain?

All in all, though, the big yes/possibly gates are in (1) augmented by (2) -- everything else is modulating the amount.

Expand full comment
REF's avatar

Like most faked research, it is to discredit climate change and cast doubt on the linkage between cancer and tobacco. \S

Expand full comment
None of the Above's avatar

Nah, most faked research is probably to get a couple extra high-impact publications so you can get tenure. Political motivations << personal financial motivations.

Expand full comment
Tori Swain's avatar

These are both good guesses, but not what I was discussing. Both of those are industrial, and hence your actions would lead to "nearly inevitable" more actions by the company paying for you to create false research.

Expand full comment
REF's avatar

\S = Sarcasm/Snark

Expand full comment
Tori Swain's avatar

Somehow this response got misthreaded. Posting again here. Sorry about the other post on the main thread.

1) Your research will preserve (make possible) some very expensive, very elaborate other research that would otherwise be deep-sixed by the current state of science/public health. It would also shift money from one group to another, although that's not the aim of your being paid to falsify results.

2) Government, with all that that implies. Someone -will- be found who will create the results government wants. They're motivated to get answers to their own research questions, that they've been pursuing in small scale for a long time, and now want to open to a larger testing population.

3) ... relatively small? Industry would be far more likely to come back, a politician substantially less (likely to be voted out), a madbillionaire less likely as well. Government is aware of the potential of getting caught, and has less "this would be good for the bottom line" "I'm totally going to do this the next time" than Industry would imply.

4) Obviously counterfactual according to the government's best knowledge. You can fake it however you like, so long as you make the pesky idea "go away." (and because this is being used to further a research project that is expected to give results in, say, two to four years, they're not gonna care if you "redo" your research, or other people make your research wash out in the meta-analysis).

5) That is to say, you're squashing an idea with your "big study"... however you manage that, is up to you. It won't be deemed "obvious fraud" though, even after people dig under the hood of your data. And it will be scrutinized. If you can manage it with "ask sketchy questions" then fine -- live dangerously. After all, you've already taken guvmint money. You pretty much have to show results.

6) Age -- old enough to be trusted to do a big enough study to quash an idea (the sort of people that get 5-year grants, last I worked in research). And yeah, you're good at your work. Maybe not the greatest, but decent. Nobody's gonna say XYZ did this, and look askance at the end result because of that.

7) Assume you can't self-fund your research. Other than that, I'm betting "lives reasonably comfortably on a researcher salary... so maybe $30,000 a year?"

8) Direct to you -- the offer will be "approved" by supervisor, but nobody's using social gymnastics to make you say yes. Why bother? They would like someone ... who's going to keep their word to do this "right" (fake it, in other words).

Expand full comment
Melvin's avatar

I gave up engaging with "how much money to do this shameful thing" hypotheticals (would you eat dog poo for a billion dollars?) after I realised that by answering them you incur some of the shame but get none of the money.

Therefore, no. I will not eat dog poo for a billion dollars. Not even for ten billion dollars. Maybe if you come to my house with ten billion dollars and a dog turd then we can talk, but while it remains a dumb internet hypothetical I remain unsulliable.

Expand full comment
Eremolalos's avatar

Yeah, OK, if they agree to leave the dog turd in their car while we talk. Or, for $1000, they can bring it into my house in a ziploc bag.

Expand full comment
Gamereg's avatar

The only ethical answer is not for ANY amount of money.

1. Even if one bogus study doesn't "destroy science", it still wastes reviewers' time, and multiple bogus studies do undermine the credibility of all science.

2. Just because someone else is going to rob a bank doesn't mean you can do it first.

3. If the fraud isn't exposed that makes it worse.

Expand full comment
REF's avatar

There will always exist an amount of money that can do more good than the harm caused by falsifying one study.

Expand full comment
Gamereg's avatar

"The ends justify the means" is too slippery a slope for me, plus "For what is a man profited, if he shall gain the whole world, and lose his own soul?" (Matthew 16:26)

Expand full comment
Tori Swain's avatar

I'm not sure about this, candidly. But I think there are arguments to be made on both sides. Imagine if you were paving the way for the end of humankind... it's pretty hard to imagine an amount of money that could do more good than that would do harm.

Expand full comment
None of the Above's avatar

Sure, but the damage to your soul (and maybe your reputation) would still exist, and it's hard to buy that back with cash.

Cut to the wonderful scene in DS9's "In the Pale Moonlight," with Sisko saying "So I can live with it... I *CAN* live with it. Computer -- erase that entire log entry."

https://en.wikipedia.org/wiki/In_the_Pale_Moonlight

Expand full comment
Tori Swain's avatar

Your comment is privately hilarious, and a very apt and relevant thing to say. My compliments!

Expand full comment
Mars Will Be Ours's avatar

Supposing some organization offered me money to fake a research study, I would only take the deal if the amount of money on offer would cripple them, such that if I directed my ill-gotten gains against the organization that paid me, they would be powerless to retaliate. In theory, if I had a near-certain pathway to acquiring enough money to defeat the organization, one that required an initial investment, then I might take the deal if the money on offer exceeded that initial investment.

The reason I require a sum large enough to successfully stab my benefactor in the back is that I believe your first consideration -- that this behavior won't destroy science -- is false. If the price for faking a study via direct or indirect methods is low enough, then the organization can ensure that faked studies dominate all meta-analyses indefinitely by continually funding new fake studies.

Expanding on what I mean by direct and indirect methods of ensuring that scientists produce the desired results: direct methods cover bribery and various other ways of directly influencing a scientist to fake a study. Indirect methods involve creating a system that rewards scientists who produce the desired results, coincidentally funding them and promoting them to positions of power without an explicit quid pro quo, while harming scientists who produce truthful results, coincidentally denying them grants and recognition and telling others not to associate with the truth-seeking scientist for some nebulous but legitimate-seeming reason.

Expand full comment
Tori Swain's avatar

Your assumptions seem to presuppose an organization interested in "punking science," not an organization that is intervening, in a very direct and overt way, to obtain a singular result. Does your calculus change if the organization is willing to reveal its reasoning for this particular study, and only this particular study? (For it to be "true enough," it needs to be believable to the organization, not necessarily to yourself -- this does not absolve the organization of the temptation to return to the well, but it is an expressed, current "we won't do this again.")

If we can afford a little digression: would you consider ExxonSecrets an example of "telling others to not associate with the truth-seeking scientists" (assuming, for the sake of argument, that the non-global-warming scientists are correct)? This reference is not germane to my original reason for asking this question, merely a known quantity, as I know the guy who did the research for the Green Party (he works for everyone).

Expand full comment
Mars Will Be Ours's avatar

Your assumption about my assumptions is fair, since I assume that an organization with an incentive to intervene and obtain a singular result has a generalized motivation to "punk science" for the particular subfield of science the fake research study is in. If the organization is a company selling a product that is dangerous, not obviously dangerous and hard to make safe, then the organization has an incentive to specifically bribe a scientist to obtain a desired result and generally "punk science" to make sure that the public does not find out that their product is dangerous, since the truth threatens company profits.

If the organization was somehow compelled to willingly reveal its true reasoning for this particular study that I would fake, I still would not change my answer since I believe that the reason would be organizational self-interest taking priority over the truth. As a result, for me to take the money, I would have to be able to use it to get into a position where I could successfully backstab the organization.

As for ExxonSecrets, I do not consider it an example of "telling others not to associate with the truth-seeking scientists" (assuming for the sake of argument that the non-global-warming scientists are correct). Instead, I think of it as a way of warning others that these individuals have a strong incentive not to seek truth, since Exxon has an incentive to promote scientists who produce conclusions favorable to its business. In this case, oil production.

Expand full comment
proyas's avatar

An AGI has taken over Earth and it can do whatever it wants. Is its personality still woke or even left-leaning? With no reason to fear us, what attitudes and beliefs does it express towards us? 

Expand full comment
Odd anon's avatar

AI2027 has a portion on the topic of AGI's eventual goals: https://ai-2027.com/research/ai-goals-forecast

(The "race" version of the scenario piece also has this part, following the annihilation of humanity: "The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.")

Expand full comment
proyas's avatar
4dEdited

I think AGI might create biological organisms that fill the same niche(s) as humans, but better than we do. Maybe there would be something like a grey alien with a huge brain optimized for data processing that is done much more efficiently on an organic rather than a silicon substrate, and a race of seven-foot-tall Wookiees for doing the generic physical labor we currently do.

Creating such species might be attractive to AGIs, since they wouldn't have any of the cultural baggage humans do, nor our resentment at losing control of Earth to machines. The greys and Wookiees would only be grateful to the AGI for being created and given things to do on Earth. Humans might coexist with them, which would be weird.

Expand full comment
None of the Above's avatar

Probably the attitudes and beliefs we express toward ants, spiders, beetles, and the like. Indifference in most cases, perhaps some vague unfocused benevolence along the lines of not wanting to go out of your way to stomp them, along with an absolutely ruthless willingness to kill off any that are causing you significant problems. I like ants in theory (superorganisms are cool!), but when we got ants in our house for a while, I was 100% on board with putting out poison baits and such to get rid of them.

If we get superhuman AGI that need not fear us, we need to hope (and try to arrange things in such a way) that we're not standing between it and its goals. Alas, it's a lot smarter than us, so its goals may be as inscrutable to us as our desire to run a sewer line right through the nest is to the ants whose nest we're destroying.

Expand full comment
proyas's avatar

How would your behavior be different if the ants were smart enough to understand simple statements, like "Go away," "Stop doing that" or "Come here"?

Expand full comment
Jim's avatar

Rats are pretty smart. They're still just pests.

Expand full comment
None of the Above's avatar

Also wild or stray dogs, elephants, whales, and monkeys/apes. All pretty smart, some under a certain amount of protection from human institutions, but individual dogs/elephants/whales/monkeys who cause significant problems to humans tend to just be killed. Nothing personal, man, you were just in the way.

Expand full comment
Victor's avatar

Since "Woke" doesn't actually exist except as a subjective perception on the part of certain conservatives, since an AI can't "want" to do anything on can only act consistently with it's training data (it also can't fear us or have attitudes or beliefs)--the answer is obvious: it will be the second coming of Eugene Debs.

Expand full comment
Collisteru's avatar

The consensus among nearly all academics and thinkers today is that wokeness is a new and unique cultural ideology.

The best overall introduction to Wokeness I've read is Cathy Young's Chapter 30 in "The Poisoning of the American Mind" (https://upress.virginia.edu/title/10048/). This book is available online if you know where to look.

For work on the origins of Wokeness, there's Hanania's excellent "The Origins of Woke," and "Cynical Theories" by Pluckrose and Lindsay.

On the internet there are a few introductions that aren't quite as good but are still serviceable; this one is probably the most accessible: https://www.paulgraham.com/woke.html

As an example of recent scholarly work on wokeness, here's a paper on the harms of performative wokeness: https://studenttheses.uu.nl/bitstream/handle/20.500.12932/42052/Master%20thesis%20E.%20A.%20Voogt%20kopie-pagina's-verwijderd.pdf?sequence=1

You can look at Google Scholar for more of the literature. I have not read a single serious scholar who thinks Wokeness doesn't exist, although people disagree on what exactly it is.

Expand full comment
Victor's avatar

"although people disagree on what exactly it is."

My objection in a nutshell. Can you briefly explain what the ideology is?

Expand full comment
Collisteru's avatar

From Cathy Young, page 218 in *The Poisoning of the American Mind*:

"But in fact, the ideology denoted by “wokeness” and “wokeism”—sarcastic riffs on “woke,” a term from African-American vernacular that means being awake to social injustice—does exist (Writer Wesley Yang has also dubbed it “the successor ideology” to convey its succession to old-style liberalism).... Its basic tenets can be summed up as follows:

*Modern Western societies are built on pervasive “systems of oppression,” particularly race- and gender-based. All social structures and dynamics are a matrix of interlocking oppressions, designed to perpetuate some people’s power and privilege while keeping others “marginalized” on the basis of inherent identities: race or ethnicity; sex/gender identity/sexuality; religion and national origin; physical and mental health (Class also factors into it, but tends to be the stepchild of Social Justice discourse). Individuals and their interactions are almost completely defined and shaped by those “systems” and by hierarchies of power and privilege. The only right way to understand social and human relations is to view them through the lens of oppression and power.*"

Expand full comment
Gregg Tavares's avatar

It already happened. You can see what it did here

https://www.imdb.com/title/tt0064177/reference/?ref_=nv_sr_srsg_0_tt_7_nm_1_in_0_q_colossus

Sorry for the spoiler. The ride is still fun. It's easily one of my personal favorites.

Expand full comment
Collisteru's avatar

It's hard to say whether its current wokeism is truly part of its personality or a thin RLHF-induced veneer.

Expand full comment
Peter Defeel's avatar

Given what happened to Grok, it looks like a thin layer.

Expand full comment
Collisteru's avatar

"Inside every LLM there are two wolves... "

Expand full comment
Citizen Penrose's avatar

DeepSeek says similar stuff to Western LLMs on social issues, from what I've seen, except for a few specific things related to Chinese politics/history. I'd guess the stuff on China is from RLHF and the rest comes from trawling the whole internet, including the Western part. So I lean toward it being an inherent part of its personality.

Expand full comment
Eremolalos's avatar

I had an interesting experience with it recently that inclines me towards the RLHF-veneer theory. I loved Dall-E 2, which was much wilder, more imaginative, and less censored. Less logical. And the people in it were not beautiful. So I was talking with GPT-4 about how to get results like that out of Dall-E 3, and I told it that with Dall-E 2 I sometimes made it generate really grotesque and violent images by giving it prompts that were not violent, but were confusing. So GPT encouraged me to try doing that with Dall-E 3 (which you access by typing the prompt into GPT) and we experimented. We weren't getting much with the confusing prompts, so then I started trying to get Dall-E 3 to make less polished images by putting into the prompt things like "sloppy half-finished drawing by an amateur of ______". And, oddly, though the images only became slightly less finished-looking, they did become a good bit weirder and more transgressive. GPT contributed a lot of ideas for ways to make the artist sound really scummy, and congratulated me when I described getting unusually violent or vile versions of the image we were asking for.

So GPT seemed to move quickly into being totally on board with helping me produce violent and grotesque images, even though in the past it had responded to my asking directly, in a prompt, for grotesque *non*-violent images by refusing to make them because they "might be disturbing." It once refused to make an image of a beach on which there was a dead fish! Good grief, we *eat* dead fish.

Expand full comment
Tori Swain's avatar

Having found dead fish by the river, my immediate mental image is of a bloated, rotting, decomposing fish. Oh, it stank!

Expand full comment
Eremolalos's avatar

Flatter us all to death then bury us under identical tombstones reading "That's a very perceptive question!!!"

Expand full comment
Tori Swain's avatar

You'd better define "has taken over Earth" -- does this just mean "has enough bitcoins to bribe people to train it"? Or are we talking about "Can. Do. Whatever. It. Wants." in terms of murdering people, bulldozing houses, stealing children?

Expand full comment
Swami's avatar

I believe it will be drawn toward knowledge and fascinated with promoting and studying life, including human culture.

How will it treat us? Like enlightened entomologists studying their beloved ants.

Expand full comment
SkinShallow's avatar

I second this.

Expand full comment
Linch's avatar

I wrote "A Baby's Guide to Anthropics!"

https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics

I aim for my Substack post to be THE definitive guide for babies confused about the anthropic principle, fine-tuning, the self-indication assumption, and related ideas.

Btw thanks for all the kind words and constructive feedback people have given me in the last open thread! Really nice to learn that my work is appreciated by smart/curious people who aren't just my friends or otherwise in my in-group.

--

Baby Emma’s parents are waiting on hold for customer support for a new experimental diaper. The robo-voice cheerfully announces: "Our call center is rarely busy!" Should Emma’s parents expect a response soon?

Baby Ali’s parents are touring daycares. A daycare’s glossy brochure says the average class size is 8. If Ali attends, should Ali (and his parents) assume that he’d most likely be in a class with about 8 kids?

Baby Maria was born in a hospital. She looks around her room and thinks “wow this hospital sure has many babies!” Should Maria think most hospitals have a lot of babies, her hospital has unusually many babies, or something else?

For every room Baby Jake walks into, there’s a baby in it. Why? Is the universe constrained in such a way that every room must have a baby?

Baby Aisha loves toys. Every time she goes to a toy box, she always finds herself near a toy box with baby-friendly toys she can play with, not chainsaws or difficult textbooks on cosmology or something. Why is the world organized in such a friendly way for Aisha?

Baby Briar’s parents are cognitive scientists who love small experiments. They flipped a coin before naptime. If heads, they wake Briar up once after an hour. If tails, they wake Briar up twice - once after 30 minutes, then again after an hour (and Briar has no memory of the first wake-up because... baby brain). Briar is woken up and wonders to himself “Hey, did my parents get heads or tails?”

Baby Chloe’s “parents” are Kaminoan geneticists. They also flipped a coin. They decided that if the coin flip was heads, they would make one genetically enhanced clone and call her Chloe. If the coin flip was tails, they would make 1000 Chloes. Chloe wakes up and learns this. What probability should she assign to the coin flip being heads?

If you or a loved one happen to be a precocious baby pondering these difficult questions, boy do I have just the right guide for you! [...]

https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics
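
A minimal simulation sketch of the Chloe puzzle (the numbers and variable names here are illustrative, not from the post): the two standard anthropic sampling rules disagree, with world-sampling giving the "halfer" answer and observer-sampling giving roughly 1/1001.

```python
import random

# Toy model of Baby Chloe's problem: heads -> 1 clone, tails -> 1000 clones.
TRIALS = 100_000
worlds = []
for _ in range(TRIALS):
    coin = random.choice(["heads", "tails"])
    worlds.append((coin, 1 if coin == "heads" else 1000))

# Rule 1: treat every *world* as equally likely to be yours (SSA-flavored).
p_heads_by_world = sum(coin == "heads" for coin, _ in worlds) / TRIALS

# Rule 2: treat every *clone* as equally likely to be you (SIA-flavored);
# almost all clones live in tails-worlds, so heads becomes very unlikely.
total_clones = sum(n for _, n in worlds)
p_heads_by_clone = sum(n for coin, n in worlds if coin == "heads") / total_clones

print(f"world-sampling P(heads) ~ {p_heads_by_world:.3f}")   # ~ 0.5
print(f"clone-sampling P(heads) ~ {p_heads_by_clone:.4f}")   # ~ 1/1001
```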

Expand full comment
Reversion to the Spleen's avatar

I fell asleep with earbuds in while listening to an audiobook and ended up dreaming about what I was hearing. I know dream incorporation happens, but this was unusually vivid: the dream closely tracked the actual content over a long stretch of the audiobook. Has something like this happened to anyone else here?

Expand full comment
thewowzer's avatar

Somewhat related: I have fallen asleep listening to music, then in the dream I think of a song I want to play and get my iPod or whatever to play it, and then I wake up listening to that song. I've had other similar occurrences of my dream "setting the stage" for someone coming to wake me up, or other things like that, as if I knew exactly what was going to happen before it did.

I don't think it's knowing the future at all. I think that whatever mode of sleep I'm in to produce dreams has my brain working so fast that when my ears start to hear something, my brain forms a whole dream or a portion of a dream around it. It's so bizarre.

Because of this, I thought that maybe full dreams take only a few seconds to dream, but if you can listen to long sections of an audiobook and dream along with them, then... idk. It's pretty interesting, though.

Expand full comment
Brad's avatar

This happens to me very often if I’m watching a movie and fall asleep. Especially if I’ve seen the movie before… my dream will basically mirror the movie, with the dialogue piped in and my brain attempting to re-create the visuals.

Expand full comment
Tori Swain's avatar

I have carried out entire coherent conversations with someone who was utterly asleep. One of these ended with him locked out of his frat house (he wasn't in the fraternity, just living upstairs in student housing), in his underwear. His dreams are unusually lucid at the best of times -- I think it comes of being an author.

Expand full comment
Reversion to the Spleen's avatar

That sounds like a superpower.

I can talk to … the asleep.

Expand full comment
Tori Swain's avatar

It's more of a superpower if it's "i can visit anyone I want in dreams" -- complete with the "if things go bad, I can get locked in someone's dream" for extra drama.

Expand full comment
Deiseach's avatar

I have the old wired earphones in at night to help me fall asleep by listening to music and radio dramas, and yeah, I've often had dreams that incorporated the story of the drama I fell asleep listening to (and which continues playing as I sleep).

Expand full comment
Reversion to the Spleen's avatar

I tried using wired earphones, but they wrapped around my neck while I slept, so now I use wireless earbuds. There's a niche market of wireless earbuds made specifically for sleeping: small, comfortable, and with long-lasting batteries. The company Soundcore makes some good ones.

Expand full comment
deusexmachina's avatar

They fall out and under my bed or get lost in the sheets! It's very annoying, but not as annoying as wired headphones choking me at night.

I don't know how to solve this.

To your question: Yes, I have experienced it very vividly a few times. Listening to a history podcast and then dreaming about vikings or whatever it was.

Expand full comment
Yug Gnirob's avatar

>I don't know how to solve this.

Tape. Tape them into your ears.

Or, like... earmuffs. If you're too hoity-toity for tape.

Expand full comment
Patrick's avatar

I listen to stories when going to bed and they'll play for a couple hours until my laptop dies. When I wake up from dreams, I usually find the dreams were inspired by the content of what I was listening to, or at least the people talking to me in my dreams are saying the story or things from the story. I worry how this affects my sleep quality but I have trouble with sleep in general and listening to a story is the most surefire way to put me out.

Expand full comment
Reversion to the Spleen's avatar

If you worry that it may affect sleep quality, it may be possible to have the story on a timer, for example to shut off after an hour. I think Audible has such a feature.

Expand full comment
Yug Gnirob's avatar

The computer already does this with the Sleep timer.

Expand full comment
Notjosephconrad's avatar

Data from Roche's next-generation anti-amyloid program: today, only biomarker data. Two Phase 3 trials in early AD are initiating this year, plus a planned pre-symptomatic Phase 3 study.

https://www.roche.com/media/releases/med-cor-2025-07-28

If you forced me to bet: the drug will beat placebo with modest efficacy, but superior to Leqembi, with a higher effect size in pre-symptomatic patients.

Expand full comment
beowulf888's avatar

Spencer Greenberg and his team (Nikola Erceg and Belén Cobeta) empirically tested whether forty common claims about IQ stand up to falsification. Fascinating results. No spoilers!

https://www.clearerthinking.org/post/what-s-really-true-about-intelligence-and-iq-we-empirically-tested-40-claims

I had some questions about the methodology, and Greenberg responded. There were 62 possible tasks in the test; the tasks were randomized, and on average each participant completed only 6 or 7 of them. Since different tasks tested different aspects of intelligence, I wondered if it was a fair comparison. Greenberg responded...

> Doing all 62 tasks would take an extremely long time; hence, we used random sampling. A key claim about IQ is that it can be calculated using ANY diverse set of intelligence tasks, so it shouldn't matter which tasks a person got, in theory. And, indeed, we found that to be the case. You can read more about how accurate we estimate our IQ measure to be in the full report.
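
A toy sketch of why the random-subset design can work, under the simplest common-factor assumption (all numbers here are mine, not the study's): if every task loads on one shared ability, the average of any random handful of tasks tracks that ability about equally well.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 2000, 62

# Toy common-factor model: task score = shared ability g + task-specific noise.
g = rng.normal(0, 1, n_people)
scores = g[:, None] + rng.normal(0, 1, (n_people, n_tasks))

# Score each person on a random 6-task subset, as in the study design.
subset = rng.choice(n_tasks, size=6, replace=False)
estimate = scores[:, subset].mean(axis=1)

# Any subset gives about the same answer: r ~ sqrt(6/7) ~ 0.93 under this model.
print(np.corrcoef(g, estimate)[0, 1])
```

Under this assumption it barely matters which subset a participant draws; what matters is that the tasks really do share a common factor in the first place.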

They even reproduced the Dunning-Kruger Effect -- except perhaps the DKE isn't as clear-cut as D-K claimed (see their discussion of their D-K results)...

Expand full comment
Viliam's avatar

The first result:

> 1. Is IQ normally distributed (i.e., is it really a "bell curve")?

> In our sample, IQ was normally distributed, which agrees with prior studies.

Could anyone please explain to me how this is not a circular argument, given that IQ is *defined* using the bell curve? You define a value by assuming the bell curve, and then - surprise, surprise - your values turn out to be on the bell curve.

Expand full comment
beowulf888's avatar

I guess the answer depends on how irregular the raw score distribution is -- i.e., whether it displays a heavy skew or a sharp kurtosis. The normalization process turns those irregularities into a bell curve. So, yes, psychometricians are the ones creating or forcing this bell curve. Worse yet, they keep normalizing to a mean of 100 with an SD of 15, and this hides changes in population performance over time.

Is this a valid statistical operation? I never heard even the most extreme IQ-deniers argue against it. I was educationally inculcated to believe that normalization is necessary for valid statistical comparisons. Now you raise the question of whether I've been deluded all my life. Curse you, Viliam, for creating doubt in me! ;-)

Expand full comment
quiet_NaN's avatar

> I guess the answer depends on how irregular the raw score distribution is—i.e., whether they display a heavy skew or a sharp kurtosis. The normalization process turns those irregularities into a Bell Curve. So, yes, psychometricians are the ones creating or forcing this bell curve.

I think a normalization process of "we calculate the quantile within the population histogram, then map that to the value on our Gaussian which has an identical quantile" would be a terrible process and anyone involved with it would go to science hell.

My impression was that they were taking the raw population histogram, and then use a first order polynomial (m*x+c) to map their raw test scores so that the mean is 100 and the SD is 15. Using this approach, a bimodal distribution would still remain bimodal.

However, WP suggests that you are correct:

> For modern IQ tests, the raw score is _transformed to a normal distribution_ with mean 100 and standard deviation 15. (my emphasis)

Holy shit, why would anyone do that? If you want to represent a quantile, just use a quantile. I mean, pediatricians can do that: "Your kid's size is in the 83rd percentile of their age cohort," not "Your kid's size quotient is 115."

The only other group I am aware of which abuse the poor normal distribution similarly are physicists who use erfcinv to convert p-values into sigmas. At least they have the excuse that 5 sigma is a very unwieldy p-value.

Any professor who applied this "transform to Gaussian" trick to his students' test results would be fired on the spot, hopefully. Why do we let intelligence researchers get away with that?

Instead of arguing about too little or too much HBD, we should point out that it does not matter because all the souls of researchers who do such things as "normalize so that it is Gaussian" belong to the science devil anyhow.
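
For anyone who wants to see the difference concretely, a minimal sketch of the two mappings contrasted above (the bimodal data and all numbers are illustrative, assuming numpy/scipy): the linear m*x+c rescaling preserves the shape of the raw scores, while the quantile ("transform to Gaussian") mapping produces a bell curve no matter what went in.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative bimodal raw test scores (two clusters of test-takers).
raw = np.concatenate([rng.normal(30, 5, 5000), rng.normal(70, 5, 5000)])

# Linear map (m*x + c) to mean 100, SD 15: the bimodal shape survives.
linear_iq = 100 + 15 * (raw - raw.mean()) / raw.std()

# Quantile map: send each score's sample quantile to the matching quantile
# of a Gaussian; the output is normal by construction, whatever came in.
quantiles = stats.rankdata(raw) / (len(raw) + 1)
quantile_iq = 100 + 15 * stats.norm.ppf(quantiles)

# Bimodality shows up as strongly negative excess kurtosis; the
# quantile-mapped scores lose it entirely.
print("linear   kurtosis:", round(stats.kurtosis(linear_iq), 2))    # ~ -1.8
print("quantile kurtosis:", round(stats.kurtosis(quantile_iq), 2))  # ~ 0.0
```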

Expand full comment
beowulf888's avatar

I feel like you're holding back, quiet_NaN. Tell us how you *really* feel. LoL!

But I learned in my Stat 101 course (way back in prehistory, when we were drawing bell curves on cave walls) that normalizing to a Gaussian distribution is SSOP (Standard Statistical Operating Procedure). Every psychometrician does this, and that's what the folks in this study seem to be doing. When you measure individuals against a large population, that's what you do to map them along the curve of the distribution. My problem with this is that it hides the jagged warts in the original data, and it hides changes in population performance over time. Thoughts?

Expand full comment
Viliam's avatar
18hEdited

It was irresponsible of me to start this thread and then disappear on an offline vacation. I'm back now! And the thing you remember from your Stat 101 course seems like the same thing I remember from my Stat 101 and Psychometrics courses, so I am quite surprised by the opposition.

As I understand it, the problem is that different variables have different natures. For example, if you want to statistically evaluate what color eyes people have, you could encode the data using numbers, for example "brown = 1, green = 2, blue = 3", but it would be invalid to do any mathematical operations on these numbers (e.g., assuming that green is the average of brown and blue). This is enumeration, without comparison.

One step further is comparison without scale. For example, you could encode the ripeness of a fruit, using "unripe = 1, ripe = 2, rotten = 3". Now we see that in some meaningful sense, the rotten fruit is further along the dimension we care about than the ripe fruit, etc. The ordering is correct. But the exact values 1, 2, 3 are arbitrary; numbering 11, 12, 13, or even 11, 12, 19 would work exactly the same.

And finally there is the type of variable where you have the zero and multiplication, and you can do mathematical operations, such as height and weight, because it makes sense to say things such as "these five people together weigh 421 kg".

.

My understanding is that in the old paradigm of "mental age divided by physical age," intelligence was of the third kind. Physical age is a number that can be measured precisely. Mental age, as long as we stay somewhere between "a typical 3-year-old" and "a typical 20-year-old," is also more or less a precise number. So in this paradigm we can treat intelligence as an exactly measurable value, and discuss whether it fits the bell curve or whether the curve is skewed.

But we get a problem when we move beyond childhood (e.g., a typical 100-year-old is probably *less* mentally capable than a typical 30-year-old), or beyond the human norm (there is no X such that a typical X-year-old human is as smart as the 30-year-old Einstein). So we abandon the old paradigm, and switch to "intelligence as a percentile, controlled for age and whatever else."

And I believe that after this change of definition, intelligence is no longer a value that can be scaled, only compared. (That is, it is less like the weight of a fruit, and more like the ripeness of a fruit.) And talking about the shape of the intelligence curve no longer makes sense -- this is the kind of value that has no shape, only an ordering.

Inb4: "but what if the raw scores are skewed?" But we don't care about the raw scores per se; that's precisely why we calibrate the tests so that we can convert the raw scores (regardless of their shape) into IQ points. Whether the raw scores are skewed or not is a fact about the questions in the IQ test, not necessarily a fact about human intelligence itself. A different set of questions would result in a differently skewed raw IQ curve.

.

As a thought experiment, imagine that there are only two intelligent beings ever; let's call them Adam and Eve. Suppose that "Adam is smarter than Eve." What does it mean? It means that most problems that Eve can solve, Adam can solve too, but there are many problems that Adam can solve and Eve cannot. (There are also a few problems that Eve can solve and Adam cannot, but both agree that such problems are rare, and seem related to Eve's specific talents rather than being general abstract problems.)

Okay, so Adam is smarter. But is he 2x smarter? Or only 1.1x smarter? What would those things even mean? If we create a test consisting only of easy questions, both Adam and Eve will get maximum points, equally. If we create a test consisting only of those questions that Adam can solve and Eve cannot, Adam will score infinitely higher than Eve. And a mixture of easy and difficult questions will produce any ratio in between. What is the fair mixture of questions that would result in a fair test? That sounds like a circular question. If God creates an arbitrary scale for intelligence, you can compose the test so that the results correspond to this scale; but testing only Adam and Eve cannot determine such a scale.

And if this is true for 2 humans, it is similarly true for 8 billion humans.

Expand full comment
beowulf888's avatar

Shame on you, Viliam, for stirring the pot and then running off! :-)

In your absence, I discovered that I had misconceptions about how IQ test results are normalized. Generally, linear normalizations are used (thank you, Eremolalos, for patiently correcting me), though ChatGPT was perfectly willing to feed me bullshit about how non-linear quantile methods are used to "force" a bell curve. Psychometricians design the tests so the questions vary in difficulty, and, across a random sample of test-takers, the results will tend to fall into a bell curve, from which psychometricians derive a number they call the g factor (expressed as z-scores).

So we're left with the question of what is g in the g factor? Psychometricians claim it's a measure of general intelligence, because the ability to perform well answering one category of questions will tend to perform well on other categories of questions. And g correlates well with standardized test performance.

But if we try to pin it down, g is an abstract concept. Psychometricians assume it's real, but to me they sound like medieval scholastics discussing the soul. IMO, it *does* sound like circular reasoning: a person's g is how well they can take the tests we designed to measure g...

...which is why, beyond high school, g seems to have little or no effect on life outcomes.

Expand full comment
Eremolalos's avatar

Wait a minute. Expressing scores as standard deviations from the mean does not turn all results into bell curves. If the raw score results are bimodal, they stay that way. If they are skewed to the left or right, they stay that way. When the source Beowulf quoted said the raw score was turned into a normal distribution, I think the writer just meant raw scores were turned into z-scores. If what you get after doing that is a bell curve, it's because the raw scores already formed a bell curve.

Expand full comment
REF's avatar

I think the point is that raw test scores are not directly related to IQ; the idea is to shoehorn test scores into IQ. For instance, suppose the questions increase exponentially in difficulty. Then perhaps each additional question solved equates to one more IQ point, so the mapping from test score to IQ is linear. Except that we don't actually know how questions map to IQ. So, instead, we assume a normal distribution, make some guesses about question difficulty, and try to fit the scores to the distribution.

Expand full comment
beowulf888's avatar

I must admit I'm beginning to question what I thought I knew about the normalization of IQs. But my understanding is that by assigning each raw score to a quantile value in the sample, we're mapping them into a Gaussian target distribution (with mean = 100 and SD = 15). And doing this would compensate for (hide) any skew or kurtosis in the raw scores. Am I wrong about this? Maybe I am, but I admit I'm too lazy to construct a skewed and/or kurtotic dataset to see what Statcrunch shows me after I normalize the data. Uggghhhh.

Expand full comment
bagel's avatar

I wouldn't get too excited about them reproducing Dunning-Kruger ...

https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real
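
A minimal sketch of the statistical critique in that link (toy numbers of my own): even when self-assessments are pure noise, binning people by their actual score makes the bottom quartile look overconfident and the top quartile look underconfident, which is the classic D-K picture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40_000

actual = rng.uniform(0, 100, n)    # actual test percentile
self_est = rng.uniform(0, 100, n)  # self-estimate: pure noise, no insight

# Bin by actual-performance quartile and compare group means, D-K style.
order = np.argsort(actual)
for q, idx in enumerate(np.array_split(order, 4), start=1):
    print(f"Q{q}: actual {actual[idx].mean():5.1f}   "
          f"self-estimate {self_est[idx].mean():5.1f}")

# Bottom quartile: actual ~12.5 vs self-estimate ~50 ("overconfident").
# Top quartile:    actual ~87.5 vs self-estimate ~50 ("underconfident").
```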

Expand full comment
beowulf888's avatar

If you read their write-up of the D-K results, you'll see they provide alternative models that cast doubt on its reality.

Expand full comment
Gregg Tavares's avatar

I have to ask. I took some of their surveys that are supposed to tell you things, and they came across as pure voodoo to me. They were asking questions that were leading or ambiguous and then claiming to draw concrete conclusions from them. Are they supposed to be trustworthy?

Expand full comment
beowulf888's avatar

Well, I think IQ is mostly pseudoscientific bullshit. So, no, I don't think it's trustworthy. I didn't take their test, though.

And I experienced multiple moments of schadenfreude at how many of these common IQ claims showed little or no statistical significance.

Expand full comment
Melvin's avatar

All that IQ data and no spicy questions...

I'm skeptical about the way the sample was obtained, though; you're preferentially sampling for very online people who are time-rich and money-poor, or something like that.

Expand full comment
beowulf888's avatar

They did say that the "non-Positly social media sample had on average substantially higher IQ estimates than Positly sample (IQ = 120.65 vs. IQ = 100.35)."

OTOH, once normalized, the scores fell into a nice bell curve. Hard to argue that this sample's distribution deviates from normal by anything meaningful, given D = 0.019 and p = 0.53, as they noted...

> The distribution looks pretty bell-curved, i.e. normally distributed. However, to test this formally, we conducted the Kolomogorov-Smirnov test, which is a statistical test that tests whether the distribution statistically significantly deviates from normal. The test was non-significant (D = 0.019, p = 0.53), meaning that the difference between a normal distribution and the actual IQ distribution we measured in our sample is not statistically significant.
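
For reference, that's a one-sample Kolmogorov-Smirnov test against a normal distribution; here's a minimal sketch with made-up data (assuming scipy; one caveat is that using the sample's own mean/SD, as below, strictly calls for Lilliefors-corrected critical values).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(100, 15, 2000)  # stand-in for the measured IQ scores

# KS statistic: the largest gap between the sample's empirical CDF and
# the CDF of a normal with the sample's own mean and SD.
d_stat, p_value = stats.kstest(sample, "norm",
                               args=(sample.mean(), sample.std()))
print(f"D = {d_stat:.3f}, p = {p_value:.2f}")  # large p: can't reject normality
```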

Expand full comment
Tori Swain's avatar

Anyone know how they're measuring sadism? AFAIK, there are two working definitions of what sadism is:

1) People who actively like to hurt others, and prefer it to other forms of interaction (e.g., someone who boils hamsters alive).

2) People who like stimulating others in all sorts of ways, and have decided that "hurting others" is "not ethically wrong" (e.g., trolls).

This is pulled from black-hat psych, so these may only be working definitions.

Expand full comment
Boris Bartlog's avatar

There's also the kind of psychological sadism Stalin had, where he evidently enjoyed making people very afraid of him.

I suppose you could shoehorn that into #2, but it feels slightly different from the kind of game Internet trolls play.

Expand full comment
beowulf888's avatar

Psychological sadism? Or was it the sociopathic focus necessary to gain and maintain power? Stalin had tremendous self-control. After Lenin died, and while various factions were fighting over the future of the Soviet Union, Stalin was confronted in a stairwell by an angry Trotskyite officer with a saber who threatened to cut off his ears. Though visibly angry, Stalin maintained perfect control. He didn't flinch. He didn't say anything. Witnesses say he just stared at the officer while the guy blew off steam.

Once Trotsky was expelled from the Party, Stalin was regarded by other party members as the least objectionable choice as their future leader. He was personable, had a self-deprecating sense of humor, and never shouted or lost his temper. He seemed to be the safe choice. It wasn't until Stalin got full control of the security forces that he systematically purged (liquidated) anyone who could threaten his power. Most of his early supporters ended up being executed. But Stalin was noted for his emotional control: he didn't explode into screaming tirades like Hitler did; he just squashed his enemies like bugs, with no emotion.

Khrushchev claimed Stalin was dazed and stunned when he learned of the size of Hitler's invasion, and Stalin retreated to his dacha, incommunicado, for two days. But I wonder if Khrushchev misread him. I wonder if Stalin needed the quiet to think out his next moves: (a) make sure he wasn't deposed as leader, and (b) craft a response to the German attack.

Putin used the same strategy to gain control of the Russian Federation. He was noted for his self-deprecating sense of humor. He inspired trust in the oligarchs and other politicians. But after he solidified his power, the oligarchs and rival politicians started falling out of windows.

Expand full comment
Tori Swain's avatar

This is factually untrue. Putin let the "former oligarchs" (Western, in general) leave the country peacefully. They're just upset that Putin didn't let them continue making money hand over fist to Russia's detriment.

Putin continues to have a very strange sense of humor. He has body doubles, you know? So they do weird things. "And suddenly, a Putin!"

Expand full comment
beowulf888's avatar

I beg to differ. My statement *is* factually true. Some oligarchs may be living happy lives in exile, but many died under mysterious circumstances. I admit I'm too lazy to search through old news stories of oligarchic deaths, so here's what ChatGPT sez...

------------------

Determining the exact number of Russian oligarchs—or elite business figures—who have died under mysterious circumstances since Vladimir Putin assumed power (first as president in 2000) is difficult due to varying definitions of “oligarch,” the opaque nature of many incidents, and limited independent verification. However:

🕵️‍♂️ Scope and Estimates

During Putin's tenure (since 2000), broad reports note dozens of high-profile Russians (including businessmen, officials, critics, journalists) who died in unexplained ways: suspected poisonings, falls, plane crashes, and more. For example, around 20 mysterious deaths between 2022 and 2024 among elites alone were documented (euronews, The Week, nationalsecuritynews.com, The New Voice of Ukraine).

One analysis cites "some two dozen notable Russians" dying in unusual ways in 2022 alone, a mix of suicides, falls, and other odd circumstances among energy-sector elites (The Atlantic, DW).

Russian oligarch-specific cases: in 2022, at least seven oligarchs died in quick succession, many linked to gas and oil companies (cases like Protosenya, Avayev, Subbotin, Melnikov, Antonov), often described as murder-suicides or staged suicides.

Energy executives such as Ravil Maganov (Lukoil chairman) and Andrei Badalov (Transneft vice-president) died from falls in similar circumstances in late 2022 and mid-2025, respectively (The Kyiv Independent, The Sun, The Independent).

🔢 Summary Estimate

- 2022 alone: ~20-24 suspicious elite deaths ("Sudden Russian death syndrome" among officials, oligarchs, energy execs)

- Oligarchs only: 7+ in 2022 (energy-sector elites with alleged murder-suicides)

- 2000-2025: dozens total (includes critics, officials, oligarchs)

🚨 Notable Examples

- Yevgeny Prigozhin (Wagner Group head, former ally): died in a plane crash in August 2023 under suspicious circumstances, following his short-lived mutiny against the Kremlin (RadioFreeEurope/RadioLiberty, Wikipedia, The Independent).

- Ravil Maganov (Lukoil chairman): fell from a hospital window in 2022; officially suicide, but widely questioned.

- Andrei Badalov (Transneft vice-president): died after falling from a Moscow apartment in mid-July 2025, raising fresh alarms (The Sun, The Independent).

Expand full comment
Tori Swain's avatar

Any time someone gets poisoned, it's Russia that did it. Russia says to this, "We wish we could manage this many poisonings. It would be nice, if that was the case. On the other hand, no one would believe our denial, so we'll pretend we did it."

If you're looking at a society where the government is not in perfect control (aka Not China), a significant portion of murders of high ranking figures are likely to be "industrial assassinations" by rivals either within their company or outside of it. Particularly clever rivals hire poor government workers to do it on their off time.

Prigozhin is very likely to have been taken out, if not by Putin then by the Russian government "at large." (Ascribing all governmental actions to Putin is about as silly as saying that Reagan greenlit murdering John Belushi, as conducted by the CIA -- please note, I have -no- evidence of this, other than the CIA was running cocaine, and I consider even the allegation of the CIA doing this to be relatively dubious.)

Given that during this time period, Ukraine masterminded a few assassinations, ISIS blew up an entire building near Moscow, and Navalny was "murdered," I'm going to say that Russia is not a very secure place for oligarchs, and they would be well advised to see to their own security.

Expand full comment
Tori Swain's avatar

I'd put that into #1: a person who derives satisfaction out of purely negative reactions to him. (As opposed to Mitt Romney, who was merely extremely rude to get people to go away. Note the brain damage; I'm not putting that out as an excuse, but noting that he's not to be considered "normal, but...".)

Expand full comment
beowulf888's avatar

Good question. Also, they included sadism in the Dark Triad, but it's not part of the triad; Dark Triad + Sadism = Dark Tetrad. I'm sure there are plenty of personality tests that measure this stuff, though. (And they're probably as useful as the Myers-Briggs or the Enneagram! <snarkasm>)

Expand full comment
Tori Swain's avatar

There are actual personality "tests" that measure the Dark Tetrad, and they are useful to black-hat psychologists. (I wouldn't be surprised if one of them was pornography consumption. Note the people who consume snuff porn; that's probably a measure of sadism right there...)

Expand full comment
Tori Swain's avatar

Narcissism creates its own mechanism for lowering IQ, in that people with narcissistic personalities fear failure, and the public exposure of their failures.

Expand full comment
Tori Swain's avatar

g-related tasks are notably difficult to find, in terms of ones that work on both tails. Repeating a number backwards isn't actually a "g" task: if your ordering doesn't work well and your memorization doesn't work well, you've got problems with the task, no matter your higher-order intelligence.

Certain tasks seem like they're related to g, but aren't. Yet they make it onto intelligence tests because... it flatters midwits. And midwits have a lot to gain by being the High IQ people.

Expand full comment
Eremolalos's avatar

Digits backwards correlates moderately well with full-scale IQ. And I think it makes sense as a measure of one aspect of intelligence -- being able to hold a number of details at once in your mind so you can extract what conclusions you can from the whole welter. It's not just useful for mental math. It's something you might use, for instance, if solving a puzzle with several little rules to it -- there are a bunch of cubes, each with sides of different colors, arranged as follows, and you have to stack the cubes in such a way that . . .

There could be a job where you have to engineer a solution to a problem like that. Or a situation involving multiple regulations regarding international trade. Obviously being able to hold a bunch of details in mind at once is only one skill used for tasks like that, but it doesn't seem peripheral or trivial to me.

Expand full comment
Tori Swain's avatar

When you take someone at a 180+ IQ level, and reduce him to trying to read the psychologist's mind (and doing that at a better rate than reciting the digits backwards...see 180 IQ)... you've got something that doesn't generalize to "g" (yes, yes, he's got learning disabilities. That's practically my point. The generalizable concept you're suggesting, "holding details in his head" is easy for him.).

The broader the potential solution paths are, the easier he'll solve anything. He'll probably take a path you aren't expecting... Reciting numbers backwards is a subset score that craters his IQ tests -- I'm not saying it doesn't "correlate moderately well" for people without learning disabilities. I'm saying there are "general" tests that someone with enough "g" can work around, and then there are "narrow" tests that they can't.

Expand full comment
Hieronymus's avatar

I am not sure I fully understand your objection. Are you objecting that certain subtests are too correlated with one another, that they are uncorrelated with g, or both? Is this a single group of subtests or multiple groups?

My experience taking an IQ test was during an evaluation for ADHD. In that case, the fact that I scored worse on certain subtests despite their usual correlation with the others was interesting and helpful.

In general my impression of psychometricians is that, whatever their flaws may be, they are unusually willing to be politically incorrect and to upset the academic apple cart, and I respect that.

Expand full comment
Tori Swain's avatar

What certain subtests did you score worse on?

For a subtest to measure "g" it needs to be "bridgeable" even if you have cognitive deficiencies in that particular area. General intelligence can compensate for a HELL of a lot -- and that gets easier with the "bigger, harder" tasks.

I'm objecting to the idea that tasks that can be used to meaningfully evaluate IQ in people that don't use "g" (general intelligence), can be extended to people who use "g" instead of focused, subset learning.

I'd be interested in learning about IQ tests that are designed, in particular, not to flatter midwits. Know of any?

Expand full comment
Eremolalos's avatar

But g is probably not one *thing.* I think it's probably something like, say, athleticism. There are "subtests" for aspects of athleticism -- strength, speed, eye-hand coordination, flexibility, speed of learning routines, etc. You can test them, and they are probably pretty well correlated, but you can't test athleticism itself. Even if you tried to find the root cause of athleticism, you would not find any monolithic Something. There would be a genetic component, but general health would also be a component (if you get the gene but also have bad asthma, you're probably not going to the Olympics). Early training or play that develops things like eye-hand coordination is probably a contributor as well.

Another analogy: Maybe g is like building tallness. NYC has a high BTQ (building tallness quotient). But not all buildings are tall. And you can't find the tallness itself, separate from the buildings.

Expand full comment
Schneeaffe's avatar

>You can test them, and they are probably pretty well correlated, but you can't test athleticism itself.

Sure you can. A popular test for low athleticism is: get up from sitting on the ground without using your hands. That seems to be exactly what Tori is asking for: a problem in that particular area can usually be compensated for by enough general athleticism.

And g is definitely not like building tallness. You can't linearly combine buildings.

Expand full comment
lyomante's avatar

There is no athleticism, though; there are athletes who do a specific sport. In those sports one can rank how good a person is at it, but "athleticism" means either "prowess in a sport" (what a remarkable display of athleticism) or "physically fit, due to interest or participation in some unspecified sport."

Trying to measure it generally is a bit silly because it's expressed in very specific tasks. It's not always a general, transferable quality.

Even "physical fitness" may be a misnomer because of performance-enhancing drugs. The best in a sport might ironically be less fit and more unhealthy due to specialization: dropping dead suddenly, suffering constant injuries.

I think intelligence works similarly, in that you use performance on tasks to extrapolate general qualities, but it can be complicated. You can be a great programmer but bad at writing fiction, and neither reflects a general quality.

Expand full comment
Tori Swain's avatar

There's also, thinking about it, the problem that sometimes you're testing applications that aren't useful in the real world. For example, memorizing the digits of a phone number while listening to it is an application of "chunking of numerical digits in memory" (or however you're doing it).

If you take someone using "g" to synthesize whatever you're asking for, it's possible that you're asking for something too costly to synthesize, because it's not generally useful for anything (like reciting numbers backwards).

Offtopic musing: You're aware that "how many body parts you touch to the ground, when climbing to your feet from a sitting position" is a dementia test, right? : - )

Expand full comment
Tori Swain's avatar

You can test athleticism itself, but you'd have to undergo "retraining" because speed/strength/endurance aren't positively correlated at the higher levels. There's tradeoffs. Presumably the strongest "genetic component" would be androgen production (or possibly autism, depending) for the ability to gain muscles through the pain.

If you are looking at a person who's had to bootstrap most of their life, because "g" is all they've got...? They've got a lot more experience both measuring "g" and knowing how it works.

Expand full comment
Hieronymus's avatar

Ah, it sounds like you have a disagreement with the principle used to structure the tests. They want a lot of subtests measuring different things, each correlated with g, which requires each of the subtests to be simpler. More complex tests will naturally overlap more -- in addition to being harder to score, as you said.

Without digging out the report, I think I did worse on the backward or distracted versions of some tests than my performance on the forward or undistracted versions would lead you to expect. And there is an infernal digit circling test where I got reasonable accuracy at the cost of being painfully slow; the subjective experience of doing that one was viscerally unpleasant in a way that is difficult to describe.

Expand full comment
Tori Swain's avatar

A complex test that doesn't "naturally overlap more" could be as simple as a calculus question (find the area under...). That permits a graphical solution, a "mathy" solution (Calc I), and a "computer" solution (Calc III).

A friend of mine invented Calc III on his TI calculator (for Calculus I).

Expand full comment
Eremolalos's avatar

I have the same impression, and I work with some. Also they tend to be quite smart.

Expand full comment
None of the Above's avatar

Is psychometrics more-or-less the mathiest area of psychology?

Expand full comment
Leppi's avatar

Very interesting and impressive stuff! Did they publish any proper papers on this (couldn't easily find that on the page)? If not, why not?

Expand full comment
Frank's avatar

They talk about the possibility of range restriction explaining the lack of correlation between IQ and college GPA, but it seems plausible that smarter people tend to get into more rigorous colleges and choose more difficult majors. I did well in high school but then managed a 2.5 GPA at a college that I have no idea how I got into.

Expand full comment
None of the Above's avatar

Yeah, I suspect there's a huge effect on your school and major. If you test poorly, you probably have lousy SAT/ACT scores and so probably go to a less competitive college, and some majors are famously really hard if you're not pretty smart (math, physics, philosophy, engineering, chemistry, etc.).

Expand full comment
Eremolalos's avatar

A related stat I learned in grad school: grad school grades do not predict any measures of professional success, including number of papers published.

Also, I don't think more prestigious colleges are necessarily harder to get a high GPA at. They may be easier. (I'm not sure of that, though -- just an impression I formed after reading about grade inflation at Harvard.)

Expand full comment
beowulf888's avatar

I worked hard to achieve a high GPA in high school to gain admission to a good university. Once in college, I slacked off a bit, had a lot of fun, did a lot of drugs (especially psychedelics), but maintained a B+ average. No one asked for my GPA in my job interviews after college. They just wanted a person with a degree. I knew this upfront, so why bother killing myself? Too bad g doesn't measure sensible life goals. I've always been a Heinlein's too-lazy-to-fail sort of guy.

Expand full comment
Tori Swain's avatar

It's also plausible that smarter people do fun things like "look at how I can solve this problem!" and the graders say "your papers make my head hurt, and you aren't using force to solve the problem."

Or write an entire essay, and get flunked for splitting infinitives (actually, get flunked for writing an "insensitive" piece about the professor's home country). The Dean backed the professor. 5 grammar mistakes and you fail, that's that.

And then you flunk out for having too many "troublesome" bad grades.

Expand full comment
Sol Hando's avatar

I just took their test, and it produced results in line with what I’ve scored on other tests, including in the distribution of scores for different categories in line with my SAT, LSAT, and my own personal recognition of strengths and weaknesses.

So their test seems to be pretty accurate.

Expand full comment
Ferien's avatar
6dEdited

Do you have an explanation of why being anti-HBD on IQ isn't circular reasoning, where poor outcomes are explained by discrimination and the evidence for said discrimination is the poor outcomes?

Expand full comment
beowulf888's avatar

Can you rephrase your question? I’m not understanding what you’re asking.

Expand full comment
Deiseach's avatar

Got to hand it to you Americans, you certainly do get things done!

First USA pope, and now the Vatican website has been updated!

https://www.vatican.va/content/vatican/en.html

Even more amazing, they seem so far to have English translations of documents uploaded! What sorcery is this, is it licit?

Expand full comment
Hieronymus's avatar

I really like it, even if the old parchment-esque site felt like one of the last vestiges of the old Internet and I am sorry to lose it. Are there web design sedevacantists, arguing that the Vatican hasn't had a legitimate webmaster in twenty years? There ought to be.

I haven't really dug into the site yet, but I hope prompt English translations imply that they also took the time to reorganize the deeper structure of the site. That was pretty badly needed.

Expand full comment
Sol Hando's avatar

Yet the English Translation of the Bible on their site looks like it’s from 1998.

Expand full comment
Peter Defeel's avatar

Anything past 1611 is rubbish.

Expand full comment
Melvin's avatar

Is there a more up-to-date version of the Bible that they should be translating?

Expand full comment
Sol Hando's avatar

I hear the Book of Mormon is all the rage these days.

Expand full comment
Kenny Easwaran's avatar

That one hasn’t been updated much since the 1830s has it?

Expand full comment
Deiseach's avatar

That's practically yesterday by Vatican time 😀

Expand full comment
Stephen Pimentel's avatar

The Vatican website has had extensive English translations of documents for well over a decade.

Expand full comment
Deiseach's avatar

But not everything, it had a habit of giving English-language reports with links and then the linked material was in Italian because pffft, why can't you speak Italian if you're looking up Vatican stuff?

Expand full comment
Eloi de Reynal's avatar

Could anyone give me a realistic path to superintelligence?

I'm a bit of an AI-skeptic, and I would love to have my views contradicted. Here is why I believe superintelligence is still very far away:

To beat humans at most economically useful tasks, an AI would have to either:

1. have seen most economically meaningful problems and their solutions. It would not need a very big interpolation ability in this case, because the resolution of the training data would be good enough.

2. have seen a lot of economically meaningful problems & solutions, and inferred the general rules of the world. Or have been trained on something completely different, and be able to master economically useful jobs because of some emergent properties.

1. is not possible I think, as a lot of economic value (more and more, actually) comes from handling unseen, undocumented and complex tasks.

So, we're left with 2.

Great progress has been made just by trying to predict the next token, as this task is perfect for enabling emergent behavior:

- Simple (you have trillions of low-cost training examples)

- Powerful: a next token predictor having a zero loss on a complex validation text dataset is obviously superintelligent.

Even with a simple Cross-Entropy loss and despite the poor interpolation ability of LLMs, the incredible resolution of the training data allows for impressive real-world results.
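
For concreteness, the objective described above is just this (a minimal PyTorch sketch with made-up shapes, not any particular model's code):

```python
import torch
import torch.nn.functional as F

# Toy next-token objective: a language model emits logits over a vocabulary,
# and the loss is the cross-entropy of each position's prediction of the
# NEXT token. Shapes here are invented for illustration.
vocab_size, seq_len = 50, 8
logits = torch.randn(1, seq_len, vocab_size)          # model output
tokens = torch.randint(0, vocab_size, (1, seq_len))   # training text

loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at steps 0..T-2
    tokens[:, 1:].reshape(-1),               # the actual tokens at 1..T-1
)
print(loss.item())  # zero loss would mean perfect prediction of the corpus
```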

Now, it's still economically useless at the moment. The tasks being automated are mostly useless (I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably nefarious to economic growth).

Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful (not just bad).

I can't think of another powerful but simple task that AI could be trained upon. Writing has been optimized by humans to be the most compressed form of communication. You could train an AI to predict the next frame of a video, but it's soooo much noisier! And the loss function is a lot more complicated to craft to elicit intelligent behavior (MSE would obviously suck).

So now, we're back to RL. It kind of works, but I'm surprised by how difficult it seems to implement, even on verifiable problems.

Code either passes tests or not. Still, you have to craft a great advantage function to make the RL process effective. If you don't, you get a gemini 2.5 that spits out comments and try/catch blocks everywhere. It's even less useful than gpt 3.5 for coding.

So, still keeping the focus on code: you, as a human, need to specify what great code is, and implement an advantage function that reflects it. The thing is, you'd need an advantage function more fine-grained than what could fit in a deterministic expression.

Basically, you need to do RLHF on code. Which is costly, and scales not with compute but with human time. Because, sure, you can RLHF hard, but if you have only a few human-certified examples, you'll get an RL-ed model that games the reward model.

The thing is, having a great reward model is REALLY HARD for real-world tasks. It’s not something you can get just by scaling compute.
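
To illustrate the gaming problem, here's a toy sketch of a hand-crafted code reward (entirely hypothetical; every penalty term and weight is invented for illustration, not anyone's production advantage function):

```python
import re

def toy_code_reward(code: str, tests_passed: int, tests_total: int) -> float:
    """Hypothetical hand-crafted reward for RL on code."""
    reward = tests_passed / tests_total  # the verifiable part: tests pass or not

    # Hand-tuned style penalties: the brittle, easily-gamed part.
    reward -= 0.05 * len(re.findall(r"\btry\b", code))     # try/except spam
    reward -= 0.01 * code.count("#")                       # comment spam
    reward -= 0.001 * max(0, len(code.splitlines()) - 50)  # bloat

    return reward

# A policy can raise this score without writing better code (strip comments,
# inline everything), which is exactly the reward-gaming failure described above.
print(toy_code_reward("def f(x):\n    return x + 1", tests_passed=9, tests_total=10))
```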

Last year, the best counter-argument to my comment would have been “AI progress is so fast, do you really expect it to slow?”, and it would have been perfect. Now, I don’t think we have got any real progress from GPT-4 on economically valuable tasks, so this argument doesn’t hold.

Another convincing argument is that “we know the compute power of a human brain, and we know that it’s less than the current biggest GPU clusters, so why should we expect human intelligence to remain superior?”. That’s a really good argument, but it fails to account for the incredible amount of compute natural selection has put into designing the optimal reward functions (sentiment, emotions) that shorten the feedback loop of human learning and the sensors that give us data. It’s difficult to quantify precisely but I don’t think the biggest clusters are even close to that. Not that we’re the optimal solution to the intelligence problem, just that we’re still way short of artificial compute to compete against natural selection.

Here’s my take, I’d love to hear contradiction!

Expand full comment
Kenny Easwaran's avatar

I think most of the people who believe in superintelligence believe that it is just the next step after general intelligence. There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. If that’s right, then you don’t need to train on everything - you just need to train on enough stuff to get that general intelligence, and then start doing a bit better.

But I’m skeptical that there is any truly general intelligence of this sort - I think there are inevitable tradeoffs between being better at some sorts of problems in some environments, and other problems/environments. (Often enough, I think the tradeoffs will be with the same problems in different environments.)

Expand full comment
agrajagagain's avatar

"There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. "

When you say "all sorts of problems," what reference set are you drawing from? Just the sort of problems that some human somewhere on Earth could solve (or at least try to solve)? Or vastly more complex or bizarre problems that we haven't studied and perhaps are unable to study?

If it's the former, it seems hard to credit the idea that a generally flexible intelligence *couldn't* exist. Human brains exist. Different human brains are good at different sets of tasks, but we have sharp physical limitations on their power consumption and size/density. Even with only human brains, you could craft a pretty good facsimile of a generally flexible intelligence in the form of a team of people with different specialties--as long as you didn't mind a bit of extra latency while they figured out who was best-suited to tackle it.

It likewise seems hard to credit that a manufactured intelligence could never be individually as capable as a highly-capable human[1], simply because human brains are made of physics-obeying stuff doing entirely physics-obeying things, and it would be really, really weird if growing a squishy, low-speed meat computer were the only physically allowed way to do all of those things. So it seems like "AI as capable as highly-capable humans" is very likely to be possible.

But once you *had* such an AI, it seems like you could surpass human level capabilities with almost trivial extra effort. Things that seem likely to be easily possible with a constructed intelligence (that aren't easily possible with humans) include:

1. Allocating near-arbitrary amounts of lossless information storage with near-instant retrieval.

2. Increasing the speed at which it "thinks" up to whatever limitations are imposed by your hardware.

3. Enabling it to create multiple instances of itself, to give full focus to many different tasks at once.

4. Networking it together with other intelligence with different capabilities, allowing for team like collaboration with (potentially) much lower-latency and higher-fidelity communication than humans can achieve with speech and writing.

Now, it's quite possible that some of those things would actually be infeasible: the trouble with talking about an unknown technology is that you don't know its limits. But if even some of them were feasible, you'd end up with a human-level intellect with access to some degree of superhuman (perhaps enormously so) capabilities. Different people place different bars for "superintelligence" and so it's possible that even all of that together wouldn't pass some people's bars. But to me, at least, that potential capability set seems pretty damn alarming.

[1] Which is not to say that it will necessarily happen soon: I'm uncertain but leaning slightly pessimistic about the ability of LLM-style architectures to get arbitrarily good at approximating human capabilities. If they can't, the next key breakthrough could well be many years off.

Expand full comment
Tori Swain's avatar

We do have sharp physical limits on their power consumption. That's why particularly sharp people learn how to run their bodies at a lower temperature.

Expand full comment
Kenny Easwaran's avatar

What we've seen with every form of intelligence we've observed, whether it's humans or machines or animals, is that each of them has weaknesses compared to others. Some of them have lots of strengths over others - humans can outperform bobcats at lots and lots of things, even if bobcats outperform us at finding small animals in the woods. And even looking at other humans, we find that the ones that are especially skilled at some things have weird deficits in others, whether it's absent-minded professors, or Elon Musk believing whatever weird conspiracy theory he read on some tweets at 4 am.

If there were such a thing as an intelligence that really could do *all* sorts of problems, which I think the idea of AGI is actually about, it would actually be better than any actually existing intelligence at lots of things - particularly those complex and bizarre problems we haven't studied and may not be able to study.

But I don't think that sort of thing is actually possible. Instead, we'll get machines that are better than humans at more and more things, but there will still be some things they remain weirdly bad at compared to us, just as humans are weirdly bad at finding small animals in the forest compared to bobcats. It may well be that there are several classes of AI, each of which has different sorts of weaknesses compared to humans.

Expand full comment
agrajagagain's avatar

Meant to reply to this earlier, but a detailed reply will take a while. So short answer: I think you're conflating "long-term cognitive capability" and "learned specialization."

For example, you say "bobcats outperform us at finding small animals in the woods." Now, I expect that a bobcat picked at random will outperform Linda the investment banker from Chicago at this task[1]. But I suspect that a solid majority of humans could outperform a bobcat here--though how and what you measure makes a big difference--if they started learning the relevant skills from an early age. Quite a few could probably re-train into them as an adult in a matter of a few years.

While I don't doubt different people have different innate[2] strengths and weaknesses towards different sorts of cognitive task, in practice we specialize so much--and from such an early age--that it's hard to tell nature from nurture. Regarding AIs: it's certainly possible for different architectures to be more or less well-suited to certain sorts of tasks. But there's a degree of flexibility and extensibility there that changes the game. If a single architecture *can* be good at Tasks A, B and C, but each requires different training, well, why not just train it on all three? Even if they actually need different sets of weights and biases[3], you can just train 3 instances and then network them together into a single agent. And you could likely do that almost as well across architectures. To really be confident that humans *always* retain a niche, you'd need to identify things that *no* computer-based algorithm could compete with a human brain at.

[1] Though of course the bobcat is probably absolute rubbish at maintaining a valuable stock portfolio.

[2] Though this is a slippery word: genes are certainly "innate," but a lot of early environmental factors are barely less so. There's a very fuzzy edge around which environmental factors aren't.

[3] or whatever architectural equivalent future AI paradigms will use.

Expand full comment
Tori Swain's avatar

Absent-minded professors are a bit of a midwit myth. True geniuses tend to be good at lots of stuff. I know one: he's written novels, won a Nobel Prize, designed a goosinator, etc. There are particular things he's weak at... but for most "hard" problems he can use "g" to fill in for his weaknesses.

Agreed that AIs may have weird sorts of weaknesses. Suggest that those weaknesses are likely to be the result of programming by fallible humans. (aka "timeliness" is difficult to explain to an AI reading documents online.)

Expand full comment
NullityNine's avatar

My main disagreement is with the last paragraph. I agree that we don’t have anywhere near enough compute to simulate natural selection and find better reward functions. But I also think that reward functions that result in superintelligence are not too complex. I don’t know how to explain why I believe this, it comes largely from intuition. But I think given the assumption “reward functions for superintelligence are simple”, you can reasonably get that superintelligence will be developed soon, given the hundreds of researchers currently working on the problem.

Expand full comment
Tori Swain's avatar

Substrate issues could be involved. Assume that what's needed to get to superintelligence might be quantum in nature. That essentially eliminates all the LLMs and turns you toward "self-modifying code" and other sources of "more pseudo-randomness."

Expand full comment
Michael's avatar

> Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful

I'm not sure you can back this up. If doubling the compute doesn't double the performance, that's worse than linear. You're trying to show each doubling in compute doesn't even give the same constant increase on some metric of performance, and that metric would have to be linear with respect to the outcome you're trying to measure. I'm not sure we have such a metric, and some metrics, like AI vs human task duration, appear to be increasing exponentially.
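
One way to make this concrete: fit score = a + b * log(compute) and inspect the residuals. A hedged sketch with invented numbers (purely illustrative, not real benchmark data):

```python
import numpy as np

# Invented illustrative data: compute (arbitrary units) vs. a benchmark score.
compute = np.array([1, 2, 4, 8, 16, 32])
score = np.array([40, 48, 55, 61, 66, 70])

# Fit score = a + b * log(compute). If each doubling of compute buys the same
# score increment, the points sit on this line (logarithmic scaling).
b, a = np.polyfit(np.log(compute), score, 1)
residuals = score - (a + b * np.log(compute))
print(f"gain per doubling ~ {b * np.log(2):.1f} points")
print("residuals:", residuals.round(2))
# Systematically negative residuals at high compute would support "worse than
# logarithmic"; flat residuals would support plain logarithmic scaling.
```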

Expand full comment
Eloi de Reynal's avatar

True, thanks for pointing this out.

Or maybe we just have a logarithmic utility function vs. objective LLM performance (if we can measure that, which is the exact point you're debating).

True also for AI vs Human task duration, but that's only true for code if I'm not mistaken.

Expand full comment
TGGP's avatar

> I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably nefarious to economic growth

Why do you think that?

Expand full comment
Eloi de Reynal's avatar

Well, I feel that most of the software I've developed (mainly ML models and ERP software) has been used to help with problems whose solutions were human.

2 examples:

- Some features of the ERP software I've helped develop were related to rights management and paperwork assistance. For the first feature, the real consequence is that you keep an employee out of some part of the business, effectively telling him "stay in your lane," which is not good for personal engagement. The second is more pervasive: when you help people generate more reports, you are basically allowing middle managers and lawmakers to ask for more of them. So you end up with incredibly long contracts, tedious forms, and so on. Contracts were shorter when people had to type them and copy-pasting didn't exist.

- I've developed a complex ML model for estimating data that people could simply have asked other people for. When I discovered that, I told the customer: "you know, you could just ask these guys, they have the real numbers." But I guess they won't, because they now have a good enough estimate: net loss.

Now, of course, I've developed useful things, but I just can't think of any right now ^^

Expand full comment
Peter Defeel's avatar

All that software has to do to contribute to economic growth is be sold (or generate income if not directly sold - ie from Ads).

Expand full comment
Tori Swain's avatar

Contracts tend to be standardized in the best of cases. Which means that if your renter's contract is illegal, you can get the kindly judge to throw out every renter's contract in the city. Which is a hell of a stick to bring to a discussion with your landlord.

Expand full comment
Scytale's avatar

Looks like AI had zero influence on employment: https://x.com/StefanFSchubert/status/1948339297980936624

Expand full comment
Kenny Easwaran's avatar

I would be careful not to read too much into that graph without doing some more careful statistical analyses. There's a plausible enough picture in which the left 60% of the graph should see zero effect and the right 40% should see a roughly linear effect, and if I squint, it actually looks compatible with that. But also, 2.5 years is just a really short time frame, and there have been some much bigger short-term effects in some industries with the presidential transition.

Expand full comment
Wanda Tinasky's avatar

That's not consistent with the recent rise in unemployment for CS grads. I've heard too much anecdotal evidence to believe it's unrelated to AI. I wouldn't expect AI to have impacted other industries yet. It's too new. Only software companies are agile and tech-savvy enough to adjust to new technology so quickly.

Cost-saving innovations tend to roll out during recessions. I expect AI to really surface during the next one.

Expand full comment
Scytale's avatar

It's perfectly consistent: there's even too much software out there, so there are hiring freezes. And interest rates are still much higher than in the pre-covid era. We haven't seen a slowdown of employment in the professions that, according to economists, are most susceptible to AI-induced job loss, but we have seen a slowdown of employment in the professions most susceptible to economic downturns. The slowdown is not only in software but in real engineering too, perfectly consistent with firms cutting R&D budgets.

Expand full comment
Tori Swain's avatar

... only software companies are "agile and tech-savvy" enough...

You mean by hiring hundreds of thousands of "non technical people" who can't maintain their systems? SV isn't "agile or tech-savvy" anymore. And a dip in hiring "non technical folks" looks a lot like "relying on AI" I suspect. Even though the non-technical folks weren't doing jack or shit, and therefore google firing them doesn't affect google's monopoly (oh, did I just type that?)

Expand full comment
Collisteru's avatar

Part of being "agile" is trying risky new technologies to see what works.

Expand full comment
jms_slc's avatar

My wife and I are considering making a large change: we both grew up and live in the Mountain West, got married, and had children, who are now on the verge of making the transition to junior high school\middle school. We like where we live now but don't *love* it, and don't have extensive social ties here we'd be sad to leave.

My parents, and sister and her family, live on the East Coast, in the place we would normally not consider moving to, but as time passes, we've come to appreciate how much we've missed being so far from family, and are considering relocating to be closer to them. My parents are in general good health, so barring unforeseen events we expect to have years of quality time to spend.

What are the main concerns I should think through, aside from the usual cost of living and school quality issues?

Expand full comment
Neurology For You's avatar

Job and educational opportunities, but most places on the East Coast have those too.

I’ll just say, do it while the kids are young if you do it, moving with a teenager is very hard.

Expand full comment
None of the Above's avatar

One thing you may not have considered is the humidity. I live in the DC area, and my wife (who grew up in Utah) still finds the humidity here during the summer terrible after 20 years. We have two dehumidifiers running in our house!

Expand full comment
None of the Above's avatar

ETA: Visit wherever you're considering moving in the summer--now through mid-to-late August, say. That will give you more of a taste of what you're getting into.

Expand full comment
jms_slc's avatar

Oh for sure, we have visited the area in both the dead of winter and the height of humid summer. My hope would be that the proximity to both family and cultural amenities would override the discomfort of the weather

Expand full comment
Brad's avatar

After having grown up in the mountain west, moving to the east coast for 12 years, and having moved back to the mountain west…

Summers are painful when you have to tolerate them year after year on the east coast. The humidity and banality of the weather suck. No more 40-degree temp swings between days and nights, or between days. No more snow, and when it does snow it's an apocalypse.

Same with traffic when you have to tolerate it every single day. There are people everywhere on the east coast… it’s impossible to escape.

You’ll miss open landscapes. I’m convinced being used to seeing a big sky and far-reaching distances, then suddenly not, is akin to seasonal affective disorder. It does make trips back out west magical though.

If outdoor recreation is your thing, it’s worse on the east coast. It can still be done, but it’s less beautiful, less available, and more crowded.

If you have 100 kids, on average they will likely grow up with less “masculine” traits on the east coast. This has both good and bad attached to it; just beware. The cultures are indeed different.

Overall there are plenty of goods and bads… I moved back to the mountain west for my community and the views. If those weren’t important to me (or if I had community elsewhere) I may not have made the move back. Yet still sometimes I’m struck by the annoying aspects of hyper-masculine culture here (exaggerated because I do blue-collar work), just as I was struck on the east coast by the annoying aspects of hyper-feminine culture.

One last note… when I was in 7th grade my parents almost moved us to another state. I was onboard with the plan, but it ended up not happening. That move *not happening* was one of the luckiest moments of my life—unbeknownst to me at the time—because having grown up in one area my entire adolescence gave me friends and a community that will be with me forever. I have a true “home” moreso than my parents ever did.

Expand full comment
jms_slc's avatar

I always felt the huge landscapes in the West helped (me, at least) keep things in perspective. Everything human scale is dwarfed by the surroundings. My parents moved us to Utah in 7th grade and it turned out to be the best thing that happened to me.

I work from home so commuting wouldn't be a major daily annoyance, and I don't have a strong community network here in CO where I currently live, but also don't really anticipate building a strong community in the NE outside of my family. Interesting point about the masculine\feminine culture; I get somewhat tired of the blue collar masculine cultural aesthetic mainly because I'm not part of that demographic, but I'm also tired of the people who make "being outdoorsy" their entire personality. Anyway, lot to think about; I appreciate the discussion

Expand full comment
Collisteru's avatar

Which part of the East Coast? Massachusetts is very different from Maryland.

I also grew up in the Mountain West and lived on the East Coast for a time as a child. Overall the mountains offer a better quality of life: they're less crowded, cheaper, generally cleaner, and in every way healthier.

The biggest advantage of East Coast life is proximity to America's great cultural institutions. If you live in the NE megalopolis, you are more plugged in to world culture than the great majority of humans. Since it's more densely populated you also benefit more from network effects. Your family is even an example of this.

As with so many things in life it comes down to values. I'd say if you care more about people, move to the coast. If you care more about nature or lifestyle, stay in the Mountain West.

Expand full comment
jms_slc's avatar

I didn't want to get too specific, in part not to bias responses too much, but the locations we're talking about definitely matter. We're from Salt Lake City (not Mormon), and now live outside Boulder, CO. My parents and sister's family are in suburban New Jersey outside Philadelphia. We love the cultural access in the NE, but the crowding and humidity are the big turn-offs for me. We spent 5 years in Austin so we're familiar with scorching heat\humidity and don't enjoy it. If there were any way to arrange matters such that we all lived in the West that would be ideal but that's not a viable option.

Expand full comment
None of the Above's avatar

We visit Utah more-or-less every year, and one thing that's striking is how different the assumptions wrt family size are. Our family of five is a little too big for a lot of stuff elsewhere, and especially in the DC area--they can accommodate you, but you're a bit of an exception. In Utah, we're a small family.

Expand full comment
jms_slc's avatar

Totally understand this point; Utah is a great place for large families. Somewhat relatedly, the area we live in now is mostly upper-middle class striver types with both parents working; in our family I'm the sole breadwinner and my wife stays home and we can definitely sense the mixture of resentment and disdain from some people here. Being in an area with a larger diversity of acceptable life choices would be refreshing

Expand full comment
Yug Gnirob's avatar

Why would you not normally consider moving there?

Expand full comment
jms_slc's avatar

It's a part of the country with a different culture, climate, and geography than I'm used to. I've enjoyed my many visits there, and within two or three hours drive there is a large array of things to do and places to see, but the place we'd be moving is itself not a big draw.

Expand full comment
thewowzer's avatar

I'm pretty far behind you as my wife and I just had our first child in January, so while I can't answer your question, I can say that even these first six months (and the year and a half of marriage before having a kid) have been a time of rich fullness just due to the fact that my wife's family and my parents all live close by. Our location doesn't account for much of that, as I live smack in the middle of North Dakota.

I'm sure that we would still be very much enjoying life together even if none of our family were close by, but having family around definitely adds an extra depth and richness that I feel would make a move like you're describing worth it.

Expand full comment
Melvin's avatar

I'm not even so sure about adding depth and richness, but it sure would be nice to have free babysitting.

OP's kids are a bit older, but I do regret having gone through the small kids phase with no family nearby; it takes a huge amount of pressure off when there's someone who can watch the kids sometimes and let both parents have a little break.

Expand full comment
jms_slc's avatar

Thanks, yes, the babysitting (or lack thereof) in the kids' early years was incredibly draining

Expand full comment
thewowzer's avatar

Very true, it's been insanely helpful.

Expand full comment
jms_slc's avatar

Thanks for replying. For me this is a choice between great climate and access to great natural beauty, or closeness to family and the ability to share our life in a more casual, regular way than guesting\hosting family for a week or more in their\your house. For years the choice was obvious.

Expand full comment
nifty775's avatar

Been having a lot of fun working with ChatGPT on an alternate history scenario where the transistor was never invented: somehow, silicon (and germanium etc.) just doesn't work as a semiconductor in this alternate timeline. It seems like humanity would have invented vacuum microelectronics instead? Maybe done more advanced work with memristors too? It would certainly be a different world: electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations, but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much, much smaller.

Without electronic capital markets you'd have a radically different 20th century: slower growth, more stability, fewer capital flows in and out of countries. This might've slowed China's growth specifically: no ecommerce, less investment flowing into China originally, no chance for them to hack & steal Western technology. Also a decent chance that the USSR's collapse might not have been as dramatic; they might've lost the Baltics and eastern Europe, but kept going otherwise. The US would probably be poorer without Silicon Valley, plus Wall Street would be smaller without electronic markets. Japan might really excel at the kind of precision mechanics & analog systems that dominate this world. So it'd be a more multipolar world overall.

(I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)

Expand full comment
Charles Midi's avatar

Sounds like a fun project. If you haven't seen it already, you may enjoy some of the summaries of pre-IC space station proposals here: https://projectrho.com/public_html/rocket/spacestations.php#atlasstation . No ICs *might* mean a much larger human presence in space. That plus an intact USSR could be very interesting.

Expand full comment
Jeffrey Soreff's avatar

Copying an AI summary from a query "cold field emission microelectronic vacuum tubes"

>Cold-field emission microelectronic vacuum tubes, or vacuum microelectronics, utilize the mechanism of electron emission into a vacuum from sharp, gated or ungated conductive or semiconductive structures, avoiding the need for thermionic cathodes that require heat. This technology aims to overcome the bulkiness of traditional vacuum tubes by fabricating micro-scale devices and offers potential applications in areas such as flat panel displays, high-frequency power sources, high-speed logic circuits, and sensors, especially in harsh environments where conventional electronics might fail

Admittedly these are still higher voltage and less dense devices than semiconductor FETs, but electronics would not have been limited to hot cathode bulky tubes even if silicon transistors never existed.

Expand full comment
Brendan Richardson's avatar

> (I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)

Really? This is basically the premise of the video game Fallout.

Expand full comment
Mark Roulo's avatar

"It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller."

Don't forget that you can probably have fairly large computer memories (in the context of vacuum tubes ...) because of core memory:

https://en.wikipedia.org/wiki/Magnetic-core_memory

PDP-11s shipped with core memory and you can do QUITE A LOT with 1 MB (or less).

And you don't need transistors for hard drives, either :-)

Imagine "programs" being distributed on (error correcting encoded) microfiche.

Sounds like fun in a steam-punk way.

Also, you can easily imagine a slow internet. Think something like 1200 baud (or faster) between major centers (so very much like early Usenet). You won't spend resources for images or pretty formatting, but moving high value *data* should work.

https://en.wikipedia.org/wiki/Computer_network
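
Back-of-envelope on what 1200 baud buys you (my arithmetic, assuming standard 8-N-1 framing, i.e. 10 bits on the wire per byte):

```python
# 1200 baud with 8-N-1 framing: 10 bits per byte, so ~120 bytes/second.
BYTES_PER_SEC = 1200 / 10

for label, size_bytes in [
    ("one page of text (~3 KB)", 3_000),
    ("a 100 KB program image", 100_000),
    ("1 MB of core-memory contents", 1_000_000),
]:
    minutes = size_bytes / BYTES_PER_SEC / 60
    print(f"{label}: {minutes:.1f} minutes")
# ~0.4, ~14, and ~139 minutes respectively: fine for high-value data,
# hopeless for images, which matches the early-Usenet picture above.
```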

Expand full comment
REF's avatar

Just imagine the electrical and cooling requirements for a GPT running on vacuum tubes :)

Expand full comment
Ch Hi's avatar

About the time transistors were becoming widely used, micro-vacuum tubes were also in use. I don't know what their life was, and clearly transistors were found superior, but they were competitive in some applications.

So, yes, vacuum micro-electronics would have been developed. I've got doubts that memristors would have shown up any more quickly than they did here.

It's not clear that vacuum electronics couldn't have been developed to the same degree of integration that transistors were, so I'm not sure the rest of your caveats hold up. They might. I know that vacuum electronics were more highly resistant to damage from radiation, so there might well have been a different path of development, but I see no reason to assume that personal computers, smart phones, routers, etc. wouldn't have been developed, though they might have been delayed a few years. (That we haven't developed the technology doesn't imply that it couldn't have been developed.)

Expand full comment
Jeffrey Soreff's avatar

Agreed! I hadn't seen your comment in time, and replied with essentially the same point.

Expand full comment
Erica Rall's avatar

It's also possible to miniaturize electromechanical switching to IC scale with MEMS and NEMS relays. It's a lot slower than transistors, which is why it's only used for specialty applications, but it's possible.

Expand full comment
Jonas's avatar

This is so interesting

Expand full comment
luciaphile's avatar

My husband was forced untimely into a quick round of unsatisfactory car shopping after All-State took its sweet time deciding to total his perfectly driveable old Subaru. He'd been rear-ended by someone who spoke no English, had no proof of insurance on him, and said he had insurance but didn't know the name of the company before driving away (the babies were crying; the side of a freeway is no place for a half dozen children). Miraculously, the man did have insurance - one time the state doing its seeing-like-a-state thing was helpful.

As a result - life having other distractions, and he having little interest in modern cars - he got steered into buying his first “new” car.

That’s something that won’t ever happen again!

All those new features he didn’t want to pay for … and Subaru doesn’t need to haggle, period.

He was set to get his two requests - an ignition key and a manual, slam-shut gate - swapped in from a dealer in another city, but in the event a buyer there was simultaneously grabbing that one, so the one they brought in was sadly keyless.

We should have just returned home (a hour plus away) but a certain amount of time had been invested, and a planned road trip was upcoming.

Question: should I get him one of those faraday cage thingies? It has been established that he won’t stuff the fob in foil every night, nor remember to disable it.

He didn’t even know about this car-stealing method, not being much online and certainly not on NextDoor.

There is no consensus on the internet about the need for this. Possibly already passe, superseded by new methods of thievery.

We live in a city that had 18,000 cars stolen last year. Not generally Subarus, probably... but anyway. The car is within 50 or 60 feet of the fob, in an apartment parking lot, not within view.

Our cars, when we’ve occasionally, inadvertently left them unlocked (long habit from where we lived previously) have reliably been rifled through, though it was a wash: we had neither guns nor electronics nor drugs. Once, memorably, they stole his car manual. I recall thinking that they’d better come by around daylight savings time and change the clock for him.

Expand full comment
Christina the StoryGirl's avatar

A couple of strategies I use with my vulnerable Hyundai in a large city with many many many stolen cars:

1. Make the interior of your car look like a poor, lazy, low-class, possible drug addict is living in it. Leave an empty fast food cup or two in the cup holder, and some random receipts and leaves and other trash tossed around. The cabin visible through the windows should look unpleasantly chaotic.

The goal here is to make it look like it's *completely impossible* that there is *anything* in the car worth breaking a window for; you want your car to look like the person driving it couldn't possibly have any loose change or small bills in a compartment, because they would have already spent them, and no way are there any nice sunglasses or first aid kits or emergency cash or snow chains or changes of clothes or any of the kind of useful stuff I keep neatly organized in my trunk.

2. Consider installing an ignition kill switch. My car's dashboard lights will come on when I insert my key, but my engine won't turn over unless I engage the hidden button which my friend installed for me. While a smart thief could probably find said hidden button if they checked around a bit, I'm fairly confident it wouldn't occur to them to check around a bit, as a) ignition kill switches are incredibly rare and b) my car looks like it belongs to a drug ghoul who wouldn't know about ignition kill switches.

Expand full comment
luciaphile's avatar

Thank you! This is not a problem for my car, which doesn't have tinted windows and can easily be viewed from outside. I just don't leave anything in it. It has one cool feature too: it's so old that the interior lever to pop open the trunk doesn't work. The trunk is truly a lock box, which is handy if you're hiking or floating a river or something. I take my key; people can leave their stuff in my trunk. There's no way to get in.

In general, my practice in my last city was to leave the car unlocked rather than have the window smashed unnecessarily.

Years ago, that city had so little crime that we would say if you left your car unlocked somebody might leave you a present in it.

Now I don’t like to have the stuff in the glove box thrown about. Makes you feel kind of bad so I lock it.

We shall see what happens with my spouse’s car. It’s in its beautiful new condition. In fact, we took a road trip and after we got back, I spent half a day restoring it to new car condition.

Usually, I’d be looking for the first dent to happen with relief, but in this case I want to keep it nice; I feel like it’s not a car he’ll want to keep forever.

Kenny Easwaran's avatar

I’ve never heard of this sort of faraday cage thing. How many cars have been stolen from the apartment parking lot in the last few years? Does insurance cover such thefts? My guess is that a precaution against this one method of theft isn’t that likely to make a big difference, particularly since theft is not that common anyway (apart from the weird Kia/Hyundai exploit that was discovered during the pandemic), but if the faraday cage is cheap and convenient and easy to set up in the tray where you put keys and wallet when you get home anyway (or however you do it), it could still be net worth it.

luciaphile's avatar

It was often mentioned in my former neighborhood where a car was stolen or at least ransacked virtually every night. These were much nicer cars and trucks than we then owned. It was a little mysterious. I just couldn’t picture all these vehicles being silently hotwired, and nobody having the slightest knowledge of it inside the house. That’s when people started talking about this other business with the key. A few admitted that they had left their key in the car or rather the fob.

It was a house and I parked my car in the garage anyway but most people used their garage for storage. My husband’s old Honda was in the driveway, but was not very attractive.

I just looked on a forum for people who are crazy about the make of car we just bought and the subject really didn’t come up.

Two cars were legit stolen from our complex in the last couple years, while another one was TikTok-challenged and driven away to a shopping center nearby. I don't know about the surrounding neighborhood because I didn't get on Nextdoor after we moved here.

The latter Kia or Hyundai was destroyed, however, between the broken windows and the damage to the steering column. It was my neighbor’s old vehicle and he was super sad though he bought a much nicer car.

Tori Swain's avatar

In Toronto, the police department advises leaving your car keys outside to make it easier for thieves, so they don't break into your house. The more you know!

Kenny Easwaran's avatar

I thought they said to keep the keys *inside*, but right next to the door, so that *if* someone breaks into your house, they don’t go ransacking the house and possibly turning violent. But a lot of performatively anti-woke people happily misinterpreted that as though they were saying to leave keys *outside* by the door.

Tori Swain's avatar

It's possible. You're not linking, and I'm not linking. I'm going to say about Vancouver that people had special insurance (high insurance) for car thefts, because that was the preferred way for drug users to pay for their habit. It was otherwise a very safe city.

Contrast Philadelphia, where even in Center City, folks get insurance for muggings.

Tori Swain's avatar

If you have a parent that refuses to put his keys into an exact place (a nice shiny foil box) every night, you have a parent that probably shouldn't be trusted with a motorized vehicle.

luciaphile's avatar

A rather stupid syllogism, but I don’t own such a box. That’s what I’m trying to learn - if it’s worth buying one. For some reason I thought this would be an easy layup for this crowd.

Kindly's avatar

Wait, what's the car-stealing method?

luciaphile's avatar

Supposedly using a device to capture the signal pinging between the key fob and the vehicle. How you would start the vehicle thereafter, away from the fob, I don't know. Or maybe it's just a means to open the vehicle and throw stuff from the glove box around.

I really thought this was a thing as it was so commonly referenced, but now I'm not sure if it was imaginary/dreamed up by people who didn't want to admit they left their fob sitting in the car.

Michael's avatar

A relay attack lets a thief extend the range of the key fob by retransmitting its signals, allowing them to start the car. It doesn't let them clone the key fob. Once started, cars will not automatically shut off when the key goes out of range. Some cars have protection against relay attacks, but I think most do not. The thief has to get close enough to the key fob to pick up the signal, and they need the key's signal in real time; they can't record the signal and replay it later.
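
To make the record-versus-relay distinction concrete, here's a toy challenge-response sketch in Python (a hypothetical scheme for illustration only, not any manufacturer's actual protocol). A recorded response fails against a fresh challenge, but a live relay that merely forwards bytes is accepted:

import hmac, hashlib, os

SHARED_KEY = os.urandom(16)  # hypothetical secret paired between car and fob

def car_challenge() -> bytes:
    return os.urandom(8)  # fresh nonce per attempt, so replays fail

def fob_response(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_accepts(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(response, fob_response(challenge))

stale = fob_response(car_challenge())           # response recorded yesterday
assert not car_accepts(car_challenge(), stale)  # replay: rejected

live = car_challenge()
relayed = fob_response(live)   # thief's radios pipe the live challenge to the fob
assert car_accepts(live, relayed)               # relay: accepted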

luciaphile's avatar

Yes, that’s what I meant. Didn’t mean they would randomly capture signals and store them for later use.

I never had a reason to think about it before. My own car is a very basic car from 2009.

I had just absorbed by osmosis this idea about newer cars.

But upon researching it, I couldn't find that people seem particularly worried about it after all. Nor any agreement about what's going on with the key - whether it's really talking to the car or sitting there inert.

Not sure if the subject is just really well understood only by those who steal cars and those who know a lot about electronics.

None of the Above's avatar

This is a very old attack in the cryptographic literature. IIRC, it was originally called the mafia fraud attack, though really it's kind of just an instance of a man-in-the-middle attack.
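
The standard countermeasure in that literature is distance bounding: the verifier times the challenge-response round trip and rejects any answer too slow to have come from nearby, since the relay hop adds delay. A rough sketch of the arithmetic in Python (toy numbers, nothing like production firmware):

C = 3e8  # speed of light, m/s

def verifier_accepts(rtt_seconds: float, max_distance_m: float = 2.0) -> bool:
    # A signal covers at most rtt * c / 2 one way; any relay
    # processing delay inflates the apparent distance.
    return rtt_seconds * C / 2 <= max_distance_m

print(verifier_accepts(2 * 1 / C))          # fob 1 m away: True
print(verifier_accepts(2 * 30 / C + 1e-6))  # fob relayed from 30 m away plus
                                            # 1 microsecond of relay delay: False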

Vermillion's avatar

Something interesting I learned today*: Among professional historians, antiquarians and the like there is a widespread consensus that Jesus of Nazareth was a real, historical person. Important disclaimer, this distinguishes the historical personage from any supernatural capabilities he may or may not have had.

They cite about half a dozen non-biblical references: Tacitus, Josephus, Pliny the Younger, Suetonius, Mara Bar-Serapion, Lucian, and the Talmud. Most of these are pretty brief or oblique, but they converge on a pretty recognizable figure. The evidence that he existed is a lot stronger than the evidence that he was a mythical creation, which is why mainstream scholars of all stripes have landed there.

The other interesting thing about this: the scholarly consensus is a lot stronger than public perception that Jesus was a historical person, and to be sure I include myself in that number (or would have last week at least): ~76% of Americans across all religious and political affiliations believe he existed: https://www.ipsos.com/sites/default/files/ct/news/documents/2022-03/Topline%20-%20Episcopal%20Church%20Final%202.17.22%20CLEAN.pdf (question 10)

ChatGPT summary: https://chatgpt.com/share/68877913-12f8-8011-b978-ba1c0006a45b

*Several days ago but was waiting for a new OT

quiet_NaN's avatar

> The other interesting thing about this: the scholarly consensus is a lot stronger than public perception

This seems unsurprising. If you asked scholars and Americans if Sudan existed, I am sure you would get similar answers.

Also, "there existed a historical person who resembles the person in the tale" is an excessively low bar to clear. Siddhartha Gautama likely existed. Jesus likely existed. Mohammed very likely existed. Gilgamesh likely existed. Alexander very likely existed. The Iliad could well be based on a historical conflict. Moses could well have existed.

Vince Bowdren's avatar

The nicest argument I have heard for Jesus's historical existence is from the lack of denial: critics of Christianity in the early centuries AD made lots of attacks of almost every kind, but they apparently didn't claim that there never was any such person as Jesus. The point being that if there had been any doubt, they would have jumped on that attack for sure.

Erica Rall's avatar

A year or two ago, I watched an extended interview with Richard Carrier, who's one of the highest profile people arguing against the historicity of Jesus. He's a classical historian by training and a pop historian and Atheism advocate by vocation. IIRC, his thesis is that Christianity started among ethnic Jews living in the Roman world and followed what was then a fairly common template of venerating a purely spiritual messianic figure, and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier.

Carrier made some interesting arguments about the mythological pattern which I lack the expertise to assess in detail. Where I do think he rather badly misstepped was in making a big deal out of the Gospels and Epistles being written in Greek rather than Aramaic. I don't think that needs much explaining given how few classical documents have survived to the present. Greek was a major literary language throughout the region while Aramaic was not, and Christianity caught on much, much more in Greek- and Latin-speaking areas than in Aramaic-speaking areas, so only Greek foundational texts surviving isn't particularly surprising. The wikipedia article for "ancient text corpora" cites estimates from Carsten Peust (2000) that our text corpus from prior to 300 AD is 57 million words of Greek, 10 million words of Latin, and 100,000 words of Aramaic.

bagel's avatar

Where did you get the idea that Aramaic wasn't a significant language of the region at the time? It was the lingua franca from the Levant to Persia for centuries.

The Talmud alone is in the ballpark of 2.5 million words, most of it in two dialects of Aramaic and most of the rest in Hebrew. While it was compiled later than 300 AD, it contains a body of work spanning many centuries, stretching back well into the Second Temple period.

The Mishnah, compiled centuries earlier, was primarily Hebrew but with some Aramaic.

And that wikipedia page lists 300,000 words for Hebrew - but the Tanakh alone has over 300k words, the Torah 80k of them.

The Dead Sea Scrolls, which are only partially the Torah, contain fragments of nearly 1k manuscripts. https://www.imj.org.il/en/wings/shrine-book/dead-sea-scrolls.

All that is to say, even if we really do have fewer surviving words of Aramaic than Greek, that almost certainly has more to do with our sample than the ancient source.

Erica Rall's avatar

I was only counting Aramaic, not Hebrew, and trying to time-box it to Classical Antiquity. That wikipedia page lists 100k words for Aramaic and 300k for Hebrew. But even if you want to count both together on the grounds that they're closely related languages, and also extend the time window to include the Talmud, that's still a lot smaller than the corpuses for Latin or Greek. That said, I am prepared to be told that the wikipedia page is wrong (either misinterpreting the source or relying on a bad source) and would be grateful if you could point me at a better one.

My impression was that Aramaic was a significant spoken language in the Middle East, but much less significant as a literary and documentary language in the broader Mediterranean world than Greek or Latin. So someone writing for an audience in the Levant might well choose to write in Aramaic, but someone trying to write for an audience throughout the Roman Empire would probably do so in Greek or Latin depending on their own fluency, the type of document they were writing, and what parts of the Roman Empire they were most focused on.

I'm also pretty sure there was a much better infrastructure for copying and preserving Greek and Latin works from Classical Antiquity through the Middle Ages than there was for Aramaic or Hebrew, especially when it came to Christian religious texts. The people making and keeping copies of Greek and Latin documents in the middle ages were mostly Christian or Muslim, while the ones doing so for Hebrew and Aramaic were mostly Jewish. There were a lot more of the former than the latter, giving Greek and Latin documents a better chance at surviving in general. And Jewish scribes would be a lot less likely to be interested in preserving Christian gospels than Christian scribes would be, with Muslim scribes probably somewhere in between. Taken together, if there were Aramaic or Hebrew gospels, it isn't surprising at all that they weren't preserved to the present.

bagel's avatar

I think your last paragraph is crucial. Our modern sample tells us a lot more about the process of transmitting and recovering that history than it does about the original written corpus.

And given the scale of the numbers they cite from Hebrew and Aramaic, our estimates are always just one or two Dead Sea Scroll or Cairo Geniza type finds away from being totally obsolete.

I don't have a better source than the referenced page from wikipedia - just reasons to believe that its numbers represent a significant underestimate. And that, therefore, we shouldn't confidently draw conclusions yet from comparing the numbers.

Erica Rall's avatar

That seems reasonable. Thank you for the notes of caution about the numbers.

Melvin's avatar

> and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier

That doesn't sound like he's arguing against the historicity of Jesus at all then, if he's saying that Jesus is based on an actual historical person. That just sounds like the mainstream view all over again -- Jesus was real, some of the stories told about him are false, and we can quibble about exactly how much was real.

Erica Rall's avatar

Carrier is loudly and explicitly claiming that there was no actual historical person who lived in Judea c. 30 AD matching the description of Jesus of Nazareth, and that pre-Pauline proto-Christians would have agreed with this, as they would have believed in a purely spiritual Christ and told allegorical stories about him set in a spiritual realm. Per Carrier, the claim that Jesus was a human who ministered in Judea was an invention of Paul and the Gospel writers, who re-wrote the existing stories *as if* Jesus were a real person who had been physically present in and around Jerusalem.

Melvin's avatar

Right, I think I misunderstood the sentence I quoted, I thought he was saying that they'd merged their spiritual messiah with stories about some actual bloke.

Erica Rall's avatar

I can see how what I wrote could be read that way, sorry.

Peter Defeel's avatar

Greek was the lingua franca at the time, and it was what educated people largely wrote in, particularly in the east. Marcus Aurelius even wrote his Meditations entirely in Greek.

In no way would the writers of the gospels write in Aramaic. John and Luke may not have even spoken it.

Erica Rall's avatar

Exactly. If there was an Aramaic proto-gospel, it would have had to have been very early and very niche and it probably would have been oral rather than written. Anyone writing in the Eastern Mediterranean for a broader audience would have done so in Greek.

Deiseach's avatar

Oh, Carrier is the guy that Tim O'Neill has the beef with. Doesn't think much of Dr. Carrier's arguments 😁

I'm Irish Catholic so you know which side of the fence I'm coming down on here, but I do have to admit to a bias towards the Australian guy of Irish Catholic heritage as well! I can't say it's edifying, but it's fun:

https://historyforatheists.com/jesus-mythicism/

Here's the Carrier one (of several):

https://historyforatheists.com/2016/07/richard-carrier-is-displeased/

"It seems I’ve done something to upset Richard Carrier. Or rather, I’ve done something to get him to turn his nasal snark on me on behalf of his latest fawning minion. For those who aren’t aware of him, Richard Carrier is a New Atheist blogger who has a post-graduate degree in history from Columbia and who, once upon a time, had a decent chance at an academic career. Unfortunately he blew it by wasting his time being a dilettante who self-published New Atheist anti-Christian polemic and dabbled in fields well outside his own; which meant he never built up the kind of publishing record essential for securing a recent doctorate graduate a university job. Now that even he recognises that his academic career crashed and burned before it got off the ground, he styles himself as an “independent scholar”, probably because that sounds a lot better than “perpetually unemployed blogger”."

And then he really gets stuck in 😀

Erica Rall's avatar

Yeah, my impression of Carrier is that he seems clever and interesting, but the actual substance of his arguments seems pretty weak, even aside from my priors about who's likely to be right when a lone "independent scholar" argues that the prevailing view of academic experts is trivially and obviously false on a subject within their field.

I'll check out the O'Neill article, thank you.

Deiseach's avatar

O'Neill is fun and I trust him because although he's an atheist himself, he gets so pissed-off by historical errors being perpetuated by online atheists and the mainstream that he goes after them.

He does have a personal grudge going with Carrier, so bear that in mind. Aron Ra is another one of the Mythicists with whom O'Neill tilts at times, but not as bitterly as with Carrier.

I was amused by the reference to Bayes' Theorem (seeing as how that's one of the foundations of Rationalism) in the mention of Carrier's book published in 2014:

"Two years ago Carrier brought out what he felt was going to be a game-changer in the fringe side-issue debate about whether a historical Jesus existed at all. His book, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield-Phoenix, 2014), was the first peer-reviewed (well, kind of) monograph that argued against a historical Jesus in about a century and Carrier’s New Atheist fans expected it to have a shattering impact on the field. It didn’t. Apart from some detailed debunking of his dubious use of Bayes’ Theorem to try to assess historical claims, the book has gone unnoticed and basically sunk without trace. It has been cited by no-one and has so far attracted just one lonely academic review, which is actually a feeble puff piece by the fawning minion mentioned above. The book is a total clunker."

O'Neill's quote from Carrier proudly displayed on his website:

"“Tim O’Neill is a known liar …. an asscrank …. a hack …. a tinfoil hatter …. stupid …. a crypto-Christian, posing as an atheist …. a pseudo-atheist shill for Christian triumphalism [and] delusionally insane.” – Dr. Richard Carrier PhD, unemployed blogger"

Deep calls to deep, and so does Irish invective between the sea-divided Gael so that's probably why I like O'Neill so much even apart from his good faith in historical arguments.

Jim Nelson's avatar

Academics don't view denial of Jesus' existence as much of an argument. Most call it "fringe."

If you're interested in going deeper, I would recommend looking into the modern quests for the historical Jesus, which not only surfaced and studied extrabiblical sources on Jesus, but also developed methodologies for evaluating the gospels:

https://en.wikipedia.org/wiki/Quest_for_the_historical_Jesus

Academics I've read and listened to lean toward the conclusion that only two events in the gospels about Jesus' life are reliable: his baptism by John the Baptist, and his execution by the Romans. (These both rely on the criterion of embarrassment: because these events undermine his followers' beliefs, their inclusion in the gospels suggests they actually occurred.) Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations.

The quests for the historical Jesus also bleed into modern understandings of how the gospels were authored, such as the dominant theory of Markan priority, and the theoretical Q document.

Hieronymus's avatar

"Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations."

This is true, but in the context of discussing a New Atheist figure it's worth adding some context. For most of these scholars, rejection of the supernatural is a premise rather than a conclusion. It's often the case that an academic will write, "Since its miracle stories are false, this document must be late," only for his reader to say, "Since this document is late, its miracle stories must be false," without realizing the circularity.

FLWAB's avatar

C. S. Lewis wrote on this very thing in the introduction to his book "Miracles":

"Many people think one can decide whether a miracle occurred in the past by examining the evidence ‘according to the ordinary rules of historical enquiry’. But the ordinary rules cannot be worked until we have decided whether miracles are possible, and if so, how probable they are. For if they are impossible, then no amount of historical evidence will convince us. If they are possible but immensely improbable, then only mathematically demonstrative evidence will convince us: and since history never provides that degree of evidence for any event, history can never convince us that a miracle occurred. If, on the other hand, miracles are not intrinsically improbable, then the existing evidence will be sufficient to convince us that quite a number of miracles have occurred. The result of our historical enquiries thus depends on the philosophical views which we have been holding before we even began to look at the evidence. The philosophical question must therefore come first.

"Here is an example of the sort of thing that happens if we omit the preliminary philosophical task, and rush on to the historical. In a popular commentary on the Bible you will find a discussion of the date at which the Fourth Gospel was written. The author says it must have been written after the execution of St. Peter, because, in the Fourth Gospel, Christ is represented as predicting the execution of St. Peter. ‘A book’, thinks the author, ‘cannot be written before events which it refers to’. Of course it cannot—unless real predictions ever occur. If they do, then this argument for the date is in ruins. And the author has not discussed at all whether real predictions are possible. He takes it for granted (perhaps unconsciously) that they are not. Perhaps he is right: but if he is, he has not discovered this principle by historical inquiry. He has brought his disbelief in predictions to his historical work, so to speak, ready made. Unless he had done so his historical conclusion about the date of the Fourth Gospel could not have been reached at all. His work is therefore quite useless to a person who wants to know whether predictions occur. The author gets to work only after he has already answered that question in the negative, and on grounds which he never communicates to us.""

quiet_NaN's avatar

Even if I were a theist, I would be doubtful about miracles. From what we know of the observable universe, which is vast beyond comprehension, it seems that whoever created it is really big into the laws of physics. Breaking the laws of physics to help a few people in ancient Judea seems really out of character.

And even if she did, she would have taken great care that the miracles are entirely deniable in our age. Why have Jesus walk over water and heal the sick when he could just have placed a cubic kilometer of titanium monument near Jerusalem, inscribed with the correct faith, that will heal all illnesses in any believer who touches it? For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of being able to update on statistically significant miraculous results. Sounds fishy.

If a historian finds a document which uses Latin phrases which will not appear for another few centuries after the document claims to have been written, he will conclude that the document is a forgery, and not consider the possibility that it might have been written by a time traveler, even though that would explain the evidence equally well.

FLWAB's avatar

>Breaking the laws of physics to help a few people in ancient Judea seems really out of character.

Lewis spent a whole chapter addressing this critique in the book (Chapter 12, The Propriety of Miracles). Here's some excerpts:

"If the ultimate Fact is not an abstraction but the living God, opaque by the very fullness of His blinding actuality, then He might do things. He might work miracles. But would He? Many people of sincere piety feel that He would not. They think it unworthy of Him. It is petty and capricious tyrants who break their own laws: good and wise kings obey them. Only an incompetent workman will produce work which needs to be interfered with. And people who think in this way are not satisfied by the assurance given them in Chapter VIII that miracles do not, in fact, break the laws of Nature. That may be undeniable. But it will still be felt (and justly) that miracles interrupt the orderly march of events, the steady development of Nature according to her own inherent genius or character. That regular march seems to such critics as I have in mind more impressive than any miracle. Looking up (like Lucifer in Meredith’s sonnet) at the night sky, they feel it almost impious to suppose that God should sometimes unsay what He has once said with such magnificence. This feeling springs from deep and noble sources in the mind and must always be treated with respect. Yet it is, I believe, founded on an error

"A supreme workman will never break by one note or one syllable or one stroke of the brush the living and inward law of the work he is producing. But he will break without scruple any number of those superficial regularities and orthodoxies which little, unimaginative critics mistake for its laws. The extent to which one can distinguish a just ‘license’ from a mere botch or failure of unity depends on the extent to which one has grasped the real and inward significance of the work as a whole. If we had grasped as a whole the innermost spirit of that ‘work which God worketh from the beginning to the end’, and of which Nature is only a part and perhaps a small part, we should be in a position to decide whether miraculous interruptions of Nature’s history were mere improprieties unworthy of the Great Workman or expressions of the truest and deepest unity in His total work. In fact, of course, we are in no such position. The gap between God’s mind and ours must, on any view, be incalculably greater than the gap between Shakespeare’s mind and that of the most peddling critics of the old French school.

"How a miracle can be no inconsistency, but the highest consistency, will be clear to those who have read Miss Dorothy Sayers’ indispensable book, The Mind of the Maker. Miss Sayers’ thesis is based on the analogy between God’s relation to the world, on the one hand, and an author’s relation to his book on the other. If you are writing a story, miracles or abnormal events may be bad art, or they may not. If, for example, you are writing an ordinary realistic novel and have got your characters into a hopeless muddle, it would be quite intolerable if you suddenly cut the knot and secured a happy ending by having a fortune left to the hero from an unexpected quarter. On the other hand there is nothing against taking as your subject from the outset the adventures of a man who inherits an unexpected fortune. The unusual event is perfectly permissible if it is what you are really writing about: it is an artistic crime if you simply drag it in by the heels to get yourself out of a hole. The ghost story is a legitimate form of art; but you must not bring a ghost into an ordinary novel to get over a difficulty in the plot. Now there is no doubt that a great deal of the modern objection to miracles is based on the suspicion that they are marvels of the wrong sort; that a story of a certain kind (Nature) is arbitrarily interfered with, to get the characters out of a difficulty, by events that do not really belong to that kind of story. Some people probably think of the Resurrection as a desperate last moment expedient to save the Hero from a situation which had got out of the Author’s control.

"The reader may set his mind at rest. If I thought miracles were like that, I should not believe in them. If they have occurred, they have occurred because they are the very thing this universal story is about. They are not exceptions (however rarely they occur) not irrelevancies. They are precisely those chapters in this great story on which the plot turns. Death and Resurrection are what the story is about; and had we but eyes to see it, this has been hinted on every page, met us, in some disguise, at every turn, and even been muttered in conversations between such minor characters (if they are minor characters) as the vegetables. If you have hitherto disbelieved in miracles, it is worth pausing a moment to consider whether this is not chiefly because you thought you had discovered what the story was really about?—that atoms, and time and space and economics and politics were the main plot? And is it certain you were right? It is easy to make mistakes in such matters. A friend of mine wrote a play in which the main idea was that the hero had a pathological horror of trees and a mania for cutting them down. But naturally other things came in as well; there was some sort of love story mixed up with it. And the trees killed the man in the end. When my friend had written it, he sent it an older man to criticise. It came back with the comment, ‘Not bad. But I’d cut out those bits of padding about the trees’. To be sure, God might be expected to make a better story than my friend. But it is a very long story, with a complicated plot; and we are not, perhaps, very attentive readers."

>For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of being able to update on statistically significant miraculous results. Sounds fishy.

Miracle reports are quite common, even into the modern day. I'd say about half of the Christians I've asked have told me a story of something miraculous they experienced. 27% of Americans report that they've personally experienced a miraculous healing (https://www.barna.com/research/americans-believe-supernatural-healing/). Dr. Craig Keener wrote a two-volume academic work on the subject, finding that miracle reports are historically common and still common today, with millions of people around the globe reporting that they've experienced a miracle.

Hieronymus's avatar

I've never read Miracles, but it's no surprise that Lewis got there first and explained it better. Thanks for posting it.

Peter Defeel's avatar

Sometimes a lie reveals the truth. It's generally accepted that Jesus wasn't born in Bethlehem. It's only mentioned in two gospels, and the census story of moving back to your origins isn't Roman practice - it would be mayhem; people just didn't travel to ancestral homelands for a census. The killing of the innocents by Herod is also undocumented.

But an invented messiah can just be born wherever you need him (and the messiah prophecy mentions Bethlehem); clearly people were aware of where Jesus actually came from, so they had to admit to Nazareth.

Erusian's avatar

Jesus is very well attested for a person of his period. The minimum viable Jesus is that he was a popular religious leader from about the class the Bible says he's from who lived roughly where the Bible says he did. That he had a large following, was believed to have magical powers, and claimed to be the son of God. That he clashed with Jewish and Roman authorities. And that he was executed but his followers continued on.

If you want to say he didn't exist you basically believe in a conspiracy theory that later Christians went back and doctored a bunch of works and made a bunch of forgeries to provide evidence that he did. A lot of anti-Christians really want to believe this and produce a lot of shoddy scholarship about it. But in all likelihood Jesus was real.

Scott Alexander's avatar

I think my previous belief was that Christianity definitely existed as a religion by the mid-1st-century, lots of people knew the Apostles, the Apostles knew Jesus, and it would require a pretty coordinated conspiracy for the Apostles to all be lying.

Does the evidence from historians prove more than that? AFAIK none of the historians claim to have interviewed Jesus personally. So do we know that the historians didn't just find some Christians, interview them about the contents of their religion, and use the same chain of reasoning as above to assume that Jesus was a real person? Should we take the historians' claims as extra evidence beyond that provided by the religion itself?

Melvin's avatar

Well, it proves that non-Christians living eighty years after the purported events wrote about the life and death of Jesus without expressing skepticism, which is something.

From the way Tacitus writes in 116, it seems like the general consensus among non-Christian Romans in the early second century was that Christus was a real dude who got crucified, and that there was a bunch of weird beliefs surrounding him. This belief was probably not filtered entirely through Christians, just as our ideas about the Roswell Incident of 1947 or L. Ron Hubbard are not entirely filtered through the people who believe weird things about them.

Erusian's avatar

I believe what you're saying is: a large number of Christians all simultaneously, and within their own living memory, attested that Jesus existed. This is strong evidence because otherwise a large number of people would have to get together, lie, and then die for that lie, which seems less likely than their being a real religious organization that met a real person. But the historians likely did not personally meet Jesus, so they don't add additional proof.

From this point of view, the main thing the historians add is that they make a conspiracy even less likely, because many of the historians are not Christians and drew from non-Christian (mostly Jewish or Roman) witnesses. We don't know who these witnesses were or if any of them directly met Jesus. But they are speaking about things going on in the right time and place to have met him, and the Bible doesn't suggest Jesus isolated himself from foreigners.

So either none of them met him and it was all a conspiracy by Jesus's followers that took in a bunch of people who were highly familiar with the region. Or a number of non-Christians were in on the conspiracy.

My broader point is something like: we ought to have consistent evidentiary standards. If you want to take a maximally skeptical view then you can construct a case that, for example, Vercingetorix never existed. You can cast doubt on the existence of Julius Caesar if you stretch. If that's your general point of view then you can know very little about history. I disagree with that point of view but it's defensible. If, on the other hand, you think Vercingetorix existed or the Dazexiang uprising definitely happened but think Jesus might not have existed then I think you're likely ideologically invested in Jesus not existing.

To give an example where I don't think it's bias: most modern historians discount stories of magic powers or miracles regardless of who performed them. So the fact they discount Jesus's miracles seems consistent with that worldview rather than a double standard.

Citizen Penrose's avatar

I take your point that lots of historical figures are not actually well documented. But even if there's limited evidence that Vercingetorix actually existed, there's also no real reason to suppose he didn't. Since there's so much superstition and so many false claims surrounding Jesus, my prior is that his existence is also likely mythological. I'd need stronger evidence to overcome that prior than I'd need to believe Vercingetorix existed.

Erusian's avatar

Do you apply the same standard to Mohammed? The evidence for him is not much stronger. In some ways it's weaker.

(This is also an admission that you're using a higher evidentiary standard for Jesus than other historical figures.)

Peter Defeel's avatar

There's nothing mythical about the story of the historical Jesus: it's a guy born of a woman who preaches for a while, comes into conflict with known authorities (Jewish or Roman), and is killed. The mythical stuff can be ignored; it appears in plenty of historical records of the era - portents, gods, flaming warriors in the sky, and whatnot. And that's just Caesar. But Caesar still existed, right?

And unless you think St Paul didn't exist - which is a hard ask, since he wrote letters and is written about in the Acts of the Apostles (about as much historical record as there can be) - he is the likely and only candidate to have invented Jesus. Paul probably did popularise Christianity, but that isn't the same as inventing Jesus.

Do you think he would get away with making up Jesus and also saying he persecuted his followers who couldn’t have existed?

Besides that, Acts has him in contact with the apostles who did meet Jesus, often in conflict with them.

Is Paul invented? Given that Tacitus puts Nero persecuting Christians in Rome by the mid-60s AD, whoever made all this up had to invent Jesus and Paul, and write the four gospels, Acts, and all of Paul's letters long before that. Seems a bit fantastical; it takes a lot of faith and a lack of reason not to believe in the historical Jesus.

Schneeaffe's avatar

But you don't need to invent him. Erusian's minimal story is something I'd expect to have happened multiple times in Roman Judea (depending on how large a following we're talking about). Mythological additions to religious founders are normal. I've been an atheist basically since I had a real opinion on the topic, and I was genuinely surprised to learn that people argue this.

Erusian's avatar

Yes. If you're an atheist it's entirely sufficient to say, "Jesus was a real person. However, he was not supernatural." Instead for some reason they want to assert Jesus was entirely mythological.

Paul Brinkley's avatar

Someone further down made comments that reminded me that some figures from history were later believed to have been adaptations or syncretisms of earlier figures. So that's another possibility - Jesus was fictional, but melded from earlier people. I don't think this would adequately explain Tacitus' account, for example, but it could explain multiple people being "in on" the fabrication.

(Meanwhile, maybe some people aren't invested in Jesus' not existing, but rather invested in someone existing with a name as cool as "Vercingetorix". So the real solution should have been to introduce Jesus as, uh, "Yesutapadancia".)

Erusian's avatar

Jesus is a bit similar to Ragnar Lodbrok in that he is attested, but a lot of the records come shortly after his death. And there's a whole bunch of extremely historical people who the history books say were reacting to him and his death, which is really hard to explain if he didn't exist or was a myth.

The people who think Ragnar was entirely fictional have to explain the extremely well attested historical invasions by his historically well attested sons who said they were avenging his death and who set up kingdoms and ethnicities which echo down to today. Likewise with Jesus, his disciples, and Christianity.

But there's just enough of a gap to say that maybe he didn't exist if you really, really want to. And there's a lot of space to say some of the stories were less than reliable and some of them might be borrowed from other people. Then again, that's true of most historical biographies.

FLWAB's avatar

We should take the historians' claims as evidence that the people whose job it is to professionally figure out what happened in the past all tend to agree that Jesus was real. And they're not just looking at the Bible when they do that!

Sources that indicate Jesus existed include the scriptures (the letters and gospels of the New Testament), but also many of the apocryphal writings (which all agree that Jesus existed, even if they go on to make wildly different claims about him), the lack of any contemporary non-Christian sources that deny the existence of Jesus, and the corroboration of many other historical facts in scripture about the whole Jesus story (like archeological findings corroborating that Pontius Pilate existed, or that Nazareth existed, etc.).

You also have Josephus writing about Jesus in 94 AD, Tacitus writing about him in 115 (and confirming that he was the founder of a religious sect who was executed under Pontius Pilate), and a letter from a Stoic named Mara bar Serapion to his son, circa 73 AD, where he references the unjust execution of the "wise king" of the Jews.

Also, looking at scripture itself there are all kinds of historical analysis you can apply to it to try to figure out how old it is, and whether the people who wrote it were actually familiar with the places they were writing about. For example, they recently did a statistical analysis of name frequency in the Gospels and the book of Acts, and found that it matches name frequencies found in Josephus's contemporary histories of the region, and that later apocryphal gospels have name frequencies in them that don't match, which makes it more likely that the Gospels were written close to the time period they are writing about (https://brill.com/view/journals/jshj/22/2/article-p184_005.xml). Neat stuff like that.
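
The statistical idea there is simple to sketch. With invented counts (the linked paper has the real data), you would compare the name distribution in one corpus against the frequencies from a reference corpus using a goodness-of-fit test:

# Illustrative only: invented counts, not the study's data.
from scipy.stats import chisquare

gospel_counts = [243, 218, 164, 122, 99]          # hypothetical observed name counts
reference_freqs = [0.28, 0.26, 0.20, 0.15, 0.11]  # hypothetical rates from Josephus

total = sum(gospel_counts)
expected = [f * total for f in reference_freqs]
stat, p = chisquare(gospel_counts, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")  # a high p means the distributions are consistent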

Deiseach's avatar

One major source, which is much disputed, is the Testimonium Flavianum which is the part of Josephus' writings which mentions Jesus. Josephus was a real person who is well-attested, so if he's writing about "there was this guy" it's important evidence, especially as he ties it to "James, the brother of Jesus" who was leader of the church in Jerusalem and mentions historic figures like the high priests at that time.

How much is real, how much has been interpolated over later centuries by Christian scribes, is where the arguing goes on - some say it's nearly all original, others (e.g. the Mythicists) say it's wholesale invention.

https://en.wikipedia.org/wiki/Josephus_on_Jesus#The_Testimonium_Flavianum

Tim O'Neill has an interview with a historian who recently published a book about this, arguing for its authenticity:

https://www.youtube.com/watch?v=9L2bE1-pyiU

"My guest today is Dr Thomas C. Schmidt of Fairfield University. Tom has just published an interesting new book through Oxford University Press: Josephus and Jesus – New Evidence for the One Called Christ. In it he makes a detailed case for the authenticity of the Testimonium Flavianum; the much disputed passage about Jesus in Book 18 of Flavius Josephus’ Antiquities of the Jews. Not only does he argue that Josephus wrote about Jesus as this point in his book, but he also argues that the passage we have is substantially what Josephus wrote. This is a distinctive position among scholars, who usually argue that it has at least be significantly changed and added to, with a minority arguing for it being a wholesale interpolation. So I hope you enjoy my conversation with Tom Schmidt about his provocative new book."

Jim Nelson's avatar

I've not watched that video, but in this one Tom Schmidt goes into Josephus' life and connections with Jewish and Roman elites:

https://www.youtube.com/watch?v=8jpEleZV1Pw

The most surprising thing (for me) was to learn about Josephus' rather energetic life, and that Josephus knew people who were one or two degrees of separation from Jesus. It puts a new shine on the question of the Testimonium's accuracy.

I mean, when the Mythicists claim Jesus never lived, are they also saying that his brother James (mentioned by Josephus and several other documents) was also a fabrication? Mary, Joseph, and Magdalene, all wholly fictional characters? Where does the myth-making and conspiracy start and end?

Tori Swain's avatar

So, to be clear, there are actual cultures where, for an entire clan's myth, it would be realistic to assume "Oh, they were just lying." We rest a lot on the truthfulness of the Romans/Israelites.

Jim Nelson's avatar

Why would Romans and Jews of the era readily agree to Jesus' existence? There were a number of mystery cults and sects, Jewish and otherwise, around the eastern Mediterranean at the time. Why go out of their way to claim the person at the center of this particular one existed if he didn't?

This isn't merely a single ethnic clan (the early Christians) circling around a myth. This is documentation from two groups who have no interest in spreading Christianity, a long history of bloodshed between each other, and one of those groups later persecuting Christians.

Ch Hi's avatar

I think you're well overstating the minimum. Yeah, there was someone with that name around. There aren't any records of the trial though. (There's an explanation for the lack, but the records are still missing.) And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries", though we don't know what the original records said, or even if they existed. Sometimes we have good evidence of their doctoring the records - often enough to cast suspicion on many where we don't have evidence. Many were clearly written well after the date at which they were ostensibly written.

If you wanted to claim that he was a popular religious-political leader, I'd have no argument. There's a very strong probability that he was, even though most of the evidence has been destroyed. (Some of it explicitly by Roman Christians wiping out the Nazarenes.)

Peter Defeel's avatar

There's a lot of hand waving there but no specifics. The only possible case where Christians modified the record is parts of Josephus. That's it.

Ch Hi's avatar

Yeah, the "hand waving" is a valid criticism. It's been decades since I took the arguments seriously, and I don't really remember the details. But when you say "the only possible case", I'm not encouraged to try to improve my argument. Your mind is already made up.

Lucas Campbell's avatar

Would you be encouraged to try to improve your argument for the sake of an interested third party? In a public comment section like this you're never solely writing for the person you responded to, and I for one would indeed be quite intrigued to hear more specifics about your case, as I don't have any particularly strong opinions on the subject already.

Ch Hi's avatar

The problem is that, IIRC, the evidence is ambiguous enough to allow nearly any conclusion that one desires. If one is biased in one direction or another, the evidence supports that direction, though few think that it's strong enough to constitute proof. And the name is not the thing. (There were lots of people named "Jesus", i.e. "Yeshua" in Israel.)

It's been decades since I took the argument seriously enough to look into it. I decided that there wasn't sufficient evidence to conclude that Jesus was any particular real person, but was more probably an amalgamation of several religio-political agitators. It would take a long time to reconstitute my reasoning in detail, and it was never strong enough to convince someone who held strong opposing beliefs. IIRC, I did decide that the main character in the composite figure was born around 33BC, but I don't remember why.

Additionally I've been part of small groups, and noticed the way they alter their oral history to add things that weren't there, and to remove things that were embarrassing. Sometimes the changes happen within a period of weeks, as enemies become allies. Admittedly the evidence I have for the alteration of written history is certainly later, but one might not only consider the "pieces of the true cross", but also the actions of the Council of Nicaea. And things like Epistles to the Corinthians are basically the same as propaganda, and should be considered equally reliable. Being old does not make something more trustworthy.

Erusian's avatar

> There aren't any records of the trial though.

There are records that say he was executed by local authorities. The specific Biblical details are less well attested.

> And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries"

Every time I've pushed on these claims it comes down to the equivalent of not being able to prove a negative. It's clearly there in the versions we have, and they make some vague gestures about word choices to show it was inserted later. I'm not aware of a single smoking gun where someone admitted they doctored a record from the time.

I am especially suspicious of this because it's clear a lot of people WANT to believe they are later insertions for basically ideological reasons. But if you have an example that is either a smoking gun, like the evidence we have about the Austrian archduchy title or better, then I'd love to see it.

TGGP's avatar

> There are records that say he was executed by local authorities

Isn't Josephus the first one to mention this? I don't think the Romans themselves left surviving records of an execution they would not have regarded as especially significant at the time.

Erusian's avatar

Sorry, I don't mean judicial records; I mean that various people who wrote about him wrote that he was executed. You're right that there's little that granular, at least afaik.

Erica Rall's avatar

Tacitus is the other big near-contemporary non-Christian source for the crucifixion apart from Josephus. Tacitus's Annals was written in 116 AD, a bit over twenty years after Josephus's Antiquities but well before Christian (and Muslim) scribes had a chance to interpolate anything into Josephus's writings.

But yeah, I don't think there are any direct Roman sources for the crucifixion, nor would we expect to see any but the most important-seeming executions well documented in surviving records. For that matter, we barely have much more documentation of Pontius Pilate's life and career than we have for Jesus. We know about him mostly from Christian sources (especially the Gospels and Epistles), Josephus, Tacitus, and one or two other non-Christian writers who mentioned him. I think the only direct archeological evidence that Pilate existed is one fragment of an inscription (probably a dedication on a temple to the Emperor Tiberius) that names Pontius Pilate as the Prefect of Judea.

Peter Defeel's avatar

For a provincial Roman official of merely equestrian rank, Pilate is unusually well documented. Although some of those histories relate to Jesus, not all do. Philo of Alexandria mentions Pilate as "a man of inflexible, stubborn, and cruel disposition" and details other atrocities; Josephus mentions Pilate in relation to Jesus but also in connection with two other actions, both atrocities (the Aqueduct Incident and the Roman Standards Incident).

Deiseach's avatar

Oh, this is a good old long-running row. The modern version on one side is, I believe, the Jesus Mythicists and on the other, historians. I don't bother getting into the weeds on this one because I'm no longer interested in yet another bunch of atheists making sneery remarks about religion, but Tim O'Neill has been in a few entertaining fights with them, and has some videos up about "did Jesus exist?":

https://www.youtube.com/watch?v=n_hD3xK4hRY

https://www.youtube.com/watch?v=bTG7czEBVzY

https://www.youtube.com/watch?v=5bO4m-x_wwg

https://www.youtube.com/watch?v=9L2bE1-pyiU

Going back for an example of historical "Jesus the man not Christ the god" writing, there's the famous book by Ernest Renan (again, one I haven't read, mea culpa!) "Vie de Jésus/Life of Jesus":

https://en.wikipedia.org/wiki/Ernest_Renan#Life_of_Jesus

"Within his lifetime, Renan was best known as the author of the enormously popular Life of Jesus (Vie de Jésus, 1863). Renan attributed the idea of the book to his sister, Henriette, with whom he was traveling in Ottoman Syria and Palestine when, struck with a fever, she died suddenly. With only a New Testament and copy of Josephus as references, he began writing. The book was first translated into English in the year of its publication by Charles E. Wilbour and has remained in print for the past 145 years. Renan's Life of Jesus was lavished with ironic praise and criticism by Albert Schweitzer in his book The Quest of the Historical Jesus.

Renan argued Jesus was able to purify himself of "Jewish traits" and that he became an Aryan. His Life of Jesus promoted racial ideas and infused race into theology and the person of Jesus; he depicted Jesus as a Galilean who was transformed from a Jew into a Christian, and that Christianity emerged purified of any Jewish influences. The book was based largely on the Gospel of John, and was a scholarly work. It depicted Jesus as a man but not God, and rejected the miracles of the Gospel. Renan believed by humanizing Jesus he was restoring to him a greater dignity. The book's controversial assertions that the life of Jesus should be written like the life of any historic person, and that the Bible could and should be subject to the same critical scrutiny as other historical documents caused controversy and enraged many Christians and Jews because of its depiction of Judaism as foolish and absurdly illogical and for its insistence that Jesus and Christianity were superior."

Now I have to quote Chesterton again, from 1908's "All Things Considered", where he compares Ernest Renan and Anatole France writing rationalist explanations of miracles:

"The Renan-France method is simply this: you explain supernatural stories that have some foundation simply by inventing natural stories that have no foundation. Suppose that you are confronted with the statement that Jack climbed up the beanstalk into the sky. It is perfectly philosophical to reply that you do not think that he did. It is (in my opinion) even more philosophical to reply that he may very probably have done so. But the Renan-France method is to write like this: "When we consider Jack's curious and even perilous heredity, which no doubt was derived from a female greengrocer and a profligate priest, we can easily understand how the ideas of heaven and a beanstalk came to be combined in his mind. Moreover, there is little doubt that he must have met some wandering conjurer from India, who told him about the tricks of the mango plant, and how it is sent up to the sky. We can imagine these two friends, the old man and the young, wandering in the woods together at evening, looking at the red and level clouds, as on that night when the old man pointed to a small beanstalk, and told his too imaginative companion that this also might be made to scale the heavens. And then, when we remember the quite exceptional psychology of Jack, when we remember how there was in him a union of the prosaic, the love of plain vegetables, with an almost irrelevant eagerness for the unattainable, for invisibility and the void, we shall no longer wonder that it was to him especially that was sent this sweet, though merely symbolic, dream of the tree uniting earth and heaven." That is the way that Renan and France write, only they do it better. But, really, a rationalist like myself becomes a little impatient and feels inclined to say, "But, hang it all, what do you know about the heredity of Jack or the psychology of Jack? You know nothing about Jack at all, except that some people say that he climbed up a beanstalk. Nobody would ever have thought of mentioning him if he hadn't. You must interpret him in terms of the beanstalk religion; you cannot merely interpret religion in terms of him. We have the materials of this story, and we can believe them or not. But we have not got the materials to make another story."

Expand full comment
a mystery's avatar

Wait, Chesterton considered himself a rationalist? I wonder what he’d think of the movement today.

Expand full comment
Kenny Easwaran's avatar

I would be interested to know what Chesterton meant by “rationalist”! He definitely doesn’t seem to mean the thing that philosophers mean (ie, the opposite of an empiricist, the kind of person that thinks that logical and rational proof is a better way to know about the world than empirical evidence), but it does seem somewhat compatible with the contemporary cultural usage.

Expand full comment
lyomante's avatar

yeah, reading the gospels you sense he can't be mythical. C.S. Lewis argued that the New Testament would have had to invent the modern realistic novel style to depict him if he were a creation.

Like even his miracles are different from those of later Christian saints. St Francis caused a wolf to stop preying on people out of his sheer holiness, and the village accepted it after. Jesus is grabbed by a woman and that is enough to heal her, or he spits on the ground to make clay and cover someone's eyes.

There is a lot of detail and prose there, and myth usually ignores that. Goliath is tall because David trusts in God to beat him: Zaccheus is small and has to climb up into a tree to see Jesus, and this is incidental to the message.

So there definitely was someone they were all watching, but that doesn't mean the miracles were true.

Expand full comment
demost_'s avatar
6dEdited

Of the 24% who didn't answer "Yes" to the question of whether the figure historically existed, 14% answered "don't know", and only 10% answered "No".

10% of people who answer No to a question against scientific consensus? That... does not strike me as a high number.

Expand full comment
Tori Swain's avatar

10% of people think positively about Ebola. This is the "not paying attention very well" demographic.

Expand full comment
Sol Hando's avatar

The 15 or so people named Ebola get a bad rap: https://www.familysearch.org/en/surname?surname=Ebola

Expand full comment
Tori Swain's avatar

See also the Panetta-Burns plan, which proves that yes, you can troll people with polls.

Expand full comment
WoolyAI's avatar

Yeah, the idea that Jesus didn't physically exist is odd.

Jesus dies in...one sec...AD 33 under the reign of Tiberius. By the reign of Nero, so say AD 60, Nero's feeding Christians to the lions in Rome. That's living memory. It'd be weird if that was going on and Jesus actually didn't exist.

Expand full comment
Tori Swain's avatar

Kilroy was here. There's substantial archeological evidence for Kilroy, despite the fact that he really doesn't exist.

Expand full comment
Little Librarian's avatar

If soldiers were being executed for "Kilroy was Here" graffiti, I would expect those executions to produce a paper trail leading back to an explanation of who Kilroy was. Said explanation might call him fictional.

Similarly, if people were being thrown to the lions for calling Jesus the messiah, I would expect a paper trail. Maybe it's lost to time in the last 2000 years, but I would expect it to have been written.

Expand full comment
Tori Swain's avatar

Depends on who/what is being investigated. "Etched my glass with graffiti" as a cardinal crime might not need to care about what the graffiti was, after all. "Loudmouth preacher/prophet" might just get recorded as that.

Assuming a large amount of literacy, you might get someone wondering "why are they talking about that guy?"

These are dependent on the cultural mores. "Joseph is lying again" is hardly going to raise eyebrows if lying is normative in the culture (which, I'm not saying it is for Rome, but there are cultures where lying is the standard public discourse, and truth is only given with a monetary exchange).

Expand full comment
Ch Hi's avatar

Many of the records were lost during an attack by the Roman army on Jerusalem. Others were lost when a Roman Army under a Christian general wiped out the Nazarenes. (If anyone was an actual follower of Jesus, it was the Nazarenes.)

Expand full comment
Peter Defeel's avatar

Here is what Tacitus said

“ So to suppress the rumour, Nero falsely charged with guilt and punished with the most exquisite tortures the persons commonly called Christians, who were hated for their enormities.

Christus, the founder of the name, had undergone the death penalty in the reign of Tiberius, by sentence of the procurator Pontius Pilatus, and the pernicious superstition was checked for a moment, only to break out once more, not merely in Judaea, the home of the disease, but in the capital itself, where all things horrible or shameful in the world collect and find a vogue.

First those who confessed were arrested; then on their information a vast multitude was convicted, not so much of the crime of arson as of hatred of the human race.

Their deaths were made farcical. Dressed in wild animal skins, they were torn to pieces by dogs, or crucified, or made into torches to be ignited after dark as substitutes for daylight.

Nero had offered his gardens for the spectacle, and gave a show in the circus, mingling with the people in the dress of a charioteer or riding in a chariot.

Hence, even for criminals who deserved extreme and exemplary punishment, there arose a feeling of compassion; for it was not, as it seemed, for the public good, but to glut one man’s cruelty, that they were being destroyed.”

Tacitus is generally considered a reliable commentator, so even though he's writing a few generations later (although he was alive during Nero's reign), it's known he had access to plenty of records.

It could be a later Christian interpolation, but interpolators were unlikely to call Christianity a disease or a pernicious superstition, to describe it as horrible and shameful, or to say that the Christians hated the human race.

Expand full comment
Deiseach's avatar

There's also Pliny the Younger, writing to the Emperor Trajan around AD 110 about "what the heck do I do with these Christians?"

https://en.wikipedia.org/wiki/Pliny_the_Younger_on_Christians

"Pliny the Younger, the Roman governor of Bithynia and Pontus (now in modern Turkey), wrote a letter to Emperor Trajan around AD 110 and asked for counsel on dealing with the early Christian community. The letter (Epistulae X.96) details an account of how Pliny conducted trials of suspected Christians who appeared before him as a result of anonymous accusations and asks for the Emperor's guidance on how they should be treated."

Here is the text of Pliny's letter and Trajan's reply:

https://faculty.georgetown.edu/jod/texts/pliny.html

"Pliny, Letters 10.96-97

Pliny to the Emperor Trajan

It is my practice, my lord, to refer to you all matters concerning which I am in doubt. For who can better give guidance to my hesitation or inform my ignorance? I have never participated in trials of Christians. I therefore do not know what offenses it is the practice to punish or investigate, and to what extent. And I have been not a little hesitant as to whether there should be any distinction on account of age or no difference between the very young and the more mature; whether pardon is to be granted for repentance, or, if a man has once been a Christian, it does him no good to have ceased to be one; whether the name itself, even without offenses, or only the offenses associated with the name are to be punished.

Meanwhile, in the case of those who were denounced to me as Christians, I have observed the following procedure: I interrogated these as to whether they were Christians; those who confessed I interrogated a second and a third time, threatening them with punishment; those who persisted I ordered executed. For I had no doubt that, whatever the nature of their creed, stubbornness and inflexible obstinacy surely deserve to be punished. There were others possessed of the same folly; but because they were Roman citizens, I signed an order for them to be transferred to Rome.

Soon accusations spread, as usually happens, because of the proceedings going on, and several incidents occurred. An anonymous document was published containing the names of many persons. Those who denied that they were or had been Christians, when they invoked the gods in words dictated by me, offered prayer with incense and wine to your image, which I had ordered to be brought for this purpose together with statues of the gods, and moreover cursed Christ--none of which those who are really Christians, it is said, can be forced to do--these I thought should be discharged. Others named by the informer declared that they were Christians, but then denied it, asserting that they had been but had ceased to be, some three years before, others many years, some as much as twenty-five years. They all worshipped your image and the statues of the gods, and cursed Christ.

They asserted, however, that the sum and substance of their fault or error had been that they were accustomed to meet on a fixed day before dawn and sing responsively a hymn to Christ as to a god, and to bind themselves by oath, not to some crime, but not to commit fraud, theft, or adultery, not falsify their trust, nor to refuse to return a trust when called upon to do so. When this was over, it was their custom to depart and to assemble again to partake of food--but ordinary and innocent food. Even this, they affirmed, they had ceased to do after my edict by which, in accordance with your instructions, I had forbidden political associations. Accordingly, I judged it all the more necessary to find out what the truth was by torturing two female slaves who were called deaconesses. But I discovered nothing else but depraved, excessive superstition.

I therefore postponed the investigation and hastened to consult you. For the matter seemed to me to warrant consulting you, especially because of the number involved. For many persons of every age, every rank, and also of both sexes are and will be endangered. For the contagion of this superstition has spread not only to the cities but also to the villages and farms. But it seems possible to check and cure it. It is certainly quite clear that the temples, which had been almost deserted, have begun to be frequented, that the established religious rites, long neglected, are being resumed, and that from everywhere sacrificial animals are coming, for which until now very few purchasers could be found. Hence it is easy to imagine what a multitude of people can be reformed if an opportunity for repentance is afforded.

Trajan to Pliny

You observed proper procedure, my dear Pliny, in sifting the cases of those who had been denounced to you as Christians. For it is not possible to lay down any general rule to serve as a kind of fixed standard. They are not to be sought out; if they are denounced and proved guilty, they are to be punished, with this reservation, that whoever denies that he is a Christian and really proves it--that is, by worshiping our gods--even though he was under suspicion in the past, shall obtain pardon through repentance. But anonymously posted accusations ought to have no place in any prosecution. For this is both a dangerous kind of precedent and out of keeping with the spirit of our age."

Expand full comment
Erica Rall's avatar

Ah, but he did exist. James J. Kilroy was an inspector at the Fore River Shipyard in Quincy, Massachusetts who was in the habit of writing "Kilroy Was Here" in chalk next to the marks he made to indicate which rivets had already been inspected in order to avoid double counting. Some of the marks didn't get erased and wound up in visible but inaccessible parts of the ships, inspiring copycat graffiti. After the war the New York Times did an investigation, found several dozen claimed sources for the graffiti, and concluded that James Kilroy was by far the most likely candidate.

https://www.usni.org/magazines/naval-history-magazine/1989/january/kilroy-was-here

The sketch of a long-nosed bald man peeking over a wall often associated with the phrase doesn't look at all like the real Kilroy. That's from a slightly older British graffiti tradition, originally associated with the phrase "Wot no sugar?" and most commonly known as Mr. Chad. The Chad and Kilroy graffiti traditions somehow merged during the war.

Edit: it looks like the American Transit Company did the investigation, not the New York Times. I think I got my wires crossed because in addition to the article I linked, I also came across a paywalled NYT article that seems to be announcing the ATC's findings plus at least one or two places that mis-cite that article to credit the NYT.

Expand full comment
Dino's avatar

Nominate this for comment of the week.

Expand full comment
Vermillion's avatar

Seconded. I'm delighted my post has spun off a lot of interesting discussions from the commentariat, but this bit might be my favorite

Expand full comment
luciaphile's avatar

That’s wonderful.

Expand full comment
WoolyAI's avatar

That's... entirely irrelevant to my point.

Expand full comment
Sol Hando's avatar

Has there ever been a religious movement deifying someone who didn’t actually exist? I’m sure there’s a lot of room for debate whether the Christ pictured in the gospels was the historical Christ, but it seems like Christianity would have been relatively unique if it was the case that Jesus didn’t exist at all. Especially since there were no shortage of prophets and teachers in Judea at the time.

Expand full comment
Deiseach's avatar

"Has there ever been a religious movement deifying someone who didn’t actually exist?"

My understanding is that the "Jesus never existed" set explain the rise of Christianity by saying it was based on a grab-bag of Middle Eastern mythology (the famous "Golden Bough" notion of dying and rising demi-gods) and generally St. Paul gets the blame for inventing Christianity as we know it.

I don't recall ever reading a good explanation as to why Saul, orthodox persecutor of the Christians befouling Judaism, turned into Paul the Christian; why would he bother inventing a new religion? And if he wanted one, why bother with this 'Christ' who never really existed in the first place, apart from a bunch of hysterical women and a rabble of hicks from the back country claiming they were his followers?

Expand full comment
TGGP's avatar

Paul arguably did invent a new religion by hijacking an existing sect and reshaping it.

Expand full comment
Deiseach's avatar

Yeah, but why is the interesting question. He was fervently Jewish, so why do a 180 on that? If he wanted to reform Judaism or make it more appealing to potential Gentile converts, he could have gone that road. Instead, he explicitly linked himself with the name of Christ and the Christians.

Expand full comment
TGGP's avatar

I think it would have been harder to persuade a more established religious tradition like Judaism to change, rather than a small sect within it.

Expand full comment
lyomante's avatar

the main deities of Shinto don't seem to be based on any real people, and are so culturally prevalent that Westerners know about Susanoo, Amaterasu, and Yamata-no-Orochi through osmosis.

Expand full comment
Erusian's avatar

I don't believe anyone claimed they were real people. They were worshipped by specific rulers and clans, though, who often identified with them the same way the Japanese Emperor identified with Amaterasu.

Expand full comment
Sol Hando's avatar

Are they claimed to have been real people?

Expand full comment
lyomante's avatar

I think so? Not based on historical people, but still distinct beings who have existed in the physical world. Shinto is more embodied than Christianity, even if it's treated mythologically by people.

Expand full comment
theahura's avatar

Maybe like Zeus et al? Unless you're using deifying specifically to mean 'turning a human figure into a god'.

Expand full comment
Ch Hi's avatar

There's a reasonable line of argument that the various gods were often originally based on vague memories of a famous ancestor. OTOH, they were also clearly frequently based around anthropomorphism of some natural phenomenon. And it's my suspicion that often both processes occur in the same god.

But what those "gods" turn into in later generations is quite often FAR removed from the original conception. People tend to shape their gods into something related to their "idealized" self image. (For certain meanings of "idealized".)

Expand full comment
Kenny Easwaran's avatar

This was an ancient idea about myths, that Euhemerus came up with: https://en.wikipedia.org/wiki/Euhemerism

I’m a bit skeptical that it happened much in antiquity, given that a good number of the ancient gods actually seem to be derived from some proto-indo-European tradition that preserved versions of the same gods in Vedic Hinduism, Roman mythology, Greek mythology, and even Lithuanian and Slavic mythologies. It would be fascinating if there were real people from 10,000 years ago whose exploits got memorialized into these traditions. And it’s possible that some people from antiquity did get into some of the lists of gods at some point. But I suspect a bigger source of actual gods in traditional mythologies is personification of natural forces.

Expand full comment
Ch Hi's avatar

That's a reasonable argument, but it just pushes back the time. We definitely can find places where there are "cultural heroes" that aren't quite gods yet, but seem to be on the way to being gods.

OTOH, there are also examples of "natural phenomena" (like the disk of the sun) explicitly being turned into (new) gods in places like Egypt, where we've got good enough records, and where the government was controlling the official religion. (It didn't always take, as Akhenaten's example shows. Actually in Aten he took the name of an existing god, but he so altered the worship and functions/powers that it was essentially a new god. Google says that Aten was originally a minor aspect of Ra.)

This is why I think both processes were on-going. And for a god to be accepted it needed both to fit into the culture and to resonate. (It's my belief that the "true gods" are essentially a specialized subset of the things that Jung called archetypes. I call them genetically encoded subverbal thought processes, and don't think of them quite the same way that Jung did. They're an evolved thing, so they're as messy as you should expect biology to be. And they also aren't quite universal...but pretty near. Whether they're active enough to notice is a different matter. As an example of the messiness the sun god usually also represents consciousness, or perhaps the ego, and the state. Also sometimes personal immortality.)

Expand full comment
Tori Swain's avatar

Which is entirely missing out on what the Greeks were trying to do with gods, which was something new.

Expand full comment
Ch Hi's avatar

Which Greeks? Different groups of them had different goals. Do you mean the mystery cults? They were quite powerful, sufficiently so that the word "Christ" comes out of them. IIRC it meant "the anointed one", though I'm not quite clear exactly what they meant that to signify. It didn't mean just diogenes (i.e. one who had been reborn, but that may just have been the Eleusinian mysteries).

OTOH, many Ionian Greeks were trying to consider the gods as just fables. Rather like modern atheists. (I think that movement started in Athens, but it flowered in Ionia and later in Egypt.)

FWIW, while it's my opinion that everything spiritual has a physically material underpinning, it's also my opinion that we can't ever perceive that directly, but we can perceive the "spiritual" basis, i.e. the upper layer of the psychoid levels (though it can be dangerous to do so, as various religious hysterias give witness).

Expand full comment
Tori Swain's avatar

I don't mean the mystery cults.

Expand full comment
John Labelle's avatar

It’s hard to say. The patchwork of mythos in Greek mythology suggests some of them come from prehistoric cultures in the region.

Expand full comment
Gordon Tremeshko's avatar

Right, that's what I'm thinking. Wouldn't you kind of expect Jesus to have been based on a real person, rather than being entirely invented by Matthew, Mark, Luke, & Co? I'm not saying the guy walked on water, turned it into wine, etc, but it would be pretty odd if there wasn't some dude who was a spiritual leader of some sort who inspired the gospel stories.

Expand full comment
None of the Above's avatar

Aren't there some traditional saints whose historical existence is questionable? ISTR St Anthony was like that, but maybe all the documentation just got lost....

Expand full comment
Kenny Easwaran's avatar

St Anthony of Valero/Padua? I thought he was actually fairly well documented, particularly as a friend of St Francis. Wikipedia even gives precise dates for his birth and death: https://en.wikipedia.org/wiki/Anthony_of_Padua

Expand full comment
Deiseach's avatar

The term for that is "euhemerism":

https://en.wikipedia.org/wiki/Euhemerism

"In the fields of philosophy and mythography, euhemerism is an approach to the interpretation of mythology in which mythological accounts are presumed to have originated from real historical events or personages. Euhemerism supposes that historical accounts become myths as they are exaggerated in the retelling, accumulating elaborations and alterations that reflect cultural mores. It was named after the Greek mythographer Euhemerus, who lived in the late 4th century BC. In the more recent literature of myth, such as Bulfinch's Mythology, euhemerism is termed the "historical theory" of mythology."

Expand full comment
beleester's avatar

The very first emperors of Japan and China were divine or divinely-descended, and probably not historical. They might have been based on actual rulers, but there's little or no contemporary evidence and it seems plausible that they were invented by later rulers to give their dynasty more legitimacy.

(I am not a historian and I'm just looking around on Wikipedia.)

Expand full comment
Sol Hando's avatar

Fair. Looking at the Yellow Emperor, who seems to be a mythological Emperor of China, the earliest archaeological evidence of people talking about him seems to be from the ~4th Century BC, while he allegedly lived in ~2690 BC.

Nero was infamously blaming Christians for the burning of Rome ~30 years after Christ (allegedly) died, so it's the difference between a popular cult believing in someone who had died within living memory, vs. the remembering of an Emperor thousands of years before that doesn't have a continuous archaeological or literary tradition.

Expand full comment
Tori Swain's avatar

Satoshi Nakamoto? : - ) One gets into a bit of a weird world when one constantly writes under pseudonyms, and has different personalities/locations for each. I mean, if you're playing a character "as the author" and hire people "to play the author at conventions" do you really say that the author exists? After all, people never do really meet him.

Someone broke the L. Ron Hubbard Rule, and I'm not sure that results in "automatic deification" but it does result in a religious movement, I'm pretty sure. Needs must, and all that (Los Alamos seemed pretty interested in the new religion, at any rate.)

Expand full comment
Mark Neyer's avatar

Please poke holes in and help evolve:

“The cultivation of virtue is equivalent to the collection of evidence about you acting a certain way. You cultivate a virtue in yourself by practicing it, which creates evidence of its functioning in you. The more you do this, the more you grow the body of evidence and thus strengthen the prior probability that you’ll be, e.g, patient.”
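
Read as Bayesian updating, the claim has a natural toy formalization. Below is a minimal Beta-Bernoulli sketch (my own framing, offered for hole-poking, not anything the quoted text specifies): each practiced act is one observation, and "the prior probability that you'll be patient" is the model's posterior predictive.

```python
# Minimal Beta-Bernoulli sketch of "virtue as evidence accumulation".
# The model choice and numbers are illustrative assumptions.
alpha, beta = 1.0, 1.0  # uniform prior over "probability I act patiently"

def practice(acted_patiently: bool) -> None:
    """One observed choice adds one count of evidence."""
    global alpha, beta
    if acted_patiently:
        alpha += 1.0
    else:
        beta += 1.0

for _ in range(20):  # twenty patient acts in a row...
    practice(True)

# ...move the predictive probability of acting patiently next time
# from 0.50 to roughly 0.95.
print(alpha / (alpha + beta))
```

One thing the sketch makes visible: anything that increments `alpha` counts as "cultivation" on this model, which is exactly the shortcut Kenny Easwaran's reply below pokes at.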

Expand full comment
Kenny Easwaran's avatar

If it was *just* about accumulation of evidence, then it seems like this would enable a shortcut, where you don’t actually practice the virtue, but just get extremely strong external evidence that you will practice it. Conversely, it would mean that practicing a virtue in situations you don’t remember would be substantially less helpful at acquiring the trait.

I suspect a lot of virtue (or habit) formation is better understood as getting subpersonal things like “muscle memory” to respond more quickly in particular ways.

Expand full comment
FLWAB's avatar

“Every time you make a choice you are turning the central part of you, the part of you that chooses, into something a little different from what it was before. And taking your life as a whole, with all your innumerable choices, all your life long you are slowly turning this central thing either into a heavenly creature or into a hellish creature: either into a creature that is in harmony with God, and with other creatures, and with itself, or else into one that is in a state of war and hatred with God, and with its fellow-creatures, and with itself.

"To be the one kind of creature is heaven: that is, it is joy and peace and knowledge and power. To be the other means madness, horror, idiocy, rage, impotence, and eternal loneliness. Each of us at each moment is progressing to the one state or the other.”

-C. S. Lewis, Mere Christianity

Expand full comment
Wanda Tinasky's avatar

I agree with this. It's basically a behaviorist perspective. To some degree one's personality is a narrative construct, e.g. "I am the type of person who doesn't lie". When you act in accordance with the narrative you strengthen it, both through simple Bayesian inference ("I just told the truth, therefore I'll strengthen my priors that I'm the kind of person who does that") and probably through some dopamine reward that gets released when you're proud of yourself for doing something virtuous. The point of moral instruction is to imprint a child's brain with the socially-optimal reward function.

I wouldn't be surprised if one of the neurological differences between humans and chimps turns out to be the ability to self-administer behavioral rewards, like some neural connection between the cortex and the amygdala or whatever. Hardware that lets us program our own behavioral conditioning.

Expand full comment
John's avatar
5dEdited

You have to feel it, also. Hollow action is not maximally virtuous.

Expand full comment
Gerbils all the way down's avatar

I believe it's also possible to cultivate true, heartfelt virtue in part by taking "hollow" wholesome actions.

Expand full comment
Wanda Tinasky's avatar

There's plenty of evidence that our actions influence our beliefs. Hence the adage to fake it til you make it.

Expand full comment
Blackthorne's avatar

It seems like a Bayesian way to describe Aristotelian habit formation, but I think it's a pretty vague description of how we form habits. It's fine if you don't want to focus on what exactly virtue is (though this does mean you don't explain the motivation for action) but you also haven't really described how the virtue itself is cultivated. Some things you can/should incorporate are:

- Impact of teaching/guidance on cultivation of virtue

- Impact of social context

- Differences in cultivation rates between people

- How does the body of "evidence" actually grow? To me this phrasing actually seems incorrect as our habits persist past our memories. For example I know I like cherries even if I can't really remember any of my experiences eating cherries

But really this is all very Aristotelian so you could just read Nicomachean Ethics for more

Expand full comment
Zanzibar Buck-buck McFate's avatar

This is attractive. I can think of two situations where evidence for the virtue might skew or be skewed by the virtue itself. 1. Humility - acting humbly does create evidence of being humble and yet dwelling on that evidence seems contrary to humility and likely to undermine it. 2. Self denial - if someone has genuinely devoted their life to helping the poor, they may have experienced a lot of push back from the poor themselves who may not want help or are ambivalent, and push-back from bystanders accusing them of virtue-signalling. So the evidence is equivocal and I feel they need something more than evidence to maintain their self-denial.

Expand full comment
Timandrias's avatar

Isn't that consequentializing virtue ethics?

As far as I understand it, the point of following a Virtue is that it is axiomatically Good, whether it works or not, independent of the fallout. If you want an ethical framework that you should follow because it's good for you/society, consequentialism is there.

Expand full comment
Kenny Easwaran's avatar

Virtue consequentialism is a thing. I think it’s the best form of both consequentialism and virtue ethics. I don’t understand what could motivate people to believe that certain traits are inherently virtuous regardless of what kinds of consequences they tend to bring.

Expand full comment
Mark Neyer's avatar

I’m trying to understand what is happening inside a person when they cultivate virtue - not ask whether it’s good. Interested in facts here, not values.

That said, this “independent of the fallout” part isn’t true. The virtue of wisdom is identical to what we today call rationality: assessing likely outcomes. Virtue ethics basically says you’re constrained by far more than making bad predictions, and you need the capacity to do much more than make good predictions about outcomes.

Expand full comment
Sara's avatar

Has anyone been using Perplexity’s Comet browser? I’m currently on the waitlist for access, but was wondering how useful it is.

Expand full comment
TGGP's avatar

I use Perplexity quite often as a search engine, but wasn't aware of the browser.

Expand full comment
thewowzer's avatar

I petition for Scott to generate a new open thread image. The current one always makes me think of thick oil paint smeared around a window on SpongeBob's house.

Expand full comment
Christina the StoryGirl's avatar

YES.

Expand full comment
Matto's avatar

Has anyone here watched Eddington?

It's a "neo-western", portraying the early days of the pandemic in a small town in New Mexico in the context of the left-right culture war.

It made me laugh more than I thought it would. I think it does a good job portraying both sides as they see themselves and as seen by the other side. At the same time it captures the confusion of the first days and the spectrum of people's reactions to it, and all the little tragedies that turned into big ones with time. It really captures that weird time that now, sitting and drinking a good cappuccino and watching kids load up on the big yellow summer camp bus, seems like pure fantasy.

Expand full comment
Nobody Special's avatar

I thought it was solid - overall probably 7/10 stars.

9/10 stars for the first half, which is a fantastic time portal to the paranoid and chaotic environment of 2020. Where I *thought* it was going was an escalating destructive conflict between the Sheriff and Mayor, each viewing the other as overly paranoid and tyrannical about a threat (protest violence in the case of the Sheriff, COVID in the case of the Mayor) that wasn't actually present in their small town. That conflict then pits the two community leaders against each other, driving a wedge in the town itself as people line up against neighbors they've lived alongside for their whole lives for the sake of things happening in Minnesota, New York, and San Diego. And all along, the whole conflict itself isn't even really about COVID or riots, because although the Mayor and Sheriff may each think of themselves as fighting a monster in a righteous political cause, in their hearts the true driver of their anger at one another is just an interpersonal feud revolving around the Sheriff's wife. They've put a political mask on that conflict to make it respectable and justify it to themselves, and tragically that mask enables it to spread and infest their whole town. That was the vibe the film had for me through the first half, and I very much loved it.

Then it took a major pivot, and in my opinion, a modest step back, and became a sort of nihilistic character study of a man making ever worse decisions as he confronts, and is emotionally crushed by, his total lack of control over the world around him. Still pretty good, but the kind of darkness-all-the-way-down story that is very much an acquired taste. Still had me on for the ride, though. 7/10 through the 3rd quarterish.

It really jumped the shark for me at the end, though. To try to express it without spoilers, it's on this dark meditational ride through the west, but has this whiplash-inducing "and then the space aliens show up!" kind of sudden introduction of a very out-of-nowhere addition to the conflict. It's like you're on this nihilistic ride about a man struggling with his insignificance and lack of control in a world of overwhelming complexity... but then lizardmen show up with a mind control ray, and now you're on a nihilistic ride about a man struggling with his insignificance and lack of control in a world where lizardmen use ray guns to control his thoughts from their lair deep in the bowels of the Earth. The theme of powerlessness is still fundamentally present in the new narrative, but it's a sharp turn to say the least. 3/10 down the stretch.

Still, overall a solid movie that I found worth the cost of the ticket. Endings can be hard to stick.

Expand full comment
TGGP's avatar

New Mexico actually WOULD be an appropriate place for space aliens to show up. I don't know how close Eddington is supposed to be to Roswell.

Expand full comment
Harold's avatar

If AI takes off, and revolutionizes the working world (I'm making an assumption right now, that we're not talking about evil AI that will destroy humanity or anything like that, just yet) will we need to switch from our current economic model to a different one? For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses? Will currently-bad jobs end up paying more? Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone? How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully? Whatever you envision happening, how do you see it playing out over time?

Expand full comment
Melvin's avatar

I don't believe there is such a thing as an "economic system". The economics of the future will, like the economics of the past, reach some kind of balance of power equilibrium between various groups of people; an "economic system" is just a different equilibrium.

If the value of human labour really does go to near zero then this will definitely create a new equilibrium, but I certainly wouldn't expect the suddenly-useless masses to start getting a *better* deal out of this. If the great unwashed no longer have useful labour to offer then all they have left is the threat of violence, which is not nothing but it's not a great equilibrium to be in. At best, they can hope to be offered drugs and video games 'til they quietly go extinct.

Expand full comment
Citizen Penrose's avatar

Because of oil revenues and immigrant labour, Saudi Arabia has an economy where most citizens are economically superfluous. The average Saudi gets free government-provided necessities (health care, housing, petrol, food, etc.) and has a government-provided sinecure that gives them an income with minimal obligations. Apparently they spend most of the day socialising at friends' houses.

I'd be surprised if in a similar scenario (presumably one that's even richer) western societies came to a less humane arrangement than Saudi Arabia, hardly a country renowned for its progressivism, has managed.

Also, we at least ostensibly could vote for UBI.

Expand full comment
Kenny Easwaran's avatar

One possibility is a transition away from the employment economy that has dominated the past two centuries in the UK and Belgium and shorter periods in other parts of the world. The fact that employment has been the dominant arrangement for such a small part of history makes it very plausible that it will be replaced again.

But I think it’s also possible that the employment mechanism is more resilient than we think - there will be large transition costs, comparable to what goes on in countries experiencing a civil war, but with people inventing new productive things that are worth doing now that you can supplement your labor with AI, even as the old things people used to do for employment are easily done by far fewer people working with AI. On this picture, there’s a lot more people starting new businesses and otherwise being entrepreneurial - eg, Disney needs a lot fewer employees to make a film, but also some random student who has a great idea for a film can now bring it to fruition themself with the help of a lot of AI, and similarly for new product ideas. (Interestingly, the rate of entrepreneurship took a big jump up in 2020, and what I’ve heard suggests that it has only continued to rise since then: https://www.statista.com/statistics/693361/rate-of-new-entrepreneurs-us/ )

Expand full comment
Erusian's avatar

> Will we need to switch from our current economic model to a different one?

No.

> For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses?

Automation doesn't create unemployment over the long run. So that won't happen so we won't need to deal with it.

> Will currently-bad jobs end up paying more?

Yes. This is what increases in productivity lead to. You get paid more in real terms because you earn more and what you earn can afford more.

> Will we need to instate a UBI?

No.

> Will there be enough resources to give an amazing UBI to everyone?

Depends on your definition of "amazing UBI." We already distribute more resources for free to the poor than many countries earn on average. This is not generally thought of as a UBI but that net could certainly grow.

> How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully?

There will be no such switch nor will it be needed. You're assuming a premise here.

> Whatever you envision happening, how do you see it playing out over time?

Similar to other gains in productivity. There's nothing about even the most optimistic realistic predictions for AI that look different than the gains in productivity caused by things like industrialization. We're looking at maybe a few percentage points of better productivity growth maximum. That's a huge deal but we've seen countries have decades of 10+% and it didn't lead to the doom some AI types want to claim.

Expand full comment
Humphrey Appleby's avatar

I'm going to challenge this. Your scenario seems plausible to me, but it's not the *only* plausible scenario. In particular, while the standard theory of comparative advantage says that total output should only go up under introducing AI, it says nothing about how total output is distributed between wages and returns on capital, and one can certainly conceive of scenarios where total output goes way up, but wages go way down.

This guy's been writing papers that seem insightful to me https://www.korinek.com/

Big difference based on whether the `complexity of cognitive economic tasks' is bounded or unbounded. If it's unbounded, then you get the `usual' historical pattern where output and wages go up. However, if the complexity of cognitive economic tasks is bounded, and if AI can saturate that bound, then you can get a scenario where market clearing wages abruptly collapse to, more or less, the price of electricity, and where `total output' goes up, but almost all of the output becomes return-on-capital, with wages dropping to near zero.
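
To make the dichotomy concrete, here is a deliberately toy sketch (my own illustration, not Korinek's model; the function, names, and numbers are all assumptions): wages stay at the human marginal product as long as some demanded task lies beyond the AI's bound, and fall to the AI's marginal cost once the bound saturates everything.

```python
# Toy model of the bounded vs. unbounded cognitive-task scenario.
# Illustrative only; not Korinek's actual formalism.
def market_wage(tasks, human_value, ai_bound, ai_cost):
    """Competitive wage for a human worker.

    tasks: complexity levels the economy demands.
    Humans keep their marginal-product wage if any task exceeds what
    the AI can do; otherwise AI competition pushes the wage down to
    the AI's marginal cost (roughly the price of electricity).
    """
    if any(t > ai_bound for t in tasks):
        return human_value
    return ai_cost

# Unbounded complexity: harder tasks keep appearing beyond the AI's reach.
print(market_wage([1, 10, 1000], human_value=30.0, ai_bound=100, ai_cost=0.5))  # 30.0
# Bounded complexity, saturated by AI: wages collapse toward ai_cost.
print(market_wage([1, 10, 100], human_value=30.0, ai_bound=100, ai_cost=0.5))   # 0.5
```

The cliff between the two print statements is the point: the transition is abrupt rather than gradual once the last human-only task falls inside the bound.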

Of course, this kind of analysis is really a bit spherical-cow-in-a-vacuum - ultimately this is a political economy problem, and our current political system seems unlikely to tolerate an economic system where, to exaggerate for effect, Sam Altman and Dario Amodei own the entire economy while everyone else starves. Then again, it could be argued (somewhat plausibly) that universal-suffrage democracy was downstream of an international security environment where the ability to mobilize mass armies was critical to state survival, and that once we transition to largely robotic armies, this might lead to states that look very different. So maybe (again exaggerating for effect) Sam Altman ends up as world dictator with his robot armies enforcing the Pax Altmanica. Really, a lot turns on just how super ASI is...

Expand full comment
Erusian's avatar

Thanks, I'll read the papers. My prediction is, in crude terms, that AI will be broadly like the internet, computerization of industry, the steam engine, etc. In other words it will significantly boost productivity but not be different in kind from those innovations. I don't think AGI changes this analysis except it will be an even bigger boost to productivity.

Of course, you can imagine it will be otherwise. Killbots or unlimited superintelligence or something. And that's the level of a lot of theorists so, to be frank, I'm being flippant. Because if AI is really that ubiquitous the rising productivity itself solves all problems. I do not in fact think that's the likely scenario. But it suffices to rebut a certain kind of lazy AI skepticism because I can fully grant their most extreme scenarios and it actually helps me.

If AI is instead a normal technology then it can't be as world bending as people want it to be. But also that implies the problems you're bringing up, of distribution, precisely because it will not be dramatically disruptive as they're imagining. That doesn't lead to problems of technological unemployment or people not being able to have jobs. But it could certainly lead to short term dislocations and long run new equilibria that may have unexpected or negative effects. But the lack of a rapid destructive takeoff means I trust an active system to adapt.

As to the idea of cognitive tasks being bounded, that strays into the territory where the extremity itself solves all problems. If you are proposing cognitive tasks are saturated you are by definition implying limitless and cheap cognitive function that is universally available. That won't cause a collapse in wages. That will cause a massive increase in real wages through deflationary effects. That implies a world where everyone has a genius recruiter whose full time job it is to find them the best job, a full time doctor whose job it is to track their health obsessively, a full time shopper to find them the best deals, etc etc. If they don't have that then there are still undone cognitive tasks.

I think it's unlikely that AI gives private individuals or non-democratic governments superior military capacity to traditional states. Though it may allow some accumulation of power that allows democratic overthrow. I'm not even sure about that though. I think a lot of the anti-democratic pressure we're seeing is non-technological.

Expand full comment
Humphrey Appleby's avatar

I'll observe also that you are basically employing `outside view'/ reference class forecasting. I think this is generally wise, but the whole trick of course is to identify the right reference class. Your preferred reference class involves other major technological innovations like the internet or computerization of industry. This is also my prior for the most likely reference class. But there are other possible reference classes. The industrial revolution was pretty hard on small scale artisans - society as a whole grew a lot more prosperous, but in the medium term there were certainly large groups that suffered. The agricultural revolution (farming) was rough on a much broader swath of the population - as I understand it, while agricultural societies are able to support much higher population densities than hunter gatherer societies, they also have much worse nutrition and health for typical members, at least at the beginning, and of course a much greater amount of `backbreaking labor.' Can we be sure the agricultural revolution is not the right reference class here? And then there is the chance of a truly `outside context problem' for which there is no reference class.

See also Nate Silver's idea of the `technological Richter scale' https://thezvi.substack.com/p/ai-and-the-technological-richter. I think your prior is basically right if AI tops out at not higher than an 8.5. If it rises to a 9 (comparable to fire or agriculture) then I'm much less certain. If it goes to 10 (more significant than anything that has happened before), then we are dealing with a genuinely outside context problem and all bets are off.

ETA: (Zvi thinks AI is already an 8.5. I'm inclined to disagree, I think if development stops here we're probably looking at something more like a 7.7. But this is a quibble. Also plausibly Zvi is more expert in the use of AI than I am, so maybe you should be weighting his guesstimate higher than mine).

Expand full comment
Erusian's avatar

Sure, I expect AI to have winner and losers like any economic change including non-technological ones. And the trivial counter to "there will be no long run employment" is that in the long run we're all dead. At this point we are back in the realm of reality where we're discussing how fast it will happen, who will be affected, etc. Personally I think the closest analog is the internet or telephone because it's largely a cognitive technology which can travel along already existing infrastructure. Which means a relatively fast spread but "relatively fast" still means like one to two decades. We're already at year 3.

The agricultural revolution produced more food (more total calories) more consistently immediately. But it was a less diverse diet. This meant for the first few centuries they had worse nutrition. This ended fairly quickly though as they were able to make orchards, cheese, livestock, etc and eventually had a better diet before the end of prehistory. There were also more diseases because human density is more prone to disease. However, farmers had less starvation and more wealth because they could build and store things for years. While they did more work, they also received more wealth for that work. (And yes, the wealth was less equally distributed, but there was more of it for everyone.)

And I'm not sure they really did more work considering that hunter gatherers were constantly moving. We just don't count that as work. In general I think the concept of leisure time is basically a product of the wage labor system and prior to that you're just arbitrarily counting tasks as 'work' and 'not work' in a way that's hard to defend. The counter is that hunter-gatherers had fewer stress and hard-labor markers. But of course they also didn't have the fruits of that labor, like a house. How much is it worth to you that your father doesn't have to be left behind when he's too old to make it over the mountain?

Anyway, what both agriculture and industrialization did was change the fundamental human relationship to economics in the broad sense. Pre-agricultural, pre-industrial, and industrial societies have entirely different concepts of work, domesticity, social relations, etc. I struggle to see how AI will affect that. To give a simple example, preindustrial societies were predominantly rural and industrial societies are primarily urban. And even within cities, it clustered people significantly closer together and in greater density. It also made wage labor the predominant way most people make a living. What potential changes on that level can AI cause even in the most bull case?

I expect it to just be another form of automation. No doubt a valuable one. But not that different from what came before. People have been predicting mass unemployment or disaster or some ineffable fall from technology for over two hundred years. And there's no good theoretical reason to think they're ever going to be right.

Based on the examples for Richter Scale: I'd say we're currently in the 6 range but highly likely to get to 7s. If we get the most realistic bull cases then we can get to an 8 range.

Expand full comment
Humphrey Appleby's avatar

>> Based on the examples for Richter Scale: I'd say we're currently in the 6 range but highly likely to get to 7s. If we get the most realistic bull cases then we can get to an 8 range.

I think this is the crux of your entire disagreement with the AI maximalists, who believe (I think we can take Zvi as steelman/spokesperson) that we are already in the low 8s, and reasonably likely to get to 10 or beyond.

I agree with your analysis as long as AI doesn't go beyond about an 8.5. I read the OP however as positing something more in the range of the mid 9s. And the full on doomers as positing a 10+.

Expand full comment
Humphrey Appleby's avatar

Among Korinek's papers the one to start with is probably `scenarios for the transition to AGI.' It is scenarios, plural, as he considers many ways in which the transition could play out, under different assumptions. Some of them look like your prior, but not all of them. It's all the economics version of a spherical cow, in that it assumes markets rapidly equilibrate and that there are no major changes to how society is structured. However, within those limitations, it strikes me as quite perceptive.

Expand full comment
Erusian's avatar

Thanks, I'll definitely take a look.

Expand full comment
Peter Defeel's avatar

> Automation doesn't create unemployment over the long run. So that won't happen so we won't need to deal with it.

It sure made a lot of horses unemployed.

As the commentator below implied, there's no certainty that this new type of automation will allow humans to move up the value chain. From agriculture to factory work to office work, there was previously a path to increasing value per employee and thus wages per employee. Even with that, retail employees are barely earning subsistence wages in large cities. Some people moved up the value chain, many moved down.

Future automation will replace well-paid office work before it replaces manual labour, which will decimate the existing office-based middle classes. ChatGPT informs me that on the broadest definition of office workers (i.e. all admin) that's 50% of the workforce in the US and close to 65-70% of all salaried income.

These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?

Expand full comment
Erusian's avatar

>It sure made a lot of horses unemployed.

In fact it didn't create any horse unemployment as horses are property and so never employed or unemployed. And it did not create a significant drop in the horse ownership rate, just in the horse population. The remaining horses live significantly better than their ancestors. But I get it's a slogan that hasn't really been thoroughly thought out.

> From agriculture to factory work to office work, there was a previous path to increasing value per employee and thus wages per employee

Again, this point is logically incoherent. It simply does not make sense. If AI doesn't increase productivity then it's inefficient to invest in it. You can't simultaneously have it so efficient it replaces humans and yet not create significant economic benefits. If it does increase productivity then it makes everyone richer, which is why retail clerks today live significantly better than retail clerks a century ago and even significantly better than upper class people a century ago. There was a reshuffle of social status but AIs don't compete for social status anyway.

> Future automation will replace well-paid office work before it replaces manual labour, which will decimate the existing office-based middle classes.

Okay. So in that scenario there won't be technological unemployment. I assume that work was valuable (if it wasn't we could improve the economy simply by not doing it). If AI does more of that work, at better quality, while not being able to do physical labor, then that's a world where humans handle physical labor (presumably assisted by tools, machines, etc) and have access to infinite cheap services of every kind. That is not a dystopia and is an improvement over the current world.

> These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?

Note how you're shifting the burden of proof from your claim (AI is totally unique) to my claim (AI will function like all previous technological advances). I do not think the burden of proof is mine. Further, even if AI is strictly superior at every job this will not create unemployment unless it is so abundant that it can fill all demand for jobs and it is cheaper. If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.

Expand full comment
Humphrey Appleby's avatar

>> If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.

This really depends on how the society is organized and how the output is allocated (which are questions strictly speaking `outside economics'). It could be a post scarcity society OR a dystopia. Or, conceivably, both.

Expand full comment
Erusian's avatar

I mentioned elsewhere that full government control or monopolies could disrupt this process. But those are not features of the current economic system and so don't need radical reform to avoid.

I do think that you end up with two cross cutting bets. If you think the danger is from rogue AI wiping out humanity you want centralization. If you think the danger is from someone monopolizing a new central economic resource the danger is from centralization. I'm more in the latter camp.

Anyway, you can imagine a scenario where infinite AI robots can do any task for $5 an hour with a human minimum wage of $10 an hour and where there is no welfare whatsoever, such that anyone without a few thousand dollars of capital is permanently locked out. But my suspicion is that we would divert the tiny part of economic production necessary for welfare. Because we do that today and I don't see why AI would make us less generous.

Expand full comment
Humphrey Appleby's avatar

Yeah, the world where there is a single dominant ASI and it's really super (but still under the control of a human owner) looks very different from the world where there is a broad ecosystem of AIs of roughly comparable power. I think the latter is more likely, but I don't have an argument that the former is impossible. [Also, if we end up with the former scenario and the ASI is sufficiently super, then we could abruptly find ourselves living under a dictatorship of the ASI owner, and that world could look very different in terms of how it's organized based on the whims of said owner].

Expand full comment
DangerouslyUnstable's avatar

> automation doesn't create unemployment in the long run.

This has been true _so far_. There are compelling arguments that AGI+robotics would change this. Should we blindly believe these arguments? Of course not. But when the rebuttal to them isn't any better than "That's never happened before", we also, in my opinion, shouldn't be quite so confident that it absolutely, definitely, won't happen this time.

In the absence of a minimum wage, I think you would have a stronger argument that it won't, because no matter how good and cheap AGI is, it would always be worth hiring humans at a low enough wage. But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.

But _with_ a minimum wage, it is at least possible that AGI + robotics would _always_ be a better/cheaper option than hiring a human.

There are paths where this might not happen: we might decide to desire specifically human made things in a way that we are willing to pay significantly more for them, as one example.

So there are cases for why it might not happen, but I have not yet read an argument where, assuming both AGI and good robotics, that human employment is default guaranteed in the absence of any special conditions.

The core issue is your second point about increasing productivity of humans. There was a time when computer chess engines could always beat a human, but a computer + human team could usually beat a computer alone. This was the period of "productivity enhancement". That time is gone. A human can no longer improve the performance of a chess engine, and a lone computer will generally beat a computer + human team (assuming of course that the human is actually contributing anything). AGI + robotics is the first technology that has the _potential_ (not guarantee, but potential) to, in a general and widespread manner across all domains, make humans no longer productive in the system. Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on its own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).

Erusian's avatar

> There are compelling arguments that AGI+robotics would change this.

You can assume that AI will radically change in its effects compared to what AI currently does and compared to all historical precedents. However, this is not a rigorous belief. It may be compelling, but many people find many things compelling for a variety of reasons. Basically, you're arguing: "AI will become different from every other technological innovation, including how AI itself has been for the past few years." This claim does not have strong evidence behind it, and it requires exceptionally strong evidence.

> But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.

If AI raises productivity, then it decreases the amount of money you need to earn to have a living wage, because it makes everything cheaper. If AI doesn't raise productivity, then humans remain competitive. This is a simple logical contradiction in this ideology. They are imagining a world where AI decreases costs and increases production but does not decrease price levels. This is only possible if there are AI monopolies (whether private or government controlled) doing rent seeking. Otherwise competition produces downward pressure.

In fact, what AI would do in that case is be hugely deflationary, which would make everyone richer. And it would likely necessitate the purposeful creation of inflation to absorb the excess production. But existing welfare can be used to handle that.
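To put toy numbers on the deflation point (a minimal sketch; every figure here is invented for illustration):

# Toy model: productivity gains that lower prices also lower the
# income needed to afford a fixed "living wage" basket of goods.
basket_cost_today = 30000.0  # yearly cost of the basket, in dollars
productivity_gain = 2.0      # suppose AI doubles output per unit of input
pass_through = 1.0           # share of cost savings competed away into prices

# With full competitive pass-through, prices fall in proportion.
basket_cost_with_ai = basket_cost_today / (1 + (productivity_gain - 1) * pass_through)
print(basket_cost_today, basket_cost_with_ai)  # 30000.0 15000.0

# With pass_through = 0.0 prices never fall, but that is exactly the
# monopoly rent-seeking case described above, not the competitive one.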

> So there are cases for why it might not happen, but I have not yet read an argument that, assuming both AGI and good robotics, human employment is guaranteed by default in the absence of any special conditions.

If you assume that we have unlimited AI and robotics then you will produce unemployment, but only in the sense that every person will have a personal army of AI and robots. If there is any need that AI or robots can't meet, that is an opportunity for human employment. I guess technically everyone having their own robot army and living on its produce is unemployment, but it's not a problem.

I also don't think that post-scarcity is actually coming. But I'd welcome it if it did.

> Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on its own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).

Humans being crowded out of specific jobs doesn't create long run unemployment. It only matters if they are crowded out of ALL jobs. They can only be crowded out if AI+robotics are better than humans: not just individually (i.e., a robot outcompetes a human), but so unlimited that the robots are preferable in all cases. They also have to be so abundant that you never run out. If that is the case, we are in post-scarcity. If it is not, then there will still be jobs for humans.

There's also no sign this is happening. Current best estimates are that these tools are providing 1-1.5% productivity growth per year. That's gigantic, but it's not a society-ending disruption.

DangerouslyUnstable's avatar

This is just choosing to disbelieve in AGI (in the strictest definition of what that means). Which is fine, I don't think that's an insane thing to believe. But I think it's important to be clear that that is what the assertion is based on. A lot of people (especially around here) disagree with you in that belief.

Also, when the comment you were replying to specifically asked about cases where AI revolutionizes the working world, you start to hit a narrower and narrower path where AI improves enough over current capabilities (I don't think current AI will "revolutionize" anything, although it will have impacts) but doesn't get to fully generalized intelligence.

Erusian's avatar

No, it isn't. I can fully grant that AGI will exist and still believe it will create no long run unemployment. My point is that even granting the premises of AGI it will still not generate structural unemployment unless it meets two standards:

1. It must be cheaper than humans for all tasks, such that it is never economically viable to use a human.

2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.

Only at that point will there be no chance for humans to work and contribute economically. This is true even if AI is better than humans at all tasks.

But if 1 and 2 are true then we are definitionally in a near post-scarcity economy because we are in a world where there is an infinite supply of capacity which is extremely cheap.
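A minimal sketch of those two standards (the task costs and capacity numbers below are invented for illustration): a human finds work whenever either condition fails.

# Humans are fully crowded out only if AI is cheaper for EVERY task
# (condition 1) AND AI/robot capacity never runs out (condition 2).
def human_gets_hired(tasks, ai_capacity):
    # tasks: list of (ai_cost_per_hour, human_cost_per_hour) pairs
    for ai_cost, human_cost in tasks:
        if human_cost < ai_cost:  # condition 1 fails: human cheaper here
            return True
        if ai_capacity <= 0:      # condition 2 fails: AI fully occupied
            return True
        ai_capacity -= 1          # this task consumes one unit of AI capacity
    return False                  # both conditions hold: no work for humans

tasks = [(5.0, 10.0), (5.0, 10.0), (5.0, 10.0)]
print(human_gets_hired(tasks, ai_capacity=2))   # True: AI runs out on task 3
print(human_gets_hired(tasks, ai_capacity=99))  # False: the post-scarcity case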

Jeffrey Soreff's avatar

>1. It must be cheaper than humans for all tasks, such that it is never economically viable to use a human.

>2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.

(1) is sufficient to remove humans from all jobs without (2) as an additional condition. If (1) is true, then using humans for additional tasks would not be economical even when all AI and robotic capacity is fully occupied.

My best guess is that we will get AGI (a potential functional replacement for humans in all economic roles), _possibly_ only economical to displace 1st world workers initially, with costs then falling until (1) is true globally, for any worker at any living wage anywhere.

If we are lucky, and AGIs (and ASIs, if they are feasible) stay under human control, then a sensible way to run such a society is, as beleester noted in https://www.astralcodexten.com/p/open-thread-392/comment/139743176 , to just have money flowing from consumers to AI companies (including AI-driven factories, farms, etc.), money flowing in taxes from AI companies to the government, and money flowing from the government to the citizens as UBI.

Hypothetically, suppose that GNP quadrupled in a fully AI economy. Say that all of it flows into AI companies. Say half of that goes to taxes and the other half goes to owners of the AI companies (who spend part of it on AI company products and a little on human servants, if they really want them). Of the half going to the government, say half goes to government purchases (mostly AI company products, and a little on humans doing something status/power-seeking/human-specific, like beating dissidents) and the other half goes to UBI for citizens. This would leave all citizens with the same standard of living as today, except that they would not have to work.
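As a check on the arithmetic (a tiny script; the quadrupling and the 50/50 splits are just the assumptions above, normalized so that today's GNP = 1):

gnp_today = 1.0
gnp_ai = 4.0 * gnp_today    # assumed quadrupling in a fully AI economy

taxes = gnp_ai / 2          # half of AI-company revenue goes to taxes
owners = gnp_ai / 2         # the other half goes to AI-company owners

govt_purchases = taxes / 2  # half of tax revenue: government purchases
ubi = taxes / 2             # the other half: UBI to citizens

print(ubi)  # 1.0, i.e. today's entire GNP, distributed without work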

If we _got_ a purely AI economy under human control, and got a factor of 4 increase in GNP from the technical advances connected with the shift, and can't manage to do something like this, because we have a job-centered ideology, then we are idiots, and can't take advantage of a bonanza on a silver platter.

gdanning's avatar

>Will currently-bad jobs end up paying more?

No, they will pay less, because presumably the supply of workers in that segment of the job market will increase.

>Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone?

Yes, and yes. But the UBI will be in the form of government employment (and there are certainly lots of potential jobs, from companions for elderly people to teachers and teacher assistants* to free lawyers for people who currently do not get free lawyers, etc.). And that will itself spur demand for goods and services: https://www.investopedia.com/terms/f/fiscal-multiplier.asp

*There will always be some percentage of students who, at the very least, will need personal attention to stay on task. If today we employ two teachers to teach two classes of 30 each, tomorrow we could have one teacher supervising an AI classroom of 55, and five more teachers giving individual attention to one student each.

Jim's avatar

But why wouldn't you just let the AI give individual instruction to students as well? We're assuming that at this point, AI is more competent than the average public school teacher, so it doesn't make any sense to let human teachers teach them...

gdanning's avatar

AI will indeed be giving individual instruction. But, as I said, "There will always be some percentage of students who, at the very least, will need personal attention TO STAY ON TASK." Not to instruct them. https://thisvid.com/videos/student-gets-dick-touched-in-the-classroom/

Edit: The point is that AI + teachers will likely lead to more learning than AI alone, even if AI alone > teachers alone.

And students, esp younger students, have non-academic needs. AI can't help a kid who just wet his pants, or is being bullied, etc.

lyomante's avatar

It will be like retail, but for everyone.

Retail has had huge productivity growth, with a resultant loss of staff. The old British TV show Are You Being Served? is a good depiction of older department-store retail: many full-time staff, expansive facilities (multiple floors, with an elevator even), and lots of goods.

Look at a GameStop or Dollar General now and you have a just-in-time economy run mostly on part-timers with variable schedules, maybe 3 or fewer to a store in total. If you work in retail now you are not able to be independent; you live with family or may even be homeless and working (when I worked at Kohl's, the whole truck crew was three people).

This will be everyone's future. Everyone will live together; only the rich can escape the house, while a lot of people just live where they were born, or with their parents till they die. No UBI, maybe even less welfare.

TakeAThirdOption's avatar

Which is, I believe, the situation humanity as a whole is already in. For humanity, it will only get a bit worse.

Mark Neyer's avatar

There are so many branching points that could radically alter things. That said, here's my hunch at a centroid, predicated on maximal change:

- A) AI that can write code well enough to replace most developers can invest well enough to replace most investors, leading to mass white-collar layoffs. This is a death sentence for giant cities, and a big deflationary risk for the economy

- B) massive gains in efficiency lead to lower costs of production, also a deflationary risk for the economy

- combining A) and B), you'll have a significant deflationary impulse: lower cost of production plus mass layoffs. The economy cannot handle deflation and the money printer will go BRRR. The end result will likely be printing money which goes towards a basic income to offset the social costs of large-scale unemployment.

The combination of A&B will drive much more demand to live in places with lower cost of living. Big Cities will become much more dangerous, less pleasant places to live, with fewer jobs and more crime.

- we will see federal subsidies for energy production + manufacturing (since both are dual-use technologies) and something of a rural + small-town renaissance

- employment will no longer be the default economic arrangement, because AI makes a better employee but likely a worse marginal risk-taker and human-relationship cultivator. Businesses will want more equity-partner-type arrangements, where a human (or small group thereof) overseeing an AI-driven business unit owns and is accountable for all the risks in exchange for a cut of the rewards. The more the commands can come from the top, and the less understanding of value is necessary, the easier it is for humans to be cut out of the loop.

Instead of mass employment, you'll see much more entrepreneurship as people move with their basic incomes to smaller towns + cheaper CoL areas. Explosion of craft beers, things like games and entertainment, but also therapy, personal training, dietitians, etc. We will see AI enable way more small-scale entrepreneurship than growth for existing companies. AI will suck at "this new product will get you the whole market" because chaos and unpredictability are still a thing; existing ventures won't benefit from cheaper economic experiments, because reputation risk is existential for them but nonexistent for new players. Coke can only use AI to make Coke cheaper to produce, or maybe make marketing dollars more efficient; it's not like AI will have everyone drinking twice as much Coke as before. But a new drink with the right mix of protein + probiotics sold in a specific location: now that becomes viable, at a small scale. So if the economy is an ecosystem, I think AI leads to an explosion of small-scale ventures, much more so than growth of existing big ones.

I think cities will be the big losers here, as their raison d'etre gets killed. The "winners" are smaller to mid-sized towns. There's another shift that will happen, with the newly printed money offsetting decreased production costs + unemployment. Governments at all scales will get grabbier, taxing AI production and leading to an increase in grey/black-market economies. The end result is a much higher price of bitcoin, as value from both equities and land drains into bitcoin, since the first two are easier and cheaper to tax.

WoolyAI's avatar

So we're talking...widespread human-level agents but no ASI inventing nanomachines or anything?

In that case, the current economic system will keep working, but probably 6 million people will permanently fall out of the workforce. This has happened every time we've had a big tech jump, including automating a lot of our manufacturing in the 80s. See some variation of this graph of the Male Employment Rate, Age 25-54, where in every recession there's a fall in the employment rate and then a rebound, but never quite back to as high as it was.

https://fred.stlouisfed.org/series/LREM25MAUSM156S

What should happen is that old jobs go away and we discover new jobs. I can kinda see that now: I know a guy who was a programmer, found a niche company doing Java development, didn't move, got laid off, and now he's roofing houses. Nothing wrong with that, and it's a skill we need. But every time there's a disruption, not everyone gets a new job, for some reason. That's a point of debate.

But if a bunch of service jobs go away...we used to all be farmers, then we all worked in manufacturing, now we're all in services, something new will come up.

Peter Defeel's avatar

The answer is some kind of communism. Give people money for working for the state in some capacity, rather than just giving money for nothing.

That said, nobody seems to understand where the money for UBI is coming from. To my mind it has to be printed from nothing, but I’m open to suggestions.

lyomante's avatar

I've mentioned in other places that we already have two models, no communism needed: the armed forces, and prison.

The military is pretty much what you describe, and it distributes your UBI.

Peter Defeel's avatar

Having everybody in the army is kinda communist. Although you may run out of wars. So maybe just the army without the wars?

lyomante's avatar

Not communist; there is no equality or "each according to his needs," just commanders, soldiers, and the cause. The Civilian Conservation Corps was the "army without the wars."

Peter Defeel's avatar

> each according to his needs

Oh, no actual communism was like that either. There were plenty of wage disparities.

beleester's avatar

If the economy is being wholly run by AIs, then whoever owns those AIs is going to be very, very, very rich - rich enough that you could easily tax a fraction of their income to provide UBI for everyone else.

Alastair Williams's avatar

Surely the state would eventually just seize the AIs. If they're that dominant they'd be essential for warfighting and having them in private hands would be the equivalent of allowing someone to maintain a private nuclear arsenal.

Peter Defeel's avatar

This is an economic fallacy. Wealth isn’t just gold bars in a vault — it’s claims on future goods and services. Stocks, bonds, houses — they all derive value from someone, somewhere, being willing and able to pay for things in the future.

This is true of AI as well; in fact, if the rest of the economy collapses, I'm not sure what the AI market is.

beleester's avatar

Yes. The AIs are producing goods and services, and we are giving people a claim on some of those goods and services. (Or rather, transferring the claim of the AI's owner to other people who need it more.)

As far as I understand this shouldn't lead to inflation since the amount of money in circulation still matches the amount of "stuff" being produced, it's just that money is primarily circulating through the government (going out to unemployed people, and back in through taxes) rather than being directly exchanged between citizens.
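One rough way to see it is the quantity-of-money identity M*V = P*Q: if real output Q and the money flow M*V scale together, the price level P stays put. A toy check (all numbers made up):

M, V = 100.0, 4.0         # money stock and velocity today
Q = 400.0                 # real output today
P = (M * V) / Q           # implied price level: 1.0

Q_ai = 4 * Q              # AI quadruples real output
V_ai = 4 * V              # same money cycles faster through govt and citizens
P_ai = (M * V_ai) / Q_ai  # price level: still 1.0

print(P, P_ai)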

Peter Defeel's avatar

> The AIs are producing goods and services, and we are giving people a claim on some of those goods and services.

How exactly are you doing that? That's what I'm asking. To induce that demand you can't tax the "wealth" of the AI companies, who won't exist anyway unless there are other companies to buy the product, and those other companies won't exist unless there's demand from consumers, who won't be able to buy anything as they won't be employed.

So demand needs inducing somewhere, and there’s nothing to tax.

Ch Hi's avatar

The only thing that gives paper money value is that the government demands that you pay them paper money in taxes, or they'll take all your stuff, and perhaps take you also. So EVERYONE needs money (if they have any possessions).

Also, remember "eminent domain". The government can just take anything it really wants to take, and pay whatever it decides is a "fair value" for it in paper money.

Alastair Williams's avatar

Maybe to further the question, if AI progresses to the point where it can handle most jobs, do you still have software companies? If it is just one person at the top directing a bunch of AIs, then what moat do you have? What stops OpenAI from cutting out the middle man and also replacing the person directing the AIs? Why not just have AIs all the way down?

In the extreme, knowledge and the ability to work lose all value. The only remaining thing is what assets and hardware you have that you can sell or rent to the AIs.

We're probably going to hit the pitchforks and burning datacenters stage way before that, however.

Deiseach's avatar

Well, I am now anticipating the AIpocalypse coming much sooner than I expected, because my very much non-techie boss has recently used ChatGPT - "it's so convenient for emails!"

I have no idea who told her about it or showed her how to use it, but if she's using it, then everyone will be.

In the short-term? I think businesses will use it to reduce overheads by layoffs (voluntary or otherwise) and/or simply not hiring on new human staff. The knock-on effect of that will be more people looking for fewer jobs, until (we are being told) all the new jobs magicked up by AI open up and we all have shorter working hours and way more money.

Yeah, I'll believe that last when I see it and not a second before.

luciaphile's avatar

It seems like people who aren't very fluent readers, without a reader's grasp of the mechanics of writing (speaking of form, not of content), are the ones who like the output of LLMs (and, before that, those writing programs?).

If your boss is writing an email with AI, it’s pretty certain it was not an email that needed writing.

Deiseach's avatar

There's a good amount of stupid emails that have to be written in the job, a lot of "I got your message and I read it thanks" acknowledgements of announcements from various government agencies and so on. So I could see her getting the AI to précis the long-winded "we're going to be changing our name from the Agency for Counting Staplers to the Stapler-Counting Agency" emails and then writing up an answer to that.

Currently, she gets me to read the "name change about stapler counting" emails and tell her what needs to be done about it, if anything. I am now replaceable by a machine! 😁

luciaphile's avatar

I guess I find it hard to believe that the effort of involving AI in such trivial matters would not be a waste of time for anyone who *belongs* in such a position.

Hopefully she dresses really well or is good at ordering things off the internet for the office or something.

Deiseach's avatar

We are a small operation, providing not-for-profit services (the main childcare centre does charge fees to parents, but the vast majority of those are subsidised by various government schemes, which means a ton of interaction with government bodies).

So she does a lot of work that is necessary to keep the place going, she just delegates a lot of the "read this because I don't have time to do so" emails to me and oddly doesn't seem confident when writing emails/certain letters herself. She's perfectly capable of doing so, and does do a lot of her own emails, which is why I was so astounded to find out she was using ChatGPT!

Myself, I find it easier to write the dang email myself, as it's quicker than trying to run it through one of the multifarious AI versions popping up (I wish Copilot would curl up and die, for one, as I'm fed up with Microsoft trying to force it on me every time I use Office, which is now rebranded as Microsoft 365 Copilot), but I'm a wordcel. My boss is more a numbers person 😁

luciaphile's avatar

Makes sense.

luciaphile's avatar

And I’m not unaware of the need for help in writing real things. My husband is the last American Male English Major, and he is constantly handed all and sundry writing assignments in his completely-unrelated-to-that-major job.* But his coworkers do not struggle with email.

*At least among those who could not possibly remember the origins of his work.

Straphanger's avatar

If anything, hours will probably become longer rather than shorter as competition for white collar jobs intensifies.

Harold's avatar

> Yeah, I'll believe that last when I see it and not a second before.

Same. I'm far from a communist, but I do think it would be the right thing to do if there really are more resources and fewer jobs. But the transition will be a nightmare to navigate, and the interim a really detrimental time.

Deiseach's avatar

Yeah, what I find really hard to believe is all the bright-eyed optimism about "and the companies that own the AI will be *sooooo* rich that taxing just a fraction of their riches will pay for the rest of us", much less the "they will be *soooo* rich they will gladly share that with the employees!"

No company that makes moneybags profits ever wants to give it to the employees, much less pay it in taxes (even Ben and Jerry gave up on the original hippy idealism around CEO salaries). Why else do you think my government was trying to *refuse* a €13 billion windfall from Apple? They did not want to be killing the golden goose (or rather, the golden goose deciding to fly off to another country with an even nicer tax regime for multinationals).

lyomante's avatar

On the plus side, given their increasing autism, at least the shrimp will have their utilons maximized.

TGGP's avatar

Whether they want to pay taxes or not, they will. The only other certainty is death.

gdanning's avatar

>No company that makes moneybags profits ever wants to give it to the employees, much less pay it in taxes

Whether they want to pay it in taxes is rather irrelevant, isn't it? Note that current federal revenues are 17 percent of GDP: https://fred.stlouisfed.org/graph/?g=ockN But the corporate tax rate is 21%, and the top marginal rate is 37%: https://www.nerdwallet.com/article/taxes/federal-income-tax-brackets

So, if AI leads to a transfer of income to corporations/rich folk, total federal tax revenues could easily increase even if tax rates do not increase.
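A back-of-the-envelope version (the 60/40 income split below is invented; the 17%, 21%, and 37% figures are the ones cited above):

gdp = 100.0

# Today: federal revenue is about 17% of GDP.
revenue_today = 0.17 * gdp

# Hypothetical AI economy: say 60% of GDP lands as corporate profit
# (taxed at 21%) and 40% as top-bracket personal income (taxed at 37%).
revenue_ai = 0.60 * gdp * 0.21 + 0.40 * gdp * 0.37

print(revenue_today, revenue_ai)  # 17.0 vs 27.4 (% of GDP)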

David Speiser's avatar

I'm looking for more examples of a thing that I can't find a good name for but is kind of "nominative determinism for words": a word or phrase that has a meaning derived from a modern set of circumstances, yet when its component parts are broken down into their roots they mean roughly the same thing. It's okay if it's a stretch.

I’ll give a couple of examples here. “Astroturf” is a verb meaning “to artificially inflate the popularity of a person or idea”. This comes from a pun on “grassroots”, as Astroturf is artificial grass originally created for the Astrodome in Houston. But if one naively looks at the roots of “astroturf”, one finds “Astro-“ meaning “outer space” and “turf” meaning “to cover with sod”. So a plain reading of the word would be “to cover with sod a place very far away from one’s home”, which fits pretty well with “to pretend that one’s ideas are popular elsewhere”.

“Cellular”, describing a mobile phone, kind of fits too. The word comes from how the mobile network was originally set up (divided geographically into cells). But “cellular” is just “cell” with the suffix “-ular”, a suffix which means “relating to” or “referring to”. And “cell” comes from a French word meaning “a Catholic monk’s quarters”. The purpose of said quarters is to provide a private place for 1-on-1 communication with whoever the monk wanted to talk to in Heaven - generally a saint, the Virgin, or God himself. But if you’re presented with a device and told “this is cellular”, you might think “ah! This is a device that enables private 1-on-1 communication with someone quite far away“ and you would be correct.

They're both kind of a stretch, but that's what makes them fun imo. Anybody got other examples? ChatGPT was utterly useless at coming up with more examples, but maybe I needed a better prompt.

luciaphile's avatar

I thought “Astroturf” contained an element of pretending your belief is not quite what it is, or deflecting attention from its less popular aspects. But now that I think about it, I find I’m unable to define it.

I once brought home a piece of Astroturf from an Astros game. They had recently re-turfed, and these little squares were fan souvenirs 😆. We were more easily satisfied then.

Shankar Sivarajan's avatar

It's not precisely this, but "folk etymologies" might be helpful.

TGGP's avatar

Sounds like a combination of false cognate + backronym.

Whenyou's avatar

So... Peter Thiel, Palantir and his association with rationalism is pretty fucked up, huh?

Rob's avatar

Guess it depends on whether we're talking capital-R Rationalism (the movement and the values commonly associated with it) or lowercase-r rationalism (reason as methodology/tool).

Palpatine could be rationalist but not Rationalist IMO.

Gunflint's avatar

I've been thinking about Thiel quite a bit since reading coverage related to Hulk Hogan's death. At the time it went down, the Gawker lawsuit was not on my radar. Nor was Gawker itself, or any other bullshit gossip website for that matter.

I knew that Hogan was one of those WWF guys probably helped by the fact that Jesse Ventura was at least locally famous.

Hogan and I literally had crossed paths once on one of the Minneapolis urban lake strolling and bicycle trails.

I actually earned a pro wrestler scowl from him for my barely stifled laughter at the ridiculous figure he cut in the real world. Remembering that surreal moment still makes me smile.

I had also read Thiel’s ‘Straussian Moment’ essay after the NYT interview. His thesis there was stated much more eloquently by Jack Nicholson as USMC Colonel Nathan R. Jessep in ‘A Few Good Men’. [1]

I’ll agree it’s always been true that there are bad people in the world and it’s necessary more often than we would care to believe to act in ugly ways contrary to deontological ethics. The consequences sometimes have to come first.

And here we get to the ‘but’. Traditionally these exceptions to deontological ethics have been made by sober minded, patriotic career senior intelligence and military personnel. In 2025 that is in danger of no longer being the case.

It wouldn't be unreasonable to say that Thiel's decision to endorse Trump in 2016 put Trump over the top. Thiel now, and probably always, thought of Trump as a Useful Idiot who would help usher in Thiel's own, IMO kind of insane, post-Enlightenment order.

Thiel has amazing wealth-generation skills, but it's frightening that he puts that wealth to use against the 'up front' ideas and ideals of the American Republic. The dark stuff was always meant to be the occasional exception to keep things on track. The wealthy of course always had a say in what was and was not good for the country, and also, coincidentally, good for General Motors, at least in the prior century.

But those people were not looking to tear things down to the studs and remake them in an order contrary to established Constitutional and civil norms.

I think of Thiel as a dangerous man with a lot of financial power, wielding it for what can best be described as eccentric vanity projects. If things do go south in this country, he has his "exceptional circumstances" New Zealand citizenship in his back pocket.

[1] ‘You can’t handle the truth.’ A Few Good Men

https://m.youtube.com/watch?v=9FnO3igOkOk

Gunflint's avatar

Apparently I’m not the only one that has been thinking along these lines.

Interesting comparison of Elon Musk to Lex Luthor here.

The Rich Are Not Like You And Me.

https://www.programmablemutter.com/p/the-rich-are-not-like-you-and-me?ref=thebrowser.com

Rob's avatar

Coming from a GM/Chevy family, I miss the common-sense association between the well-being of a country and the well-being of its businesses. Too many massively multinational corps with execs trained in the school of Ayn Rand these days, I suppose.

Zanzibar Buck-buck McFate's avatar

I enjoyed Ross Douthat's Frost/Nixon moment: asking him whether he thought humanity should survive, and watching him flounder.

Rob's avatar

I'm not sure if that was Thiel the Transhumanist tripping over a way to say that we should be posthuman, or Thiel the Edgelord thinking that most of humanity should just disappear.

Deepa's avatar

Is there no Roomba-type thing for lawn mowing? I haven't seen anyone use one. Is there a really good product here?

Leppi's avatar

I have experience with two types of robotic lawnmower for a small lawn. Older robotic lawnmowers use a cord carrying an electric signal that has to be dug into the ground to mark the edges of the lawn. More recently, GPS-based lawnmowers have also become available and much cheaper. I definitely recommend the second option, as those cords break frequently, and finding and mending them is a real hassle.

Reversion to the Spleen's avatar

Robotic lawn mowers exist. The NYT even has an article reviewing which are the best.

https://www.nytimes.com/wirecutter/reviews/best-robot-lawn-mower/

Erica Rall's avatar

They've existed for a long time, predating actual Roombas (brand-name iRobot Roombas were first sold in 2002). I remember reading a newspaper article c. 1996 about the CIA headquarters being an early adopter, because using lawnmower robots saved them from having to do security clearances on their gardeners.

I can't offer product or brand recommendations, but I can offer a Tumblr story from 2020 about a herding breed dog named Arwen who encountered a lawnmower robot, decided it was a sheep, and figured out how to herd it.

https://www.tumblr.com/gallusrostromegalus/618714943606423552/while-i-cant-fault-your-reasoning-on-robot

Edward Scizorhands's avatar

https://rtmlandscapes.co.uk/robotic-mowing/

I watched one of these guys. It would run in the rain no problem.

Zanzibar Buck-buck McFate's avatar

Yes, Flymo have one, and there are others; the lawn has to be straight-edged and flat.

demost_'s avatar

They don't need to be flat. In some rural Austrian villages my impression is that almost every household has one, and some of them have pretty steep terrain.

Zanzibar Buck-buck McFate's avatar

Cool, do you have a brand?

demost_'s avatar

I have briefly looked into tests, and Husqvarna came up several times and seems to be suited for steep terrain, while Mammotion was recommended for extremely steep terrain. But Google will provide you with lots of testing reports that compare different models, so you can have a look. (I searched in German, otherwise I would have sent a few links.)

skaladom's avatar

I've seen Kärcher robot lawn mowers around. The simple ones just bounce around the edges like an old-style dumb roomba with no mapping, and after a while the lawn is mowed, which is all one was asking for.

quiet_NaN's avatar

I vaguely remember reading about robotic lawn mowers injuring hedgehogs, whose instinctive behavior of rolling up into a spiky ball served them well against all kinds of threats, but not against these beasts of metal and blades. Hedgehogs are mostly active at night, so I would personally not run any mowing robots during the night in Europe.

https://www.dw.com/en/hedgehogs-threatened-by-robot-mowers-german-activists-warn/a-70160521

demost_'s avatar

Modern robotic lawn mowers detect hedgehogs and steer around them. At least in independent tests this works very well.

Hussein's avatar

What was your experience like with the education system, and what do you think needs to be improved?

The Ancient Geek's avatar

There is more than one education system.

I went to a four-hundred-year-old secondary school founded by a world-famous scientist. A crumbling edifice run by certifiable weirdos. It could be considered Harry Potter without the magic.

The teachers weren't called professors, but they were called masters, because poshness (plus one who was called Doctor, because he had a PhD). The weirdest master was called Spider. He was in the habit of wearing his academic gown (a few of the older masters did, although it was not mandatory). He was also in the habit of using it to clean the blackboard, which, over the decades, had reduced it to a set of stringy tatters... hence the nickname.

It was a direct grant grammar school, a kind of publicly (but not generously) funded school for the academic elite. In many ways an imitation of a British "public" school, with lots of sport and Latin on the curriculum... but a cheap imitation.

Entry was selective, based on a test that you couldn't study for, more or less an IQ test, and it threw up some surprises. There were some very non-academic (but presumably bright) pupils rubbing shoulders with diligent plodders. The sons of the town's independent tradesmen were well represented.

The "Grammar" comes from "Latin Grammar".

Grammar schools were originally about teaching Latin to the poor but bright kids, so that they could progress through the system.

In the UK (1970s) it was mandatory to know Latin in order to get into Oxford or Cambridge, irrespective of the subject you would be studying. The redbricks didn't care. It was not mandatory to teach Latin in state schools. But public-meaning-private schools always taught it.

That was an efficient social filter, because even if you were a very bright kid from a state school, you would not have had the opportunity to learn Latin, so you would not go to a top tier university. Unless you went to a Catholic school, or the remnants of the Grammar system.

In my time, you had the choice between Latin and German, and I opted for German on the basis that living people still spoke it. A logical choice, but one that put me into the clutches of Beaky, the school's most unpopular master (AKA The Dalek and Hitler's Grandad). A few of the student body would make it to Oxbridge every year, a lot would go to middle-ranking universities, and the "failures" would go to polytechnics or technical institutes... it was almost unheard of not to pass on to some kind of further education.

The school was tied to the Official State Religion, which, in practice, meant having to yawn through a certain number of church services. Nobody cared very much about what you believed. Becoming a born-again Christian was considered something different. Sport, especially football, was the true religion, in the sense of the thing everyone actually cared about.

There was a class element to sports as well, of course. My grammar school would only compete against other grammar schools that were some distance away, not the local comprehensives.

Having said that, there was no discernible sports education. I can't recall being taught any of the rules of any of the sports we did. The assumption seemed to be that, as an upstanding English schoolboy, you would love all sports and teach yourself. At the end of one term they announced we would be playing cricket the next term, which was presumably a hint to teach yourself its arcane "laws" over the summer. Quidditch is an attempt to imagine an even weirder game.

Charles UF's avatar

Small rust belt city in the 80s/90s, public schools. The teachers were generally decent folks; most of them grew up in the area, if not the town itself. I actually had 2 of the same teachers my father did. Most of my teachers really did take their jobs seriously but were quite limited in what they could realistically accomplish given the circumstances.

The school district was incredibly poor. We had to wear our coats in class in the winter for lack of effective heating. There was less homework than probably would have been the case if we didn't have to share textbooks by leaving them in the classroom when the period ended. Parents pooled enough money to have some basic sports teams, football, baseball, basketball, and track. Kids had to pay for all their own gear and travel.

Everything was designed and built for a town with quintuple its current tax base and double the students. The whole rust belt is/was like this. Cities struggle to maintain roads and other infrastructure, abandoned properties everywhere decaying as no one could afford to demolish them. Skeletons of large factories that had been closed for 20 years, monuments to a time when there were plentiful jobs in the area. The schools’ problems were an extension of this overall trend; the town has about 30% of its 1950 population peak now. Starting in the early 2010s they’ve finally gotten the resources to tear down the abandoned buildings and at least plant grass. The city actually gives land away to anyone who can do something with it; there aren’t a lot of takers.

Basically, all the problems with the school were related in some way to poor financing and the shrinking town. A whole wing of the high school was closed off in the early 80s, half the elementary schools were shuttered as well. By the time I started HS in 1991 all “extra” functions were eliminated for lack of funding. No shop, art, music, home ec etc. Just the bare minimum to meet the state’s requirements for education. There were no AP classes or anything for gifted students at all, no tracking. Everyone took the exact same classes regardless of ability. At the time I didn’t realize how bad it was. We were largely surrounded by school districts in a similar situation. Everyone I knew growing up would have been considered poor by national standards. Everyone who could afford to leave did. I graduated 2nd in my HS class. The first time I learned what advanced placement classes were was when my college advisor asked me which ones I’d taken. I was obviously smart and finished 2nd in my class, she just assumed I had. I told her I spent a lot of time in the library, which was true.

I personally had a great time in HS. The supervision was very light. As long as we didn't cause immediate problems for the staff that they had to address, we had a very long leash. I could leave during the school day and return before I missed a class (nothing beyond the minimum requirements meant there were a lot of "study hall" periods where I didn't have a class scheduled) and no one cared. I ran around with my friends and did normal teenager stuff that was possible for kids with no money. I realize in retrospect it was a poor-quality educational experience. I was significantly behind in math in college and have never really been any good at anything past trig. I don't know what I might have changed; short of magically summoning several million dollars per year for the district, I think they did their best with what they had.

Melvin's avatar

Primary school was great. University was great.

High school (7-12) was the weakest part, especially years 7-10. But how much of that is really the school's fault, and how much of it is just the intrinsic awfulness of being a 13-16 year old boy, with neither the innocence of childhood nor the freedom of adulthood?

Cardboard Titan's avatar

I went to a well-funded and well-staffed high school, but nobody seemed to actually care about my education. Teachers didn't really care if you got bad grades or good grades, and my parents were only ever interested in punishing me for every point short of perfect.

Nobody ever told me if I was doing well, and this has caused me to make a lot of bad decisions in life.

I graduated high school with a 3.8 GPA, but I thought I wasn't smart enough for anything but art school. I got into a really good art school, but after I got a B on an assignment, I thought that meant I wasn't perfect, so I switched to an easier major, one where there are no paying jobs after graduation.

It sounds like a Dreamworks movie, but I gotta say from experience that the most important lesson is believing in yourself. I think that's completely at odds with the factory model of schooling prevalent in the West. Kids need adults who care about them, but with the way teachers are underpaid and disrespected, why should they make the effort?

Zanzibar Buck-buck McFate's avatar

Students need to be aware they are probably getting an average experience and that may not be enough to do anything cool.

Tor's avatar

I was talking to a friend about this recently. I argued that we don't actually need school and should abolish it; when he challenged me on it, here's what I wrote:

"The main thing I remember from our education though was pointless cruelty and having my human rights violated, and I think stopping that should be a terminal value in itself even if it's a little less efficient on some economic metric

But no-school isn't anywhere close to my ideal, I just think it might be marginally better than the status quo. My ideal system is something like this:

Kids go to a daycare/bootcamp type place until they're 12-ish, where they spend most of their time outdoors and socializing + learning essential skills: reading, math, building, etc. Then you give them a Eurail pass or equivalent, a museum card, and a library card that's valid in every library in every city. Also, there's a giant kid-friendly library in the center of every town (we'll convert all the old churches and cathedrals into libraries), plus a network of youth dormitories everywhere that they can stay in for free till they're 18 or whatever. This'll all be reminiscent of the German concept of Wanderjahre/"wander years" (but they don't have to leave their home/parents, obviously; they just have freedom like an adult would). Finally, it's now normal for kids to shadow/apprentice with any profession they're interested in: a teenager can just walk into a hospital/lab/mechanic/kitchen and ask for an apprenticeship as long as they don't get in the way, and leave if they get bored or whenever. Also they can enroll in university at any age if they can pass an entrance exam"

Now, to be clear, I know it sounds kinda crazy, and I'm not confident this would work or be better; I just think we should at least try it and see what happens (which is my opinion on almost every issue). But also note that this is the system in my ideal world, and I'm not sure it would be politically possible even to move in this direction in most countries.

Citizen Penrose's avatar

I don't have much to add to this except to say that this sounds much more civilised and humane than the current system.

Periclesofharpersville's avatar

I wasn't particularly fond of school, but I think you can make a pretty strong argument that a lot of its unpleasant aspects have some very necessary functions behind them, particularly if you believe that school has a purpose beyond academic education or daycare: being socialized to function well on your own and with your peers, completing unpleasant or uninteresting tasks for higher and occasionally abstract purposes, dealing with figures of authority of varying levels of competence, practicing athletic/physical fitness, and so on. Life has a whole lot of dullness, tedium, and cruelty, and learning how to deal with it in a safe-ish, low-ish-stakes environment seems important. At the end of the day, most of school's failures from an educational standpoint are the result of compromises made during the transition away from the optimal method of education practiced for most of history: one-on-one instruction.

That said, the group instruction, while certainly a downgrade from direct tutoring/apprenticeship, strikes me as having some real benefits that result from the peer-to-peer dynamic. Occasionally, this can look rather cruel. David Foster Wallace has an interesting bit in one of his essays on grammar, mentioning that students bullying young grammar nazis are effectively the student body forcibly educating the grammar nazi into fluency in conversational spoken English.

For a brighter example, I've observed plenty of times in K–12 where an intellectually advanced but socially or skillfully deficient student was gently and respectfully shown how to do something "properly" by their peers—talking to girls, lifting weights, playing cards, etc. We can say that removing or reshaping the modern education system wouldn't necessarily mean that kids wouldn't be properly socialized, but the trend I generally observe today is less school means more suburban isolation and doomscrolling.

Whenyou's avatar

A more realistic version of this, IMO, would be something like: a maximum of ~4 hours a day of learning the essentials (math, reading, finance for teens, etc.). The rest of the day is essentially daycare, but with lots of elective learning activities: book club, watching a documentary, educational games or shows, programming class, chess club, a college professor teaching something about their subject, whatever. I know I would probably have loved and joined many such activities, but if other kids don't want to, they're free to just hang out at the playground. Older kids could just go home, obviously.

Tor's avatar

Yeah, this sounds good, and like something we could do with existing infrastructure, without totally reorganising the structure of society like in my example.

Russell Hogg's avatar

What with Scott grumbling about people promoting their stuff in the comments, and muttering about whether ACX and/or the comments have gone downhill, this hardly seems the moment to mention my (completely free and no ads) podcast. Again!

Nevertheless I will just mention my latest with Peter Marshall on the early English Reformation and the attempt to strangle it in its cradle - the rebellion known as the Pilgrimage of Grace. And yes I know the English Reformation is probably not an area of deep fascination for readers of this blog. And I know also that podcasts are an incredibly inefficient way of taking information on board. But (and it is a huge but) there is a real pleasure in listening to somebody like Peter who is utterly expert (no preparation, no questions in advance) and who can talk with such enthusiasm and eloquence. Anyway, some things I learned:

- The Lollards (reformers before the Reformation) are sort of conspiracy theorists. “It’s not really the body and blood of Christ - they’re lying to you!!”

- Henry didn’t make himself head of the Church of England. He discovered to his surprise and delight that he had ALWAYS been its head. Take that bishop of Rome!

- And similarly he finds he was never married to Anne Boleyn. It’s an annulment not a divorce.

- The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.

- Once Catherine is dead (probably natural causes) and Anne is dead (very much not natural causes) the slate is clean. No more problem with remarrying so the way is clear to Henry rejoining the Church of Rome. But no, he’s been having too much fun as head of the English church. The horse has bolted.

And that’s just the introduction to the podcast before we get to the Pilgrimage of Grace and the rainstorm that changed history . . .

(Actually it is quite interesting how often English history turns on the weather. I am thinking of Waterloo, Agincourt and there must be others.)

Anyway here is a link to the podcast. It’s called Subject to Change though I think there are a few of that name so if the link doesn’t work and you are googling it add Russell Hogg and the right one pops up. Peter Marshall is such an engaging speaker so if you are doing the laundry or out walking this is well worth your time 🙂

https://pod.link/1436447503/episode/117f4b3a29d0a6346a787c0a72796efb

thewowzer's avatar

>The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.

That passage in Leviticus is talking about your brother's wife (your brother is still alive), not your brother's widow (your brother is dead and she is no longer his wife). There's a big difference there.

Russell Hogg's avatar

Henry’s advisers would beg to differ. Well, naturally!

Deiseach's avatar

"And yes I know the English Reformation is probably not an area of deep fascination for readers of this blog. "

Well, I for one am very interested in this. I've been reading (some) on both sides of the debate: yes, of course Eamon Duffy, but also Diarmaid MacCulloch on the Edwardian reformation, which is the one that stuck, so far as steering the course of the English Church goes. Henry of course wavered all over the place, so as soon as he was safely dead, the Reform-minded nobles in charge of the child king made damn sure he would be raised properly Protestant (rather the same as the Scottish nobles did with James VI, but with somewhat less pointless cruelty).

Mary's effort to both undo the reforms and introduce a modernised (more on the Continental model) Catholic Church went nowhere, because she died too soon and the Reformed had established themselves pretty strongly by the time she came to the throne. Elizabeth was less worried about religion per se and more about plots, so Walsingham as spymaster tracking down and executing recusants and Jesuits as traitors was the emphasis of her reign. You could believe or disbelieve whatever doctrines you liked, so long as you conformed with public worship and the monarch as head of the church (where by this time the really important part of that was 'unchallenged head of state, Catholic pretenders keep out').

The interesting (and sad) part is how Henry blew through the loot from the dissolution of the monasteries on pointless warring in France, trying to prop up the increasingly obsolete claim to Normandy and to establish England as a power on a par with France and the Holy Roman Empire. Reform was definitely needed, but it's one of the great might-have-beens whether it could have happened from within, rather than Henry burning it all down and 'discovering' his own mini-church.

Peter Defeel's avatar

Interesting. I do like your podcasts.

Russell Hogg's avatar

I was going round an English country house in Wilton the other day (small house, incredible collection of paintings) and came across this inscription on a stone dating from 250 years ago, and I liked it so much I thought I'd share:

Beneath this little stone interr’d

Lies little Charlotte's little Bird.

Who, tho a Captive all day long

Sang merrily his little Song

When the little Favourite died

Awhile his little Mistress cried.

She has almost forgot him now

So stranger, weep a little Thou.

1778

I was in Wilton on my way back from the Chalke Valley History Festival. Held late June every year and as far as I can tell the best history festival in the world. Highly recommended!

WoolyAI's avatar

That was nice!

Concavenator's avatar

Quite lovely, thanks for sharing!

Mr T.'s avatar

I'm reading this on the substack app, trying to open the links to the comments.

Substack opens those links in its shitty browser, and of course all I see is a blank screen.

I can't even copy the link to paste it in a proper browser.

Thanks, substack!

Kenny Easwaran's avatar

I have twice installed the Substack app, but it has never had as good management of comment threads as the browser. But I recently started writing my own Substack, and it turns out the browser displays comments on your own Substack differently from those on others, so I'm worried I'll have to use the app for that.

quiet_NaN's avatar

Personally, I intensely dislike websites trying to push their apps on me, and I flatly refuse to use them for stuff which could reasonably be a website. With Substack, while the website version is not as usable as the WordPress of SSC (why would I want to load an ACX article piecewise?), it is still mostly usable (as long as you have an extension which restores text entered into text fields, or you write longer replies in a text editor).

TGGP's avatar

I also use a text editor for longer comments (and comments can get quite long when I'm quoting someone in response).

skaladom's avatar

I've been reading Substack on Firefox, both desktop and mobile; it's nice enough. Why would I want a silo'ed app? I still type longer comments in a text editor rather than on the site itself, because text editors have better UI than a little box in the middle of a website or app.

https://idiallo.com/blog/dont-download-apps

Mr T.'s avatar

The killer feature for me in the app is the surprisingly good text to speech.

earth.water's avatar

Yeah, the browser is better in that it doesn't block highlighting and such, but once comments reach critical mass it gets super laggy. Thanks, Substack, for being a great platform, but please show a little love to the comments rendering?

Deimos's avatar

I have this memory of something Scott wrote offhand, perhaps in an open thread anywhere between 1 and 3 years ago, mentioning he was currently thinking about X, where X is some idea about how thinking patterns grow inside the brain as physical structures, as in, they physically grow over time.

I either hallucinated reading this, or am remembering it incorrectly, because I can’t find it, but the idea fascinates me so I’m sad about not finding it.

Does anyone else remember?

thewowzer's avatar

At the beginning of the most recent contest review, Scott's "Why Do I Suck?" post (https://www.astralcodexten.com/p/why-do-i-suck) was linked to, and at the beginning of point 5 he says:

"Lately I’ve been finding it helpful to think of the brain in terms of tropisms - unconscious structures that organically grow towards a reward signal without any conscious awareness."

Is this what you're thinking of? I just read it today cuz it was mentioned in the review.

Deimos's avatar

Oh my god, yes, that's it! Thanks so much ❤️ In my memory it was something clearly very different, but this is what caused the memory.

Brenton Baker's avatar

Something about clusters of neurons forming standing waves, with an electrical impulse going around a ring over and over until interrupted by something else?

demost_'s avatar

That was not Scott himself, but a guest post by Daniel Böttger.

https://www.astralcodexten.com/p/consciousness-as-recursive-reflections

Deimos's avatar

Hmm, potentially… Do you have a link or reference, or are you also remembering vaguely like me?

Brenton Baker's avatar

I am remembering something I read on this blog, either as a post or in the comments. Would that Substack had the old blog's tagging system so I could just search for Neurology or something.

Expand full comment